error in reduceGroup operator when changing the Flink version from 0.7 to 0.8

2015-02-24 Thread HungChang
Hi, when changing the version from 0.7 to 0.8, the reduceGroup operator gets the following error: "The method reduceGroup(GroupReduceFunction) in the type DataSet is not applicable for the arguments (InDegreeDistribution.CountVertices)". I tried to figure out the error but failed to fix it. Could you pl

HDFS Clustering

2015-02-24 Thread Giacomo Licari
Hi guys, I'm Giacomo from Italy, I'm a newbie with Flink. I set up a cluster with Hadoop 1.2 and Flink. I would like to ask you how to run the WordCount example taking the input file from HDFS (example myuser/testWordCount/hamlet.txt) and put the output also inside HDFS (example myuser/testW

Re: error in reduceGroup operator when changing the Flink version from 0.7 to 0.8

2015-02-24 Thread Stephan Ewen
Hi Hung! Can you tell us what exactly gives you that error? The Java compiler, or Flink when you run the program? If it is Flink, can you attach the stack trace of the exception? Greetings, Stephan On Tue, Feb 24, 2015 at 10:50 AM, HungChang wrote: > Hi, when changing the version from 0.7 to

Re: HDFS Clustering

2015-02-24 Thread Márton Balassi
Hey, Just add the right prefix pointing to your HDFS file path: bin/flink run -v flink-java-examples-*-WordCount.jar hdfs://hostname:port/PATH/TO/INPUT hdfs://hostname:port/PATH/TO/OUTPUT Best, Marton On Tue, Feb 24, 2015 at 11:13 AM, Giacomo Licari wrote: > Hi guys, > I'm Giacomo from It

Re: error in reduceGroup operator when changing the Flink version from 0.7 to 0.8

2015-02-24 Thread HungChang
Thanks for your reply. The error is from the Java compiler (Eclipse). It looks like the data types of the output and input are OK in version 0.7, but not in version 0.8. -- View this message in context: http://apache-flink-incubator-user-mailing-list-archive.2336050.n4.nabble.com/Error-in-reduce

Re: HDFS Clustering

2015-02-24 Thread Max Michels
Hi Giacomo, Congratulations on setting up a Flink cluster with HDFS :) To run the WordCount example provided with Flink, you should first upload your input file to HDFS. If you have not done so, please run > hdfs dfs -put -p file:///home/user/yourinputfile hdfs:///wc_input Then, you can use the

Re: error in reduceGroup operator when changing the Flink version from 0.7 to 0.8

2015-02-24 Thread Aljoscha Krettek
The problem is that someone changed how project() works. Sorry for the inconvenience. To make it work, you have to add the type parameter manually, so that the result of project() has the correct type: DataSet numVertices = edges.<Tuple1<Long>>project(1).distinct().reduceGroup(new CountVertices()) On Tue, Feb
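The fix above relies on Java's explicit type-argument ("type witness") syntax for generic methods: when the compiler cannot infer the result type of a generic call from its arguments alone, you pin it with `receiver.<Type>method(...)`. A minimal plain-Java sketch of the same mechanism, under the assumption that this is what trips up reduceGroup in 0.8 (the `project` and `countLongs` names below are illustrative stand-ins, not the real Flink API):

```java
import java.util.ArrayList;
import java.util.List;

public class TypeWitnessDemo {

    // A generic method whose result type is not fixed by its arguments,
    // loosely analogous to DataSet.project() in Flink 0.8.
    static <T> List<T> project(List<?> values) {
        List<T> out = new ArrayList<>();
        for (Object v : values) {
            @SuppressWarnings("unchecked")
            T t = (T) v; // unchecked narrowing, acceptable for a demo
            out.add(t);
        }
        return out;
    }

    // A consumer demanding a concrete element type, loosely analogous to
    // reduceGroup expecting a GroupReduceFunction over a specific tuple type.
    static int countLongs(List<Long> xs) {
        return xs.size();
    }

    public static void main(String[] args) {
        List<Object> raw = new ArrayList<>();
        raw.add(1L);
        raw.add(2L);
        // Under Java 7 inference, countLongs(project(raw)) does not compile,
        // because project's T is inferred as Object. The explicit type
        // witness <Long> pins the result type, just as <Tuple1<Long>> does
        // for project() in the Flink snippet above:
        int n = countLongs(TypeWitnessDemo.<Long>project(raw));
        System.out.println(n); // prints 2
    }
}
```

The same pattern generalizes: whenever a 0.8-style API stops inferring its output type, the `<...>` witness restores the type the downstream operator expects.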

Re: HDFS Clustering

2015-02-24 Thread Giacomo Licari
Thanks a lot Marton and Max, it worked perfectly. Regards from Italy :) On Tue, Feb 24, 2015 at 11:31 AM, Max Michels wrote: > Hi Giacomo, > > Congratulations on setting up a Flink cluster with HDFS :) To run the > WordCount example provided with Flink, you should first upload your > input file

Re: error in reduceGroup operator when changing the Flink version from 0.7 to 0.8

2015-02-24 Thread HungChang
Thank you! This completely solves the problem. -- View this message in context: http://apache-flink-incubator-user-mailing-list-archive.2336050.n4.nabble.com/Error-in-reduceGroup-operator-when-changing-the-Flink-version-from-0-7-to-0-8-tp785p793.html Sent from the Apache Flink (Incubator) Us

Re: Subscribe request from a developer

2015-02-24 Thread Robert Metzger
Hey Rahul, to subscribe to the Flink mailing lists, you have to send emails to user-subscr...@flink.apache.org and dev-subscr...@flink.apache.org. You should then get a "CONFIRM SUBSCRIBE" message to which you have to reply. Then you should be subscribed. If that's not working, we have to ask the Ap

Re: OutOfMemory during serialization

2015-02-24 Thread Robert Metzger
Twitter has merged my improved BitSetSerializer for Kryo: https://github.com/twitter/chill/pull/220 Once they've released a new version, I'll update our twitter-chill dependency. On Fri, Feb 20, 2015 at 2:13 PM, Robert Metzger wrote: > Let's create an issue in Flink to somehow fix the issue. > >