Re: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered

2013-12-20 Thread Michael Kun Yang
I figured it out: the problem was that the version of "spark-core" in my project differed from the version running on the pseudo-cluster. On Fri, Dec 20, 2013 at 2:47 PM, Michael Kun Yang wrote: Thank you very much. On Friday, December 20, 2013, Christopher Nguyen wrote…
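For anyone hitting the same error, a minimal sketch of the fix in sbt (the coordinates and version string below are illustrative; match whatever version your cluster actually runs):

    // build.sbt: pin spark-core to the exact version the standalone
    // cluster runs, so the driver and workers speak the same protocol.
    libraryDependencies += "org.apache.spark" %% "spark-core" % "0.8.0-incubating"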

Re: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered

2013-12-20 Thread Michael Kun Yang
…than guessing over emails. If @freeman is willing, you can send a private message to him to set that up over Google Hangout. -- Christopher T. Nguyen, Co-founder & CEO, Adatao <http://adatao.com>, linkedin.com/in/ctnguyen. On Fri, Dec 2…

Re: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered

2013-12-20 Thread Michael Kun Yang
…fixed by restarting Spark. On Dec 20, 2013, at 3:12 PM, Michael Kun Yang wrote: Hi, I really need help; I went through previous posts on the mailing list but still cannot resolve this problem. It works when I use the local[n] option, bu…

Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered

2013-12-20 Thread Michael Kun Yang
Hi, I really need help. I went through previous posts on the mailing list but still cannot resolve this problem. It works when I use the local[n] option, but an error occurs when I use spark://master.local:7077. I checked the UI, the workers are correctly registered, and I set SPARK_MEM compati…

Re: How to run a compiled .jar in Spark?

2013-11-25 Thread Michael Kun Yang
…nkur. On 25 Nov 2013, at 14:12, Michael Kun Yang wrote: How to run a compiled .jar in Spark? For Hadoop, I can use hadoop jar ... Thank you! -M

How to run a compiled .jar in Spark?

2013-11-25 Thread Michael Kun Yang
How do I run a compiled .jar in Spark? For Hadoop, I can use hadoop jar ... Thank you! -M
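One way to do this in the 0.8-era API, sketched with hypothetical names and paths: build the jar with sbt package, launch the driver class with java -cp (or sbt run), and hand the jar to SparkContext so it is shipped to the workers:

    import org.apache.spark.SparkContext

    object MyApp {
      def main(args: Array[String]) {
        // 4-arg constructor: master URL, app name, Spark home, jars to ship.
        val sc = new SparkContext(
          "spark://master.local:7077",
          "MyApp",
          System.getenv("SPARK_HOME"),
          Seq("target/scala-2.9.3/myapp_2.9.3-0.1.jar"))
        println(sc.parallelize(1 to 100).count())
        sc.stop()
      }
    }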

convert streaming data into table

2013-11-19 Thread Michael Kun Yang
Hi Spark enthusiasts, I am new to Spark Streaming. I need to fit a moving-average model for stock prices. How do I convert a data stream {x_1, x_2, x_3, ..., x_n, ...} into a table with the format:

x_1, x_2, x_3, x_4, x_5, x_6, x_7
x_2, x_3, x_4, x_5, x_6, x_7, x_8
...
x_{n+1}, x_{n+2}, ..., x_{n+7}
...
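For the moving-average part, a hedged sketch using Spark Streaming's time-based window operator (the socket source, host, port, and durations below are made up, and note that these windows are defined by time, not by row count):

    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val ssc = new StreamingContext("spark://master.local:7077", "MovingAvg", Seconds(1))
    val prices = ssc.socketTextStream("localhost", 9999).map(_.toDouble)

    // Average of all prices seen in the last 60 seconds, refreshed every second.
    val movingAvg = prices
      .map(x => (x, 1))
      .reduceByWindow((a, b) => (a._1 + b._1, a._2 + b._2), Seconds(60), Seconds(1))
      .map { case (sum, n) => sum / n }

    movingAvg.print()
    ssc.start()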

row number

2013-11-19 Thread Michael Kun Yang
Is there a way to get the row index of a large table with SparkContext? Thanks!
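One sketch, in two passes: count the rows in each partition, then offset each partition's local positions by a prefix sum (newer Spark releases bundle exactly this as RDD.zipWithIndex):

    import org.apache.spark.rdd.RDD

    def withRowIndex[T](data: RDD[T]): RDD[(Long, T)] = {
      // Pass 1: how many rows live in each partition.
      val counts = data
        .mapPartitionsWithIndex((pid, it) => Iterator((pid, it.size)))
        .collect().sortBy(_._1).map(_._2.toLong)
      // Prefix sums give each partition's starting global index.
      val offsets = counts.scanLeft(0L)(_ + _)
      // Pass 2: local position + partition offset = global row index.
      data.mapPartitionsWithIndex { (pid, it) =>
        it.zipWithIndex.map { case (x, i) => (offsets(pid) + i, x) }
      }
    }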

Re: convert streaming data into table

2013-11-19 Thread Michael Kun Yang
…andy.petre...@nextlab.be, andy.petre...@gmail.com. Socials: Twitter: https://twitter.com/#!/noootsab; LinkedIn: http://be.linkedin.com/in/andypetrella; Blogger: http://ska-la.blogspot.com/; GitHub: https://github.com/andypetrella; Maste…

Re: convert streaming data into table

2013-11-19 Thread Michael Kun Yang
…HTH (a bit :D) andy. On Wed, Nov 20, 2013 at 1:01 AM, Michael Kun Yang wrote: Hi Spark enthusiasts, I am new to Spark Streaming. I need to convert streaming data into a table. How do I convert a data stream {x…

convert streaming data into table

2013-11-19 Thread Michael Kun Yang
Hi Spark enthusiasts, I am new to Spark Streaming. I need to convert streaming data into a table. How do I convert a data stream {x_1, x_2, x_3, ..., x_n, ...} into a table with the format:

x_1, x_2, x_3, x_4, x_5, x_6, x_7
x_2, x_3, x_4, x_5, x_6, x_7, x_8
...
x_{n+1}, x_{n+2}, ..., x_{n+7}
...

Thank you!
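For the batch/RDD version of this reshaping, a sketch that needs only the classic pair-RDD API: index the rows (for example with the withRowIndex helper sketched earlier, or zipWithIndex), then emit each element under every window start it belongs to (w = 7 to match the rows above):

    import org.apache.spark.SparkContext._  // pair-RDD operations

    val w = 7
    // indexed: RDD[(Long, Double)] of (rowIndex, x_i)
    val rows = indexed
      .flatMap { case (i, x) => (0L until w).map(k => (i - k, (k, x))) }
      .groupByKey()
      .filter { case (_, vs) => vs.size == w }            // keep only full windows
      .mapValues(vs => vs.toSeq.sortBy(_._1).map(_._2))   // order within each window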

Re: how to avoid reading the first line of dataframe?

2013-09-24 Thread Michael Kun Yang
…You shouldn't even need the index. Just: data.mapPartitions(_.drop(1)) should work, I think. On Wed, Sep 25, 2013 at 1:52 AM, Michael Kun Yang…

Re: how to avoid reading the first line of dataframe?

2013-09-24 Thread Michael Kun Yang
Thank you! But can you explain in more detail? I only want to skip the first line, not the whole block. On Tue, Sep 24, 2013 at 8:54 PM, Jason Lenderman wrote: Perhaps you could use mapPartitionsWithIndex to do this. On Tue, Sep 24, 2013 at 4:52 PM, Michael Kun Yang wrote:
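A concrete sketch of the mapPartitionsWithIndex variant, assuming the header line lives in the first partition of the text file (note that mapPartitions(_.drop(1)) alone drops the first line of every partition, not just the header):

    val noHeader = data.mapPartitionsWithIndex { (pid, it) =>
      if (pid == 0) it.drop(1) else it
    }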

Re: how to avoid reading the first line of dataframe?

2013-09-24 Thread Michael Kun Yang
Spark's filter can do this job, but it needs to scan every line (row). Is there a way to just skip the first line in the file? Any feedback? On Tue, Sep 24, 2013 at 4:14 PM, Michael Kun Yang wrote: Dataframes usually have headers in the first row; how can I avoid reading the fir…

how to avoid reading the first line of dataframe?

2013-09-24 Thread Michael Kun Yang
Dataframes usually have headers in the first row; how can I avoid reading the first row? I know that in Hadoop I can figure it out from the line number. Best