I figured it out: the version of "spark-core" in my
project is different from the version in the pseudo-cluster.
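For anyone who hits the same thing, a minimal sketch of pinning the dependency in sbt so it matches the cluster; the version string below is hypothetical, use whatever your master actually runs:

    // build.sbt: keep this identical to the cluster's Spark version,
    // and mark it "provided" since the cluster supplies it at runtime
    libraryDependencies +=
      "org.apache.spark" %% "spark-core" % "0.8.1-incubating" % "provided"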
On Fri, Dec 20, 2013 at 2:47 PM, Michael Kun Yang wrote:
> Thank you very much.
>
>
> On Friday, December 20, 2013, Christopher Nguyen wrote:
>
> ...than guessing over emails.
>
> If @freeman is willing, you can send a private message to him to set that
> up over Google Hangout.
>
> --
> Christopher T. Nguyen
> Co-founder & CEO, Adatao <http://adatao.com>
> linkedin.com/in/ctnguyen
>
>
>
> On Fri, Dec 20, 2013, ...

...fixed by restarting Spark.
>
>
>
>
>
> On Dec 20, 2013, at 3:12 PM, Michael Kun Yang wrote:
Hi,
I really need help; I went through previous posts on the mailing list but
still cannot resolve this problem.
It works when I use the local[n] option, but an error occurs when I use
spark://master.local:7077.
I checked the UI: the workers are correctly registered, and I set SPARK_MEM
to a compatible value.
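For context, a minimal sketch of the two setups being compared; "my-app" is a hypothetical name, and the host and port must match exactly what the standalone master's web UI reports:

    import org.apache.spark.SparkContext

    // Only one SparkContext may exist per JVM; pick one master URL.
    // local[n] runs everything inside the driver process:
    //   val sc = new SparkContext("local[4]", "my-app")
    // Standalone mode ships tasks to remote executors, so the app's
    // spark-core version must match what the cluster runs:
    val sc = new SparkContext("spark://master.local:7077", "my-app")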
Ankur
>
> On 25 Nov 2013, at 14:12, Michael Kun Yang wrote:
How do I run a compiled .jar in Spark?
For Hadoop, I can use
hadoop jar ...
Thank you!
-M
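A sketch of the pre-spark-submit answer from this era: pass the assembled jar to the SparkContext so workers can fetch it, then launch your main class with plain java or sbt run. The jar path below is hypothetical. In Spark 1.0 and later, bin/spark-submit replaces all of this.

    import org.apache.spark.SparkContext

    // the 4-arg constructor ships the listed jars to the executors
    val sc = new SparkContext(
      "spark://master.local:7077",          // master URL
      "my-app",                             // application name
      System.getenv("SPARK_HOME"),          // Spark install path on workers
      Seq("/path/to/myapp.jar"))            // hypothetical assembled jar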
Hi spark-enthusiasts,
I am new to Spark Streaming. I need to fit a moving-average model for stock
prices.
How do I convert a data stream
{x_1, x_2, x_3, ..., x_n, ...}
into a table with the format:
x_1, x_2, x_3, x_4, x_5, x_6
x_2, x_3, x_4, x_5, x_6, x_7
...
x_{n + 1}, x_{n + 2}, ..., x_{n + 6}
...
Also, is there a way to get the row index of a large table from a
SparkContext?
Thanks!
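On the row-index part, a minimal sketch, assuming a Spark version with RDD.zipWithIndex (added in Spark 1.0); the file name is hypothetical:

    // sc: an existing SparkContext
    // attach a global row index to each element
    val rows = sc.textFile("table.txt")
    val indexed = rows.zipWithIndex()        // RDD[(String, Long)]
    indexed.filter { case (_, i) => i < 10 }
           .foreach { case (line, i) => println(i + ": " + line) }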
>- andy.petre...@nextlab.be
>- andy.petre...@gmail.com
>
> Socials:
>
>- Twitter: https://twitter.com/#!/noootsab
>- LinkedIn: http://be.linkedin.com/in/andypetrella
>- Blogger: http://ska-la.blogspot.com/
>- GitHub: https://github.com/andypetrella
>- Maste
>
> HTH (a bit :D)
>
> andy
>
>
> On Wed, Nov 20, 2013 at 1:01 AM, Michael Kun Yang wrote:
Hi spark-enthusiasts,
I am new to Spark Streaming. I need to convert streaming data into a table.
How do I convert a data stream
{x_1, x_2, x_3, ..., x_n, ...}
into a table with the format:
x_1, x_2, x_3, x_4, x_5, x_6
x_2, x_3, x_4, x_5, x_6, x_7
...
x_{n + 1}, x_{n + 2}, ..., x_{n + 6}
...
Thank you!
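One way to build those rows, sketched under the assumption of Spark 1.0+ where MLlib ships a sliding helper for RDDs; "prices.txt" is hypothetical, and windows spanning streaming batch boundaries need extra care if this is applied per-batch:

    import org.apache.spark.SparkContext
    import org.apache.spark.mllib.rdd.RDDFunctions._  // adds .sliding to RDDs

    val sc = new SparkContext("local[2]", "moving-average")
    val prices = sc.textFile("prices.txt").map(_.trim.toDouble)

    // each row is a window of 6 consecutive values:
    // (x_1..x_6), (x_2..x_7), ...
    val windows = prices.sliding(6)           // RDD[Array[Double]]

    // simple moving average over each window
    val movingAvg = windows.map(w => w.sum / w.length)
    movingAvg.take(5).foreach(println)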
>>> You shouldn't even need the index.
>>>
>>> Just:
>>>
>>> data.mapPartitions(_.drop(1))
>>>
>>> should work, I think.
>>>
>>>
>>> On Wed, Sep 25, 2013 at 1:52 AM, Michael Kun Yang wrote:
Thank you! But can you explain in more detail? I only want to skip the
first line, not the whole block.
On Tue, Sep 24, 2013 at 8:54 PM, Jason Lenderman wrote:
> Perhaps you could use mapPartitionsWithIndex to do this.
>
>
> On Tue, Sep 24, 2013 at 4:52 PM, Michael Kun Yang wrote:
Spark's filter can do this job, but it needs to scan every line (row). Is
there a way to just skip the first line in the file?
Any feedback?
On Tue, Sep 24, 2013 at 4:14 PM, Michael Kun Yang wrote:
Data frames usually have headers in the first row; how can I avoid reading
the first row?
I know that in Hadoop I can identify it by the line number.
Best
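For later readers, a minimal sketch combining the two suggestions in this thread: the header lives in partition 0, so only that partition should drop a line; a bare mapPartitions(_.drop(1)) would drop the first record of every partition. "data.csv" is hypothetical:

    // sc: an existing SparkContext
    val data = sc.textFile("data.csv")
    val noHeader = data.mapPartitionsWithIndex { (idx, iter) =>
      if (idx == 0) iter.drop(1) else iter    // header is in partition 0
    }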