Re: How to link code pull request with JIRA ID?

2015-05-13 Thread Nicholas Chammas
There's no magic to it. We're doing the same, except Josh automated it in
the PR dashboard he created.

https://spark-prs.appspot.com/

Nick
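
(For anyone wondering how that automation can work: the dashboard only needs to
pull the JIRA key out of the PR title. A minimal, illustrative sketch in Scala --
not the actual spark-prs code -- might look like this:)

// Illustrative only: extract JIRA keys such as SPARK-5277 from a PR title
// so they can be cross-linked to the corresponding JIRA issue.
object JiraKeyExtractor {
  private val JiraKey = """SPARK-\d+""".r

  def keysFromTitle(title: String): Seq[String] =
    JiraKey.findAllIn(title).toSeq.distinct

  def main(args: Array[String]): Unit = {
    // Prints: List(SPARK-5277)
    println(keysFromTitle("[SPARK-5277][SQL] Fix something"))
  }
}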

On Wed, May 13, 2015 at 6:20 PM Markus Weimer  wrote:

> Hi,
>
> how did you set this up? Over in the REEF incubation project, we
> painstakingly create the forwards- and backwards links despite having
> the IDs in the PR descriptions...
>
> Thanks!
>
> Markus
>
>
> On 2015-05-13 11:56, Ted Yu wrote:
> > Subproject tag should follow SPARK JIRA number.
> > e.g.
> >
> > [SPARK-5277][SQL] ...
> >
> > Cheers
> >
> > On Wed, May 13, 2015 at 11:50 AM, Stephen Boesch 
> wrote:
> >
> >> following up from Nicholas, it is
> >>
> >> [SPARK-12345] Your PR description
> >>
> >> where 12345 is the jira number.
> >>
> >>
> >> One thing I tend to forget is when/where to include the subproject tag
> e.g.
> >>  [MLLIB]
> >>
> >>
> >> 2015-05-13 11:11 GMT-07:00 Nicholas Chammas  >:
> >>
> >>> That happens automatically when you open a PR with the JIRA key in the
> PR
> >>> title.
> >>>
> >>> On Wed, May 13, 2015 at 2:10 PM Chandrashekhar Kotekar <
> >>> shekhar.kote...@gmail.com> wrote:
> >>>
>  Hi,
> 
>  I am new to open source contribution and trying to understand the
> >> process
>  starting from pulling code to uploading patch.
> 
>  I have managed to pull code from GitHub. In JIRA I saw that each JIRA
> >>> issue
>  is connected with pull request. I would like to know how do people
> >> attach
>  pull request details to JIRA issue?
> 
>  Thanks,
>  Chandrash3khar Kotekar
>  Mobile - +91 8600011455
> 
> >>>
> >>
> >
>
>
>


Re: Change for submitting to yarn in 1.3.1

2015-05-13 Thread Chester @work
Patrick
Thanks for responding. Yes, many of these are feature requests rather than issues
with the Client being made private. These are things I have been working on since last
year, and I have been trying to push PRs for these changes. If the new Launcher lib is
the way to go, we will try to work with the new APIs.

  Thanks
Chester

Sent from my iPhone

> On May 13, 2015, at 7:22 PM, Patrick Wendell  wrote:
> 
> Hey Chester,
> 
> Thanks for sending this. It's very helpful to have this list.
> 
> The reason we made the Client API private was that it was never
> intended to be used by third parties programmatically and we don't
> intend to support it in its current form as a stable API. We thought
> the fact that it was for internal use would be obvious since it
> accepts arguments as a string array of CL args. It was always intended
> for command line use and the stable API was the command line.
> 
> When we migrated the Launcher library we figured we covered most of
> the use cases in the off chance someone was using the Client. It
> appears we regressed one feature which was a clean way to get the app
> ID.
> 
> The items you list here 2-6 all seem like new feature requests rather
> than a regression caused by us making that API private.
> 
> I think the way to move forward is for someone to design a proper
> long-term stable API for the things you mentioned here. That could
> either be by extension of the Launcher library. Marcelo would be
> natural to help with this effort since he was heavily involved in both
> YARN support and the launcher. So I'm curious to hear his opinion on
> how best to move forward.
> 
> I do see how apps that run Spark would benefit of having a control
> plane for querying status, both on YARN and elsewhere.
> 
> - Patrick
> 
>> On Wed, May 13, 2015 at 5:44 AM, Chester At Work  
>> wrote:
>> Patrick
>> There are several things we need, some of them already mentioned in the 
>> mailing list before.
>> 
>> I haven't looked at the SparkLauncher code, but here are few things we need 
>> from our perspectives for Spark Yarn Client
>> 
>> 1) client should not be private ( unless alternative is provided) so we 
>> can call it directly.
>> 2) we need a way to stop the running yarn app programmatically ( the PR 
>> is already submitted)
>> 3) before we start the spark job, we should have a call back to the 
>> application, which will provide the yarn container capacity (number of cores 
>> and max memory ), so spark program will not set values beyond max values (PR 
>> submitted)
>> 4) call back could be in form of yarn app listeners, which call back 
>> based on yarn status changes ( start, in progress, failure, complete etc), 
>> application can react based on these events in PR)
>> 
>> 5) yarn client passing arguments to spark program in the form of main 
>> program, we had experience problems when we pass a very large argument due 
>> the length limit. For example, we use json to serialize the argument and 
>> encoded, then parse them as argument. For wide columns datasets, we will run 
>> into limit. Therefore, an alternative way of passing additional larger 
>> argument is needed. We are experimenting with passing the args via a 
>> established akka messaging channel.
>> 
>>6) spark yarn client in yarn-cluster mode right now is essentially a 
>> batch job with no communication once it launched. Need to establish the 
>> communication channel so that logs, errors, status updates, progress bars, 
>> execution stages etc can be displayed on the application side. We added an 
>> akka communication channel for this (working on PR ).
>> 
>>   Combined with others items in this list, we are able to redirect print 
>> and error statement to application log (outside of the hadoop cluster), so 
>> spark UI equivalent progress bar via spark listener. We can show yarn 
>> progress via yarn app listener before spark started; and status can be 
>> updated during job execution.
>> 
>>We are also experimenting with long running job with additional spark 
>> commands and interactions via this channel.
>> 
>> 
>> Chester
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> Sent from my iPad
>> 
>>> On May 12, 2015, at 20:54, Patrick Wendell  wrote:
>>> 
>>> Hey Kevin and Ron,
>>> 
>>> So is the main shortcoming of the launcher library the inability to
>>> get an app ID back from YARN? Or are there other issues here that
>>> fundamentally regress things for you.
>>> 
>>> It seems like adding a way to get back the appID would be a reasonable
>>> addition to the launcher.
>>> 
>>> - Patrick
>>> 
 On Tue, May 12, 2015 at 12:51 PM, Marcelo Vanzin  
 wrote:
 On Tue, May 12, 2015 at 11:34 AM, Kevin Markey 
 wrote:
 
> I understand that SparkLauncher was supposed to address these issues, but
> it really doesn't.  Yarn already provides indirection and an arm's length
> transaction for starting Spark on a cluster. The launcher introduces yet
> another layer of indirection and dissociates the Yarn Client from the
> application that launches it.

Re: Change for submitting to yarn in 1.3.1

2015-05-13 Thread Patrick Wendell
Hey Chester,

Thanks for sending this. It's very helpful to have this list.

The reason we made the Client API private was that it was never
intended to be used by third parties programmatically and we don't
intend to support it in its current form as a stable API. We thought
the fact that it was for internal use would be obvious since it
accepts arguments as a string array of CL args. It was always intended
for command line use and the stable API was the command line.

When we migrated to the Launcher library, we figured we had covered most of
the use cases, on the off chance someone was using the Client. It
appears we regressed one feature, which was a clean way to get the app
ID.

The items you list here 2-6 all seem like new feature requests rather
than a regression caused by us making that API private.

I think the way to move forward is for someone to design a proper
long-term stable API for the things you mentioned here. That could,
for instance, be done by extending the Launcher library. Marcelo would be
a natural fit to help with this effort, since he was heavily involved in both
YARN support and the launcher, so I'm curious to hear his opinion on
how best to move forward.

I do see how apps that run Spark would benefit from having a control
plane for querying status, both on YARN and elsewhere.

- Patrick
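
(For readers following along, this is roughly what programmatic submission through
the new launcher library looks like -- an illustrative Scala sketch against the
SparkLauncher API, with placeholder paths and class names. Note that, as discussed
above, it does not yet hand back the YARN application ID:)

import org.apache.spark.launcher.SparkLauncher

object LaunchExample {
  def main(args: Array[String]): Unit = {
    // All resource paths, class names and arguments below are placeholders.
    val process = new SparkLauncher()
      .setAppResource("/path/to/my-app.jar")
      .setMainClass("com.example.MyApp")
      .setMaster("yarn-cluster")
      .setConf("spark.executor.memory", "2g")
      .addAppArgs("arg1", "arg2")
      .launch()  // returns a plain java.lang.Process

    // There is currently no API for getting the YARN app ID back; the (hacky)
    // workaround mentioned in this thread is to parse it out of the child's stderr.
    process.waitFor()
  }
}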

On Wed, May 13, 2015 at 5:44 AM, Chester At Work  wrote:
> Patrick
>  There are several things we need, some of them already mentioned in the 
> mailing list before.
>
> I haven't looked at the SparkLauncher code, but here are few things we need 
> from our perspectives for Spark Yarn Client
>
>  1) client should not be private ( unless alternative is provided) so we 
> can call it directly.
>  2) we need a way to stop the running yarn app programmatically ( the PR 
> is already submitted)
>  3) before we start the spark job, we should have a call back to the 
> application, which will provide the yarn container capacity (number of cores 
> and max memory ), so spark program will not set values beyond max values (PR 
> submitted)
>  4) call back could be in form of yarn app listeners, which call back 
> based on yarn status changes ( start, in progress, failure, complete etc), 
> application can react based on these events in PR)
>
>  5) yarn client passing arguments to spark program in the form of main 
> program, we had experience problems when we pass a very large argument due 
> the length limit. For example, we use json to serialize the argument and 
> encoded, then parse them as argument. For wide columns datasets, we will run 
> into limit. Therefore, an alternative way of passing additional larger 
> argument is needed. We are experimenting with passing the args via a 
> established akka messaging channel.
>
> 6) spark yarn client in yarn-cluster mode right now is essentially a 
> batch job with no communication once it launched. Need to establish the 
> communication channel so that logs, errors, status updates, progress bars, 
> execution stages etc can be displayed on the application side. We added an 
> akka communication channel for this (working on PR ).
>
>Combined with others items in this list, we are able to redirect print 
> and error statement to application log (outside of the hadoop cluster), so 
> spark UI equivalent progress bar via spark listener. We can show yarn 
> progress via yarn app listener before spark started; and status can be 
> updated during job execution.
>
> We are also experimenting with long running job with additional spark 
> commands and interactions via this channel.
>
>
>  Chester
>
>
>
>
>
>
>
>
>
> Sent from my iPad
>
> On May 12, 2015, at 20:54, Patrick Wendell  wrote:
>
>> Hey Kevin and Ron,
>>
>> So is the main shortcoming of the launcher library the inability to
>> get an app ID back from YARN? Or are there other issues here that
>> fundamentally regress things for you.
>>
>> It seems like adding a way to get back the appID would be a reasonable
>> addition to the launcher.
>>
>> - Patrick
>>
>> On Tue, May 12, 2015 at 12:51 PM, Marcelo Vanzin  wrote:
>>> On Tue, May 12, 2015 at 11:34 AM, Kevin Markey 
>>> wrote:
>>>
 I understand that SparkLauncher was supposed to address these issues, but
 it really doesn't.  Yarn already provides indirection and an arm's length
 transaction for starting Spark on a cluster. The launcher introduces yet
 another layer of indirection and dissociates the Yarn Client from the
 application that launches it.

>>>
>>> Well, not fully. The launcher was supposed to solve "how to launch a Spark
>>> app programatically", but in the first version nothing was added to
>>> actually gather information about the running app. It's also limited in the
>>> way it works because of Spark's limitations (one context per JVM, etc).
>>>
>>> Still, adding things like this is something that is definitely in the scope
>>> for the launcher library; information such as app id can be useful for the
>>> code launching the app, not just in yarn mode. We just have to find a clean
>>> way to provide that information to the caller.

Re: [IMPORTANT] Committers please update merge script

2015-05-13 Thread Patrick Wendell
Hi All - unfortunately the fix introduced another bug, which is that
fixVersion was not updated properly. I've updated the script and had
one other person test it.

So committers, please pull from master again. Thanks!

- Patrick

On Tue, May 12, 2015 at 6:25 PM, Patrick Wendell  wrote:
> Due to an ASF infrastructure change (bug?) [1] the default JIRA
> resolution status has switched to "Pending Closed". I've made a change
> to our merge script to coerce the correct status of "Fixed" when
> resolving [2]. Please upgrade the merge script to master.
>
> I've manually corrected JIRA's that were closed with the incorrect
> status. Let me know if you have any issues.
>
> [1] https://issues.apache.org/jira/browse/INFRA-9646
>
> [2] 
> https://github.com/apache/spark/commit/1b9e434b6c19f23a01e9875a3c1966cd03ce8e2d




[build system] scheduled datacenter downtime, sunday may 17th

2015-05-13 Thread shane knapp
our datacenter is rejiggering our network (read: fully re-engineering large
portions from the ground up) and has downtime scheduled from 9am-3pm PDT,
this sunday, may 17th.

this means our jenkins instance will not be available to the outside world,
and i will be putting jenkins into quiet mode the night before.  this will
allow any running builds to finish, and to save me from getting up @ 6am on
my day off.  :)

once things are back up and running (~3pm or earlier), i will purge the
build queue and bring jenkins out of quiet mode.

of course, stay tuned to this bat-channel for future, and potentially
riveting updates!


Re: How to link code pull request with JIRA ID?

2015-05-13 Thread Markus Weimer
Hi,

how did you set this up? Over in the REEF incubation project, we
painstakingly create the forwards- and backwards links despite having
the IDs in the PR descriptions...

Thanks!

Markus


On 2015-05-13 11:56, Ted Yu wrote:
> Subproject tag should follow SPARK JIRA number.
> e.g.
> 
> [SPARK-5277][SQL] ...
> 
> Cheers
> 
> On Wed, May 13, 2015 at 11:50 AM, Stephen Boesch  wrote:
> 
>> following up from Nicholas, it is
>>
>> [SPARK-12345] Your PR description
>>
>> where 12345 is the jira number.
>>
>>
>> One thing I tend to forget is when/where to include the subproject tag e.g.
>>  [MLLIB]
>>
>>
>> 2015-05-13 11:11 GMT-07:00 Nicholas Chammas :
>>
>>> That happens automatically when you open a PR with the JIRA key in the PR
>>> title.
>>>
>>> On Wed, May 13, 2015 at 2:10 PM Chandrashekhar Kotekar <
>>> shekhar.kote...@gmail.com> wrote:
>>>
 Hi,

 I am new to open source contribution and trying to understand the
>> process
 starting from pulling code to uploading patch.

 I have managed to pull code from GitHub. In JIRA I saw that each JIRA
>>> issue
 is connected with pull request. I would like to know how do people
>> attach
 pull request details to JIRA issue?

 Thanks,
 Chandrash3khar Kotekar
 Mobile - +91 8600011455

>>>
>>
> 




Re: Task scheduling times

2015-05-13 Thread Reynold Xin
Maybe JIT? The 1st stage -- the scheduler code isn't JITed yet.
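
(One quick way to sanity-check the JIT/warm-up theory is to time the same
serialization twice in a fresh JVM. A minimal, Spark-independent Scala sketch
using plain Java serialization -- not the actual scheduler path:)

import java.io.{ByteArrayOutputStream, ObjectOutputStream}

object SerializationWarmup {
  // A stand-in for a task: a couple of fields plus a pre-built byte array,
  // mimicking the already-serialized taskBinary mentioned below.
  case class FakeTask(stageId: Int, partition: Int, binary: Array[Byte])

  def timeSerialize(obj: AnyRef): Long = {
    val start = System.nanoTime()
    val bos = new ByteArrayOutputStream()
    val oos = new ObjectOutputStream(bos)
    oos.writeObject(obj)
    oos.close()
    System.nanoTime() - start
  }

  def main(args: Array[String]): Unit = {
    val task = FakeTask(0, 0, new Array[Byte](1024))
    // The first call typically takes far longer than the second, because
    // class loading and JIT compilation haven't happened yet.
    println(s"1st serialization: ${timeSerialize(task) / 1000} us")
    println(s"2nd serialization: ${timeSerialize(task) / 1000} us")
  }
}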

On Wed, May 13, 2015 at 9:18 AM, Akshat Aranya  wrote:

> Hi,
> Any input on this?  I'm willing to instrument further and experiment
> if there are any ideas.
>
> On Mon, May 4, 2015 at 11:27 AM, Akshat Aranya  wrote:
> > Hi,
> >
> > I have been investigating scheduling delays in Spark and I found some
> > unexplained anomalies.  In my use case, I have two stages after
> > collapsing the transformations: the first is a mapPartitions() and the
> > second is a sortByKey().  I found that the task serialization for the
> > first stage takes much longer than the second.
> >
> > 1. mapPartitions() - this launches 256 tasks in 603 ms (avg. 2.363
> > ms). Each task serializes to 1220 bytes.
> > 2. sortByKey() - this launches 64 tasks in 12 ms (avg. 0.187 ms). Each
> > task serializes to 1139 bytes.
> >
> > Note that the serialized size of the task is similar, but the avg.
> > scheduling time is very different.  I also instrumented the code to
> > print out the serialization time, and it seems like it is indeed the
> > serialization that takes much longer.  This seemed weird to me because
> > the biggest part of the Task, the taskBinary is actually directly
> > copied from a byte array.
> >
> > Any explanation of why this happens?
> >
> > Thanks,
> > Akshat
>
>
>


Re: How to link code pull request with JIRA ID?

2015-05-13 Thread Ted Yu
Subproject tag should follow SPARK JIRA number.
e.g.

[SPARK-5277][SQL] ...

Cheers

On Wed, May 13, 2015 at 11:50 AM, Stephen Boesch  wrote:

> following up from Nicholas, it is
>
> [SPARK-12345] Your PR description
>
> where 12345 is the jira number.
>
>
> One thing I tend to forget is when/where to include the subproject tag e.g.
>  [MLLIB]
>
>
> 2015-05-13 11:11 GMT-07:00 Nicholas Chammas :
>
> > That happens automatically when you open a PR with the JIRA key in the PR
> > title.
> >
> > On Wed, May 13, 2015 at 2:10 PM Chandrashekhar Kotekar <
> > shekhar.kote...@gmail.com> wrote:
> >
> > > Hi,
> > >
> > > I am new to open source contribution and trying to understand the
> process
> > > starting from pulling code to uploading patch.
> > >
> > > I have managed to pull code from GitHub. In JIRA I saw that each JIRA
> > issue
> > > is connected with pull request. I would like to know how do people
> attach
> > > pull request details to JIRA issue?
> > >
> > > Thanks,
> > > Chandrash3khar Kotekar
> > > Mobile - +91 8600011455
> > >
> >
>


Re: How to link code pull request with JIRA ID?

2015-05-13 Thread Stephen Boesch
following up from Nicholas, it is

[SPARK-12345] Your PR description

where 12345 is the jira number.


One thing I tend to forget is when/where to include the subproject tag e.g.
 [MLLIB]


2015-05-13 11:11 GMT-07:00 Nicholas Chammas :

> That happens automatically when you open a PR with the JIRA key in the PR
> title.
>
> On Wed, May 13, 2015 at 2:10 PM Chandrashekhar Kotekar <
> shekhar.kote...@gmail.com> wrote:
>
> > Hi,
> >
> > I am new to open source contribution and trying to understand the process
> > starting from pulling code to uploading patch.
> >
> > I have managed to pull code from GitHub. In JIRA I saw that each JIRA
> issue
> > is connected with pull request. I would like to know how do people attach
> > pull request details to JIRA issue?
> >
> > Thanks,
> > Chandrash3khar Kotekar
> > Mobile - +91 8600011455
> >
>


Re: How to link code pull request with JIRA ID?

2015-05-13 Thread Nicholas Chammas
That happens automatically when you open a PR with the JIRA key in the PR
title.

On Wed, May 13, 2015 at 2:10 PM Chandrashekhar Kotekar <
shekhar.kote...@gmail.com> wrote:

> Hi,
>
> I am new to open source contribution and trying to understand the process
> starting from pulling code to uploading patch.
>
> I have managed to pull code from GitHub. In JIRA I saw that each JIRA issue
> is connected with pull request. I would like to know how do people attach
> pull request details to JIRA issue?
>
> Thanks,
> Chandrash3khar Kotekar
> Mobile - +91 8600011455
>


How to link code pull request with JIRA ID?

2015-05-13 Thread Chandrashekhar Kotekar
Hi,

I am new to open source contribution and trying to understand the process
starting from pulling code to uploading patch.

I have managed to pull code from GitHub. In JIRA, I saw that each JIRA issue
is connected with a pull request. I would like to know: how do people attach
pull request details to a JIRA issue?

Thanks,
Chandrash3khar Kotekar
Mobile - +91 8600011455


Re: s3 vfs on Mesos Slaves

2015-05-13 Thread Stephen Carman
Thank you for the suggestions. The problem is that we need to
initialize the vfs s3 driver, so what you suggested, Akhil, wouldn't fix the
problem.

Basically, a job is submitted to the cluster and it tries to pull down the data
from s3, but fails because the s3 uri scheme hasn't been initialized in the vfs, so it
doesn't know how to handle the URI.

What I'm asking is: how do we, before the job runs, run some bootstrapping or
setup code that does this initialization or configuration step for
the vfs, so that when the job executes it has the information it needs to handle
the s3 URI?

Thanks,
Steve
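
(In case it helps other readers: one common pattern for this kind of per-executor
setup is to hide the initialization behind a lazy val in an object and touch it from
the first task that runs on each executor JVM. An illustrative Scala sketch -- the
initS3Vfs() helper is a placeholder for whatever registers the provider with
commons-vfs, not real code from this project:)

import org.apache.spark.rdd.RDD

object S3VfsBootstrap {
  // Placeholder: your own code that registers the s3/s3n provider with VFS.
  def initS3Vfs(): Unit = { /* register the S3 provider here */ }

  // Referencing this lazy val forces initS3Vfs() to run exactly once per JVM.
  lazy val initialized: Boolean = {
    initS3Vfs()
    true
  }
}

object Example {
  def process(rdd: RDD[String]): RDD[String] =
    rdd.mapPartitions { iter =>
      S3VfsBootstrap.initialized  // runs the bootstrap once per executor JVM
      iter                        // real per-partition work would go here
    }
}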

On May 13, 2015, at 12:35 PM, jay vyas  wrote:


Might I ask why vfs?  I'm new to vfs and not sure wether or not it predates the 
hadoop file system interfaces (HCFS).

After all spark natively supports any HCFS by leveraging the hadoop FileSystem 
api and class loaders and so on.

So simply putting those resources on your classpath should be sufficient to 
directly connect to s3. By using the sc.hadoopFile (...) commands.

On May 13, 2015 12:16 PM, "Akhil Das"  wrote:
Did you happened to have a look at this https://github.com/abashev/vfs-s3

Thanks
Best Regards

On Tue, May 12, 2015 at 11:33 PM, Stephen Carman  wrote:

> We have a small mesos cluster and these slaves need to have a vfs setup on
> them so that the slaves can pull down the data they need from S3 when spark
> runs.
>
> There doesn’t seem to be any obvious way online on how to do this or how
> easily accomplish this. Does anyone have some best practices or some ideas
> about how to accomplish this?
>
> An example stack trace when a job is ran on the mesos cluster…
>
> Any idea how to get this going? Like somehow bootstrapping spark on run or
> something?
>
> Thanks,
> Steve
>
>
> java.io.IOException: Unsupported scheme s3n for URI s3n://removed
> at com.coldlight.ccc.vfs.NeuronPath.toPath(NeuronPath.java:43)
> at
> com.coldlight.neuron.data.ClquetPartitionedData.makeInputStream(ClquetPartitionedData.java:465)
> at
> com.coldlight.neuron.data.ClquetPartitionedData.access$200(ClquetPartitionedData.java:42)
> at
> com.coldlight.neuron.data.ClquetPartitionedData$Iter.(ClquetPartitionedData.java:330)
> at
> com.coldlight.neuron.data.ClquetPartitionedData.compute(ClquetPartitionedData.java:304)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
> at
> org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
> at org.apache.spark.scheduler.Task.run(Task.scala:64)
> at
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> 15/05/12 13:57:51 ERROR Executor: Exception in task 0.1 in stage 0.0 (TID
> 1)
> java.lang.RuntimeException: java.io.IOException: Unsupported scheme s3n
> for URI s3n://removed
> at
> com.coldlight.neuron.data.ClquetPartitionedData.compute(ClquetPartitionedData.java:307)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
> at
> org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
> at org.apache.spark.scheduler.Task.run(Task.scala:64)
> at
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: Unsupported scheme s3n for URI
> s3n://removed
> at com.coldlight.ccc.vfs.NeuronPath.toPath(NeuronPath.java:43)
> at
> com.coldlight.neuron.data.ClquetPartitionedData.makeInputStream(ClquetPartitionedData.java:465)
> at
> com.coldlight.neuron.data.ClquetPartitionedData.access$200(ClquetPartitionedData.java:42)
> at
> com.coldlight.neuron.data.ClquetPartitionedData$Iter.(ClquetPartitionedData.java:330)
> at
> com.coldlight.neuron.data.ClquetPartitionedData.compute(ClquetPartitionedData.java:304)
> ... 8 more
>
> This e-mail is intended solely for the above-mentioned recipient and it
> may contain confidential or privileged information. If you have received it
> in error, please notify us immediately and delete the e-mail. You must not
> copy, distribute, disclose or take any action in reliance on it. In
> addition, the contents of an attachment to this e-mail may contain software
> viruses which could damage your own computer system. Whil

Re: s3 vfs on Mesos Slaves

2015-05-13 Thread jay vyas
Might I ask why vfs?  I'm new to vfs and not sure whether or not it predates
the hadoop file system interfaces (HCFS).

After all spark natively supports any HCFS by leveraging the hadoop
FileSystem api and class loaders and so on.

So simply putting those resources on your classpath should be sufficient to
directly connect to s3. By using the sc.hadoopFile (...) commands.
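
(To make that concrete, a minimal illustrative Scala sketch -- assuming the s3n
connector classes are on the classpath; the bucket, path and credentials are
placeholders:)

import org.apache.spark.{SparkConf, SparkContext}

object S3nRead {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("s3n-read"))

    // Normally these come from core-site.xml or the environment.
    sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", "YOUR_ACCESS_KEY")
    sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", "YOUR_SECRET_KEY")

    // Goes through the Hadoop FileSystem layer directly -- no VFS involved.
    val lines = sc.textFile("s3n://your-bucket/path/to/data")
    println(lines.count())

    sc.stop()
  }
}
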
On May 13, 2015 12:16 PM, "Akhil Das"  wrote:

> Did you happened to have a look at this https://github.com/abashev/vfs-s3
>
> Thanks
> Best Regards
>
> On Tue, May 12, 2015 at 11:33 PM, Stephen Carman 
> wrote:
>
> > We have a small mesos cluster and these slaves need to have a vfs setup
> on
> > them so that the slaves can pull down the data they need from S3 when
> spark
> > runs.
> >
> > There doesn’t seem to be any obvious way online on how to do this or how
> > easily accomplish this. Does anyone have some best practices or some
> ideas
> > about how to accomplish this?
> >
> > An example stack trace when a job is ran on the mesos cluster…
> >
> > Any idea how to get this going? Like somehow bootstrapping spark on run
> or
> > something?
> >
> > Thanks,
> > Steve
> >
> >
> > java.io.IOException: Unsupported scheme s3n for URI s3n://removed
> > at com.coldlight.ccc.vfs.NeuronPath.toPath(NeuronPath.java:43)
> > at
> >
> com.coldlight.neuron.data.ClquetPartitionedData.makeInputStream(ClquetPartitionedData.java:465)
> > at
> >
> com.coldlight.neuron.data.ClquetPartitionedData.access$200(ClquetPartitionedData.java:42)
> > at
> >
> com.coldlight.neuron.data.ClquetPartitionedData$Iter.(ClquetPartitionedData.java:330)
> > at
> >
> com.coldlight.neuron.data.ClquetPartitionedData.compute(ClquetPartitionedData.java:304)
> > at
> org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
> > at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
> > at
> > org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
> > at org.apache.spark.scheduler.Task.run(Task.scala:64)
> > at
> > org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
> > at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> > at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> > at java.lang.Thread.run(Thread.java:745)
> > 15/05/12 13:57:51 ERROR Executor: Exception in task 0.1 in stage 0.0 (TID
> > 1)
> > java.lang.RuntimeException: java.io.IOException: Unsupported scheme s3n
> > for URI s3n://removed
> > at
> >
> com.coldlight.neuron.data.ClquetPartitionedData.compute(ClquetPartitionedData.java:307)
> > at
> org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
> > at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
> > at
> > org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
> > at org.apache.spark.scheduler.Task.run(Task.scala:64)
> > at
> > org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
> > at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> > at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> > at java.lang.Thread.run(Thread.java:745)
> > Caused by: java.io.IOException: Unsupported scheme s3n for URI
> > s3n://removed
> > at com.coldlight.ccc.vfs.NeuronPath.toPath(NeuronPath.java:43)
> > at
> >
> com.coldlight.neuron.data.ClquetPartitionedData.makeInputStream(ClquetPartitionedData.java:465)
> > at
> >
> com.coldlight.neuron.data.ClquetPartitionedData.access$200(ClquetPartitionedData.java:42)
> > at
> >
> com.coldlight.neuron.data.ClquetPartitionedData$Iter.(ClquetPartitionedData.java:330)
> > at
> >
> com.coldlight.neuron.data.ClquetPartitionedData.compute(ClquetPartitionedData.java:304)
> > ... 8 more
> >
> > This e-mail is intended solely for the above-mentioned recipient and it
> > may contain confidential or privileged information. If you have received
> it
> > in error, please notify us immediately and delete the e-mail. You must
> not
> > copy, distribute, disclose or take any action in reliance on it. In
> > addition, the contents of an attachment to this e-mail may contain
> software
> > viruses which could damage your own computer system. While ColdLight
> > Solutions, LLC has taken every reasonable precaution to minimize this
> risk,
> > we cannot accept liability for any damage which you sustain as a result
> of
> > software viruses. You should perform your own virus checks before opening
> > the attachment.
> >
>


Re: Task scheduling times

2015-05-13 Thread Akshat Aranya
Hi,
Any input on this?  I'm willing to instrument further and experiment
if there are any ideas.

On Mon, May 4, 2015 at 11:27 AM, Akshat Aranya  wrote:
> Hi,
>
> I have been investigating scheduling delays in Spark and I found some
> unexplained anomalies.  In my use case, I have two stages after
> collapsing the transformations: the first is a mapPartitions() and the
> second is a sortByKey().  I found that the task serialization for the
> first stage takes much longer than the second.
>
> 1. mapPartitions() - this launches 256 tasks in 603 ms (avg. 2.363
> ms). Each task serializes to 1220 bytes.
> 2. sortByKey() - this launches 64 tasks in 12 ms (avg. 0.187 ms). Each
> task serializes to 1139 bytes.
>
> Note that the serialized size of the task is similar, but the avg.
> scheduling time is very different.  I also instrumented the code to
> print out the serialization time, and it seems like it is indeed the
> serialization that takes much longer.  This seemed weird to me because
> the biggest part of the Task, the taskBinary is actually directly
> copied from a byte array.
>
> Any explanation of why this happens?
>
> Thanks,
> Akshat




Re: s3 vfs on Mesos Slaves

2015-05-13 Thread Akhil Das
Did you happen to have a look at this: https://github.com/abashev/vfs-s3

Thanks
Best Regards

On Tue, May 12, 2015 at 11:33 PM, Stephen Carman 
wrote:

> We have a small mesos cluster and these slaves need to have a vfs setup on
> them so that the slaves can pull down the data they need from S3 when spark
> runs.
>
> There doesn’t seem to be any obvious way online on how to do this or how
> easily accomplish this. Does anyone have some best practices or some ideas
> about how to accomplish this?
>
> An example stack trace when a job is ran on the mesos cluster…
>
> Any idea how to get this going? Like somehow bootstrapping spark on run or
> something?
>
> Thanks,
> Steve
>
>
> java.io.IOException: Unsupported scheme s3n for URI s3n://removed
> at com.coldlight.ccc.vfs.NeuronPath.toPath(NeuronPath.java:43)
> at
> com.coldlight.neuron.data.ClquetPartitionedData.makeInputStream(ClquetPartitionedData.java:465)
> at
> com.coldlight.neuron.data.ClquetPartitionedData.access$200(ClquetPartitionedData.java:42)
> at
> com.coldlight.neuron.data.ClquetPartitionedData$Iter.(ClquetPartitionedData.java:330)
> at
> com.coldlight.neuron.data.ClquetPartitionedData.compute(ClquetPartitionedData.java:304)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
> at
> org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
> at org.apache.spark.scheduler.Task.run(Task.scala:64)
> at
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> 15/05/12 13:57:51 ERROR Executor: Exception in task 0.1 in stage 0.0 (TID
> 1)
> java.lang.RuntimeException: java.io.IOException: Unsupported scheme s3n
> for URI s3n://removed
> at
> com.coldlight.neuron.data.ClquetPartitionedData.compute(ClquetPartitionedData.java:307)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
> at
> org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
> at org.apache.spark.scheduler.Task.run(Task.scala:64)
> at
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: Unsupported scheme s3n for URI
> s3n://removed
> at com.coldlight.ccc.vfs.NeuronPath.toPath(NeuronPath.java:43)
> at
> com.coldlight.neuron.data.ClquetPartitionedData.makeInputStream(ClquetPartitionedData.java:465)
> at
> com.coldlight.neuron.data.ClquetPartitionedData.access$200(ClquetPartitionedData.java:42)
> at
> com.coldlight.neuron.data.ClquetPartitionedData$Iter.(ClquetPartitionedData.java:330)
> at
> com.coldlight.neuron.data.ClquetPartitionedData.compute(ClquetPartitionedData.java:304)
> ... 8 more
>
> This e-mail is intended solely for the above-mentioned recipient and it
> may contain confidential or privileged information. If you have received it
> in error, please notify us immediately and delete the e-mail. You must not
> copy, distribute, disclose or take any action in reliance on it. In
> addition, the contents of an attachment to this e-mail may contain software
> viruses which could damage your own computer system. While ColdLight
> Solutions, LLC has taken every reasonable precaution to minimize this risk,
> we cannot accept liability for any damage which you sustain as a result of
> software viruses. You should perform your own virus checks before opening
> the attachment.
>


Re: [PySpark DataFrame] When a Row is not a Row

2015-05-13 Thread Nicholas Chammas
Is there some way around this? For example, can Row just be an
implementation of namedtuple throughout?

from collections import namedtuple
class Row(namedtuple):
    ...

From a user perspective, it’s confusing that there are 2 different
implementations of the Row class with the same name.

In my case, I was writing a method to recursively convert a Row to a dict
(since a Row can contain other Rows).

I couldn’t directly check type(obj) == pyspark.sql.types.Row so I ended up
having to do it like this:

def row_to_dict(obj):
    """
    Take a PySpark Row and convert it, and any of its nested Row
    objects, into Python dictionaries.
    """
    if isinstance(obj, list):
        return [row_to_dict(x) for x in obj]
    else:
        try:
            # We can't reliably check that this is a row object
            # due to some weird bug.
            d = obj.asDict()
            return {k: row_to_dict(v) for k, v in d.iteritems()}
        except:
            return obj

That comment about a “weird bug” was my initial reaction, though now I
understand that we have 2 implementations of Row.

Isn’t this worth fixing? It’s just going to confuse people, IMO.

Nick

On Tue, May 12, 2015 at 10:22 PM Davies Liu  wrote:

> The class (called Row) for rows from Spark SQL is created on the fly and is
> different from pyspark.sql.Row (which is a public API for users to create Rows).
>
> The reason we did it this way is that we want to have better
> performance when accessing the columns. Basically, the rows are just named
> tuples (called `Row`).
>
> --
> Davies Liu
> Sent with Sparrow 
>
>
> On Tuesday, May 12, 2015 at 4:49 AM, Nicholas Chammas wrote:
>
> This is really strange.
>
> # Spark 1.3.1
> print type(results)
>
> 
>
> a = results.take(1)[0]
>
>
> print type(a)
>
> 
>
> print pyspark.sql.types.Row
>
> 
>
> print type(a) == pyspark.sql.types.Row
>
> False
>
> print isinstance(a, pyspark.sql.types.Row)
>
> False
>
> If I set a as follows, then the type checks pass fine.
>
> a = pyspark.sql.types.Row('name')('Nick')
>
> Is this a bug? What can I do to narrow down the source?
>
> results is a massive DataFrame of spark-perf results.
>
> Nick
>
>
>
>


Re: lots of test warning messages from UISeleniumSuite

2015-05-13 Thread Yi Tian
Shixiong has a PR working on this.

https://github.com/apache/spark/pull/5983
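
(For reference, one simple way to quiet these in the test setup -- not necessarily
what that PR does -- is to turn the HtmlUnit loggers off, e.g.:)

import org.apache.log4j.{Level, Logger}

object QuietHtmlUnit {
  def apply(): Unit = {
    // DefaultCssErrorHandler lives under the com.gargoylesoftware.htmlunit package.
    Logger.getLogger("com.gargoylesoftware.htmlunit").setLevel(Level.OFF)
  }
}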

Sent from my iPhone

> On May 13, 2015, at 16:52, Reynold Xin  wrote:
> 
> Was looking at a PR test log just now. Can somebody take a look and remove
> the warnings (or just hide them)?
> 
> 
> 15/05/13 01:49:35 INFO UISeleniumSuite: Trying to start HiveThriftServer2:
> port=13125, mode=binary, attempt=0
> 15/05/13 01:50:28 INFO UISeleniumSuite: HiveThriftServer2 started
> successfully
> 15/05/13 01:50:31 WARN DefaultCssErrorHandler: CSS error: '
> http://localhost:29132/static/bootstrap.min.css' [10:11] Error in style
> rule. (Invalid token "*". Was expecting one of: , , , "}",
> ";".)
> 15/05/13 01:50:31 WARN DefaultCssErrorHandler: CSS warning: '
> http://localhost:29132/static/bootstrap.min.css' [10:11] Ignoring the
> following declarations in this rule.
> 15/05/13 01:50:31 WARN DefaultCssErrorHandler: CSS error: '
> http://localhost:29132/static/bootstrap.min.css' [15:41] Error in style
> rule. (Invalid token "*". Was expecting one of: , , , "}",
> ";".)
> 15/05/13 01:50:31 WARN DefaultCssErrorHandler: CSS warning: '
> http://localhost:29132/static/bootstrap.min.css' [15:41] Ignoring the
> following declarations in this rule.
> 15/05/13 01:50:32 WARN DefaultCssErrorHandler: CSS error: '
> http://localhost:29132/static/bootstrap.min.css' [26:14] Error in style
> rule. (Invalid token "*". Was expecting one of: , , , "}",
> ";".)
> 15/05/13 01:50:32 WARN DefaultCssErrorHandler: CSS warning: '
> http://localhost:29132/static/bootstrap.min.css' [26:14] Ignoring the
> following declarations in this rule.
> 15/05/13 01:50:32 WARN DefaultCssErrorHandler: CSS error: '
> http://localhost:29132/static/bootstrap.min.css' [39:24] Error in style
> rule. (Invalid token "*". Was expecting one of: , , , "}",
> ";".)
> 15/05/13 01:50:32 WARN DefaultCssErrorHandler: CSS warning: '
> http://localhost:29132/static/bootstrap.min.css' [39:24] Ignoring the
> following declarations in this rule.
> 15/05/13 01:50:32 WARN DefaultCssErrorHandler: CSS error: '
> http://localhost:29132/static/bootstrap.min.css' [67:23] Error in style
> rule. (Invalid token "*". Was expecting one of: , , , "}",
> ";".)
> 15/05/13 01:50:32 WARN DefaultCssErrorHandler: CSS warning: '
> http://localhost:29132/static/bootstrap.min.css' [67:23] Ignoring the
> following declarations in this rule.
> 15/05/13 01:50:32 WARN DefaultCssErrorHandler: CSS error: '
> http://localhost:29132/static/bootstrap.min.css' [69:190] Error in style
> rule. (Invalid token "*". Was expecting one of: , , , "}",
> ";".)
> 15/05/13 01:50:32 WARN DefaultCssErrorHandler: CSS warning: '
> http://localhost:29132/static/bootstrap.min.css' [69:190] Ignoring the
> following declarations in this rule.
> 15/05/13 01:50:32 WARN DefaultCssErrorHandler: CSS error: '
> http://localhost:29132/static/bootstrap.min.css' [72:31] Error in style
> rule. (Invalid token "*". Was expecting one of: , , , "}",
> ";".)
> 15/05/13 01:50:32 WARN DefaultCssErrorHandler: CSS warning: '
> http://localhost:29132/static/bootstrap.min.css' [72:31] Ignoring the
> following declarations in this rule.
> 15/05/13 01:50:32 WARN DefaultCssErrorHandler: CSS error: '
> http://localhost:29132/static/bootstrap.min.css' [73:45] Error in style
> rule. (Invalid token "*". Was expecting one of: , , , "}",
> ";".)
> 15/05/13 01:50:32 WARN DefaultCssErrorHandler: CSS warning: '
> http://localhost:29132/static/bootstrap.min.css' [73:45] Ignoring the
> following declarations in this rule.
> 15/05/13 01:50:32 WARN DefaultCssErrorHandler: CSS error: '
> http://localhost:29132/static/bootstrap.min.css' [74:45] Error in style
> rule. (Invalid token "*". Was expecting one of: , , , "}",
> ";".)
> 15/05/13 01:50:32 WARN DefaultCssErrorHandler: CSS warning: '
> http://localhost:29132/static/bootstrap.min.css' [74:45] Ignoring the
> following declarations in this rule.
> 15/05/13 01:50:32 WARN DefaultCssErrorHandler: CSS error: '
> http://localhost:29132/static/bootstrap.min.css' [75:44] Error in style
> rule. (Invalid token "*". Was expecting one of: , , , "}",
> ";".)


Re: @since version tag for all dataframe/sql methods

2015-05-13 Thread Nicholas Chammas
Are we not doing the same thing for the Python API?

On Wed, May 13, 2015 at 10:43 AM Olivier Girardot  wrote:

> that's a great idea !
>
> Le mer. 13 mai 2015 à 07:38, Reynold Xin  a écrit :
>
> > I added @since version tag for all public dataframe/sql methods/classes
> in
> > this patch: https://github.com/apache/spark/pull/6101/files
> >
> > From now on, if you merge anything related to DF/SQL, please make sure
> the
> > public functions have @since tag. Thanks.
> >
>
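
(For anyone unfamiliar with the tag, it is just a scaladoc annotation on each public
method; an illustrative example of the convention, not code from the actual patch:)

class ExampleDataFrame {
  /**
   * Returns a new dataset containing only the first `n` rows.
   *
   * @since 1.4.0
   */
  def limit(n: Int): ExampleDataFrame = this
}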


Re: @since version tag for all dataframe/sql methods

2015-05-13 Thread Olivier Girardot
that's a great idea!

Le mer. 13 mai 2015 à 07:38, Reynold Xin  a écrit :

> I added @since version tag for all public dataframe/sql methods/classes in
> this patch: https://github.com/apache/spark/pull/6101/files
>
> From now on, if you merge anything related to DF/SQL, please make sure the
> public functions have @since tag. Thanks.
>


Re: [build system] brief downtime tomorrow morning (5-12-15, 7am PDT)

2015-05-13 Thread shane knapp
this is already done

On Tue, May 12, 2015 at 1:14 PM, shane knapp  wrote:

> i will need to restart jenkins to finish a plugin install and resolve
> https://issues.apache.org/jira/browse/SPARK-7561
>
> this will be very brief, and i'll retrigger any errant jobs i kill.
>
> please let me know if there are any comments/questions/concerns.
>
> thanks!
>
> shane
>
>


Re: Change for submitting to yarn in 1.3.1

2015-05-13 Thread Chester At Work
Patrick,
 There are several things we need; some of them have already been mentioned on the
mailing list before.

I haven't looked at the SparkLauncher code, but here are a few things we need
from our perspective for the Spark Yarn Client:

 1) Client should not be private (unless an alternative is provided) so we
can call it directly.
 2) We need a way to stop a running yarn app programmatically (the PR is
already submitted; see the illustrative sketch below).
 3) Before we start the spark job, we should have a callback to the
application which provides the yarn container capacity (number of cores
and max memory), so the spark program will not set values beyond the maximums (PR
submitted).
 4) The callback could be in the form of yarn app listeners, which are invoked on
yarn status changes (start, in progress, failure, complete, etc.), so the
application can react to these events (in PR).

 5) The yarn client passes arguments to the spark program as main-program
arguments; we have had problems when passing a very large argument due to the
length limit. For example, we serialize the argument as json, encode it, and
then parse it back from the argument. For wide-column datasets, we run into the limit.
Therefore, an alternative way of passing additional, larger arguments is needed.
We are experimenting with passing the args via an established akka messaging
channel.

 6) The spark yarn client in yarn-cluster mode right now is essentially a batch
job with no communication once it is launched. We need to establish a communication
channel so that logs, errors, status updates, progress bars, execution stages,
etc. can be displayed on the application side. We added an akka communication
channel for this (working on a PR).

   Combined with the other items in this list, we are able to redirect print
and error statements to the application log (outside of the hadoop cluster), and show a
spark-UI-equivalent progress bar via a spark listener. We can show yarn progress
via a yarn app listener before spark has started, and status can be updated during
job execution.

We are also experimenting with long-running jobs, with additional spark
commands and interactions via this channel.
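
(To make item 2 above concrete, here is a rough, illustrative Scala sketch of
the underlying YARN client call. It is not the code from the submitted PR, and the
application id string is a placeholder:)

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.yarn.client.api.YarnClient
import org.apache.hadoop.yarn.util.ConverterUtils

object KillYarnApp {
  def main(args: Array[String]): Unit = {
    val yarnClient = YarnClient.createYarnClient()
    yarnClient.init(new Configuration())
    yarnClient.start()

    // Placeholder application id of the running Spark-on-YARN job.
    val appId = ConverterUtils.toApplicationId("application_1431541234567_0042")
    yarnClient.killApplication(appId)

    yarnClient.stop()
  }
}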


 Chester




 
  



Sent from my iPad

On May 12, 2015, at 20:54, Patrick Wendell  wrote:

> Hey Kevin and Ron,
> 
> So is the main shortcoming of the launcher library the inability to
> get an app ID back from YARN? Or are there other issues here that
> fundamentally regress things for you.
> 
> It seems like adding a way to get back the appID would be a reasonable
> addition to the launcher.
> 
> - Patrick
> 
> On Tue, May 12, 2015 at 12:51 PM, Marcelo Vanzin  wrote:
>> On Tue, May 12, 2015 at 11:34 AM, Kevin Markey 
>> wrote:
>> 
>>> I understand that SparkLauncher was supposed to address these issues, but
>>> it really doesn't.  Yarn already provides indirection and an arm's length
>>> transaction for starting Spark on a cluster. The launcher introduces yet
>>> another layer of indirection and dissociates the Yarn Client from the
>>> application that launches it.
>>> 
>> 
>> Well, not fully. The launcher was supposed to solve "how to launch a Spark
>> app programatically", but in the first version nothing was added to
>> actually gather information about the running app. It's also limited in the
>> way it works because of Spark's limitations (one context per JVM, etc).
>> 
>> Still, adding things like this is something that is definitely in the scope
>> for the launcher library; information such as app id can be useful for the
>> code launching the app, not just in yarn mode. We just have to find a clean
>> way to provide that information to the caller.
>> 
>> 
>>> I am still reading the newest code, and we are still researching options
>>> to move forward.  If there are alternatives, we'd like to know.
>>> 
>>> 
>> Super hacky, but if you launch Spark as a child process you could parse the
>> stderr and get the app ID.
>> 
>> --
>> Marcelo
> 
> 




lots of test warning messages from UISeleniumSuite

2015-05-13 Thread Reynold Xin
Was looking at a PR test log just now. Can somebody take a look and remove
the warnings (or just hide them)?


15/05/13 01:49:35 INFO UISeleniumSuite: Trying to start HiveThriftServer2:
port=13125, mode=binary, attempt=0
15/05/13 01:50:28 INFO UISeleniumSuite: HiveThriftServer2 started
successfully
15/05/13 01:50:31 WARN DefaultCssErrorHandler: CSS error: '
http://localhost:29132/static/bootstrap.min.css' [10:11] Error in style
rule. (Invalid token "*". Was expecting one of: , , , "}",
";".)
15/05/13 01:50:31 WARN DefaultCssErrorHandler: CSS warning: '
http://localhost:29132/static/bootstrap.min.css' [10:11] Ignoring the
following declarations in this rule.
15/05/13 01:50:31 WARN DefaultCssErrorHandler: CSS error: '
http://localhost:29132/static/bootstrap.min.css' [15:41] Error in style
rule. (Invalid token "*". Was expecting one of: , , , "}",
";".)
15/05/13 01:50:31 WARN DefaultCssErrorHandler: CSS warning: '
http://localhost:29132/static/bootstrap.min.css' [15:41] Ignoring the
following declarations in this rule.
15/05/13 01:50:32 WARN DefaultCssErrorHandler: CSS error: '
http://localhost:29132/static/bootstrap.min.css' [26:14] Error in style
rule. (Invalid token "*". Was expecting one of: , , , "}",
";".)
15/05/13 01:50:32 WARN DefaultCssErrorHandler: CSS warning: '
http://localhost:29132/static/bootstrap.min.css' [26:14] Ignoring the
following declarations in this rule.
15/05/13 01:50:32 WARN DefaultCssErrorHandler: CSS error: '
http://localhost:29132/static/bootstrap.min.css' [39:24] Error in style
rule. (Invalid token "*". Was expecting one of: , , , "}",
";".)
15/05/13 01:50:32 WARN DefaultCssErrorHandler: CSS warning: '
http://localhost:29132/static/bootstrap.min.css' [39:24] Ignoring the
following declarations in this rule.
15/05/13 01:50:32 WARN DefaultCssErrorHandler: CSS error: '
http://localhost:29132/static/bootstrap.min.css' [67:23] Error in style
rule. (Invalid token "*". Was expecting one of: , , , "}",
";".)
15/05/13 01:50:32 WARN DefaultCssErrorHandler: CSS warning: '
http://localhost:29132/static/bootstrap.min.css' [67:23] Ignoring the
following declarations in this rule.
15/05/13 01:50:32 WARN DefaultCssErrorHandler: CSS error: '
http://localhost:29132/static/bootstrap.min.css' [69:190] Error in style
rule. (Invalid token "*". Was expecting one of: , , , "}",
";".)
15/05/13 01:50:32 WARN DefaultCssErrorHandler: CSS warning: '
http://localhost:29132/static/bootstrap.min.css' [69:190] Ignoring the
following declarations in this rule.
15/05/13 01:50:32 WARN DefaultCssErrorHandler: CSS error: '
http://localhost:29132/static/bootstrap.min.css' [72:31] Error in style
rule. (Invalid token "*". Was expecting one of: , , , "}",
";".)
15/05/13 01:50:32 WARN DefaultCssErrorHandler: CSS warning: '
http://localhost:29132/static/bootstrap.min.css' [72:31] Ignoring the
following declarations in this rule.
15/05/13 01:50:32 WARN DefaultCssErrorHandler: CSS error: '
http://localhost:29132/static/bootstrap.min.css' [73:45] Error in style
rule. (Invalid token "*". Was expecting one of: , , , "}",
";".)
15/05/13 01:50:32 WARN DefaultCssErrorHandler: CSS warning: '
http://localhost:29132/static/bootstrap.min.css' [73:45] Ignoring the
following declarations in this rule.
15/05/13 01:50:32 WARN DefaultCssErrorHandler: CSS error: '
http://localhost:29132/static/bootstrap.min.css' [74:45] Error in style
rule. (Invalid token "*". Was expecting one of: , , , "}",
";".)
15/05/13 01:50:32 WARN DefaultCssErrorHandler: CSS warning: '
http://localhost:29132/static/bootstrap.min.css' [74:45] Ignoring the
following declarations in this rule.
15/05/13 01:50:32 WARN DefaultCssErrorHandler: CSS error: '
http://localhost:29132/static/bootstrap.min.css' [75:44] Error in style
rule. (Invalid token "*". Was expecting one of: , , , "}",
";".)