where is the source code for org.apache.spark.launcher.Main?

2015-07-02 Thread Shiyao Ma
Hi,

It seems to me that Spark launches one process to read spark-defaults.conf
and then launches another process to do the actual app work.

The code here should confirm it:
https://github.com/apache/spark/blob/master/bin/spark-class#L76

"$RUNNER" -cp "$LAUNCH_CLASSPATH" org.apache.spark.launcher.Main "$@"
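For context, this looks like a two-stage launch. A rough, illustrative sketch
of the pattern in plain Scala (this is not Spark's actual code; the jar path
is a placeholder):

    // A small "launcher" JVM -- in Spark, org.apache.spark.launcher.Main --
    // computes the full command line for the real application JVM, and the
    // wrapping shell script then execs what the launcher printed.
    object LauncherHandoffSketch {
      def main(args: Array[String]): Unit = {
        val cmd = Seq("java", "-cp", "/path/to/spark-assembly.jar", // placeholder
          "org.apache.spark.deploy.SparkSubmit") ++ args
        // spark-class captures Main's stdout and execs it; here we simply
        // spawn a child process to show the shape of the handoff.
        new ProcessBuilder(cmd: _*).inheritIO().start().waitFor()
      }
    }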


But looking around, I cannot find any such o.a.s.launcher.Main class.

My question is: where is the source for this Main class?


-- 

I am a cat. My homepage is http://introo.me.




Re: where is the source code for org.apache.spark.launcher.Main?

2015-07-02 Thread Shiyao Ma
After clicking around the GitHub Spark repo, I found it. It is here:
https://github.com/apache/spark/tree/master/launcher/src/main/java/org/apache/spark/launcher

My IntelliJ project sidebar was fully expanded, and I had gotten lost in another folder.

Problem solved.
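
A side note for anyone who lands here: besides Main, which backs the shell
scripts, the same org.apache.spark.launcher package exposes a small public
API, SparkLauncher (new in the 1.4 line, I believe), for starting apps
programmatically. A minimal sketch with placeholder paths and class names:

    import org.apache.spark.launcher.SparkLauncher

    object SparkLauncherSketch {
      def main(args: Array[String]): Unit = {
        val app = new SparkLauncher()
          .setAppResource("/path/to/my-app.jar") // placeholder
          .setMainClass("com.example.MyApp")     // placeholder
          .setMaster("local[2]")
          .launch()                              // spawns the application JVM
        app.waitFor()
      }
    }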




Task InputSize source code location

2015-07-01 Thread Shiyao Ma
Hi,

When running tasks, I found that some tasks have an input size of zero, while others do not.

For example, in this picture: http://snag.gy/g6iJX.jpg

I suspect it has something to do with the block manager.

But where exactly is the source code that records the task input size?
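
In case a pointer helps: the number shown in the UI is fed from each task's
TaskMetrics.inputMetrics, and the same values can be observed from the
driver with a listener. A hedged sketch against the 1.x listener API
(inputMetrics is an Option there; later versions reshaped these fields, and
the class name below is illustrative):

    import org.apache.spark.scheduler.{SparkListener, SparkListenerTaskEnd}

    class InputSizeListener extends SparkListener {
      override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit = {
        val metrics = taskEnd.taskMetrics // may be null for failed tasks
        if (metrics != null) {
          // Tasks that read no input (e.g. pure shuffle stages) carry no
          // input metrics, which is presumably why the UI shows zero.
          metrics.inputMetrics.foreach { in =>
            println(s"task ${taskEnd.taskInfo.taskId} read ${in.bytesRead} bytes")
          }
        }
      }
    }

    // Register on the driver: sc.addSparkListener(new InputSizeListener())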


Thanks.




Why does driver transfer application jar to executors?

2015-06-17 Thread Shiyao Ma
Hi,

Looking at my executor logs, the submitted application jar is
transmitted to each executor.

Why does Spark do this? To my understanding, the tasks to be run
are already serialized in the TaskDescription.
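
One plausible way to see why the jar still has to travel: serializing a task
captures the closure's object graph, not the bytecode of the classes it
references, so the executor needs the jar on its classpath just to
deserialize the task. A hedged sketch with illustrative names:

    // Record is defined only in the application jar.
    case class Record(key: String, value: Int)

    object JarShippingSketch {
      def run(sc: org.apache.spark.SparkContext): Unit = {
        val data = sc.parallelize(Seq(Record("a", 1), Record("b", 2)))
        // The map closure captures Record. Serialization ships the closure's
        // bytes, but deserializing them on an executor requires Record's
        // class file -- hence the jar transfer.
        val total = data.map(r => r.value + 1).reduce(_ + _)
        println(total)
      }
    }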


Regards.




Can standalone cluster manager provide I/O information on worker nodes?

2015-05-11 Thread Shiyao Ma
Hi,

Can the standalone cluster manager provide I/O information on worker nodes?

If not, could you point out the proper file to modify to add
that functionality?

Also, does Mesos support this?
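
For what it's worth: as far as I know the standalone Master/Worker UI only
tracks cores and memory, and the usual hook for node-level numbers is
Spark's metrics system (configured in conf/metrics.properties, e.g.
attaching a sink such as org.apache.spark.metrics.sink.CsvSink to the worker
instance). As a lower-level sketch, a worker-side process can also sample
its own I/O counters from Linux's /proc (Linux-only; field names per
proc(5)):

    import scala.io.Source

    object ProcIoSample {
      // Parse /proc/self/io lines of the form "read_bytes: 12345".
      def ioCounters(): Map[String, Long] =
        Source.fromFile("/proc/self/io").getLines()
          .map(_.split(":\\s*"))
          .collect { case Array(k, v) => k -> v.trim.toLong }
          .toMap

      def main(args: Array[String]): Unit = {
        val io = ioCounters()
        println(s"read_bytes=${io("read_bytes")} write_bytes=${io("write_bytes")}")
      }
    }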


Regards.




Understanding the build params for spark with sbt.

2015-04-20 Thread Shiyao Ma
Hi.

My usage involves only Spark core and HDFS, so no Spark SQL,
MLlib, or other components are involved.


I saw the hint at http://spark.apache.org/docs/latest/building-spark.html,
which gives a sample like:

    build/sbt -Pyarn -Phadoop-2.3 assembly

(What is the -P for?)


Fundamentally, I'd like sbt to compile and package only the core
and the Hadoop integration.

Meanwhile, I would appreciate it if you could tell me which Scala
file controls the logic of -Pyarn, so that I can dig into the build
source and get finer control.
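
From what I can tell, -P selects Maven build profiles; Spark's sbt build
reads the Maven poms (via the sbt-pom-reader plugin), so the same -P flags
work there, and the sbt-side logic lives under project/ (SparkBuild.scala).
An illustrative sketch of the underlying idea in an ordinary build.sbt
fragment, not Spark's actual build:

    // Plain sbt has no Maven-style profiles; an equivalent switch can be
    // expressed with a system property, e.g. `sbt -Dyarn.enabled=true`.
    val yarnEnabled = sys.props.get("yarn.enabled").contains("true")

    libraryDependencies ++= {
      if (yarnEnabled)
        Seq("org.apache.hadoop" % "hadoop-yarn-client" % "2.3.0") // placeholder version
      else
        Seq.empty
    }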



Thanks.

-- 

I am a cat. My homepage is http://introo.me.
