I get warnings:

SparkContext: Requesting executors is only supported in coarse-grained mode
ExecutorAllocationManager: Unable to reach the cluster manager to request 2 total executors
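
(My guess is that these come from dynamic allocation trying to resize the 
executor pool. As an experiment, would launching the shell with allocation 
disabled, e.g.

    spark-shell --conf spark.dynamicAllocation.enabled=false

make the warnings go away, or do they point to a real misconfiguration on my 
side?)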

I get info messages:
INFO ContextCleaner: Cleaned accumulator 4

Then my "job" just seems to hang, and I don't know how to tell whether it is 
actually running. When I look at Ganglia, I see cluster CPU utilization go up 
from about 10% to 50% when the job starts.
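
(Is the application web UI the right place to check whether stages are 
progressing? I believe the driver serves it on port 4040 by default, i.e. 
something like

    http://<driver-host>:4040

where <driver-host> is just a placeholder for the machine running 
spark-shell.)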

I expect this to be mostly I/O-bound, but I don't have any way to estimate how 
long the job should take to run.

Also, since I am running spark-shell interactively, I don't know how to run it 
under nohup. I am fairly certain that my PuTTY window to the cluster times out 
and kills the job.
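
(Would something along these lines work, assuming I put the job into a script 
file, hypothetically named myjob.scala?

    nohup spark-shell -i myjob.scala > myjob.log 2>&1 &

Or is the usual approach to package the code as a jar and launch it with 
spark-submit so it survives the SSH session? I could also try keeping the 
session alive with screen or tmux.)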

Any help appreciated.


