Re: Fwd: seeing this message repeatedly.

2016-09-05 Thread Radoslaw Gruchalski
All your workers go via the public IP. Do you have the required ports open? Why the
public IP? Would it not be better to use the private 10.x address?
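
A minimal spark-env.sh sketch along those lines, assuming a Spark 2.x standalone
cluster inside a VPC; the 10.0.0.x addresses below are placeholders, not values
taken from this setup:

# spark-env.sh on each node (sketch; addresses are placeholders)
# Bind Spark to the instance's private address so master/worker traffic
# stays on the VPC-internal network instead of going via the public IP.
export SPARK_LOCAL_IP="10.0.0.12"       # this node's private IP
export SPARK_MASTER_HOST="10.0.0.10"    # the master's private IP
# Keep SPARK_PUBLIC_DNS only if the web UI must be reachable from outside:
export SPARK_PUBLIC_DNS="52.44.36.224"

Note that with the driver running outside AWS, the workers must also be able to
reach the driver's address, so the relevant ports need to be open in both
directions.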

–
Best regards,
Radek Gruchalski
ra...@gruchalski.com


On September 5, 2016 at 11:49:30 AM, kant kodali (kanth...@gmail.com) wrote:



-- Forwarded message --
From: kant kodali 
Date: Sat, Sep 3, 2016 at 5:39 PM
Subject: seeing this message repeatedly.
To: "user @spark" 



Hi Guys,

I am running my driver program on my local machine and my Spark cluster is
on AWS. The big question is that I don't know the right settings to get
around this public/private IP issue on AWS. My spark-env.sh currently
has the following lines:

export SPARK_PUBLIC_DNS="52.44.36.224"
export SPARK_WORKER_CORES=12
export SPARK_MASTER_OPTS="-Dspark.deploy.defaultCores=4"

I am seeing the lines below when I run my driver program on my local
machine. I'm not sure what is going on.



16/09/03 17:32:15 INFO DAGScheduler: Submitting 50 missing tasks from
ShuffleMapStage 0 (MapPartitionsRDD[1] at start at Consumer.java:41)
16/09/03 17:32:15 INFO TaskSchedulerImpl: Adding task set 0.0 with 50 tasks
16/09/03 17:32:30 WARN TaskSchedulerImpl: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered and
have sufficient resources
16/09/03 17:32:45 WARN TaskSchedulerImpl: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered and
have sufficient resources


Re: Assembly build on spark 2.0.0

2016-08-27 Thread Radoslaw Gruchalski
Ah, an uberjar. Normally one would build the uberjar with the Maven Shade plugin.
I haven't looked into the Spark code much recently, but it wouldn't make much sense
to have a separate Maven command to build an uberjar while building a
distribution because, from memory, if you open the tgz file, the uberjar sits
in the lib directory.
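
For reference, a sketch of how a distribution archive can be built and inspected,
assuming a Spark 2.0 source checkout with the usual dev/make-distribution.sh
script; the profile name is just an example:

# Build a binary distribution from the Spark source tree (sketch).
./dev/make-distribution.sh --name custom --tgz -Phadoop-2.7
# List the jars bundled inside the resulting archive:
tar tzf spark-*-bin-custom.tgz | grep '\.jar$' | head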
-- 
Best regards,
Rad

_
From: Srikanth Sampath 
Sent: Saturday, August 27, 2016 5:24 am
Subject: Re: Assembly build on spark 2.0.0
To: Radoslaw Gruchalski 
Cc:  


Hi,
Thanks Radek. However, mvn package does not build the uber jar. I am looking
for an uber jar and not a distribution. I have seen references to the uber jar
here. What I see in the spark 2.0 codeline (assembly/pom.xml) builds a
distribution:

    <profile>
      <id>bigtop-dist</id>
      <build>
        <plugins>
          <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-assembly-plugin</artifactId>
            <executions>
              <execution>
                <id>dist</id>
                <phase>package</phase>
                <goals>
                  <goal>single</goal>
                </goals>
                <configuration>
                  <descriptors>
                    <descriptor>src/main/assembly/assembly.xml</descriptor>
                  </descriptors>
                </configuration>
              </execution>
            </executions>
          </plugin>
          ...
        </plugins>
      </build>
    </profile>



In src/main/assembly/assembly.xml we see



  <id>dist</id>
  <formats>
    <format>tar.gz</format>
    <format>dir</format>
  </formats>
  <includeBaseDirectory>false</includeBaseDirectory>
  ...
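
As a pointer, a sketch of how that profile could be activated, assuming the
standard build/mvn wrapper in the Spark source root; flags other than the
profile id are only examples:

# Activate the bigtop-dist profile so the maven-assembly-plugin's "dist"
# execution runs during the package phase (sketch):
./build/mvn -Pbigtop-dist -DskipTests package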



On Sat, Aug 27, 2016 at 1:02 AM, Radoslaw Gruchalski  
wrote:
mvn package might be the command you’re looking for.




– 
Best regards,
Radek Gruchalski
ra...@gruchalski.com

 


On August 26, 2016 at 3:59:24 PM, Srikanth Sampath (ssampath.apa...@gmail.com)
wrote:
Hi,
mvn assembly is creating a .tgz distribution. How can I create a plain jar
archive? I would like to create a spark-assembly-.jar
-Srikanth






Re: Assembly build on spark 2.0.0

2016-08-26 Thread Radoslaw Gruchalski
mvn package might be the command you’re looking for.

–
Best regards,
Radek Gruchalski
ra...@gruchalski.com


On August 26, 2016 at 3:59:24 PM, Srikanth Sampath (
ssampath.apa...@gmail.com) wrote:

Hi,
mvn assembly is creating a .tgz distribution.  How can I create a plain jar
archive?  I would like to create a spark-assembly-.jar
-Srikanth


Re: spark on kubernetes

2016-05-23 Thread Radoslaw Gruchalski
Sounds surprisingly close to this:
https://github.com/apache/spark/pull/9608

I can resurrect the work on the bridge mode for Spark 2. The reason the work
on the old one was suspended was that Spark was going through so many changes
at that time that a lot of the work done was wiped out by the changes towards
2.0.

I know that Lightbend was also interested in having bridge mode.
–  
Best regards,

Radek Gruchalski

ra...@gruchalski.com
de.linkedin.com/in/radgruchalski


On May 23, 2016 at 7:14:51 PM, Timothy Chen (tnac...@gmail.com) wrote:

This will also simplify things for Mesos users; DCOS has to work around
this with our own proxying.

Tim  

On Sun, May 22, 2016 at 11:53 PM, Gurvinder Singh  
 wrote:  
> Hi Reynold,  
>  
> So if that's OK with you, can I go ahead and create a JIRA for this? As it
> seems this feature is missing currently and can benefit not just kubernetes
> users but Spark standalone mode users in general too.
>  
> - Gurvinder  
> On 05/22/2016 12:49 PM, Gurvinder Singh wrote:  
>> On 05/22/2016 10:23 AM, Sun Rui wrote:  
>>> If it is possible to rewrite URLs in outbound responses in Knox or another
>>> reverse proxy, would that solve your issue?
>> Any process which can keep track of worker and application driver IP
>> addresses and route traffic to those will work. Since the Spark master
>> does exactly this, because all workers and applications have to register
>> with the master, I propose the master as the place to add such
>> functionality.
>>
>> I am not aware of Knox's capabilities, but Nginx or any other normal
>> reverse proxy will not be able to do this on its own due to the dynamic
>> nature of application drivers and, to some extent, workers too.
>>  
>> - Gurvinder  
 On May 22, 2016, at 14:55, Gurvinder Singh  
 wrote:  
  
 On 05/22/2016 08:32 AM, Reynold Xin wrote:  
> Kubernetes itself already has facilities for http proxy, doesn't it?  
>  
 Yeah, kubernetes has an ingress controller which can act as the L7 load
 balancer and route traffic to the Spark UI in this case. But I am referring
 to the links present in the UI to the worker and application UIs. I replied
 in detail to Sun Rui's mail where I gave an example of a possible scenario.
  
 - Gurvinder  
>  
> On Sat, May 21, 2016 at 9:30 AM, Gurvinder Singh
> <gurvinder.si...@uninett.no> wrote:
>  
> Hi,  
>  
> I am currently working on deploying Spark on kubernetes (K8s) and it is
> working fine. I am running Spark in standalone mode and checkpointing
> the state to a shared system, so if the master fails K8s restarts it, it
> recovers the earlier state from the checkpoint, and things just work fine. I
> have an issue with the Spark master web UI and accessing the worker and
> application UI links. In brief, the kubernetes service model allows me to
> expose the master service to the internet, but accessing the
> application/worker UIs is not possible, as I would then have to expose them
> individually too, and given that I can have multiple applications it becomes
> hard to manage.
>
> One solution could be for the master to act as a reverse proxy for accessing
> information/state/logs from applications/workers. Since it has the
> information about their endpoints when applications/workers register with
> the master, when a user initiates a request to access the information, the
> master can proxy the request to the corresponding endpoint.
>
> So I am wondering if someone has already done work in this direction; it
> would be great to know. If not, would the community be interested in such a
> feature? If yes, how and where should I get started? It would be helpful for
> me to have some guidance to start working on this.
>  
> Kind Regards,  
> Gurvinder  