Re: Write Spark Connection client application in Go

2023-09-14 Thread bo yang
That’s so cool! Great work y’all :) On Tue, Sep 12, 2023 at 8:14 PM bo yang wrote: Hi Spark Friends, Anyone interested in using Golang to write Spark application? We created a Spark Connect Go Client library

Re: Write Spark Connection client application in Go

2023-09-13 Thread Martin Grund
Anyone interested in using Golang to write Spark application? We created a Spark Connect Go Client library <https://github.com/apache/spark-connect-go>. Would love to hear feedback/thoughts from the community. Please see the quick start guide

Re: Write Spark Connection client application in Go

2023-09-12 Thread Holden Karau
That’s so cool! Great work y’all :) On Tue, Sep 12, 2023 at 8:14 PM bo yang wrote: Hi Spark Friends, Anyone interested in using Golang to write Spark application? We created a Spark Connect Go Client library <https://github.com/apache/spark-connect-go>. Would love to hear feedback/thoughts from the community.

Write Spark Connection client application in Go

2023-09-12 Thread bo yang
Hi Spark Friends, Anyone interested in using Golang to write Spark application? We created a Spark Connect Go Client library <https://github.com/apache/spark-connect-go>. Would love to hear feedback/thoughts from the community. Please see the quick start guide <https://github.com/apa

Re: [Spark] spark client for Hadoop 2.x

2022-04-06 Thread Morven Huang
We use Hadoop 2.7.7 in our infrastructure currently. 1) Does Spark have a plan to publish the Spark client dependencies for Hadoop 2.x? 2) Are the new Spark clients capable of connecting to the Hadoop 2.x cluster? (According to a simple test, Spark client 3.2.1 had

[Spark] spark client for Hadoop 2.x

2022-04-06 Thread Amin Borjian
From Spark version 3.1.0 onwards, the clients provided for Spark are built with Hadoop 3 and placed in the Maven repository. Unfortunately we use Hadoop 2.7.7 in our infrastructure currently. 1) Does Spark have a plan to publish the Spark client dependencies for Hadoop 2.x? 2) Are the new Spark clients capable of connecting to the Hadoop 2.x cluster?

Re: Spark standalone , client mode. How do I monitor?

2017-06-29 Thread Nirav Patel
You can use Ganglia, Ambari or Nagios to monitor Spark workers/masters. Spark executors are resilient. There are many proprietary software companies as well that just do Hadoop application monitoring.

Spark standalone , client mode. How do I monitor?

2017-06-27 Thread anna stax
Hi all, I have a spark standalone cluster. I am running a spark streaming application on it and the deploy mode is client. I am looking for the best way to monitor the cluster and application so that I will know when the application/cluster is down. I cannot move to cluster deploy mode now. I
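One option that fits client deploy mode: the standalone master’s web UI also serves a machine-readable status page at /json, which a small watchdog can poll. The sketch below assumes that endpoint and the field names it returns (status, workers, state) behave as on recent standalone masters; the hostname is a placeholder, so verify both against your Spark version before relying on it.

```python
import json
import urllib.request

MASTER_URL = "http://spark-master:8080/json"  # placeholder host; point at your master's web UI

def cluster_healthy(status: dict) -> bool:
    """Return True if the master reports ALIVE and at least one worker is ALIVE."""
    workers_alive = [w for w in status.get("workers", []) if w.get("state") == "ALIVE"]
    return status.get("status") == "ALIVE" and len(workers_alive) > 0

def check_master(url: str = MASTER_URL) -> bool:
    """Poll the master's /json endpoint; an unreachable master counts as down."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return cluster_healthy(json.load(resp))
    except OSError:
        return False

if __name__ == "__main__":
    print("cluster up" if check_master() else "cluster DOWN")
```

Run from cron (or any scheduler) and alert when it reports down; the same loop can additionally poll the driver’s own UI port to detect the application, as opposed to the cluster, dying.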

spark streaming client program needs to be restarted after few hours of idle time. how can I fix it?

2016-10-18 Thread kant kodali
Hi Guys, my Spark Streaming client program works fine as long as the receiver is receiving data, but if the receiver has no more data to receive for a few hours (4-5 hours) and then starts receiving data again, at that point the Spark client program doesn't seem to process any data

Re: High virtual memory consumption on spark-submit client.

2016-05-13 Thread jone
No, I have set master to yarn-cluster. When SparkPi is running, the result of free -t is as follows: Mem: total 32740732, used 32105684, free 635048, shared 0, buffers 683332

Re: High virtual memory consumption on spark-submit client.

2016-05-12 Thread Harsh J
How many CPU cores are on that machine? Read http://qr.ae/8Uv3Xq You can also confirm the above by running the pmap utility on your process and most of the virtual memory would be under 'anon'.
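The linked answer comes down to glibc's per-thread malloc arenas: on 64-bit, glibc may create up to 8 arenas per core, each reserving a 64 MB virtual mapping that is mostly never touched. A back-of-the-envelope estimate (a sketch; the constants are glibc defaults, not anything Spark-specific):

```python
ARENAS_PER_CORE = 8   # glibc default cap: 8 arenas per CPU core (64-bit)
ARENA_SIZE_MB = 64    # each arena reserves a 64 MB virtual mapping

def arena_virtual_mb(cores: int) -> int:
    """Upper bound on virtual address space reserved by glibc malloc arenas, in MB."""
    return cores * ARENAS_PER_CORE * ARENA_SIZE_MB

# A 16-core box can reserve ~8 GB of 'anon' virtual mappings
# before the JVM heap is even counted:
print(arena_virtual_mb(16))  # 8192
```

Which is why a 9G VIRT figure for a mostly idle submitter is not alarming by itself; RES is the number to watch. If the reservation bothers you, the MALLOC_ARENA_MAX environment variable caps the arena count.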

Re: High virtual memory consumption on spark-submit client.

2016-05-12 Thread Mich Talebzadeh
Can you please do the following: run jps | grep SparkSubmit, then send the output of ps aux | grep <pid>, top -p <pid>, and the output of free. HTH Dr Mich Talebzadeh LinkedIn * https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw

High virtual memory consumption on spark-submit client.

2016-05-12 Thread jone
The virtual memory is 9G when I run org.apache.spark.examples.SparkPi under yarn-cluster mode, using default configurations. (top output truncated: PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND)

Re: Spark for client

2016-03-01 Thread Todd Nist
Minudika Malshan <minudika...@gmail.com> wrote: Hi, I think the Zeppelin Spark interpreter will give a solution to your problem. Regards.

Re: Spark for client

2016-03-01 Thread Mich Talebzadeh
Regards. Minudika / Minudika Malshan, Undergraduate, Department of Computer Science and Engineering, University of Moratuwa. Mobile: +94715659887

Re: Spark for client

2016-03-01 Thread Mohannad Ali
Department of Computer Science and Engineering, University of Moratuwa. Mobile: +94715659887. LinkedIn: https://lk.linkedin.com/in/minudika On Tue, Mar 1, 2016 at 12:35

Re: Spark for client

2016-02-29 Thread Mich Talebzadeh
LinkedIn: https://lk.linkedin.com/in/minudika On Tue, Mar 1, 2016 at 12:35 AM, Sabarish Sasidharan <sabarish.sasidha...@manthan.com> wrote: Zeppelin? Regards Sab

Re: Spark for client

2016-02-29 Thread Minudika Malshan
Zeppelin? Regards Sab On 01-Mar-2016 12:27 am, "Mich Talebzadeh" <mich.talebza...@gmail.com> wrote: Hi, Is there such thing as Spark for client much like RDBMS client that have cut

Re: Spark for client

2016-02-29 Thread Minudika Malshan
On Tue, Mar 1, 2016 at 12:35 AM, Sabarish Sasidharan <sabarish.sasidha...@manthan.com> wrote: Zeppelin? Regards Sab On 01-Mar-2016 12:27 am, "Mich Talebzadeh" <mich.talebza...@gmail.com> wrote: Hi, Is there such thing as Spark for client

Re: Spark for client

2016-02-29 Thread Sabarish Sasidharan
Zeppelin? Regards Sab On 01-Mar-2016 12:27 am, "Mich Talebzadeh" <mich.talebza...@gmail.com> wrote: Hi, Is there such thing as Spark for client much like RDBMS client that have cut down version of their big brother useful for client connectivity but cannot

Spark for client

2016-02-29 Thread Mich Talebzadeh
Hi, Is there such a thing as Spark for client, much like RDBMS clients that have a cut-down version of their big brother, useful for client connectivity but that cannot be used as a server? Thanks, Dr Mich Talebzadeh

spark yarn client mode

2016-01-19 Thread Sanjeev Verma
Hi, Do I need to install Spark on all the YARN cluster nodes if I want to submit the job to YARN in client mode? Is there any way in which I can spawn Spark job executors on cluster nodes where I have not installed Spark? Thanks Sanjeev

Re: spark yarn client mode

2016-01-19 Thread 刘虓
Hi, No, you don't need to. However, when submitting jobs certain resources will be uploaded to HDFS, which could be a performance issue. Read the log and you will understand: 15/12/29 11:10:06 INFO Client: Uploading resource file:/data/spark/spark152/lib/spark-assembly-1.5.2-hadoop2.6.0.jar -> hdfs
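One common mitigation in Spark 1.x was to stage the assembly jar on HDFS once and point spark.yarn.jar at it, so each submission references the staged copy instead of re-uploading it (paths below are illustrative; check the Running on YARN docs for your version):

```
# upload once:
#   hdfs dfs -put lib/spark-assembly-1.5.2-hadoop2.6.0.jar /spark/
# then in conf/spark-defaults.conf:
spark.yarn.jar  hdfs:///spark/spark-assembly-1.5.2-hadoop2.6.0.jar
```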

Re: strange behavior in spark yarn-client mode

2016-01-14 Thread Marcelo Vanzin
On Thu, Jan 14, 2016 at 10:17 AM, Sanjeev Verma wrote: > now it spawn a single executors with 1060M size, I am not able to understand > why this time it executes executors with 1G+overhead not 2G what I > specified. Where are you looking for the memory size for the

Re: strange behavior in spark yarn-client mode

2016-01-14 Thread Marcelo Vanzin
Please reply to the list. The web UI does not show the total size of the executor's heap. It shows the amount of memory available for caching data, which is, give or take, 60% of the heap by default.
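As a rough check of that 60% figure (a sketch: 0.6 and 0.9 are the Spark 1.x legacy defaults for spark.storage.memoryFraction and its safety fraction, so treat the constants as assumptions for your particular version):

```python
def ui_storage_memory_mb(executor_heap_mb: float,
                         memory_fraction: float = 0.6,
                         safety_fraction: float = 0.9) -> float:
    """Approximate the memory figure the executors page shows:
    the slice of the heap reserved for caching, not the heap itself."""
    return executor_heap_mb * memory_fraction * safety_fraction

# A 2 GB executor heap shows up as roughly 1.1 GB of cache memory:
print(ui_storage_memory_mb(2048))  # ~1106 MB
```

So an executor configured with 2g appearing in the UI as about 1G is the expected display, not a lost gigabyte.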

strange behavior in spark yarn-client mode

2016-01-14 Thread Sanjeev Verma
I am seeing a strange behaviour while running Spark in yarn-client mode. I am observing this on a single-node YARN cluster. In spark-defaults I have configured the executor memory as 2g and started the spark shell as follows: bin/spark-shell --master yarn-client, which triggers the 2 executors

Re: Stop Spark yarn-client job

2015-11-26 Thread Jeff Zhang
Could you attach the YARN AM log?

Stop Spark yarn-client job

2015-11-26 Thread Jagat Singh
Hi, What is the correct way to fully stop a Spark job which is running as yarn-client using spark-submit? We are using sc.stop in the code and can see the job still running (in the YARN resource manager) after the final Hive insert is complete. The code flow is: start context, do some work, insert
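While diagnosing why the application lingers, the YARN CLI can be used to list and reap it from the ResourceManager side. A small helper that assembles the kill command (the application id is illustrative; subprocess invocation is commented out since it needs the yarn CLI on PATH):

```python
import subprocess

def kill_yarn_app(app_id: str) -> list:
    """Build (and, in real use, run) the YARN kill command for a lingering app.

    Find the id first with: yarn application -list -appStates RUNNING
    """
    cmd = ["yarn", "application", "-kill", app_id]
    # subprocess.run(cmd, check=True)  # uncomment on a machine with the yarn CLI
    return cmd

print(kill_yarn_app("application_1448000000000_0001"))
```

This is a stopgap for cleanup, though; if sc.stop() is being called and the app still shows RUNNING, the AM log (as asked for above) is where the real answer lives.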

Spark Yarn-client Kerberos on remote cluster

2015-04-14 Thread philippe L
Dear All, I would like to know if it's possible to configure SparkConf() in order to interact with a remote kerberized cluster in yarn-client mode. Spark will not be installed on the cluster itself, and the localhost can't ask for a ticket, but a keytab has been generated for this purpose

Re: Spark Yarn-client Kerberos on remote cluster

2015-04-14 Thread Neal Yin
kerberized cluster in yarn-client mode. Spark will not be installed on the cluster itself and the localhost can't ask for a ticket, but a keytab has been generated for this purpose and provided to the localhost. My purpose is to code in Eclipse on my localhost and submit my code in yarn-client mode
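For reference, the two usual routes with a keytab look like the fragment below (a sketch; principal and paths are illustrative, and the spark.yarn.keytab/spark.yarn.principal options only exist in Spark 1.5 and later, so check your version):

```
# authenticate locally from the keytab, then submit:
kinit -kt /path/to/user.keytab user@REALM.EXAMPLE
spark-submit --master yarn-client ...

# or, on Spark 1.5+, let Spark log in and re-login from the keytab itself:
spark-submit --master yarn-client \
  --conf spark.yarn.keytab=/path/to/user.keytab \
  --conf spark.yarn.principal=user@REALM.EXAMPLE \
  ...
```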

Spark yarn-client submission example?

2015-03-17 Thread Michal Klos
Hi, We have a Scala application and we want it to programmatically submit Spark jobs to a Spark-YARN cluster in yarn-client mode. We're running into a lot of classpath issues, e.g. once submitted it looks for jars in our parent Scala application's local directory, jars that it shouldn't need.
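One way to sidestep embedded-classpath headaches is to shell out to spark-submit instead of constructing the SparkContext inside the parent application, so the child gets Spark's own launch classpath. A minimal sketch (jar paths and class names are placeholders; the actual run call is commented out since it needs spark-submit on PATH):

```python
import subprocess

def build_submit_cmd(app_jar: str, main_class: str, extra_jars=()):
    """Assemble a spark-submit invocation for yarn-client mode.

    Shelling out keeps the child's classpath independent of the parent
    application's, which avoids the 'looks for jars in the parent's
    local directory' problem described above.
    """
    cmd = ["spark-submit", "--master", "yarn-client",
           "--class", main_class]
    if extra_jars:
        cmd += ["--jars", ",".join(extra_jars)]
    cmd.append(app_jar)
    return cmd

cmd = build_submit_cmd("/path/to/app.jar", "com.example.Main")
print(cmd)
# subprocess.run(cmd, check=True)  # on a machine with spark-submit on PATH
```

The trade-off is losing in-process access to the job's results; for fully in-process submission, the classpath the driver sees has to be Spark's, not the parent application's.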

Re: Spark (yarn-client mode) Hangs in final stages of Collect or Reduce

2015-02-09 Thread nitin
Have you checked the corresponding executor logs as well? I think the information provided by you here is too little to actually understand your issue.

Re: Spark clustered client

2014-07-23 Thread Nick Pentreath
At the moment your best bet for sharing SparkContexts across jobs will be the Ooyala job server: https://github.com/ooyala/spark-jobserver It doesn't yet support Spark 1.0, though I did manage to amend it to get it to build and run on 1.0.

Spark clustered client

2014-07-22 Thread Asaf Lahav
Hi Folks, I have been trying to dig up some information regarding the possibilities when wanting to deploy more than one client process that consumes Spark. Let's say I have a Spark cluster of 10 servers, and would like to set up 2 additional servers which send requests to it