That’s so cool! Great work y’all :)

On Tue, Sep 12, 2023 at 8:14 PM bo yang wrote:

> Hi Spark Friends,
>
> Anyone interested in using Golang to write Spark applications? We created a
> Spark Connect Go Client library <https://github.com/apache/spark-connect-go>.
> Would love to hear feedback/thoughts from the community.
Hi Spark Friends,

Anyone interested in using Golang to write Spark applications? We created a
Spark Connect Go Client library <https://github.com/apache/spark-connect-go>.
Would love to hear feedback/thoughts from the community.

Please see the quick start guide
<https://github.com/apache/spark-connect-go>.
From Spark version 3.1.0 onwards, the clients provided for Spark are built
with Hadoop 3 and placed in the Maven repository. Unfortunately we use Hadoop
2.7.7 in our infrastructure currently.

1) Does Spark have a plan to publish the Spark client dependencies for
Hadoop 2.x?
2) Are the new Spark clients capable of connecting to a Hadoop 2.x cluster?
(According to a simple test, Spark client 3.2.1 had
You can use Ganglia, Ambari or Nagios to monitor Spark workers/masters.
Spark executors are resilient. There are many proprietary software companies
as well that just do Hadoop application monitoring.
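If you also want the application itself to raise an alert, one option is to
register a SparkListener from the driver. A minimal sketch; the alert() hook
and the choice of events here are assumptions for illustration, not something
prescribed in this thread:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.scheduler.{SparkListener, SparkListenerApplicationEnd, SparkListenerExecutorRemoved}

    object MonitoredApp {
      // Hypothetical hook: wire this to your pager/email system of choice.
      def alert(msg: String): Unit = println(s"ALERT: $msg")

      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("monitored-app"))
        sc.addSparkListener(new SparkListener {
          override def onExecutorRemoved(e: SparkListenerExecutorRemoved): Unit =
            alert(s"Executor ${e.executorId} removed: ${e.reason}")
          override def onApplicationEnd(end: SparkListenerApplicationEnd): Unit =
            alert(s"Application ended at ${end.time}")
        })
        // ... run the streaming job as before ...
      }
    }

This only covers the application side; an external monitor such as the tools
named above is still needed to catch the case where the whole driver dies.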
On Tue, Jun 27, 2017 at 5:03 PM, anna stax wrote:
> Hi all,
>
> I have
Hi all,

I have a Spark standalone cluster. I am running a Spark Streaming
application on it and the deploy mode is client. I am looking for the best
way to monitor the cluster and application so that I will know when the
application/cluster is down. I cannot move to cluster deploy mode now.
Hi Guys,

My Spark Streaming client program works fine as long as the receiver
receives the data, but say my receiver has no more data to receive for a few
hours (4-5 hours) and then starts receiving data again; at that point the
Spark client program doesn't seem to process any data.
No, I have set master to yarn-cluster.
When SparkPi is running, the result of free -t is as follows:

[running]mqq@10.205.3.29:/data/home/hive/conf$ free -t
             total       used       free     shared    buffers     cached
Mem:      32740732   32105684     635048          0     683332
How many CPU cores are on that machine? Read http://qr.ae/8Uv3Xq
You can also confirm the above by running the pmap utility on your process;
most of the virtual memory will be under 'anon'.
On Fri, 13 May 2016 09:11 jone, wrote:
> The virtual memory is 9G When i run
Can you please do the following:

jps | grep SparkSubmit

and send the output of

ps aux | grep <PID>
top -p <PID>

and the output of

free

HTH

Dr Mich Talebzadeh

LinkedIn *
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
The virtual memory is 9G when I run org.apache.spark.examples.SparkPi in
yarn-cluster mode, using the default configurations.

  PID USER     PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
Hi,

I think the Zeppelin Spark interpreter will give a solution to your
problem.

Regards.
Minudika

Minudika Malshan
Undergraduate
Department of Computer Science and Engineering
University of Moratuwa.
Mobile : +94715659887
LinkedIn : https://lk.linkedin.com/in/minudika
Zeppelin?

Regards
Sab

On 01-Mar-2016 12:27 am, "Mich Talebzadeh" <mich.talebza...@gmail.com>
wrote:

> Hi,
>
> Is there such thing as Spark for client, much like RDBMS clients that have
> a cut-down version of their big brother, useful for client connectivity but
> which cannot be used as a server?
Hi,

Is there such thing as Spark for client, much like RDBMS clients that have a
cut-down version of their big brother, useful for client connectivity but
which cannot be used as a server?

Thanks

Dr Mich Talebzadeh

LinkedIn *
https://www.linkedin.com/profile/view?id
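For what it's worth, the closest thing to such a thin client is a small
driver program pointed at a remote master. A minimal sketch; the master URL
spark://master-host:7077 is a placeholder, not something from this thread:

    import org.apache.spark.{SparkConf, SparkContext}

    // A lightweight "client-only" driver: the heavy lifting happens on the
    // remote standalone cluster. The host name is hypothetical.
    object ThinClient {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("thin-client")
          .setMaster("spark://master-host:7077")
        val sc = new SparkContext(conf)
        println(sc.parallelize(1 to 100).sum())  // trivial remote computation
        sc.stop()
      }
    }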
Hi

Do I need to install Spark on all the YARN cluster nodes if I want to submit
a job in yarn-client mode?

Is there any way to spawn Spark job executors on cluster nodes where I have
not installed Spark?

Thanks
Sanjeev
Hi,

No, you don't need to.

However, when submitting jobs, certain resources will be uploaded to
HDFS, which could be a performance issue.

Read the log and you will understand:

15/12/29 11:10:06 INFO Client: Uploading resource
file:/data/spark/spark152/lib/spark-assembly-1.5.2-hadoop2.6.0.jar -> hdfs
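One common way to avoid that per-job upload is to pre-stage the assembly on
HDFS once and point Spark at it. A sketch, assuming Spark 1.x and that the
jar has already been copied to the HDFS path shown; the path itself is an
assumption for illustration:

    import org.apache.spark.SparkConf

    // spark.yarn.jar is the Spark 1.x setting that tells the YARN client to
    // reuse a pre-staged assembly instead of uploading it on every submit.
    val conf = new SparkConf()
      .setAppName("yarn-client-app")
      .set("spark.yarn.jar",
           "hdfs:///spark/lib/spark-assembly-1.5.2-hadoop2.6.0.jar")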
On Thu, Jan 14, 2016 at 10:17 AM, Sanjeev Verma
wrote:
> now it spawns a single executor with 1060M size; I am not able to understand
> why this time it runs executors with 1G + overhead, not the 2G that I
> specified.

Where are you looking for the memory size for the
Please reply to the list.

The web UI does not show the total size of the executor's heap. It
shows the amount of memory available for caching data, which is, give
or take, 60% of the heap by default.
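To make the "roughly 60%" concrete, here is the arithmetic as a sketch,
assuming the Spark 1.x legacy memory manager with its default
spark.storage.memoryFraction (0.6) and spark.storage.safetyFraction (0.9):

    // Sketch of the number the 1.x web UI reports per executor, under the
    // assumption of the legacy memory manager with default fractions.
    val executorHeapMb = 2048.0      // heap handed to the executor JVM
    val memoryFraction = 0.6         // spark.storage.memoryFraction default
    val safetyFraction = 0.9         // spark.storage.safetyFraction default
    val reportedMb = executorHeapMb * memoryFraction * safetyFraction
    println(f"UI shows roughly $reportedMb%.0f MB")  // ~1106 MB for a 2g heap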
On Thu, Jan 14, 2016 at 11:03 AM, Sanjeev Verma
wrote:
> I am
I am seeing a strange behaviour while running Spark in yarn-client mode. I
am observing this on a single-node YARN cluster. In spark-defaults I have
configured the executor memory as 2g and started the Spark shell as follows:

bin/spark-shell --master yarn-client

which triggers the 2 executors
Could you attach the YARN AM log?

On Fri, Nov 27, 2015 at 8:10 AM, Jagat Singh <jagatsi...@gmail.com> wrote:
> Hi,
>
> What is the correct way to fully stop a Spark job which is running as
> yarn-client using spark-submit.
>
> We are using sc.stop in the code and can s
Hi,

What is the correct way to fully stop a Spark job which is running as
yarn-client using spark-submit.

We are using sc.stop() in the code and can see the job still running (in the
YARN resource manager) after the final Hive insert is complete.

The code flow is:

start context
do some work
insert
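A minimal sketch of that flow with the stop made unconditional; the work and
insert steps are placeholders, and the final sys.exit is only needed on the
assumption that stray non-daemon threads keep the JVM alive:

    import org.apache.spark.{SparkConf, SparkContext}

    object StopCleanly {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("stop-cleanly"))
        try {
          val data = sc.parallelize(1 to 10)  // do some work (placeholder)
          data.count()                        // stand-in for the final Hive insert
        } finally {
          sc.stop()  // unregisters the YARN application even if the work fails
        }
        sys.exit(0)  // force exit if non-daemon threads linger (assumption)
      }
    }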
Dear All,

I would like to know if it is possible to configure SparkConf() in order
to interact with a remote kerberized cluster in yarn-client mode.

Spark will not be installed on the cluster itself, and the localhost
can't ask for a ticket, but a keytab has been generated for this purpose and
provided to the localhost.

My purpose is to code in Eclipse on my localhost and submit my code in
yarn-client mode.
Hi,

We have a Scala application and we want it to programmatically submit Spark
jobs to a Spark-on-YARN cluster in yarn-client mode.

We're running into a lot of classpath issues, e.g. once submitted, the job
looks for jars in our parent Scala application's local directory, jars that
it shouldn't need.
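One way to sidestep the parent's classpath entirely is to launch the job as a
child process via SparkLauncher (available since Spark 1.4). A sketch, with
hypothetical paths and class name:

    import org.apache.spark.launcher.SparkLauncher

    // The child process gets a clean classpath built by spark-submit,
    // independent of the parent Scala application's. All paths/names below
    // are placeholders.
    val proc = new SparkLauncher()
      .setSparkHome("/opt/spark")
      .setAppResource("/path/to/our-spark-job.jar")
      .setMainClass("com.example.OurSparkJob")
      .setMaster("yarn-client")
      .launch()
    proc.waitFor()  // block until the job's driver process exits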
Have you checked the corresponding executor logs as well? I think the
information provided here is too little to actually understand your issue.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-yarn-client-mode-Hangs-in-final-stages-of-Collect-or-Reduce
At the moment your best bet for sharing SparkContexts across jobs will be
Ooyala job server: https://github.com/ooyala/spark-jobserver
It doesn't yet support Spark 1.0, though I did manage to amend it to get it
to build and run on 1.0.
—
Sent from Mailbox
On Wed, Jul 23, 2014 at 1:21 AM, Asaf
Hi Folks,

I have been trying to dig up some information regarding the possibilities
when wanting to deploy more than one client process that consumes Spark.

Let's say I have a Spark cluster of 10 servers, and I would like to set up 2
additional servers which send requests to it