og/hadoop-yarn/containers
>>>
>>> Is there a way to clean up these directories while the Spark streaming
>>> application is running?
>>>
>>> Thanks
>>>
>>
--
Take Care
Fawze Abujaber
17, 2018 at 2:01 AM Manu Zhang wrote:
> Hi Fawze,
>
> Sorry but I'm not familiar with CM. Maybe you can look into the logs (or
> turn on DEBUG log).
>
> On Thu, Aug 16, 2018 at 3:05 PM Fawze Abujaber wrote:
>
>> Hi Manu,
>>
>> I'm using Cloudera Manager
ble to log onto the node where the UI has been launched, then try
> `ps aux | grep HistoryServer`; the first column of the output should be the
> user.
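As a quick sketch of the suggestion above (assuming you can log onto the node running the History Server), the owning user can be pulled straight out of the `ps` output:

```shell
# First column of ps output is the user owning the process; the [H]
# bracket trick keeps the grep process itself out of the results.
ps aux | grep '[H]istoryServer' | awk '{print $1}' | sort -u
```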
>
> On Wed, Aug 15, 2018 at 10:26 PM Fawze Abujaber wrote:
>
>> Thanks Manu. Do you know how I can see which user the UI is running
like Spark will do.
>
>
> On Wed, Aug 15, 2018 at 6:38 PM Fawze Abujaber wrote:
>
>> Hi Manu,
>>
>> Thanks for your response.
>>
>> Yes, I see, but it's still interesting to know how I can see these
>> applications from the Spark history UI.
Hi Manu,
Thanks for your response.
Yes, I see, but it's still interesting to know how I can see these
applications from the Spark history UI.
How can I know which user I'm logged in as when navigating the Spark
history UI?
The Spark process is running as cloudera-scm while the event logs are
written by different users, but I was unable to find it.
Has anyone run into this issue and solved it?
Thanks in advance.
--
Take Care
Fawze Abujaber
ts/includes
> directory where your *.so library resides
>
> On Thursday, May 3, 2018, 5:06:35 AM PDT, Fawze Abujaber <
> fawz...@gmail.com> wrote:
>
>
> Hi Guys,
>
> I'm running into an issue where my Spark jobs are failing with the error below;
> I'm using Spa
cloudera-scm 62268 Oct  4  2017 hadoop-lzo-0.4.15-cdh5.13.0.jar
lrwxrwxrwx 1 cloudera-scm cloudera-scm 31 May  3 07:23 hadoop-lzo.jar -> hadoop-lzo-0.4.15-cdh5.13.0.jar
drwxr-xr-x 2 cloudera-scm cloudera-scm 4096 Oct  4  2017 native
--
Take Care
Fawze Abujaber
>>> [Sparklens per-stage simulation output: stage wall-clock times, core-hour
>>> counts and utilization estimates; columns garbled in the archive]
Shmuel Blitz <shmuel.bl...@similarweb.com
> > wrote:
>
>> Hi Rohit,
>>
>> Thanks for the analysis.
>>
>> I can use repartition on the slow task. But how can I tell what part of
>> the code is in charge of the slow tasks?
>>
>> It woul
>
> On Mon, Mar 26, 2018 at 10:48 AM, Fawze Abujaber <fawz...@gmail.com>
> wrote:
> > I distributed this config to all the nodes across the cluster with no
> > success; new Spark logs are still uncompressed.
> >
> > On Mon, Mar 26, 2018 at 8:12 PM, Marce
ication from a different node; if the setting is
> there, Spark should be using it.
>
> You can also look in the UI's environment page to see the
> configuration that the app is using.
>
> On Mon, Mar 26, 2018 at 10:10 AM, Fawze Abujaber <fawz...@gmail.com>
> wrote:
van...@cloudera.com> wrote:
> If the spark-defaults.conf file in the machine where you're starting
> the Spark app has that config, then that's all that should be needed.
>
> On Mon, Mar 26, 2018 at 10:02 AM, Fawze Abujaber <fawz...@gmail.com>
> wrote:
> > Thanks Marcelo,
>
e event logs in compressed format.
>
> The SHS doesn't compress existing logs.
>
> On Mon, Mar 26, 2018 at 9:17 AM, Fawze Abujaber <fawz...@gmail.com> wrote:
> > Hi All,
> >
> > I'm trying to compress the logs at the Spark history server; I added
> > spark.eventL
Hi All,
I'm trying to compress the logs at the Spark history server. I added
spark.eventLog.compress=true via the Spark Client Advanced Configuration
Snippet (Safety Valve) for spark-conf/spark-defaults.conf, which I see
applied only to the Spark gateway servers' spark conf.
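For reference, a minimal spark-defaults.conf fragment for this (the event-log directory shown is illustrative, not taken from this thread); the setting only affects applications started after it is in place:

```
spark.eventLog.enabled   true
spark.eventLog.compress  true
spark.eventLog.dir       hdfs:///user/spark/applicationHistory
```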
> I switched to the spark_1.6 branch, and also compiled against the specific
> image of Spark we are using (cdh5.7.6).
>
> Now I need to figure out what the output means... :P
>
> Shmuel
>
> On Fri, Mar 23, 2018 at 7:24 PM, Fawze Abujaber <fawz...@gmail.com> wrote:
>
for the jar should be an HDFS path as I'm using it in
cluster mode.
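A minimal sketch of that point (jar path and class name are placeholders, and the command is echoed rather than run so it works without a cluster): in cluster mode the driver runs on a cluster node, so the application jar must live somewhere every node can read it, such as HDFS.

```shell
JAR="hdfs:///user/spark/jars/myapp.jar"   # hypothetical HDFS path
MAIN="com.example.MyApp"                  # hypothetical main class
# Echo the submit command; drop the echo to actually submit.
echo spark-submit --master yarn --deploy-mode cluster --class "$MAIN" "$JAR"
```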
On Fri, Mar 23, 2018 at 6:33 AM, Fawze Abujaber <fawz...@gmail.com> wrote:
> Hi Shmuel,
>
> Did you compile the code against the right branch for Spark 1.6?
>
> I tested it and it looks to be working, and now I'm testing
rote:
>>
>>> Thanks everyone!
>>> Please share how it works and how it doesn't. Both help.
>>>
>>> Fawze, just made a few changes to make this work with Spark 1.6. Can you
>>> please try building from branch *spark_1.6*?
>>>
>>> th
It's super amazing. I see it was tested on Spark 2.0.0 and above; what
about Spark 1.6, which is still part of Cloudera's main versions?
We have vast Spark applications running version 1.6.0.
On Thu, Mar 22, 2018 at 6:38 AM, Holden Karau wrote:
> Super exciting! I look
It's recommended to use executor-cores of 5.
Each executor here will utilize 20 GB, which means the Spark job will
utilize 50 CPU cores and 100 GB of memory.
You cannot run more than 4 executors because your cluster doesn't have
enough memory.
You see 5 executors because 4 are for the job and one is for the
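The sizing arithmetic can be sketched as a minimum over the two resource caps (the 5-core and 20 GB per-executor figures are from the message; the cluster totals are assumed for illustration):

```shell
cluster_cores=50; cluster_mem_gb=100      # assumed cluster totals
by_cores=$(( cluster_cores / 5 ))         # cap from CPU: 10
by_mem=$(( cluster_mem_gb / 20 ))         # cap from memory: 5
# Executor count is bounded by whichever resource runs out first.
echo $(( by_cores < by_mem ? by_cores : by_mem ))
```

With these totals the job is memory-bound at 5 executors.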
Hi all,
I upgraded my Hadoop cluster, which includes Spark 1.6.0, and I noticed that
jobs sometimes run with Scala version 2.10.5 and sometimes with
2.10.4; any idea why this is happening?
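One plausible way to chase this down (the paths below are stand-ins): each job reports whichever scala-library jar is first on its node's classpath, so comparing the jar file names across nodes would explain the drift. A tiny helper to pull the version out of a jar name:

```shell
# Extract the Scala version embedded in a scala-library jar file name.
scala_version() {
  basename "$1" | sed -n 's/^scala-library-\(.*\)\.jar$/\1/p'
}
scala_version /opt/lib/scala-library-2.10.5.jar   # prints 2.10.5
scala_version /opt/lib/scala-library-2.10.4.jar   # prints 2.10.4
```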
Hi Soheil,
The ResourceManager and NodeManager roles are enough; of course, you also
need the DataNode and NameNode roles to be able to access the data.
On Thu, 18 Jan 2018 at 10:12 Soheil Pourbafrani
wrote:
> I am setting up a YARN cluster to run Spark applications on, but I'm
>