If you launched the job in yarn-cluster mode, the tracking URL is
printed in the output of the launcher process. That will lead you to
the Spark UI once the job is running.

If you're using CM (Cloudera Manager), you can reach the same link by
clicking the "Resource Manager UI" link on your YARN service, then
finding the app in the list and clicking its "Application Master" link.
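For reference, a couple of ways to get at the tracking URL from the command line (the application ID below is the one reported later in this thread; hostnames will differ on your cluster):

```shell
# List running YARN applications; the last column is each app's tracking URL:
yarn application -list -appStates RUNNING

# The yarn-cluster client also prints it in its periodic reports, e.g.:
#   appTrackingUrl: http://dml2:8088/proxy/application_1411578463780_0001/
```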

On Wed, Sep 24, 2014 at 12:07 PM, Raghuveer Chanda
<raghuveer.cha...@gmail.com> wrote:
> Yeah, I got the logs, and they report a memory problem.
>
> 14/09/25 00:08:26 WARN YarnClusterScheduler: Initial job has not accepted
> any resources; check your cluster UI to ensure that workers are registered
> and have sufficient memory
>
> Now I've shifted to a bigger cluster with more memory, but here I'm not
> able to view the UI while the job is running. I need to check the job's
> status and its Spark UI.
>
> cse-hadoop-xx:18080 doesn't show the running job; it only has the jobs
> with master spark://cse-hadoop-xx:7077, not yarn-cluster.
>
> I'm not able to view the following links:
> http://cse-hadoop-xx:50070/
> http://cse-hadoop-xx:8088/
>
> Is it due to some security option that I'm not able to view the UI? How
> can I change it in Cloudera?
>
>
>
>
> On Thu, Sep 25, 2014 at 12:04 AM, Marcelo Vanzin <van...@cloudera.com>
> wrote:
>>
>> You need to use the command-line "yarn" application I mentioned
>> ("yarn logs"). You can't look at the logs through the UI after the app
>> stops.
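As a concrete sketch, using the application ID reported elsewhere in this thread (note the single-dash flag form used by the Hadoop 2.x CLI):

```shell
# Fetch the aggregated container logs for a finished application.
# Log aggregation (yarn.log-aggregation-enable) must be on for this to work.
yarn logs -applicationId application_1411578463780_0001
```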
>>
>> On Wed, Sep 24, 2014 at 11:16 AM, Raghuveer Chanda
>> <raghuveer.cha...@gmail.com> wrote:
>> >
>> > Thanks for the reply. This is the error in the logs obtained from the
>> > UI at
>> >
>> > http://dml3:8042/node/containerlogs/container_1411578463780_0001_02_000001/chanda
>> >
>> > So now, how do I set the Log Server URL?
>> >
>> > Failed while trying to construct the redirect url to the log server. Log
>> > Server url may not be configured
>> >
>> > Container does not exist.
>> >
>> >
>> >
>> >
>> > On Wed, Sep 24, 2014 at 11:37 PM, Marcelo Vanzin <van...@cloudera.com>
>> > wrote:
>> >>
>> >> You'll need to look at the driver output to have a better idea of
>> >> what's going on. You can use "yarn logs --applicationId blah" after
>> >> your app is finished (e.g. by killing it) to look at it.
>> >>
>> >> My guess is that your cluster doesn't have enough resources available
>> >> to service the container request you're making. That will show up in
>> >> the driver as periodic messages that no containers have been allocated
>> >> yet.
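A minimal sequence for this, assuming the application ID shown in the client output below:

```shell
# Kill the stuck application, then read its aggregated logs:
yarn application -kill application_1411578463780_0001
yarn logs -applicationId application_1411578463780_0001
```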
>> >>
>> >> On Wed, Sep 24, 2014 at 10:25 AM, Raghuveer Chanda
>> >> <raghuveer.cha...@gmail.com> wrote:
>> >> > Hi,
>> >> >
>> >> > I'm new to Spark and facing a problem running a job on a cluster
>> >> > using YARN.
>> >> >
>> >> > Initially I ran jobs with the Spark master set as --master
>> >> > spark://dml2:7077, and they run fine on 3 workers.
>> >> >
>> >> > But now I'm shifting to YARN, so I installed YARN via Cloudera on a
>> >> > 3-node cluster and changed the master to yarn-cluster, but it is not
>> >> > working. I attached screenshots of the UI, which is not progressing
>> >> > and just hangs.
>> >> >
>> >> > Output on terminal :
>> >> >
>> >> > This error is repeating
>> >> >
>> >> > ./spark-submit --class "class-name" --master yarn-cluster
>> >> > --num-executors 3
>> >> > --executor-cores 3  jar-with-dependencies.jar
>> >> >
>> >> >
>> >> > Do I need to configure YARN, or why is it not getting all the
>> >> > workers? Please help.
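One thing worth noting: the submit command quoted above requests cores but no memory, and the scheduler warning earlier in the thread is about insufficient resources. A hedged variant with explicit, modest requests (the sizes here are assumptions; tune them to what your NodeManagers actually offer via yarn.nodemanager.resource.memory-mb):

```shell
# Example only: the class name, jar name, and resource sizes are
# placeholders taken from the thread / assumed, not verified values.
./spark-submit --class "class-name" \
  --master yarn-cluster \
  --num-executors 3 \
  --executor-cores 1 \
  --executor-memory 1g \
  --driver-memory 1g \
  jar-with-dependencies.jar
```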
>> >> >
>> >> >
>> >> > 14/09/24 22:44:21 INFO yarn.Client: Application report from ASM:
>> >> > application identifier: application_1411578463780_0001
>> >> > appId: 1
>> >> > clientToAMToken: null
>> >> > appDiagnostics:
>> >> > appMasterHost: dml3
>> >> > appQueue: root.chanda
>> >> > appMasterRpcPort: 0
>> >> > appStartTime: 1411578513545
>> >> > yarnAppState: RUNNING
>> >> > distributedFinalState: UNDEFINED
>> >> > appTrackingUrl:
>> >> > http://dml2:8088/proxy/application_1411578463780_0001/
>> >> > appUser: chanda
>> >> > 14/09/24 22:44:22 INFO yarn.Client: Application report from ASM:
>> >> > application identifier: application_1411578463780_0001
>> >> > appId: 1
>> >> > clientToAMToken: null
>> >> > appDiagnostics:
>> >> > appMasterHost: dml3
>> >> > appQueue: root.chanda
>> >> > appMasterRpcPort: 0
>> >> > appStartTime: 1411578513545
>> >> > yarnAppState: RUNNING
>> >> > distributedFinalState: UNDEFINED
>> >> > appTrackingUrl:
>> >> > http://dml2:8088/proxy/application_1411578463780_0001/
>> >> >
>> >> >
>> >> >
>> >> >
>> >> > --
>> >> > Regards,
>> >> > Raghuveer Chanda
>> >> > 4th year Undergraduate Student
>> >> > Computer Science and Engineering
>> >> > IIT Kharagpur
>> >> >
>> >> >
>> >> > ---------------------------------------------------------------------
>> >> > To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>> >> > For additional commands, e-mail: user-h...@spark.apache.org
>> >>
>> >>
>> >>
>> >> --
>> >> Marcelo
>> >
>> >
>> >
>> >
>> > --
>> > Regards,
>> > Raghuveer Chanda
>> > 4th year Undergraduate Student
>> > Computer Science and Engineering
>> > IIT Kharagpur
>>
>>
>>
>> --
>> Marcelo
>
>
>
>
> --
> Regards,
> Raghuveer Chanda
> 4th year Undergraduate Student
> Computer Science and Engineering
> IIT Kharagpur



-- 
Marcelo

