Re: yarn-cluster mode error

2016-05-17 Thread Sandeep Nemuri
Can you post the complete stack trace ?

On Tue, May 17, 2016 at 7:00 PM,  wrote:

> Hi,
>
> I am getting the error below while running an application in yarn-cluster mode.
>
> *ERROR yarn.ApplicationMaster: RECEIVED SIGNAL 15: SIGTERM*
>
> Can anyone suggest why I am getting this error message?
>
> Thanks
> Raj
>
>
>
>
> Sent from Yahoo Mail. Get the app 
>



--
Regards
Sandeep Nemuri


Re: yarn-cluster

2016-05-04 Thread nsalian
Hi,

This is a good place to start for Spark on YARN:
https://spark.apache.org/docs/1.5.0/running-on-yarn.html

You can switch to the page specific to the version you are on.



-
Neelesh S. Salian
Cloudera




Re: yarn-cluster

2016-05-03 Thread nsalian
Hello,

Thank you for the question.
The UNDEFINED status means the application has not yet completed and has not yet
been assigned resources.
Once it receives its resource assignment it will progress to RUNNING, and then to
SUCCEEDED upon completion.

It isn't a problem you should worry about.
You should make sure your YARN settings are tuned so that the application can get
the number of containers it needs.
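
As a rough sketch of what that tuning usually looks like (the class, jar, and
resource numbers below are placeholders, not recommendations), the container count
and sizes are driven from the submit command, and YARN's maximum-allocation
settings must be large enough to grant them:

spark-submit --master yarn-cluster \
  --num-executors 4 \
  --executor-cores 2 \
  --executor-memory 2g \
  --driver-memory 1g \
  --class com.example.YourApp \
  your-app.jar

If YARN cannot satisfy those requests, the application simply waits for
containers, which is the situation described above.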





-
Neelesh S. Salian
Cloudera




Re: (YARN CLUSTER MODE) Where to find logs within Spark RDD processing function ?

2016-04-29 Thread nguyen duc tuan
What does the web UI show? What do you see when you click the "stderr" and
"stdout" links? These links should contain the stdout and stderr output for each
executor.
About your custom logging in the executors: are you sure you checked
"${spark.yarn.app.container.log.dir}/spark-app.log"?
The actual location of this file on each executor is
${yarn.nodemanager.remote-app-log-dir}/{applicationId}/${spark.yarn.app.container.log.dir}/spark-app.log
(the yarn.nodemanager.remote-app-log-dir setting can be found in yarn-site.xml in
the Hadoop config folder).
For example, in the case above, when I click the "stdout" link for hslave-13 I get
the link
http://hslave-13:8042/node/containerlogs/container_1459219311185_2456_01_04/tuannd/stdout?start=-4096
which means the file on hslave-13 is located at
${yarn.nodemanager.remote-app-log-dir}/appId/container_1459219311185_2456_01_04/spark-app.log

I also see that you forgot to ship the "log4j.properties" file to the executors in
your spark-submit command. Each executor will try to find log4j.properties in its
working directory; if the file is not found there, your logging settings will be
ignored.
You have to add --files /path/to/your/log4j.properties to the command in order to
send this file to the executors.
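
For example, a submit command along these lines (paths, class, and jar names are
placeholders) ships the file so that each executor finds it in its working
directory:

spark-submit --master yarn-cluster \
  --files /path/to/your/log4j.properties \
  --conf "spark.driver.extraJavaOptions=-Dlog4j.configuration=log4j.properties" \
  --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=log4j.properties" \
  --class your.main.Class \
  your-app.jar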

Finally, to debug what is happening in the executors, you can write directly to
stdout or stderr. That is much easier to check than logging into each executor and
hunting for your log file :)

2016-04-29 21:30 GMT+07:00 dev loper :

> Hi Ted & Nguyen,
>
> @Ted, I was under the belief that the log4j.properties file would be taken from
> the application classpath if a file path is not specified. Please correct me if
> I am wrong. I tried your approach as well, but I still couldn't find the logs.
>
> @nguyen I am running it on a YARN cluster, so the Spark UI redirects me to the
> YARN UI. I couldn't see the logs there either. I checked the logs on both the
> master and the worker (I am running a cluster with one master and one worker).
> I even tried yarn logs, and nothing turns up there either. Does yarn logs
> include executor logs as well?
>
>
> Requesting your help to identify the issue.
>
> On Fri, Apr 29, 2016 at 7:32 PM, Ted Yu  wrote:
>
>> Please use the following syntax:
>>
>> --conf
>>  
>> "spark.executor.extraJavaOptions=-Dlog4j.configuration=file:///local/file/log4j.properties"
>>
>> FYI
>>
>> On Fri, Apr 29, 2016 at 6:03 AM, dev loper  wrote:
>>
>>> Hi Spark Team,
>>>
>>> I have asked the same question on Stack Overflow, no luck yet.
>>>
>>>
>>> http://stackoverflow.com/questions/36923949/where-to-find-logs-within-spark-rdd-processing-function-yarn-cluster-mode?noredirect=1#comment61419406_36923949
>>>
>>> I am running my Spark application on a YARN cluster. No matter what I do,
>>> I am not able to get the logs within the RDD function printed. Below you
>>> can find the sample snippet I have written for the RDD processing
>>> function; I have simplified the code to illustrate the syntax I used to
>>> write it. When I run it locally I am able to see the logs, but not in
>>> cluster mode. Neither System.err.println nor the logger seems to be
>>> working, yet I can see all my driver logs. I even tried to log using the
>>> root logger, but it did not work at all within the RDD processing
>>> function. I was desperate to see the log messages, so finally I found a
>>> guide suggesting the logger be made transient (
>>> https://www.mapr.com/blog/how-log-apache-spark), but even that didn't
>>> help.
>>>
>>> class SampleFlatMapFunction implements
>>> PairFlatMapFunction<Tuple2<String,String>,String,String> {
>>>
>>> private static final long serialVersionUID = 6565656322667L;
>>> transient Logger  executorLogger = 
>>> LogManager.getLogger("sparkExecutor");
>>>
>>>
>>> private void readObject(java.io.ObjectInputStream in)
>>> throws IOException, ClassNotFoundException {
>>> in.defaultReadObject();
>>> executorLogger = LogManager.getLogger("sparkExecutor");
>>> }
>>> @Override
>>> public Iterable<Tuple2<String,String>> call(Tuple2<String,String> tuple)
>>> throws Exception {
>>>
>>> executorLogger.info(" log testing from  executorLogger ::");
>>> System.err.println(" log testing from  executorLogger system error 
>>> stream ");
>>>
>>>
>>> List<Tuple2<String,String>> updates = new ArrayList<>();
>>> //process Tuple , expand and add it to list.
>>> return updates;
>>>
>>>  }
>>>  };
>>>
>>> My Log4j Configuration is given below
>>>
>>> log4j.appender.console=org.apache.log4j.ConsoleAppender
>>> log4j.appender.console.target=System.err
>>> log4j.appender.console.layout=org.apache.log4j.PatternLayout
>>> log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} 
>>> %p %c{1}: %m%n
>>>
>>> log4j.appender.stdout=org.apache.log4j.ConsoleAppender
>>> 

Re: (YARN CLUSTER MODE) Where to find logs within Spark RDD processing function ?

2016-04-29 Thread dev loper
Hi Ted & Nguyen,

@Ted, I was under the belief that the log4j.properties file would be taken from
the application classpath if a file path is not specified. Please correct me if I
am wrong. I tried your approach as well, but I still couldn't find the logs.

@nguyen I am running it on a YARN cluster, so the Spark UI redirects me to the
YARN UI. I couldn't see the logs there either. I checked the logs on both the
master and the worker (I am running a cluster with one master and one worker).
I even tried yarn logs, and nothing turns up there either. Does yarn logs
include executor logs as well?


Requesting your help to identify the issue.

On Fri, Apr 29, 2016 at 7:32 PM, Ted Yu  wrote:

> Please use the following syntax:
>
> --conf
>  
> "spark.executor.extraJavaOptions=-Dlog4j.configuration=file:///local/file/log4j.properties"
>
> FYI
>
> On Fri, Apr 29, 2016 at 6:03 AM, dev loper  wrote:
>
>> Hi Spark Team,
>>
>> I have asked the same question on Stack Overflow, no luck yet.
>>
>>
>> http://stackoverflow.com/questions/36923949/where-to-find-logs-within-spark-rdd-processing-function-yarn-cluster-mode?noredirect=1#comment61419406_36923949
>>
>> I am running my Spark application on a YARN cluster. No matter what I do, I
>> am not able to get the logs within the RDD function printed. Below you can
>> find the sample snippet I have written for the RDD processing function; I
>> have simplified the code to illustrate the syntax I used to write it. When I
>> run it locally I am able to see the logs, but not in cluster mode. Neither
>> System.err.println nor the logger seems to be working, yet I can see all my
>> driver logs. I even tried to log using the root logger, but it did not work
>> at all within the RDD processing function. I was desperate to see the log
>> messages, so finally I found a guide suggesting the logger be made transient (
>> https://www.mapr.com/blog/how-log-apache-spark), but even that didn't help.
>>
>> class SampleFlatMapFunction implements
>> PairFlatMapFunction<Tuple2<String,String>,String,String> {
>>
>> private static final long serialVersionUID = 6565656322667L;
>> transient Logger  executorLogger = LogManager.getLogger("sparkExecutor");
>>
>>
>> private void readObject(java.io.ObjectInputStream in)
>> throws IOException, ClassNotFoundException {
>> in.defaultReadObject();
>> executorLogger = LogManager.getLogger("sparkExecutor");
>> }
>> @Override
>> public Iterable<Tuple2<String,String>> call(Tuple2<String,String> tuple)
>> throws Exception {
>>
>> executorLogger.info(" log testing from  executorLogger ::");
>> System.err.println(" log testing from  executorLogger system error 
>> stream ");
>>
>>
>> List<Tuple2<String,String>> updates = new ArrayList<>();
>> //process Tuple , expand and add it to list.
>> return updates;
>>
>>  }
>>  };
>>
>> My Log4j Configuration is given below
>>
>> log4j.appender.console=org.apache.log4j.ConsoleAppender
>> log4j.appender.console.target=System.err
>> log4j.appender.console.layout=org.apache.log4j.PatternLayout
>> log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p 
>> %c{1}: %m%n
>>
>> log4j.appender.stdout=org.apache.log4j.ConsoleAppender
>> log4j.appender.stdout.target=System.out
>> log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
>> log4j.appender.stdout.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p 
>> %c{1}: %m%n
>>
>> log4j.appender.RollingAppender=org.apache.log4j.DailyRollingFileAppender
>> log4j.appender.RollingAppender.File=/var/log/spark/spark.log
>> log4j.appender.RollingAppender.DatePattern='.'yyyy-MM-dd
>> log4j.appender.RollingAppender.layout=org.apache.log4j.PatternLayout
>> log4j.appender.RollingAppender.layout.ConversionPattern=[%p] %d %c %M - 
>> %m%n
>>
>> log4j.appender.RollingAppenderU=org.apache.log4j.DailyRollingFileAppender
>> 
>> log4j.appender.RollingAppenderU.File=${spark.yarn.app.container.log.dir}/spark-app.log
>> log4j.appender.RollingAppenderU.DatePattern='.'yyyy-MM-dd
>> log4j.appender.RollingAppenderU.layout=org.apache.log4j.PatternLayout
>> log4j.appender.RollingAppenderU.layout.ConversionPattern=[%p] %d %c %M - 
>> %m%n
>>
>>
>> # By default, everything goes to console and file
>> log4j.rootLogger=INFO, RollingAppender, console
>>
>> # My custom logging goes to another file
>> log4j.logger.sparkExecutor=INFO, stdout, RollingAppenderU
>>
>>
>> I have tried yarn logs and the Spark UI logs; nowhere could I see the log
>> statements from the RDD processing functions. I tried the approaches below,
>> but they didn't work:
>>
>> yarn logs -applicationId
>>
>> I also checked the HDFS path below:
>>
>> /tmp/logs/
>>
>>
>> I am running my spark-submit command with the arguments below; even then it
>> is not working:
>>
>>   --master 

Re: (YARN CLUSTER MODE) Where to find logs within Spark RDD processing function ?

2016-04-29 Thread nguyen duc tuan
Those are executor logs, not driver logs. To see those log files, you have to go
to the executor machines where the tasks are running. To see what you print to
stdout or stderr, you can either go to the executor machines directly (the output
is stored in "stdout" and "stderr" files somewhere on each executor machine) or
view it through the web UI.
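
If log aggregation is enabled on your cluster, another way to reach the same
executor stdout/stderr is to pull the aggregated logs once the application has
finished; a sketch (substitute your real application id):

# requires yarn.log-aggregation-enable=true and a finished application
yarn logs -applicationId application_1459219311185_2456 > app.log
grep "log testing from" app.log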

2016-04-29 20:03 GMT+07:00 dev loper :

> Hi Spark Team,
>
> I have asked the same question on Stack Overflow, no luck yet.
>
>
> http://stackoverflow.com/questions/36923949/where-to-find-logs-within-spark-rdd-processing-function-yarn-cluster-mode?noredirect=1#comment61419406_36923949
>
> I am running my Spark application on a YARN cluster. No matter what I do, I
> am not able to get the logs within the RDD function printed. Below you can
> find the sample snippet I have written for the RDD processing function; I
> have simplified the code to illustrate the syntax I used to write it. When I
> run it locally I am able to see the logs, but not in cluster mode. Neither
> System.err.println nor the logger seems to be working, yet I can see all my
> driver logs. I even tried to log using the root logger, but it did not work
> at all within the RDD processing function. I was desperate to see the log
> messages, so finally I found a guide suggesting the logger be made transient (
> https://www.mapr.com/blog/how-log-apache-spark), but even that didn't help.
>
> class SampleFlatMapFunction implements
> PairFlatMapFunction<Tuple2<String,String>,String,String> {
>
> private static final long serialVersionUID = 6565656322667L;
> transient Logger  executorLogger = LogManager.getLogger("sparkExecutor");
>
>
> private void readObject(java.io.ObjectInputStream in)
> throws IOException, ClassNotFoundException {
> in.defaultReadObject();
> executorLogger = LogManager.getLogger("sparkExecutor");
> }
> @Override
> public Iterable<Tuple2<String,String>> call(Tuple2<String,String> tuple)
> throws Exception {
>
> executorLogger.info(" log testing from  executorLogger ::");
> System.err.println(" log testing from  executorLogger system error 
> stream ");
>
>
> List<Tuple2<String,String>> updates = new ArrayList<>();
> //process Tuple , expand and add it to list.
> return updates;
>
>  }
>  };
>
> My Log4j Configuration is given below
>
> log4j.appender.console=org.apache.log4j.ConsoleAppender
> log4j.appender.console.target=System.err
> log4j.appender.console.layout=org.apache.log4j.PatternLayout
> log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p 
> %c{1}: %m%n
>
> log4j.appender.stdout=org.apache.log4j.ConsoleAppender
> log4j.appender.stdout.target=System.out
> log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
> log4j.appender.stdout.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p 
> %c{1}: %m%n
>
> log4j.appender.RollingAppender=org.apache.log4j.DailyRollingFileAppender
> log4j.appender.RollingAppender.File=/var/log/spark/spark.log
> log4j.appender.RollingAppender.DatePattern='.'yyyy-MM-dd
> log4j.appender.RollingAppender.layout=org.apache.log4j.PatternLayout
> log4j.appender.RollingAppender.layout.ConversionPattern=[%p] %d %c %M - 
> %m%n
>
> log4j.appender.RollingAppenderU=org.apache.log4j.DailyRollingFileAppender
> 
> log4j.appender.RollingAppenderU.File=${spark.yarn.app.container.log.dir}/spark-app.log
> log4j.appender.RollingAppenderU.DatePattern='.'yyyy-MM-dd
> log4j.appender.RollingAppenderU.layout=org.apache.log4j.PatternLayout
> log4j.appender.RollingAppenderU.layout.ConversionPattern=[%p] %d %c %M - 
> %m%n
>
>
> # By default, everything goes to console and file
> log4j.rootLogger=INFO, RollingAppender, console
>
> # My custom logging goes to another file
> log4j.logger.sparkExecutor=INFO, stdout, RollingAppenderU
>
>
> I have tried yarn logs and the Spark UI logs; nowhere could I see the log
> statements from the RDD processing functions. I tried the approaches below,
> but they didn't work:
>
> yarn logs -applicationId
>
> I also checked the HDFS path below:
>
> /tmp/logs/
>
>
> I am running my spark-submit command with the arguments below; even then it
> is not working:
>
>   --master yarn --deploy-mode cluster   --conf 
> "spark.driver.extraJavaOptions=-Dlog4j.configuration=log4j.properties"  
> --conf 
> "spark.executor.extraJavaOptions=-Dlog4j.configuration=log4j.properties"
>
> Can somebody guide me on logging within Spark RDD and map functions? What am
> I missing in the above steps?
>
> Thanks
>
> Dev
>


Re: yarn-cluster mode throwing NullPointerException

2015-10-12 Thread Venkatakrishnan Sowrirajan
Hi Rachana,


Are you by any chance setting something like this in your code?

"sparkConf.setMaster("yarn-cluster");"

Setting the master to "yarn-cluster" programmatically on the SparkConf is not
supported; yarn-cluster deployment has to go through spark-submit.


I think you are hitting this bug:
https://issues.apache.org/jira/browse/SPARK-7504. It was fixed in
Spark 1.4.0, so you can try 1.4.0.
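
Until you can upgrade, the usual workaround (sketched below; class and jar names
are placeholders) is to drop the setMaster call from the code entirely and let
spark-submit supply the master:

spark-submit --master yarn-cluster \
  --class com.yourcompany.YourApp \
  your-app.jar

With nothing hard-coded, the same jar can also be run in yarn-client mode just by
changing --master.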

Regards
Venkata krishnan

On Sun, Oct 11, 2015 at 8:49 PM, Rachana Srivastava <
rachana.srivast...@markmonitor.com> wrote:

> I am trying to submit a job in yarn-cluster mode using the spark-submit
> command. My code works fine when I use yarn-client mode.
>
>
>
> *Cloudera Version:*
>
> CDH-5.4.7-1.cdh5.4.7.p0.3
>
>
>
> *Command Submitted:*
>
> spark-submit --class "com.markmonitor.antifraud.ce.KafkaURLStreaming"  \
>
> --driver-java-options
> "-Dlog4j.configuration=file:///etc/spark/myconf/log4j.sample.properties" \
>
> --conf
> "spark.driver.extraJavaOptions=-Dlog4j.configuration=file:///etc/spark/myconf/log4j.sample.properties"
> \
>
> --conf
> "spark.executor.extraJavaOptions=-Dlog4j.configuration=file:///etc/spark/myconf/log4j.sample.properties"
> \
>
> --num-executors 2 \
>
> --executor-cores 2 \
>
> ../target/mm-XXX-ce-0.0.1-SNAPSHOT-jar-with-dependencies.jar \
>
> yarn-cluster 10 "XXX:2181" "XXX:9092" groups kafkaurl 5 \
>
> "hdfs://ip-10-0-0-XXX.us-west-2.compute.internal:8020/user/ec2-user/urlFeature.properties"
> \
>
> "hdfs://ip-10-0-0-XXX.us-west-2.compute.internal:8020/user/ec2-user/urlFeatureContent.properties"
> \
>
> "hdfs://ip-10-0-0-XXX.us-west-2.compute.internal:8020/user/ec2-user/hdfsOutputNEWScript/OUTPUTYarn2"
> false
>
>
>
>
>
> *Log Details:*
>
> INFO : org.apache.spark.SparkContext - Running Spark version 1.3.0
>
> INFO : org.apache.spark.SecurityManager - Changing view acls to: ec2-user
>
> INFO : org.apache.spark.SecurityManager - Changing modify acls to: ec2-user
>
> INFO : org.apache.spark.SecurityManager - SecurityManager: authentication
> disabled; ui acls disabled; users with view permissions: Set(ec2-user);
> users with modify permissions: Set(ec2-user)
>
> INFO : akka.event.slf4j.Slf4jLogger - Slf4jLogger started
>
> INFO : Remoting - Starting remoting
>
> INFO : Remoting - Remoting started; listening on addresses
> :[akka.tcp://sparkdri...@ip-10-0-0-xxx.us-west-2.compute.internal:49579]
>
> INFO : Remoting - Remoting now listens on addresses:
> [akka.tcp://sparkdri...@ip-10-0-0-xxx.us-west-2.compute.internal:49579]
>
> INFO : org.apache.spark.util.Utils - Successfully started service
> 'sparkDriver' on port 49579.
>
> INFO : org.apache.spark.SparkEnv - Registering MapOutputTracker
>
> INFO : org.apache.spark.SparkEnv - Registering BlockManagerMaster
>
> INFO : org.apache.spark.storage.DiskBlockManager - Created local directory
> at
> /tmp/spark-1c805495-c7c4-471d-973f-b1ae0e2c8ff9/blockmgr-fff1946f-a716-40fc-a62d-bacba5b17638
>
> INFO : org.apache.spark.storage.MemoryStore - MemoryStore started with
> capacity 265.4 MB
>
> INFO : org.apache.spark.HttpFileServer - HTTP File server directory is
> /tmp/spark-8ed6f513-854f-4ee4-95ea-87185364eeaf/httpd-75cee1e7-af7a-4c82-a9ff-a124ce7ca7ae
>
> INFO : org.apache.spark.HttpServer - Starting HTTP Server
>
> INFO : org.spark-project.jetty.server.Server - jetty-8.y.z-SNAPSHOT
>
> INFO : org.spark-project.jetty.server.AbstractConnector - Started
> SocketConnector@0.0.0.0:46671
>
> INFO : org.apache.spark.util.Utils - Successfully started service 'HTTP
> file server' on port 46671.
>
> INFO : org.apache.spark.SparkEnv - Registering OutputCommitCoordinator
>
> INFO : org.spark-project.jetty.server.Server - jetty-8.y.z-SNAPSHOT
>
> INFO : org.spark-project.jetty.server.AbstractConnector - Started
> SelectChannelConnector@0.0.0.0:4040
>
> INFO : org.apache.spark.util.Utils - Successfully started service
> 'SparkUI' on port 4040.
>
> INFO : org.apache.spark.ui.SparkUI - Started SparkUI at
> http://ip-10-0-0-XXX.us-west-2.compute.internal:4040
>
> INFO : org.apache.spark.SparkContext - Added JAR
> file:/home/ec2-user/CE/correlationengine/scripts/../target/mm-anti-fraud-ce-0.0.1-SNAPSHOT-jar-with-dependencies.jar
> at
> http://10.0.0.XXX:46671/jars/mm-anti-fraud-ce-0.0.1-SNAPSHOT-jar-with-dependencies.jar
> with timestamp 1444620509463
>
> INFO : org.apache.spark.scheduler.cluster.YarnClusterScheduler - Created
> YarnClusterScheduler
>
> ERROR: org.apache.spark.scheduler.cluster.YarnClusterSchedulerBackend -
> Application ID is not set.
>
> INFO : org.apache.spark.network.netty.NettyBlockTransferService - Server
> created on 33880
>
> INFO : org.apache.spark.storage.BlockManagerMaster - Trying to register
> BlockManager
>
> INFO : org.apache.spark.storage.BlockManagerMasterActor - Registering
> block manager ip-10-0-0-XXX.us-west-2.compute.internal:33880 with 265.4 MB
> RAM, BlockManagerId(, ip-10-0-0-XXX.us-west-2.compute.internal,
> 33880)
>
> INFO : org.apache.spark.storage.BlockManagerMaster - Registered
> BlockManager
>
> INFO : org.apache.spark.scheduler.EventLoggingListener - Logging events to
> 

Re: yarn-cluster spark-submit process not dying

2015-05-28 Thread Corey Nolet
Thanks Sandy - I was digging through the code in deploy.yarn.Client and
literally found that property right before I saw your reply. I'm on 1.2.x
right now, which doesn't have the property. I guess I need to update sooner
rather than later.

On Thu, May 28, 2015 at 3:56 PM, Sandy Ryza sandy.r...@cloudera.com wrote:

 Hi Corey,

 As of this PR https://github.com/apache/spark/pull/5297/files, this can
 be controlled with spark.yarn.submit.waitAppCompletion.

 -Sandy

 On Thu, May 28, 2015 at 11:48 AM, Corey Nolet cjno...@gmail.com wrote:

 I am submitting jobs to my YARN cluster in yarn-cluster mode, and I'm
 noticing that the JVM that fires up to allocate the resources, etc. is not
 going away after the application master and executors have been allocated.
 Instead, it just sits there printing one-second status updates to the
 console. If I kill it, my job still runs (as expected).

 Is there an intended way to stop this from happening and just have the
 local JVM die when it's done allocating the resources and deploying the
 application master?





Re: yarn-cluster spark-submit process not dying

2015-05-28 Thread Sandy Ryza
Hi Corey,

As of this PR https://github.com/apache/spark/pull/5297/files, this can be
controlled with spark.yarn.submit.waitAppCompletion.
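
For example (a sketch; class and jar names are placeholders, and the flag exists
only in releases that include that PR):

spark-submit --master yarn-cluster \
  --conf spark.yarn.submit.waitAppCompletion=false \
  --class com.yourcompany.YourApp \
  your-app.jar

With the flag set to false, the launcher JVM exits as soon as the application is
submitted instead of polling YARN for status until completion.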

-Sandy

On Thu, May 28, 2015 at 11:48 AM, Corey Nolet cjno...@gmail.com wrote:

 I am submitting jobs to my YARN cluster in yarn-cluster mode, and I'm
 noticing that the JVM that fires up to allocate the resources, etc. is not
 going away after the application master and executors have been allocated.
 Instead, it just sits there printing one-second status updates to the
 console. If I kill it, my job still runs (as expected).

 Is there an intended way to stop this from happening and just have the
 local JVM die when it's done allocating the resources and deploying the
 application master?