I put the design requirements and description in the commit comment, so I
will close the PR. Please refer to the following commit:

https://github.com/AlpineNow/spark/commit/5b336bbfe92eabca7f4c20e5d49e51bb3721da4d



On Mon, May 25, 2015 at 3:21 PM, Chester Chen <ches...@alpinenow.com> wrote:

> All,
>      I have created a PR just for the purpose of helping document the use
> case, requirements, and design. As it is unlikely to be merged in, it is
> only used to illustrate the problems we are trying to solve and the
> approaches we took.
>
>    https://github.com/apache/spark/pull/6398
>
>
>     Hope this helps the discussion
>
> Chester
>
> On Fri, May 22, 2015 at 10:55 AM, Kevin Markey <kevin.mar...@oracle.com>
> wrote:
>
>>  Thanks.  We'll look at it.
>> I've sent another reply addressing some of your other comments.
>> Kevin
>>
>>
>> On 05/22/2015 10:27 AM, Marcelo Vanzin wrote:
>>
>>  Hi Kevin,
>>
>>  One thing that might help you in the meantime, while we work on a better
>> interface for all this...
>>
>> On Thu, May 21, 2015 at 5:21 PM, Kevin Markey <kevin.mar...@oracle.com>
>> wrote:
>>
>>> Making *yarn.Client* private has prevented us from moving from Spark
>>> 1.0.x to Spark 1.2 or 1.3 despite many alluring new features.
>>>
>>
>>  Since you're not afraid to use private APIs, and to avoid ugly
>> reflection hacks, you could abuse the fact that things declared private in
>> Scala are usually not actually private at the JVM bytecode level. For
>> example (trimmed to show just the members that might be interesting to
>> you):
>>
>> # javap -classpath
>> /opt/cloudera/parcels/CDH/jars/spark-assembly-1.3.0-cdh5.4.0-hadoop2.6.0-cdh5.4.0.jar
>> org.apache.spark.deploy.yarn.Client
>> Compiled from "Client.scala"
>> public class org.apache.spark.deploy.yarn.Client implements
>> org.apache.spark.Logging {
>>   ...
>>   public org.apache.hadoop.yarn.client.api.YarnClient
>> org$apache$spark$deploy$yarn$Client$$yarnClient();
>>   public void run();
>>   public
>> org.apache.spark.deploy.yarn.Client(org.apache.spark.deploy.yarn.ClientArguments,
>> org.apache.hadoop.conf.Configuration, org.apache.spark.SparkConf);
>>   public
>> org.apache.spark.deploy.yarn.Client(org.apache.spark.deploy.yarn.ClientArguments,
>> org.apache.spark.SparkConf);
>>   public
>> org.apache.spark.deploy.yarn.Client(org.apache.spark.deploy.yarn.ClientArguments);
>> }
>>
>>  So it should be easy to write a small Java wrapper around this. It is no
>> less hacky than relying on the "private-but-public" code from before.
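>>
>>  Something like the (untested) sketch below should work. It assumes the
>> Spark 1.3 ClientArguments(String[], SparkConf) constructor and the old
>> yarn.Client command-line flags; verify both with javap as above before
>> relying on them:
>>
>> // YarnClientLauncher.java -- illustrative sketch only, not tested.
>> import org.apache.hadoop.conf.Configuration;
>> import org.apache.hadoop.yarn.conf.YarnConfiguration;
>> import org.apache.spark.SparkConf;
>> import org.apache.spark.deploy.yarn.Client;
>> import org.apache.spark.deploy.yarn.ClientArguments;
>>
>> public class YarnClientLauncher {
>>   public static void main(String[] cliArgs) {
>>     SparkConf sparkConf = new SparkConf();
>>     // YarnConfiguration picks up yarn-site.xml etc. from HADOOP_CONF_DIR
>>     // on the classpath.
>>     Configuration hadoopConf = new YarnConfiguration();
>>
>>     // ClientArguments parses the flags the old yarn.Client main() took
>>     // (--jar, --class, --arg, ...); pass them straight through.
>>     ClientArguments args = new ClientArguments(cliArgs, sparkConf);
>>
>>     // The constructor and run() are public in the bytecode even though
>>     // they are marked private in the Scala source, so plain Java can
>>     // call them directly.
>>     new Client(args, hadoopConf, sparkConf).run();
>>   }
>> }
>>
>>  Compile and run it against the Spark assembly jar, e.g. (paths and the
>> app name here are made up):
>>
>> # java -cp $HADOOP_CONF_DIR:/path/to/spark-assembly.jar:. \
>>     YarnClientLauncher --jar my-app.jar --class com.example.MyApp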
>>
>> --
>> Marcelo
>>
>>
>>
>
