Re: [build system] experiencing network issues, git fetch timeouts likely

2018-04-02 Thread Reynold Xin
Thanks Shane for taking care of this!

On Mon, Apr 2, 2018 at 9:12 PM shane knapp  wrote:

> the problem was identified and fixed, and we should be good as of about an
> hour ago.
>
> sorry for any inconvenience!
>
> On Mon, Apr 2, 2018 at 4:15 PM, shane knapp  wrote:
>
>> hey all!
>>
>> we're having network issues on campus right now, and the jenkins workers
>> are experiencing up to 40% packet loss on our pings to github.
>>
>> this can cause builds to time out when attempting to git fetch.
>>
>> i'll post an update on the network status when i find out more about
>> what's going on.
>>
>> shane
>> --
>> Shane Knapp
>> UC Berkeley EECS Research / RISELab Staff Technical Lead
>> https://rise.cs.berkeley.edu
>>
>
>
>
> --
> Shane Knapp
> UC Berkeley EECS Research / RISELab Staff Technical Lead
> https://rise.cs.berkeley.edu
>


Re: [build system] experiencing network issues, git fetch timeouts likely

2018-04-02 Thread shane knapp
the problem was identified and fixed, and we should be good as of about an
hour ago.

sorry for any inconvenience!

On Mon, Apr 2, 2018 at 4:15 PM, shane knapp  wrote:

> hey all!
>
> we're having network issues on campus right now, and the jenkins workers
> are experiencing up to 40% packet loss on our pings to github.
>
> this can cause builds to time out when attempting to git fetch.
>
> i'll post an update on the network status when i find out more about
> what's going on.
>
> shane
> --
> Shane Knapp
> UC Berkeley EECS Research / RISELab Staff Technical Lead
> https://rise.cs.berkeley.edu
>



-- 
Shane Knapp
UC Berkeley EECS Research / RISELab Staff Technical Lead
https://rise.cs.berkeley.edu


Re: Welcome Zhenhua Wang as a Spark committer

2018-04-02 Thread wangzhenhua (G)

Thanks everyone! It’s my great pleasure to be part of such a professional and 
innovative community!


best regards,
-Zhenhua(Xander)




Re: Hadoop 3 support

2018-04-02 Thread Saisai Shao
Yes, the main blocking issue is that the Hive version used in Spark
(1.2.1.spark) doesn't support running on Hadoop 3. Hive checks the Hadoop
version at runtime [1]. Besides this, I think some pom changes should be
enough to support Hadoop 3.

If we want to use Hadoop 3 shaded client jar, then the pom requires lots of
changes, but this is not necessary.


[1]
https://github.com/apache/hive/blob/6751225a5cde4c40839df8b46e8d241fdda5cd34/shims/common/src/main/java/org/apache/hadoop/hive/shims/ShimLoader.java#L144
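For context, the runtime check referenced in [1] is essentially a switch on the Hadoop major version string: a Hive build that predates Hadoop 3 has no shim entry for major version 3, so it fails at startup even if everything else compiles. A toy Python sketch of that failure mode (the names `hadoop_major_version` and `load_shims` are illustrative, not Hive's actual API):

```python
def hadoop_major_version(version):
    """Extract the major version from a Hadoop version string, e.g. "2.7.3" -> 2."""
    major = version.split(".")[0]
    if not major.isdigit():
        raise ValueError("Illegal Hadoop version: " + version)
    return int(major)

def load_shims(hadoop_version, supported_majors=(1, 2)):
    """Mimic Hive's ShimLoader: pick a shim table keyed by major version,
    and fail at runtime when the major version is unknown (e.g. Hadoop 3)."""
    major = hadoop_major_version(hadoop_version)
    if major not in supported_majors:
        raise RuntimeError("Unrecognized Hadoop major version number: " + hadoop_version)
    return "Hadoop%dShims" % major

print(load_shims("2.7.3"))  # a Hive 1.2.1-era shim table knows majors 1 and 2
try:
    load_shims("3.0.0")     # ...but has no entry for Hadoop 3
except RuntimeError as e:
    print(e)
```

This is why the check bites at runtime rather than at build time: the shim table is consulted only once Hive actually runs against the cluster's Hadoop jars.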

2018-04-03 4:57 GMT+08:00 Marcelo Vanzin :

> Saisai filed SPARK-23534, but the main blocking issue is really
> SPARK-18673.
>
>
> On Mon, Apr 2, 2018 at 1:00 PM, Reynold Xin  wrote:
> > Does anybody know what needs to be done in order for Spark to support
> Hadoop
> > 3?
> >
>
>
>
> --
> Marcelo
>
> -
> To unsubscribe e-mail: dev-unsubscr...@spark.apache.org
>
>


[build system] experiencing network issues, git fetch timeouts likely

2018-04-02 Thread shane knapp
hey all!

we're having network issues on campus right now, and the jenkins workers
are experiencing up to 40% packet loss on our pings to github.

this can cause builds to time out when attempting to git fetch.

i'll post an update on the network status when i find out more about what's
going on.
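Until the network recovers, one generic mitigation for this kind of flakiness is to wrap the fetch in a retry loop with exponential backoff, so a build only fails after several attempts rather than on the first timeout. A hedged sketch (the helper names, attempt counts, and timeout are illustrative assumptions, not what the Jenkins jobs actually run):

```python
import subprocess
import time

def retry(fn, attempts=3, base_delay=2.0, sleep=time.sleep):
    """Call fn(), retrying with exponential backoff (2s, 4s, ...);
    re-raise the error once the last attempt fails."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == attempts:
                raise
            sleep(base_delay * 2 ** (attempt - 1))

def git_fetch(repo_dir, remote="origin"):
    """A single fetch attempt, bounded so packet loss can't hang the build."""
    subprocess.run(["git", "fetch", remote], cwd=repo_dir,
                   check=True, timeout=300)

# Usage: retry(lambda: git_fetch("/path/to/workspace"))
```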

shane
-- 
Shane Knapp
UC Berkeley EECS Research / RISELab Staff Technical Lead
https://rise.cs.berkeley.edu


Re: Re: Welcome Zhenhua Wang as a Spark committer

2018-04-02 Thread Bryan Cutler
Congratulations Zhenhua!

On Mon, Apr 2, 2018 at 12:01 PM, ron8hu  wrote:

> Congratulations, Zhenhua!  Well deserved!!
>
> Ron
>
>
>
> --
> Sent from: http://apache-spark-developers-list.1001551.n3.nabble.com/
>
> -
> To unsubscribe e-mail: dev-unsubscr...@spark.apache.org
>
>


Re: Hadoop 3 support

2018-04-02 Thread Marcelo Vanzin
I haven't looked at it in detail...

Somebody's been trying to do that in
https://github.com/apache/spark/pull/20659, but that's kind of a huge
change.

The parts where I'd be concerned are:
- using Hive's original hive-exec package brings in a bunch of shaded
dependencies, which may break Spark in weird ways. HIVE-16391 was
supposed to fix that but nothing has really been done as part of that
bug.
- the hive-exec "core" package avoids the shaded dependencies but used
to have issues of its own. Maybe it's better now, haven't looked.
- what about the current thrift server which is basically a fork of
the Hive 1.2 source code?
- when using Hadoop 3 + an old metastore client that doesn't know
about Hadoop 3, things may break.

That last issue has two possible fixes: say that Hadoop 3 builds of
Spark don't support old metastores; or add code so that Spark loads a
separate copy of the Hadoop libraries in that case (search for
"sharesHadoopClasses" in IsolatedClientLoader for where to start with
that).
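The "sharesHadoopClasses" approach amounts to the isolated classloader deciding, per class name, whether to delegate to the application's loader or load a separate copy. A toy Python sketch of that routing decision (the prefixes and the `shares_hadoop_classes` flag are simplified stand-ins for IsolatedClientLoader's real logic, not Spark's actual code):

```python
def is_shared_class(name, shares_hadoop_classes=True):
    """Decide whether a class should come from the shared (application)
    classloader or from the isolated Hive/metastore classpath."""
    hadoop = name.startswith("org.apache.hadoop.")
    hive = name.startswith("org.apache.hadoop.hive.")
    if hadoop and not hive:
        # Share Hadoop classes only when the metastore client is known
        # to work against the application's Hadoop version.
        return shares_hadoop_classes
    shared_prefixes = ("java.", "scala.", "org.slf4j.", "org.apache.log4j.")
    return any(name.startswith(p) for p in shared_prefixes)

print(is_shared_class("org.apache.hadoop.fs.FileSystem"))        # shared by default
print(is_shared_class("org.apache.hadoop.fs.FileSystem",
                      shares_hadoop_classes=False))              # isolated copy instead
print(is_shared_class("org.apache.hadoop.hive.ql.exec.Task"))    # Hive stays isolated
```

Flipping the flag off is what lets an old metastore client run against its own Hadoop jars instead of the application's Hadoop 3 classes.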

If trying to update Hive, it would be good to avoid having to fork it,
as is done currently. But I'm not sure that will be possible given the
current hive-exec packaging.

On Mon, Apr 2, 2018 at 2:58 PM, Reynold Xin  wrote:
> Is it difficult to upgrade the Hive execution version to the latest version? The
> metastore used to be an issue, but now that part has been separated from the
> execution part.
>
>
> On Mon, Apr 2, 2018 at 1:57 PM, Marcelo Vanzin  wrote:
>>
>> Saisai filed SPARK-23534, but the main blocking issue is really
>> SPARK-18673.
>>
>>
>> On Mon, Apr 2, 2018 at 1:00 PM, Reynold Xin  wrote:
>> > Does anybody know what needs to be done in order for Spark to support
>> > Hadoop
>> > 3?
>> >
>>
>>
>>
>> --
>> Marcelo
>
>



-- 
Marcelo

-
To unsubscribe e-mail: dev-unsubscr...@spark.apache.org



Re: Hadoop 3 support

2018-04-02 Thread Reynold Xin
Is it difficult to upgrade the Hive execution version to the latest version?
The metastore used to be an issue, but now that part has been separated from
the execution part.


On Mon, Apr 2, 2018 at 1:57 PM, Marcelo Vanzin  wrote:

> Saisai filed SPARK-23534, but the main blocking issue is really
> SPARK-18673.
>
>
> On Mon, Apr 2, 2018 at 1:00 PM, Reynold Xin  wrote:
> > Does anybody know what needs to be done in order for Spark to support
> Hadoop
> > 3?
> >
>
>
>
> --
> Marcelo
>


Re: Hadoop 3 support

2018-04-02 Thread Marcelo Vanzin
Saisai filed SPARK-23534, but the main blocking issue is really SPARK-18673.


On Mon, Apr 2, 2018 at 1:00 PM, Reynold Xin  wrote:
> Does anybody know what needs to be done in order for Spark to support Hadoop
> 3?
>



-- 
Marcelo

-
To unsubscribe e-mail: dev-unsubscr...@spark.apache.org



Re: Hadoop 3 support

2018-04-02 Thread Reynold Xin
That's just a nice-to-have improvement, right? I'm more curious about the
minimal amount of work required to support 3.0, without all the bells and
whistles. (Of course we can also do the bells and whistles, but those would
come after we can actually get 3.0 running.)


On Mon, Apr 2, 2018 at 1:50 PM, Mridul Muralidharan 
wrote:

> Specifically, to run Spark with Hadoop 3 Docker support, I have filed a
> few JIRAs tracked under [1].
>
> Regards,
> Mridul
>
> [1] https://issues.apache.org/jira/browse/SPARK-23717
>
>
> On Mon, Apr 2, 2018 at 1:00 PM, Reynold Xin  wrote:
> > Does anybody know what needs to be done in order for Spark to support
> Hadoop
> > 3?
> >
>


Re: Hadoop 3 support

2018-04-02 Thread Mridul Muralidharan
Specifically, to run Spark with Hadoop 3 Docker support, I have filed a
few JIRAs tracked under [1].

Regards,
Mridul

[1] https://issues.apache.org/jira/browse/SPARK-23717


On Mon, Apr 2, 2018 at 1:00 PM, Reynold Xin  wrote:
> Does anybody know what needs to be done in order for Spark to support Hadoop
> 3?
>

-
To unsubscribe e-mail: dev-unsubscr...@spark.apache.org



Hadoop 3 support

2018-04-02 Thread Reynold Xin
Does anybody know what needs to be done in order for Spark to support
Hadoop 3?


Re: Re: Welcome Zhenhua Wang as a Spark committer

2018-04-02 Thread ron8hu
Congratulations, Zhenhua!  Well deserved!!

Ron 



--
Sent from: http://apache-spark-developers-list.1001551.n3.nabble.com/

-
To unsubscribe e-mail: dev-unsubscr...@spark.apache.org



Re: Welcome Zhenhua Wang as a Spark committer

2018-04-02 Thread Joseph Bradley
Welcome!

On Mon, Apr 2, 2018 at 11:00 AM, Takuya UESHIN 
wrote:

> Congratulations!
>
> On Mon, Apr 2, 2018 at 10:34 AM, Dongjoon Hyun 
> wrote:
>
>> Congratulations!
>>
>> Bests,
>> Dongjoon.
>>
>> On Mon, Apr 2, 2018 at 07:57 Cody Koeninger  wrote:
>>
>>> Congrats!
>>>
>>> On Mon, Apr 2, 2018 at 12:28 AM, Wenchen Fan 
>>> wrote:
>>> > Hi all,
>>> >
>>> > The Spark PMC recently added Zhenhua Wang as a committer on the
>>> project.
>>> > Zhenhua is the major contributor of the CBO project, and has been
>>> > contributing across several areas of Spark for a while, focusing
>>> especially
>>> > on analyzer, optimizer in Spark SQL. Please join me in welcoming
>>> Zhenhua!
>>> >
>>> > Wenchen
>>>
>>> -
>>> To unsubscribe e-mail: dev-unsubscr...@spark.apache.org
>>>
>>>
>
>
> --
> Takuya UESHIN
> Tokyo, Japan
>
> http://twitter.com/ueshin
>



-- 

Joseph Bradley

Software Engineer - Machine Learning

Databricks, Inc.



Re: [Kubernetes] Resource requests and limits for Driver and Executor Pods

2018-04-02 Thread Anirudh Ramanathan
In summary, it looks like a combination of David's PR (#20943) and Yinan's
PR (#20553) is a good solution here.
Agreed on the importance of requesting memory overhead up front.

I'm also wondering if we should support running in other QoS classes -
https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#qos-classes -
such as best-effort, i.e. launching in a configuration that has neither the
limit nor the request specified. I haven't seen a use case, but I can
imagine this is a way for people to achieve better utilization with
low-priority, long-running jobs.
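For reference, Kubernetes derives a pod's QoS class purely from how requests and limits are set. A rough single-container sketch of the classification rules (simplified from the Kubernetes docs linked above - the real rules consider every container in the pod, and requests default to limits when unset):

```python
def qos_class(requests, limits):
    """Classify a single-container pod (simplified Kubernetes rules):
    BestEffort - no requests and no limits at all
    Guaranteed - cpu and memory limits set, with requests equal to limits
    Burstable  - anything in between"""
    if not requests and not limits:
        return "BestEffort"
    resources = ("cpu", "memory")
    if all(r in limits for r in resources) and \
            all(requests.get(r) == limits[r] for r in resources):
        return "Guaranteed"
    return "Burstable"

print(qos_class({}, {}))                          # BestEffort: neither set
print(qos_class({"cpu": "1", "memory": "2Gi"},
                {"cpu": "1", "memory": "2Gi"}))   # Guaranteed: requests == limits
print(qos_class({"cpu": "1"}, {}))                # Burstable: request without limit
```

So "request but no limit" (the current driver/executor discussion) lands in Burstable, while omitting both would give the best-effort behavior mentioned above.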

On Fri, Mar 30, 2018 at 3:06 PM Yinan Li  wrote:

> Yes, the PR allows you to set, say, 1.5. The new configuration property
> defaults to spark.executor.cores, which defaults to 1.
>
> On Fri, Mar 30, 2018, 3:03 PM Kimoon Kim  wrote:
>
>> David, glad it helped! And thanks for your clear example.
>>
>> > The only remaining question would then be what a sensible default for
>> *spark.kubernetes.executor.cores *would be. Seeing that I wanted more
>> than 1 and Yinan wants less, leaving it at 1 might be best.
>>
>> 1 as default SGTM.
>>
>> Thanks,
>> Kimoon
>>
>> On Fri, Mar 30, 2018 at 1:38 PM, David Vogelbacher <
>> dvogelbac...@palantir.com> wrote:
>>
>>> Thanks for linking that PR Kimoon.
>>>
>>>
>>> It actually does mostly address the issue I was referring to. As the
>>> issue  I
>>> linked in my first email states, one physical cpu might not be enough to
>>> execute a task in a performant way.
>>>
>>>
>>>
>>> So if I set *spark.executor.cores=1* and *spark.task.cpus=1* , I will
>>> get 1 core from Kubernetes and execute one task per Executor and run into
>>> performance problems.
>>>
>>> Being able to specify `spark.kubernetes.executor.cores=1.2` would fix
>>> the issue (1.2 is just an example).
>>>
>>> I am curious as to why you, Yinan, would want to use this property to
>>> request less than 1 physical cpu (that is how it sounds to me on the PR).
>>>
>>> Do you have testing that indicates that less than 1 physical CPU is
>>> enough for executing tasks?
>>>
>>>
>>>
>>> In the end it boils down to the question proposed by Yinan:
>>>
>>> > A relevant question is should Spark on Kubernetes really be
>>> opinionated on how to set the cpu request and limit and even try to
>>> determine this automatically?
>>>
>>>
>>>
>>> And I completely agree with your answer Kimoon, we should provide
>>> sensible defaults and make it configurable, as Yinan’s PR does.
>>>
>>> The only remaining question would then be what a sensible default for 
>>> *spark.kubernetes.executor.cores* would be. Seeing that I wanted more
>>> than 1 and Yinan wants less, leaving it at 1 might be best.
>>>
>>>
>>>
>>> Thanks,
>>>
>>> David
>>>
>>>
>>>
>>> *From: *Kimoon Kim 
>>> *Date: *Friday, March 30, 2018 at 4:28 PM
>>> *To: *Yinan Li 
>>> *Cc: *David Vogelbacher , "
>>> dev@spark.apache.org" 
>>> *Subject: *Re: [Kubernetes] Resource requests and limits for Driver and
>>> Executor Pods
>>>
>>>
>>>
>>> I see. Good to learn the interaction between spark.task.cpus and
>>> spark.executor.cores. But am I right to say that PR #20553 can be still
>>> used as an additional knob on top of those two? Say a user wants 1.5 core
>>> per executor from Kubernetes, not the rounded up integer value 2?
>>>
>>>
>>>
>>> > A relevant question is should Spark on Kubernetes really be
>>> opinionated on how to set the cpu request and limit and even try to
>>> determine this automatically?
>>>
>>>
>>>
>>> Personally, I don't see how this can be auto-determined at all. I think
>>> the best we can do is to come up with sensible default values for the most
>>> common case, and provide clearly documented knobs for edge cases.
>>>
>>>
>>> Thanks,
>>>
>>> Kimoon
>>>
>>>
>>>
>>> On Fri, Mar 30, 2018 at 12:37 PM, Yinan Li  wrote:
>>>
>>> PR #20553 is more for allowing users to use a fractional value for cpu
>>> requests. The existing spark.executor.cores is sufficient for specifying
>>> more than one cpu.
>>>
>>>
>>>
>>> > One way to solve this could be to request more than 1 core from
>>> Kubernetes per task. The exact amount we should request is unclear to me
>>> (it largely depends on how many threads actually get spawned for a task).
>>>
>>> A good indication is spark.task.cpus, and on average how many tasks are
>>> expected to run by a single executor at any point in time. If each 

Re: Welcome Zhenhua Wang as a Spark committer

2018-04-02 Thread Takuya UESHIN
Congratulations!

On Mon, Apr 2, 2018 at 10:34 AM, Dongjoon Hyun 
wrote:

> Congratulations!
>
> Bests,
> Dongjoon.
>
> On Mon, Apr 2, 2018 at 07:57 Cody Koeninger  wrote:
>
>> Congrats!
>>
>> On Mon, Apr 2, 2018 at 12:28 AM, Wenchen Fan  wrote:
>> > Hi all,
>> >
>> > The Spark PMC recently added Zhenhua Wang as a committer on the project.
>> > Zhenhua is the major contributor of the CBO project, and has been
>> > contributing across several areas of Spark for a while, focusing
>> especially
>> > on analyzer, optimizer in Spark SQL. Please join me in welcoming
>> Zhenhua!
>> >
>> > Wenchen
>>
>> -
>> To unsubscribe e-mail: dev-unsubscr...@spark.apache.org
>>
>>


-- 
Takuya UESHIN
Tokyo, Japan

http://twitter.com/ueshin


Re: Welcome Zhenhua Wang as a Spark committer

2018-04-02 Thread Dongjoon Hyun
Congratulations!

Bests,
Dongjoon.

On Mon, Apr 2, 2018 at 07:57 Cody Koeninger  wrote:

> Congrats!
>
> On Mon, Apr 2, 2018 at 12:28 AM, Wenchen Fan  wrote:
> > Hi all,
> >
> > The Spark PMC recently added Zhenhua Wang as a committer on the project.
> > Zhenhua is the major contributor of the CBO project, and has been
> > contributing across several areas of Spark for a while, focusing
> especially
> > on analyzer, optimizer in Spark SQL. Please join me in welcoming Zhenhua!
> >
> > Wenchen
>
> -
> To unsubscribe e-mail: dev-unsubscr...@spark.apache.org
>
>


Re: Welcome Zhenhua Wang as a Spark committer

2018-04-02 Thread Cody Koeninger
Congrats!

On Mon, Apr 2, 2018 at 12:28 AM, Wenchen Fan  wrote:
> Hi all,
>
> The Spark PMC recently added Zhenhua Wang as a committer on the project.
> Zhenhua is the major contributor of the CBO project, and has been
> contributing across several areas of Spark for a while, focusing especially
> on analyzer, optimizer in Spark SQL. Please join me in welcoming Zhenhua!
>
> Wenchen

-
To unsubscribe e-mail: dev-unsubscr...@spark.apache.org



Re: Re: Welcome Zhenhua Wang as a Spark committer

2018-04-02 Thread Xiao Li
Congratulations!

Xiao

On Mon, Apr 2, 2018 at 4:57 AM Hadrien Chicault 
wrote:

> Congrats
>
> On Mon, Apr 2, 2018 at 12:06 PM, Weichen Xu  wrote:
>
>> Congrats Zhenhua!
>>
>> On Mon, Apr 2, 2018 at 5:32 PM, Gengliang  wrote:
>>
>>> Congrats, Zhenhua!
>>>
>>>
>>>
>>> On Mon, Apr 2, 2018 at 5:19 PM, Marco Gaido 
>>> wrote:
>>>
 Congrats Zhenhua!

 2018-04-02 11:00 GMT+02:00 Saisai Shao :

> Congrats, Zhenhua!
>
> 2018-04-02 16:57 GMT+08:00 Takeshi Yamamuro :
>
>> Congrats, Zhenhua!
>>
>> On Mon, Apr 2, 2018 at 4:13 PM, Ted Yu  wrote:
>>
>>> Congratulations, Zhenhua
>>>
>>>  Original message 
>>> From: 雨中漫步 <601450...@qq.com>
>>> Date: 4/1/18 11:30 PM (GMT-08:00)
>>> To: Yuanjian Li , Wenchen Fan <
>>> cloud0...@gmail.com>
>>> Cc: dev 
>>> Subject: Re: Welcome Zhenhua Wang as a Spark committer
>>>
>>> Congratulations Zhenhua Wang
>>>
>>>
>>> -- Original message --
>>> *From:* "Yuanjian Li";
>>> *Sent:* Monday, April 2, 2018, 2:26 PM
>>> *To:* "Wenchen Fan";
>>> *Cc:* "Spark dev list";
>>> *Subject:* Re: Welcome Zhenhua Wang as a Spark committer
>>>
>>> Congratulations Zhenhua!!
>>>
>>> 2018-04-02 13:28 GMT+08:00 Wenchen Fan :
>>>
 Hi all,

 The Spark PMC recently added Zhenhua Wang as a committer on the
 project. Zhenhua is the major contributor of the CBO project, and has 
 been
 contributing across several areas of Spark for a while, focusing 
 especially
 on analyzer, optimizer in Spark SQL. Please join me in welcoming 
 Zhenhua!

 Wenchen

>>>
>>>
>>
>>
>> --
>> ---
>> Takeshi Yamamuro
>>
>
>

>>>
>>


Re: Re: Welcome Zhenhua Wang as a Spark committer

2018-04-02 Thread Hadrien Chicault
Congrats

On Mon, Apr 2, 2018 at 12:06 PM, Weichen Xu  wrote:

> Congrats Zhenhua!
>
> On Mon, Apr 2, 2018 at 5:32 PM, Gengliang  wrote:
>
>> Congrats, Zhenhua!
>>
>>
>>
>> On Mon, Apr 2, 2018 at 5:19 PM, Marco Gaido 
>> wrote:
>>
>>> Congrats Zhenhua!
>>>
>>> 2018-04-02 11:00 GMT+02:00 Saisai Shao :
>>>
 Congrats, Zhenhua!

 2018-04-02 16:57 GMT+08:00 Takeshi Yamamuro :

> Congrats, Zhenhua!
>
> On Mon, Apr 2, 2018 at 4:13 PM, Ted Yu  wrote:
>
>> Congratulations, Zhenhua
>>
>>  Original message 
>> From: 雨中漫步 <601450...@qq.com>
>> Date: 4/1/18 11:30 PM (GMT-08:00)
>> To: Yuanjian Li , Wenchen Fan <
>> cloud0...@gmail.com>
>> Cc: dev 
>> Subject: Re: Welcome Zhenhua Wang as a Spark committer
>>
>> Congratulations Zhenhua Wang
>>
>>
>> -- Original message --
>> *From:* "Yuanjian Li";
>> *Sent:* Monday, April 2, 2018, 2:26 PM
>> *To:* "Wenchen Fan";
>> *Cc:* "Spark dev list";
>> *Subject:* Re: Welcome Zhenhua Wang as a Spark committer
>>
>> Congratulations Zhenhua!!
>>
>> 2018-04-02 13:28 GMT+08:00 Wenchen Fan :
>>
>>> Hi all,
>>>
>>> The Spark PMC recently added Zhenhua Wang as a committer on the
>>> project. Zhenhua is the major contributor of the CBO project, and has 
>>> been
>>> contributing across several areas of Spark for a while, focusing 
>>> especially
>>> on analyzer, optimizer in Spark SQL. Please join me in welcoming 
>>> Zhenhua!
>>>
>>> Wenchen
>>>
>>
>>
>
>
> --
> ---
> Takeshi Yamamuro
>


>>>
>>
>


Re: Re: Welcome Zhenhua Wang as a Spark committer

2018-04-02 Thread Weichen Xu
Congrats Zhenhua!

On Mon, Apr 2, 2018 at 5:32 PM, Gengliang  wrote:

> Congrats, Zhenhua!
>
>
>
> On Mon, Apr 2, 2018 at 5:19 PM, Marco Gaido 
> wrote:
>
>> Congrats Zhenhua!
>>
>> 2018-04-02 11:00 GMT+02:00 Saisai Shao :
>>
>>> Congrats, Zhenhua!
>>>
>>> 2018-04-02 16:57 GMT+08:00 Takeshi Yamamuro :
>>>
 Congrats, Zhenhua!

 On Mon, Apr 2, 2018 at 4:13 PM, Ted Yu  wrote:

> Congratulations, Zhenhua
>
>  Original message 
> From: 雨中漫步 <601450...@qq.com>
> Date: 4/1/18 11:30 PM (GMT-08:00)
> To: Yuanjian Li , Wenchen Fan <
> cloud0...@gmail.com>
> Cc: dev 
> Subject: Re: Welcome Zhenhua Wang as a Spark committer
>
> Congratulations Zhenhua Wang
>
>
> -- Original message --
> *From:* "Yuanjian Li";
> *Sent:* Monday, April 2, 2018, 2:26 PM
> *To:* "Wenchen Fan";
> *Cc:* "Spark dev list";
> *Subject:* Re: Welcome Zhenhua Wang as a Spark committer
>
> Congratulations Zhenhua!!
>
> 2018-04-02 13:28 GMT+08:00 Wenchen Fan :
>
>> Hi all,
>>
>> The Spark PMC recently added Zhenhua Wang as a committer on the
>> project. Zhenhua is the major contributor of the CBO project, and has 
>> been
>> contributing across several areas of Spark for a while, focusing 
>> especially
>> on analyzer, optimizer in Spark SQL. Please join me in welcoming Zhenhua!
>>
>> Wenchen
>>
>
>


 --
 ---
 Takeshi Yamamuro

>>>
>>>
>>
>


Re: Re: Welcome Zhenhua Wang as a Spark committer

2018-04-02 Thread Gengliang
Congrats, Zhenhua!



On Mon, Apr 2, 2018 at 5:19 PM, Marco Gaido  wrote:

> Congrats Zhenhua!
>
> 2018-04-02 11:00 GMT+02:00 Saisai Shao :
>
>> Congrats, Zhenhua!
>>
>> 2018-04-02 16:57 GMT+08:00 Takeshi Yamamuro :
>>
>>> Congrats, Zhenhua!
>>>
>>> On Mon, Apr 2, 2018 at 4:13 PM, Ted Yu  wrote:
>>>
 Congratulations, Zhenhua

  Original message 
 From: 雨中漫步 <601450...@qq.com>
 Date: 4/1/18 11:30 PM (GMT-08:00)
 To: Yuanjian Li , Wenchen Fan <
 cloud0...@gmail.com>
 Cc: dev 
 Subject: Re: Welcome Zhenhua Wang as a Spark committer

 Congratulations Zhenhua Wang


 -- Original message --
 *From:* "Yuanjian Li";
 *Sent:* Monday, April 2, 2018, 2:26 PM
 *To:* "Wenchen Fan";
 *Cc:* "Spark dev list";
 *Subject:* Re: Welcome Zhenhua Wang as a Spark committer

 Congratulations Zhenhua!!

 2018-04-02 13:28 GMT+08:00 Wenchen Fan :

> Hi all,
>
> The Spark PMC recently added Zhenhua Wang as a committer on the
> project. Zhenhua is the major contributor of the CBO project, and has been
> contributing across several areas of Spark for a while, focusing 
> especially
> on analyzer, optimizer in Spark SQL. Please join me in welcoming Zhenhua!
>
> Wenchen
>


>>>
>>>
>>> --
>>> ---
>>> Takeshi Yamamuro
>>>
>>
>>
>


Re: Re: Welcome Zhenhua Wang as a Spark committer

2018-04-02 Thread Marco Gaido
Congrats Zhenhua!

2018-04-02 11:00 GMT+02:00 Saisai Shao :

> Congrats, Zhenhua!
>
> 2018-04-02 16:57 GMT+08:00 Takeshi Yamamuro :
>
>> Congrats, Zhenhua!
>>
>> On Mon, Apr 2, 2018 at 4:13 PM, Ted Yu  wrote:
>>
>>> Congratulations, Zhenhua
>>>
>>>  Original message 
>>> From: 雨中漫步 <601450...@qq.com>
>>> Date: 4/1/18 11:30 PM (GMT-08:00)
>>> To: Yuanjian Li , Wenchen Fan <
>>> cloud0...@gmail.com>
>>> Cc: dev 
>>> Subject: Re: Welcome Zhenhua Wang as a Spark committer
>>>
>>> Congratulations Zhenhua Wang
>>>
>>>
>>> -- Original message --
>>> *From:* "Yuanjian Li";
>>> *Sent:* Monday, April 2, 2018, 2:26 PM
>>> *To:* "Wenchen Fan";
>>> *Cc:* "Spark dev list";
>>> *Subject:* Re: Welcome Zhenhua Wang as a Spark committer
>>>
>>> Congratulations Zhenhua!!
>>>
>>> 2018-04-02 13:28 GMT+08:00 Wenchen Fan :
>>>
 Hi all,

 The Spark PMC recently added Zhenhua Wang as a committer on the
 project. Zhenhua is the major contributor of the CBO project, and has been
 contributing across several areas of Spark for a while, focusing especially
 on analyzer, optimizer in Spark SQL. Please join me in welcoming Zhenhua!

 Wenchen

>>>
>>>
>>
>>
>> --
>> ---
>> Takeshi Yamamuro
>>
>
>


Re: Re: Welcome Zhenhua Wang as a Spark committer

2018-04-02 Thread Saisai Shao
Congrats, Zhenhua!

2018-04-02 16:57 GMT+08:00 Takeshi Yamamuro :

> Congrats, Zhenhua!
>
> On Mon, Apr 2, 2018 at 4:13 PM, Ted Yu  wrote:
>
>> Congratulations, Zhenhua
>>
>>  Original message 
>> From: 雨中漫步 <601450...@qq.com>
>> Date: 4/1/18 11:30 PM (GMT-08:00)
>> To: Yuanjian Li , Wenchen Fan <
>> cloud0...@gmail.com>
>> Cc: dev 
>> Subject: Re: Welcome Zhenhua Wang as a Spark committer
>>
>> Congratulations Zhenhua Wang
>>
>>
>> -- Original message --
>> *From:* "Yuanjian Li";
>> *Sent:* Monday, April 2, 2018, 2:26 PM
>> *To:* "Wenchen Fan";
>> *Cc:* "Spark dev list";
>> *Subject:* Re: Welcome Zhenhua Wang as a Spark committer
>>
>> Congratulations Zhenhua!!
>>
>> 2018-04-02 13:28 GMT+08:00 Wenchen Fan :
>>
>>> Hi all,
>>>
>>> The Spark PMC recently added Zhenhua Wang as a committer on the project.
>>> Zhenhua is the major contributor of the CBO project, and has been
>>> contributing across several areas of Spark for a while, focusing especially
>>> on analyzer, optimizer in Spark SQL. Please join me in welcoming Zhenhua!
>>>
>>> Wenchen
>>>
>>
>>
>
>
> --
> ---
> Takeshi Yamamuro
>


Re: Re: Welcome Zhenhua Wang as a Spark committer

2018-04-02 Thread Takeshi Yamamuro
Congrats, Zhenhua!

On Mon, Apr 2, 2018 at 4:13 PM, Ted Yu  wrote:

> Congratulations, Zhenhua
>
>  Original message 
> From: 雨中漫步 <601450...@qq.com>
> Date: 4/1/18 11:30 PM (GMT-08:00)
> To: Yuanjian Li , Wenchen Fan 
>
> Cc: dev 
> Subject: Re: Welcome Zhenhua Wang as a Spark committer
>
> Congratulations Zhenhua Wang
>
>
> -- Original message --
> *From:* "Yuanjian Li";
> *Sent:* Monday, April 2, 2018, 2:26 PM
> *To:* "Wenchen Fan";
> *Cc:* "Spark dev list";
> *Subject:* Re: Welcome Zhenhua Wang as a Spark committer
>
> Congratulations Zhenhua!!
>
> 2018-04-02 13:28 GMT+08:00 Wenchen Fan :
>
>> Hi all,
>>
>> The Spark PMC recently added Zhenhua Wang as a committer on the project.
>> Zhenhua is the major contributor of the CBO project, and has been
>> contributing across several areas of Spark for a while, focusing especially
>> on analyzer, optimizer in Spark SQL. Please join me in welcoming Zhenhua!
>>
>> Wenchen
>>
>
>


-- 
---
Takeshi Yamamuro


Re: Re: Welcome Zhenhua Wang as a Spark committer

2018-04-02 Thread Ted Yu
Congratulations, Zhenhua

 Original message 
From: 雨中漫步 <601450...@qq.com>
Date: 4/1/18 11:30 PM (GMT-08:00)
To: Yuanjian Li , Wenchen Fan 
Cc: dev 
Subject: Re: Welcome Zhenhua Wang as a Spark committer
Congratulations Zhenhua Wang

-- Original message --
From: "Yuanjian Li";
Sent: Monday, April 2, 2018, 2:26 PM
To: "Wenchen Fan";
Cc: "Spark dev list";
Subject: Re: Welcome Zhenhua Wang as a Spark committer
Congratulations Zhenhua!!

2018-04-02 13:28 GMT+08:00 Wenchen Fan :
Hi all,
The Spark PMC recently added Zhenhua Wang as a committer on the project. 
Zhenhua is the major contributor of the CBO project, and has been contributing 
across several areas of Spark for a while, focusing especially on analyzer, 
optimizer in Spark SQL. Please join me in welcoming Zhenhua!
Wenchen



Re: Welcome Zhenhua Wang as a Spark committer

2018-04-02 Thread 雨中漫步
Congratulations Zhenhua Wang




-- Original message --
From: "Yuanjian Li";
Sent: Monday, April 2, 2018, 2:26 PM
To: "Wenchen Fan";
Cc: "Spark dev list";
Subject: Re: Welcome Zhenhua Wang as a Spark committer



Congratulations Zhenhua!!


2018-04-02 13:28 GMT+08:00 Wenchen Fan :
Hi all,

The Spark PMC recently added Zhenhua Wang as a committer on the project. 
Zhenhua is the major contributor of the CBO project, and has been contributing 
across several areas of Spark for a while, focusing especially on analyzer, 
optimizer in Spark SQL. Please join me in welcoming Zhenhua!


Wenchen

Re: Welcome Zhenhua Wang as a Spark committer

2018-04-02 Thread Yuanjian Li
Congratulations Zhenhua!!

2018-04-02 13:28 GMT+08:00 Wenchen Fan :

> Hi all,
>
> The Spark PMC recently added Zhenhua Wang as a committer on the project.
> Zhenhua is the major contributor of the CBO project, and has been
> contributing across several areas of Spark for a while, focusing especially
> on analyzer, optimizer in Spark SQL. Please join me in welcoming Zhenhua!
>
> Wenchen
>


Re: Welcome Zhenhua Wang as a Spark committer

2018-04-02 Thread Matei Zaharia
Welcome, Zhenhua!

Matei

> On Apr 1, 2018, at 10:28 PM, Wenchen Fan  wrote:
> 
> Hi all,
> 
> The Spark PMC recently added Zhenhua Wang as a committer on the project. 
> Zhenhua is the major contributor of the CBO project, and has been 
> contributing across several areas of Spark for a while, focusing especially 
> on analyzer, optimizer in Spark SQL. Please join me in welcoming Zhenhua!
> 
> Wenchen


-
To unsubscribe e-mail: dev-unsubscr...@spark.apache.org