not have been possible without you.
Dongjoon Hyun
Hi, Cheng.
Thank you for the suggestion. Your suggestion seems to have at least two
themes.
A. Adding a new Apache Spark community policy (contract) to guarantee
support for MySQL LTS versions.
B. Dropping support for non-LTS versions (MySQL 8.3/8.2/8.1).
And it raises three questions for me.
master
[image: Screenshot 2024-02-29 at 21.12.24.png]
Could you do the follow-up, please?
Thank you in advance.
Dongjoon.
On Thu, Feb 29, 2024 at 2:48 PM John Zhuge wrote:
> Excellent work, congratulations!
>
> On Wed, Feb 28, 2024 at 10:12 PM Dongjoon Hyun
> wrote:
>
>> C
Congratulations!
Bests,
Dongjoon.
On Wed, Feb 28, 2024 at 11:43 AM beliefer wrote:
> Congratulations!
>
>
>
> At 2024-02-28 17:43:25, "Jungtaek Lim"
> wrote:
>
> Hi everyone,
>
> We are happy to announce the availability of Spark 3.5.1!
>
> Spark 3.5.1 is a maintenance release containing
would not have been possible without you.
Dongjoon Hyun
Hi, All.
As a part of Apache Spark 4.0.0 (SPARK-44111), the Apache Spark community
now has test coverage for all supported Python versions as of today.
- https://github.com/apache/spark/actions/runs/7061665420
Here is a summary.
1. Main CI: All PRs and commits on `master` branch are
not have been possible without you.
Dongjoon Hyun
.
Dongjoon Hyun
.
Dongjoon Hyun
to keep this
> active.
>
>
>
> On Mon, Apr 3, 2023 at 16:46 Dongjoon Hyun
> wrote:
>
>> Shall we summarize the discussion so far?
>>
>> To sum up, "ASF Slack" vs "3rd-party Slack" was the real background to
>> initiate this thread instead
reference. They go with whatever way is convenient for them.
>>>
>>> Same applies here - if ASF Slack requires a restricted invitation
>>> mechanism then it won't work. Looks like there is a link for an invitation,
>>> but we are also talking about the cost as well
> On Mon, 3 Apr 2023 at 20:59, Dongjoon Hyun
> wrote:
>
>> As Mich Tal
, I stand
>>>>>> corrected
>>>>>> - To be clear, I intentionally didn't refer to any specific mailing
>>>>>> list because we didn't set up any rule here yet.
>>>>>>fair enough
>>>>>>
>>>>>>
communities. TBH, we are kind of late. I think we can do the
> same in our community?
>
> We can follow the guide once the ASF has an official process for ASF
> archiving. Since our PMC owns the Slack workspace, we can make
> a change based on the policy. WDYT?
>
>
Hi, Xiao and all.
(cc Matei)
Please hold on the vote.
There is a concern expressed by the ASF board because recent Slack activity
has created an isolated silo outside of the ASF mailing list archives.
We need to establish a way to bring it back into the ASF archives before
starting anything official.
Bests,
Thank you for considering me, but may I ask what made you think of putting
me there, Mich? I'm curious about your reason.
> I have put dongjoon.hyun as a shepherd.
BTW, unfortunately, I cannot help you with that due to ongoing personal
matters. I'll adjust the JIRA first.
Thanks,
Dongjoon.
On
Thank you, Chao!
On Wed, Nov 30, 2022 at 8:16 AM Yang,Jie(INF) wrote:
> Thanks, Chao!
>
>
>
> *From:* Maxim Gekk
> *Date:* Wednesday, November 30, 2022, 19:40
> *To:* Jungtaek Lim
> *Cc:* Wenchen Fan , Chao Sun ,
> dev , user
> *Subject:* Re: [ANNOUNCE] Apache Spark 3.2.3 released
>
>
>
> Thank you,
It's great. Thank you so much, Yuming!
Dongjoon
On Tue, Oct 25, 2022 at 11:23 PM Yuming Wang wrote:
> We are happy to announce the availability of Apache Spark 3.3.1!
>
> Spark 3.3.1 is a maintenance release containing stability fixes. This
> release is based on the branch-3.3 maintenance
.
Dongjoon Hyun
Thank you again, Huaxin!
Dongjoon.
On Fri, Jan 28, 2022 at 6:23 PM DB Tsai wrote:
> Thank you, Huaxin for the 3.2.1 release!
>
> Sent from my iPhone
>
> On Jan 28, 2022, at 5:45 PM, Chao Sun wrote:
>
>
> Thanks Huaxin for driving the release!
>
> On Fri, Jan 28, 2022 at 5:37 PM Ruifeng
Thank you so much, Gengliang and all!
Dongjoon.
On Tue, Oct 19, 2021 at 8:48 AM Xiao Li wrote:
> Thank you, Gengliang!
>
> Congrats to our community and all the contributors!
>
> Xiao
>
> On Tue, Oct 19, 2021 at 8:26 AM, Henrik Peng wrote:
>
>> Congrats and thanks!
>>
>>
>> On Oct 19, 2021, Gengliang Wang
Thank you, Yi!
On Thu, Jun 24, 2021 at 10:52 PM Yi Wu wrote:
> We are happy to announce the availability of Spark 3.0.3!
>
> Spark 3.0.3 is a maintenance release containing stability fixes. This
> release is based on the branch-3.0 maintenance branch of Spark. We strongly
> recommend all 3.0
Hi, Stephen and Steve.
The Apache Spark community has started publishing it as a snapshot, and
Apache Spark 3.2.0 will be the first release to include it.
-
https://repository.apache.org/content/groups/snapshots/org/apache/spark/spark-hadoop-cloud_2.12/3.2.0-SNAPSHOT/
Please check the snapshot artifacts and
.
Dongjoon Hyun
It took a long time. Thank you, Hyukjin and all!
Bests,
Dongjoon.
On Wed, Mar 3, 2021 at 3:23 AM Gabor Somogyi
wrote:
> Good to hear and great work Hyukjin!
>
> On Wed, 3 Mar 2021, 11:15 Jungtaek Lim,
> wrote:
>
>> Thanks Hyukjin for driving the huge release, and thanks everyone for
>>
.
Dongjoon Hyun
Hi, All.
The Apache Spark 3.1.0 release window was adjusted as follows today.
Please check the latest information on the official website.
-
https://github.com/apache/spark-website/commit/0cd0bdc80503882b4737db7e77cc8f9d17ec12ca
- https://spark.apache.org/versioning-policy.html
me (using older spark version to extract
> out of hive, then switch to newer spark version) so i am not too worried
> about this. just making sure i understand.
>
> thanks
>
> On Sat, Oct 3, 2020 at 8:17 PM Dongjoon Hyun
> wrote:
>
>> Hi, All.
>>
>> As of today,
4, 2020 at 10:53 AM Dongjoon Hyun
wrote:
> Thank you all.
>
> BTW, Xiao and Mridul, I'm wondering what date you have in your mind
> specifically.
>
> Usually, `Christmas and New Year season` doesn't give us much additional
> time.
>
> If you think so, could you make a
le syntax:
>>https://issues.apache.org/jira/browse/SPARK-31257
>>- Bloom filter join: https://issues.apache.org/jira/browse/SPARK-32268
>>
>> Thanks,
>>
>> Xiao
>>
>>
>> On Sat, Oct 3, 2020 at 5:41 PM, Hyukjin Kwon wrote:
>>
>>> Nice summa
Hi, All.
As of today, the master branch (Apache Spark 3.1.0) has resolved
852+ JIRA issues, and 606+ of them are 3.1.0-only patches.
According to the 3.1.0 release window, branch-3.1 will be
cut on November 1st and then enter the QA period.
Here are some notable updates I've been monitoring.
*Language*
01.
It's great. Thank you, Ruifeng!
Bests,
Dongjoon.
On Fri, Sep 11, 2020 at 1:54 AM 郑瑞峰 wrote:
> Hi all,
>
> We are happy to announce the availability of Spark 3.0.1!
> Spark 3.0.1 is a maintenance release containing stability fixes. This
> release is based on the branch-3.0 maintenance branch of
Thank you so much, Holden! :)
On Wed, Jun 10, 2020 at 6:59 PM Hyukjin Kwon wrote:
> Yay!
>
> On Thu, Jun 11, 2020 at 10:38 AM, Holden Karau wrote:
>
>> We are happy to announce the availability of Spark 2.4.6!
>>
>> Spark 2.4.6 is a maintenance release containing stability, correctness,
>> and
PM, Reynold Xin wrote:
>
>> I looked up our usage logs (sorry I can't share this publicly) and trim
>> has at least four orders of magnitude higher usage than char.
>>
>>
>> On Mon, Mar 16, 2020 at 5:27 PM, Dongjoon Hyun
>> wrote:
>>
>>> T
nsistently everywhere.
>
>
> Cheers,
>
> Steve C
>
> On 17 Mar 2020, at 10:01 am, Dongjoon Hyun
> wrote:
>
> Hi, Reynold.
> (And +Michael Armbrust)
>
> If you think so, do you think it's okay that we change the return value
> silently? Then, I'm wondering why we r
0the%20default.=Snowflake%20currently%20deviates%20from%20common,space%2Dpadded%20at%20the%20end.>
>> :
>> "Snowflake currently deviates from common CHAR semantics in that strings
>> shorter than the maximum length are not space-padded at the end."
>>
>> MyS
code that was working for char(3) would now stop
> working.
>
> For new users, depending on whether the underlying metastore char(3) is
> either supported but different from ansi Sql (which is not that big of a
> deal if we explain it) or not supported.
>
> On Sat, Mar 14, 2020 at 3
Hi, All.
Apache Spark has suffered from a known consistency issue in `CHAR` type
behavior across its usages and configurations. However, the evolution has
been gradually moving toward consistency inside Apache Spark, because we
don't support `CHAR` officially. The following is the
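For readers following the thread, the ANSI `CHAR(n)` padding semantics under
discussion (and the Snowflake deviation quoted above) can be sketched in a
few lines of Python. This is an illustration only, not Spark, Hive, or
Snowflake code:

```python
def char_store(value: str, n: int) -> str:
    # ANSI CHAR(n): values shorter than n are right-padded with spaces,
    # so every stored value has exactly length n.
    if len(value) > n:
        raise ValueError(f"value longer than CHAR({n})")
    return value.ljust(n)

def varchar_store(value: str, n: int) -> str:
    # VARCHAR(n) -- and, per the quote above, Snowflake's CHAR --
    # stores the value as-is, with no padding.
    if len(value) > n:
        raise ValueError(f"value longer than VARCHAR({n})")
    return value
```

Under ANSI semantics `char_store("ab", 3)` returns `"ab "`, which is why code
that compares against the unpadded literal `"ab"` can silently break when the
padding behavior changes underneath it.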
There was a typo in one URL. The correct release note URL is here.
https://spark.apache.org/releases/spark-release-2-4-5.html
On Sat, Feb 8, 2020 at 5:22 PM Dongjoon Hyun
wrote:
> We are happy to announce the availability of Spark 2.4.5!
>
> Spark 2.4.5 is a maintenance release c
all community members for contributing to this
release. This release would not have been possible without you.
Dongjoon Hyun
Indeed! Thank you again, Yuming and all.
Bests,
Dongjoon.
On Tue, Dec 24, 2019 at 13:38 Takeshi Yamamuro
wrote:
> Great work, Yuming!
>
> Bests,
> Takeshi
>
> On Wed, Dec 25, 2019 at 6:00 AM Xiao Li wrote:
>
>> Thank you all. Happy Holidays!
>>
>> Xiao
>>
>> On Tue, Dec 24, 2019 at 12:53 PM
+1 for Apache ORC 1.4.5 release.
Thank you for making the release.
I'd like to mention some notable changes here.
Apache ORC 1.4.5 is not a drop-in replacement for 1.4.4 because of the
following.
ORC-498: ReaderImpl and RecordReaderImpl open separate file handles.
Applications should be
all community members for contributing to this
release. This release would not have been possible without you.
Dongjoon Hyun
Hi, All.
The vote passes. Thanks to all who helped with this 2.4.4 release!
It was a very intense vote with +11 (including +8 PMC votes) and no -1.
I'll follow up later with a release announcement once everything is
published.
+1 (* = binding):
Dongjoon Hyun
Kazuaki Ishizaki
Sean Owen*
Wenchen
Hi, All.
Thanks to your many, many contributions,
the Apache Spark master branch now passes on JDK11 as of today.
(with `hadoop-3.2` profile: Apache Hadoop 3.2 and Hive 2.3.6)
>>>>> On Tue, Aug 13, 2019 at 5:22 PM Sean Owen wrote:
>>>>>
>>>>>> Seems fine to me if there are enough valuable fixes to justify another
>>>>>> release. If there are any other important fixes imminent, it's fin
Hi, All.
Spark 2.4.3 was released three months ago (8th May).
As of today (13th August), there are 112 commits (75 JIRAs) in `branch-24`
since 2.4.3.
It would be great if we could have Spark 2.4.4.
Shall we start `2.4.4 RC1` next Monday (19th August)?
Last time, there was a request for K8s issue
oon for being a release manager.
>
> If the assumed dates are ok, I would like to volunteer as the 2.3.4
> release manager.
>
> Best Regards,
> Kazuaki Ishizaki,
>
>
>
> From:Dongjoon Hyun
> To:dev , "user @spark" <
> user@spark.apache.
Hi, Apache Spark PMC members.
Can we cut Apache Spark 2.4.4 next Monday (22nd July)?
Bests,
Dongjoon.
On Fri, Jul 12, 2019 at 3:18 PM Dongjoon Hyun
wrote:
> Thank you, Jacek.
>
> BTW, I added `@private` since we need PMC's help to make an Apache Spark
> release.
>
> Can I
(if we are on
schedule).
- 2.4.4 at the end of July
- 2.3.4 at the end of August (since 2.3.0 was released at the end of
February 2018)
- 3.0.0 (possibly September?)
- 3.1.0 (January 2020?)
Bests,
Dongjoon.
On Thu, Jul 11, 2019 at 1:30 PM Jacek Laskowski wrote:
> Hi,
>
> Thanks Dong
Additionally, one more correctness patch landed yesterday.
- SPARK-28015 Check stringToDate() consumes entire input for the yyyy
and yyyy-[m]m formats
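The shape of that fix can be sketched in Python (an illustration of the
lenient-vs-strict parsing difference, not Spark's actual Scala code): a
lenient parser stops at the first non-matching character and silently ignores
trailing input, while the corrected behavior requires the entire input to
match.

```python
import re
from datetime import date

def lenient_to_date(s: str):
    # Pre-fix behavior (sketch): match a yyyy or yyyy-[m]m prefix and
    # silently ignore whatever trails after it.
    m = re.match(r"(\d{4})(?:-(\d{1,2}))?", s)
    if not m:
        return None
    return date(int(m.group(1)), int(m.group(2) or 1), 1)

def strict_to_date(s: str):
    # Post-fix behavior (sketch): the pattern must consume the entire
    # input, so trailing garbage makes the string invalid.
    m = re.fullmatch(r"(\d{4})(?:-(\d{1,2}))?", s)
    if not m:
        return None
    return date(int(m.group(1)), int(m.group(2) or 1), 1)
```

With this sketch, `lenient_to_date("2019-07garbage")` still produces a date,
while `strict_to_date` correctly rejects it.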
Bests,
Dongjoon.
On Tue, Jul 9, 2019 at 10:11 AM Dongjoon Hyun
wrote:
> Thank you for the reply, Sean. Sure. 2.4.x should be a
n before 3.0, but could. Usually maintenance
> releases happen 3-4 months apart and the last one was 2 months ago. If
> these are significant issues, sure. It'll probably be August before
> it's out anyway.
>
> On Tue, Jul 9, 2019 at 11:15 AM Dongjoon Hyun
> wrote:
> >
> > Hi,
Hi, All.
Spark 2.4.3 was released two months ago (8th May).
As of today (9th July), there are 45 fixes in `branch-2.4` including the
following correctness or blocker issues.
- SPARK-26038 Decimal toScalaBigInt/toJavaBigInteger not work for
decimals not fitting in long
- SPARK-26045
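The SPARK-26038 class of bug can be illustrated with a Python sketch (my
reconstruction of the symptom; the real code is Scala): routing the value
through a signed 64-bit long silently wraps for decimals that don't fit in a
long.

```python
from decimal import Decimal

LONG_MIN, LONG_MAX = -(2**63), 2**63 - 1

def to_big_int_buggy(d: Decimal) -> int:
    # Sketch of the broken path: the value is squeezed through a signed
    # 64-bit long, wrapping around for out-of-range decimals.
    n = int(d)
    return ((n - LONG_MIN) % 2**64) + LONG_MIN

def to_big_int_fixed(d: Decimal) -> int:
    # Sketch of the fix: keep arbitrary precision end to end.
    return int(d)
```

For `Decimal("12345678901234567890")`, which exceeds 2^63 - 1, the buggy path
yields a wrapped negative number instead of the correct big integer; values
that fit in a long are unaffected, which is why the bug is easy to miss.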
Thank you, Hyukjin !
On Sun, Jun 16, 2019 at 4:12 PM Hyukjin Kwon wrote:
> Labels look good and useful.
>
> On Sat, 15 Jun 2019, 02:36 Dongjoon Hyun, wrote:
>
>> Now, you can see the exposed component labels (ordered by the number of
>> PRs) here and click
Now, you can see the exposed component labels (ordered by the number of
PRs) here and click the component to search.
https://github.com/apache/spark/labels?sort=count-desc
Dongjoon.
On Fri, Jun 14, 2019 at 1:15 AM Dongjoon Hyun
wrote:
> Hi, All.
>
> JIRA and PR is ready fo
Hi, All.
The JIRA and PR are ready for review.
https://issues.apache.org/jira/browse/SPARK-28051 (Exposing JIRA issue
component types at GitHub PRs)
https://github.com/apache/spark/pull/24871
Bests,
Dongjoon.
On Thu, Jun 13, 2019 at 10:48 AM Dongjoon Hyun
wrote:
> Thank you for the feedba
y be updated later: so keeping them in sync may be
> an extra effort..
>
> On Thu, 13 Jun 2019, 08:09 Reynold Xin, wrote:
>
>> Seems like a good idea. Can we test this with a component first?
>>
>> On Thu, Jun 13, 2019 at 6:17 AM Dongjoon Hyun
>> wrote:
>>
Hi, All.
Since we use both Apache JIRA and GitHub actively for Apache Spark
contributions, we consequently have lots of JIRAs and PRs. One specific
thing I've been longing to see is the `Jira Issue Type` in GitHub.
How about exposing JIRA issue types on GitHub PRs as GitHub `Labels`? There
are two
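The core of such a sync could be sketched as a simple mapping (purely
illustrative: the issue-type set and label names below are my assumptions,
not the actual design adopted for the Spark repository):

```python
# Hypothetical mapping from JIRA issue types to GitHub label names.
JIRA_TYPE_TO_LABEL = {
    "Bug": "BUG",
    "Improvement": "IMPROVEMENT",
    "New Feature": "FEATURE",
    "Documentation": "DOCS",
    "Test": "TEST",
}

def label_for(issue_type: str) -> str:
    # Unknown issue types fall back to a generic label so every PR
    # still gets exactly one type label.
    return JIRA_TYPE_TO_LABEL.get(issue_type, "OTHER")
```

A bot could then call `label_for` on the JIRA issue linked in the PR title
and apply the result via the GitHub labels API.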
We are happy to announce the availability of Spark 2.2.3!
Apache Spark 2.2.3 is a maintenance release, based on the branch-2.2
maintenance branch of Spark. We strongly recommend that all 2.2.x users
upgrade to this stable release.
To download Spark 2.2.3, head over to the download page:
Hi, All.
The vote passes. Thanks to all who helped with this 2.2.3 release (the
final 2.2.x)!
I'll follow up later with a release announcement once everything is
published.
+1 (* = binding):
DB Tsai*
Wenchen Fan*
Dongjoon Hyun
Denny Lee
Sean Owen*
Hyukjin Kwon
John Zhuge
+0: None
-1: None
Finally, thank you all. Especially, thanks to the release manager, Wenchen!
Bests,
Dongjoon.
On Thu, Nov 8, 2018 at 11:24 AM Wenchen Fan wrote:
> + user list
>
> On Fri, Nov 9, 2018 at 2:20 AM Wenchen Fan wrote:
>
>> resend
>>
>> On Thu, Nov 8, 2018 at 11:02 PM Wenchen Fan wrote:
>>
>>>
>>>
Hi, Jerry.
There is a JIRA issue for that,
https://issues.apache.org/jira/browse/SPARK-24360 .
So far, it's in progress for Hive 3.1.0 Metastore for Apache Spark 2.5.0.
You can track that issue there.
Bests,
Dongjoon.
On Mon, Sep 17, 2018 at 7:01 PM 白也诗无敌 <445484...@qq.com> wrote:
> Hi, guys
You may hit SPARK-23355 (convertMetastore should not ignore table properties).
Since it's a known Spark issue for all Hive tables (Parquet/ORC), could you
check that too?
Bests,
Dongjoon.
On 2018/03/28 01:00:55, Dongjoon Hyun <dongj...@apache.org> wrote:
> Hi, Eric.
>
> Fo
Hi, Eric.
For me, Spark 2.3 works correctly, as shown below. Could you give us a
reproducible example?
```
scala> sql("set spark.sql.orc.impl=native")
scala> sql("set spark.sql.orc.compression.codec=zlib")
res1: org.apache.spark.sql.DataFrame = [key: string, value: string]
scala>
> Hi
>
> Thanks for this work.
>
> Will this affect both:
> 1) spark.read.format("orc").load("...")
> 2) spark.sql("select ... from my_orc_table_in_hive")
>
> ?
>
>
> Le 10 janv. 2018 à 20:14, Dongjoon Hyun écrivait :
> > Hi, All.
> >
> >
Hi, All.
Vectorized ORC Reader is now supported in Apache Spark 2.3.
https://issues.apache.org/jira/browse/SPARK-16060
It has been a long journey. From now on, Spark can read ORC files faster
with no loss of features.
Thank you for all your support, especially Wenchen Fan.
It's done by two
Hi, All.
Today, Apache Spark has started using Apache ORC 1.4 as a `native` ORC
implementation.
SPARK-20728 Make OrcFileFormat configurable between `sql/hive` and
`sql/core`.
-
https://github.com/apache/spark/commit/326f1d6728a7734c228d8bfaa69442a1c7b92e9b
Thank you so much for all your support.
Did you follow the guide in `IDE Setup` -> `IntelliJ` section of
http://spark.apache.org/developer-tools.html ?
Bests,
Dongjoon.
On Wed, Jun 28, 2017 at 5:13 PM, satyajit vegesna <
satyajit.apas...@gmail.com> wrote:
> Hi All,
>
> When i try to build source code of apache spark code from
>
+dev
I forgot to add @user.
Dongjoon.
-- Forwarded message -
From: Dongjoon Hyun <dongj...@apache.org>
Date: Thu, Dec 8, 2016 at 16:00
Subject: Question about SPARK-11374 (skip.header.line.count)
To: <d...@spark.apache.org>
Hi, All.
Could you give me
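For context, `skip.header.line.count` is a Hive table property; its intended
effect can be sketched as follows (an illustration of the property's meaning,
not Spark or Hive code):

```python
def rows_without_header(lines, skip_count: int):
    # skip.header.line.count=N tells the reader to treat the first N
    # lines of each file as header and exclude them from query results.
    return lines[skip_count:]
```

For example, `rows_without_header(["id,name", "1,a", "2,b"], 1)` yields only
the two data rows, which is what SPARK-11374 asks Spark to honor when reading
such Hive tables.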