Hi all,
Are there any plans to backport the recent (2.4) updates to the Spark-Kafka
adapter for use with Spark v2.3, or will the updates just be for v2.4+?
Thanks,
Basil
Sean, thanks for checking! The MLlib blockers were resolved today by
reverting breaking API changes. We still have some documentation work to
wrap up. -Xiangrui
+Weichen Xu
On Fri, Sep 21, 2018 at 6:54 AM Sean Owen wrote:
> Yes, documentation for 2.4 has to be done before the 2.4 release. Or
+1 (non-binding)
Bumping our build to Spark 2.3.2 RC6, Avro 1.8.2, and Parquet 1.8.3 works
for us, both against 2.3.2 RC6 and against older Spark versions.
https://github.com/bigdatagenomics/adam/pull/2055
michael
On Thu, Sep 20, 2018 at 7:09 PM, Ryan Blue
wrote:
> Changing my vote to +1
Yes, documentation for 2.4 has to be done before the 2.4 release. Or
else it's not for 2.4. Likewise, auditing that must happen before 2.4 has
to happen before 2.4 is released.
"Foo for 2.4" as a Blocker for 2.4 needs to be resolved for 2.4, by
definition. Otherwise it's not a Blocker, not for 2.4.
I
Hi, Sean

After a brief investigation, I found there are already some tickets/PRs
about this issue; I just didn't know that.
https://issues.apache.org/jira/browse/SPARK-20156
https://github.com/apache/spark/pull/17527
https://github.com/apache/spark/pull/17655
I have carefully read the
Hi, Raynold

Sorry for the slow response, and thanks for your suggestion. I'd like to
document this in the API docs for the SQL built-in functions. By the way,
this is a real case we hit in production: the Turkish data comes from other
systems through ETL. As you mentioned, we use UDFs to avoid
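For anyone unfamiliar with the Turkish-casing issue mentioned above, here is a minimal plain-JDK sketch (not Spark code) of why ASCII 'i'/'I' do not round-trip through case conversion under the Turkish locale:

```java
import java.util.Locale;

// Minimal sketch of the Turkish-locale casing behavior: under tr,
// 'i' upper-cases to dotted capital İ (U+0130) and 'I' lower-cases
// to dotless ı (U+0131), unlike the root locale.
public class TurkishCaseDemo {
    public static void main(String[] args) {
        Locale tr = new Locale("tr");
        System.out.println("i".toUpperCase(Locale.ROOT)); // "I"
        System.out.println("i".toUpperCase(tr));          // "İ" (U+0130)
        System.out.println("I".toLowerCase(tr));          // "ı" (U+0131)
    }
}
```

This is why a locale-sensitive upper/lower in SQL functions can silently change equality and join semantics on Turkish text.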
Hi Wenchen,
Thank you for the clarification. I agree that this is more a bug fix than
an improvement. I apologize for the error. Please consider this a design
doc.
Thanks,
Marco
Il giorno ven 21 set 2018 alle ore 12:04 Wenchen Fan
ha scritto:
> Hi Marco,
>
> Thanks for sending it!
Hi Marco,
Thanks for sending it! The problem is clearly explained in this email, but
I would not treat it as a SPIP. It proposes a fix for a very tricky bug,
and a SPIP is usually for new features. Others, please correct me if I'm
wrong.
Thanks,
Wenchen
On Fri, Sep 21, 2018 at 5:47 PM Marco
Hi all,
I am writing to discuss the issue reported in SPARK-25454; following
Wenchen's suggestion, I have prepared a design doc for it.
The problem we are facing is that our rules for decimal operations are
taken from Hive and MS SQL Server, and they explicitly
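As a sketch of the kind of rule in question, here is the Hive/SQL Server-style result type for decimal multiplication, with the precision cap and scale adjustment Spark applies when precision loss is allowed; the class and method names are illustrative, not Spark's actual API:

```java
// Hedged sketch of the Hive/SQL Server-style result type for the product
// of decimal(p1,s1) * decimal(p2,s2): precision is capped at 38, and when
// the cap is hit the scale is reduced but kept >= min(scale, 6).
public class DecimalMultiplyType {
    static final int MAX_PRECISION = 38;
    static final int MIN_ADJUSTED_SCALE = 6;

    // Returns {precision, scale} of the multiplication result type.
    public static int[] multiplyResultType(int p1, int s1, int p2, int s2) {
        int precision = p1 + p2 + 1; // exact product needs p1 + p2 + 1 digits
        int scale = s1 + s2;         // and scale s1 + s2
        if (precision <= MAX_PRECISION) {
            return new int[] {precision, scale};
        }
        // Cap precision: keep all integral digits, shrink the scale,
        // but never below min(scale, 6).
        int intDigits = precision - scale;
        int minScale = Math.min(scale, MIN_ADJUSTED_SCALE);
        int adjusted = Math.max(MAX_PRECISION - intDigits, minScale);
        return new int[] {MAX_PRECISION, adjusted};
    }

    public static void main(String[] args) {
        int[] t = multiplyResultType(38, 18, 38, 18);
        System.out.println("decimal(" + t[0] + "," + t[1] + ")"); // decimal(38,6)
    }
}
```

For example, multiplying two decimal(38,18) values would need decimal(77,36), so the scale is cut down to the minimum of 6, which is exactly the kind of silent precision loss the design doc is about.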
I think the point is that we actually need to do these validations before
completing the release...
From: Wenchen Fan
Sent: Friday, September 21, 2018 12:02 AM
To: Sean Owen
Cc: Spark dev list
Subject: Re: 2.4.0 Blockers, Critical, etc
Sean thanks for checking them!
Got it. I just removed 3.0.0 wherever multiple versions were listed, except
for SPARK-25431, which keeps the (2.4.1, 3.0.0) pair since the other
version targets a bugfix release.
https://issues.apache.org/jira/issues/?jql=project%20%3D%20SPARK%20AND%20fixVersion%20%3D%203.0.0
On Fri, Sep 21, 2018 at 4:05 PM, Wenchen
Thanks! Yes, if both versions are specified, we can just remove 3.0.0.
On Fri, Sep 21, 2018 at 1:38 PM Jungtaek Lim wrote:
> OK got it. Thanks for clarifying.
>
> I can help check and modify the versions, but I'm not sure about the case
> where both versions are specified, like "2.4.0/3.0.0". Removing 3.0.0
Sean thanks for checking them!
I made one pass and re-targeted/closed some of them. Most of the rest are
documentation and auditing tasks; do we need to block the release on them?
On Fri, Sep 21, 2018 at 6:01 AM Sean Owen wrote:
> Because we're into 2.4 release candidates, I thought I'd look at
>
I'm going to be doing this again tomorrow, Friday the 21st, at 9am -
https://www.youtube.com/watch?v=xb2FsHaozVQ / http://twitch.tv/holdenkarau
:) As always, if there's anything in particular you want me to look at,
send me a message. https://github.com/apache/spark/pull/22275 (Arrow
out-of-order