Thanks Wenchen!
On Wed, Dec 18, 2019 at 7:25 PM Wenchen Fan wrote:
> Hi Aakash,
>
> You can try the latest DS v2 with the 3.0 preview; the API is in quite a
> stable shape now. With the latest API, a Writer is created from a
> Table, and the Table has the partitioning information.
>
> Thanks
+1, all tests pass
On Thu, Dec 19, 2019 at 7:18 AM Takeshi Yamamuro
wrote:
> Thanks, Yuming!
>
> I checked the links and the prepared binaries.
> Also, I ran tests with -Pyarn -Phadoop-2.7 -Phive -Phive-thriftserver
> -Pmesos -Pkubernetes -Psparkr
> on java version "1.8.0_181".
> All the things above look fine.
Hi Aakash,
You can try the latest DS v2 with the 3.0 preview; the API is in quite a
stable shape now. With the latest API, a Writer is created from a
Table, and the Table has the partitioning information.
Thanks,
Wenchen
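For readers new to DS v2, the key point in Wenchen's note (a Writer is created from a Table, so it naturally sees the Table's partitioning) can be sketched as follows. This is a hedged, simplified model only: the names below (Table, Writer, SimpleTable, newWriter, partitioning) are illustrative stand-ins, not the actual org.apache.spark.sql.connector interfaces.

```scala
// Simplified model of the DSv2 shape described above -- not the real Spark API.

trait Writer {
  def write(row: Map[String, Any]): String
}

trait Table {
  def name: String
  def partitioning: Seq[String] // columns the table is partitioned by
  def newWriter(): Writer       // the writer is created *from* the table
}

// Because the writer is built by the table, it automatically has access to
// the table's partitioning information and can route rows accordingly.
final class SimpleTable(val name: String, val partitioning: Seq[String]) extends Table {
  def newWriter(): Writer = new Writer {
    def write(row: Map[String, Any]): String = {
      val partitionPath = partitioning.map(c => s"$c=${row(c)}").mkString("/")
      s"$name/$partitionPath" // the partition directory this row would land in
    }
  }
}
```

Usage, under the same illustrative names: `new SimpleTable("events", Seq("date")).newWriter().write(Map("date" -> "2019-12-18", "id" -> 1))` yields `"events/date=2019-12-18"`, showing how partitioning flows from table to writer without any extra plumbing.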
On Wed, Dec 18, 2019 at 3:22 AM aakash aakash
wrote:
> Thanks Andrew!
Thanks, Yuming!
I checked the links and the prepared binaries.
Also, I ran tests with -Pyarn -Phadoop-2.7 -Phive -Phive-thriftserver
-Pmesos -Pkubernetes -Psparkr
on java version "1.8.0_181".
All the things above look fine.
Bests,
Takeshi
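For reference, the profile list above corresponds to a Maven invocation along these lines (assuming the standard ./build/mvn wrapper shipped in the Spark source tree; adjust the path to your checkout):

```shell
# Run the test suite with the same profiles Takeshi listed.
./build/mvn -Pyarn -Phadoop-2.7 -Phive -Phive-thriftserver \
  -Pmesos -Pkubernetes -Psparkr \
  test
```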
On Thu, Dec 19, 2019 at 6:31 AM Dongjoon Hyun
wrote:
>
+1
I also checked the signatures and docs, and built and tested with JDK
11.0.5, Hadoop 3.2, and Hive 2.3.
In addition, the newly added
`spark-3.0.0-preview2-bin-hadoop2.7-hive1.2.tgz` distribution looks correct.
Thank you Yuming and all.
Bests,
Dongjoon.
On Tue, Dec 17, 2019 at 4:11 PM Sean Owen wrote:
With the development of Spark and Hive, in the current sql/hive-thriftserver module we need to do a lot of work to resolve code conflicts for different built-in Hive versions. It's annoying and unending work under the current approach. And these issues have limited our abili