Congrats!
On Aug 8, 2017 9:31 PM, "Minho Kim" wrote:
> Congrats, Hyukjin and Sameer!!
>
> 2017-08-09 9:55 GMT+09:00 Sandeep Joshi :
>
>> Congratulations Hyukjin and Sameer !
>>
>> On 7 Aug 2017 9:23 p.m., "Matei Zaharia" wrote:
>>
>>> Hi everyone,
>>>
>>> The Spark PMC recently voted to add Hyu
@Reynold, no, I don't use the HiveCatalog -- I'm using a custom
implementation of ExternalCatalog instead.
On Thu, Aug 10, 2017 at 3:34 PM, Dong Joon Hyun wrote:
Thank you, Andrew and Reynold.
Yes, it will eventually reduce the old Hive dependency, at least for the ORC code.
And Spark without `-Phive` will be able to handle ORC like Parquet.
This is one milestone for `Feature parity for ORC with Parquet (SPARK-20901)`.
Bests,
Dongjoon
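A minimal sketch of what "ORC like Parquet" means at the API level: the DataFrame reader/writer treats `orc` as a built-in format with the same call shape as `parquet`. This is my own illustration, not code from the thread; the object name, path, and data are hypothetical, and it assumes a Spark build where the ORC source is available without `-Phive`.

```scala
import org.apache.spark.sql.SparkSession

// Sketch: round-trip a DataFrame through ORC with the same API shape as Parquet.
// Assumes a local Spark build where ORC works without the -Phive profile.
object OrcLikeParquetSketch {
  def roundTrip(path: String): Long = {
    val spark = SparkSession.builder()
      .master("local[1]")
      .appName("orc-like-parquet")
      .getOrCreate()
    import spark.implicits._
    try {
      val df = Seq((1, "a"), (2, "b")).toDF("id", "name")
      df.write.mode("overwrite").orc(path) // same call shape as .parquet(path)
      spark.read.orc(path).count()         // same call shape as spark.read.parquet(path)
    } finally {
      spark.stop()
    }
  }
}
```

Later Spark releases also exposed `spark.sql.orc.impl` to choose between the native (Apache ORC) and Hive-based readers.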
From: Reynold Xin
Date: Thursday, August
Do you not use the catalog?
On Thu, Aug 10, 2017 at 3:22 PM, Andrew Ash wrote:
I would support moving ORC from sql/hive -> sql/core because it brings me
one step closer to eliminating Hive from my Spark distribution by removing
-Phive at build time.
On Thu, Aug 10, 2017 at 9:48 AM, Dong Joon Hyun wrote:
Thank you again for coming and reviewing this PR.
So far, we discussed the following:
1. `Why are we adding this to core? Why not just the hive module?` (@rxin)
- The `sql/core` module gives more benefit than `sql/hive`.
- The Apache ORC library (`no-hive` version) is a general and reasonably small
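For context on the `no-hive` ORC library mentioned above: the Apache ORC project publishes Hive-free artifacts under a `nohive` classifier. A dependency declaration for them might look like the following; this is my own illustration, not from the thread, and the version number is hypothetical.

```xml
<!-- Illustrative only: coordinates assumed, version hypothetical -->
<dependency>
  <groupId>org.apache.orc</groupId>
  <artifactId>orc-core</artifactId>
  <version>1.4.0</version>
  <classifier>nohive</classifier>
</dependency>
```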
As I remember, using a Spark 2.1 driver to communicate with Spark 2.2
executors will throw some RPC exceptions (I don't remember the details of
the exception).
On Thu, Aug 10, 2017 at 4:23 PM, Ted Yu wrote:
Hi,
Has anyone used a Spark 2.1.x client with a Spark 2.2.0 cluster?
If so, is there any compatibility issue observed?
Thanks