Hi All,
In the above scenario, if the field delimiter is Hive's default then Spark
is able to parse the data as expected; hence I believe this is a bug.
Regards,
Shiva Achari
On Tue, Apr 5, 2016 at 8:15 PM, Shiva Achari <shiva.ach...@gmail.com> wrote:
Hi,
I have created a Hive external table, stored as textfile, partitioned by
event_date (DATE).
How do we specify a particular CSV format (e.g. a non-default field
delimiter) when reading a Hive table in Spark?
The environment is:
1. Spark 1.5.0 - cdh5.5.1, using Scala version 2.10.4 (Java
HotSpot(TM) 64-Bit)
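For reference, a minimal sketch of the setup being described: an external
textfile table with a non-default field delimiter, read back through
Spark's HiveContext. This assumes a Spark 1.5 shell where `sc` already
exists; the table name, columns, delimiter ('|'), and HDFS location are
all hypothetical, not taken from the original message.

```scala
// Sketch only: requires a Spark 1.5.x build with Hive support.
// Table/column names, the '|' delimiter, and the location are assumptions.
import org.apache.spark.sql.hive.HiveContext

val hiveCtx = new HiveContext(sc) // sc: the SparkContext provided by spark-shell

// External text table with a non-default field delimiter,
// partitioned by event_date, as in the scenario above.
hiveCtx.sql("""
  CREATE EXTERNAL TABLE IF NOT EXISTS events (id INT, name STRING)
  PARTITIONED BY (event_date DATE)
  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
  STORED AS TEXTFILE
  LOCATION '/data/events'
""")

// Reading through HiveContext should pick up the table's SerDe
// properties, including the custom field delimiter.
val df = hiveCtx.table("events")
df.show()
```

With Hive's default delimiter (Ctrl-A, '\001') the same read reportedly
parses correctly, which is what the reply above points to as the bug.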