Hi Guha,

Thanks for your quick response. Options a and b are already in our table.
For option b we hit the same problem again: we don't know which column is
the date column.
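
One thing that might help with not knowing the date column up front: ask
Oracle itself. Reading just the target table's schema over JDBC tells you
which columns are DATE/TIMESTAMP, and the matching varchar columns in the
DataFrame can be cast before writing. A rough, untested sketch (the JDBC
URL, credentials, table names and the yyyy-MM-dd pattern below are only
placeholders for illustration):

import java.util.Properties
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, to_date, to_timestamp}
import org.apache.spark.sql.types.{DateType, TimestampType}

val spark = SparkSession.builder().enableHiveSupport().getOrCreate()

// Placeholder connection details: replace with your own.
val jdbcUrl = "jdbc:oracle:thin:@//dbhost:1521/ORCL"
val props = new Properties()
props.setProperty("user", "scott")
props.setProperty("password", "tiger")
props.setProperty("driver", "oracle.jdbc.OracleDriver")

// hiveTable/oracleTable are assumed to come from your control table at run time.
def copyTable(hiveTable: String, oracleTable: String): Unit = {
  val df = spark.table(hiveTable)

  // Read zero rows from Oracle just to learn the target column types.
  val targetSchema = spark.read
    .jdbc(jdbcUrl, s"(SELECT * FROM $oracleTable WHERE 1 = 0) tmp", props)
    .schema

  // Map lower-cased Hive column names back to their original spelling,
  // since Oracle reports column names in upper case.
  val hiveCols = df.columns.map(c => c.toLowerCase -> c).toMap

  // Cast every varchar column whose Oracle counterpart is a date/timestamp.
  val converted = targetSchema.fields.foldLeft(df) { (acc, field) =>
    (field.dataType, hiveCols.get(field.name.toLowerCase)) match {
      case (DateType, Some(c)) =>
        // Assumes the varchar values look like "2018-03-18"; adjust the pattern.
        acc.withColumn(c, to_date(col(c), "yyyy-MM-dd"))
      case (TimestampType, Some(c)) =>
        acc.withColumn(c, to_timestamp(col(c), "yyyy-MM-dd HH:mm:ss"))
      case _ => acc
    }
  }

  converted.write.mode("append").jdbc(jdbcUrl, oracleTable, props)
}

The Oracle JDBC driver often reports DATE columns as TIMESTAMP, which is why
the sketch handles both types; it is only meant to show the schema-driven
approach, not a finished job.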


Thanks,
-G

On Sun, Mar 18, 2018 at 9:36 PM, Deepak Sharma <deepakmc...@gmail.com>
wrote:

> The other approach would be to write to a temp table and then merge the
> data, but this may be an expensive solution.
>
> Thanks
> Deepak
>
> On Mon, Mar 19, 2018, 08:04 Gurusamy Thirupathy <thirug...@gmail.com>
> wrote:
>
>> Hi,
>>
>> I am trying to read data from Hive as a DataFrame and then write the DF
>> into an Oracle database. In this case the date field/column in Hive has
>> type Varchar(20), but the corresponding column type in Oracle is Date.
>> When reading from Hive, the Hive table names are decided dynamically
>> (read from another table) based on some job condition (e.g. Job1). There
>> are multiple tables like this, so the column and table names are known
>> only at run time, and I can't do the type conversion explicitly when
>> reading from Hive.
>>
>> Is there any utility/API available in Spark to handle this conversion?
>>
>>
>> Thanks,
>> Guru
>>
>
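
For Deepak's staging-table suggestion above, a rough outline of what that
could look like (the connection settings and table names are placeholders,
and the per-table MERGE statement is assumed to be generated elsewhere,
e.g. from Oracle's data dictionary, since it still needs to know which
columns require TO_DATE):

import java.sql.DriverManager
import java.util.Properties
import org.apache.spark.sql.DataFrame

// Placeholder connection details: replace with your own.
val jdbcUrl = "jdbc:oracle:thin:@//dbhost:1521/ORCL"
val props = new Properties()
props.setProperty("user", "scott")
props.setProperty("password", "tiger")
props.setProperty("driver", "oracle.jdbc.OracleDriver")

// 1) Land the DataFrame unchanged (dates still varchar) in a staging table,
// 2) then run an Oracle-side MERGE that applies TO_DATE into the real table.
def stageAndMerge(df: DataFrame, stagingTable: String, mergeSql: String): Unit = {
  df.write.mode("overwrite").jdbc(jdbcUrl, stagingTable, props)

  val conn = DriverManager.getConnection(jdbcUrl, props)
  try {
    conn.createStatement().execute(mergeSql)
  } finally {
    conn.close()
  }
}

As Deepak notes, this costs an extra write plus a server-side merge, so it
is likely the more expensive option compared with casting in Spark.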


-- 
Thanks,
Guru
