Have you taken a look at the TeradataDBInputFormat? Spark is compatible
with arbitrary Hadoop input formats, so this might work for you:
http://developer.teradata.com/extensibility/articles/hadoop-mapreduce-connector-to-teradata-edw
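
On the Spark side the wiring is just the generic Hadoop RDD API. A rough,
untested sketch in Scala -- the connector's package name, configuration keys,
and key/value classes below are placeholders, so take the real ones from the
article above:

  import org.apache.hadoop.io.{LongWritable, Writable}
  import org.apache.hadoop.mapred.JobConf
  import org.apache.spark.{SparkConf, SparkContext}
  // Placeholder import: use whatever package the connector jar actually ships.
  import com.teradata.hadoop.TeradataDBInputFormat

  val sc = new SparkContext(new SparkConf().setAppName("teradata-ingest"))

  // Connection settings go into the JobConf; these key names are illustrative.
  val jobConf = new JobConf(sc.hadoopConfiguration)
  jobConf.set("teradata.db.url", "jdbc:teradata://tdhost/DATABASE=mydb")
  jobConf.set("teradata.db.username", "user")
  jobConf.set("teradata.db.password", "pass")
  jobConf.set("teradata.db.input.table", "my_table")

  // Anything implementing Hadoop's InputFormat interface can be handed to
  // hadoopRDD (or newAPIHadoopRDD for the mapreduce API); the key/value
  // classes must match what the input format emits.
  val rows = sc.hadoopRDD(
    jobConf,
    classOf[TeradataDBInputFormat],
    classOf[LongWritable],     // key type depends on the format
    classOf[Writable])         // value type is the connector's record class

  println(rows.count())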

On Thu, Jan 8, 2015 at 10:53 AM, gen tang <gen.tan...@gmail.com> wrote:

> Thanks a lot for your reply.
> In fact, I need to work on almost all of the data in Teradata (~100 TB), so
> I don't think JdbcRDD is a good choice.
>
> Cheers
> Gen
>
>
> On Thu, Jan 8, 2015 at 7:39 PM, Reynold Xin <r...@databricks.com> wrote:
>
>> It depends on your use case. If the goal is to extract a small amount of
>> data out of Teradata, then you can use JdbcRDD, and soon a JDBC input
>> source based on the new Spark SQL external data source API.
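
(For reference, the JdbcRDD route looks roughly like this -- an untested
sketch; the URL, credentials, table and column names are placeholders, and
you'd need the Teradata JDBC driver on the executor classpath:)

  import java.sql.{DriverManager, ResultSet}
  import org.apache.spark.rdd.JdbcRDD
  import org.apache.spark.{SparkConf, SparkContext}

  val sc = new SparkContext(new SparkConf().setAppName("teradata-jdbc"))

  // The query must contain exactly two '?'s; JdbcRDD binds them to each
  // partition's lower/upper bound so the partitions cover disjoint ranges.
  val rows = new JdbcRDD(
    sc,
    () => {
      Class.forName("com.teradata.jdbc.TeraDriver")  // check your driver docs
      DriverManager.getConnection(
        "jdbc:teradata://tdhost/DATABASE=mydb", "user", "pass")
    },
    "SELECT id, amount FROM my_table WHERE id >= ? AND id <= ?",
    1L,        // lower bound of the partitioning column
    1000000L,  // upper bound
    10,        // number of partitions
    (rs: ResultSet) => (rs.getLong(1), rs.getDouble(2)))

  rows.take(5).foreach(println)
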
>>
>>
>>
>> On Wed, Jan 7, 2015 at 7:14 AM, gen tang <gen.tan...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I have a stupid question:
>>> Is it possible to use Spark on a Teradata data warehouse? I read some
>>> news on the internet that says yes; however, I didn't find any examples
>>> of how to do it.
>>>
>>> Thanks in advance.
>>>
>>> Cheers
>>> Gen
>>>
>>>
>>
>
