You can do the following with option("delimiter"):

val df = spark.read
  .option("header", false)
  .option("delimiter", "\t")
  .csv("hdfs://rhes564:9000/tmp/nw_10124772.tsv")
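
For your '\001' (Ctrl-A) case, you can pass the character as a unicode escape and then register the DataFrame as a temp view so you can query it with SQL. A minimal sketch, assuming Spark 2.x and a made-up path /tmp/data_ctrl_a.txt:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("CtrlADelimiter").getOrCreate()

// Read a file whose fields are separated by '\001' (Ctrl-A)
val df = spark.read
  .option("header", false)
  .option("delimiter", "\u0001")   // '\001' as the field separator
  .csv("/tmp/data_ctrl_a.txt")     // hypothetical path

// Expose the data to plain SQL via a temporary view
df.createOrReplaceTempView("my_data")
spark.sql("SELECT * FROM my_data").show()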

HTH

Dr Mich Talebzadeh



LinkedIn
https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com


Disclaimer: Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.



On 23 September 2016 at 07:56, Sea <261810...@qq.com> wrote:

> Hi, I want to run SQL directly on files. I see that Spark supports SQL
> like select * from csv.`/path/to/file`, but the files may not be split by
> ','. Maybe they are split by '\001'; how can I specify the delimiter?
>
> Thank you!
>
>
>
