[ https://issues.apache.org/jira/browse/SPARK-37198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17438789#comment-17438789 ]

Chuck Connell commented on SPARK-37198:
---------------------------------------

There are many hints and tech tips on the Internet that say {{file://local_path}} 
already works for reading and writing local files from a Spark cluster. But in my 
testing (from Databricks) this is not true; I have never gotten it to work.

If there is already a way to read/write local files, please state the exact, 
tested method for doing so.
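
For reference, this is the kind of call the online tips describe and that I have 
been testing. It is only a sketch; the path below is hypothetical, and on a real 
cluster the file would presumably have to exist on the driver (and likely on every 
executor) for this to work at all:

{code:python}
import pyspark.pandas as ps

# Attempt to read a CSV from the local filesystem via a file:// URI
# (hypothetical path; this is the usage the online tips claim works).
psdf = ps.read_csv("file:///tmp/my_file.csv")
print(psdf.head())

# Attempt to write the result back to the local filesystem the same way.
psdf.to_csv("file:///tmp/my_file_out.csv")
{code}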

> pyspark.pandas read_csv() and to_csv() should handle local files 
> -----------------------------------------------------------------
>
>                 Key: SPARK-37198
>                 URL: https://issues.apache.org/jira/browse/SPARK-37198
>             Project: Spark
>          Issue Type: Sub-task
>          Components: PySpark
>    Affects Versions: 3.2.0
>            Reporter: Chuck Connell
>            Priority: Major
>
> Pandas programmers who move their code to Spark would like to import and 
> export text files to and from their local disk. I know there are technical 
> hurdles to this (since Spark is usually in a cluster that does not know where 
> your local computer is) but it would really help code migration. 
> For read_csv() and to_csv(), the syntax {{*file://c:/Temp/my_file.csv*}} (or 
> something like this) should import and export to the local disk on Windows. 
> Similarly for Mac and Linux. 
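
For concreteness, this is roughly the usage the description above asks for. It is 
only a sketch of the requested behavior, not something that works today; the 
Windows path is hypothetical, and the exact URI form (for example 
{{file://c:/...}} vs. {{file:///c:/...}}) is part of what would need to be settled:

{code:python}
import pyspark.pandas as ps

# Requested behavior: read a CSV straight from the local Windows disk
# (hypothetical path, using the syntax proposed in the issue description).
psdf = ps.read_csv("file://c:/Temp/my_file.csv")

# ...work with the data in Spark...

# Requested behavior: write the result back to the local Windows disk.
psdf.to_csv("file://c:/Temp/my_file_out.csv")
{code}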


