[ https://issues.apache.org/jira/browse/SPARK-36610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17409328#comment-17409328 ]
Apache Spark commented on SPARK-36610:
--------------------------------------

User 'itholic' has created a pull request for this issue:
https://github.com/apache/spark/pull/33907

> Add `thousands` argument to `ps.read_csv`.
> ------------------------------------------
>
>                 Key: SPARK-36610
>                 URL: https://issues.apache.org/jira/browse/SPARK-36610
>             Project: Spark
>          Issue Type: Sub-task
>          Components: PySpark
>    Affects Versions: 3.2.0
>            Reporter: Haejoon Lee
>            Priority: Major
>
> When reading a csv file in pandas, pandas automatically detects the thousands
> separator if the `thousands` argument is specified.
> {code:python}
> >>> pd.read_csv(path, sep=";")
>     name  age        job      money
> 0  Jorge   30  Developer  1,000,000
> 1    Bob   32  Developer    1000000
> >>> pd.read_csv(path, sep=";", thousands=",")
>     name  age        job    money
> 0  Jorge   30  Developer  1000000
> 1    Bob   32  Developer  1000000
> {code}
> However, pandas-on-Spark doesn't support it.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
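The pandas behavior the issue asks pandas-on-Spark to mirror can be reproduced with a small self-contained sketch. This uses plain pandas only; whether `ps.read_csv` accepts the same `thousands=` keyword depends on the linked PR, so the pandas-on-Spark call is not shown.

```python
import io

import pandas as pd

# Sample data matching the issue's example: one row uses a thousands
# separator in the `money` column, the other does not.
csv_data = (
    "name;age;job;money\n"
    "Jorge;30;Developer;1,000,000\n"
    "Bob;32;Developer;1000000\n"
)

# Without `thousands`, "1,000,000" cannot be parsed as a number, so the
# whole column is left as strings (dtype object).
raw = pd.read_csv(io.StringIO(csv_data), sep=";")
print(raw["money"].dtype)  # object

# With thousands=",", the separator is stripped during parsing and the
# column comes back as integers.
parsed = pd.read_csv(io.StringIO(csv_data), sep=";", thousands=",")
print(parsed["money"].dtype)    # int64
print(parsed["money"].tolist())  # [1000000, 1000000]
```

Note that `sep=";"` keeps the commas inside `1,000,000` from being mistaken for field delimiters, which is why the issue's example uses a semicolon-separated file.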