[ https://issues.apache.org/jira/browse/SPARK-14663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Marcelo Vanzin resolved SPARK-14663.
------------------------------------
    Resolution: Not A Problem

Those are Java property files, and you can use unicode escapes for this. Not the prettiest thing, but it works.

{code}
value=\u000A
{code}

{{value}} will be the newline character.

> Parse escape sequences in spark-defaults.conf
> ---------------------------------------------
>
>                 Key: SPARK-14663
>                 URL: https://issues.apache.org/jira/browse/SPARK-14663
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.6.1
>            Reporter: Sergey
>            Priority: Minor
>
> I am trying to specify spark.hadoop.textinputformat.record.delimiter in spark-defaults.conf, namely, to set it to "\n" (the #10 character). I know how to do it in sc.newAPIHadoopFile, but I'd like to set it in the configuration, so I can keep using sc.textFile (because it also works with zipped files).
> However, I can't find a way to accomplish it.
> I have tried:
> spark.hadoop.textinputformat.record.delimiter \n
> spark.hadoop.textinputformat.record.delimiter '\n'
> spark.hadoop.textinputformat.record.delimiter "\n"
> spark.hadoop.textinputformat.record.delimiter \\n (that's two backslashes and the letter n)
> spark.hadoop.textinputformat.record.delimiter
> (just pressing enter)
> None of them works. I checked sc._conf.getAll(), and none of them gives me the right result.
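For anyone applying the workaround above to the reporter's case, here is a minimal sketch. The exact conf entry and the PySpark read-back check are illustrative assumptions on my part, not part of the resolution; the only thing the resolution guarantees is that spark-defaults.conf is read as a Java properties file, so a \u000A escape is decoded to a real newline.

{code}
# spark-defaults.conf (hypothetical entry)
# \u000A is the properties-file unicode escape for the newline (LF, character #10)
spark.hadoop.textinputformat.record.delimiter \u000A
{code}

A quick check from PySpark, using the same private sc._conf field the reporter inspected:

{code}
from pyspark import SparkContext

sc = SparkContext()

# Read the property back from the driver's conf; repr() makes the newline visible
delim = sc._conf.get("spark.hadoop.textinputformat.record.delimiter")
print(repr(delim))  # expect '\n' if the \u000A escape was decoded
{code}

Because the escape is decoded by the properties parser before Spark ever sees the value, no quoting or backslash-doubling (as in the attempts quoted above) is needed.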