I've become accustomed to being able to use system properties to override
properties in the Hadoop Configuration objects. I just recently noticed
that when Spark creates the Hadoop Configuration in the SparkContext, it
cycles through any properties prefixed with spark.hadoop. and adds those
properties to the Hadoop Configuration (with the spark.hadoop. prefix
stripped). I don't
see this advertised anywhere in the documentation. Is this a method that is
supposed to be public to users? If so, should we add that to the
documentation?
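
For reference, here's a minimal sketch of the behavior I'm describing
(fs.example.setting is just a made-up key for illustration):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setMaster("local[*]")
      .setAppName("HadoopConfDemo")
      // Any key prefixed with spark.hadoop. appears to get copied into
      // the Hadoop Configuration with the prefix stripped.
      .set("spark.hadoop.fs.example.setting", "some-value")

    val sc = new SparkContext(conf)

    // The property shows up (minus the prefix) on the Hadoop
    // Configuration that Spark built.
    println(sc.hadoopConfiguration.get("fs.example.setting"))
    // prints: some-value

    sc.stop()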
