[ https://issues.apache.org/jira/browse/SPARK-10528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14739302#comment-14739302 ]
Marcelo Vanzin commented on SPARK-10528:
----------------------------------------

Thanks! That is really a bug in Hive then:

{code}
FsPermission writableHDFSDirPermission = new FsPermission((short)00733);
FileSystem fs = rootHDFSDirPath.getFileSystem(conf);
if (!fs.exists(rootHDFSDirPath)) {
  Utilities.createDirsWithPermission(conf, rootHDFSDirPath, writableHDFSDirPermission, true);
}
FsPermission currentHDFSDirPermission = fs.getFileStatus(rootHDFSDirPath).getPermission();
LOG.debug("HDFS root scratch dir: " + rootHDFSDirPath + ", permission: " + currentHDFSDirPermission);
// If the root HDFS scratch dir already exists, make sure it is writeable.
if (!((currentHDFSDirPermission.toShort() & writableHDFSDirPermission
    .toShort()) == writableHDFSDirPermission.toShort())) {
{code}

Basically, it assumes that on all FileSystems the execute bit needs to be set for you to be able to read from / write to the directory. That doesn't seem to be true for the Windows local fs (a small standalone sketch follows the quoted issue below).

> spark-shell throws java.lang.RuntimeException: The root scratch dir:
> /tmp/hive on HDFS should be writable.
> ----------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-10528
>                 URL: https://issues.apache.org/jira/browse/SPARK-10528
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Shell
>    Affects Versions: 1.5.0
>        Environment: Windows 7 x64
>            Reporter: Aliaksei Belablotski
>            Priority: Minor
>
> Starting spark-shell throws
> java.lang.RuntimeException: java.lang.RuntimeException: The root scratch dir:
> /tmp/hive on HDFS should be writable. Current permissions are: rw-rw-rw-
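
For illustration only (this is not Hive or Spark code), a minimal sketch of why the check fails on Windows: Hive requires the write and execute bits (octal 0733) on the scratch dir, but the Windows local fs reports /tmp/hive as rw-rw-rw- (octal 0666), so the bitmask test cannot pass even though the directory is actually usable:

{code}
// Standalone sketch of Hive's permission test, using plain octal shorts
// instead of org.apache.hadoop.fs.permission.FsPermission.
public class ScratchDirCheckSketch {
  public static void main(String[] args) {
    short required = (short) 0733; // rwx-wx-wx: what Hive insists on for /tmp/hive
    short current  = (short) 0666; // rw-rw-rw-: what the Windows local fs reports
    // Same test as in the snippet above: all required bits must be present.
    boolean writablePerHive = (current & required) == required;
    // 0666 & 0733 == 0622, so this prints "fails Hive's check"
    System.out.println(writablePerHive ? "passes Hive's check" : "fails Hive's check");
  }
}
{code}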