"hdfs" is hardcoded in few places in the code which inhibits use of other file
systems
--------------------------------------------------------------------------------------
Key: HIVE-1444
URL: https://issues.apache.org/jira/browse/HIVE-1444
Project: Hadoop Hive
Issue Type: Bug
Components: Query Processor
Affects Versions: 0.5.0, 0.4.1, 0.4.0, 0.3.0, 0.3.1, 0.4.2, 0.5.1, 0.6.0,
0.7.0
Environment: any
Reporter: Yuliya Feldman
Priority: Minor
In quite a few places "hdfs" is hardcoded, which is fine for the majority of
cases, but not when the file system is not really hdfs but s3 or any other
file system.
The place where it really breaks is
ql/src/java/org/apache/hadoop/hive/ql/parse/LoadSemanticAnalyzer.java, in the
method: private void applyConstraints(URI fromURI, URI toURI, Tree ast,
boolean isLocal)
The first few lines are a check of the file system:
if (!fromURI.getScheme().equals("file")
&& !fromURI.getScheme().equals("hdfs")) {
throw new SemanticException(ErrorMsg.INVALID_PATH.getMsg(ast,
"only \"file\" or \"hdfs\" file systems accepted"));
}
"hdfs" is hardcoded.
I don't think this check is needed at all, since whether the file system is
local or not is checked later on anyway; and as for a non-local file system,
a bad URI would either cause problems or be treated as local before the
applyConstraints method is even reached.
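As a rough illustration only (not an actual patch), a scheme-agnostic check
could simply ask Hadoop whether any FileSystem implementation is registered
for the URI's scheme; the SchemeAgnosticCheck class and validate method below
are hypothetical names used just for this sketch:

    import java.io.IOException;
    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class SchemeAgnosticCheck {
      // Accept any URI for which a FileSystem implementation is configured
      // (hdfs, s3, file, ...) instead of whitelisting scheme names.
      public static void validate(URI fromURI, Configuration conf)
          throws IOException {
        // FileSystem.get() throws an IOException when no implementation is
        // registered for the URI's scheme, so nothing is hardcoded here.
        FileSystem.get(fromURI, conf);
      }
    }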