[ https://issues.apache.org/jira/browse/SPARK-15427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
deng updated SPARK-15427:
-------------------------
    Description: 
I use Spark SQL to load data from Apache Phoenix:

    SQLContext sqlContext = new SQLContext(sc);
    Map<String, String> options = new HashMap<>();
    options.put("driver", driver);
    options.put("url", PhoenixUtil.p.getProperty("phoenixURL"));
    options.put("dbtable", "(select \"value\",\"name\" from \"user\")");
    DataFrame jdbcDF = sqlContext.load("jdbc", options);

It always throws an exception such as "can't find field VALUE". I tracked the code and found that Spark resolves the schema with:

    val rs = conn.prepareStatement(s"SELECT * FROM $table WHERE 1=0").executeQuery()

By that point the field names have already been uppercased ("value" becomes VALUE), so the lookup always fails with "can't find field VALUE". The JDBC data source does not handle the case where data is loaded from a source whose field names are case sensitive.

> Spark SQL doesn't support case-sensitive field names when loading data from Phoenix
> -----------------------------------------------------------------------------------
>
>                 Key: SPARK-15427
>                 URL: https://issues.apache.org/jira/browse/SPARK-15427
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core, SQL
>    Affects Versions: 1.5.0
>            Reporter: deng
>              Labels: easyfix, features, newbie
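One possible workaround, given the behavior described above: alias each case-sensitive (quoted) Phoenix column to an unquoted name inside the `dbtable` subquery, so that the uppercase names Spark's JDBC layer resolves actually exist. This is a sketch, not an official fix; `buildDbTable` is a hypothetical helper that only constructs the SQL string, it is not part of Spark or Phoenix.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class PhoenixDbTable {
    // Wrap each case-sensitive (quoted) Phoenix column in an unquoted alias,
    // e.g. "value" -> "value" AS VALUE, so that the uppercase identifier
    // Spark resolves from SELECT * FROM $table WHERE 1=0 matches a real column.
    static String buildDbTable(String table, List<String> columns) {
        String select = columns.stream()
                .map(c -> "\"" + c + "\" AS " + c.toUpperCase())
                .collect(Collectors.joining(", "));
        return "(SELECT " + select + " FROM \"" + table + "\")";
    }

    public static void main(String[] args) {
        String dbtable = buildDbTable("user", Arrays.asList("value", "name"));
        System.out.println(dbtable);
        // (SELECT "value" AS VALUE, "name" AS NAME FROM "user")
    }
}
```

The resulting string would then be passed as the `dbtable` option in place of the raw subquery shown in the report.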
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)