Has anyone taken a look at this issue:
https://issues.apache.org/jira/browse/HIVE-11166
I got the same exception when inserting into an HBase table.
--
View this message in context:
http://apache-spark-developers-list.1001551.n3.nabble.com/Spark-1-6-0-Hive-HBase-tp16128p16332.html
Sent from the Apache Spark
In our hive warehouse there are many tables with a lot of partitions, such as:

scala> hiveContext.sql("use db_external")
scala> val result = hiveContext.sql("show partitions et_fullorders").count
result: Long = 5879
I noticed this part of the code:
Thanks a lot, Hao, that finally solved this problem. The changes to CSVSerDe are here:
https://github.com/chutium/csv-serde/commit/22c667c003e705613c202355a8791978d790591e
Btw, ADD JAR in Spark's Hive support or the hive-thriftserver never works for us, so we
build Spark with csv-serde as a library dependency (libraryDependencies += csv-serde).
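Spelled out as a full build.sbt entry, that dependency line might look as follows. The group ID and version here are assumptions for illustration, not confirmed published coordinates -- check the repo before using:

```scala
// build.sbt (sketch) -- hypothetical coordinates for the csv-serde artifact;
// verify the actual group/version published by the fork you build against.
libraryDependencies += "com.bizo" % "csv-serde" % "1.1.2"
```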
Hi Cheng, thank you very much for helping me finally find out the secret
of this magic...
Actually we defined this external table with:
SID STRING
REQUEST_ID STRING
TIMES_DQ TIMESTAMP
TOTAL_PRICE FLOAT
...
but using DESC ext_fullorders it is only shown as:
[# col_name
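For reference, a minimal sketch of such a table definition, run through a HiveContext. The serde class name comes from the linked repo; the LOCATION path is a placeholder:

```scala
// Sketch: create the external table through CSVSerde and then inspect
// which column types the metastore reports back. Requires a live
// HiveContext; the path below is a placeholder, not a real location.
hiveContext.sql("""
  CREATE EXTERNAL TABLE ext_fullorders (
    SID STRING,
    REQUEST_ID STRING,
    TIMES_DQ TIMESTAMP,
    TOTAL_PRICE FLOAT
  )
  ROW FORMAT SERDE 'com.bizo.hive.serde.csv.CSVSerde'
  LOCATION '/path/to/ext_fullorders'
""")
hiveContext.sql("DESC ext_fullorders").collect().foreach(println)
```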
Has anyone tried to build it with hadoop.version=2.0.0-mr1-cdh4.3.0 or
hadoop.version=1.0.3-mapr-3.0.3?
See the comments in:
https://issues.apache.org/jira/browse/SPARK-3124
https://github.com/apache/spark/pull/2035
I built a Spark snapshot with hadoop.version=1.0.3-mapr-3.0.3,
and the ticket creator built
Is there any dataType auto-convert or detection or something in HiveContext?
All columns of a table are defined as STRING in the hive metastore.
One column is total_price, with values like 123.45, and this column gets
recognized as dataType Float in HiveContext...
Is this a feature or a bug? It really
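To make the question concrete, "auto-detect" would mean something like the following naive sketch (pure Scala, illustrative only -- this is not claimed to be what HiveContext actually does):

```scala
import scala.util.Try

// Naive sketch of value-based type detection: report FLOAT only if every
// sample value parses as a float, otherwise fall back to STRING.
def detectType(values: Seq[String]): String =
  if (values.nonEmpty && values.forall(v => Try(v.toFloat).isSuccess)) "FLOAT"
  else "STRING"

detectType(Seq("123.45", "67.89")) // FLOAT
detectType(Seq("123.45", "N/A"))   // STRING
```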
Oops, I tried on a managed table, and there the column types are not changed,
so it is mostly due to the serde lib CSVSerDe
(https://github.com/ogrodnek/csv-serde/blob/master/src/main/java/com/bizo/hive/serde/csv/CSVSerde.java#L123)
or maybe the CSVReader from opencsv?...
But if the columns are defined as
As far as I know, HQL queries try to find the schema info of all the tables
in the query in the hive metastore, so it is not possible to join tables from
sqlContext using hiveContext.hql.
But this should work:

hiveContext.hql("select ...").registerAsTable("a")
sqlContext.jsonFile("xxx").registerAsTable("b")

then
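The truncated "then" presumably continues with a query over both registered tables. A hedged sketch, under the assumption that both tables end up visible to the same context (table and column names here are placeholders, Spark 1.0-era API):

```scala
// Sketch: with both "a" and "b" registered as temp tables on one context,
// a join can be expressed in a single query. Column names are placeholders.
val joined = hiveContext.hql(
  "SELECT a.id, b.payload FROM a JOIN b ON a.id = b.id")
joined.collect().foreach(println)
```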
Does no one use spark-shell on the master branch?
I created a PR as a follow-up commit to SPARK-2678 and PR #1801:
https://github.com/apache/spark/pull/1861