Hi,
I'm using a clean checkout, just cloned from the master branch, and built with:
build/mvn -Phive -Phadoop-2.4 -DskipTests package
The build ends in BUILD FAILURE, due to:
[error] while compiling:
/Users/yijie/spark/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala
[error]
On Thu, May 21, 2015 at 9:37 AM, Yijie Shen henry.yijies...@gmail.com wrote:
Hi all,
I've seen the blog post on Project Tungsten here, and it sounds awesome to me!
I've also noticed there is a plan to change the code generation from
record-at-a-time evaluation to a vectorized one, which interests me most.
What's the status of vectorized evaluation? Is this an internal effort of
Master web UI at
http://master url:8080.
Are there any programmatic methods I could use to get the driverID submitted by my
`ProcessBuilder`, and then query the status of that driver?
Any suggestions?
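One possible approach (a hedged sketch, not a confirmed answer): Spark's standalone cluster mode exposes a REST submission endpoint (by default on port 6066, since Spark 1.3). The create-submission response is JSON containing a "submissionId" field, which serves as the driver ID, and status can then be polled at /v1/submissions/status/<submissionId>. The host name and sample response below are illustrative assumptions, not values from the thread.

```python
import json

def parse_submission_id(create_response_body: str) -> str:
    """Extract the submissionId (driver ID) from a create-submission response."""
    return json.loads(create_response_body)["submissionId"]

def status_url(master_host: str, submission_id: str, port: int = 6066) -> str:
    """Build the status-polling URL for a previously submitted driver."""
    return f"http://{master_host}:{port}/v1/submissions/status/{submission_id}"

# Illustrative response body, shaped like the standalone REST protocol's reply:
sample = ('{"action":"CreateSubmissionResponse",'
          '"submissionId":"driver-20150521123456-0001","success":true}')
driver_id = parse_submission_id(sample)
print(status_url("master-host", driver_id))
# → http://master-host:6066/v1/submissions/status/driver-20150521123456-0001
```

If this fits your setup, you would submit via the REST endpoint instead of `ProcessBuilder` + `spark-submit`, capture the submissionId from the response, and poll the status URL.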
—
Best Regards!
Yijie Shen
Hi,
Suppose I create a dataRDD which extends RDD[Row], where each row is a
GenericMutableRow(Array(Int, Array[Byte])). The same Array[Byte] object is
reused across rows but has different content each time. When I convert it to
a DataFrame and save it as a Parquet file, the file's row group statistic(max
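The hazard hinted at above can be illustrated with a minimal sketch (assuming the writer retains references to row buffers rather than copying them; the `RetainingWriter` class below is a hypothetical stand-in, not Spark's Parquet writer):

```python
class RetainingWriter:
    """Keeps references to the byte buffers it is handed (no defensive copy)."""
    def __init__(self):
        self.rows = []

    def write(self, buf: bytearray) -> None:
        self.rows.append(buf)  # aliasing: stores the buffer itself, not a copy

buf = bytearray(3)
w = RetainingWriter()
for value in (b"abc", b"xyz"):
    buf[:] = value             # reuse the same buffer, overwrite its content
    w.write(buf)

# Every retained "row" aliases the same object, so all show the last content:
print([bytes(r) for r in w.rows])
# → [b'xyz', b'xyz']
```

Any statistics computed lazily over such retained rows (e.g. a deferred max) would reflect only the final buffer contents, which may explain an incorrect row-group statistic.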
Thanks Qiuzhuang, the problems disappeared after I added the assembly jar at the
head of the dependency list in *.iml, but while running tests in Spark SQL
(SQLQuerySuite in sql-core), another two errors occur:
Error 1:
Error:scalac:
while compiling: