Hi all,
I am getting the following error message in one of my Spark SQL queries. I
realize this may be related to the Spark version or a configuration
change, but I want to know the details and the resolution.
Thanks
spark.sql.codegen.aggregate.map.twolevel.enabled is set to true, but
current version of
Yes, the issue is with the array type only; I have confirmed that.
I exploded the array to a struct but am still getting the same error:
Exception in thread "main" org.apache.spark.sql.AnalysisException: Union
can only be performed on tables with the compatible column types.
struct <> struct
at the 21th column
Have you tried to narrow down the problem so that we can be 100% sure that it
lies in the array types? Just exclude them for the sake of testing.
If we know 100% that it is this array stuff, try to explode those columns into
simple types.
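Since the AnalysisException only reports the position of the offending column, one way to narrow it down is to diff the two schemas field by field. A minimal sketch of that idea in plain Java (no Spark; `firstMismatch` and the sample type lists are hypothetical stand-ins for the two datasets' flattened field types):

```java
import java.util.Arrays;
import java.util.List;

public class SchemaDiff {
    // Return the 0-based index of the first position where the two field-type
    // lists disagree, or -1 if they match completely. A length mismatch counts
    // as a mismatch at the first index past the shorter list.
    static int firstMismatch(List<String> left, List<String> right) {
        int n = Math.min(left.size(), right.size());
        for (int i = 0; i < n; i++) {
            if (!left.get(i).equals(right.get(i))) {
                return i;
            }
        }
        return left.size() == right.size() ? -1 : n;
    }

    public static void main(String[] args) {
        // Illustrative types only; in practice these would come from the
        // datasets' schemas (e.g. iterating over schema().fields()).
        List<String> a = Arrays.asList("int", "string", "array<int>");
        List<String> b = Arrays.asList("int", "string", "array<struct<id:int>>");
        System.out.println("first mismatch at column " + firstMismatch(a, b));
    }
}
```

Running this against the real schemas would point directly at the column the error message counts to.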
Jorge Machado
> On 4 Jun 2018, at 11:09, Pranav
I am ordering the columns before doing the union, so I think that should not be
an issue:

String[] columns_original_order = baseDs.columns();
String[] columns = baseDs.columns();
Arrays.sort(columns);
baseDs = baseDs.selectExpr(columns);
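Note that sorting only one dataset's columns does not align the other side; both inputs to the union need the same order. The idea can be sketched in plain Java on column-name arrays (the dataset handling is elided and the column names are illustrative):

```java
import java.util.Arrays;

public class ColumnOrder {
    // Return a sorted copy of a dataset's column list, leaving the original
    // untouched. Applying this to BOTH sides before selectExpr(...) gives the
    // positional union identically ordered inputs.
    static String[] sortedCopy(String[] columns) {
        String[] copy = Arrays.copyOf(columns, columns.length);
        Arrays.sort(copy);
        return copy;
    }

    public static void main(String[] args) {
        String[] base  = {"booking_id", "booking_source", "booking_status"};
        String[] other = {"booking_status", "booking_id", "booking_source"};
        // After sorting, both selectExpr(...) calls would see the same order.
        System.out.println(Arrays.equals(sortedCopy(base), sortedCopy(other)));
    }
}
```

In Spark terms the equivalent would be calling `selectExpr(sortedCopy(ds.columns()))` on each dataset before the union.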
Try the same union with a dataframe without the array types. Could be
something strange there, like ordering or so.
Jorge Machado
> On 4 Jun 2018, at 10:17, Pranav Agrawal wrote:
schema is exactly the same, not sure why it is failing though.
root
|-- booking_id: integer (nullable = true)
|-- booking_rooms_room_category_id: integer (nullable = true)
|-- booking_rooms_room_id: integer (nullable = true)
|-- booking_source: integer (nullable = true)
|-- booking_status:
Hi Pranav,
I don't have an answer to your issue, but what I generally do in these cases
is to first try to simplify it to a point where it is easier to check
what's going on, and then add back "pieces" one by one until I spot the
error.
In your case I can suggest to:
1) project the dataset to
can't get around this error when performing union of two datasets
(ds1.union(ds2)) having complex data type (struct, list),

18/06/02 15:12:00 INFO ApplicationMaster: Final app status: FAILED,
exitCode: 15, (reason: User class threw exception:
org.apache.spark.sql.AnalysisException: Union can only be performed
To: "cj" <124411...@qq.com>
Cc: "user" <user@spark.apache.org>
Subject: Re: read parquetfile in spark-sql error
To: ..@qq.com
Cc: "user" <user@spark.apache.org>; "lian.cs.zju" <lian.cs....@gmail.com>
Subject: Re: read parquetfile in spark-sql error
Hi,
It seems your query was not consistent with the HQL syntax;
you'd be better off re-checking the definitions:
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-CreateTable
// maropu
On Mon, Jul 25, 2016 at 11:36 PM, Kabeer Ahmed wrote:
I hope the below sample helps you:
val parquetDF = hiveContext.read.parquet("hdfs://.parquet")
parquetDF.registerTempTable("parquetTable")
sql("SELECT * FROM parquetTable").collect().foreach(println)
Kabeer.
Sent from Nylas
hi, all:
I use spark 1.6.1 as my work env.
When I saved the following content as a test1.sql file:

CREATE TEMPORARY TABLE parquetTable
USING org.apache.spark.sql.parquet
OPTIONS (
  path "examples/src/main/resources/people.parquet"
)
SELECT * FROM parquetTable
and use
Looks like some JVM got killed or hit an OOM. You can check the logs to see
the real cause.
Thanks.
Zhan Zhang
On Nov 3, 2015, at 9:23 AM, YaoPau wrote:
java.io.FileNotFoundException
...
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Vague-Spark-SQL-error-message-with-saveAsParquetFile-tp25265.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
> Ritchard's comments on why the --files option may be
> redundant in your case.
>
> Regards,
> Dilip Biswal
> Tel: 408-463-4980
> dbis...@us.ibm.com
>
> From: Giri <giridhar.madduk...@gmail.com>
> To: user@spark.apache.org
> Date: 10/15/2015 02:44 AM
> Subject: Re: SPARK SQL Error
Hi Ritchard,
Thank you so much
"yarn-cluster". Unfortunately, I do not have experience
using YARN, so I can't help you there. Here is more documentation for YARN
that you can read: http://spark.apache.org/docs/latest/running-on-yarn.html.
-Nick
working directory of each executor." Since
you are going to read the "people_csv" file from hdfs, rather than the local
file system, it seems unnecessary.
--files
hdfs:///people_csv /home/cloudera/Desktop/TestMain.jar
It seems to be an issue with the ES connector:
https://github.com/elastic/elasticsearch-hadoop/issues/482
Thanks
Best Regards
On Tue, Jul 28, 2015 at 6:14 AM, An Tran tra...@gmail.com wrote:
Hello all,
I am currently having an error with Spark SQL accessing Elasticsearch using
the Elasticsearch Spark integration. Below is the series of commands I issued
along with the stacktrace. I am unclear on what the error could mean. I can
print the schema correctly, but it errors out if I try to display a
Hi,
Thanks for the response. I was looking for a Java solution. I will check the
Scala and Python ones.
Regards,
Anand.C
From: Todd Nist [mailto:tsind...@gmail.com]
Sent: Tuesday, May 19, 2015 6:17 PM
To: Chandra Mohan, Ananda Vel Murugan
Cc: ayan guha; user
Subject: Re: Spark sql error while
To: Chandra Mohan, Ananda Vel Murugan; user
Subject: Re: Spark sql error while writing Parquet file - Trying to
write more fields than contained in row
Hi
Give a try with the dataFrame.fillna function to fill up the missing columns
Best
Ayan
On Mon, May 18, 2015 at 8:29 PM, Chandra Mohan, Ananda Vel Murugan
  <artifactId>spark-sql_2.10</artifactId>
  <version>1.3.1</version>
</dependency>
Regards,
Anand.C
From: ayan guha [mailto:guha.a...@gmail.com]
Sent: Monday, May 18, 2015 5:19 PM
To: Chandra Mohan, Ananda Vel Murugan; user
Subject: Re: Spark sql error while writing Parquet file- Trying to write more
Hi,
I am using spark-sql to read a CSV file and write it as a parquet file. I am
building the schema using the following code:

String schemaString = "a b c";
List<StructField> fields = new ArrayList<StructField>();
MetadataBuilder mb = new MetadataBuilder();
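Since the error complains that the writer sees more schema fields than the row contains, a common fix (besides the fillna suggestion above) is to pad each parsed CSV line to the schema's field count before building Rows. A plain-Java sketch of that idea, with hypothetical names (`padToSchema` is not a Spark API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class RowPad {
    // Pad a parsed CSV line out to the schema width so every row carries as
    // many values as the schema has fields; missing trailing columns become
    // nulls instead of making the row shorter than the schema.
    static List<String> padToSchema(String[] parts, int width) {
        List<String> row = new ArrayList<>(Arrays.asList(parts));
        while (row.size() < width) {
            row.add(null);
        }
        return row;
    }

    public static void main(String[] args) {
        // A 3-field schema ("a b c") fed a line with only 2 values:
        System.out.println(padToSchema("1,2".split(","), 3)); // prints [1, 2, null]
    }
}
```

In the actual job, each padded list would then be turned into a Row before applying the StructType built from schemaString.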
You are probably using an encoding that we don't support. I think this PR
may be adding that support: https://github.com/apache/spark/pull/5422
On Sat, Apr 18, 2015 at 5:40 PM, Abhishek R. Singh
abhis...@tetrationanalytics.com wrote:
I have created a bunch of protobuf-based parquet files that I want to
read/inspect using Spark SQL. However, I am running into exceptions and am
not able to proceed much further:
This succeeds (probably because there is no action yet). I can
also printSchema() and count() without any
some unknown characters like $line11.$read$
$line12.$read$$iwC$$iwC$$iwC$$iwC$$anonfun$ ?
Thanks,
Victor
Hi, Kevin
I tried it on spark 1.0.0, and it works fine.
It's a bug in spark 1.0.1 ...
Thanks,
Victor
sql.
One machine is both master and slave. The OS is CentOS 5.
And I do not use Mesos.
Thanks,
Victor
Hi, Svend
Your reply is very helpful to me. I'll keep an eye on that ticket.
And also... Cheers :)
Best Regards,
Victor
/app/hadoop/sparklib/hadoop-lzo-0.4.15.jar  System Classpath
/home/hadoop/src/hadoop/conf                System Classpath
/home/hadoop/src/hadoop/lib/                System Classpath