Lior Dor created CARBONDATA-4273:
------------------------------------

             Summary: Cannot create table with partitions in Spark in EMR
                 Key: CARBONDATA-4273
                 URL: https://issues.apache.org/jira/browse/CARBONDATA-4273
             Project: CarbonData
          Issue Type: Bug
          Components: spark-integration
    Affects Versions: 2.2.0
         Environment: Release label:emr-5.24.1
Hadoop distribution:Amazon 2.8.5
Applications:
Hive 2.3.4, Pig 0.17.0, Hue 4.4.0, Flink 1.8.0, Spark 2.4.2, Presto 0.219, 
JupyterHub 0.9.6

Jar compiled with:
apache-carbondata:2.2.0
spark:2.4.5
hadoop:2.8.3
            Reporter: Lior Dor

When trying to create a table like this:
{code:sql}
CREATE TABLE IF NOT EXISTS will_not_work(
timestamp string,
name string
)
PARTITIONED BY (dt string, hr string)
STORED AS carbondata
LOCATION 's3a://my-bucket/CarbonDataTests/will_not_work'
{code}
I get the following error:
{noformat}
org.apache.carbondata.common.exceptions.sql.MalformedCarbonCommandException: Partition is not supported for external table
  at org.apache.spark.sql.parser.CarbonSparkSqlParserUtil$.buildTableInfoFromCatalogTable(CarbonSparkSqlParserUtil.scala:219)
  at org.apache.spark.sql.CarbonSource$.createTableInfo(CarbonSource.scala:235)
  at org.apache.spark.sql.CarbonSource$.createTableMeta(CarbonSource.scala:394)
  at org.apache.spark.sql.execution.command.table.CarbonCreateDataSourceTableCommand.processMetadata(CarbonCreateDataSourceTableCommand.scala:69)
  at org.apache.spark.sql.execution.command.MetadataCommand$$anonfun$run$1.apply(package.scala:137)
  at org.apache.spark.sql.execution.command.MetadataCommand$$anonfun$run$1.apply(package.scala:137)
  at org.apache.spark.sql.execution.command.Auditable$class.runWithAudit(package.scala:118)
  at org.apache.spark.sql.execution.command.MetadataCommand.runWithAudit(package.scala:134)
  at org.apache.spark.sql.execution.command.MetadataCommand.run(package.scala:137)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
  at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:194)
  at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:194)
  at org.apache.spark.sql.Dataset$$anonfun$53.apply(Dataset.scala:3364)
  at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
  at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
  at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
  at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3363)
  at org.apache.spark.sql.Dataset.<init>(Dataset.scala:194)
  at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:79)
  at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:643)
  ... 64 elided
{noformat}
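Note that Spark treats a CREATE TABLE with an explicit LOCATION clause as an external table, which is what {{buildTableInfoFromCatalogTable}} rejects here. As a possible workaround sketch (not a fix for the underlying restriction, and assuming a managed table at the configured warehouse location is acceptable for this deployment), the same table can be created without the LOCATION clause:
{code:sql}
-- Hypothetical workaround: omitting LOCATION makes this a managed table,
-- so the "Partition is not supported for external table" check is not hit.
-- The data then lands under spark.sql.warehouse.dir rather than the
-- explicit s3a:// path from the report.
CREATE TABLE IF NOT EXISTS will_not_work(
timestamp string,
name string
)
PARTITIONED BY (dt string, hr string)
STORED AS carbondata;
{code}
Whether this is viable depends on being able to point the warehouse directory at the intended S3 bucket.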



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
