Hi community, could anybody please tell me what is happening here?

*Env*:

1. Spark 2.2.1 + CarbonData 1.4.1
2. spark.jars.packages
   com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.2
3. spark.driver.extraClassPath
   file:///usr/local/Cellar/apache-spark/2.2.1/lib/*
   spark.executor.extraClassPath
   file:///usr/local/Cellar/apache-spark/2.2.1/lib/*
   The lib folder contains the following jars:
   -rw-r--r--@ 1 aaron  staff    52M Aug 29 20:50 apache-carbondata-1.4.1-bin-spark2.2.1-hadoop2.7.2.jar
   -rw-r--r--  1 aaron  staff   764K Aug 29 21:33 httpclient-4.5.4.jar
   -rw-r--r--  1 aaron  staff   314K Aug 29 21:40 httpcore-4.4.jar
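
For reference, I launch the REPL roughly like this (a sketch; items 2 and 3 above normally live in my spark-defaults.conf, shown here as explicit flags):

spark-shell \
  --packages com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.2 \
  --conf spark.driver.extraClassPath='file:///usr/local/Cellar/apache-spark/2.2.1/lib/*' \
  --conf spark.executor.extraClassPath='file:///usr/local/Cellar/apache-spark/2.2.1/lib/*'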


*Code*:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.CarbonSession._
import org.apache.spark.sql.catalyst.util._
import org.apache.carbondata.core.util.CarbonProperties
import org.apache.carbondata.core.constants.CarbonCommonConstants

CarbonProperties.getInstance()
  .addProperty(CarbonCommonConstants.LOCK_TYPE, "HDFSLOCK")

val carbon = SparkSession.builder()
  .config(sc.getConf)
  .config("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
  .config("spark.hadoop.fs.s3a.access.key", "xxx")
  .config("spark.hadoop.fs.s3a.secret.key", "xxx")
  .getOrCreateCarbonSession("hdfs://localhost:9000/usr/carbon-meta")

carbon.sql("CREATE TABLE IF NOT EXISTS test_s3_table(id string, name string,
city string, age Int) STORED BY 'carbondata' LOCATION
's3a://key:password@aaron-s3-poc/'")
carbon.sql("LOAD DATA INPATH
'hdfs://localhost:9000/usr/carbon-s3/sample.csv' INTO TABLE test_s3_table")
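
For completeness, the check I'd run after the load (never reached, since the load fails):

carbon.sql("SELECT * FROM test_s3_table LIMIT 10").show()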

*S3 files:*

aws s3 ls s3://aaron-s3-poc/ --human --recursive
2018-08-29 22:13:32    0 Bytes LockFiles/tablestatus.lock
2018-08-29 21:41:36  616 Bytes Metadata/schema


*Issue 1:* When I create the table, CarbonData raises the exception
"com.amazonaws.AmazonClientException: Unable to load AWS credentials from
any provider in the chain", even though:

a. I set the related properties in spark-defaults.conf:
   spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem
   spark.hadoop.fs.s3a.awsAccessKeyId=xxx
   spark.hadoop.fs.s3a.awsSecretAccessKey=xxx
   spark.hadoop.fs.s3a.access.key=xxx
   spark.hadoop.fs.s3a.secret.key=xxx
b. I set the same config in code:
   val carbon = SparkSession.builder()
     .config(sc.getConf)
     .config("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
     .config("spark.hadoop.fs.s3a.access.key", "xxx")
     .config("spark.hadoop.fs.s3a.secret.key", "xxx")
     .getOrCreateCarbonSession("hdfs://localhost:9000/usr/carbon-meta")
c. I passed the same settings as spark-submit --conf options.

The CREATE TABLE only succeeds when I embed the credentials in the LOCATION,
as in 's3a://key:password@aaron-s3-poc/'. That seems very strange. Can anyone
tell me why?
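
To narrow this down, here is a sanity check one can run in the same session (a minimal sketch using the plain Hadoop FileSystem API; the bucket name is from my test setup):

import java.net.URI
import org.apache.hadoop.fs.{FileSystem, Path}

// If this listing fails with the same AmazonClientException, the keys
// configured on the session never reach the S3A filesystem at all.
val hconf = carbon.sparkContext.hadoopConfiguration
val fs = FileSystem.get(new URI("s3a://aaron-s3-poc/"), hconf)
fs.listStatus(new Path("s3a://aaron-s3-poc/")).foreach(st => println(st.getPath))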


*Issue 2:* Loading data fails.

scala> carbon.sql("LOAD DATA INPATH
'hdfs://localhost:9000/usr/carbon-s3/sample.csv' INTO TABLE test_s3_table")
18/08/29 22:13:35 ERROR CarbonLoaderUtil: main Unable to unlock Table lock
for tabledefault.test_s3_table during table status updation
18/08/29 22:13:35 ERROR CarbonLoadDataCommand: main 
java.lang.ArrayIndexOutOfBoundsException
        at java.lang.System.arraycopy(Native Method)
        at java.io.BufferedOutputStream.write(BufferedOutputStream.java:128)
        at 
org.apache.hadoop.fs.s3a.S3AOutputStream.write(S3AOutputStream.java:164)
        at
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
        at java.io.DataOutputStream.write(DataOutputStream.java:107)
        at
org.apache.carbondata.core.datastore.filesystem.S3CarbonFile.getDataOutputStream(S3CarbonFile.java:111)
        at
org.apache.carbondata.core.datastore.filesystem.S3CarbonFile.getDataOutputStreamUsingAppend(S3CarbonFile.java:93)
        at
org.apache.carbondata.core.datastore.impl.FileFactory.getDataOutputStreamUsingAppend(FileFactory.java:276)
        at org.apache.carbondata.core.locks.S3FileLock.lock(S3FileLock.java:96)
        at
org.apache.carbondata.core.locks.AbstractCarbonLock.lockWithRetries(AbstractCarbonLock.java:41)
        at
org.apache.carbondata.core.locks.AbstractCarbonLock.lockWithRetries(AbstractCarbonLock.java:59)
        at
org.apache.carbondata.processing.util.CarbonLoaderUtil.recordNewLoadMetadata(CarbonLoaderUtil.java:247)
        at
org.apache.carbondata.processing.util.CarbonLoaderUtil.recordNewLoadMetadata(CarbonLoaderUtil.java:204)
        at
org.apache.carbondata.processing.util.CarbonLoaderUtil.readAndUpdateLoadProgressInTableMeta(CarbonLoaderUtil.java:437)
        at
org.apache.carbondata.processing.util.CarbonLoaderUtil.readAndUpdateLoadProgressInTableMeta(CarbonLoaderUtil.java:446)
        at
org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand.processData(CarbonLoadDataCommand.scala:263)
        at
org.apache.spark.sql.execution.command.AtomicRunnableCommand.run(package.scala:92)
        at
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
        at
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
        at
org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:67)
        at org.apache.spark.sql.Dataset.<init>(Dataset.scala:183)
        at
org.apache.spark.sql.CarbonSession$$anonfun$sql$1.apply(CarbonSession.scala:107)
        at
org.apache.spark.sql.CarbonSession$$anonfun$sql$1.apply(CarbonSession.scala:96)
        at 
org.apache.spark.sql.CarbonSession.withProfiler(CarbonSession.scala:154)
        at org.apache.spark.sql.CarbonSession.sql(CarbonSession.scala:94)
        ... (Scala REPL, reflection, and SparkSubmit frames omitted)
18/08/29 22:13:35 AUDIT CarbonLoadDataCommand: [aaron.local][aaron][Thread-1]Dataload failure for default.test_s3_table. Please check the logs
18/08/29 22:13:35 ERROR CarbonLoadDataCommand: main Got exception java.lang.ArrayIndexOutOfBoundsException when processing data. But this command does not support undo yet, skipping the undo part.
java.lang.ArrayIndexOutOfBoundsException
  (same stack trace as above)
  ... 52 elided
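
One detail that puzzles me here: I set CarbonCommonConstants.LOCK_TYPE to HDFSLOCK before creating the session, yet the trace goes through S3FileLock. A quick check to confirm which lock type CarbonData actually resolves (a minimal sketch):

import org.apache.carbondata.core.util.CarbonProperties
import org.apache.carbondata.core.constants.CarbonCommonConstants

// I expect "HDFSLOCK" here, but the stack trace above suggests an
// S3-based lock file is being written instead.
println(CarbonProperties.getInstance().getProperty(CarbonCommonConstants.LOCK_TYPE))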


Thanks
Aaron


