I'm using Spark 1.6.0, and CarbonData is built from the latest master branch on
GitHub.


My carbon.properties is configured as:


carbon.ddl.base.hdfs.url=hdfs://master:9000/carbondata/data
carbon.badRecords.location=/opt/Carbon/Spark/badrecords
carbon.kettle.home=/opt/spark-1.6.0/carbonlib/carbonplugins

carbon.lock.type=HDFSLOCK
....
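(One thing the snippet above does not show is a store location, which is what the replies below ask about. A hedged sketch of the missing entry — the property name is the one documented by CarbonData, but the HDFS path here is an assumption for this cluster:)

```properties
# Hypothetical addition: point the store at an absolute HDFS path so lock
# files are not created under the relative default "../carbon.store".
# Adjust the path to your cluster layout.
carbon.storelocation=hdfs://master:9000/carbondata/store
```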


My spark-defaults.conf is configured as:


spark.master                     spark://master:7077
spark.yarn.dist.files            /opt/spark-1.6.0/conf/carbon.properties
spark.yarn.dist.archives         /opt/spark-1.6.0/carbonlib/carbondata_2.10-1.0.0-incubating-SNAPSHOT-shade-hadoop2.7.2.jar
spark.executor.extraJavaOptions  -Dcarbon.properties.filepath=carbon.properties
#spark.executor.extraClassPath   /opt/spark-1.6.0/carbonlib/carbondata_2.10-1.0.0-incubating-SNAPSHOT-shade-hadoop2.2.0.jar
#spark.driver.extraClassPath     /opt/spark-1.6.0/carbonlib/carbondata_2.10-1.0.0-incubating-SNAPSHOT-shade-hadoop2.2.0.jar
spark.driver.extraJavaOptions    -Dcarbon.properties.filepath=/opt/spark-1.6.0/conf/carbon.properties
carbon.kettle.home               /opt/spark-1.6.0/carbonlib/carbonplugins
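(Given the question below about the store location, one way to set it explicitly is to pass an absolute HDFS path when creating the CarbonContext in spark-shell. This is a sketch based on the CarbonData 1.0 quick-start; the store path itself is an assumption for this cluster:)

```scala
import org.apache.spark.sql.CarbonContext

// Pass an absolute HDFS store path instead of relying on the relative
// default ("../carbon.store"), which yields malformed lock paths such as
// "hdfs://master:9000../carbon.store/...".
// The path below is an assumption; adjust it to your cluster.
val cc = new CarbonContext(sc, "hdfs://master:9000/carbondata/store")
```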





------------------ Original Message ------------------
From: "Ravindra Pesala" <ravi.pes...@gmail.com>
Date: Tue, Dec 27, 2016, 4:15 PM
To: "dev" <dev@carbondata.incubator.apache.org>

Subject: Re: Dictionary file is locked for updation



Hi,

It seems the store path is falling back to the default location. Did you set
the store location properly? Which Spark version are you using?

Regards,
Ravindra

On Tue, Dec 27, 2016, 1:38 PM 251469031 <251469...@qq.com> wrote:

> Hi Kumar,
>
>
>   Thanks for your reply. The full log is as follows:
>
>
> 16/12/27 12:30:17 INFO locks.HdfsFileLock: Executor task launch worker-0
> HDFS lock
> path:hdfs://master:9000../carbon.store/default/test_table/2e9b7efa-2934-463a-9280-ff50c5129268.lock
> 16/12/27 12:30:17 INFO storage.ShuffleBlockFetcherIterator: Getting 1
> non-empty blocks out of 1 blocks
> 16/12/27 12:30:17 INFO storage.ShuffleBlockFetcherIterator: Started 1
> remote fetches in 1 ms
> 16/12/27 12:30:32 ERROR rdd.CarbonGlobalDictionaryGenerateRDD: Executor
> task launch worker-0
> java.lang.RuntimeException: Dictionary file name is locked for updation.
> Please try after some time
>         at scala.sys.package$.error(package.scala:27)
>         at
> org.apache.carbondata.spark.rdd.CarbonGlobalDictionaryGenerateRDD$$anon$1.<init>(CarbonGlobalDictionaryRDD.scala:364)
>         at
> org.apache.carbondata.spark.rdd.CarbonGlobalDictionaryGenerateRDD.compute(CarbonGlobalDictionaryRDD.scala:302)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>         at
> org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
>         at org.apache.spark.scheduler.Task.run(Task.scala:89)
>         at
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
>
>
>
> As you can see, the lock file path
> is: hdfs://master:9000../carbon.store/default/test_table/2e9b7efa-2934-463a-9280-ff50c5129268.lock
>
>
>
>
> ------------------ Original Message ------------------
> From: "Kumar Vishal" <kumarvishal1...@gmail.com>
> Date: Tue, Dec 27, 2016, 3:25 PM
> To: "dev" <dev@carbondata.incubator.apache.org>
>
> Subject: Re: Dictionary file is locked for updation
>
>
>
> Hi,
> Can you please search for the *"HDFS lock path"* string in the executor log
> and send me the complete log message.
>
> -Regards
> Kumar Vishal
>
> On Tue, Dec 27, 2016 at 12:45 PM, 251469031 <251469...@qq.com> wrote:
>
> > Hi all,
> >
> >
> > when I run the following script:
> > scala> cc.sql(s"load data inpath
> 'hdfs://master:9000/carbondata/sample.csv'
> > into table test_table")
> >
> >
> > it turns out that:
> > WARN  27-12 12:37:58,044 - Lost task 1.3 in stage 2.0 (TID 13, slave1):
> > java.lang.RuntimeException: Dictionary file name is locked for updation.
> > Please try after some time
> >
> >
> > What I have done so far:
> > 1. In carbon.properties, set carbon.lock.type=HDFSLOCK
> > 2. Sent carbon.properties & spark-defaults.conf to all nodes of the
> cluster
> >
> >
> > If any of you have any idea, I'm looking forward to your reply. Thanks!
