Hi Manish,

I installed it on another test cluster, and it now works fine when I
initialize the CarbonContext with the store path set, as suggested by Yinwei:
val cc = new CarbonContext(sc, "hdfs://localhost:9000/opt/CarbonStore")
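For the archives, the full working spark-shell session looked roughly like this (a minimal sketch; the `cc.storePath` check is the same one used earlier in this thread to diagnose the problem):

```scala
import org.apache.spark.sql.CarbonContext

// Passing the store path explicitly to the constructor overrides the
// default local path (/home/<user>/carbon.store) that spark-shell
// otherwise picks up when carbon.properties is not read.
val cc = new CarbonContext(sc, "hdfs://localhost:9000/opt/CarbonStore")

// Sanity check: this should now print the HDFS path,
// not the local default.
println(cc.storePath)
```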

Thanks

On Mon, Feb 6, 2017 at 6:12 PM, manish gupta <tomanishgupt...@gmail.com>
wrote:

> Hi Sanoj,
>
> Can you please try the below things.
>
> 1. Remove the carbon.properties file and let the system take all the
> default values. In the logs you shared, I can see that while creating the
> CarbonContext it prints the carbon.properties file path and all the
> properties in it. So point it to an invalid path for carbon.properties and
> ensure that none of the properties are printed while creating the
> CarbonContext.
>
> 2. If the first point does not work out, set the below property in the
> carbon.properties file and try loading the data.
>
> carbon.lock.type=HDFSLOCK
>
>
> Regards
>
> Manish Gupta
>
>
> On Mon, Feb 6, 2017 at 5:59 PM, Sanoj M George <sanoj.geo...@gmail.com>
> wrote:
>
> > Hi Manish,
> >
> > Could not find any .lock files in the carbon store.
> >
> > I am getting the error while running spark-shell; I did not try the
> > thrift server. However, as you can see from the attached logs, it is
> > taking the default store location (not the one from carbon.properties):
> >
> > scala> cc.storePath
> > res0: String = /home/cduser/carbon.store
> >
> >
> > Thanks,
> > Sanoj
> >
> >
> >
> >
> >
> > On Mon, Feb 6, 2017 at 1:23 PM, manish gupta <tomanishgupt...@gmail.com>
> > wrote:
> >
> >> Hi Sanoj,
> >>
> >> Please check if there is any file with .lock extension in the carbon
> >> store.
> >>
> >> Also, when you start the thrift server, the carbon store location will
> >> be printed in the thrift server logs. Please validate whether there is
> >> any mismatch between the store location provided by you and the store
> >> location printed in the thrift server logs.
> >>
> >> Also please provide the complete logs for failure.
> >>
> >> Regards
> >> Manish Gupta
> >>
> >> On Mon, Feb 6, 2017 at 2:18 PM, Sanoj M George <sanoj.geo...@gmail.com>
> >> wrote:
> >>
> >> > Not yet resolved, still getting same error.
> >> >
> >> > On Mon, Feb 6, 2017 at 12:41 PM Raghunandan S <
> >> > carbondatacontributi...@gmail.com> wrote:
> >> >
> >> > > You mean the issue is resolved?
> >> > >
> >> > > Regards
> >> > > Raghunandan
> >> > >
> >> > > On 06-Feb-2017 1:36 PM, "Sanoj M George" <sanoj.geo...@gmail.com>
> >> wrote:
> >> > >
> >> > > Thanks Raghunandan.  Checked the thread but it seems this error is
> >> due to
> >> > > something else.
> >> > >
> >> > > Below are the parameters that I changed :
> >> > >
> >> > > **** carbon.properties :
> >> > > carbon.storelocation=hdfs://localhost:9000/opt/CarbonStore
> >> > > carbon.ddl.base.hdfs.url=hdfs://localhost:9000/opt/data
> >> > > carbon.kettle.home=/home/cduser/spark/carbonlib/carbonplugins
> >> > >
> >> > > **** spark-defaults.conf  :
> >> > > carbon.kettle.home
> >> > > /home/cduser/spark/carbonlib/carbonplugins
> >> > > spark.driver.extraJavaOptions
> >> > > -Dcarbon.properties.filepath=/home/cduser/spark/conf/carbon.
> >> properties
> >> > > spark.executor.extraJavaOptions
> >> > > -Dcarbon.properties.filepath=/home/cduser/spark/conf/carbon.
> >> properties
> >> > >
> >> > > Although the store location is specified in carbon.properties,
> >> > > spark-shell was using "/home/cduser/carbon.store" as the store
> >> > > location.
> >> > >
> >> > > Regards
> >> > >
> >> > > On Sun, Feb 5, 2017 at 4:49 PM, Raghunandan S <
> >> > > carbondatacontributi...@gmail.com> wrote:
> >> > >
> >> > > > Dear sanoj,
> >> > > > Pls refer to
> >> > > > http://apache-carbondata-mailing-list-archive.1130556.
> >> > > > n5.nabble.com/Dictionary-file-is-locked-for-updation-td5076.html
> >> > > >
> >> > > > Let me know if this thread didn't address your problem.
> >> > > >
> >> > > > Regards
> >> > > >
> >> > > >
> >> > > > On 05-Feb-2017 5:22 PM, "Sanoj M George" <sanoj.geo...@gmail.com>
> >> > wrote:
> >> > > >
> >> > > > Hi All,
> >> > > >
> >> > > > I am getting below error while trying out Carbondata with Spark
> >> 1.6.2 /
> >> > > > Hadoop 2.6.5 / Carbondata 1.
> >> > > >
> >> > > > ./bin/spark-shell --jars
> >> > > > carbonlib/carbondata_2.10-1.1.0-incubating-SNAPSHOT-shade-
> >> > hadoop2.2.0.jar
> >> > > > scala> import org.apache.spark.sql.CarbonContext
> >> > > > scala> val cc = new CarbonContext(sc)
> >> > > > scala> cc.sql("CREATE TABLE IF NOT EXISTS t1 (id string, name
> >> string,
> >> > > city
> >> > > > string, age Int) STORED BY 'carbondata'")
> >> > > > scala> cc.sql("LOAD DATA INPATH '/home/cduser/spark/sample.csv'
> >> INTO
> >> > > TABLE
> >> > > > t1")
> >> > > > INFO  05-02 14:57:22,346 - main Query [LOAD DATA INPATH
> >> > > > '/HOME/CDUSER/SPARK/SAMPLE.CSV' INTO TABLE T1]
> >> > > > INFO  05-02 14:57:37,411 - Table MetaData Unlocked Successfully
> >> after
> >> > > data
> >> > > > load
> >> > > > java.lang.RuntimeException: Table is locked for updation. Please
> try
> >> > > after
> >> > > > some time
> >> > > >         at scala.sys.package$.error(package.scala:27)
> >> > > >         at
> >> > > > org.apache.spark.sql.execution.command.LoadTable.
> >> > > > run(carbonTableSchema.scala:360)
> >> > > >         at
> >> > > > org.apache.spark.sql.execution.ExecutedCommand.
> >> > > > sideEffectResult$lzycompute(
> >> > > > commands.scala:58)
> >> > > >         at
> >> > > > org.apache.spark.sql.execution.ExecutedCommand.
> >> > sideEffectResult(commands.
> >> > > > scala:56)
> >> > > >
> >> > > >
> >> > > > I followed the docs at
> >> > > > https://github.com/apache/incubator-carbondata/blob/
> >> > > > master/docs/installation-guide.md#installing-and-
> >> > > > configuring-carbondata-on-
> >> > > > standalone-spark-cluster
> >> > > > and
> >> > > > https://github.com/apache/incubator-carbondata/blob/
> >> > > > master/docs/quick-start-guide.md
> >> > > > to install carbondata.
> >> > > >
> >> > > > While creating the table, I observed below WARN msg in the log :
> >> > > >
> >> > > > main Query [CREATE TABLE DEFAULT.T1 USING CARBONDATA OPTIONS
> >> (TABLENAME
> >> > > > "DEFAULT.T1", TABLEPATH "/HOME/CDUSER/CARBON.STORE/DEFAULT/T1") ]
> >> > > >
> >> > > > WARN  05-02 14:34:30,656 - Couldn't find corresponding Hive SerDe
> >> for
> >> > > data
> >> > > > source provider carbondata. Persisting data source relation
> >> > > `default`.`t1`
> >> > > > into Hive metastore in Spark SQL specific format, which is NOT
> >> > compatible
> >> > > > with Hive.
> >> > > > INFO  05-02 14:34:30,755 - 0: create_table: Table(tableName:t1,
> >> > > > dbName:default, owner:cduser, createTime:1486290870,
> >> lastAccessTime:0,
> >> > > > retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:col,
> >> > > > type:array<string>, comment:from deserializer)], location:null,
> >> > > > inputFormat:org.apache.hadoop.mapred.SequenceFileInputFormat,
> >> > > > outputFormat:org.apache.hadoop.hive.ql.io.
> >> > HiveSequenceFileOutputFormat,
> >> > > > compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null,
> >> > > > serializationLib:org.apache.hadoop.hive.serde2.
> >> > > > MetadataTypedColumnsetSerDe,
> >> > > > parameters:{tablePath=/home/cduser/carbon.store/default/t1,
> >> > > > serialization.format=1, tableName=default.t1}), bucketCols:[],
> >> > > sortCols:[],
> >> > > > parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[],
> >> > > > skewedColValues:[],
> >> > > > skewedColValueLocationMaps:{})), partitionKeys:[],
> >> > > > parameters:{EXTERNAL=TRUE, spark.sql.sources.provider=
> carbondata},
> >> > > > viewOriginalText:null, viewExpandedText:null,
> >> tableType:MANAGED_TABLE,
> >> > > > privileges:PrincipalPrivilegeSet(userPrivileges:{},
> >> > groupPrivileges:null,
> >> > > > rolePrivileges:null))
> >> > > >
> >> > > >
> >> > > > Appreciate any help in resolving this.
> >> > > >
> >> > > > Thanks,
> >> > > > Sanoj
> >> > > >
> >> > >
> >> > --
> >> > Sent from my iPhone
> >> >
> >>
> >
> >
>
