Re: org.apache.hadoop.hive.ql.metadata.HiveException
50)
        ... 21 more
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1521)
        ... 31 more
Caused by: java.lang.NoClassDefFoundError: javax/jdo/JDOObjectNotFoundException
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:274)
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.getClass(MetaStoreUtils.java:1489)
        at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:63)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:593)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:571)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:620)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:461)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:66)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)
        at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5762)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:199)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:181)
        at org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient.<init>(HiveClientCache.java:330)
        ... 36 more
Caused by: java.lang.ClassNotFoundException: javax.jdo.JDOObjectNotFoundException
        at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
        ... 50 more

-- Original Message --
From: "ShaoFeng Shi" <shaofeng...@apache.org>
Sent: Tuesday, January 5, 2016, 3:21 PM
To: "dev" <dev@kylin.apache.org>
Subject: Re: org.apache.hadoop.hive.ql.metadata.HiveException

> Hi hefeng,
>
> It seems the hcatalog jars don't exist on your hadoop node. The solution
> is to upload the jar files to an HDFS folder, and then set that path as
> the value of "kylin.job.mr.lib.dir" in kylin.properties; you can check out
> this JIRA: https://issues.apache.org/jira/browse/KYLIN-1021
>
> In our env, the "kylin.job.mr.lib.dir" folder has the following 4 jar
> files, just for your reference:
>   hive-common-xx.jar
>   hive-exec-xx.jar
>   hive-hcatalog-core-xx.jar
>   hive-metastore-xx.jar
>
> Here "xx" means the version number.
>
> Just give it a try and let us know whether it works.
>
> 2016-01-05 14:47 GMT+08:00 和风 <363938...@qq.com>:
>
>> Thanks for your help.
>> error logs:
>> [pool-7-thread-1]:[2016-01-05 14:45:31,312][INFO][org.apache.kylin.job.manager.ExecutableManager.updateJobOutput(ExecutableManager.java:241)] - job id:d0e2f259-9541-4b6f-9f54-c502781549e2-00 from RUNNING to SUCCEED
>> [pool-7-thread-1]:[2016-01-05 14:45:31,438][INFO][org.apache.kylin.job.manager.ExecutableManager.updateJobOutput(ExecutableManager.java:241)] - job id:d0e2f259-9541-4b6f-9f54-c502781549e2 from RUNNING to READY
>> [pool-6-thread-1]:[2016-01-05 14:45:31,483][INFO][org.apache.kylin.job.impl.threadpool.DefaultScheduler$FetcherRunner.run(DefaultScheduler.java:102)] - CubingJob{id=d0e2f259-9541-4b6f-9f54-c502781549e2, name=learn_kylin_four - 2015020100_2015122900 - BUILD - GMT-08:00 2016-01-04 22:44:05, state=READY} prepare to schedule
>> [pool-6-thread-1]:[2016-01-05 14:45:31,484][INFO][org.apache.kylin.job.impl.threadpool.DefaultScheduler$FetcherRunner.run(DefaultScheduler.java:106)] - CubingJob{id=d0e2f259-9541-4b6f-9f54-c50278154
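The root cause in the stack trace above is a ClassNotFoundException for javax.jdo.JDOObjectNotFoundException. A quick, generic way to confirm whether that class is visible on a given JVM's classpath is a Class.forName probe — a diagnostic sketch, not part of the thread; only the class name is taken from the stack trace:

```java
public class ClasspathCheck {
    // Returns true if the named class can be loaded from this JVM's classpath.
    static boolean isOnClasspath(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // The class the stack trace above reports as missing. It belongs to the
        // JDO API jar that normally ships in Hive's lib directory, so a plain
        // JVM without that jar will report it as missing.
        String cls = "javax.jdo.JDOObjectNotFoundException";
        System.out.println(cls + (isOnClasspath(cls) ? " is present" : " is MISSING"));
    }
}
```

Running such a probe with the same classpath as the failing MR task would show whether the JDO API jar made it onto the task's classpath; if it did not, uploading the Hive jars to HDFS and pointing "kylin.job.mr.lib.dir" at that folder, as suggested in the reply above, addresses it.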
Re: org.apache.hadoop.hive.ql.metadata.HiveException
Hi,

This error occurs because there is no Snappy compression codec available in your setup, and Kylin expects one by default. As a workaround, you can disable Snappy in Kylin's configuration files:

- Comment out the compression.codec line in kylin.properties.
- Comment out the compression-related properties in kylin_job_conf.xml (there are around 4 of them).

This was the workaround I used for a while, but it is recommended to keep compression enabled to minimize the data shuffled between mappers and reducers.

Thank you.
Sai Kiriti B

On Jan 5, 2016 12:31 PM, "和风" <363938...@qq.com> wrote:

> hi:
> When executing a cube "build", the job fails with the exception:
> org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.metadata.HiveException
>
> logs:
>
> OS command error exit with 2 -- hive -e "USE default;
> DROP TABLE IF EXISTS kylin_intermediate_learn_kylin_two_2013122900_2016011200_d22e7c10_032a_4d22_a802_3b74937e86db;
>
> CREATE EXTERNAL TABLE IF NOT EXISTS kylin_intermediate_learn_kylin_two_2013122900_2016011200_d22e7c10_032a_4d22_a802_3b74937e86db
> (
> DEFAULT_KYLIN_CAL_DT_AGE_FOR_QTR_ID smallint
> ,DEFAULT_KYLIN_CAL_DT_AGE_FOR_MONTH_ID smallint
> ,DEFAULT_KYLIN_CAL_DT_AGE_FOR_DT_ID smallint
> ,DEFAULT_KYLIN_CAL_DT_AGE_FOR_RTL_MONTH_ID smallint
> ,DEFAULT_KYLIN_CAL_DT_AGE_FOR_CS_WEEK_ID smallint
> ,DEFAULT_KYLIN_CAL_DT_YEAR_ID string
> )
> ROW FORMAT DELIMITED FIELDS TERMINATED BY '\177'
> STORED AS SEQUENCEFILE
> LOCATION '/kylin/kylin_metadata/kylin-d22e7c10-032a-4d22-a802-3b74937e86db/kylin_intermediate_learn_kylin_two_2013122900_2016011200_d22e7c10_032a_4d22_a802_3b74937e86db';
>
> SET mapreduce.job.split.metainfo.maxsize=-1;
> SET mapred.compress.map.output=true;
> SET mapred.map.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
> SET mapred.output.compress=true;
> SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
> SET mapred.output.compression.type=BLOCK;
> SET mapreduce.job.max.split.locations=2000;
> SET dfs.replication=2;
> SET hive.merge.mapfiles=true;
> SET hive.merge.mapredfiles=true;
> SET hive.merge.size.per.task=268435456;
> SET hive.support.concurrency=false;
> SET hive.exec.compress.output=true;
> SET hive.auto.convert.join.noconditionaltask = true;
> SET hive.auto.convert.join.noconditionaltask.size = 3;
> INSERT OVERWRITE TABLE kylin_intermediate_learn_kylin_two_2013122900_2016011200_d22e7c10_032a_4d22_a802_3b74937e86db
> SELECT
> KYLIN_CAL_DT.AGE_FOR_QTR_ID
> ,KYLIN_CAL_DT.AGE_FOR_MONTH_ID
> ,KYLIN_CAL_DT.AGE_FOR_DT_ID
> ,KYLIN_CAL_DT.AGE_FOR_RTL_MONTH_ID
> ,KYLIN_CAL_DT.AGE_FOR_CS_WEEK_ID
> ,KYLIN_CAL_DT.YEAR_ID
> FROM DEFAULT.KYLIN_CAL_DT as KYLIN_CAL_DT
> WHERE (KYLIN_CAL_DT.CAL_DT >= '2013-12-29' AND KYLIN_CAL_DT.CAL_DT < '2016-01-12')
> ;
> "
>
> Logging initialized using configuration in jar:file:/usr/local/hive/lib/hive-common-1.2.1.jar!/hive-log4j.properties
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/usr/local/hive/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> OK
> Time taken: 0.936 seconds
> OK
> Time taken: 0.112 seconds
> OK
> Time taken: 0.438 seconds
> Query ID = root_20160105105405_88149f4a-a970-47d0-ba32-9a21ee5afde3
> Total jobs = 3
> Launching Job 1 out of 3
> Number of reduce tasks is set to 0 since there's no reduce operator
> Starting Job = job_1449731904014_1636, Tracking URL = http://cloud001:8088/proxy/application_1449731904014_1636/
> Kill Command = /usr/local/hadoop/bin/hadoop job -kill job_1449731904014_1636
> Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
> 2016-01-05 10:54:26,177 Stage-1 map = 0%, reduce = 0%
> 2016-01-05 10:54:27,236 Stage-1 map = 100%, reduce = 0%
> Ended Job = job_1449731904014_1636 with errors
> Error during job, obtaining debugging information...
> Examining task ID: task_1449731904014_1636_m_00 (and more) from job job_1449731904014_1636
>
> Task with the most failures(1):
> -
> Task ID:
>   task_1449731904014_1636_m_00
>
> URL:
>   http://0.0.0.0:8088/taskdetails.jsp?jobid=job_1449731904014_1636=task_1449731904014_1636_m_00
> -
> Diagnostic Messages for this Task:
> java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row
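For reference, the Snappy settings visible in the generated Hive command above correspond to entries like the following in kylin_job_conf.xml. Commenting them out is the workaround described; this is a sketch reconstructed from the property names in the quoted log, not an exact copy of any shipped config file:

```
<!-- kylin_job_conf.xml: compression properties to comment out when
     Snappy is not available on the cluster (names taken from the SET
     statements in the log above) -->
<!--
<property>
    <name>mapred.compress.map.output</name>
    <value>true</value>
</property>
<property>
    <name>mapred.map.output.compression.codec</name>
    <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
<property>
    <name>mapred.output.compress</name>
    <value>true</value>
</property>
<property>
    <name>mapred.output.compression.codec</name>
    <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
-->
```

With these commented out (and the compression.codec line in kylin.properties), the MR jobs fall back to uncompressed intermediate output, at the cost of more data shuffled between stages.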
Re: org.apache.hadoop.hive.ql.metadata.HiveException
Hi 和风,

Screenshots are not search-engine friendly; please use text as much as possible.

2016-01-05 14:34 GMT+08:00 hongbin ma:

> can't see attachment. please provide detailed log
>
> --
> Regards,
>
> *Bin Mahone | 马洪宾*
> Apache Kylin: http://kylin.io
> Github: https://github.com/binmahone

--
Best regards,

Shaofeng Shi
Re: org.apache.hadoop.hive.ql.metadata.HiveException
Can't see the attachment; please provide a detailed log.

--
Regards,

*Bin Mahone | 马洪宾*
Apache Kylin: http://kylin.io
Github: https://github.com/binmahone