testing subscription
I have stopped receiving any email from this list!
Re: testing subscription
ASF infra had a mail outage for several days last week, impacting all mailing lists. It's fixed now and is still burning through backlog; please be patient. More info: https://blogs.apache.org/infra/entry/mail_outage

-- Sean

On May 11, 2014 4:18 AM, Lefty Leverenz <leftylever...@gmail.com> wrote:

> Same here. The archives (http://mail-archives.apache.org/mod_mbox/hive-user/201405.mbox/date) only have a few messages recently, but at least one failed to reach me, and a reply I sent on the thread "Re: build Hive-0,13" (http://mail-archives.apache.org/mod_mbox/hive-user/201405.mbox/%3cCALr1C9pSZS454wKbx5_c7kGvQuZVO=y7gtcvryem4w5ruo2...@mail.gmail.com%3e) isn't in the archives.
>
> The dev@hive list seems to have a similar problem. I'm checking its archives now to see what I'm missing.
>
> At least I got your message, Peyman. Thanks for testing.
>
> -- Lefty
>
> On Sat, May 10, 2014 at 5:15 PM, Peyman Mohajerian <mohaj...@gmail.com> wrote:
>
>> I have stopped receiving any email from this list!
java.lang.NoSuchFieldError: HIVE_ORC_FILE_MEMORY_POOL when inserting data to ORC table
Resending, since this mailing list had issues posting messages over the last few days.

From: John Zeng
Sent: Friday, May 9, 2014 6:18 PM
To: user@hive.apache.org
Subject: java.lang.NoSuchFieldError: HIVE_ORC_FILE_MEMORY_POOL when inserting data to ORC table

Hi, All,

I created an ORC table by doing this:

    add jar /home/dguser/hive-0.12.0/lib/hive-exec-0.12.0.jar;

    CREATE TABLE orc_UserDataTest2 (
      PassportNumbers1 STRING,
      PassportNumbers2 STRING,
      TaxID STRING,
      CM11 STRING,
      CM13 STRING,
      CM15 STRING,
      Name STRING,
      EmailAddress STRING
    )
    ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
    STORED AS
      INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
      OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat';

The table creation is successful and I can see a new folder under the warehouse directory:

    Logging initialized using configuration in jar:file:/home/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hive/lib/hive-common-0.10.0-cdh4.2.0.jar!/hive-log4j.properties
    Hive history file=/tmp/dguser/hive_job_log_dguser_201405091727_1340301870.txt
    Added /home/dguser/hive-0.12.0/lib/hive-exec-0.12.0.jar to class path
    Added resource: /home/dguser/hive-0.12.0/lib/hive-exec-0.12.0.jar
    OK
    Time taken: 1.953 seconds

But when I inserted data into it, I got the following fatal error in the map task:

    2014-05-09 17:37:48,447 FATAL ExecMapper: java.lang.NoSuchFieldError: HIVE_ORC_FILE_MEMORY_POOL
        at org.apache.hadoop.hive.ql.io.orc.MemoryManager.<init>(MemoryManager.java:83)
        at org.apache.hadoop.hive.ql.io.orc.OrcFile.getMemoryManager(OrcFile.java:302)
        at org.apache.hadoop.hive.ql.io.orc.OrcFile.access$000(OrcFile.java:32)
        at org.apache.hadoop.hive.ql.io.orc.OrcFile$WriterOptions.<init>(OrcFile.java:145)
        at org.apache.hadoop.hive.ql.io.orc.OrcFile.writerOptions(OrcFile.java:241)
        at org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat.getHiveRecordWriter(OrcOutputFormat.java:115)
        at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getRecordWriter(HiveFileFormatUtils.java:250)
        at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:237)
        at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketFiles(FileSinkOperator.java:496)
        at org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:543)
        at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:474)
        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:800)
        at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
        at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:474)
        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:800)
        at org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:83)
        at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:474)
        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:800)
        at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:546)
        at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:143)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:418)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:333)
        at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
        at org.apache.hadoop.mapred.Child.main(Child.java:262)

The way I inserted data is just by copying data from another table (which has 100 rows):

    add jar /home/dguser/hive-0.12.0/lib/hive-exec-0.12.0.jar;
    insert overwrite table orc_UserDataTest2 select * from UserDataTest2;

Any idea what this error is?

Thanks
John
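[Editorial aside, not part of the original thread: the session log above shows the cluster itself running Hive 0.10 from CDH 4.2 (hive-common-0.10.0-cdh4.2.0.jar), while the jar added with `add jar` is hive-exec-0.12.0. Hive 0.10 predates ORC, so its HiveConf.ConfVars enum has no HIVE_ORC_FILE_MEMORY_POOL constant; when the 0.12 ORC writer links against the cluster's older HiveConf at runtime, the JVM raises NoSuchFieldError. The toy sketch below simulates that version skew with a stand-in enum; ConfVarsV010 and VersionSkewDemo are hypothetical names, not real Hive code.]

```java
// A stand-in for Hive 0.10's HiveConf.ConfVars, which has no ORC-related constants.
enum ConfVarsV010 { HIVE_EXEC_SCRATCHDIR }

public class VersionSkewDemo {
    // Check whether an enum class defines a constant with the given name.
    static boolean hasConstant(Class<? extends Enum<?>> e, String name) {
        for (Enum<?> c : e.getEnumConstants()) {
            if (c.name().equals(name)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // Constant that hive-exec 0.12's ORC MemoryManager reads at class-link time:
        String needed = "HIVE_ORC_FILE_MEMORY_POOL";
        if (hasConstant(ConfVarsV010.class, needed)) {
            System.out.println("runtime ConfVars defines " + needed);
        } else {
            System.out.println("runtime ConfVars lacks " + needed
                + " -> NoSuchFieldError when newer code links against it");
        }
    }
}
```

The practical upshot, under the same assumption, is that mixing a newer hive-exec jar into an older Hive installation cannot work for ORC; the whole installation would need to be at a version that ships ORC support.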
Re: Tez Hive problems.
Hi Bikas,

I was able to resolve this issue by copying hive-exec.jar to hdfs://user/admin/; Tez was not able to localize the jar while running.

I am facing another problem now, though: my Tez job does not progress. It just keeps printing:

    Map 1: 0/2  Map 5: 0/2  Reducer 2: 0/1  Reducer 3: 0/1  Reducer 4: 0/1
    Map 1: 0/2  Map 5: 0/2  Reducer 2: 0/1  Reducer 3: 0/1  Reducer 4: 0/1
    Map 1: 0/2  Map 5: 0/2  Reducer 2: 0/1  Reducer 3: 0/1  Reducer 4: 0/1
    [the same line repeats many times]

The hive logs also keep printing the same thing.

Any idea where I should look or how I should debug this? Pointers on the problem itself would also help.

Thanks
Archit.

On Sun, May 11, 2014 at 7:28 AM, Bikas Saha <bi...@hortonworks.com> wrote:

> Setting execution engine to mr and framework name to yarn = Hive compiles to MR and runs on MR.
> Setting execution engine to mr and framework name to yarn-tez = Hive compiles to MR and runs on Tez.
> Setting execution engine to tez = Hive compiles to Tez and runs on Tez.
>
> Did you change the execution engine in the same shell that was earlier running with the execution engine set to mr? If yes, then try setting the execution engine in hive-site.xml and use a new hive shell. Have you set all required hive configurations?
>
> If you can still reproduce the issue, then you should create a jira in the Apache Hive project to track the bug.
>
> Bikas
>
> From: Archit Thakur [mailto:archit279tha...@gmail.com]
> Sent: Thursday, May 08, 2014 4:42 AM
> To: user@hive.apache.org; u...@tez.incubator.apache.org
> Subject: Tez Hive problems.
>
> Hi,
>
> I am facing a few problems while using Tez on Hive. I am able to run Tez through YARN by submitting an independent MR processing job. Also, a SELECT query on the hive shell succeeds when I set hive.execution.engine=mr but set mapreduce.framework.name=yarn-tez. This launches a Tez job, which I verified on the AM's UI.
>
> But when I set hive.execution.engine=tez, it gives me:
>
>     Query ID = admin_20140508112525_37cf7b55-75ac-408c-b96f-8e40ba2f8e2b
>     Total jobs = 1
>     Launching Job 1 out of 1
>     FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask
>
> I checked the hive logs:
>
>     exec.Task (TezTask.java:execute(185)) - Failed to execute tez graph.
>     java.io.FileNotFoundException: File does not exist: hdfs:/user/admin
>         at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1110)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
>         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
>         at org.apache.hadoop.hive.ql.exec.tez.DagUtils.getDefaultDestDir(DagUtils.java:638)
>         at org.apache.hadoop.hive.ql.exec.tez.DagUtils.getHiveJarDirectory(DagUtils.java:728)
>         at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.createJarLocalResource(TezSessionState.java:314)
>         at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:169)
>         at
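[Editorial footnote, not part of the original thread: per Archit's follow-up, this FileNotFoundException went away after the Hive exec jar was staged in the user's HDFS home directory; DagUtils.getDefaultDestDir in the trace resolves a destination under /user/<name>, which did not yet exist. An untested ops sketch of that fix; the local jar path is an assumption, substitute your installation's hive-exec jar:]

```shell
# Sketch only: local jar path is hypothetical; adjust to your install.
# Create the HDFS directory the stack trace says is missing,
# then stage the jar there so Tez can localize it.
hdfs dfs -mkdir -p /user/admin
hdfs dfs -put /usr/lib/hive/lib/hive-exec.jar /user/admin/
```

[Where Hive looks for the jar can vary by version (some 0.13-era builds expose a hive.jar.directory setting for this), so check your version's HiveConf before relying on this exact path.]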