[no subject]

2024-02-03 Thread Gavin McDonald
Hello to all users, contributors and Committers!

The Travel Assistance Committee (TAC) are pleased to announce that
travel assistance applications for Community over Code EU 2024 are now
open!

We will be supporting Community over Code EU, Bratislava, Slovakia,
June 3rd - 5th, 2024.

TAC exists to help those that would like to attend Community over Code
events, but are unable to do so for financial reasons. For more info
on this year's applications and qualifying criteria, please visit the
TAC website at https://tac.apache.org/. Applications are already
open on https://tac-apply.apache.org/, so don't delay!

The Apache Travel Assistance Committee will only be accepting
applications from those people that are able to attend the full event.

Important: Applications close on Friday, March 1st, 2024.

Applicants have until the closing date above to submit their
applications (which should contain as much supporting material as is
required to process their request efficiently and accurately); this
will enable TAC to announce successful applications shortly
afterwards.

As usual, TAC expects to deal with applications from a diverse range
of backgrounds; therefore, we encourage (as always) anyone thinking
about sending in an application to do so as soon as possible.

For those who will need a visa to enter the country, we advise you to apply
now so that you have enough time in case of interview delays. Do not
wait until you know whether you have been accepted.

We look forward to greeting many of you in Bratislava, Slovakia in June,
2024!

Kind Regards,

Gavin

(On behalf of the Travel Assistance Committee)


[no subject]

2023-11-20 Thread Rajbir singh
user-unsubscribe


-- 
Regards,
Rajbir


[no subject]

2023-10-12 Thread luckydog xf
Hi list,

According to this link
https://cwiki.apache.org/confluence/display/Hive/AdminManual+Metastore+3.0+Administration
(jump to the section "Running the Metastore Without Hive"), in order to run the
metastore without Hive, you set the following:

<property>
  <name>metastore.task.threads.always</name>
  <value>org.apache.hadoop.hive.metastore.events.EventCleanerTask,org.apache.hadoop.hive.metastore.MaterializationsCacheCleanerTask</value>
</property>

However, since hive-standalone-metastore 3.1.0, this setting has been
replaced.
I checked v3.1.0, 3.1.2 and 3.1.3. The new configuration is
===
<property>
  <name>metastore.task.threads.always</name>
  <value>org.apache.hadoop.hive.metastore.events.EventCleanerTask,org.apache.hadoop.hive.metastore.RuntimeStatsCleanerTask,org.apache.hadoop.hive.metastore.repl.DumpDirCleanerTask</value>
  <description>Comma separated list of tasks that will be started in separate threads. These will always be started, regardless of whether the metastore is running in embedded mode or in server mode. They must implement org.apache.hadoop.hive.metastore.MetastoreTaskThread</description>
</property>
===
I googled the release notes and changelog, but nothing turned up.

So I guess the documentation is out of date. What's the correct setup if I use
3.1.x?
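
For what it's worth, one way to check which task classes a given release
actually ships (a sketch; the jar name is taken from the 3.1.2 binary
distribution):

jar tf hive-standalone-metastore-3.1.2.jar | grep CleanerTask
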
Thanks.


[no subject]

2022-08-13 Thread 冯 小虎
unsubscribe


[no subject]

2021-05-26 Thread Wen Shi
Hi fellow hive developers,

I am developing a custom storage-based authorization manager for the Hive
metastore, details here:

https://cwiki.apache.org/confluence/display/Hive/Storage+Based+Authorization+in+the+Metastore+Server

And I know there is a standalone HMS server in hive:

https://github.com/apache/hive/tree/master/standalone-metastore

However, it seems that the standalone HMS does not have the
authorization manager config:
https://github.com/apache/hive/blob/master/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/conf/MetastoreConf.java

I am not able to find the config
key hive.security.metastore.authorization.manager in the above file.
Does that mean standalone HMS does not support this config?
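
For reference, this is how the key is set in hive-site.xml for a regular
metastore, per the wiki page above (the value is the stock storage-based
provider class; whether the standalone HMS reads this key at all is exactly
what I am asking):

<property>
  <name>hive.security.metastore.authorization.manager</name>
  <value>org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider</value>
</property>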

Thanks a lot

Wen


[no subject]

2020-06-17 Thread PengHui Li
user-unsubscribe


[no subject]

2018-06-12 Thread Sowjanya Kakarala
Hi Guys,


I have an EMR cluster with 4 datanodes and one master node, with 120GB of data
storage left. I have been running Sqoop jobs which load data into a Hive
table. After some jobs ran successfully I suddenly see these errors all over
the namenode and datanode logs.

I have tried changing many configurations as suggested on the Stack Overflow
and Hortonworks sites but couldn't find a way to fix it.


Here is the error:

2018-06-12 15:32:35,933 WARN [main] org.apache.hadoop.mapred.YarnChild:
Exception running child :
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File
/user/hive/warehouse/monolith.db/tblname/_SCRATCH0.28417629602676764/time_stamp=2018-04-02/_temporary/1/_temporary/attempt_1528318855054_3528_m_00_1/part-m-0
could only be replicated to 0 nodes instead of minReplication (=1).  There
are 4 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1735)
    at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:265)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2561)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:829)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:510)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:847)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:790)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2486)

    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1489)
    at org.apache.hadoop.ipc.Client.call(Client.java:1435)
    at org.apache.hadoop.ipc.Client.call(Client.java:1345)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
    at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:444)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:409)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
    at com.sun.proxy.$Proxy15.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1838)
    at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1638)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:704)

References I already followed:

https://community.hortonworks.com/articles/16144/write-or-append-failures-in-very-small-clusters-un.html

https://stackoverflow.com/questions/14288453/writing-to-hdfs-from-java-getting-could-only-be-replicated-to-0-nodes-instead

https://wiki.apache.org/hadoop/CouldOnlyBeReplicatedTo

https://stackoverflow.com/questions/36015864/hadoop-be-replicated-to-0-nodes-instead-of-minreplication-1-there-are-1/36310025
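
For reference, the standard HDFS capacity checks (a sketch; output omitted)
that show whether the datanodes really have usable space and healthy blocks:

hdfs dfsadmin -report
hdfs dfs -df -h /
hdfs fsck / | tail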


Any help is appreciated.


Thanks

Sowjanya


[no subject]

2018-04-25 Thread wang wei
Hi all,

Does Hive support executing a command from history like the shell does
(using a number, e.g. !10)?

And I find Beeline does not support this. In the Hive CLI,

! <command>

executes a shell command from the Hive shell; in Beeline, everything with the
! prefix is treated as a SQLLine CLI command. I think history re-execution
would be a very useful command.
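
If I read the SQLLine docs correctly, Beeline can at least list its history
and run shell commands through !sh (a sketch; both are SQLLine commands, and
I have not found a re-run-by-number equivalent):

0: jdbc:hive2://> !history
0: jdbc:hive2://> !sh ls /tmp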


[no subject]

2017-11-28 Thread Angel Francisco orta
Unsubscribe




[no subject]

2016-04-16 Thread 469564481
I do not want to receive email. Thanks!


-- Original --
From: "Jörn Franke"
Date: Sat, Apr 16, 2016 03:10 PM
To: "user"

Subject:  Re: Mappers spawning Hive queries



Just out of curiosity, what is the use case behind this?

How do you call the shell script?

> On 16 Apr 2016, at 00:24, Shirish Tatikonda  
> wrote:
> 
> Hello,
> 
> I am trying to run multiple hive queries in parallel by submitting them 
> through a map-reduce job. 
> More specifically, I have a map-only hadoop streaming job where each mapper 
> runs a shell script that does two things -- 1) parses input lines obtained 
> via streaming; and 2) submits a very simple hive query (via hive -e ...) with 
> parameters computed from step-1. 
> 
> Now, when I run the streaming job, the mappers seem to be stuck and I don't 
> know what is going on. When I look at the ResourceManager web UI, I don't see 
> any new MR Jobs (triggered from the hive query). I am trying to understand 
> this behavior. 
> 
> This may be a bad idea to begin with, and there may be better ways to 
> accomplish the same task. However, I would like to understand the behavior of 
> such a MR job.
> 
> Any thoughts?
> 
> Thank you,
> Shirish
>

[no subject]

2015-03-25 Thread jake lawson
Stop emailing me


[no subject]

2015-02-24 Thread Jadhav Shweta
Hi,

I am trying to run a Hive query. It executes fine from the Beeline interface,
but it throws

java.lang.OutOfMemoryError: Java heap space

when connecting using JDBC. I am using Hive 0.13.0 and HiveServer2.
Which parameters do I need to configure for this?
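
A minimal sketch of the settings I have been looking at (values illustrative;
MyJdbcClient is a stand-in for our application class): raising the
HiveServer2 heap via conf/hive-env.sh before restarting it, and the JDBC
client JVM's own -Xmx:

export HADOOP_HEAPSIZE=2048
java -Xmx1024m -cp ... MyJdbcClient
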
thanks

Shweta Jadhav




[no subject]

2015-02-17 Thread Payal Radheshamji Agrawal
unsubscribe


[no subject]

2015-01-17 Thread DU DU
Hi folks,
The window clause in Hive 0.13.* does not work for the following example
statements:

   - BETWEEN 2 PRECEDING AND 1 PRECEDING
   - BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING

Is there a reported JIRA for this? If not, I'll create one.
Thanks,
Will

jdbc:hive2://> SELECT name, dept_num, salary,
. . . . . . .> MAX(salary) OVER (PARTITION BY dept_num ORDER BY
. . . . . . .> name ROWS
. . . . . . .> BETWEEN 2 PRECEDING AND 1 PRECEDING) win4_alter
. . . . . . .> FROM employee_contract
. . . . . . .> ORDER BY dept_num, name;
Error: Error while compiling statement: FAILED: SemanticException Failed to
breakup Windowing invocations into Groups. At least 1 group must only
depend on input columns. Also check for circular dependencies.
Underlying error: Window range invalid, start boundary is greater than end
boundary: window(start=range(2 PRECEDING), end=range(1 PRECEDING))
(state=42000,code=4)

jdbc:hive2://> SELECT name, dept_num, salary,
. . . . . . .> MAX(salary) OVER (PARTITION BY dept_num ORDER BY
. . . . . . .> name ROWS
. . . . . . .> BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) win1
. . . . . . .> FROM employee_contract
. . . . . . .> ORDER BY dept_num, name;
Error: Error while compiling statement: FAILED: SemanticException End of a
WindowFrame cannot be UNBOUNDED PRECEDING (state=42000,code=4)
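
For comparison, a frame that Hive 0.13 does accept (a sketch against the same
table):

SELECT name, dept_num, salary,
MAX(salary) OVER (PARTITION BY dept_num ORDER BY name
ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) win_ok
FROM employee_contract
ORDER BY dept_num, name;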


[no subject]

2014-09-23 Thread Poorvi Ahirwal
Hi,
I am executing a MapReduce program with HCatalog and a Hive database. Even
though the jars are included, it's showing this error:

Exception in thread "main" java.io.IOException:
com.google.common.util.concurrent.UncheckedExecutionException:
javax.jdo.JDOFatalUserException: Class
org.datanucleus.api.jdo.JDOPersistenceManagerFactory was not found.
NestedThrowables:
java.lang.ClassNotFoundException:
org.datanucleus.api.jdo.JDOPersistenceManagerFactory
    at org.apache.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:88)
    at org.apache.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:64)
..
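
From searching around, the usual cause of a missing
JDOPersistenceManagerFactory seems to be that the DataNucleus jars under
$HIVE_HOME/lib never reach the launcher's classpath; a sketch of one way to
add them (paths illustrative):

export HADOOP_CLASSPATH="$HIVE_HOME/lib/*:$HIVE_HOME/conf"
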
Please help

thanks


[no subject]

2014-07-09 Thread Grandl Robert
Hi guys,

I am trying to give a DAG in Tez a different id, based on the job
name (e.g. query55.sql from hive-testbench) plus the input size.

So my new identifier should be, for example, query55_2048MB. It seems that a DAG
in Tez already takes a name which comes from jobPlan.getName(),
passed through the google protobuf layer. Because the DAG is generated in Hive, I
think the new identifier should come from Hive, right?

Can you point me to the classes I should change in Hive in order to
propagate the new identifier for the DAG? The Tez DAG seems to be created in
TezTask.java, but I am not sure how to take the job name from the command line and
propagate it to where the DAG is built.
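
(One possibly relevant knob, though I am not sure it exists in this Hive
version: later Hive releases expose a hive.query.name property which Tez uses
as the DAG name, e.g.

set hive.query.name=query55_2048MB;

If that works here, it would avoid patching TezTask.java at all.)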


Thanks,
Robert


[no subject]

2014-01-08 Thread Kishore kumar
Hi Experts,

Is there a way to change multiple column names and types?
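
What I have found so far (a sketch on a hypothetical table t; is there
anything better?) is that ALTER TABLE CHANGE renames/retypes one column per
statement, while REPLACE COLUMNS redefines the whole column list in one shot:

ALTER TABLE t CHANGE c1 id BIGINT;
ALTER TABLE t CHANGE c2 name STRING;
ALTER TABLE t REPLACE COLUMNS (id BIGINT, name STRING);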

-- 

*Kishore Kumar*
ITIM


[no subject]

2013-11-02 Thread Mohammad Islam
Hi,

I ran "mvn clean install -DskipTests;cd itests; mvn clean install -DskipTests" 
and got the following.

Error
===
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
(default-testCompile) on project hive-it-qfile: Compilation failure: 
Compilation failure:
[ERROR] 
/Users/mislam/apache/hive/itests/qtest/target/generated-test-sources/java/org/apache/hadoop/hive/cli/TestCliDriver.java:[7170,15]
 testCliDriver_mybucket_1() is already defined in 
org.apache.hadoop.hive.cli.TestCliDriver
[ERROR] 
/Users/mislam/apache/hive/itests/qtest/target/generated-test-sources/java/org/apache/hadoop/hive/cli/TestCliDriver.java:[7174,15]
 testCliDriver_mybucket_1() is already defined in 
org.apache.hadoop.hive.cli.TestCliDriver
[ERROR] 
/Users/mislam/apache/hive/itests/qtest/target/generated-test-sources/java/org/apache/hadoop/hive/cli/TestCliDriver.java:[7178,15]
 testCliDriver_mybucket_1() is already defined in 
org.apache.hadoop.hive.cli.TestCliDriver

My investigations:

I did some investigation and found that TestCliDriver.vm (line 105) looks 
for the **first** instance of "." and uses the part before it as the function 
name, producing something like "testCliDriver_mybucket_1".

The problem: there are three mybucket_1.*.q files (mybucket_1.5.q, 
mybucket_1.7.q, mybucket_1.8.q). The above logic therefore creates the 
**same** function name for all three .q files, causing the compilation error above.

Possible solutions:
===
1. Change the code in TestCliDriver.vm to handle this use case during source 
generation.
2. Stop supporting *.q files with multiple dots "." in the file name, and 
rename those three files to remove the second ".".

Next step:
=
If someone can confirm it, I can create a JIRA and provide a patch.

Regards,
Mohammad


[no subject]

2013-08-07 Thread daniel intskirveli
Hello,

I've been trying to build hive 0.12.0 from trunk against Hadoop 0.23.3,
because I want to test some changes I made to HCatalog.

The entire project seems to build fine.
I have a test mapreduce job that uses classes from my newly
built hcatalog-core-0.12.0-SNAPSHOT.jar, and that builds fine too.

I'm running my test as follows:
HADOOP_OPTS=-verbose hadoop jar ~/hadoop_test/target/hcat-test-1.0.jar
org.myorg.HCatTest -files $HCATJAR -libjars $LIBJARS

If $HCATJAR is set to hcatalog-core-0.5.0-cdh4.3.0.jar (Cloudera's HCatalog
which I was using previously), the test works correctly.

However, when I set $HCATJAR to hcatalog-core-0.12.0-SNAPSHOT.jar, the
hcatalog jar compiled as part of hive 0.12.0, all the map tasks fail with
the following error:

Error: tried to access method
org.apache.hadoop.mapred.JobContextImpl.<init>(Lorg/apache/hadoop/mapred/JobConf;Lorg/apache/hadoop/mapreduce/JobID;Lorg/apache/hadoop/util/Progressable;)V
from class org.apache.hcatalog.shims.HCatHadoopShims23

Has anyone seen this before?


[no subject]

2013-04-22 Thread suneel hadoop
Can anyone help me change this SQL to Pig Latin?



SELECT ('CSS'||DB.DISTRICT_CODE||DB.BILLING_ACCOUNT_NO) BAC_KEY,
       CASE WHEN T1.TAC_142 IS NULL THEN 'N' ELSE T1.TAC_142 END TAC_142
FROM
(
  SELECT DISTRICT_CODE, BILLING_ACCOUNT_NO,
         MAX(CASE WHEN TAC_1 = 'Y' AND (TAC_2 = 'Y' OR TAC_3 = 'Y') THEN 'Y' ELSE 'N' END) TAC_142
  FROM
  (
    SELECT DI.DISTRICT_CODE, DI.BILLING_ACCOUNT_NO, DI.INST_SEQUENCE_NO,
           MAX(CASE WHEN TRIM(DIP.PRODUCT_CODE) = 'A14493' AND UPPER(DI.HAZARD) LIKE '%999%EMERGENCY%LINE%'
                     AND UPPER(DI.WARNING) LIKE '%USE%999%ALERT%METHOD%' THEN 'Y' ELSE 'N' END) TAC_1,
           MAX(CASE WHEN TRIM(DIP.PRODUCT_TYPE) IN ('20','21')
                     AND TRIM(DIP.MAINTENANCE_CONTRACT) IN ('E','T') THEN 'Y' ELSE 'N' END) TAC_2,
           MAX(CASE WHEN TRIM(DIP.PRODUCT_CODE) IN ('A14498','A14428','A22640') THEN 'Y' ELSE 'N' END) TAC_3
    FROM D_INSTALLATION DI,
         D_INSTALLATION_PRODUCT DIP
    WHERE DIP.INST_SEQUENCE_NO = DI.INST_SEQUENCE_NO
      AND DIP.BAC_WID = DI.BAC_WID
    GROUP BY DI.DISTRICT_CODE, DI.BILLING_ACCOUNT_NO, DI.INST_SEQUENCE_NO
  )
  GROUP BY DISTRICT_CODE, BILLING_ACCOUNT_NO
) T1,
D_BILLING_ACCOUNT DB
WHERE DB.DISTRICT_CODE = T1.DISTRICT_CODE(+)
  AND DB.BILLING_ACCOUNT_NO = T1.BILLING_ACCOUNT_NO(+)


[no subject]

2013-04-22 Thread YouPeng Yang
Hi hive users

  This is my first time to post a question here.

I have gotten an exception when I count the rows of my hive table after
loading the data:

hive>create EXTERNAL TABLE  NMS_CMTS_CPU_CDX_TEST (CMTSID INT,MSEQ
INT,GOTTIME BIGINT,CMTSINDEX INT,CPUTOTAL INT,DESCR STRING) ROW FORMAT
 DELIMITED FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n'  STORED AS
TEXTFILE;

hive>load data inpath '/user/sqoop/NMS_CMTS_CPU_CDX3/NMS_CMTS_CPU_CDX3'
 into table NMS_CMTS_CPU_CDX_TEST;

hive> select count(1) from NMS_CMTS_CPU_CDX_TEST;
I get an exception on step 3; the logs are as follows.

Any help would be greatly appreciated.
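
For what it's worth, the first knobs I plan to try (illustrative guesses, not
a confirmed fix): forcing a real cluster run instead of the local in-process
one, and pointing the scratch directory somewhere writable:

set hive.exec.mode.local.auto=false;
set hive.exec.scratchdir=/tmp/hive-scratch;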

Regards
---

WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please
use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties
files.
Execution log at:
/tmp/hive/hive_20130422162020_791a7b61-6ba0-466d-99ba-5c2556bafaa4.log
Job running in-process (local Hadoop)
Hadoop job information for null: number of mappers: 1; number of reducers: 1
2013-04-22 16:24:21,604 null map = 0%,  reduce = 0%
2013-04-22 16:25:21,965 null map = 0%,  reduce = 0%
2013-04-22 16:26:22,902 null map = 0%,  reduce = 0%
2013-04-22 16:26:27,312 null map = 100%,  reduce = 0%
Ended Job = job_1364348895095_0055 with errors
Error during job, obtaining debugging information...
Examining task ID: task_1364348895095_0055_m_00 (and more) from job
job_1364348895095_0055
Unable to retrieve URL for Hadoop Task logs. Does not contain a valid
host:port authority: local

Task with the most failures(4):
-
Task ID:
  task_1364348895095_0055_m_00

URL:
  Unavailable
-
Diagnostic Messages for this Task:
Error: java.lang.RuntimeException: java.io.FileNotFoundException:
/tmp/hive/hive_2013-04-22_16-20-45_720_3839682514463028560/-mr-10001/89dd576e-fb9d-409a-8b46-2e46b7d21160
(No such file or directory)
    at org.apache.hadoop.hive.ql.exec.Utilities.getMapRedWork(Utilities.java:224)
    at org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:255)
    at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:381)
    at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:374)
    at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:536)
    at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.<init>(MapTask.java:160)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:381)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:334)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:152)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:147)
Caused by: java.io.FileNotFoundException:
/tmp/hive/hive_2013-04-22_16-20-45_720_3839682514463028560/-mr-10001/89dd576e-fb9d-409a-8b46-2e46b7d21160
(No such file or directory)
    at java.io.FileInputStream.open(Native Method)
    at java.io.FileInputStream.<init>(FileInputStream.java:120)
    at java.io.FileInputStream.<init>(FileInputStream.java:79)
    at org.apache.hadoop.hive.ql.exec.Utilities.getMapRedWork(Utilities.java:215)
    ... 12 more


Execution failed with exit status: 2
13/04/22 16:26:28 ERROR exec.Task: Execution failed with exit status: 2
Obtaining error information
13/04/22 16:26:28 ERROR exec.Task: Obtaining error information

Task failed!
Task ID:
  Stage-1

Logs:

13/04/22 16:26:28 ERROR exec.Task:
Task failed!
Task ID:
  Stage-1

Logs:

13/04/22 16:26:28 ERROR exec.ExecDriver: Execution failed with exit status:
2
FAILED: Execution Error, return code 2 from
org.apache.hadoop.hive.ql.exec.MapRedTask
13/04/22 16:26:28 ERROR ql.Driver: FAILED: Execution Error, return code 2
from org.apache.hadoop.hive.ql.exec.MapRedTask
13/04/22 16:26:28 INFO ql.Driver: 
13/04/22 16:26:28 INFO ql.Driver: 
13/04/22 16:26:28 INFO ql.Driver: 


[no subject]

2013-02-14 Thread neelesh gadhia
Hello,


I am a newbie to using UDFs on Hive, but I implemented these GenericUDFs
(https://issues.apache.org/jira/browse/HIVE-2361) on hive 0.9.0 and
hadoop 1.1.1, and was able to add the jar to hive:

hive> select * from emp;
OK
1    10    1000
2    10    1200
3    12    1500
4    12    300
5    12    1800
6    20    5000
7    20    7000
8    20    1
Time taken: 0.191 seconds

hive> add jar /usr/local/Cellar/hive/0.9.0/libexec/lib/GenUDF.jar;  
   
Added /usr/local/Cellar/hive/0.9.0/libexec/lib/GenUDF.jar to class path
Added resource: /usr/local/Cellar/hive/0.9.0/libexec/lib/GenUDF.jar

hive> create temporary function nexr_sum as 
'com.nexr.platform.analysis.udf.GenericUDFSum';
OK
Time taken: 0.012 seconds



and ran the sample SQL shown below.


SELECT t.empno, t.deptno, t.sal, nexr_sum(hash(t.deptno),t.sal) as sal_sum
FROM (
select a.empno, a.deptno, a.sal from emp a
distribute by hash(a.deptno)
sort BY a.deptno, a.empno
) t;

The SQL failed with the errors below. Any pointers or advice towards resolving
this are much appreciated.

2013-02-13 23:30:18,925 INFO org.apache.hadoop.mapred.JobTracker: Adding task (REDUCE) 'attempt_201302132324_0002_r_00_3' to tip task_201302132324_0002_r_00, for tracker 'tracker_192.168.0.151:localhost/127.0.0.1:50099'
2013-02-13 23:30:18,925 INFO org.apache.hadoop.mapred.JobTracker: Removing task 'attempt_201302132324_0002_r_00_2'
2013-02-13 23:30:26,484 INFO org.apache.hadoop.mapred.TaskInProgress: Error from attempt_201302132324_0002_r_00_3:
java.lang.RuntimeException: Error in configuring object
    at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
    at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:486)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:421)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1136)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:88)
    ... 9 more
Caused by: java.lang.RuntimeException: Reduce operator initialization failed
    at org.apache.hadoop.hive.ql.exec.ExecReducer.configure(ExecReducer.java:157)
    ... 14 more
Caused by: java.lang.NullPointerException
    at org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator.initialize(ExprNodeGenericFuncEvaluator.java:137)
    at org.apache.hadoop.hive.ql.exec.Operator.initEvaluators(Operator.java:896)
    at org.apache.hadoop.hive.ql.exec.Operator.initEvaluatorsAndReturnStruct(Operator.java:922)
    at org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:60)
    at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:357)
    at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:433)
    at org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:389)
    at org.apache.hadoop.hive.ql.exec.ExtractOperator.initializeOp(ExtractOperator.java:40)
    at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:357)
    at org.apache.hadoop.hive.ql.exec.ExecReducer.configure(ExecReducer.java:150)
    ... 14 more
2013-02-13 23:30:29,819 INFO org.apache.hadoop.mapred.TaskInProgress: TaskInProgress task_201302132324_0002_r_00 has failed 4 times.
2013-02-13 23:30:29,820 INFO org.apache.hadoop.mapred.JobInProgress: TaskTracker at '192.168.0.151' turned 'flaky'
 12 more lines..

I tried a different function, "GenericUDFMax", and got the same error.

Any pointers/advice on what could be wrong?


[no subject]

2013-01-11 Thread amar, dilshad
unsubscribe

Thanks & Regards,
Dilshad Amar
IT Specialist - Global Commercial Services




[no subject]

2012-11-28 Thread imen Megdiche
Hello,

I got this error when trying to create a test table with Hive:

FAILED: Error in metadata: MetaException (message: Got exception:
java.io.FileNotFoundException File file:/user/hive/warehouse/test does
not exist.)

I changed the default warehouse directory (hive.metastore.warehouse.dir) in
the hive-default.xml file and executed the Hadoop HDFS commands to create
the new warehouse directory, but I still have this error.
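
The HDFS commands I mean are the ones from the Getting Started guide (a
sketch; the path assumes the default warehouse location):

hadoop fs -mkdir /user/hive/warehouse
hadoop fs -chmod g+w /user/hive/warehouse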

Thank you in advance


[no subject]

2012-11-20 Thread Mohit Chaudhary01
Hello

I want some help with text analysis using Hive. Could you give me some source
code or HQL which I can use for it?
I have used ngrams and context_ngrams on a text, but I want something else
that is useful for a beginner.
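
For example, the kind of thing I am after: a plain word count via explode
(a sketch; it assumes a table docs with a single STRING column named line):

SELECT word, count(1) AS cnt
FROM docs LATERAL VIEW explode(split(line, '\\s+')) t AS word
GROUP BY word
ORDER BY cnt DESC
LIMIT 25;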

Thanks




[no subject]

2012-11-20 Thread imen Megdiche
hello,

I want to know the principle of HiveQL queries, i.e. how Hive translates these
queries into MapReduce jobs. Is there any piece of source code that can
explain that?
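
The entry points I have found so far (class names as they appear in the Hive
source tree; corrections welcome):

org.apache.hadoop.hive.ql.Driver                  (overall compile/execute pipeline)
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer  (turns the parsed query into an operator plan)
org.apache.hadoop.hive.ql.exec.MapRedTask         (submits the generated MapReduce jobs)
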
Thank you very much for your responses.


[no subject]

2012-11-13 Thread imen Megdiche
Hello,
I cannot find a solution to run Hive under Cygwin.
Although Hadoop works very well, the hive command just hangs forever.
Thank you in advance for your answers


[no subject]

2012-08-28 Thread rohithsharma
Hi

 

I am using PostgreSQL 9.0.7 as the metastore database with Hive 0.9.0. I
integrated Postgres with Hive, and a few queries are working fine. I am using
postgresql-9.0-802.jdbc3.jar for the JDBC connection.

But "drop table" queries hang. The following is the Hive DEBUG log:

 

08/12/28 06:02:09 DEBUG lazy.LazySimpleSerDe: LazySimpleSerDe initialized with: columnNames=[a] columnTypes=[int] separator=[[B@e4600c0] nullstring=\N lastColumnTakesRest=false
08/12/28 06:02:09 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=erer
08/12/28 06:02:09 INFO metastore.HiveMetaStore: 0: drop_table : db=default tbl=erer
08/12/28 06:02:09 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=erer
08/12/28 06:02:09 DEBUG metastore.ObjectStore: Executing listMPartitions

I did not find an existing bug report for this. Is there any way to overcome
this problem? Please help me.
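
One way to see whether the drop is blocked inside Postgres itself (a sketch
using the standard pg_stat_activity view; the column names below are the
pre-9.2 ones that match 9.0):

SELECT procpid, waiting, current_query FROM pg_stat_activity;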

 

 

 

Regards

Rohith Sharma K S

 



[no subject]

2012-05-23 Thread Debarshi Basak
When I am trying to run a query with an index I am getting this exception. My Hive version is 0.7.1:
 
java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.nio.ByteBuffer.wrap(ByteBuffer.java:369)
    at org.apache.hadoop.io.Text.decode(Text.java:327)
    at org.apache.hadoop.io.Text.toString(Text.java:254)
    at org.apache.hadoop.hive.ql.index.compact.HiveCompactIndexResult.add(HiveCompactIndexResult.java:118)
    at org.apache.hadoop.hive.ql.index.compact.HiveCompactIndexResult.<init>(HiveCompactIndexResult.java:107)
    at org.apache.hadoop.hive.ql.index.compact.HiveCompactIndexInputFormat.getSplits(HiveCompactIndexInputFormat.java:89)
    at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:971)
    at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:963)
    at org.apache.hadoop.mapred.JobClient.access$500(JobClient.java:170)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:880)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:833)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1157)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:833)
    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:807)
    at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:671)
    at org.apache.hadoop.hive.ql.exec.MapRedTask.execute(MapRedTask.java:123)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:131)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1063)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:900)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:748)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:209)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:286)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:516)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:197)
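
Since the OOM happens client-side (in JobClient/getSplits), one mitigation
sketch is raising the Hive CLI's JVM heap before starting it (value
illustrative):

export HADOOP_CLIENT_OPTS="-Xmx2g $HADOOP_CLIENT_OPTS"
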
Debarshi Basak
Tata Consultancy Services
Mailto: debarshi.ba...@tcs.com
Website: http://www.tcs.com
Experience certainty. IT Services / Business Solutions / Outsourcing





[no subject]

2012-01-10 Thread Lu, Wei
Hi,

I am using ThriftHive.Client to access a pretty large table.

SQL Statement:
select    a11.asin  asin,
          max(a11.title)  title,
          a11.salesrank  salesrank,
          a11.category  category,
          avg(a11.avg_rating)  WJXBFS1,
          sum(a11.total_num_reviews)  WJXBFS2,
          sum(a11.num_subcategories)  WJXBFS3
from      table_details  a11
group by  a11.asin,
          a11.salesrank,
          a11.category;

The statement selects a pretty large result set (1,000,000+ rows). When I
use ThriftHive.Client fetchAll() to get all the row strings, an exception
is returned like below:

Exception in thread "main" org.apache.thrift.TApplicationException: Internal 
error processing fetchAll
  at 
org.apache.thrift.TApplicationException.read(TApplicationException.java:108)
  at 
org.apache.hadoop.hive.service.ThriftHive$Client.recv_fetchAll(ThriftHive.java:224)
  at 
org.apache.hadoop.hive.service.ThriftHive$Client.fetchAll(ThriftHive.java:208)
  ... ...

Why does that happen? How can I deal with it? It seems that incremental fetch
is not supported by Hive.
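
A workaround sketch, if I read the Thrift interface right: ThriftHive also
exposes fetchOne()/fetchN(numRows), so rows could be pulled in batches instead
of all at once (batch size illustrative; handleRows is a hypothetical
per-batch handler):

client.execute(sql);
List<String> rows;
while (!(rows = client.fetchN(10000)).isEmpty()) {
    handleRows(rows);
}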

Regards,
Wei


[no subject]

2011-11-22 Thread Denis Kreis
Hi list,

I am new to hive and have encountered a problem. Settings in my
conf/hive-site.xml file seem to have no effect. I've tried to set the
HIVE_CONF_DIR variable, but this did not help. Any ideas?
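
Two sanity checks (a sketch; paths illustrative): pointing the CLI at the
conf dir explicitly, and asking the CLI which value it actually resolved:

hive --config /path/to/conf
hive> set hive.metastore.warehouse.dir;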

Denis


[no subject]

2011-02-16 Thread Stuart Scott
Thanks for the reply (I'm new to Hive). 

I can't find the driver class. Do you know which files I should be
looking for?

Regards

Stuart

 

By the sound of the error, it seems you don't have HiveDriver
on your path.
Can you locate the class that supposedly has the HiveDriver class?

Cheers,
Ajo

On Wed, Feb 16, 2011 at 2:03 PM, Stuart Scott 
wrote:

Hi,

 

Does anyone know how to get a Windows client to Connect to Hive
successfully? I've tried the code below:

 

  Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");

  Connection con =
DriverManager.getConnection("jdbc:hive://192.168.1.1:1/default", "",
"");

  Statement stmt = con.createStatement();

  stmt.executeQuery("select * from x");

 

 

But get the following error:

 

Exception in thread "main" java.lang.ClassNotFoundException:
org.apache.hadoop.hive.jdbc.HiveDriver

 

Is it possible to do this?

Any help would be really appreciated.
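
For completeness, a sketch of the kind of client classpath I expect is needed
(jar names illustrative for a 0.x-era install; MyHiveClient stands in for the
class containing the code above; note Windows uses ';' as the separator):

java -cp "my-app.jar;%HIVE_HOME%\lib\*;%HADOOP_HOME%\hadoop-core.jar" MyHiveClient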

 

Regards

 

Stuart Scott

System Architect 
emis intellectual technology 
Fulford Grange, Micklefield Lane 
Rawdon Leeds LS19 6BA 
E-mail: stuart.sc...@e-mis.com   
Website: www.emisit.com 
