Unsubscribe

2019-11-08 Thread Ajit Kumar Shreevastava
Unsubscribe

With Regards
Ajit Kumar Shreevastava



Exporting hive table data into oracle gives date format error

2013-03-13 Thread Ajit Kumar Shreevastava
Hi All,

Can you please let me know how I can bypass this error? I am currently using
Apache Sqoop version 1.4.2.


[hadoop@NHCLT-PC44-2 sqoop-oper]$ sqoop export --connect 
jdbc:oracle:thin:@10.99.42.11:1521/clouddb --username HDFSUSER  --table 
BTTN_BKP_TEST --export-dir  /home/hadoop/user/hive/warehouse/bttn_bkp -P -m 1  
--input-fields-terminated-by '\0001' --verbose --input-null-string '\\N' 
--input-null-non-string '\\N'

Please set $HBASE_HOME to the root of your HBase installation.
13/03/13 18:20:42 DEBUG tool.BaseSqoopTool: Enabled debug logging.
Enter password:
13/03/13 18:20:47 DEBUG sqoop.ConnFactory: Loaded manager factory: 
com.cloudera.sqoop.manager.DefaultManagerFactory
13/03/13 18:20:47 DEBUG sqoop.ConnFactory: Trying ManagerFactory: 
com.cloudera.sqoop.manager.DefaultManagerFactory
13/03/13 18:20:47 DEBUG manager.DefaultManagerFactory: Trying with scheme: 
jdbc:oracle:thin:@10.99.42.11
13/03/13 18:20:47 DEBUG manager.OracleManager$ConnCache: Instantiated new 
connection cache.
13/03/13 18:20:47 INFO manager.SqlManager: Using default fetchSize of 1000
13/03/13 18:20:47 DEBUG sqoop.ConnFactory: Instantiated ConnManager 
org.apache.sqoop.manager.OracleManager@74b23210
13/03/13 18:20:47 INFO tool.CodeGenTool: Beginning code generation
13/03/13 18:20:47 DEBUG manager.OracleManager: Using column names query: SELECT 
t.* FROM BTTN_BKP_TEST t WHERE 1=0
13/03/13 18:20:47 DEBUG manager.OracleManager: Creating a new connection for 
jdbc:oracle:thin:@10.99.42.11:1521/clouddb, using username: HDFSUSER
13/03/13 18:20:47 DEBUG manager.OracleManager: No connection paramenters 
specified. Using regular API for making connection.
13/03/13 18:20:47 INFO manager.OracleManager: Time zone has been set to GMT
13/03/13 18:20:47 DEBUG manager.SqlManager: Using fetchSize for next query: 1000
13/03/13 18:20:47 INFO manager.SqlManager: Executing SQL statement: SELECT t.* 
FROM BTTN_BKP_TEST t WHERE 1=0
13/03/13 18:20:47 DEBUG manager.OracleManager$ConnCache: Caching released 
connection for jdbc:oracle:thin:@10.99.42.11:1521/clouddb/HDFSUSER
13/03/13 18:20:47 DEBUG orm.ClassWriter: selected columns:
13/03/13 18:20:47 DEBUG orm.ClassWriter:   BTTN_ID
13/03/13 18:20:47 DEBUG orm.ClassWriter:   DATA_INST_ID
13/03/13 18:20:47 DEBUG orm.ClassWriter:   SCR_ID
13/03/13 18:20:47 DEBUG orm.ClassWriter:   BTTN_NU
13/03/13 18:20:47 DEBUG orm.ClassWriter:   CAT
13/03/13 18:20:47 DEBUG orm.ClassWriter:   WDTH
13/03/13 18:20:47 DEBUG orm.ClassWriter:   HGHT
13/03/13 18:20:47 DEBUG orm.ClassWriter:   KEY_SCAN
13/03/13 18:20:47 DEBUG orm.ClassWriter:   KEY_SHFT
13/03/13 18:20:47 DEBUG orm.ClassWriter:   FRGND_CPTN_COLR
13/03/13 18:20:47 DEBUG orm.ClassWriter:   FRGND_CPTN_COLR_PRSD
13/03/13 18:20:47 DEBUG orm.ClassWriter:   BKGD_CPTN_COLR
13/03/13 18:20:47 DEBUG orm.ClassWriter:   BKGD_CPTN_COLR_PRSD
13/03/13 18:20:47 DEBUG orm.ClassWriter:   BLM_FL
13/03/13 18:20:47 DEBUG orm.ClassWriter:   LCLZ_FL
13/03/13 18:20:47 DEBUG orm.ClassWriter:   MENU_ITEM_NU
13/03/13 18:20:47 DEBUG orm.ClassWriter:   BTTN_ASGN_LVL_ID
13/03/13 18:20:47 DEBUG orm.ClassWriter:   ON_ATVT
13/03/13 18:20:47 DEBUG orm.ClassWriter:   ON_CLIK
13/03/13 18:20:47 DEBUG orm.ClassWriter:   ENBL_FL
13/03/13 18:20:47 DEBUG orm.ClassWriter:   BLM_SET_ID
13/03/13 18:20:47 DEBUG orm.ClassWriter:   BTTN_ASGN_LVL_NAME
13/03/13 18:20:47 DEBUG orm.ClassWriter:   MKT_ID
13/03/13 18:20:47 DEBUG orm.ClassWriter:   CRTE_TS
13/03/13 18:20:47 DEBUG orm.ClassWriter:   CRTE_USER_ID
13/03/13 18:20:47 DEBUG orm.ClassWriter:   UPDT_TS
13/03/13 18:20:47 DEBUG orm.ClassWriter:   UPDT_USER_ID
13/03/13 18:20:47 DEBUG orm.ClassWriter:   DEL_TS
13/03/13 18:20:47 DEBUG orm.ClassWriter:   DEL_USER_ID
13/03/13 18:20:47 DEBUG orm.ClassWriter:   DLTD_FL
13/03/13 18:20:47 DEBUG orm.ClassWriter:   MENU_ITEM_NA
13/03/13 18:20:47 DEBUG orm.ClassWriter:   PRD_CD
13/03/13 18:20:47 DEBUG orm.ClassWriter:   BLM_SET_NA
13/03/13 18:20:47 DEBUG orm.ClassWriter:   SOUND_FILE_ID
13/03/13 18:20:47 DEBUG orm.ClassWriter:   IS_DYNMC_BTTN
13/03/13 18:20:47 DEBUG orm.ClassWriter:   FRGND_CPTN_COLR_ID
13/03/13 18:20:47 DEBUG orm.ClassWriter:   FRGND_CPTN_COLR_PRSD_ID
13/03/13 18:20:47 DEBUG orm.ClassWriter:   BKGD_CPTN_COLR_ID
13/03/13 18:20:47 DEBUG orm.ClassWriter:   BKGD_CPTN_COLR_PRSD_ID
13/03/13 18:20:47 DEBUG orm.ClassWriter: Writing source file: 
/tmp/sqoop-hadoop/compile/69b6a9d2ebb99cebced808e559528531/BTTN_BKP_TEST.java
13/03/13 18:20:47 DEBUG orm.ClassWriter: Table name: BTTN_BKP_TEST
13/03/13 18:20:47 DEBUG orm.ClassWriter: Columns: BTTN_ID:2, DATA_INST_ID:2, 
SCR_ID:2, BTTN_NU:2, CAT:2, WDTH:2, HGHT:2, KEY_SCAN:2, KEY_SHFT:2, 
FRGND_CPTN_COLR:12, FRGND_CPTN_COLR_PRSD:12, BKGD_CPTN_COLR:12, 
BKGD_CPTN_COLR_PRSD:12, BLM_FL:2, LCLZ_FL:2, MENU_ITEM_NU:2, 
BTTN_ASGN_LVL_ID:2, ON_ATVT:2, ON_CLIK:2, ENBL_FL:2, BLM_SET_ID:2, 
BTTN_ASGN_LVL_NAME:12, MKT_ID:2, CRTE_TS:93, CRTE_USER_ID:12, UPDT_TS:93, 
UPDT_USER_ID:12, DEL_TS:93, DEL_USER_ID:12, DLTD_FL:2, MENU_ITEM_NA:12, 
PRD_CD:2, BLM_SET_NA:12, SOUND_FILE_I
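
A note on the failure itself: the column list above shows CRTE_TS, UPDT_TS and 
DEL_TS as JDBC type 93 (TIMESTAMP), and Sqoop's generated class parses such 
fields with java.sql.Timestamp.valueOf(), which accepts only the 
yyyy-mm-dd hh:mm:ss[.f] form. The date format error therefore usually means the 
exported text holds timestamps in some other format. A minimal workaround 
sketch, assuming the data can be re-staged first (the staging directory and the 
shortened column list are illustrative, not taken from the thread):

hive -e "INSERT OVERWRITE DIRECTORY '/tmp/bttn_bkp_stage'
         SELECT bttn_id, data_inst_id, scr_id,
                CAST(crte_ts AS STRING), CAST(updt_ts AS STRING)
         FROM bttn_bkp;"

sqoop export --connect jdbc:oracle:thin:@10.99.42.11:1521/clouddb \
  --username HDFSUSER -P --table BTTN_BKP_TEST \
  --export-dir /tmp/bttn_bkp_stage \
  --input-fields-terminated-by '\0001' \
  --input-null-string '\\N' --input-null-non-string '\\N' -m 1

CAST(... AS STRING) on a Hive TIMESTAMP emits yyyy-MM-dd HH:mm:ss, which 
Timestamp.valueOf() can parse, and INSERT OVERWRITE DIRECTORY writes 
\0001-delimited text with \N for NULLs, matching the flags above.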

RE: Data mismatch when importing data from Oracle to Hive through Sqoop without an error

2013-03-08 Thread Ajit Kumar Shreevastava
Hi Venkat,

Almost all columns have some value, except these three.

Regards,
Ajit

-Original Message-
From: Venkat Ranganathan [mailto:vranganat...@hortonworks.com] 
Sent: Wednesday, March 06, 2013 9:36 PM
To: user@hive.apache.org
Cc: u...@sqoop.apache.org
Subject: Re: Data mismatch when importing data from Oracle to Hive through 
Sqoop without an error

Hi Ajit

Do you know if rest of the columns also are null when the three non null 
columns are null

Venkat

On Wed, Mar 6, 2013 at 12:35 AM, Ajit Kumar Shreevastava 
 wrote:
> Hi Abhijeet,
>
>
>
> Thanks for your response.
>
> If values that don't fit in a double were being inserted as NULL, then
> the counts would not mismatch between the two.
>
> Here the inserted NULL values are extra values, in addition to the values
> already present in both the Oracle table and the Hive table.
>
>
>
> Correct me if my interpretation is wrong.
>
>
>
> Thanks and Regards,
>
> Ajit Kumar Shreevastava
>
>
>
> From: abhijeet gaikwad [mailto:abygaikwa...@gmail.com]
> Sent: Wednesday, March 06, 2013 1:46 PM
> To: user@hive.apache.org
> Cc: u...@sqoop.apache.org
> Subject: Re: Data mismatch when importing data from Oracle to Hive 
> through Sqoop without an error
>
>
>
> Sqoop maps numeric and decimal types (RDBMS) to double (Hive). I think 
> the values that don't fit in double must be getting inserted as NULL.
> You can see this warning in your logs.
>
> Thanks,
> Abhijeet
>
> On Wed, Mar 6, 2013 at 1:32 PM, Ajit Kumar Shreevastava 
>  wrote:
>
> Hi all,
>
> I have noticed one interesting thing in the result sets below.
>
> I fired the same query in both the Oracle and Hive shells and got the
> following results:
>
>
>
> SQL> select count(1) from bttn
>
>   2  where bttn_id is null or data_inst_id is null or scr_id is null;
>
>
>
>   COUNT(1)
>
> --
>
>  0
>
> hive> select count(1) from bttn
>
> > where bttn_id is null or data_inst_id is null or scr_id is null;
>
> Total MapReduce jobs = 1
>
> Launching Job 1 out of 1
>
> Number of reduce tasks determined at compile time: 1
>
> In order to change the average load for a reducer (in bytes):
>
>   set hive.exec.reducers.bytes.per.reducer=<number>
>
> In order to limit the maximum number of reducers:
>
>   set hive.exec.reducers.max=<number>
>
> In order to set a constant number of reducers:
>
>   set mapred.reduce.tasks=<number>
>
> Starting Job = job_201303051835_0020, Tracking URL =
> http://NHCLT-PC44-2:50030/jobdetails.jsp?jobid=job_201303051835_0020
>
> Kill Command = /home/hadoop/hadoop-1.0.3/bin/hadoop job  -kill
> job_201303051835_0020
>
> Hadoop job information for Stage-1: number of mappers: 1; number of
> reducers: 1
>
> 2013-03-06 13:22:56,908 Stage-1 map = 0%,  reduce = 0%
>
> 2013-03-06 13:23:05,928 Stage-1 map = 100%,  reduce = 0%, Cumulative 
> CPU 5.2 sec
>
> 2013-03-06 13:23:06,931 Stage-1 map = 100%,  reduce = 0%, Cumulative 
> CPU 5.2 sec
>
> 2013-03-06 13:23:07,934 Stage-1 map = 100%,  reduce = 0%, Cumulative 
> CPU 5.2 sec
>
> 2013-03-06 13:23:08,938 Stage-1 map = 100%,  reduce = 0%, Cumulative 
> CPU 5.2 sec
>
> 2013-03-06 13:23:09,941 Stage-1 map = 100%,  reduce = 0%, Cumulative 
> CPU 5.2 sec
>
> 2013-03-06 13:23:10,944 Stage-1 map = 100%,  reduce = 0%, Cumulative 
> CPU 5.2 sec
>
> 2013-03-06 13:23:11,947 Stage-1 map = 100%,  reduce = 0%, Cumulative 
> CPU 5.2 sec
>
> 2013-03-06 13:23:12,956 Stage-1 map = 100%,  reduce = 0%, Cumulative 
> CPU 5.2 sec
>
> 2013-03-06 13:23:13,959 Stage-1 map = 100%,  reduce = 0%, Cumulative 
> CPU 5.2 sec
>
> 2013-03-06 13:23:14,962 Stage-1 map = 100%,  reduce = 33%, Cumulative 
> CPU
> 5.2 sec
>
> 2013-03-06 13:23:15,965 Stage-1 map = 100%,  reduce = 33%, Cumulative 
> CPU
> 5.2 sec
>
> 2013-03-06 13:23:16,969 Stage-1 map = 100%,  reduce = 33%, Cumulative 
> CPU
> 5.2 sec
>
> 2013-03-06 13:23:17,974 Stage-1 map = 100%,  reduce = 100%, Cumulative 
> CPU
> 6.95 sec
>
> 2013-03-06 13:23:18,977 Stage-1 map = 100%,  reduce = 100%, Cumulative 
> CPU
> 6.95 sec
>
> 2013-03-06 13:23:19,981 Stage-1 map = 100%,  reduce = 100%, Cumulative 
> CPU
> 6.95 sec
>
> 2013-03-06 13:23:20,985 Stage-1 map = 100%,  reduce = 100%, Cumulative 
> CPU
> 6.95 sec
>
> 2013-03-06 13:23:21,988 Stage-1 map = 100%,  reduce = 100%, Cumulative 
> CPU
> 6.95 sec
>
> 2013-03-06 13:23:22,995 Stage-1 map = 100%,  reduce = 100%, Cumulative 
> CPU
> 6.95 sec
>
> 2013-03-06 13:23:23,998 Stage-1 map = 100%,  reduce = 100%, Cumulative 
> CPU
> 6.95 sec
>
> MapReduce Total cumulati
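
For reference, the lossy NUMBER-to-double mapping that Abhijeet describes above 
can be overridden at import time with Sqoop's --map-column-hive option. A 
hedged sketch (importing the three key columns as Hive STRINGs to preserve 
exact values is an assumption, not something from the thread):

sqoop import --connect jdbc:oracle:thin:@10.99.42.11:1521/clouddb \
  --username HDFSUSER -P --table BTTN \
  --hive-import --hive-table bttn \
  --map-column-hive BTTN_ID=STRING,DATA_INST_ID=STRING,SCR_ID=STRING \
  -m 1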

Error while exporting table data from hive to Oracle through Sqoop

2013-03-05 Thread Ajit Kumar Shreevastava
red.JobClient:  map 43% reduce 0%
13/03/05 19:21:10 INFO mapred.JobClient:  map 44% reduce 0%
13/03/05 19:21:13 INFO mapred.JobClient:  map 48% reduce 0%
13/03/05 19:21:18 INFO mapred.JobClient: Task Id : 
attempt_201303051835_0010_m_00_0, Status : FAILED
java.util.NoSuchElementException
at java.util.AbstractList$Itr.next(AbstractList.java:350)
at BTTN_BKP.__loadFromFields(BTTN_BKP.java:1349)
at BTTN_BKP.parse(BTTN_BKP.java:1148)
   at 
org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:77)
at 
org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:36)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at 
org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:182)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.mapred.Child.main(Child.java:249)

13/03/05 19:21:19 INFO mapred.JobClient:  map 0% reduce 0%
13/03/05 19:21:27 INFO mapred.JobClient: Task Id : 
attempt_201303051835_0010_m_00_1, Status : FAILED
java.io.IOException: java.sql.BatchUpdateException: ORA-1: unique 
constraint (HDFSUSER.BTTN_BKP_PK) violated

at 
org.apache.sqoop.mapreduce.AsyncSqlRecordWriter.write(AsyncSqlRecordWriter.java:220)
at 
org.apache.sqoop.mapreduce.AsyncSqlRecordWriter.write(AsyncSqlRecordWriter.java:46)
at 
org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:639)
at 
org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
at 
org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:78)
at 
org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:36)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at 
org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:182)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.sql.BatchUpdateException: ORA-1: unique constraint 
(HDFSUSER.BTTN_BKP_PK) violated

at 
oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:10345)
at 
oracle.jdbc.driver.OracleStatementWrapper.executeBatch(OracleStatementWrapper.java:230)
at 
org.apache.sqoop.mapreduce.AsyncSqlOutputFormat$AsyncSqlExecThread.run(AsyncSqlOutputFormat.java:228)

13/03/05 19:21:48 WARN mapred.JobClient: Error reading task outputConnection 
timed out
13/03/05 19:22:09 WARN mapred.JobClient: Error reading task outputConnection 
timed out
13/03/05 19:22:09 INFO mapred.JobClient: Job complete: job_201303051835_0010
13/03/05 19:22:09 INFO mapred.JobClient: Counters: 8
13/03/05 19:22:09 INFO mapred.JobClient:   Job Counters
13/03/05 19:22:09 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=77152
13/03/05 19:22:09 INFO mapred.JobClient: Total time spent by all reduces 
waiting after reserving slots (ms)=0
13/03/05 19:22:09 INFO mapred.JobClient: Total time spent by all maps 
waiting after reserving slots (ms)=0
13/03/05 19:22:09 INFO mapred.JobClient: Rack-local map tasks=3
13/03/05 19:22:09 INFO mapred.JobClient: Launched map tasks=4
13/03/05 19:22:09 INFO mapred.JobClient: Data-local map tasks=1
13/03/05 19:22:09 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=0
13/03/05 19:22:09 INFO mapred.JobClient: Failed map tasks=1
13/03/05 19:22:09 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 110.4837 
seconds (0 bytes/sec)
13/03/05 19:22:09 INFO mapreduce.ExportJobBase: Exported 0 records.
13/03/05 19:22:09 ERROR tool.ExportTool: Error during export: Export job failed!
[hadoop@NHCLT-PC44-2 sqoop-oper]$

Regards,
Ajit Kumar Shreevastava
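
Two distinct failures appear in the log above. The 
java.util.NoSuchElementException from BTTN_BKP.__loadFromFields() typically 
means the input delimiter Sqoop was told about does not match the delimiter 
actually present in the export directory, so a row yields fewer fields than the 
generated class expects. The ORA-1 unique-constraint violation on the retry 
then follows naturally: the first attempt had already committed some rows 
before dying, and the re-executed task inserts them again. A hedged re-run 
sketch that addresses both, using Sqoop's staging support (BTTN_BKP_STG is a 
hypothetical empty staging table with the same schema, created beforehand):

sqoop export --connect jdbc:oracle:thin:@10.99.42.11:1521/clouddb \
  --username HDFSUSER -P --table BTTN_BKP \
  --staging-table BTTN_BKP_STG --clear-staging-table \
  --export-dir /home/hadoop/user/hive/warehouse/bttn_bkp \
  --input-fields-terminated-by '\0001' \
  --input-null-string '\\N' --input-null-non-string '\\N' -m 1

With --staging-table, rows land in the staging table first and are moved to the 
target only if the whole job succeeds, so a failed attempt cannot leave partial 
data behind.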



RE: FAILED: Hive Internal Error

2012-10-28 Thread Ajit Kumar Shreevastava
Hi Sagar,
First, you should start the DFS and MapReduce services.
This error appears when these services are not started.

Regards,
Ajit

From: sagar nikam [mailto:sagarnikam...@gmail.com]
Sent: Sunday, October 28, 2012 12:58 PM
To: user@hive.apache.org; bejoy...@yahoo.com
Subject: Re: FAILED: Hive Internal Error

I tried, but it did not work:

shell>:~/Hadoop/hadoop-0.20.2$ bin/hadoop dfs namenode -format
12/10/28 12:56:42 INFO ipc.Client: Retrying connect to server: 
localhost/127.0.0.1:9000. Already tried 0 time(s).
12/10/28 12:56:43 INFO ipc.Client: Retrying connect to server: 
localhost/127.0.0.1:9000. Already tried 1 time(s).
12/10/28 12:56:44 INFO ipc.Client: Retrying connect to server: 
localhost/127.0.0.1:9000. Already tried 2 time(s).
12/10/28 12:56:45 INFO ipc.Client: Retrying connect to server: 
localhost/127.0.0.1:9000. Already tried 3 time(s).
12/10/28 12:56:46 INFO ipc.Client: Retrying connect to server: 
localhost/127.0.0.1:9000. Already tried 4 time(s).
12/10/28 12:56:47 INFO ipc.Client: Retrying connect to server: 
localhost/127.0.0.1:9000. Already tried 5 time(s).
12/10/28 12:56:48 INFO ipc.Client: Retrying connect to server: 
localhost/127.0.0.1:9000. Already tried 6 time(s).
12/10/28 12:56:49 INFO ipc.Client: Retrying connect to server: 
localhost/127.0.0.1:9000. Already tried 7 time(s).
12/10/28 12:56:50 INFO ipc.Client: Retrying connect to server: 
localhost/127.0.0.1:9000. Already tried 8 time(s).
12/10/28 12:56:51 INFO ipc.Client: Retrying connect to server: 
localhost/127.0.0.1:9000. Already tried 9 time(s).
Bad connection to FS. command aborted.
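
Two notes on the attempt above. First, the format syntax is 
bin/hadoop namenode -format (without dfs), and it is only appropriate on a 
fresh install since it wipes HDFS metadata. Second, the retry loop itself just 
says that nothing is listening on localhost:9000, i.e. the NameNode is not 
running. A minimal startup sequence for Hadoop 0.20/1.x, assuming it is run 
from $HADOOP_HOME:

bin/start-dfs.sh       # starts NameNode, DataNode, SecondaryNameNode
bin/start-mapred.sh    # starts JobTracker, TaskTracker
jps                    # all five daemons should appear before retrying Hive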



RE: FAILED: Hive Internal Error: java.lang.RuntimeException(java.net.ConnectException

2012-10-28 Thread Ajit Kumar Shreevastava
Hi Sagar,

MapReduce is not working.
Please check the corresponding XML file and run the jps command to see whether
it has started or not. You can also check the TaskTracker and JobTracker logs.

Regards
Ajit

From: sagar nikam [mailto:sagarnikam...@gmail.com]
Sent: Monday, October 29, 2012 11:02 AM
To: user@hive.apache.org
Subject: FAILED: Hive Internal Error: 
java.lang.RuntimeException(java.net.ConnectException

hive> show databases;
OK
default
mm
mm2
xyz
Time taken: 6.058 seconds
hive> use mm2;
OK
Time taken: 0.039 seconds
hive> show tables;
OK
cidade
concessionaria
familia
modelo
venda
Time taken: 0.354 seconds
hive> select count(*) from familia;

FAILED: Hive Internal Error: 
java.lang.RuntimeException(java.net.ConnectException: Call to 
localhost/127.0.0.1:54310 failed on connection 
exception: java.net.ConnectException: Connection refused)
java.lang.RuntimeException: java.net.ConnectException: Call to 
localhost/127.0.0.1:54310 failed on connection 
exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.hive.ql.Context.getScratchDir(Context.java:151)
at 
org.apache.hadoop.hive.ql.Context.getMRScratchDir(Context.java:190)
at 
org.apache.hadoop.hive.ql.Context.getMRTmpFileURI(Context.java:247)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:900)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:6594)
at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:238)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:340)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:736)
at 
org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:164)
at 
org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:241)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:456)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: java.net.ConnectException: Call to 
localhost/127.0.0.1:54310 failed on connection 
exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:767)
at org.apache.hadoop.ipc.Client.call(Client.java:743)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
at $Proxy4.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
at 
org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:175)
at org.apache.hadoop.hive.ql.Context.getScratchDir(Context.java:145)
... 15 more
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at 
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:404)
at 
org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:304)
at 
org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:176)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:860)
at org.apache.hadoop.ipc.Client.call(Client.java:720)
... 28 more
=
After that, I did this also:

shell $> jps
3630 TaskTracker
3403 JobTracker
3086 DataNode
3678 Jps
3329 SecondaryNameNode
==
The JobTracker web interface is running well at
http://localhost:50030/jobtracker.jsp in the browser and is showing

localhost Hadoop Map/Reduce Adm
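
Note that the jps listing above shows TaskTracker, JobTracker, DataNode and 
SecondaryNameNode but no NameNode, which is consistent with the refused 
connection to localhost:54310 (the fs.default.name port). A hedged next step, 
assuming the Hadoop 1.x daemon scripts:

bin/hadoop-daemon.sh start namenode        # bring the missing NameNode up
tail -n 100 logs/hadoop-*-namenode-*.log   # if it dies again, this log says why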

error in log file

2012-10-10 Thread Ajit Kumar Shreevastava
Hi All,

When I fire a query through the Hive shell, the query runs perfectly, but the
hive.log file records errors like these:

2012-10-10 15:26:33,226 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.
2012-10-10 15:26:33,226 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.
2012-10-10 15:26:33,228 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it cannot be resolved.
2012-10-10 15:26:33,228 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it cannot be resolved.
2012-10-10 15:26:33,228 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be resolved.
2012-10-10 15:26:33,228 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be resolved.
2012-10-10 15:26:39,126 WARN  mapred.JobClient (JobClient.java:copyAndConfigureFiles(667)) - Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
2012-10-10 15:26:39,282 WARN  snappy.LoadSnappy (LoadSnappy.java:<clinit>(46)) - Snappy native library not loaded

Please help me in this regard.

Thanks and Regards
Ajit Kumar Shreevastava
ADCOE (App Development Center Of Excellence )
Mobile: 9717775634
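
These DataNucleus.Plugin errors about unresolvable org.eclipse.* bundles are a 
known, harmless artifact of running DataNucleus outside an OSGi container; 
since the queries succeed, they can simply be silenced. A sketch, assuming the 
stock log4j setup (the logger name is taken from the log lines above):

# in hive-log4j.properties: raise the threshold for the noisy category
log4j.logger.DataNucleus.Plugin=FATAL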



RE: hive mapred problem

2012-10-09 Thread Ajit Kumar Shreevastava
Hi Nitin

Sorry Nitin. Actually, I mean fully distributed mode (Hadoop on multiple nodes).
I want the configuration files for both Hadoop and Hive.



-Original Message-
From: Nitin Pawar [mailto:nitinpawar...@gmail.com]
Sent: Tuesday, October 09, 2012 2:46 PM
To: user@hive.apache.org
Subject: Re: hive mapred problem

I did not get the question about distributed mode for Hadoop and Hive. Can
you explain what exactly you want to achieve?

Thanks,
Nitin

On Tue, Oct 9, 2012 at 2:42 PM, Ajit Kumar Shreevastava
 wrote:
> Hi Nitin,
>
>
>
> Thanks for your reply...
>
>
>
> Now my query is running, but the output looks like this:
>
>
>
> hive> select count(1) from pokes;
>
> Total MapReduce jobs = 1
>
> Launching Job 1 out of 1
>
> Number of reduce tasks determined at compile time: 1
>
> In order to change the average load for a reducer (in bytes):
>
>   set hive.exec.reducers.bytes.per.reducer=<number>
>
> In order to limit the maximum number of reducers:
>
>   set hive.exec.reducers.max=<number>
>
> In order to set a constant number of reducers:
>
>   set mapred.reduce.tasks=<number>
>
> Starting Job = job_201210091435_0001, Tracking URL =
> http://NHCLT-PC44-2:50030/jobdetails.jsp?jobid=job_201210091435_0001
>
> Kill Command = /home/hadoop/hadoop-1.0.3/bin/hadoop job  -kill
> job_201210091435_0001
>
> Hadoop job information for Stage-1: number of mappers: 1; number of
> reducers: 2
>
> 2012-10-09 14:37:14,587 Stage-1 map = 0%,  reduce = 0%
>
> 2012-10-09 14:37:20,609 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU
> 0.47 sec
>
> 2012-10-09 14:37:21,613 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU
> 0.47 sec
>
> 2012-10-09 14:37:22,620 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU
> 0.47 sec
>
> 2012-10-09 14:37:23,625 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU
> 0.47 sec
>
> 2012-10-09 14:37:24,630 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU
> 0.47 sec
>
> 2012-10-09 14:37:25,634 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU
> 0.47 sec
>
> 2012-10-09 14:37:26,638 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU
> 0.47 sec
>
> 2012-10-09 14:37:27,642 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU
> 0.47 sec
>
> 2012-10-09 14:37:28,650 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU
> 0.47 sec
>
> 2012-10-09 14:37:29,654 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU
> 0.47 sec
>
> 2012-10-09 14:37:30,658 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU
> 0.47 sec
>
> 2012-10-09 14:37:31,662 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU
> 0.47 sec
>
> 2012-10-09 14:37:32,667 Stage-1 map = 100%,  reduce = 50%, Cumulative CPU
> 1.66 sec
>
> 2012-10-09 14:37:33,672 Stage-1 map = 100%,  reduce = 50%, Cumulative CPU
> 1.66 sec
>
> 2012-10-09 14:37:34,678 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU
> 3.0 sec
>
> 2012-10-09 14:37:35,682 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU
> 3.0 sec
>
> 2012-10-09 14:37:36,686 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU
> 3.0 sec
>
> 2012-10-09 14:37:37,690 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU
> 3.0 sec
>
> 2012-10-09 14:37:38,694 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU
> 3.0 sec
>
> 2012-10-09 14:37:39,698 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU
> 3.0 sec
>
> 2012-10-09 14:37:40,702 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU
> 3.0 sec
>
> MapReduce Total cumulative CPU time: 3 seconds 0 msec
>
> Ended Job = job_201210091435_0001
>
> MapReduce Jobs Launched:
>
> Job 0: Map: 1  Reduce: 2   Cumulative CPU: 3.0 sec   HDFS Read: 6034 HDFS
> Write: 6 SUCCESS
>
> Total MapReduce CPU Time Spent: 3 seconds 0 msec
>
> OK
>
> 500
>
> 0
>
> Time taken: 35.161 seconds
>
>
>
> Can you do me a favor? I want configuration file templates for
> distributed mode, for both Hadoop and Hive.
>
>
>
> Regards
>
> Ajit
>
>
>
>
>
> -Original Message-
> From: Nitin Pawar [mailto:nitinpawar...@gmail.com]
> Sent: Monday, October 08, 2012 5:52 PM
> To: user@hive.apache.org
> Subject: Re: hive mapred problem
>
>
>
> From the error, it looks like you have some incorrect hive settings which
>
> are failing the job initialization.
>
>
>
> this is the error
>
>>java.io.IOException: Number of maps in JobConf doesn't match number of
>
>> recieved splits for job job_201210051717_0015! numMapTasks=10
>
>
>
> can you tell us if you are setting any hive variables before firing up
>
> the query?  Something like split size or number of maps, etc.
>
>
>
> On Mon, Oct 8, 2012 at 5:30 PM, Ajit Kumar Shreevastava
>
>  wrote:
>
>>

hive mapred problem

2012-10-08 Thread Ajit Kumar Shreevastava
Hi,

When I run the query "select count(1) from pokes;", it fails with the message
below.

hive> select count(1) from pokes;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201210051717_0015, Tracking URL = 
http://NHCLT-PC44-2:50030/jobdetails.jsp?jobid=job_201210051717_0015
Kill Command = /home/hadoop/hadoop-1.0.3/bin/hadoop job  -kill 
job_201210051717_0015
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2012-10-08 15:43:30,351 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201210051717_0015 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from 
org.apache.hadoop.hive.ql.exec.MapRedTask
MapReduce Jobs Launched:
Job 0:  HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec

Thanks and Regards
Ajit Kumar Shreevastava
ADCOE (App Development Center Of Excellence )
Mobile: 9717775634
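
The reply quoted in the thread above points at the likely cause: a forced task 
count (numMapTasks=10) that does not match the received splits. A minimal check 
from the Hive CLI, where set with no value prints the current setting:

hive> set mapred.map.tasks;
hive> set mapred.min.split.size;

If either is pinned in hive-site.xml or a .hiverc, removing the override and 
re-running is a reasonable first step.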



hive query fail

2012-10-03 Thread Ajit Kumar Shreevastava
Hi All,

I am using Oracle as a remote metastore for Hive.

When I run select count(1) from pokes; it fails.
Also, when I try to access
http://NHCLT-PC44-2:50030/jobdetails.jsp?jobid=job_201210031252_0003 from
Internet Explorer, the page cannot be displayed.

[hadoop@NHCLT-PC44-2 ~]$ hive
Logging initialized using configuration in 
file:/home/hadoop/Hive/conf/hive-log4j.properties
Hive history 
file=/home/hadoop/tmp/hadoop/hive_job_log_hadoop_201210031257_2024792684.txt
hive> select count(1) from pokes;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201210031252_0003, Tracking URL = 
http://NHCLT-PC44-2:50030/jobdetails.jsp?jobid=job_201210031252_0003
Kill Command = /home/hadoop/hadoop-1.0.3/bin/hadoop job  -kill 
job_201210031252_0003

[hadoop@NHCLT-PC44-2 hadoop]$ cat hive.log
2012-10-03 17:20:31,965 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) 
- Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it 
cannot be resolved.
2012-10-03 17:20:31,965 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) 
- Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it 
cannot be resolved.
2012-10-03 17:20:31,967 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) 
- Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it 
cannot be resolved.
2012-10-03 17:20:31,967 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) 
- Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it 
cannot be resolved.
2012-10-03 17:20:31,967 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) 
- Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be 
resolved.
2012-10-03 17:20:31,967 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) 
- Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be 
resolved.
2012-10-03 17:20:36,732 WARN  mapred.JobClient 
(JobClient.java:copyAndConfigureFiles(667)) - Use GenericOptionsParser for 
parsing the arguments. Applications should implement Tool for the same.
2012-10-03 17:20:36,856 WARN  snappy.LoadSnappy (LoadSnappy.java:<clinit>(46)) - Snappy native library not loaded


My hive-site.xml is :-->

[hadoop@NHCLT-PC44-2 conf]$ cat hive-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

<property>
  <name>hive.exec.scratchdir</name>
  <value>/home/hadoop/tmp/hive-${user.name}</value>
  <description>Scratch space for Hive jobs</description>
</property>

<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/home/hadoop/user/hive/warehouse</value>
  <description>location of default database for the warehouse</description>
</property>

<property>
  <name>hive.querylog.location</name>
  <value>/home/hadoop/tmp/${user.name}</value>
  <description>Directory where structured hive query logs are created. One file 
per session is created in this directory. If this variable set to empty string 
structured log will not be created.</description>
</property>

<property>
  <name>hadoop.bin.path</name>
  <value>/home/hadoop/hadoop-1.0.3/bin/hadoop</value>
  <description>The location of hadoop script which is used to submit 
jobs to hadoop when submitting through a separate jvm.</description>
</property>

<property>
  <name>mapred.job.tracker</name>
  <value>hdfs://localhost:8021/</value>
  <description>Datanode1</description>
</property>

<property>
  <name>hadoop.config.dir</name>
  <value>/home/hadoop/hadoop-1.0.3/conf</value>
  <description>The location of the configuration directory of the hadoop 
installation</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:oracle:thin:@10.99.42.11:1521:clouddb</value>
</property>

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>oracle.jdbc.driver.OracleDriver</value>
</property>

<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hiveuser</value>
</property>

<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hiveuser</value>
</property>

</configuration>
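
One likely problem is visible in the configuration itself: mapred.job.tracker 
is given an hdfs:// URL, but the JobTracker address is a plain host:port pair 
(hdfs:// belongs to fs.default.name, a different setting). A corrected property 
would look like this, assuming the JobTracker really listens on localhost:8021:

<property>
  <name>mapred.job.tracker</name>
  <value>localhost:8021</value>
</property>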

Thanks and Regards
Ajit Kumar Shreevastava
ADCOE (App Development Center Of Excellence )
Mobile: 9717775634




RE: hive query fail

2012-10-03 Thread Ajit Kumar Shreevastava
Hi All,

I am using Oracle as a remote metastore for Hive.

When I run select count(1) from pokes; it fails.

[hadoop@NHCLT-PC44-2 ~]$ hive
Logging initialized using configuration in 
file:/home/hadoop/Hive/conf/hive-log4j.properties
Hive history 
file=/home/hadoop/tmp/hadoop/hive_job_log_hadoop_201210031257_2024792684.txt
hive> select count(1) from pokes;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201210031252_0003, Tracking URL = 
http://NHCLT-PC44-2:50030/jobdetails.jsp?jobid=job_201210031252_0003
Kill Command = /home/hadoop/hadoop-1.0.3/bin/hadoop job  -kill 
job_201210031252_0003

[hadoop@NHCLT-PC44-2 hadoop]$ cat hive.log
2012-10-03 17:20:31,965 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) 
- Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it 
cannot be resolved.
2012-10-03 17:20:31,965 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) 
- Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it 
cannot be resolved.
2012-10-03 17:20:31,967 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) 
- Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it 
cannot be resolved.
2012-10-03 17:20:31,967 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) 
- Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it 
cannot be resolved.
2012-10-03 17:20:31,967 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) 
- Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be 
resolved.
2012-10-03 17:20:31,967 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) 
- Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be 
resolved.
2012-10-03 17:20:36,732 WARN  mapred.JobClient 
(JobClient.java:copyAndConfigureFiles(667)) - Use GenericOptionsParser for 
parsing the arguments. Applications should implement Tool for the same.
2012-10-03 17:20:36,856 WARN  snappy.LoadSnappy (LoadSnappy.java:<clinit>(46)) - Snappy native library not loaded


My hive-site.xml is :-->

[hadoop@NHCLT-PC44-2 conf]$ cat hive-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

<property>
  <name>hive.exec.scratchdir</name>
  <value>/home/hadoop/tmp/hive-${user.name}</value>
  <description>Scratch space for Hive jobs</description>
</property>

<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/home/hadoop/user/hive/warehouse</value>
  <description>location of default database for the warehouse</description>
</property>

<property>
  <name>hive.querylog.location</name>
  <value>/home/hadoop/tmp/${user.name}</value>
  <description>Directory where structured hive query logs are created. One file 
per session is created in this directory. If this variable set to empty string 
structured log will not be created.</description>
</property>

<property>
  <name>hadoop.bin.path</name>
  <value>/home/hadoop/hadoop-1.0.3/bin/hadoop</value>
  <description>The location of hadoop script which is used to submit 
jobs to hadoop when submitting through a separate jvm.</description>
</property>

<property>
  <name>mapred.job.tracker</name>
  <value>hdfs://localhost:8021/</value>
  <description>Datanode1</description>
</property>

<property>
  <name>hadoop.config.dir</name>
  <value>/home/hadoop/hadoop-1.0.3/conf</value>
  <description>The location of the configuration directory of the hadoop 
installation</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:oracle:thin:@10.99.42.11:1521:clouddb</value>
</property>

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>oracle.jdbc.driver.OracleDriver</value>
</property>

<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hiveuser</value>
</property>

<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hiveuser</value>
</property>

</configuration>


Thanks and Regards
Ajit Kumar Shreevastava
ADCOE (App Development Center Of Excellence )
Mobile: 9717775634




hive query fail

2012-10-03 Thread Ajit Kumar Shreevastava
Hi All,

I am using Oracle as a remote metastore for Hive.

Whenever I fire an insert or select command on it, it runs successfully.
But when I run select count(1) from pokes; it fails.

[hadoop@NHCLT-PC44-2 ~]$ hive
Logging initialized using configuration in 
file:/home/hadoop/Hive/conf/hive-log4j.properties
Hive history 
file=/home/hadoop/tmp/hadoop/hive_job_log_hadoop_201210031257_2024792684.txt
hive> select count(1) from pokes;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201210031252_0003, Tracking URL = 
http://NHCLT-PC44-2:50030/jobdetails.jsp?jobid=job_201210031252_0003
Kill Command = /home/hadoop/hadoop-1.0.3/bin/hadoop job  -kill 
job_201210031252_0003

[hadoop@NHCLT-PC44-2 hadoop]$ cat hive.log
2012-10-03 12:57:41,327 WARN  conf.Configuration 
(Configuration.java:loadResource(1245)) - 
file:/home/hadoop/Hive/conf/hive-site.xml:a attempt to override final 
parameter: mapred.job.tracker;  Ignoring.
2012-10-03 12:57:41,726 WARN  conf.Configuration 
(Configuration.java:loadResource(1245)) - 
file:/home/hadoop/Hive/conf/hive-site.xml:a attempt to override final 
parameter: mapred.job.tracker;  Ignoring.
2012-10-03 12:57:41,738 WARN  conf.Configuration 
(Configuration.java:loadResource(1245)) - 
file:/home/hadoop/Hive/conf/hive-site.xml:a attempt to override final 
parameter: mapred.job.tracker;  Ignoring.
2012-10-03 12:57:45,629 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) 
- Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it 
cannot be resolved.
2012-10-03 12:57:45,629 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) 
- Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it 
cannot be resolved.
2012-10-03 12:57:45,629 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) 
- Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it 
cannot be resolved.
2012-10-03 12:57:45,629 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) 
- Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it 
cannot be resolved.
2012-10-03 12:57:45,630 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) 
- Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be 
resolved.
2012-10-03 12:57:45,630 ERROR DataNucleus.Plugin (Log4JLogger.java:error(115)) 
- Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be 
resolved.
2012-10-03 12:57:46,321 WARN  conf.Configuration 
(Configuration.java:loadResource(1245)) - 
file:/home/hadoop/Hive/conf/hive-site.xml:a attempt to override final 
parameter: mapred.job.tracker;  Ignoring.
2012-10-03 12:57:50,024 WARN  conf.Configuration 
(Configuration.java:loadResource(1245)) - 
file:/tmp/hive-default-4124574712561576117.xml:a attempt to override final 
parameter: fs.checkpoint.dir;  Ignoring.
2012-10-03 12:57:50,025 WARN  conf.Configuration 
(Configuration.java:loadResource(1245)) - 
file:/home/hadoop/Hive/conf/hive-site.xml:a attempt to override final 
parameter: mapred.job.tracker;  Ignoring.
2012-10-03 12:57:50,570 WARN  mapred.JobClient 
(JobClient.java:copyAndConfigureFiles(667)) - Use GenericOptionsParser for 
parsing the arguments. Applications should implement Tool for the same.
2012-10-03 12:57:50,748 WARN  snappy.LoadSnappy (LoadSnappy.java:<clinit>(46)) - Snappy native library not loaded

My hive-site.xml is :-->

[hadoop@NHCLT-PC44-2 conf]$ cat hive-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

<property>
  <name>hive.exec.scratchdir</name>
  <value>/home/hadoop/tmp/hive-${user.name}</value>
  <description>Scratch space for Hive jobs</description>
</property>

<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/home/hadoop/user/hive/warehouse</value>
  <description>location of default database for the warehouse</description>
</property>

<property>
  <name>hive.querylog.location</name>
  <value>/home/hadoop/tmp/${user.name}</value>
  <description>Directory where structured hive query logs are created. One file 
per session is created in this directory. If this variable set to empty string 
structured log will not be created.</description>
</property>

<property>
  <name>hadoop.bin.path</name>
  <value>/home/hadoop/hadoop-1.0.3/bin/hadoop</value>
  <description>The location of hadoop script which is used to submit 
jobs to hadoop when submitting through a separate jvm.</description>
</property>

<property>
  <name>mapred.job.tracker</name>
  <value>hdfs://localhost:8021/</value>
  <description>Datanode1</description>
</property>

<property>
  <name>hadoop.config.dir</name>
  <value>/home/hadoop/hadoop-1.0.3/conf</value>
  <description>The location of the configuration directory of the hadoop 
installation</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:oracle:thin:@10.99.42.11:1521:clouddb</value>
</property>

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>oracle.jdbc.driver.OracleDriver</value>
</property>

<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hiveuser</value>
</property>

<property>
  <name>javax.jdo.opt

hive query fails

2012-10-01 Thread Ajit Kumar Shreevastava
Dear all,

I am running the following query and I am not getting any output. But select *
from pokes is working fine.

[hadoop@NHCLT-PC44-2 bin]$ hive
Logging initialized using configuration in 
file:/home/hadoop/Hive/conf/hive-log4j.properties
Hive history 
file=/home/hadoop/tmp/hadoop/hive_job_log_hadoop_201210011620_669979453.txt
hive> select count(1) from pokes;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201210011527_0004, Tracking URL = 
http://NHCLT-PC44-2:50030/jobdetails.jsp?jobid=job_201210011527_0004
Kill Command = /home/hadoop/hadoop-1.0.3/bin/hadoop job  -kill 
job_201210011527_0004


Thanks and Regards
Ajit Kumar Shreevastava
ADCOE (App Development Center Of Excellence )
Mobile: 9717775634



warning message while connecting Hive shell

2012-09-17 Thread Ajit Kumar Shreevastava
Hello ,
When I make a connection with Hive, I get the following warning message.

[hadoop@NHCLT-PC44-2 conf]$ hive
WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use 
org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
Logging initialized using configuration in 
jar:file:/home/hadoop/hive-0.8.1/lib/hive-common-0.8.1.jar!/hive-log4j.properties
Hive history file=/tmp/hadoop/hive_job_log_hadoop_201209171607_883156857.txt
hive> show tables;
OK
Time taken: 3.871 seconds
hive>

Does anyone have a clue regarding this warning message?


Thanks and Regards
Ajit Kumar Shreevastava
ADCOE (App Development Center Of Excellence )
Mobile: 9717775634
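
The warning names its own fix: the log4j configuration still references the 
deprecated org.apache.hadoop.metrics.jvm.EventCounter. A sketch of the change, 
assuming the stock appender name used in hive-log4j.properties:

# replace the deprecated class in hive-log4j.properties (and any matching
# hadoop log4j.properties)
log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter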



RE: New Learner for Hadoop and Hive

2012-09-07 Thread Ajit Kumar Shreevastava
Hi,


I have Apache Hadoop 0.20.2 installed on Cygwin (on Windows 7). Hadoop is 
working fine through SSHD on Cygwin. After installing Hadoop, I have installed 
Hive 0.8.1 following the standard installation instructions. $HADOOP_HOME and 
$HIVE_HOME are both exported to PATH.

The problem I am facing is that when I type bin/hive from $HIVE_HOME I get the 
Hive (hive>) prompt. But when I type in something (e.g. SHOW TABLES; etc) there 
is no output. The cursor simply keeps on blinking.

I have tried some patches (e.g. https://issues.apache.org/jira/browse/HIVE-344) 
but it has not helped. Can somebody help?

Thanks,

Ajit

Regards,
Ajit

From: Babu, Prashanth [mailto:prashanth.b...@nttdata.com]
Sent: Friday, September 07, 2012 3:46 PM
To: user@hive.apache.org
Cc: u...@hadoop.apache.org; hdfs-user
Subject: RE: New Learner for Hadoop and Hive

Ajit,

Please follow the guidelines as mentioned by Bertrand.

Apart from that, what is the output of
hive> show tables;

Does it display anything?
If not, what is the message or stacktrace you see on the console?

Also, post your Hadoop environment details, like the versions of Hadoop, Hive,
etc. you are using, along with the OS info, which will help the community
understand the issue better.


From: Bertrand Dechoux [mailto:decho...@gmail.com]
Sent: Wednesday, September 05, 2012 3:07 PM
To: user@hive.apache.org
Cc: u...@hadoop.apache.org; hdfs-user
Subject: Re: New Learner for Hadoop and Hive


1) First, do not post across multiple mailing lists. I am replying to all in
order to share my answer, but you should then reply only on the mailing list
which is concerned with your issue.
2) Does your hadoop installation work? If not, ask u...@hadoop.apache.org
3) Does your hive installation work? If not, ask user@hive.apache.org
4) Explain your problem: "I got nothing" with no error message won't get you
any good feedback.
5) Hive installation is pretty well explained in the hive wiki:
https://cwiki.apache.org/confluence/display/Hive/Home

Regards

Bertrand
On Wed, Sep 5, 2012 at 11:14 AM, Ajit Kumar Shreevastava
<ajit.shreevast...@hcl.com> wrote:
Hi All,

I am new to Hive and Hadoop technology. Can anyone tell me in detail the
process for setting up Hadoop and Hive so they start and work? I have visited
lots of sites and done the configuration accordingly.
Now, when I run Hive after starting all the Hadoop services, I get nothing.

Ajit.Shreevastava@hclt-40124697 ~
$ start-all.sh
starting namenode, logging to 
/usr/local/hadoop/bin/../logs/hadoop-MYHADOOP-namenode-hclt-40124697.out
localhost: starting datanode, logging to 
/usr/local/hadoop/bin/../logs/hadoop-MYHADOOP-datanode-hclt-40124697.out
localhost: starting secondarynamenode, logging to 
/usr/local/hadoop/bin/../logs/hadoop-MYHADOOP-secondarynamenode-hclt-40124697.out
starting jobtracker, logging to 
/usr/local/hadoop/bin/../logs/hadoop-MYHADOOP-jobtracker-hclt-40124697.out
localhost: starting tasktracker, logging to 
/usr/local/hadoop/bin/../logs/hadoop-MYHADOOP-tasktracker-hclt-40124697.out

Ajit.Shreevastava@hclt-40124697 ~
$ hive
Logging initialized using configuration in 
jar:file:/C:/cygwin/usr/local/hadoop/hive/lib/hive-common-0.8.1.jar!/hive-log4j.properties
Hive history 
file=/tmp/ajit.shreevastava/hive_job_log_ajit.shreevastava_201209051440_1301558122.txt
hive> show tables;

I am also sending you my configuration files from the Hive conf directory.

Please help me.

Regards,
Ajit Kumar Shreevastava



need helps for hive on cygwin

2012-09-07 Thread Ajit Kumar Shreevastava
I need some help on HIVE installation. Here is the scenario.

I have Apache Hadoop 0.20.2 installed on Cygwin (on Windows 7). Hadoop is 
working fine through SSHD on Cygwin. After installing Hadoop, I have installed 
Hive 0.8.1 following the standard installation instructions. $HADOOP_HOME and 
$HIVE_HOME are both exported to PATH.

The problem I am facing is that when I type bin/hive from $HIVE_HOME I get the 
Hive (hive>) prompt. But when I type in something (e.g. SHOW TABLES; etc) there 
is no output. The cursor simply keeps on blinking.

I have tried some patches (e.g. https://issues.apache.org/jira/browse/HIVE-344) 
but it has not helped. Can somebody help?

Thanks,

Ajit


Thanks and Regards
Ajit Kumar Shreevastava
ADCOE (App Development Center Of Excellence )
Mobile: 9717775634
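
When the CLI gives a prompt but hangs on every statement, the blockage is 
usually between Hive and Hadoop rather than in Hive itself. Two hedged checks 
(the commands are standard; that HDFS access is the blocker is only an 
assumption):

hive -hiveconf hive.root.logger=DEBUG,console   # rerun the statement and see where it stalls
$HADOOP_HOME/bin/hadoop fs -ls /                # confirm HDFS answers from the same Cygwin shell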



Hive running issue on Cygwin

2012-09-06 Thread Ajit Kumar Shreevastava
Hi,
I configured Hive 0.8.1 on Hadoop 0.20.2 in Cygwin (on Windows 7, 64-bit).
Hive starts properly, as I get the Hive CLI when I type hive. But when running
any command in Hive, it does not return any response.

Ajit.Shreevastava@hclt-40124697 ~
$ hive
Logging initialized using configuration in 
jar:file:/C:/cygwin/usr/local/hadoop/hive/lib/hive-common-0.8.1.jar!/hive-log4j.properties
Hive history 
file=/tmp/ajit.shreevastava/hive_job_log_ajit.shreevastava_201209061300_1652813806.txt
hive> show tables;




Thanks and Regards
Ajit Kumar Shreevastava
ADCOE (App Development Center Of Excellence )
Mobile: 9717775634


