RE: Hive Thrift Service - Not Running Continuously

2013-08-05 Thread Bhaskar, Snehalata
Can you please try executing the nohup hive --service hiveserver command?
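
For example, something along these lines should keep HiveServer running in the 
background after you log out (the log file path is only an illustration; adjust 
it for your environment):

  nohup hive --service hiveserver > /tmp/hiveserver.out 2>&1 &

The trailing & puts the process in the background, and nohup together with the 
output redirection keeps it from being terminated when the terminal session is 
closed.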

Regards,
Snehalata

From: Raj Hadoop [mailto:hadoop...@yahoo.com]
Sent: Monday, August 05, 2013 9:59 PM
To: Hive
Subject: Hive Thrift Service - Not Running Continuously

Hi,


The Hive Thrift service is not running continuously. I have had to execute the 
command (hive --service hiveserver) very frequently. Can anyone help me with 
this?

Thanks,
Raj


RE: java.io.FileNotFoundException(File does not exist) when running a hive query

2013-03-04 Thread Bhaskar, Snehalata
Does anyone know how to solve this issue??

Thanks and regards,
Snehalata Deorukhkar
Nortel No : 0229 -5814

From: Bhaskar, Snehalata [mailto:snehalata_bhas...@syntelinc.com]
Sent: Sunday, March 03, 2013 11:23 PM
To: user@hive.apache.org
Subject: java.io.FileNotFoundException(File does not exist) when running a hive 
query

Hi,

I am getting a 'java.io.FileNotFoundException(File does not exist: 
/tmp/sb25634/hive_2013-03-01_23-21-43_428_5325193042224363842/-mr-1/1/emptyFile)' 
exception when running any join query:

Following is the query that I am using and the exception thrown.


hive> select * from retail_1 l join retail_2 t on l.product_name=t.product_name;

Total MapReduce jobs = 1

Launching Job 1 out of 1

Number of reduce tasks determined at compile time: 1

In order to change the average load for a reducer (in bytes):

  set hive.exec.reducers.bytes.per.reducer=number

In order to limit the maximum number of reducers:

  set hive.exec.reducers.max=number

In order to set a constant number of reducers:

  set mapred.reduce.tasks=number

WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use 
org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.

Execution log at: 
/tmp/sb25634/sb25634_20130301232121_0c9f19d1-7846-4f4e-9469-401641fdd137.log

java.io.FileNotFoundException: File does not exist: 
/tmp/sb25634/hive_2013-03-01_23-21-43_428_5325193042224363842/-mr-1/1/emptyFile

at 
org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:787)

at 
org.apache.hadoop.mapred.lib.CombineFileInputFormat$OneFileInfo.init(CombineFileInputFormat.java:462)

at 
org.apache.hadoop.mapred.lib.CombineFileInputFormat.getMoreSplits(CombineFileInputFormat.java:256)

at 
org.apache.hadoop.mapred.lib.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:212)

at 
org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:392)

at 
org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:358)

at 
org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:387)

at 
org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1041)

at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1033)

at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:172)

at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:943)

at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:896)

at java.security.AccessController.doPrivileged(Native Method)

at javax.security.auth.Subject.doAs(Subject.java:396)

at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)

at 
org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:896)

at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:870)

at 
org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:435)

at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:677)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)

at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)

at java.lang.reflect.Method.invoke(Method.java:597)

at org.apache.hadoop.util.RunJar.main(RunJar.java:208)

Job Submission failed with exception 'java.io.FileNotFoundException(File does 
not exist: 
/tmp/sb25634/hive_2013-03-01_23-21-43_428_5325193042224363842/-mr-1/1/emptyFile)'

Execution failed with exit status: 1

Obtaining error information



Task failed!

Task ID:

  Stage-1



Logs:



/tmp/sb25634/hive.log

FAILED: Execution Error, return code 1 from 
org.apache.hadoop.hive.ql.exec.MapRedTask


What may be the cause of this error?

Please help me to resolve this issue. Thanks in advance.
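
Since the job fails inside CombineHiveInputFormat / CombineFileInputFormat while 
computing splits, one thing that may be worth checking (only a guess based on 
the stack trace above, not a confirmed fix) is whether the error still occurs 
when the query is run with the plain input format:

  set hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat;

It may also be worth verifying that the /tmp scratch path shown in the message 
actually exists on the file system the job reads it from, since the exception 
is thrown from DistributedFileSystem.getFileStatus on that local-looking path.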

Regards,
Snehalata Deorukhkar.



RE: Adding comment to a table for columns

2013-02-21 Thread Bhaskar, Snehalata
Try using the 'describe formatted' command, i.e. describe formatted test;
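
For example (assuming the table is named test, as in the command above):

  hive> describe formatted test;

This prints the column names, types and comments in a readable tabular layout, 
along with the detailed table information that DESCRIBE EXTENDED shows on a 
single line.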

Thanks and regards,
Snehalata Deorukhkar

From: Chunky Gupta [mailto:chunky.gu...@vizury.com]
Sent: Thursday, February 21, 2013 4:47 PM
To: user@hive.apache.org
Subject: Adding comment to a table for columns


Hi,

I am using this syntax to add comments for all columns:

CREATE EXTERNAL TABLE test (
  c STRING COMMENT 'Common class',
  time STRING COMMENT 'Common time',
  url STRING COMMENT 'Site URL'
)
PARTITIONED BY (dt STRING)
LOCATION 's3://BucketName/'

The output of DESCRIBE EXTENDED is like the following (this output is just an 
example copied from the internet):

hive> DESCRIBE EXTENDED table_name;

Detailed Table Information Table(tableName:table_name, dbName:benchmarking, 
owner:root, createTime:1309480053, lastAccessTime:0, retention:0, 
sd:StorageDescriptor(cols:[FieldSchema(name:session_key, type:string, 
comment:null), FieldSchema(name:remote_address, type:string, comment:null), 
FieldSchema(name:canister_lssn, type:string, comment:null), 
FieldSchema(name:canister_session_id, type:bigint, comment:null), 
FieldSchema(name:tltsid, type:string, comment:null), FieldSchema(name:tltuid, 
type:string, comment:null), FieldSchema(name:tltvid, type:string, 
comment:null), FieldSchema(name:canister_server, type:string, comment:null), 
FieldSchema(name:session_timestamp, type:string, comment:null), 
FieldSchema(name:session_duration, type:string, comment:null), 
FieldSchema(name:hit_count, type:bigint, comment:null), 
FieldSchema(name:http_user_agent, type:string, comment:null), 
FieldSchema(name:extractid, type:bigint, comment:null), 
FieldSchema(name:site_link, type:string, comment:null), FieldSchema(name:dt, 
type:string, comment:null), FieldSchema(name:hour, type:int, comment:null)], 
location:hdfs://hadoop2/user/hive/warehouse/benchmarking.db/table_name, 
inputFormat:org.apache.hadoop.mapred.SequenceFileInputFormat, 
outputFormat:org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat, 
compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, 
serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe)

Is there any way of getting these detailed comments and column names in a 
readable format, just like the output of DESCRIBE table_name?



Thanks,

Chunky.


RE: Hive or Hbase connectivity to jaspersoft reporting

2012-09-12 Thread Bhaskar, Snehalata
You may find this link helpful:
http://www.javacodegeeks.com/2012/02/big-data-analytics-with-hive-and.html

 

The steps to generate reports from hive tables using Jaspersoft are
nicely explained in that article.
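
As a quick pointer (based on the usual HiveServer defaults, so please verify 
against your own setup): HiveServer listens on port 10000 by default, and the 
JDBC URL generally has the form

  jdbc:hive://<host>:10000/default

with org.apache.hadoop.hive.jdbc.HiveDriver as the driver class.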

 

Thanks and regards,

Snehalata Deorukhkar

Nortel No : 0229 -5814

 

From: iwannaplay games [mailto:funnlearnfork...@gmail.com] 
Sent: Wednesday, September 12, 2012 12:01 PM
To: user; user
Subject: Hive or Hbase connectivity to jaspersoft reporting

 

Hi all,


I want to generate reports from HBase and Hive tables. I installed Jaspersoft
for the same, but I am unable to create a DSN as it is not connecting to the
port. Can anybody suggest the steps needed to set up a Hive server, or how to
create a JDBC URL to connect to a Hive database?

Regards
Prabhjot


Array of Structs

2012-07-13 Thread Bhaskar, Snehalata
Hi all,

 

How do I create an array of structs in Hive? And how do I populate that
table with data?

 

Please help.
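
For illustration, a rough sketch (table and column names here are made up, and 
it assumes a Hive version that provides the array() and named_struct() 
functions) would be:

  CREATE TABLE people (
    name STRING,
    addresses ARRAY<STRUCT<street:STRING, city:STRING>>
  );

  -- one way to populate it; a one-row table named dummy is assumed to exist
  INSERT OVERWRITE TABLE people
  SELECT 'snehalata',
         array(named_struct('street', 'mg road', 'city', 'pune'),
               named_struct('street', 'fc road', 'city', 'mumbai'))
  FROM dummy;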

 

Thanks and regards,

Snehalata Deorukhkar

Nortel No:0229-5814

 



loading data in an array within a map

2012-06-28 Thread Bhaskar, Snehalata
Hi all,

 

I have created a table which has an array within a map. I am using the
following query.

 

hive> create table user_profiles
 (
   userid string,
   friends array<string>,
   properties map<string, array<string>>
 )
 ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' COLLECTION ITEMS
 TERMINATED BY ':' MAP KEYS TERMINATED BY '#' LINES TERMINATED BY '\n'
 LOCATION '/user/sb25634/user_profiles_table';
OK
Time taken: 0.794 seconds

 

But I am not able to load data properly into the array inside the map. I am
facing a problem with the delimiters used in the data file. Is there any way I
can indicate the delimiters for the array elements?

Please help me.
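
As far as I know, ROW FORMAT DELIMITED only lets you set the first few 
delimiter levels (fields, collection items, map keys); the strings inside the 
array that forms the map value fall back to Hive's default control-character 
delimiters (\004, \005 and so on for deeper nesting), which this clause cannot 
change. One alternative that sidesteps the file delimiters entirely is to build 
the nested values with map() and array() in an INSERT ... SELECT, for example 
(the staging table raw_profiles below is hypothetical and assumed to hold the 
flat input):

  INSERT OVERWRITE TABLE user_profiles
  SELECT userid,
         array(friend1, friend2),
         map(prop_key, array(val1, val2))
  FROM raw_profiles;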

 

 

Thanks and regards,

Snehalata Deorukhkar

Nortel No:0229-5814

 

