RE: Hive query taking too much time
Hi Wojciech Langiewicz/Paul Mackles,

I tried your suggestion and it worked; performance has improved manyfold. Here are the results from my testing after implementing your suggestion:

Number of files on HDFS                                      File size                                      count(*) time (s)   count(*) result
1 (created from 2624 CSVs)                                   708.8 MB                                       66.258              3,567,922
3 (each created from 2624 CSVs)                              708.8 MB * 3                                   119.92              10,703,766
3 (each created from 2624 CSVs) + 14 (each from ~200 CSVs)   708.8 MB * 3 + 708.8 MB (14 files, 48-68 MB)   153.306             14,271,688

Thanks a lot for your help.

Kind Regards,
Keshav C Savant

From: Paul Mackles [mailto:pmack...@adobe.com]
Sent: Tuesday, December 06, 2011 8:14 PM
To: user@hive.apache.org
Subject: RE: Hive query taking too much time

How much time is it spending in the map and reduce phases, respectively? The large number of files could be creating a lot of mappers, which creates a lot of overhead. What happens if you merge the 2624 files into a smaller number, like 24 or 48? That should speed up the mapper phase significantly.

From: Savant, Keshav [mailto:keshav.c.sav...@fisglobal.com]
Sent: Tuesday, December 06, 2011 6:01 AM
To: user@hive.apache.org
Subject: Hive query taking too much time

Hi All,

My setup is:
hadoop-0.20.203.0
hive-0.7.1

I have a 5-node cluster: 4 datanodes and 1 namenode (which also acts as the secondary namenode). On the namenode I have set up Hive with HiveDerbyServerMode to support multiple Hive server connections.

I have inserted plain-text CSV files into HDFS using 'LOAD DATA' Hive statements. The total number of files is 2624 and their combined size is only 713 MB, which is very small from a Hadoop perspective, since Hadoop handles TBs of data easily.

The problem is that when I run a simple count query (i.e. select count(*) from a_table), it takes too much time to execute.
For instance, it takes almost 17 minutes to execute that query when the table has 950,000 rows; I understand that is far too long for such a small amount of data. This is only a dev environment; in production, the number of files and their combined size will grow into the millions and GBs respectively.

On analyzing the logs on all the datanodes and the namenode/secondary namenode, I do not find any errors.

I have also tried setting mapred.reduce.tasks to a fixed number, but the number of reducers always remains 1, while the number of maps is determined by Hive.

Any suggestions on what I am doing wrong, or how I can improve the performance of Hive queries? Any suggestion or pointer is highly appreciated.

Keshav

_
The information contained in this message is proprietary and/or confidential. If you are not the intended recipient, please: (i) delete the message and all copies; (ii) do not disclose, distribute or use the message in any manner; and (iii) notify the sender immediately. In addition, please be aware that any message addressed to our domain is subject to archiving and review by persons other than the intended recipient. Thank you.
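A minimal sketch of the fix discussed in this thread (merging many small CSVs into one file before LOAD DATA, so Hive launches far fewer mappers). The directory, file, and table names below are illustrative, not from the thread:

```shell
# Merge many small CSVs into a single file before loading into Hive.
# Assumes ./csv_parts/ holds the source files and that a table named
# 'a_table' already exists (both names are hypothetical examples).
mkdir -p csv_parts merged
# create two tiny sample part files just for demonstration
printf '1,a\n2,b\n' > csv_parts/part-0001.csv
printf '3,c\n' > csv_parts/part-0002.csv
# concatenate all part files into one merged file
cat csv_parts/*.csv > merged/combined.csv
wc -l < merged/combined.csv
# load the single merged file instead of thousands of small ones
# (commented out: requires a live Hive installation):
# hive -e "LOAD DATA LOCAL INPATH 'merged/combined.csv' INTO TABLE a_table"
```

With 2624 files merged down to a handful, the map phase should dominate far less of the query time, matching the numbers reported above.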
Re: Hive query taking too much time
Hi,

In this case it's much easier and faster to merge all files using these commands:

cat *.csv > output.csv
hive -e "LOAD DATA LOCAL INPATH 'output.csv' INTO TABLE $table"

On 07.12.2011 07:00, Vikas Srivastava wrote:

hey, if all the files have the same columns then you can easily merge them with a shell script:

table=yourtable
for file in *.csv
do
    cat "$file" >> new_file.csv
done
hive -e "LOAD DATA LOCAL INPATH 'new_file.csv' INTO TABLE $table"

it will merge all the files into a single file, then you can upload it with the same query

On Tue, Dec 6, 2011 at 8:16 PM, Mohit Gupta <success.mohit.gu...@gmail.com> wrote:

Hi Paul,
I am having the same problem. Do you know any efficient way of merging the files?
-Mohit

On Tue, Dec 6, 2011 at 8:14 PM, Paul Mackles <pmack...@adobe.com> wrote:

How much time is it spending in the map and reduce phases, respectively? The large number of files could be creating a lot of mappers, which creates a lot of overhead. What happens if you merge the 2624 files into a smaller number, like 24 or 48? That should speed up the mapper phase significantly.
--
Best Regards,
Mohit Gupta
Software Engineer at Vdopia Inc.
Failed to start HBase!!
# start-hbase.sh
SFserver176: Exception in thread "regionserver60020" java.lang.NullPointerException
SFserver176:     at org.apache.hadoop.hbase.regionserver.HRegionServer.join(HRegionServer.java:1417)
SFserver176:     at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:683)
SFserver176:     at java.lang.Thread.run(Thread.java:662)

Hi all. What is wrong? Why do I get a NullPointerException when I start the regionserver?
RE: Hive query taking too much time
You are right, Wojciech Langiewicz; we did the same thing and posted my results yesterday. Now we are planning to do this with a shell script, because of the dynamic nature of our environment, where files keep arriving. We will schedule the shell script with a cron job.

A question on this: we are planning to merge files based on one of the following approaches.

1. Based on file count: if the file count reaches X files, then merge and insert into HDFS.
2. Based on merged file size: if the merged file size crosses X bytes, then insert into HDFS.

I think option 2 is better, because that way all merged files will be almost the same size. What do you suggest?

Kind Regards,
Keshav C Savant

-----Original Message-----
From: Wojciech Langiewicz [mailto:wlangiew...@gmail.com]
Sent: Wednesday, December 07, 2011 8:15 PM
To: user@hive.apache.org
Subject: Re: Hive query taking too much time

Hi,

In this case it's much easier and faster to merge all files using these commands:

cat *.csv > output.csv
hive -e "LOAD DATA LOCAL INPATH 'output.csv' INTO TABLE $table"
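A minimal sketch of option 2 above (size-based merging): merge queued CSVs only once their combined size crosses a byte threshold, so every merged batch comes out roughly the same size. The directory name, table name, and the tiny threshold are illustrative values for demonstration, not from the thread:

```shell
# Size-based merge trigger: accumulate incoming CSVs in a pending
# directory; once their combined size reaches THRESHOLD bytes, merge
# them into one batch file and (in a real setup) load it into Hive.
PENDING=pending_csvs
THRESHOLD=8                          # bytes; tiny value just for the demo
mkdir -p "$PENDING"
printf '1,a\n' > "$PENDING/f1.csv"   # 4-byte sample file
printf '2,b\n' > "$PENDING/f2.csv"   # 4-byte sample file
total=0
for f in "$PENDING"/*.csv; do
    size=$(wc -c < "$f")             # size of this file in bytes
    total=$((total + size))
done
if [ "$total" -ge "$THRESHOLD" ]; then
    cat "$PENDING"/*.csv > batch.csv # merge the pending files
    rm "$PENDING"/*.csv              # clear the queue for new arrivals
    # hive -e "LOAD DATA LOCAL INPATH 'batch.csv' INTO TABLE your_table"
fi
```

Run from cron, a script like this yields uniformly sized batches, which keeps the mapper count per file predictable, in line with the preference for option 2 stated above.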
Data loading from Datanode
Hi All,

Is it possible to load data into HDFS using a Hive LOAD DATA query from any of the datanodes? In other words, can we insert files into a datanode directly (or from a Hive installation on that datanode), with the master node syncing with the datanodes later?

Keshav C Savant
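As I understand it (hedged, not confirmed in this thread), LOAD DATA LOCAL INPATH copies a file from wherever the Hive client runs into the table's HDFS directory, and HDFS then replicates the blocks across datanodes on its own; so a client installed on a datanode can load files too, provided it points at the same metastore. A command sketch with hypothetical paths and table names, not runnable without a live cluster:

```shell
# Illustrative only: from any node with a configured Hive client,
# a local file can be pushed into a table; Hive copies it into HDFS
# and HDFS handles block placement across datanodes.
# hive -e "LOAD DATA LOCAL INPATH '/tmp/input.csv' INTO TABLE a_table"
#
# Equivalently, place the file in HDFS first, then load without LOCAL
# (this moves the file within HDFS rather than copying from local disk):
# hadoop fs -put /tmp/input.csv /user/hive/staging/input.csv
# hive -e "LOAD DATA INPATH '/user/hive/staging/input.csv' INTO TABLE a_table"
```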