RE: Hive Bucketing

2016-01-27 Thread Mich Talebzadeh
From: Gopal Vijayaraghavan [mailto:go...@hortonworks.com] On Behalf Of Gopal Vijayaraghavan Sent: 26 January 2016 21:36 To: user@hive.apache.org Subject: Re: Hive Bucketing > Ok, so what is the resolution here? My understanding is that bucketing > does not improve performance. Is that correct?

Re: Hive Bucketing

2016-01-26 Thread Gopal Vijayaraghavan
> Ok, so what is the resolution here? My understanding is that bucketing > does not improve performance. Is that correct?

There are no right answers here - I spend a lot of time fixing over-zealous optimization attempts

RE: Hive Bucketing

2016-01-26 Thread Mich Talebzadeh
From: Akansha Jain [mailto:akansha.15au...@gmail.com] Sent: 25 January 2016 22:44 To: user@hive.apache.org Subject: RE: Hive Bucketing Thanks for the detailed explanation. Even without bucket pruning, the expectation from bucketing is a performance improvement. I am joining two tables which a
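For context, a minimal sketch of the join scenario being described, assuming both tables are bucketed (and sorted) the same number of ways on the join key; table and column names here are hypothetical, not from the thread, and exact prerequisites vary by Hive version:

-- Illustrative only: with matching bucket counts on the join key, these
-- settings let Hive join bucket-to-bucket instead of shuffling both
-- tables in full.
set hive.optimize.bucketmapjoin=true;
set hive.optimize.bucketmapjoin.sortedmerge=true;

SELECT a.userid, b.amount
FROM   clicks a
JOIN   purchases b ON a.userid = b.userid;

This bucket map join is the main way bucketing can pay off even without bucket pruning.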

Re: Hive Bucketing

2016-01-26 Thread Prasanth Jayachandran
From: 谭成灶 [mailto:tanx...@live.cn] Sent: 25 January 2016 15:37 To: user@hive.apache.org Subject: Re: Hive Bucketing Hi, how can I efficiently insert into an ORC bucketed table? I find it too slow. Thank you.

Re: Hive Bucketing

2016-01-26 Thread 谭成灶
From: 谭成灶 [mailto:tanx...@live.cn] Sent: 25 January 2016 15:37 To: user@hive.apache.org Subject: Re: Hive Bucketing Hi, how can I efficiently insert into an ORC bucketed table? I find it too slow. Thank you. From: Mich Talebzadeh Sent: 2

Re: Re: Hive Bucketing

2016-01-25 Thread Gopal Vijayaraghavan
> Hi, how can I efficiently insert into an ORC bucketed table? I find it too slow. Thank you.

Assuming you have partitions & slow inserts, you need to enable the flag from HIVE-6455:

set hive.optimize.sort.dynamic.partition=true;

Cheers, Gopal
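A minimal sketch of where that flag would sit, assuming a dynamic-partition insert into a partitioned, bucketed ORC table; the table and column names are hypothetical, not from the thread:

-- Illustrative only: HIVE-6455 sorts rows by partition (and bucket) key
-- ahead of the writers, so each reducer keeps a single ORC writer open
-- instead of one per partition/bucket.
set hive.exec.dynamic.partition.mode=nonstrict;
set hive.optimize.sort.dynamic.partition=true;

INSERT OVERWRITE TABLE events_orc PARTITION (event_date)
SELECT userid, payload, event_date
FROM   events_staging;

Without the flag, each writer holds many ORC files open at once, and the resulting memory pressure is what makes these inserts crawl.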

RE: Hive Bucketing

2016-01-25 Thread Akansha Jain
> From: Akansha Jain [mailto:akansha.15au...@gmail.com] > Sent: 22 January 2016 23:20 > To: user@hive.apache.org

RE: Hive Bucketing

2016-01-25 Thread Mich Talebzadeh
Sent: 25 January 2016 15:37 To: user@hive.apache.org Subject: Re: Hive Bucketing Hi, how can I efficiently insert into an ORC bucketed table? I find it too slow. Thank you. From: Mich Talebzadeh <mailto:m...@peridale.co.uk> Sent: 2016/1/23 7:31 To: user@hive.apache.org <mailto:

Re: Hive Bucketing

2016-01-25 Thread 谭成灶
Hi, how can I efficiently insert into an ORC bucketed table? I find it too slow. Thank you. From: Mich Talebzadeh <mailto:m...@peridale.co.uk> Sent: 2016/1/23 7:31 To: user@hive.apache.org <mailto:user@hive.apache.org> Subject: RE: Hive Bucketing Hi, I
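For reference, a minimal sketch of such an insert on the Hive 0.x/1.x versions in use in this thread; table names are hypothetical:

-- Illustrative only: on pre-2.0 Hive, hive.enforce.bucketing must be on
-- so the insert hashes rows into the declared bucket count; the job then
-- has to write one file per bucket, which is part of why it feels slow.
set hive.enforce.bucketing=true;

INSERT INTO TABLE users_bucketed
SELECT userid, name
FROM   users_raw;

(From Hive 2.0 onward this setting was removed and the behavior is always on.)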

RE: Hive Bucketing

2016-01-22 Thread Mich Talebzadeh
From: Akansha Jain [mailto:akansha.15au...@gmail.com] Sent: 22 January 2016 23:20 To: user@hive.apache.org Subject: RE: Hive Bucketing Thanks for the response. I am using the 0.13 MapR version. Could you tell me more about bucket pruning? On Jan 22, 2016 3:09 PM, "Mich Talebzadeh" mailto:m...
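As background beyond what the thread states: bucket pruning for equality filters on the bucketing column only arrived in Hive 2.0 on Tez (HIVE-11525), so a 0.13 build cannot use it. A sketch of how it looks there, with a hypothetical table name:

-- Illustrative only: with bucket pruning (Hive 2.0+, Tez), an equality
-- predicate on the bucketing column reads just the one matching bucket
-- file instead of all of them.
set hive.tez.bucket.pruning=true;

SELECT count(*) FROM mytable WHERE userid = 172839393;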

RE: Hive Bucketing

2016-01-22 Thread Akansha Jain
> From: Akansha Jain [mailto:akansha.15au...@gmail.com] > Sent: 22 January 2016 21:55

RE: Hive Bucketing

2016-01-22 Thread Mich Talebzadeh
From: Akansha Jain [mailto:akansha.15au...@gmail.com] Sent: 22 January 2016 21:55 To: user@h

Hive Bucketing

2016-01-22 Thread Akansha Jain
Hi All, I have enabled bucketing on a table. I created 256 buckets on user ID. Now when I query that table (select count(*) from table where userid = 172839393), MapReduce should only use a single bucket file as input to the mappers. But it is considering all files as input to the mappers and I don't s
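A minimal sketch of the setup being described, with hypothetical table and column names:

-- Illustrative only: 256 buckets hashed on the user id column.
CREATE TABLE user_events (
  userid  BIGINT,
  payload STRING
)
CLUSTERED BY (userid) INTO 256 BUCKETS
STORED AS ORC;

-- The filter hashes to exactly one bucket file, but on Hive 0.13 the
-- scan still reads all 256 files: there is no bucket pruning yet.
SELECT count(*) FROM user_events WHERE userid = 172839393;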