Re: non map-reduce for simple queries
On 7/31/12 9:23 PM, "Owen O'Malley" wrote:

> On Mon, Jul 30, 2012 at 11:38 PM, Namit Jain wrote:
>
>> That would be difficult. The % done can be estimated from the data
>> already read.
>
> I'm confused. Wouldn't the maximum size of the data remaining over the
> maximum size of the original query give a reasonable approximation of
> the amount of work done?

Yes and no: the filter behavior can vary a lot with the rows. But yes,
that is the best approximation we can have.

>> It might be simpler to have a check like: if the query isn't done in
>> the first 5 seconds of running locally, you switch to mapreduce.
>
> There are three problems I see:
>  * If the query is 95% done at 5 seconds, it is a shame to kill it and
> start over again at 0% on mapreduce with a much longer latency. (Instead
> of spending an additional 0.25 seconds you spend an additional 60+.)
>  * You can't print anything until you know whether you are going to kill
> it or not. (The mapreduce results might come back in a different order.)
> With user-facing programs, it is much better to start printing early
> rather than late, since it gives faster feedback to the user.

We cannot do this in either of the above approaches.

>  * It isn't predictable how the query will run. That makes it very hard
> to build applications on top of Hive.
>
> Do those make sense?
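The "yes and no" above can be made concrete: bytes read is an exact measure of scan progress, but without column statistics it says nothing about how many result rows the filter will ultimately pass. A minimal sketch (hypothetical helper, not Hive code):

```python
def scan_progress(bytes_read: int, total_input_bytes: int) -> float:
    """Fraction of the input scanned so far: the best available proxy
    for % done when no column statistics exist."""
    return bytes_read / total_input_bytes

# Two queries over the same 1 GB input are equally "50% done" by bytes...
print(scan_progress(500_000_000, 1_000_000_000))  # 0.5

# ...yet a highly selective filter may have emitted zero output rows so
# far, so scan progress can badly mis-state how close the *result* is.
```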
Re: non map-reduce for simple queries
On Mon, Jul 30, 2012 at 11:38 PM, Namit Jain wrote:

> That would be difficult. The % done can be estimated from the data
> already read.

I'm confused. Wouldn't the maximum size of the data remaining over the
maximum size of the original query give a reasonable approximation of the
amount of work done?

> It might be simpler to have a check like: if the query isn't done in
> the first 5 seconds of running locally, you switch to mapreduce.

There are three problems I see:
 * If the query is 95% done at 5 seconds, it is a shame to kill it and
start over again at 0% on mapreduce with a much longer latency. (Instead
of spending an additional 0.25 seconds you spend an additional 60+.)
 * You can't print anything until you know whether you are going to kill
it or not. (The mapreduce results might come back in a different order.)
With user-facing programs, it is much better to start printing early
rather than late, since it gives faster feedback to the user.
 * It isn't predictable how the query will run. That makes it very hard
to build applications on top of Hive.

Do those make sense?
Re: non map-reduce for simple queries
On 7/31/12 12:01 PM, "Owen O'Malley" wrote:

> On Mon, Jul 30, 2012 at 9:12 PM, Namit Jain wrote:
>
>> The total number of bytes of the input will be used to determine
>> whether or not to launch a map-reduce job for this query. That was in
>> my original mail.
>>
>> However, given any complex where condition and the lack of column
>> statistics in hive, we cannot determine the number of bytes that would
>> be needed to satisfy the where condition.
>
> All of these heuristics are guidelines, clearly. My inclination would be
> to use the maximum data volume as the primary metric until we have a
> better understanding of cases where that doesn't work well.

Maximum data volume can be used to dictate the initial behavior. That has
already been documented in the jira.

> If we are going to try the local solution and fall back to mapreduce, it
> seems better to put a limit well short of being done so that you don't
> waste as much work. Perhaps, if the query isn't 10% done in the first 5
> seconds of running locally, you switch to mapreduce. Would that work?

That would be difficult. The % done can be estimated from the data
already read.

It might be simpler to have a check like: if the query isn't done in the
first 5 seconds of running locally, you switch to mapreduce.

> -- Owen
Re: non map-reduce for simple queries
On Mon, Jul 30, 2012 at 9:12 PM, Namit Jain wrote:

> The total number of bytes of the input will be used to determine whether
> or not to launch a map-reduce job for this query. That was in my
> original mail.
>
> However, given any complex where condition and the lack of column
> statistics in hive, we cannot determine the number of bytes that would
> be needed to satisfy the where condition.

All of these heuristics are guidelines, clearly. My inclination would be
to use the maximum data volume as the primary metric until we have a
better understanding of cases where that doesn't work well. If we are
going to try the local solution and fall back to mapreduce, it seems
better to put a limit well short of being done so that you don't waste as
much work. Perhaps, if the query isn't 10% done in the first 5 seconds of
running locally, you switch to mapreduce. Would that work?

-- Owen
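The fallback check Owen proposes could look something like the sketch below. The function name and constants are illustrative assumptions, not Hive's actual implementation:

```python
# Abandon the local run for mapreduce if it hasn't made enough progress
# by a fixed checkpoint. Thresholds follow the figures in the mail.
CHECK_AFTER_SECONDS = 5
MIN_PROGRESS = 0.10  # must be at least 10% done at the checkpoint

def should_fall_back(elapsed_seconds: float, fraction_done: float) -> bool:
    """Return True if the local run should be killed in favor of mapreduce."""
    if elapsed_seconds < CHECK_AFTER_SECONDS:
        return False  # too early to judge
    return fraction_done < MIN_PROGRESS

print(should_fall_back(5.0, 0.05))  # True: under 10% done at 5 seconds
print(should_fall_back(5.0, 0.95))  # False: nearly finished, let it run
```

Checking well short of completion, as Owen notes, bounds the wasted local work to a few seconds instead of the full local attempt.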
Re: non map-reduce for simple queries
The total number of bytes of the input will be used to determine whether
or not to launch a map-reduce job for this query. That was in my original
mail.

However, given any complex where condition and the lack of column
statistics in hive, we cannot determine the number of bytes that would be
needed to satisfy the where condition.

On 7/31/12 7:07 AM, "Navis류승우" wrote:

> It supports table sampling also.
>
> select * from src TABLESAMPLE (BUCKET 1 OUT OF 40 ON key);
> select * from src TABLESAMPLE (0.25 PERCENT);
>
> But there is no sampling option specifying a number of bytes. This can
> be done in another issue.
>
> 2012/7/31 Owen O'Malley
>
>> On Sat, Jul 28, 2012 at 6:17 PM, Navis류승우 wrote:
>>
>>> I was thinking of a timeout for fetching, 2000 msec for example. How
>>> about that?
>>
>> Instead of time, which requires launching the query and letting it time
>> out, how about determining the number of bytes that would need to be
>> fetched to the local box? Limiting it to 100 or 200 MB seems
>> reasonable.
>>
>> -- Owen
Re: non map-reduce for simple queries
It supports table sampling also.

select * from src TABLESAMPLE (BUCKET 1 OUT OF 40 ON key);
select * from src TABLESAMPLE (0.25 PERCENT);

But there is no sampling option specifying a number of bytes. This can be
done in another issue.

2012/7/31 Owen O'Malley

> On Sat, Jul 28, 2012 at 6:17 PM, Navis류승우 wrote:
>
>> I was thinking of a timeout for fetching, 2000 msec for example. How
>> about that?
>
> Instead of time, which requires launching the query and letting it time
> out, how about determining the number of bytes that would need to be
> fetched to the local box? Limiting it to 100 or 200 MB seems reasonable.
>
> -- Owen
Re: non map-reduce for simple queries
On Sat, Jul 28, 2012 at 6:17 PM, Navis류승우 wrote:

> I was thinking of a timeout for fetching, 2000 msec for example. How
> about that?

Instead of time, which requires launching the query and letting it time
out, how about determining the number of bytes that would need to be
fetched to the local box? Limiting it to 100 or 200 MB seems reasonable.

-- Owen
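The byte-based gate suggested here can be sketched as below. Unlike a timeout, it is decided before anything is launched. The helper name and the 200 MB cap are assumptions taken from the figures in the mail, not Hive code:

```python
# Run locally only if the estimated bytes to fetch fit under a fixed cap;
# "100 or 200 mb seems reasonable" per the mail above.
LOCAL_FETCH_CAP_BYTES = 200 * 1024 * 1024

def run_locally(estimated_fetch_bytes: int) -> bool:
    """True if the query's input is small enough to fetch to the local box."""
    return estimated_fetch_bytes <= LOCAL_FETCH_CAP_BYTES

print(run_locally(50 * 1024 * 1024))    # True: 50 MB, fetch locally
print(run_locally(1024 * 1024 * 1024))  # False: 1 GB, launch map-reduce
```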
Re: non map-reduce for simple queries
This can be a follow-up to HIVE-2925. Navis, if you want, I can work on it.

On 7/29/12 7:58 PM, "Namit Jain" wrote:

> I like Navis's idea. The timeout can be configurable.
>
> On 7/29/12 6:47 AM, "Navis류승우" wrote:
>
>> I was thinking of a timeout for fetching, 2000 msec for example. How
>> about that?
>>
>> On Sunday, July 29, 2012, Edward Capriolo wrote:
>>
>>> If the where condition is too complex, selecting specific columns
>>> seems simple enough and useful.
>>>
>>> On Saturday, July 28, 2012, Namit Jain wrote:
>>>
>>>> Currently, hive does not launch map-reduce jobs for the following
>>>> queries:
>>>>
>>>> select * from <table> where <condition> (limit <n>)?
>>>>
>>>> This behavior is not configurable, and cannot be altered.
>>>>
>>>> HIVE-2925 wants to extend this behavior. The goal is not to spawn
>>>> map-reduce jobs for the following queries:
>>>>
>>>> select <columns> from <table> where <condition> (limit <n>)?
>>>>
>>>> It is currently controlled by one parameter:
>>>> hive.aggressive.fetch.task.conversion, based on which it is decided
>>>> whether or not to spawn map-reduce jobs for queries of the above
>>>> type. Note that this can be beneficial for certain types of queries,
>>>> since it avoids the expensive step of spawning map-reduce. However,
>>>> it can be pretty expensive for certain types of queries: selecting a
>>>> very large number of rows, a query with a very selective filter
>>>> (which is satisfied by a very small number of rows, and therefore
>>>> involves scanning a very large table), etc. The user does not have
>>>> any control over this. Note that it cannot be done by hooks, since
>>>> the pre-semantic hooks do not have enough information (type of the
>>>> query, inputs, etc.) and it is too late to do anything in the
>>>> post-semantic hook (the query plan has already been altered).
>>>>
>>>> I would like to propose the following configuration parameters to
>>>> control this behavior.
>>>>
>>>> hive.fetch.task.conversion: true, false, auto
>>>>
>>>> If the value is true, then all queries with only selects and filters
>>>> will be converted.
>>>> If the value is false, then no query will be converted.
>>>> If the value is auto (which should be the default behavior), there
>>>> should be additional parameters to control the semantics:
>>>>
>>>> hive.fetch.task.auto.limit.threshold     ---> integer value X1
>>>> hive.fetch.task.auto.inputsize.threshold ---> integer value X2
>>>>
>>>> If either the query has a limit lower than X1, or the input size is
>>>> smaller than X2, queries containing only filters and selects will be
>>>> converted to not use map-reduce jobs.
>>>>
>>>> Comments…
>>>>
>>>> -namit
Re: non map-reduce for simple queries
I like Navis's idea. The timeout can be configurable.

On 7/29/12 6:47 AM, "Navis류승우" wrote:

> I was thinking of a timeout for fetching, 2000 msec for example. How
> about that?
>
> On Sunday, July 29, 2012, Edward Capriolo wrote:
>
>> If the where condition is too complex, selecting specific columns seems
>> simple enough and useful.
>>
>> On Saturday, July 28, 2012, Namit Jain wrote:
>>
>>> Currently, hive does not launch map-reduce jobs for the following
>>> queries:
>>>
>>> select * from <table> where <condition> (limit <n>)?
>>>
>>> This behavior is not configurable, and cannot be altered.
>>>
>>> HIVE-2925 wants to extend this behavior. The goal is not to spawn
>>> map-reduce jobs for the following queries:
>>>
>>> select <columns> from <table> where <condition> (limit <n>)?
>>>
>>> It is currently controlled by one parameter:
>>> hive.aggressive.fetch.task.conversion, based on which it is decided
>>> whether or not to spawn map-reduce jobs for queries of the above type.
>>> Note that this can be beneficial for certain types of queries, since
>>> it avoids the expensive step of spawning map-reduce. However, it can
>>> be pretty expensive for certain types of queries: selecting a very
>>> large number of rows, a query with a very selective filter (which is
>>> satisfied by a very small number of rows, and therefore involves
>>> scanning a very large table), etc. The user does not have any control
>>> over this. Note that it cannot be done by hooks, since the
>>> pre-semantic hooks do not have enough information (type of the query,
>>> inputs, etc.) and it is too late to do anything in the post-semantic
>>> hook (the query plan has already been altered).
>>>
>>> I would like to propose the following configuration parameters to
>>> control this behavior.
>>>
>>> hive.fetch.task.conversion: true, false, auto
>>>
>>> If the value is true, then all queries with only selects and filters
>>> will be converted.
>>> If the value is false, then no query will be converted.
>>> If the value is auto (which should be the default behavior), there
>>> should be additional parameters to control the semantics:
>>>
>>> hive.fetch.task.auto.limit.threshold     ---> integer value X1
>>> hive.fetch.task.auto.inputsize.threshold ---> integer value X2
>>>
>>> If either the query has a limit lower than X1, or the input size is
>>> smaller than X2, queries containing only filters and selects will be
>>> converted to not use map-reduce jobs.
>>>
>>> Comments…
>>>
>>> -namit
Re: non map-reduce for simple queries
I was thinking of a timeout for fetching, 2000 msec for example. How about
that?

On Sunday, July 29, 2012, Edward Capriolo wrote:

> If the where condition is too complex, selecting specific columns seems
> simple enough and useful.
>
> On Saturday, July 28, 2012, Namit Jain wrote:
>
>> Currently, hive does not launch map-reduce jobs for the following
>> queries:
>>
>> select * from <table> where <condition> (limit <n>)?
>>
>> This behavior is not configurable, and cannot be altered.
>>
>> HIVE-2925 wants to extend this behavior. The goal is not to spawn
>> map-reduce jobs for the following queries:
>>
>> select <columns> from <table> where <condition> (limit <n>)?
>>
>> It is currently controlled by one parameter:
>> hive.aggressive.fetch.task.conversion, based on which it is decided
>> whether or not to spawn map-reduce jobs for queries of the above type.
>> Note that this can be beneficial for certain types of queries, since it
>> avoids the expensive step of spawning map-reduce. However, it can be
>> pretty expensive for certain types of queries: selecting a very large
>> number of rows, a query with a very selective filter (which is
>> satisfied by a very small number of rows, and therefore involves
>> scanning a very large table), etc. The user does not have any control
>> over this. Note that it cannot be done by hooks, since the pre-semantic
>> hooks do not have enough information (type of the query, inputs, etc.)
>> and it is too late to do anything in the post-semantic hook (the query
>> plan has already been altered).
>>
>> I would like to propose the following configuration parameters to
>> control this behavior.
>>
>> hive.fetch.task.conversion: true, false, auto
>>
>> If the value is true, then all queries with only selects and filters
>> will be converted.
>> If the value is false, then no query will be converted.
>> If the value is auto (which should be the default behavior), there
>> should be additional parameters to control the semantics:
>>
>> hive.fetch.task.auto.limit.threshold     ---> integer value X1
>> hive.fetch.task.auto.inputsize.threshold ---> integer value X2
>>
>> If either the query has a limit lower than X1, or the input size is
>> smaller than X2, queries containing only filters and selects will be
>> converted to not use map-reduce jobs.
>>
>> Comments…
>>
>> -namit
Re: non map-reduce for simple queries
If the where condition is too complex, selecting specific columns seems
simple enough and useful.

On Saturday, July 28, 2012, Namit Jain wrote:

> Currently, hive does not launch map-reduce jobs for the following
> queries:
>
> select * from <table> where <condition> (limit <n>)?
>
> This behavior is not configurable, and cannot be altered.
>
> HIVE-2925 wants to extend this behavior. The goal is not to spawn
> map-reduce jobs for the following queries:
>
> select <columns> from <table> where <condition> (limit <n>)?
>
> It is currently controlled by one parameter:
> hive.aggressive.fetch.task.conversion, based on which it is decided
> whether or not to spawn map-reduce jobs for queries of the above type.
> Note that this can be beneficial for certain types of queries, since it
> avoids the expensive step of spawning map-reduce. However, it can be
> pretty expensive for certain types of queries: selecting a very large
> number of rows, a query with a very selective filter (which is satisfied
> by a very small number of rows, and therefore involves scanning a very
> large table), etc. The user does not have any control over this. Note
> that it cannot be done by hooks, since the pre-semantic hooks do not
> have enough information (type of the query, inputs, etc.) and it is too
> late to do anything in the post-semantic hook (the query plan has
> already been altered).
>
> I would like to propose the following configuration parameters to
> control this behavior.
>
> hive.fetch.task.conversion: true, false, auto
>
> If the value is true, then all queries with only selects and filters
> will be converted.
> If the value is false, then no query will be converted.
> If the value is auto (which should be the default behavior), there
> should be additional parameters to control the semantics:
>
> hive.fetch.task.auto.limit.threshold     ---> integer value X1
> hive.fetch.task.auto.inputsize.threshold ---> integer value X2
>
> If either the query has a limit lower than X1, or the input size is
> smaller than X2, queries containing only filters and selects will be
> converted to not use map-reduce jobs.
>
> Comments…
>
> -namit
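The parameters proposed above amount to a simple decision function. The sketch below is an illustrative model only: the X1/X2 semantics and config names come from the proposal, while the default threshold values and the function name are made up for the example.

```python
def convert_to_fetch_task(conversion: str,       # hive.fetch.task.conversion
                          query_limit,           # None if no LIMIT clause
                          input_size_bytes: int,
                          x1: int = 1000,                 # ...auto.limit.threshold (hypothetical default)
                          x2: int = 256 * 1024 * 1024     # ...auto.inputsize.threshold (hypothetical default)
                          ) -> bool:
    """Decide whether a select/filter-only query skips map-reduce."""
    if conversion == "true":
        return True   # always convert
    if conversion == "false":
        return False  # never convert
    # auto: convert if limit < X1 OR input size < X2
    return (query_limit is not None and query_limit < x1) \
        or input_size_bytes < x2

print(convert_to_fetch_task("auto", 100, 10**12))   # True: small limit
print(convert_to_fetch_task("auto", None, 10**6))   # True: small input
print(convert_to_fetch_task("auto", None, 10**12))  # False: large scan, use map-reduce
```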