Not sure if this will help you, but you can try the map-join hint, which
basically tells Hive to hold a specific table in memory:

https://cwiki.apache.org/confluence/display/Hive/LanguageManual+JoinOptimization#LanguageManualJoinOptimization-PriorSupportforMAPJOIN
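
A rough sketch of the hint syntax applied to your query (same aliases as in
the query below; note that newer Hive versions ignore the hint unless
hive.ignore.mapjoin.hint is set to false):

select /*+ MAPJOIN(s) */ s.*
from small s
left join large l on s.id = l.id
where l.id is null;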

On Fri, Sep 11, 2015 at 11:16 PM, Sergey Shelukhin <ser...@hortonworks.com>
wrote:

> As far as I know it’s not currently supported.
> The large table will be streamed in multiple tasks with the small table in
> memory, so there is no single place that knows for sure there was no row in
> the large table for a particular small-table row in any of the locations.
> It could have no match in one task but a match in another task.
> You can try rewriting the query as an inner join unioned with a “not in”,
> but “not in” might still be slow…
> IIRC there was actually a JIRA to solve this, but no work has been done so
> far.
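>
> A rough sketch of the “not in” half of that rewrite, for this particular
> query (it only keeps the unmatched rows; assumes large.id is never null):
>
> select s.*
> from small s
> where s.id not in (select l.id from large l);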
>
> From: Steve Howard <stevedhow...@gmail.com>
> Reply-To: "user@hive.apache.org" <user@hive.apache.org>
> Date: Friday, September 11, 2015 at 09:48
> To: "user@hive.apache.org" <user@hive.apache.org>
> Subject: mapjoin with left join
>
> We would like to utilize mapjoin for the following SQL construct:
>
> select small.* from small s left join large l on s.id = l.id where l.id is
> null;
>
> We can easily fit small into RAM, but large is over 1TB according to
> optimizer stats. Unless we set
> hive.auto.convert.join.noconditionaltask.size to at least the size of
> “large”, the optimizer falls back to a common (shuffle) join, which is
> incredibly slow.
>
> Given that it is a left join, which means we won’t always have rows in
> large for each row in small, is this behavior expected? Could it be that
> reading the large table would miss the new rows in small, so the large one
> has to be the one that is probed for matches?
>
> We simply want to load the 81K rows into RAM, then for each row in large,
> check the small hash table; any row in small that never finds a match in
> large is the one we then add to large.
>
> Again, the optimizer will use a mapjoin if we set
> hive.auto.convert.join.noconditionaltask.size to 1TB (the size of the large
> table). This is, of course, not practical. The small table is only 50MB.
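>
> For reference, the session settings involved look roughly like this (the
> size is in bytes; the ~1TB value only illustrates the workaround above,
> not a recommendation):
>
> set hive.auto.convert.join=true;
> set hive.auto.convert.join.noconditionaltask=true;
> set hive.auto.convert.join.noconditionaltask.size=1099511627776;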
>
> At the link below is the entire test case with two tables, one of which
> has three rows and the other has 96. We can reproduce it with tables this
> small, which leads me to believe I am missing something, or this is a bug.
>
> The link has the source code that shows each table's CREATE statement, as
> well as the explain plan with a value for
> hive.auto.convert.join.noconditionaltask.size passed at the command line.
> The output shows a mergejoin when
> hive.auto.convert.join.noconditionaltask.size is less than 192 (the size
> of the larger table), and a mapjoin when
> hive.auto.convert.join.noconditionaltask.size is larger than 192 (the
> large table fits).
>
> http://pastebin.com/Qg6hb8yV
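>
> A rough equivalent of that explain run, with the threshold set in the
> session rather than on the command line (shown here against the
> small/large tables from the query above, not the pastebin test tables):
>
> set hive.auto.convert.join.noconditionaltask.size=192;
> explain
> select s.*
> from small s
> left join large l on s.id = l.id
> where l.id is null;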
>
> The business case is loading only new rows into a large fact table. The
> new rows are the small table, and they are few in number.
>
