need more memory for hash joins (9i / 10g / 11gR2)
need more memory for hash joins [message #571679] Wed, 28 November 2012 17:52
Kevin Meade
Messages: 2103
Registered: December 1999
Location: Connecticut USA
Senior Member
Hello all. Recent events at work are forcing me to take a much closer look at hash joins, in an attempt to understand them at a deeper level than just the surface. But my question today is maybe a simple one. I have done lots of reading and can't for the life of me figure out how to get more memory to my HASH JOINS.

Please consider:

1) my databases are all using workarea_size_policy=AUTO

2) I am not afraid to go back to =MANUAL and set my own work area sizes (see the sketch after this list).

3) It seems I cannot set HASH_AREA_SIZE to more than about 2GB.
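
For context, what I mean by "manual" in item 2 is roughly the sketch below. The values are only examples, and HASH_AREA_SIZE tops out around 2GB anyway:

-- switch just this session to manual work area sizing
alter session set workarea_size_policy = MANUAL;
-- then size the individual work areas by hand (example values)
alter session set hash_area_size = 2147483647;
alter session set sort_area_size = 2147483647;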

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE    11.2.0.2.0      Production
TNS for Solaris: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production


18:40:31 SQL> alter session set hash_area_size = 6000000000 ;
alter session set hash_area_size = 6000000000
                                   *
ERROR at line 1:
ORA-02017: integer value required


I know there is a limit of about 2GB on my box for HASH_AREA_SIZE and setting it to 2GB works fine. But it is still not enough.

18:50:22 SQL> alter session set hash_area_size = 2147483647;

Session altered.

Elapsed: 00:00:00.23


So... my question is... is there a way to get around this limit of 2GB on a box that has 64GB, with some 20GB not in use? Using hash_area_size set to 2GB, I get better performance than with my current PGA_AGGREGATE_TARGET doing the allocation for me.

I think I'd like to get as much as 20GB to specific sessions for hash joins, but maybe I am pipe dreaming? You guys tell me.

NAME                                 TYPE        VALUE
------------------------------------ ----------- -------------------
_pga_max_size                        big integer 1258280K
pga_aggregate_target                 big integer 6G
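
Incidentally, something roughly like this shows what a work area actually gets while the join runs (v$sql_workarea_active; the column list is from memory, so check it on your version):

-- sketch: watch active work areas while the join runs
select sid, operation_type, policy,
       round(work_area_size/1024/1024)      est_mb,
       round(actual_mem_used/1024/1024)     used_mb,
       round(max_mem_used/1024/1024)        max_mb,
       number_passes,
       round(nvl(tempseg_size,0)/1024/1024) temp_mb
  from v$sql_workarea_active
 order by sid;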


Kevin
Re: need more memory for hash joins [message #571684 is a reply to message #571679] Thu, 29 November 2012 02:29
Roachcoach
Messages: 1576
Registered: May 2010
Location: UK
Senior Member
I had a similar issue, albeit with different memory numbers.

What I found is that even with a massive PGA allocated, the damned thing wouldn't use all of the memory for a hash join, even though it could have.

What I did find, which was unexpected, is that Oracle lets a sort-merge (SM) join use more memory. Forcing my queries into SM joins instead of hash joins was actually faster, since it allowed the whole operation to happen in memory, and the performance advantage held by the hash join was eaten up, and then some, by the losses writing to temp.
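
Something along these lines, just to illustrate what I mean by forcing it (table and column names are made up):

-- hypothetical: force a sort-merge join instead of a hash join
select /*+ leading(a b) use_merge(b) */
       a.col1, b.col2
  from big_table_a a
  join big_table_b b
    on b.join_key = a.join_key;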

I realise it's not exactly what you're asking, but it may be food for thought. If you've got the memory, it might be worth some benchmarks.
Re: need more memory for hash joins [message #571703 is a reply to message #571679] Thu, 29 November 2012 05:06
John Watson
Messages: 8928
Registered: January 2010
Location: Global Village
Senior Member
While tuning a datapump import recently, I found that the index building phase could be vastly improved by disabling AMM and setting up PGA like this:

alter system set pga_aggregate_target=8g;
alter system set "_pga_max_size"=6g;

The PGA usage by the importing process (this was standard edition, so no parallelism) peaked at about 3.5G. This will have been sort area, so again it isn't what you need. But it might work the same way for hash area.
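
For completeness, the "disabling AMM" part would be something along these lines, assuming AMM had been enabled through MEMORY_TARGET (sketch only; the instance needs a restart):

-- sketch: turn off Automatic Memory Management before setting the PGA targets above
alter system set memory_target=0 scope=spfile;
alter system set memory_max_target=0 scope=spfile;
-- restart, then apply pga_aggregate_target and "_pga_max_size"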
Re: need more memory for hash joins [message #571721 is a reply to message #571703] Thu, 29 November 2012 09:33
Kevin Meade
Messages: 2103
Registered: December 1999
Location: Connecticut USA
Senior Member
Thanks. These are some interesting ideas.

I have just put in a TAR with Oracle to ask for clarification of HASH_AREA_SIZE and PGA_AGGREGATE_TARGET, and whether it is at all possible to allocate more than 2GB to hash join work areas. After they get through sending me to documents I have already read and not paying attention to my problem description (I figure three communication cycles), maybe they will provide a real answer. Still, Oracle support is some of the best support in the business these days, I think, so I am hopeful.

I'll update with responses once they come in.

Kevin

[Updated on: Thu, 29 November 2012 09:35]
Re: need more memory for hash joins [message #571923 is a reply to message #571721] Tue, 04 December 2012 06:00
rleishman
Messages: 3728
Registered: October 2005
Location: Melbourne, Australia
Senior Member
Not speaking from authority here, but aren't PGA_AGGREGATE_TARGET and WORKAREA_SIZE_POLICY pre-11g stuff, used with Automatic PGA Memory Management? And HASH_AREA_SIZE / SORT_AREA_SIZE go back to v7.

With the introduction of Automatic Memory Management in 11g, you just set the MEMORY_TARGET to the total you want to make available to PGA and SGA and let Oracle do the rest. Have you tried using AMM?
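
Something like this, purely as an illustration (the 20g figure is made up, and an instance restart is needed):

-- sketch: switch to Automatic Memory Management (example values only)
alter system set memory_max_target=20g scope=spfile;
alter system set memory_target=20g scope=spfile;
-- optionally zero these so Oracle manages the SGA/PGA split entirely
alter system set sga_target=0 scope=spfile;
alter system set pga_aggregate_target=0 scope=spfile;
-- restart the instance for memory_max_target to take effect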

Also, a possibly relevant note for 3+ table joins: in an N-table join, Oracle will try to hash the N-1 smallest tables at the same time. If there are many large tables, they all compete for the same scarce memory resources, resulting in excessive dumping to disk. Sort-merge joins happen one at a time, so at any point in time you only need memory to sort the next table in the join plus the result set from the previous join. If that can fit in available memory while the equivalent N-table hash join spills to disk, the sort-merge can be considerably faster.
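
One way to see how much of that spilling is going on system-wide (statistic names from memory):

-- 'optimal' = completed in memory; 'onepass'/'multipass' = spilled to temp
select name, value
  from v$sysstat
 where name like 'workarea executions%';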

Ross Leishman
Re: need more memory for hash joins [message #571959 is a reply to message #571923] Tue, 04 December 2012 21:03
Kevin Meade
Messages: 2103
Registered: December 1999
Location: Connecticut USA
Senior Member
No, many of my databases are still 9i and 10g, and most of those on 11g are using HUGEPAGES, which at least for now is not compatible with AMM. It was a good idea though.

I too do not count myself as an expert in this area. However, I learn quickly. I am currently trying to understand how work areas behave when mated with parallel query. I am thinking that for the specific jobs (one or two) that are giving me trouble, using HASH_AREA_SIZE and PARALLEL QUERY may allow me to allocate much more memory than any of the automatic schemes will let me. I just have to work through the details. Or I could be wrong.
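
Roughly the kind of thing I want to test (table and column names are made up, and I still need to verify how the per-slave work areas add up):

-- sketch: manual work areas plus parallel query
-- idea: with MANUAL policy, each PX slave should get its own hash_area_size (still to be verified)
alter session set workarea_size_policy = MANUAL;
alter session set hash_area_size = 2147483647;
select /*+ parallel(a,8) parallel(b,8) use_hash(b) */
       a.col1, b.col2
  from big_table_a a
  join big_table_b b
    on b.join_key = a.join_key;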

Kevin