Can bigger blocksize for indexes help increase performance and lower I/O [message #138677] Fri, 23 September 2005 00:36
mjschwenger
Messages: 137
Registered: July 2005
Location: US
Senior Member
I have very high I/O activity due to a 3rd-party application which I do not control.
In the past I have used a 32K blocksize for indexes to improve performance and had very good results. The application support, though, said that they are not too competent in Oracle, but told me not to expect better results because of the "application nature".
Here is what the app does:
reads from a file one record at a time, inserts different parts of each record into 6-7 major tables, then selects from the same tables to compare the sets and, if there are matches, inserts into audit tables - every new record triggers a search for matches as well. There are also 3-4 history tables for each record processed. Not a big deal... but there are up to 70 processes running this app simultaneously.
The I/O and concurrency come from inserting into and selecting from the same tables, and from the number of sessions. I cannot change this as it's a 3rd-party product.

My question is: am I going to get better performance if I move the indexes to a 32K blocksize, as I'm expecting? Oracle 10.1.0.4, AIX 5.2, 12 GB server RAM, 8K blocksize for everything. Right now running with a 6 GB SGA and about 3 GB for the PGA.
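For reference, the change I have in mind would look roughly like this (cache size, file path, and object names are just placeholders):

-- Allocate a buffer cache for 32K blocks, then create a 32K tablespace
ALTER SYSTEM SET db_32k_cache_size = 512M SCOPE=BOTH;
CREATE TABLESPACE idx_32k DATAFILE '/u01/oradata/idx_32k01.dbf' SIZE 4G
  BLOCKSIZE 32K;
-- Move an existing index into the new tablespace
ALTER INDEX my_big_index REBUILD TABLESPACE idx_32k;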
Also, as there are multiple processes running and randomly selecting small amounts of data to compare, if I decrease the blocksize for the table data to 2K, is that going to affect my performance as well?
My major waits are db file sequential read, due to the random selects from big tables - up to 60 million records - and log file sync, due to the millions of single commits after each small insert.
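This is roughly how I'm watching the waits (on 10.1 the times are in centiseconds):

-- Cumulative waits since instance startup
SELECT event, total_waits, time_waited
  FROM v$system_event
 WHERE event IN ('db file sequential read', 'log file sync')
 ORDER BY time_waited DESC;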

Any idea is greatly appreciated.
Thanks a lot, mj
Re: Can bigger blocksize for indexes help increase performance and lower I/O [message #138747 is a reply to message #138677] Fri, 23 September 2005 07:32
smartin
Messages: 1803
Registered: March 2005
Location: Jacksonville, Florida
Senior Member
My initial thought is, as always, that you should run a test with 32K and 2K block sizes and compare the difference. Maybe a third at 8K: run a copy of the app in QA and benchmark it (a sketch for setting up the test tablespaces is at the end of this post). That takes all the "in theory" stuff and puts it to the test.

That said, in theory, I would think a smaller block size:

1) is going to cause less contention for the simultaneous small-modification processes you describe, at both the disk and latch/enqueue level, due to less information being stored on each block.

2) would cause less I/O on small single-row select statements, because if you only need one row of data you would only be reading 2K of info and using most of it, rather than reading 32K and ignoring most of it.
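Something like this would set up the candidate tablespaces (sizes and paths are only examples):

-- One buffer cache per nonstandard block size
ALTER SYSTEM SET db_2k_cache_size = 256M SCOPE=BOTH;
ALTER SYSTEM SET db_32k_cache_size = 256M SCOPE=BOTH;
CREATE TABLESPACE test_2k DATAFILE '/u01/oradata/test_2k01.dbf' SIZE 2G
  BLOCKSIZE 2K;
CREATE TABLESPACE test_32k DATAFILE '/u01/oradata/test_32k01.dbf' SIZE 2G
  BLOCKSIZE 32K;
-- Rebuild the objects under test into each, then rerun the workload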
Re: Can bigger blocksize for indexes help increase performance and lower I/O [message #138765 is a reply to message #138747] Fri, 23 September 2005 08:25
alliejane
Messages: 59
Registered: July 2005
Location: Glasgow
Member
Mmmm.

To put forward another view, in favour of a larger block size for the index tablespace:

If you have small single-row queries that use index access, and the queries run often enough, then once an index block is brought into the buffer cache it can service many queries, since it holds many index entries. Less I/O to disk.

Buffer busy waits may be a problem for a highly transactional index, but a larger INITRANS and MAXTRANS may mitigate that.
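Something along these lines, for example (the index name is made up):

-- Allow more concurrent transactions per index block
ALTER INDEX orders_audit_idx REBUILD INITRANS 10;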

Just a thought.
Re: Can bigger blocksize for indexes help increase performance and lower I/O [message #138777 is a reply to message #138765] Fri, 23 September 2005 10:17
mjschwenger
Messages: 137
Registered: July 2005
Location: US
Senior Member
Yes, these were my thoughts too.
I have small queries executing constantly with bind variables.
A bigger index blocksize will keep the index entries in the cache, making them available to all the simultaneously running processes.
My concern was that the data is selected so randomly that loading 3-4 big index blocks may saturate the system and cause constant loading of new blocks into the buffer...

For the data pages I thought that a smaller blocksize of 4K would bring less data into the buffer for the random selects...

If the system is running right now, is there a way to make some predictions?
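Would something like v$db_cache_advice help? I know it only predicts for the caches that already exist, not for a different blocksize, but assuming db_cache_advice is ON:

-- Estimated physical reads at various sizes of the default 8K cache
SELECT size_for_estimate, estd_physical_read_factor, estd_physical_reads
  FROM v$db_cache_advice
 WHERE name = 'DEFAULT'
   AND block_size = 8192;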

Thanks a lot, mj