Performance Issue (Oracle 10g, Windows XP)
Performance Issue [message #524125] Wed, 21 September 2011 23:35
a_oracle
Messages: 98
Registered: November 2010
Member
Hi All,

I need your expertise in choosing the optimal method for an insert. The scenario: I have a cursor which fetches nearly half a million records, and the current code inserts them with something like:

DECLARE
  CURSOR c IS SELECT * FROM table2;   -- the real query selects col1..coln
  TYPE t_tab IS TABLE OF table1%ROWTYPE;
  vartab t_tab;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO vartab LIMIT 200;
    EXIT WHEN vartab.COUNT = 0;
    FORALL i IN vartab.FIRST .. vartab.LAST SAVE EXCEPTIONS
      INSERT INTO table1 VALUES vartab(i);
    COMMIT;                           -- commits every 200 rows
  END LOOP;
  CLOSE c;
END;


Now, in the above logic, the code inserts and commits every 200 records. One possible solution is increasing the LIMIT to, say, 2000. But how do I decide that 2000 is an optimal value? How do I check the memory constraints so I can decide whether 2000 is good enough, or whether I can go still further?
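To compare LIMIT sizes concretely, a rough harness like the one below could time the fetch phase for several candidate values (a sketch only: table2 is assumed from the description, and only read access is needed since nothing is inserted):

DECLARE
  CURSOR c IS SELECT * FROM table2;   -- the real query selects col1..coln
  TYPE t_tab  IS TABLE OF c%ROWTYPE;
  TYPE t_lims IS TABLE OF PLS_INTEGER;
  v_lims  t_lims := t_lims(200, 1000, 2000, 5000);
  vartab  t_tab;
  t0      NUMBER;
BEGIN
  FOR j IN 1 .. v_lims.COUNT LOOP
    t0 := DBMS_UTILITY.GET_TIME;      -- centiseconds
    OPEN c;
    LOOP
      FETCH c BULK COLLECT INTO vartab LIMIT v_lims(j);
      EXIT WHEN vartab.COUNT = 0;     -- rows would be processed here
    END LOOP;
    CLOSE c;
    DBMS_OUTPUT.PUT_LINE('LIMIT ' || v_lims(j) || ': '
        || (DBMS_UTILITY.GET_TIME - t0) / 100 || ' sec');
  END LOOP;
END;
/

(Requires SET SERVEROUTPUT ON; in a test schema the FORALL insert could be added inside the inner loop to time the full round trip.)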
I cannot write insert into table1 select col1.....from table2, as this is an application where multiple transactions run in parallel, and inserting such a huge number of records with a single insert statement may cause issues.

Forgot to mention: fetching the 500,000 records takes nearly 30 seconds, which seems fine.

Appreciate your help.
Re: Performance Issue [message #524126 is a reply to message #524125] Wed, 21 September 2011 23:42
BlackSwan
Messages: 26766
Registered: January 2009
Location: SoCal
Senior Member
>I cannot write insert into table1 select col1.....from table2, as this is an application where multiple transactions run in parallel,
>and inserting such a huge number of records with a single insert statement may cause issues.
Then you have a SERIOUS design flaw.
Row by row is slow by slow.
COMMIT inside a LOOP increases the likelihood of an ORA-01555 error.
PL/SQL will NEVER be faster than a pure SQL implementation.
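For illustration (table and column names assumed from the original post), the entire open/fetch/insert loop collapses to one set-based statement:

INSERT INTO table1 (col1, /* ..., */ coln)
SELECT col1, /* ..., */ coln
FROM   table2;

COMMIT;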
Re: Performance Issue [message #524162 is a reply to message #524126] Thu, 22 September 2011 01:46
a_oracle
Messages: 98
Registered: November 2010
Member
Hi BlackSwan,
Thanks for the advice. But since the design flaw has existed since the start of the application, with multiple transactions running in parallel, we cannot go for a total redesign of the system; it's beyond our scope, and this issue relates to customizing existing code. (Damn, I hate supporting/maintaining existing code I was not part of developing.)
Now, this insert process with the stated logic takes nearly 40 minutes to process roughly 500,000 records. We have been asked to reduce that to at most 20 minutes on average.
So the only solution I see here is to increase the LIMIT clause in the bulk collect statement, as a simple insert statement has been rejected because it had already caused a serious performance issue before.
The dilemma is that the LIMIT clause is currently 200 records and I want to change it to maybe 2000-5000. But before making that change, I need to be sure it will not eat away the performance again. Can you suggest what we should be looking into before altering the LIMIT clause?
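One concrete thing to watch, for example, is the session's PGA: every batch of LIMIT rows must fit in the collection's memory, so the peak PGA at LIMIT 200 gives a baseline to extrapolate from. A query along these lines (standard v$ views; SELECT access on them is assumed) shows the current and peak PGA for your own session:

SELECT sn.name,
       ROUND(ms.value / 1024 / 1024, 1) AS mb
FROM   v$mystat ms
       JOIN v$statname sn ON sn.statistic# = ms.statistic#
WHERE  sn.name IN ('session pga memory', 'session pga memory max');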

And looking at simply using insert into table1 (select col1...coln from table2 <where clause>): I still feel this is indeed the best way. But I don't know whether it should be adopted, because although the query takes merely 30 seconds to fetch the data, inserting such a huge amount of data in one go in a multi-transaction application might cause issues.

Can you suggest a good method?
Re: Performance Issue [message #524172 is a reply to message #524162] Thu, 22 September 2011 02:15
John Watson
Messages: 8929
Registered: January 2010
Location: Global Village
Senior Member
Hi - how long does it actually take when you run the single
INSERT INTO....SELECT...FROM...
statement? 500,000 rows is not that many, and if it takes 40 minutes with your current PL/SQL implementation, I would have expected it to take under twenty minutes as a single SQL statement. I cannot see why other concurrent transactions would be a problem. You say:
Quote:
a simple insert statement has been rejected because it had already caused a serious performance issue before.
What was this issue?
Re: Performance Issue [message #524178 is a reply to message #524172] Thu, 22 September 2011 02:30
a_oracle
Messages: 98
Registered: November 2010
Member
Hi John,

Actually I cannot test the insert, as the issue relates to a replica of a live database on which we only have read access.
The issue with the single insert statement was, as stated by the client, related to "memory getting blown away".
I suspect it was something like this: undo entries for all the records were written to the rollback segment before a single commit at the end, but the rollback segment could not hold that much data.
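If undo exhaustion is the suspicion, the replica's recent undo consumption can be sanity-checked with a query like this (read access is enough; the 8K default block size is an assumption):

SELECT begin_time,
       undoblks,
       ROUND(undoblks * 8192 / 1024 / 1024) AS approx_mb   -- assumes 8K blocks
FROM   v$undostat
ORDER  BY begin_time DESC;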
Re: Performance Issue [message #524191 is a reply to message #524178] Thu, 22 September 2011 03:24
John Watson
Messages: 8929
Registered: January 2010
Location: Global Village
Senior Member
"memory getting blown away" is an Oracle error message I haven't seen before. I would show the client a few examples
like this:
orcl> set timing on
orcl> insert into t1  select * from epistore.grav_open_wave where rownum <=500000;

500000 rows created.

Elapsed: 00:04:50.17
orcl> select
  2  round(used_ublk * (select value from v$parameter where name='db_block_size')/1024/1024) MB_undo
  3  from v$transaction;

   MB_UNDO
----------
        32

Elapsed: 00:00:00.01
orcl>

That should convince them. Good luck.

[update: this was for a table with average row length 1372 bytes]

[Updated on: Thu, 22 September 2011 03:26]


Re: Performance Issue [message #524194 is a reply to message #524191] Thu, 22 September 2011 03:27
a_oracle
Messages: 98
Registered: November 2010
Member
Thanks, John. I will surely sing that tune before the client, and hopefully your advice will work out.
Re: Performance Issue [message #524252 is a reply to message #524194] Thu, 22 September 2011 07:07
a_oracle
Messages: 98
Registered: November 2010
Member
Hi,

I am back after a hefty discussion over the issue. I now plan to fulfill the above requirement as follows:
1. Use a simple insert into table1 (select col1....coln from table2) with the APPEND hint, which could be a better option.
2. The table the data is inserted into is cleared on a daily basis. Instead of deleting the rows, I would rather TRUNCATE the table, to deal with the issues created by the HIGH WATER MARK.

Now, I had a quick overview of the hint: a direct-path (APPEND) insert bypasses the buffer cache and, instead of searching existing blocks for free space, writes directly into fresh blocks above the HIGH WATER MARK (redo is still generated unless the table is NOLOGGING, so it is the buffer cache, not redo, that is skipped). This implies the high water mark keeps increasing. So I hope that using TRUNCATE, which resets the high water mark, should eliminate that issue, right?
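Concretely (table and column names assumed from the earlier posts), the daily job would then look something like:

TRUNCATE TABLE table1;             -- deallocates space and resets the high water mark

INSERT /*+ APPEND */ INTO table1   -- direct-path insert above the HWM
SELECT col1, /* ..., */ coln
FROM   table2;

COMMIT;                            -- mandatory: after a direct-path insert the session
                                   -- cannot even query table1 until it commits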

Please suggest. Also, can you please advise on what the optimal value for my LIMIT clause should be, and, if I choose a big value, what I should check beforehand?

Thanks Again
Re: Performance Issue [message #524253 is a reply to message #524252] Thu, 22 September 2011 07:14
cookiemonster
Messages: 13920
Registered: September 2008
Location: Rainy Manchester
Senior Member
If you're doing a straight insert/select what do you need limit for? That's a bulk collect setting.
Re: Performance Issue [message #524257 is a reply to message #524253] Thu, 22 September 2011 07:24
a_oracle
Messages: 98
Registered: November 2010
Member
Hi Cookiemonster,

If you read my earlier posts: I had asked for suggestions on increasing the LIMIT clause in my existing bulk collect code to a higher value, but I am not sure how to determine what that value should be.