PL/SQL Loops and the Performance Problem They May Cause


PL/SQL is an excellent language for Oracle. It is integrated with the database, offers highly useful extensions to SQL, and is extremely powerful when exploiting Oracle Objects, yet in spite of all this it is still way easy to learn. But like any procedural programming language, one can get lazy with it, tending to accept the first solution arrived at. Looping constructs in particular seem to be used as crutches rather than necessary components of a solution. Bad looping that causes performance issues is a problem that litters the PL/SQL landscape. But it is easy to spot and fix.


In short, the problem is this: a PL/SQL routine contains a select statement whose returned rows are used to drive other SQL code inside a loop (cursor-for-loops are common). Inside the loop we see SQL statements that do not use indexes based on the data elements in the driving row set, and as a result do far more work than necessary. Here is a contrived but clear example:

for r1 in (
           select *
           from dept
           where dept_name like 'SALES%'
          ) loop
   update emp set
              salary = salary * 1.1
             ,last_update_date = sysdate
   where deptno = r1.deptno;
end loop;

Many of you will recognize this SQL as a variation of the classic "let's give everyone in a sales department a 10% raise". Let us goof on it a little bit. Suppose there are three sales departments. Further suppose there is no index on DEPTNO of the EMP table. You should see right away that this means we will scan the EMP table three times when we run this code. If you don't see why three passes of the EMP table will happen, then please go back and look at the code again. Three SALES departments means three rows in the for loop, which means three executed update statements, each of which must access the EMP table looking for employees in the right department. No index on EMP.DEPTNO means a real scan, not an index lookup. We look at every row in the EMP table three times even though we can update each row at most once. Many rows we won't update at all. What a waste of time.

Stating our example another way: if we are looping through departments, then we are exposing DEPTNO to the DML inside the loop, and the SQL statements in the loop should be taking advantage of it. If they are not, then useless overhead is being done each time through the loop, and the loop is hurting us performance-wise, not helping us.

It takes only a little imagination to recognize that this could become a problem in situations where there are lots of rows coming out of the driving query, or lots of rows in the EMP table. Fortunately we have at least three ways to fix this problem. The one to use will depend upon the specifics of your code.

If you have any data sense at all you have likely thought, "that is stupid; DEPTNO on the EMP table is a foreign key, it should have been indexed". Of course you are right, as we all should know by now that when it comes to Oracle, every Primary Key, Unique Key, and Foreign Key should be supported by an index, no exceptions. Well, there may be exceptions, but I have never met one for this rule that could be justified. Creating an index on DEPTNO is the first and most obvious way to fix this problem. Adding an index on EMP.DEPTNO would mean that each pass through the loop would execute an update statement driven by the loop data. Thus, although we would still be doing three updates (in our scenario we said there were three Sales departments and thus three rows in our loop), each update would be an indexed lookup that touched only the rows it was interested in, rather than a table scan.
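For reference, the fix itself is a one-liner (the index name here is just illustrative; use whatever fits your naming standard):

create index emp_deptno_idx on emp (deptno);

With that in place, each of the three updates becomes an index range scan on its DEPTNO value instead of a full pass of EMP.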

Certainly that solves the potential performance problem presented. But adding an index, in my experience, is not the best approach to fixing this problem most of the time. You should consider two questions before adding an index to your database to solve a problem like this.

First, whenever an index is added to a data system, you run the risk of negatively affecting something else in the system. Databases these days are no longer little pockets of information; they are large integrated data services, and their pieces do not operate independently of each other even though we often treat the component pieces that way. A new index can and will change query plans for many things that access the table getting the new index. You can, for example, hose your online application by accident at the expense of making some batch component go faster. This is not far-fetched. In fact I did this last week, which is part of the reason for this article. If you are going to add an index, at the least have some idea of what accesses the table you are enhancing, and be available the next day in case you need to back it out. Also, make sure to pass the new index by your DBA. Remember the BUDDY rule: "never go swimming in hot water without a friend".
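If you want a rough idea of what currently touches a table before you index it, a couple of quick checks help (a sketch only; it assumes you can query the dictionary and shared pool views):

-- stored code that references EMP
select owner, name, type
from dba_dependencies
where referenced_name = 'EMP';

-- cursors currently in the shared pool that mention EMP
select sql_text
from v$sql
where upper(sql_text) like '%EMP%';

Neither check is bulletproof, but together they give you a starting list of things to regression test.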

Second, if your goal in adding the index is to change the query plan of the DML in the loop in order to make the DML run faster, then what makes you think the query plan you are trying to change is actually a bad one? Recent releases of Oracle's Cost Based Optimizer are in fact quite good. Maybe not perfect, but if statistics for tables and indexes have been collected properly, the likelihood is that Oracle is generating good plans. The query plan for the update statement in our example may be a good one too. Oracle tuned the update in our example thinking it was only going to execute once (what else can it think?). It is our loop that is the actual problem, because it forces Oracle to execute SQL multiple times, but the Cost Based Optimizer can't consider that fact in its plan generation. If the update in our example were indeed only executing once, then its query plan might be a great plan, and an index would only be going backwards performance-wise. Nope, a better solution most of the time is to get rid of the loop altogether.
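It is easy enough to see what plan the optimizer picked for the update on its own, with something like this sketch (DBMS_XPLAN is available from 9iR2 onward; :b1 stands in for the loop-supplied DEPTNO):

explain plan for
update emp set
           salary = salary * 1.1
          ,last_update_date = sysdate
where deptno = :b1;

select * from table(dbms_xplan.display);

If that plan looks sensible for a single execution, the plan is not the problem; the loop is.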

In reality, you will rarely see a simple example like the one shown above. What you will see are much more sophisticated pieces of code: loop structures that contain either large SQL statements with many joins, or many SQL statements, or more than likely both. What we need to do is get rid of the loop altogether. In this way, we take advantage of the Optimizer's plan-crafting ability without affecting anything but the code we want to affect.

Why not recode the update to look like this:

update emp set
           salary = salary * 1.1
          ,last_update_date = sysdate
where deptno in
   (
    select deptno
    from dept 
     where dept_name like 'SALES%'
   );

This update statement will do the same thing as the original code, but in a sense it embeds the loop inside the update statement by encompassing the DEPT select in its where clause. The fact that there is no index on DEPTNO is now less important, because we are no longer walking the data manually as the loop forced us to do. The Oracle optimizer can consider the nature of the sub-query in its query plan. If there are enough EMP rows in our SALES departments, then a full table scan may indeed be the best plan for this update. In situations where the embedded SQL has lots of joins and sub-queries or is otherwise highly complex (as we often see inside PL/SQL loops), re-coding the PL/SQL to remove its reliance on the loop structure often yields the results we desire. Of course you will have to do some testing to demonstrate that you have not changed the meaning of the process, but this testing is much easier than the regression testing you might be faced with when you add an index. Hmm... I wonder how many people actually consider regression testing when all they do is add an index?
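A quick sanity check for that testing (just a sketch, and it only compares row counts, not values) is to confirm the rewrite touches the same number of rows the loop version updated:

select count(*)
from emp
where deptno in
   (
    select deptno
    from dept
    where dept_name like 'SALES%'
   );

Compare this count to the total the loop reports (summing SQL%ROWCOUNT across iterations) and you at least know the two forms agree on scope.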

We said there were at least three solutions to this situation. The third is a variation of the one we just looked at. The likelihood is that your loop does more than one thing; that is, the loop contains many SQL statements, not just one. In this situation you don't want to be adding what might be a large sub-query to every piece of SQL in the loop. That would require materializing the same data across multiple statements, which itself has a performance cost. So it can be beneficial to back up in time a little and use an older technique that works pretty well: the working table.

Create table work1 (deptno number);

Insert into work1
select deptno
from dept
where dept_name like 'SALES%';

update emp set
           salary = salary * 1.1
          ,last_update_date = sysdate
where deptno in (select deptno from work1);

Use of a worktable preserves our driving rows and thus lets us run the driving query only once, yet use its results everywhere we need them in the code the loop used to control. Many people don't like using worktables because they seem "dirty" somehow. I know the feeling. But I got over it. Worktables are very effective mechanisms when employed for the right reasons.
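If the "dirty" feeling comes from leaving permanent scratch tables lying around, one variation (assuming Oracle 8i or later) is a global temporary table, which keeps the technique but cleans up after itself:

create global temporary table work1 (deptno number)
on commit preserve rows;

The rows are private to your session and disappear when the session ends (or at commit, if you use ON COMMIT DELETE ROWS instead). One caution: as with any driving table, the optimizer still needs representative statistics on it, a point that comes up again in the comments below.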

To recap, the problem we are dealing with is a loop structure that drives one or more SQL statements, where the SQL inside the loop does not use the data the loop exposes as part of an efficient path in its query plan. The solutions presented above are to either a) do something with indexing to alter the query plan of each SQL statement in the loop and hope it will be faster (the less preferred solution), or b) eliminate the loop altogether by adding the loop's query to the other SQL directly, or indirectly via a work table (more preferred).

So what kind of improvement can one expect? Naturally that depends upon how bad your performance problem is. The fact that you are looking at a problem in the first place means it is likely pretty bad. You should expect to achieve orders-of-magnitude improvement: something between 10 and 100 times better performance when you are done. Jobs that take 10 hours to run should take 1 hour or, better yet, 10 minutes. The reason you should expect this is that, by removing the loop structure, you eliminate massive amounts of overhead work done every time through the loop that never contributed to the final result. Even if the query plan of each DML does not change after removing reliance on the loop structure, you should still see this improvement, because you will be executing each SQL statement once, not N times (once for each row of the driving query). Your goal was never to change query plans; it was to stop doing useless overhead. A query plan change with this method is only a side effect of achieving your performance goal of removing needless overhead work.

Remember what to look for: DML contained in a loop that does not take advantage, in its query plan, of the data values exposed by the loop. If you see this, get cracking, as you may be looking at some serious gravy.



About the Author: Kevin Meade is a 22-year veteran of relational databases in general and Oracle in particular. He wears many hats: Architect, Designer, Modeler, and Programmer, and is currently a contractor using the title of "Oracle Specialist" in the Finance and Insurance industries. Married with two girls, he likes to swim in his heated pool with his family, is addicted to strategy computer games, keeps a saltwater reef in his home, and brews his own dark beers (yum).

Comments

Hi Kevin,

I have gone through your article and tested the performance of the various solutions to overcome the 'loop overhead'. In my testing I inserted a huge number of records (458,752 records) into the emp table. But, the surprise is that the 'Loop version' performed better than the other suggested solutions.

The testing program:

SET SERVEROUTPUT ON SIZE 10000
DECLARE
TYPE DeptTab IS TABLE OF dept.deptno%TYPE INDEX BY PLS_INTEGER;
SalesDept DeptTab;

	t1 INTEGER;
	t2 INTEGER;
	t3 INTEGER;
	t4 INTEGER;
	t5 INTEGER;
	t6 INTEGER;
BEGIN
	-- Loop version
	t1 := dbms_utility.get_time;
	for r1 in 	(
				select
					*
				from
					dept
				where
					dname like 'SALES%'
				)
	loop
		update
			emp
		set
			sal = sal * 1.1
			,last_update_date = sysdate
		where
			deptno = r1.deptno;
	end loop;
	t2 := dbms_utility.get_time;

	-- Subquery with IN operator version
	update
		emp
	set
		sal = sal * 1.1
		,last_update_date = sysdate
	where
		deptno in
					(
					select
						deptno
					from
						dept 
					where
						dname like 'SALES%'
					);
	t3 := dbms_utility.get_time;

	-- Worktable version
	--execute immediate 'Create table work1 (deptno number)';
	delete from work1;
	
	Insert into 
		work1
	select
		deptno
	from
		dept
	where
		dname like 'SALES%';
	
	update
		emp
	set
		sal = sal * 1.1
		,last_update_date = sysdate
	where
		deptno in (select deptno from work1);
	
	t4 := dbms_utility.get_time;

	-- Subquery with EXISTS operator version
	update
		emp
	set
		sal = sal * 1.1
		,last_update_date = sysdate
	where
		exists
				(
				select
					1
				from
					dept 
				where
					dname like 'SALES%'
					and deptno = emp.deptno
				);
	t5 := dbms_utility.get_time;
	
	-- FORALL version
	select
	deptno
	bulk collect into
	SalesDept
	from
	dept
	where
	dname like 'SALES%';

	FORALL i in SalesDept.FIRST..SalesDept.LAST
		update
			emp
		set
			sal = sal * 1.1
			,last_update_date = sysdate
		where
			deptno = SalesDept(i);
	t6 := dbms_utility.get_time;	
	
	dbms_output.put_line('Loop version: ' || TO_CHAR((t2 - t1)/100));
	dbms_output.put_line('Subquery with IN operator version: ' || TO_CHAR((t3 - t2)/100));
	dbms_output.put_line('Worktable version: ' || TO_CHAR((t4 - t3)/100));
	dbms_output.put_line('Subquery with EXISTS operator version: ' || TO_CHAR((t5 - t4)/100));
	dbms_output.put_line('FORALL version: ' || TO_CHAR((t6 - t5)/100));

END;
/

The result of the test is as follows:

Loop version: 17.64
Subquery with IN operator version: 29.27
Worktable version: 29.82
Subquery with EXISTS operator version: 29.89
FORALL version: 21.91

PL/SQL procedure successfully completed.

Could you please explain to us what went wrong?

Regards and Thanks
Ganesh

Sorry for the delay in getting back to you; I had production problems at work that needed attention. Well... that question is rather open-ended. There are 101 different things that could have impacted the results you have shown. Let me make some observations however:

I compliment you on a fine example of PL/SQL coding. This code stands as a good sampling of different techniques. If other developers spent the time to do this kind of testing, all Oracle databases would be in better shape. As an Oracle developer, you clearly have the right stuff. So... now that I have buttered your toast as it were, let me speak a bit to your real question.

Let me start with the obligatory cowardly disclaimer: YOUR MILEAGE MAY VARY. There are few absolutes in IM. When it comes to performance tuning techniques, there are no guarantees that a technique will work in all cases. I could say that one obvious answer to your question may simply be that it didn't work this time. And that might be true; ah, but that would be a weak response, so let's move on to another observation.

Next, I can't really give you an answer yet. You have not supplied enough information to answer this question. I would need to know at a minimum the following:

describes of each table
indexes on each table
are statistics collected and how
average row length of each table
how many SALES% rows are in DEPT and how many EMP rows each has
query execution plans for each statement
are there any special optimizer settings you have enabled or disabled

With this data we have a better chance at explaining what you are seeing. Indeed, the answer may become obvious to you once you look at this info yourself. If so, please tell us what you find. If not, please submit this data in another reply so I can have a look at it.
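For anyone wanting to pull that list together, a rough sketch of the queries involved (assuming the tables live in your own schema) would be:

describe dept
describe emp

select table_name, index_name, column_name
from user_ind_columns
where table_name in ('DEPT','EMP')
order by table_name, index_name, column_position;

select table_name, num_rows, avg_row_len, last_analyzed
from user_tables
where table_name in ('DEPT','EMP');

select count(*) from dept where dname like 'SALES%';

Add to that the EXPLAIN PLAN output for each statement and any non-default optimizer settings.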

Next, I have to point out that such small timings really are not the focus of the article. As you recall, I suggested that an appropriate goal was something like taking 10 hours off an 11-hour job. A small situation like this, where seconds differentiate the results, can certainly be important in the right setting, say if you were executing it 10,000 times, but 17-second loops don't play in the same batch-job league as an 11-hour PL/SQL job. So we may be talking apples to oranges here. Still, this as an answer is not satisfying either, so what we really need is for me to do a test similar to yours. Here it is.

This is a bit long-winded, so here are the highlights.

I created a dept and emp table complete with indexes and constraints.
I populated 999 dept rows of which 9 are of type SALES%
I populated 999 emp rows for each dept row

we play around with using, or not using, the emp index on deptno
we see the effect of the different coding methods
we observe an anomaly with temp tables
we check the effect of having statistics
we change our optimizer mode to enable hash joins

We find out that:
having the deptno index on emp gives the best results in all cases
followed by use of a hash join with statistics
followed by use of a subselect in the query with no emp.deptno index
followed by the loop construct with no index
followed lastly by use of a temp table (this was interesting and I'll comment)...

It seems the use of a temp table with no stats confused the optimizer. Once stats were collected on the temp table as well as dept and emp, Oracle reverted to a nested loop, which gave pretty much equal timings to the PL/SQL loop, illustrating the nature of both. Enabling hash joins, although not the best solution and sometimes very memory hungry, did nonetheless give good results. Your timing differences are likely due to the statistics, indexes, and join methods that are active and available for your session and database.

Though the temp table issue was fun to see, and it exhibited the behaviour of your test case, thus cluing us in to what is going on, in the end the results below are just what we would expect to see and fall in line with the original article. So, it's just your test case and your database and settings etc. that made the difference. OK, here are the details of my test case, with simple timings via SET TIMING ON.

One point of clarification: I said in the article that creating a new index (or in real-world situations, a set of indexes) was not always the best solution in my experience, but I also said it was a valid solution if you were willing to do the proper regression testing. For this test case, an index on emp.deptno is the best performing solution.

create table dept (deptno number not null,dname varchar2(30) not null);

create table emp (empno number not null, deptno number not null, ename varchar2(30) not null);

insert into dept
select 9999
      ,lpad('.',30,'.')
from dba_tab_columns
where rownum < 1000
/

update dept set deptno = rownum,dname = lpad(rownum,30,'.');
update dept set dname = 'SALES'||rownum where rownum < 10;

commit;

begin
   for r1 in (
              select *
              from dept
             ) loop
      insert into emp
      select 99999
            ,r1.deptno
            ,lpad(rownum,30,'.')
      from dba_tab_columns where rownum < 1000
      ;
      commit;
   end loop;
end;
/

update emp set empno = rownum,ename=lpad(rownum,30,'.');

commit;

create unique index dept_pk on dept (deptno);
create unique index dept_uk1 on dept (dname);
alter table dept
   add constraint dept_pk primary key (deptno)
   add constraint dept_uk1 unique (dname)
/

create unique index emp_pk on emp (empno);
create index emp_fk1 on emp (deptno);
alter table emp
   add constraint emp_pk primary key (empno)
   add constraint emp_fk1 foreign key (deptno) references dept
/


set timing on

-- uses loop with index

begin
   for r1 in (
              select *
              from dept
              where dname like 'SALES%'
             ) loop
      update emp set ename = 'updated this row'
      where deptno = r1.deptno
      ;
   end loop;
end;
/
Elapsed: 00:00:00.01

-- uses loop without index

drop index emp_fk1;
Elapsed: 00:00:00.00
begin
   for r1 in (
              select *
              from dept
              where dname like 'SALES%'
             ) loop
      update emp set ename = 'updated this row'
      where deptno = r1.deptno
      ;
   end loop;
end;
/
Elapsed: 00:00:19.04


create index emp_fk1 on emp (deptno);
Elapsed: 00:00:16.01

-- uses single update with index and nested select

update emp set ename = 'updated this row'
where deptno in (select deptno from dept where dname like 'SALES%')
/
Elapsed: 00:00:00.01

-- uses nested select but no index

drop index emp_fk1;
Elapsed: 00:00:00.00

update emp set ename = 'updated this row'
where deptno in (select deptno from dept where dname like 'SALES%')
/
Elapsed: 00:00:06.05


-- uses work table with index (oops goofed on the temp table name)

create index emp_fk1 on emp (deptno);
Elapsed: 00:00:14.09

drop table demp_temp;
Elapsed: 00:00:00.04
create table demp_temp as select deptno from dept where dname like 'SALES%';
Elapsed: 00:00:00.00

update emp set ename = 'updated this row'
where deptno in (select deptno from demp_temp)
/
Elapsed: 00:00:00.02

-- uses temp table with no index (wow, that was unexpected, let us see the plan)

drop index emp_fk1;
Elapsed: 00:00:00.01

update emp set ename = 'updated this row'
where deptno in (select deptno from demp_temp)
/
Elapsed: 00:00:29.03

delete from plan_table;
Elapsed: 00:00:00.00

explain plan for
update emp set ename = 'updated this row'
where deptno in (select deptno from demp_temp)
/
Elapsed: 00:00:00.00
@showplan

PLANSTEP
-----------------------------------------------------------------------------
UPDATE STATEMENT ()
   UPDATE () EMP
      MERGE JOIN ()
         SORT (JOIN)
            TABLE ACCESS (FULL) EMP
         SORT (JOIN)
            VIEW () VW_NSO_1
               SORT (UNIQUE)
                  TABLE ACCESS (FULL) DEMP_TEMP
Elapsed: 00:00:00.00

-- say, what was the plan for the nested select

delete from plan_table;
Elapsed: 00:00:00.00

explain plan for
update emp set ename = 'updated this row'
where deptno in (select deptno from dept where dname like 'SALES%')
/
Elapsed: 00:00:00.00
@showplan

PLANSTEP
-----------------------------------------------------------------------------
UPDATE STATEMENT ()
   UPDATE () EMP
      NESTED LOOPS ()
         TABLE ACCESS (FULL) EMP
         TABLE ACCESS (BY INDEX ROWID) DEPT
            INDEX (UNIQUE SCAN) DEPT_PK
Elapsed: 00:00:00.00

-- hmm...
-- well... let's collect stats and see what happens

exec dbms_stats.gather_table_stats ('KEVIN','DEPT')
Elapsed: 00:00:00.07
exec dbms_stats.gather_table_stats ('KEVIN','EMP')
Elapsed: 00:00:21.09

delete from plan_table;
Elapsed: 00:00:00.00
explain plan for
update emp set ename = 'updated this row'
where deptno in (select deptno from demp_temp)
/
Elapsed: 00:00:00.00
@showplan

PLANSTEP
--------------------------------------------------------------
UPDATE STATEMENT ()
   UPDATE () EMP
      MERGE JOIN (SEMI)
         SORT (JOIN)
            TABLE ACCESS (FULL) EMP
         SORT (UNIQUE)
            TABLE ACCESS (FULL) DEMP_TEMP
Elapsed: 00:00:00.00

update emp set ename = 'updated this row'
where deptno in (select deptno from demp_temp)
/
Elapsed: 00:00:25.05

-- well, that wasn't much better, lets try this

rollback;
Elapsed: 00:00:00.01

exec dbms_stats.gather_table_stats ('KEVIN','DEMP_TEMP')
Elapsed: 00:00:00.00
delete from plan_table;
Elapsed: 00:00:00.00
explain plan for
update emp set ename = 'updated this row'
where deptno in (select deptno from demp_temp)
/
Elapsed: 00:00:00.00
@showplan

PLANSTEP
-------------------------------------------------
UPDATE STATEMENT ()
   UPDATE () EMP
      NESTED LOOPS ()
         SORT (UNIQUE)
            TABLE ACCESS (FULL) DEMP_TEMP
         TABLE ACCESS (FULL) EMP
Elapsed: 00:00:00.00

update emp set ename = 'updated this row'
where deptno in (select deptno from demp_temp)
/
Elapsed: 00:00:19.00

-- ok, back to the original loop crap
-- this is because of the order of table access
-- note for the nested select without index and without stats
-- we were doing nested loop but tables in a different order


-- ok lets have some real fun

alter session set hash_join_enabled=true;
Elapsed: 00:00:00.00
delete from plan_table;
Elapsed: 00:00:00.00
explain plan for
update emp set ename = 'updated this row'
where deptno in (select deptno from demp_temp)
/
Elapsed: 00:00:00.00
@showplan

PLANSTEP
-----------------------------------------------------------------
UPDATE STATEMENT ()
   UPDATE () EMP
      HASH JOIN ()
         SORT (UNIQUE)
            TABLE ACCESS (FULL) DEMP_TEMP
         TABLE ACCESS (FULL) EMP
Elapsed: 00:00:00.00
update emp set ename = 'updated this row'
where deptno in (select deptno from demp_temp)
/
Elapsed: 00:00:02.09

Hello Kevin,

I found your article about PL/SQL loops very interesting (also your thoughts about the side effects of adding "just another index").

You explain three variants for avoiding the "bad loops":
* correct index
* incorporate the select in the where clause of the update statement
* use a working table

What do you think of a fourth variant:

for r1 in (
           select ROWID
           from dept
           where dept_name like 'SALES%'
          ) loop
   update emp set
              salary = salary * 1.1
             ,last_update_date = sysdate
   where ROWID = r1.ROWID;
end loop;

As far as I know, using the ROWID is a very fast way to access a certain row.

Hello Hudo,

I don't see how you can use the ROWID of the DEPT table to update the rows in the EMP table. Can you please explain the logic behind the code? As far as I can tell, the code and logic are wrong.
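If a ROWID approach were the goal, the loop would have to fetch EMP rowids, not DEPT rowids; something like this sketch (which of course still loops once per employee row, so it does not escape the overhead the article warns about):

for r1 in (
           select e.rowid row_id
           from emp e, dept d
           where e.deptno = d.deptno
             and d.dept_name like 'SALES%'
          ) loop
   update emp set
              salary = salary * 1.1
             ,last_update_date = sysdate
   where rowid = r1.row_id;
end loop;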

Hello Kevin,

That was a good explanation.

Thanks.

Hi Kevin,

Thanks for your reply.

By reading your mature, thoughtful, and analytical reply, I found that my test data for that performance test was wrong:

(1) I created an emp table (a demo table provided by Oracle for the schema 'SCOTT') and inserted the same records multiple times by using:

insert into emp
select * from emp;

Due to that, the empno column wasn't unique.

(2) I created a dept table (the demo table provided by Oracle for the schema 'SCOTT') and it contained only a single record for the 'SALES' department.

(3) I didn't create proper indexes.

(4) Table distribution data (table statistics) wasn't gathered, and I might have been using 'Rule Based Optimization'.

I have now tested with proper test data according to your advice, and the results showed that the 'work table' version is the best solution, as you suggested.

Thanks for your valuable information about:
- The role of indexes in tuning performance
- The role of table data distribution (statistics data) for Cost-Based optimization.

Thank you very much Kevin!

Excellent article with test case.

Many thanks for giving valuable information.

You said the third option is:

Create table work1 (deptno number);

Insert into work1
select deptno
from dept
where dept_name like 'SALES%';
------------
Another, better way to create the work table is:

create table work as (select deptno
from dept
where dept_name like 'SALES%');

This helps in the case of bigger tables by saving some rollback space.
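For really big driving sets you could go a step further and skip most of the redo as well, assuming the work table is disposable and recoverability does not matter:

create table work nologging as
select deptno
from dept
where dept_name like 'SALES%';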
Regards,
Ashish

Kevin,

I just stumbled across your web site and enjoy it immensely. I also do Oracle and brewing. Please let us know if you develop the perfect Guinness clone!

Pete

Kevin,

Another approach to getting rid of the unnecessary loop is to do the following:

update
 (select e.salary, e.last_update_date
  from emp e
    inner join dept d
    on e.deptno = d.deptno
  where d.dept_name like 'SALES%'
 ) sales_dudes
set sales_dudes.salary = sales_dudes.salary * 1.1,
  sales_dudes.last_update_date = sysdate;

This only works if the query is 'key-preserved', meaning that the join doesn't change the size of the result set (1 row in result set for each row in emp). Most developers don't know that you can use this syntax, so I thought I'd throw it out there.
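One caveat worth spelling out (this matches Oracle's documented rule for updatable join views, but verify it on your release): Oracle enforces the key-preserved requirement itself, so if DEPT.DEPTNO had no primary key or unique constraint, the statement would not silently update rows multiple times; it would simply fail:

-- without a primary key or unique constraint on dept.deptno, the update is rejected:
-- ORA-01779: cannot modify a column which maps to a non key-preserved table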


This points out why it is so necessary for people to travel the internet on a regular basis and look for new features of Oracle, SQL, and PL/SQL; that is how they learn this kind of cool stuff. I also like using MERGE in the right circumstances. Though in general I stay away from it, it can be a neat and easy way to do certain things, particularly when combined with Analytics, because this update syntax (at least in 9i) does not support the use of analytics, but MERGE does. All is even cleaner in 10g.
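For the curious, here is a MERGE rendering of the same raise (a sketch only; as written it assumes 10g, where the WHEN NOT MATCHED branch can be omitted, whereas 9i requires both branches):

merge into emp e
using (
       select deptno
       from dept
       where dept_name like 'SALES%'
      ) d
on (e.deptno = d.deptno)
when matched then update set
    e.salary = e.salary * 1.1
   ,e.last_update_date = sysdate;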

Thanks for your comment, Kevin