FTS on table (Oracle 11g, Unix) [message #602547] Thu, 05 December 2013 09:26
a_oracle
Messages: 98
Registered: November 2010
Member
Hi All,

Quite new to tuning queries, so I need your expertise.
The explain plan:

explain plan for
select *
from A
where close_date = '22-nov-2013';


select * from table(dbms_xplan.display);


Plan hash value: 355257077
 
-------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name         | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |              |   910K|   151M|   119K  (1)| 00:23:55 |       |       |
|   1 |  PARTITION RANGE SINGLE|              |   910K|   151M|   119K  (1)| 00:23:55 |    14 |    14 |
|*  2 |   TABLE ACCESS FULL    | A            |   910K|   151M|   119K  (1)| 00:23:55 |    14 |    14 |
-------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - filter("CLOSE_DATE"=TO_DATE(' 2013-11-22 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
   
   
   SQL> select count(*) from A partition (P_1);  -- P_1 is the partition for the close_date used in the query

  COUNT(*)
----------
  17057148


The table is 100% analyzed:

Table:
NUM_ROWS   SAMPLE_SIZE  LAST_ANALYZED
17057148   17057148     12/5/2013 7:57:40 AM

Index:
NUM_ROWS   SAMPLE_SIZE  LAST_ANALYZED
17057148   17057148     12/5/2013 8:08:59 AM

The table is range partitioned (monthly) on close_date and has an index on close_date plus one more column, id. Now, I have a few questions:

1. As per the plan, does it imply that Oracle scans the entire partition P_1 (17,057,148 rows) and then fetches 910K rows? Please let me know if my understanding of the cardinality (Rows) column in the explain plan is correct (I have sketched a quick check below).

2. The daily row count of roughly 910K seems very low compared to the total rows in the partition, so why doesn't Oracle fetch the rows via an index scan rather than a full table scan?
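To sanity-check the optimizer's 910K estimate against the actual number of rows for that date, the quick check I have in mind is something like the following, with the date converted explicitly:

select count(*)
from A
where close_date = to_date('22-nov-2013', 'dd-mon-rrrr');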
Re: FTS on table [message #602548 is a reply to message #602547] Thu, 05 December 2013 09:30
BlackSwan
Messages: 26766
Registered: January 2009
Location: SoCal
Senior Member
>so why doesn't Oracle fetch the rows via an index scan rather than a full table scan?

what datatype is CLOSE_DATE?

WHY MY INDEX IS NOT BEING USED
http://communities.bmc.com/communities/docs/DOC-10031

http://searchoracle.techtarget.com/tip/Why-isn-t-my-index-getting-used

http://www.orafaq.com/tuningguide/not%20using%20index.html
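
Also, a quick sanity check: force the index with a hint and compare that plan's cost to the full scan. A sketch only; IDX_A_CLOSE_DATE below is a placeholder for your actual index name:

explain plan for
select /*+ index(a IDX_A_CLOSE_DATE) */ *
from A a
where close_date = to_date('22-nov-2013', 'dd-mon-rrrr');

select * from table(dbms_xplan.display);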
Re: FTS on table [message #602614 is a reply to message #602547] Thu, 05 December 2013 23:30
Kevin Meade
Messages: 2103
Registered: December 1999
Location: Connecticut USA
Senior Member
First, Oracle has converted the string to a date using the default date format, so the date/not-a-date issue has been addressed for optimization purposes. Of course, Lalit is correct that the OP should be doing this conversion explicitly.

Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - filter("CLOSE_DATE"=TO_DATE(' 2013-11-22 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))


Second, read up on SKEW. Assume your table holds the data shown below. NDV (Number of Distinct Values) is 2 for the date column: NDV(c2) = 2. Thus the optimizer expects c2 = '22-nov-2013' to return 1/2 of the rows, or 50%, which in this case is 3 rows. But in fact it will return 5 rows. This is because the data is skewed (rows are not evenly distributed across the values in the column). One solution to this problem is to collect histograms on column c2 (a sketch follows the sample data).

c1  c2
1   '21-nov-2013'
2   '22-nov-2013'
3   '22-nov-2013'
4   '22-nov-2013'
5   '22-nov-2013'
6   '22-nov-2013'
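
If skew does turn out to be the problem here, gathering a histogram is one DBMS_STATS call. A minimal sketch, assuming the table A from your plan and its CLOSE_DATE column (substitute the owning schema if the table is not in your own):

begin
  dbms_stats.gather_table_stats(
    ownname    => user,                               -- or the schema that owns the table
    tabname    => 'A',
    method_opt => 'FOR COLUMNS CLOSE_DATE SIZE 254',  -- request a histogram on CLOSE_DATE
    cascade    => true);
end;
/

With only a few dozen distinct dates in the column, Oracle will normally build a frequency histogram, which is the most accurate kind for this sort of skew.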

Kevin
Re: FTS on table [message #602623 is a reply to message #602614] Fri, 06 December 2013 01:17
a_oracle
Messages: 98
Registered: November 2010
Member
Hi All,

Thanks for the suggestion, but my code first converts the date using to_date, and that converted value is then used in the query. And Kevin, it's really interesting to see such facts; thanks for the same.
Just to shed some light on the current scenario: the table gets loaded with about 1 million rows every day, and since close_date is the (date) partition key, each daily load adds only 1 distinct value for roughly 1 million rows.
Also, below are some statistics I could gather that might help, though I am still working out how to interpret them:
select * from dba_tab_col_statistics where table_name = 'BB_FI_BO_REF' and column_name = 'CLOSE_DATE';
COLUMN_NAME  HISTOGRAM  NUM_DISTINCT  SAMPLE_SIZE  LOW_VALUE       HIGH_VALUE      DENSITY      NUM_NULLS  NUM_BUCKETS  LAST_ANALYZED
CLOSE_DATE   FREQUENCY  41            6104         78710A03010101  78710C05010101  1.31779E-08  0          41           12/5/2013 22:00


Do the above factors affect the query?
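
If it is relevant, I can also pull the partition-level column stats for the partition I am querying. I believe the query would be something along these lines (table and partition names simply follow my earlier queries):

select partition_name, num_distinct, num_buckets, histogram, density, num_nulls,
       to_char(last_analyzed, 'dd-mon-rrrr hh24:mi:ss') last_analyzed
from   dba_part_col_statistics
where  table_name     = 'BB_FI_BO_REF'
and    column_name    = 'CLOSE_DATE'
and    partition_name = 'P_1';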
Re: FTS on table [message #602663 is a reply to message #602623] Fri, 06 December 2013 08:35
Kevin Meade
Messages: 2103
Registered: December 1999
Location: Connecticut USA
Senior Member
I don't know what code you are referring to.

explain plan for
select *
from A
where close_date = '22-nov-2013';

does not convert the character string to a date. That is why the optimizer had to do it for you. This is clearly seen in your predicates.

explain plan for
select *
from A
where close_date = to_date('22-nov-2013','dd-mon-rrrr');

This explicitly converts the character string to a date, which is what your code should do.

Also, your stats output does not match your query.

09:23:22 SQL> desc dba_tab_col_statistics
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------
 OWNER                                              VARCHAR2(30)
 TABLE_NAME                                         VARCHAR2(30)
 COLUMN_NAME                                        VARCHAR2(30)
 NUM_DISTINCT                                       NUMBER
 LOW_VALUE                                          RAW(32)
 HIGH_VALUE                                         RAW(32)
 DENSITY                                            NUMBER
 NUM_NULLS                                          NUMBER
 NUM_BUCKETS                                        NUMBER
 LAST_ANALYZED                                      DATE
 SAMPLE_SIZE                                        NUMBER
 GLOBAL_STATS                                       VARCHAR2(3)
 USER_STATS                                         VARCHAR2(3)
 AVG_COL_LEN                                        NUMBER
 HISTOGRAM                                          VARCHAR2(15)


The columns are in a different order.

One of the things we don't like is when people tell us about a problem with one piece of SQL and then show us another piece of SQL instead. And we also don't like it when people don't cut/paste actual results but fake them. I ain't saying you did either, but it would be nice if what you provided did not have these seeming differences.

Here is a script that will decode the low/high values for you. You need to create the function first, then save the rest as a script file and execute it.
/*
CREATE OR replace FUNCTION kev_raw_to_string (rawval RAW, TYPE VARCHAR2) RETURN VARCHAR2
IS
  cn  NUMBER;
  cv  VARCHAR2(32);
  cd  DATE;
  cnv NVARCHAR2(32);
  cr  ROWID;
  cc  CHAR(32);
BEGIN
    IF ( TYPE = 'NUMBER' ) THEN
      dbms_stats.Convert_raw_value(rawval, cn);
      RETURN '"'||cn||'"';
    ELSIF ( TYPE = 'VARCHAR2' ) THEN
      dbms_stats.Convert_raw_value(rawval, cv);
      RETURN '"'||cv||'"';
    ELSIF ( TYPE = 'DATE' ) THEN
      dbms_stats.Convert_raw_value(rawval, cd);
      RETURN '"'||to_char(cd,'dd-mon-rrrr.hh24:mi:ss')||'"';
    ELSIF ( TYPE = 'NVARCHAR2' ) THEN
      dbms_stats.Convert_raw_value(rawval, cnv);
      RETURN '"'||cnv||'"';
    ELSIF ( TYPE = 'ROWID' ) THEN
      dbms_stats.Convert_raw_value(rawval, cr);
      RETURN '"'||cr||'"';
    ELSIF ( TYPE = 'CHAR' ) THEN
      dbms_stats.Convert_raw_value(rawval, cc);
      RETURN '"'||cc||'"';
    ELSE
      RETURN '"UNSUPPORTED DATA_TYPE"';
    END IF;
END;
/
*/
col low_value format a30
col high_value format a30
col last_analyzed format a22
--select table_name,column_name, num_distinct, num_nulls, num_buckets, sample_size,last_analyzed
select
  OWNER
, TABLE_NAME
, COLUMN_NAME
, NUM_DISTINCT
, NUM_NULLS
, NUM_BUCKETS
, SAMPLE_SIZE
, AVG_COL_LEN
, DENSITY
, TO_CHAR(LAST_ANALYZED,'dd-mon-rrrr.hh24:mi:ss') last_analyzed
, GLOBAL_STATS
, USER_STATS
, kev_raw_to_string (LOW_VALUE,(select data_type from dba_tab_columns b where b.owner = a.owner and b.table_name = a.table_name and b.column_name = a.column_name)) LOW_VALUE
, kev_raw_to_string (HIGH_VALUE,(select data_type from dba_tab_columns b where b.owner = a.owner and b.table_name = a.table_name and b.column_name = a.column_name)) HIGH_VALUE
from dba_tab_col_statistics a
where (owner,table_name) in
(
 (upper('&&1'),upper('&&2'))
)
--and (column_name = 'ROW_TERM_DATE$' or num_buckets > 1)
order by TABLE_NAME, COLUMN_NAME;


@showcolstats <owner> <table>


create this, run it, then show us what you get.

Kevin