Feed aggregator

Partner Webcast – Hitchhikers Guide to Oracle Cloud (Part 2)

Oracle Cloud Infrastructure is a set of complementary cloud services that enable you to build and run a wide range of applications and services in a highly available hosted environment. Oracle Cloud...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Announcing SLOB 2.5.3

Kevin Closson - Thu, 2021-01-21 15:08

This is just a quick blog post to inform readers that SLOB 2.5.3 is now available at the following webpage: click here.

SLOB 2.5.3 is a bug fix release. One of the fixed bugs has to do with how SLOB sessions get connected to RAC instances. SLOB users can certainly connect via the SCAN service, but for more repeatable testing I advise SLOB 2.5.3 with SQL*Net services configured one per RAC node. This manner of connectivity establishes affinity between schemas and RAC nodes. For example, repeatability is improved if sessions performing SLOB operations against, say, user7's schema always connect to the same RAC node as you iterate through your testing.

The following is cut and pasted from SLOB/misc/sql_net/README:

The tnsnames.ora in this directory offers an example of
service names that will allow the user to test RAC with
repeatable results. Connecting SLOB sessions to the round
robin SCAN listener will result in SLOB sessions connecting
to random RAC nodes. This is acceptable but not optimal and
can result in varying run results due to slight variations
in sessions per RAC node from one test to another.
As of SLOB 2.5.3, runit.sh uses the SQLNET_SERVICE_BASE and
SQLNET_SERVICE_MAX slob.conf parameters to sequentially
affinity SLOB threads (Oracle sessions) to numbered service
names. For example:
SQLNET_SERVICE_BASE=rac
SQLNET_SERVICE_MAX=8
With these assigned values, runit.sh will connect the first
SLOB thread to rac1 then rac2 and so forth until rac8 after
which the connection rotor loops back to rac1. This manner
of RAC affinity testing requires either a single SLOB
schema (see SLOB Single Schema Model in the documentation)
or 8 SLOB schemas to align properly with the value assigned
to SQLNET_SERVICE_MAX. The following command will connect
32 SLOB threads (Oracle sessions) to each RAC node in an
8-node RAC configuration given the tnsnames.ora example
file in this directory:
$ sh ./runit.sh -s 8 -t 32
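To make the README's scheme concrete, here is a purely illustrative sketch of the two slob.conf parameters together with a tnsnames.ora fragment for a 2-node cluster. None of this is copied from the SLOB kit; the host and service names are assumptions, so adapt them to your environment.

# slob.conf (excerpt)
SQLNET_SERVICE_BASE=rac
SQLNET_SERVICE_MAX=2

# tnsnames.ora -- one alias per RAC node; hostnames and service names are hypothetical
rac1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = racnode1-vip)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = slob_node1_svc)))

rac2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = racnode2-vip)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = slob_node2_svc)))

With entries like these, runit.sh alternates new SLOB threads between the rac1 and rac2 aliases, giving each schema a stable home node from one test run to the next.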

Find sku_no values from the table which does not have any records for ven_type='P'

Tom Kyte - Thu, 2021-01-21 09:26
Hi Tom and Team, I have a very simple table with the following columns: site_no, sku_no, vendor_id, ven_type. A sku_no can have multiple vendor_id values associated with it, and a vendor_id can have a ven_type of 'P' (Primary) or 'S' (Secondary). I want to find all those sku_no values from the table that do not have any ven_type='P' records. Kindly help me write a query for this. Please let me know if the explanation is not clear or if you want me to provide any other details. Thank you.
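For readers who want a starting point before the official answer, a minimal sketch could use NOT EXISTS or a conditional aggregate. The table name sku_vendor is an assumption; the question does not give one.

select distinct sv.sku_no
from   sku_vendor sv
where  not exists (
         select null
         from   sku_vendor p
         where  p.sku_no   = sv.sku_no
         and    p.ven_type = 'P'
       );

-- equivalent aggregate form: keep sku_no values with zero 'P' rows
select   sku_no
from     sku_vendor
group by sku_no
having   count(case when ven_type = 'P' then 1 end) = 0;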
Categories: DBA Blogs

Troubleshooting heavy hash joins

Tom Kyte - Thu, 2021-01-21 09:26
Hello Chris, Hello Connor. I know that hash join performance might suffer a lot if it has to store the hash table on disk, but I have no idea how to check whether that is happening and whether that is the reason a query is slow. Admittedly I do not know much about memory usage in Oracle and how to track it. Could you please point me toward some articles on the topic, or data dictionary views, that would help? Unfortunately I have neither DBA access nor access to trace files.
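A hedged pointer while waiting for the full answer: if you can query the V$ performance views (which may itself require a grant), v$sql_workarea records whether each hash join ran in optimal, one-pass or multi-pass mode, and dbms_xplan.display_cursor with 'ALLSTATS LAST' shows memory and temp usage per plan line. A sketch, assuming you know the SQL_ID:

select sql_id, operation_type, last_execution,
       last_memory_used, last_tempseg_size,
       optimal_executions, onepass_executions, multipasses_executions
from   v$sql_workarea
where  sql_id = '&sql_id'
and    operation_type like 'HASH%';

-- the OMem / 1Mem / Used-Tmp columns of the actual plan tell a similar story
select * from table(dbms_xplan.display_cursor('&sql_id', null, 'ALLSTATS LAST'));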
Categories: DBA Blogs

Spooling data to .csv file via SQL Plus

Tom Kyte - Thu, 2021-01-21 09:26
It is probably apparent from my job title that my role is far from a DBA. I am, however, a frequent user of Oracle SQL Developer. I work in mass appraisal and use SQL Developer to select, update, insert, delete, etc., on a regular basis. I would like to bring some automation to some of the select statements that I run most frequently and spool the results to a .csv file. An example of this is a select statement for identifying recently sold properties that need to be reviewed.

The following command [using the Windows Command Prompt] has been stored as a scheduled task [using Windows Task Scheduler]:

G:\>sqlplus username/password@connection @G:\SALES_VALIDATION\bat_files\weekly_salesval_rev.sql

Weekly_salesval_rev.sql is the script containing the spool commands and select statement. I have included a link so you can view the script. What command [or commands] can I incorporate so that the data will be formatted appropriately [and include column headings] for review in Excel? While there should be approximately 21 columns of data, the results are currently displaying in a scattered fashion within the first 3 columns of the .csv file.

If you need any other detail, please let me know. Also, if you would suggest or recommend other approaches to automating frequently run SQL select statements, let me know and I would be glad to look into those alternatives. Thank you for your time and help! Berkley Rose

Product versions:
SQL*Plus Release 12.2.0.1.0
Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0
Oracle SQL Developer 4.0.3.16
Windows Version 10.0.17763.1577
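Not the official answer, but for orientation: SQL*Plus 12.2 (the version listed above) has a built-in CSV markup mode that emits properly delimited rows with a heading line, which is usually enough for Excel. A sketch of what the spooling script might look like; the spool path and the query are placeholders, not the poster's real ones:

rem weekly_salesval_rev.sql -- illustrative sketch only
set markup csv on            -- SQL*Plus 12.2+: comma-separated output with column headings
set feedback off
set trimspool on

spool G:\SALES_VALIDATION\output\weekly_salesval_rev.csv

select owner, object_name, object_type, created
from   all_objects
where  rownum <= 10;         -- stand-in for the real 21-column sales query

spool off
exit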
Categories: DBA Blogs

Datapump in Oracle ADB using SQL Developer Web

Hemant K Chitale - Thu, 2021-01-21 04:20

 If you have a small schema in the Oracle Cloud Autonomous Database, you can actually run DataPump from SQL Developer Web.  DATA_PUMP_DIR maps to a DBFS mount inside the Oracle Database.


Logged in to my Oracle ADB as "ADMIN"

I check if DATA_PUMP_DIR exists and I find that it is in DBFS:

Query
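(The screenshot isn't reproduced in the feed; the check is essentially a dictionary lookup along these lines:)

select directory_name, directory_path
from   dba_directories
where  directory_name = 'DATA_PUMP_DIR';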


I run a PL/SQL block to export the HEMANT schema using the DBMS_DATAPUMP API:

PLSQL Block


After I drop the two tables in the schema, I run the import using the DBMS_DATAPUMP API and then refresh the list of tables owned by "HEMANT":

PL/SQL Block


This method is a quick way of using the Autonomous Database itself when you don't have an external Object Store to hold the DataPump file. So I'd use this only for very small schemas, as the dump file itself is loaded into DBFS.
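As a rough size check on what the export has left behind in DBFS, you could list the directory contents. In Autonomous Database the DBMS_CLOUD.LIST_FILES table function can do that; a sketch (run as ADMIN or with the appropriate grants):

select object_name, bytes
from   dbms_cloud.list_files('DATA_PUMP_DIR')
order by object_name;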


The PL/SQL code is:



REM  Based on Script posted by Dick Goulet, posted to oracle-l@freelists.org
REM With modifications by me.
REM Hemant K Chitale

REM Export schema "HEMANT"
declare
h1 NUMBER := 0;
h2 varchar2(1000);
ex boolean := TRUE;
fl number := 0;
schema_exp varchar2(1000) := 'in(''HEMANT'')';
f_name varchar2(50) := 'My_DataPump';
dp_mode varchar2(100) := 'export';
blksz number := 0;
SUCCESS_WITH_INFO exception;
begin
utl_file.fgetattr('DATA_PUMP_DIR', dp_mode||'.log', ex, fl, blksz);
if(ex = TRUE) then utl_file.fremove('DATA_PUMP_DIR',dp_mode||'.log');
end if;
h1 := dbms_datapump.open (operation => 'EXPORT', job_mode => 'SCHEMA', job_name => upper(dp_mode)||'_EXP', version => 'COMPATIBLE');
dbms_datapump.set_parallel(handle => h1, degree => 2);
dbms_datapump.add_file(handle => h1, filename => f_name||'.dmp%U', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
dbms_datapump.add_file(handle => h1, filename => f_name||'.log', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
dbms_datapump.set_parameter(handle => h1, name => 'KEEP_MASTER', value => 0);
dbms_datapump.set_parameter(handle => h1, name => 'INCLUDE_METADATA', value => 1);
dbms_datapump.metadata_filter(handle=>h1, name=>'SCHEMA_EXPR',value=>schema_exp);
dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0);
dbms_datapump.wait_for_job(handle=>h1, job_state=>h2);
exception
when SUCCESS_WITH_INFO THEN NULL;
when others then
h2 := sqlerrm;
if(h1 != 0) then dbms_datapump.stop_job(h1,1,0,0);
end if;
dbms_output.put_line(h2);
end;





REM Import schema "HEMANT"
declare
h1 NUMBER := 0;
h2 varchar2(1000);
ex boolean := TRUE;
fl number := 0;
schema_exp varchar2(1000) := 'in(''HEMANT'')';
f_name varchar2(50) := 'My_DataPump';
dp_mode varchar2(100) := 'import';
blksz number := 0;
SUCCESS_WITH_INFO exception;
begin
utl_file.fgetattr('DATA_PUMP_DIR', dp_mode||'.log', ex, fl, blksz);
if(ex = TRUE) then utl_file.fremove('DATA_PUMP_DIR',dp_mode||'.log');
end if;
h1 := dbms_datapump.open (operation => 'IMPORT', job_mode => 'SCHEMA', job_name => upper(dp_mode)||'_EXP');
dbms_datapump.set_parallel(handle => h1, degree => 2);
dbms_datapump.add_file(handle => h1, filename => f_name||'.dmp%U', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
dbms_datapump.add_file(handle => h1, filename => f_name||'.log', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
dbms_datapump.set_parameter(handle => h1, name => 'KEEP_MASTER', value => 0);
dbms_datapump.set_parameter(handle => h1, name => 'TABLE_EXISTS_ACTION', value=>'SKIP');
dbms_datapump.metadata_filter(handle=>h1, name=>'SCHEMA_EXPR',value=>schema_exp);
dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0);
dbms_datapump.wait_for_job(handle=>h1, job_state=>h2);
exception
when SUCCESS_WITH_INFO THEN NULL;
when others then
h2 := sqlerrm;
if(h1 != 0) then dbms_datapump.stop_job(h1,1,0,0);
end if;
dbms_output.put_line(h2);
end;



Again, I emphasise that this is only for small dumps.  


Categories: DBA Blogs

Oracle 19c Automatic Indexing: Non-Equality Predicates Part II (Let’s Spend The Night Together)

Richard Foote - Wed, 2021-01-20 21:42
In my previous post in this series, I discussed how Automatic Indexing currently does not consider Non-Equality predicates. Automatic Indexing will index columns based only on Equality predicates. So how does Oracle handle the scenario when a SQL statement has a mixture of both Equality and Non-Equality predicates? I’ll begin by creating two very similar tables, […]
Categories: DBA Blogs

MicroK8s: Kubernetes on Raspberry Pi - get nodes = NotReady

Dietrich Schroff - Wed, 2021-01-20 15:44

On my little Kubernetes cluster with MicroK8s I got this problem:

kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
zigbee   NotReady   <none>   59d   v1.19.5-34+b1af8fc278d3ef
ubuntu   Ready      <none>   59d   v1.19.6-34+e6d0076d2a0033

The solution was:

kubectl describe node zigbee

and in the output I found:

Events:
  Type     Reason                   Age                From        Message
  ----     ------                   ----               ----        -------
  Normal   Starting                 18m                kube-proxy  Starting kube-proxy.
  Normal   Starting                 14m                kubelet     Starting kubelet.
  Warning  SystemOOM                14m                kubelet     System OOM encountered, victim process: influx, pid: 3256628
  Warning  InvalidDiskCapacity      14m                kubelet     invalid capacity 0 on image filesystem
  Normal   NodeHasNoDiskPressure    14m (x2 over 14m)  kubelet     Node zigbee status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     14m (x2 over 14m)  kubelet     Node zigbee status is now: NodeHasSufficientPID
  Normal   NodeHasSufficientMemory  14m (x2 over 14m)  kubelet     Node zigbee status is now: NodeHasSufficientMemory
Hmmm - so running additional databases and processes outside of Kubernetes is not such a good idea.

But as a quick fix: I ejected the SD card, did a resize and added swap on my laptop, and then put the SD card back into the Raspberry Pi...

Need help working with PL/SQL FOR LOOP

Tom Kyte - Wed, 2021-01-20 15:06
We have batch_table with column batch_id having values 1, 2, 3, 4, 5, 6, 7, 8, 9, 10. Running sample code for demonstration:

PACKAGE XXX
PROCEDURE YYY
IS
   variables...
BEGIN
   FOR i IN (SELECT batch_id FROM batch_table) LOOP
      -- Function call
      IF get_running_threads('XXEXTRACT_DATA') <= l_max_allowed_threads THEN
         l_request_id := fnd_request.submit_request (
                            application => g_application_name,
                            program     => 'XXEXTRACT_DATA',
                            description => 'XXEXTRACT_DATA',
                            start_time  => SYSDATE,
                            sub_request => FALSE,
                            argument1   => i.batch_id);
      ELSE
         BEGIN
            -- WAIT LOGIC
            -- Wait till previous concurrent program completes successfully.
            -- Call API FND_CONCURRENT.WAIT_FOR_REQUEST
            DBMS_LOCK.sleep(10);
         END;
      END IF;
   END LOOP;
END YYY;
END XXX;

My ask is: batch_id 1, 2, 3 are successfully processed by calling the concurrent program XXEXTRACT_DATA. When the FOR loop starts processing batch_id 4, control comes to the ELSE part. In the next iteration, the loop starts with batch_id 5, so batch_id 4 is never processed. How can I process batch_id 4? Is this possible to achieve without reopening the cursor more than once? Pls respond.
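A hedged sketch of one way to avoid skipping a batch without reopening the cursor: wait for a free slot before submitting, within the same loop iteration, instead of treating the wait as an alternative to the submit. The names are the ones used in the question; the polling interval and condition are illustrative only.

FOR i IN (SELECT batch_id FROM batch_table) LOOP
   -- Block here, BEFORE submitting, so the current batch_id is never skipped
   WHILE get_running_threads('XXEXTRACT_DATA') > l_max_allowed_threads LOOP
      DBMS_LOCK.sleep(10);  -- or wait on a prior request with FND_CONCURRENT.WAIT_FOR_REQUEST
   END LOOP;

   l_request_id := fnd_request.submit_request (
                      application => g_application_name,
                      program     => 'XXEXTRACT_DATA',
                      description => 'XXEXTRACT_DATA',
                      start_time  => SYSDATE,
                      sub_request => FALSE,
                      argument1   => i.batch_id);
END LOOP;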
Categories: DBA Blogs

Historical question about the definition of the constraining table in the Oracle documentation

Tom Kyte - Wed, 2021-01-20 15:06
Hello, AskTom! Excuse me for the "idle" question, but none of my colleagues could answer this. Many, many years ago, back in Oracle 7, there was a "constraining error" ORA-04094 "Table is constraining, trigger may not modify it". This restriction was relaxed since Oracle 8i. In the Oracle (8i and later) documentation you can read - "constraining error prevented a row trigger from modifying a table when the parent statement implicitly read that table to enforce a foreign key constraint" - and it's absolutely clear to me. But the Oracle 7 documentation says - "A constraining table is a table that a triggering statement might need to read either directly, for a SQL statement, or indirectly, for a declarative referential integrity constraint". I never understood the first part of the sentence - "read directly, for a SQL statement": how can a reading of the table "constrain" the table in the triggering statement if the SQL operator reads the data in the table in the state at the moment in time BEFORE the row trigger can do any changes in the table to be read? My opinion is that it was a mistake in the Oracle documentation and a correct definition of the constraining table would be "constraining table is a table that a triggering statement might need to read indirectly, for a declarative referential integrity constraint", without the first part.
Categories: DBA Blogs

Table with LONG data type not being freed

Tom Kyte - Wed, 2021-01-20 15:06
Hi Tom, We are dealing with a system which has some legacy tables with a LONG column. One of these tables contains critical data (email bodies and attachments, chat transcripts, etc.) in the LONG column. Unfortunately we cannot remove or even change this table in any way; the system is heavily dependent on it. Two years ago we introduced the following solution:

- New table with the same columns, but CLOB instead of LONG
- Minutely job, which copies newly inserted records from the original table into the new table, and converts LONG to CLOB
- Daily job which deletes already copied records from the old table

When this solution was introduced, we did a manual cleanup of the original table:

- Stop the system which uses the table
- Create a new table with the exact same definition (LONG column)
- Copy over all records which had to be kept
- Rename the original table with an "_OLD" postfix and the new table to the original table name
- Drop the old table

However, after 2 years, the customer where this database is located wanted to drop old partitions (note: the problematic table is NOT partitioned), and while checking disk space, noticed that the original table is still growing to be very large (this was the original problem also 2 years ago). Our expectation from the above solution was that while Oracle will not free up the disk space after the DELETE because of the LONG column, it will still reuse existing blocks, so that the table will not grow after a certain point. However this is not what is happening; the table keeps growing, day by day. Currently it has about 13 000 records, but over 19 million blocks. About 18 000 records are added during the day, and each of those are deleted at night.

I tried to reproduce the problem using the below SQL, but I'm not sure it's fully correct; it's producing a different result, which is what we were expecting: after we delete and insert again, the used blocks count is not increasing.

-- Helper proc
CREATE OR REPLACE PROCEDURE show_space (
    PI_TABLE_NAME IN VARCHAR2
) AS
    l_unformatted_blocks NUMBER;
    l_unformatted_bytes  NUMBER;
    l_fs1_blocks         NUMBER;
    l_fs1_bytes          NUMBER;
    l_fs2_blocks         NUMBER;
    l_fs2_bytes          NUMBER;
    l_fs3_blocks         NUMBER;
    l_fs3_bytes          NUMBER;
    l_fs4_blocks         NUMBER;
    l_fs4_bytes          NUMBER;
    l_full_blocks        NUMBER;
    l_full_bytes         NUMBER;

    PROCEDURE p (
        p_label IN VARCHAR2,
        p_num   IN NUMBER
    ) IS
    BEGIN
        dbms_output.put_line(rpad(p_label, 40, '.') || p_num);
    END;
BEGIN
    dbms_space.space_usage(
        'CCQ', PI_TABLE_NAME, 'TABLE',
        l_unformatted_blocks, l_unformatted_bytes,
        l_fs1_blocks, l_fs1_bytes,
        l_fs2_blocks, l_fs2_bytes,
        l_fs3_blocks, l_fs3_bytes,
        l_fs4_blocks, l_fs4_bytes,
        l_full_blocks, l_full_bytes);
    p('Unformatted Blocks ', l_unformatted_blocks);
    p('FS1 Blocks (0-25)  ', l_fs1_blocks);
    p('FS2 Blocks (25-50) ', l_fs2_blocks);
    p('FS3 Blocks (50-75) ', l_fs3_blocks);
    p('FS4 Blocks (75-100)', l_fs4_blocks);
    p('Full Blocks        ', l_full_blocks);
END;
/

-- Dummy table, exact same definition as original table
CREATE TABLE test_long (
    qwkey     NUMBER(38, 0) NOT NULL,
    tablename VARCHAR2(90 BYTE),
    fieldname VARCHAR2(90 BYTE),
    textkey   NUMBER(38, 0),
    text      LONG
);

ALTER TABLE test_long ADD CONSTRAINT pk_test_long PRIMARY KEY (qwkey) USING INDEX;

-- Original stats
BEGIN
    dbms_stats.gather_table_stats(ownname => '"CCQ"', tabname => '"TEST_LONG"', estimate_percent => 1);
END;

BEGIN
    show_space('TEST_LONG');
END;

/* Output: Unformatted...
Categories: DBA Blogs

Hint Errors

Jonathan Lewis - Wed, 2021-01-20 05:06

This is a list of possible explanations of errors that you might see in the Hint Report section of an execution plan. It’s just a list of the strings extracted from a chunk of the 19.3 executable around the area where I found something I knew could be reported, so it may have some errors and omissions – but there are plenty of things there that might give you some idea why (in earlier versions of Oracle) you might have seen Oracle “ignoring” a hint:

internally generated hint is being cleared
hint conflicts with another in sibling query block
hint overridden by another in parent query block
conflicting optimizer mode hints
duplicate hint
all join methods are excluded by hints
index specified in the hint doesn't exist
index specified in hint cannot be parallelized
incorrect number of indexes for AND_EQUAL
partition view set up
FULL hint is same as INDEX_FFS for IOT
access path is not supported for IOT
hint on view cannot be pushed into view
hint is discarded during view merging
duplicate tables in multi-table hint
conditions failed for array vector read
same QB_NAME hints for different query blocks
rejected by IGNORE_OPTIM_EMBEDDED_HINTS
specified number must be positive integer
specified number must be positive number
specified number must be >= 0 and <= 1
hint is only valid for serial SQL
hint is only valid for slave SQL
hint is only valid for dyn. samp. query
hint is only valid for update join ix qry
opt_estimate() without value list
opt_estimate() with conflicting values spec
hint overridden by NO_QUERY_TRANSFORMATION
hinted query block name is too long
hinted bitmap tree wasn't fully resolved
bitmap tree specified was invalid
Result cache feature is not enabled
Hint is valid only for select queries
Hint is not valid for this query block
Hint cannot be honored
Pred reorder hint has semantic error
WITH_PLSQL used in a nested query
ORDER_SUBQ with less than two subqueries
conflicting OPT_PARAM hints
conflicting optimizer_feature_enable hints
because of _optimizer_ignore_parallel_hints
conflicting JSON_LENGTH hints

CBO Example

Jonathan Lewis - Wed, 2021-01-20 04:01

A little case study based on an example just in on the Oracle-L list server. This was supplied with a complete, working, test case that was small enough to understand and explain very quickly.

The user created a table, and used calls to dbms_stats to fake some statistics into place. Here, with a little cosmetic editing, is the code they supplied.

set serveroutput off
set linesize 180
set pagesize 60
set trimspool on

drop table t1 purge;

create table t1 (id number(20), v varchar2(20 char));
create unique index pk_id on t1(id);
alter table t1 add (constraint pk_id primary key (id) using index pk_id enable validate);
exec dbms_stats.gather_table_stats(user, 't1');
 
declare
        srec               dbms_stats.statrec;
        numvals            dbms_stats.numarray;
        charvals           dbms_stats.chararray;
begin
  
        dbms_stats.set_table_stats(
                ownname => user, tabname => 't1', numrows => 45262481, numblks => 1938304, avgrlen => 206
        );

        numvals := dbms_stats.numarray (1, 45262481);
        srec.epc:=2;
        dbms_stats.prepare_column_values (srec, numvals);
        dbms_stats.set_column_stats (
                ownname => user, tabname => 't1', colname => 'id', 
                distcnt => 45262481, density => 1/45262481,
                nullcnt => 0, srec => srec, avgclen => 6
        );

        charvals := dbms_stats.chararray ('', '');
        srec.epc:=2;
        dbms_stats.prepare_column_values (srec, charvals);
        dbms_stats.set_column_stats(
                ownname => user, tabname => 't1', colname => 'v', 
                distcnt => 0,  density => 0, 
                nullcnt => 45262481, srec => srec, avgclen => 0
        );
        dbms_stats.set_index_stats( 
                ownname => user, indname =>'pk_id', numrows => 45607914, numlblks => 101513,
                numdist => 45607914, avglblk => 1, avgdblk => 1, clstfct => 33678879, indlevel => 2
        );
end;
/
 
variable n1 nvarchar2(32)
variable n2 number

begin
        :n1 := 'D';
        :n2 := 50;
end;
/
 

select 
        /*+ gather_plan_statistics */ 
        * 
from    ( 
        select  a.id col0,a.id col1
        from    t1 a
        where   a.v = :n1 
        and     a.id > 1
        order by 
                a.id 
        ) 
where 
        rownum <= :n2 
;
 
select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost peeked_binds '));

From Oracle’s perspective the table has 45M rows, with a unique sequential key starting at 1 in the id column. The query looks like a pagination query, asking for 50 rows, ordered by id. But the in-line view asks for rows where id > 1 (which, initially, means all of them) and applies a filter on the v column.

Of course we know that v is always null, so in theory the predicate a.v = :n1 is always going to return false (or null, but not true) – so the query will never return any data. However, if you read the code carefully you’ll notice that the bind variable n1 has been declared as an nvarchar2(), not a varchar2().

Here’s the execution plan I got on an instance running 19.3 – and it’s very similar to the plan supplied by the OP:

----------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |       |      1 |        |  3747 (100)|      0 |00:00:00.01 |
|*  1 |  COUNT STOPKEY                |       |      1 |        |            |      0 |00:00:00.01 |
|   2 |   VIEW                        |       |      1 |     50 |  3747   (1)|      0 |00:00:00.01 |
|*  3 |    TABLE ACCESS BY INDEX ROWID| T1    |      1 |    452K|  3747   (1)|      0 |00:00:00.01 |
|*  4 |     INDEX RANGE SCAN          | PK_ID |      0 |   5000 |    14   (0)|      0 |00:00:00.01 |
----------------------------------------------------------------------------------------------------

Peeked Binds (identified by position):
--------------------------------------
   2 - :2 (NUMBER): 50

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(ROWNUM<=:N2)
   3 - filter(SYS_OP_C2C("A"."V")=:N1)
   4 - access("A"."ID">1)

The question we were asked was this: “Why does the optimizer estimate that it will return 5,000 entries from the index range scan at operation 4?”

The answer is the result of combining two observations.

First: In the Predicate Information you can see that Oracle has applied a character-set conversion to the original predicate “a.v = :n1” to produce filter(SYS_OP_C2C("A"."V")=:N1). The selectivity of “function of something = bind value” is one of those cases where Oracle uses one of its guesses, in this case 1%. Note that the E-rows estimate for operation 3 (table access) is 452K, which is 1% of the 45M rows in the table.

In real life if you had optimizer_dynamic_sampling set at level 3, or had added the hint /*+ dynamic_sampling(3) */ to the query, Oracle would sample some rows to avoid the need for guessing at this point.

Secondly: the optimizer has peeked the bind variable for the rownum predicate, so it is optimizing for 50 rows (basically doing the arithmetic of first_rows(50) optimisation). The optimizer “knows” that the filter predicate at the table will eliminate all but 1% of the rows acquired, and it “knows” that it has to do enough work to find 50 rows in total – so it can calculate that (statistically speaking) it has to walk through 5,000 (= 50 * 100) index entries to visit enough rows in the table to end up with 50 rows.

Next Steps (left as exercise)

Once you’ve got the answer to the question “Why is this number 5,000?”, you might go back and point out that the estimate for the table access was 95 times larger than the estimate for the number of rowids selected from the index and wonder how that could be possible. (Answer: that’s just one of the little defects in the code for first_rows(n).)

You might also wonder what would have happened in this model if the bind variable n1 had been declared as a varchar2() rather than an nvarchar2() – and that might have taken you on to ask yet another question about what the optimizer was playing at.

Once you’ve modelled something that is a little puzzle there’s always scope for pushing the model a little further and learning a little bit more before you file the model away for testing on the next version of Oracle.

Question about sequence with lower nextval than column

Tom Kyte - Tue, 2021-01-19 20:46
Hi, I have a sequence named s1 that starts with 1, increments by 1, and ends with 999. I am inserting the sequence values into a table named 'e'. Table e contains an eno (PK) column: insert into e values(s1.nextval); I inserted 9 rows, so the sequence's current value is 9, but the value 10 has already been inserted directly in the backend. If I now run insert into e values(s1.nextval); I get a PK violation error. In this case I want to insert 11 in place of 10.
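Ahead of the official answer, a minimal sketch of one pattern that matches the requirement: catch the duplicate-key error and simply move on to the next sequence value. The column name eno is taken from the question; everything else is illustrative.

declare
    l_inserted boolean := false;
begin
    while not l_inserted loop
        begin
            insert into e (eno) values (s1.nextval);
            l_inserted := true;
        exception
            when dup_val_on_index then
                null;  -- that value already exists in the table; loop and try the next one
        end;
    end loop;
end;
/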
Categories: DBA Blogs

Choosing a view based on result of view

Tom Kyte - Tue, 2021-01-19 20:46
Hi all, I am having a struggle with a view. The outcome of the view can be 1 row of 3 different views with the same fields. Can someone point me in the right direction how to think please? :) Very basic, this is my problem:

/* MAINVIEW */
SELECT * FROM
(SELECT * FROM VIEW1)   -- returns n rows
/* IF VIEW1 IS EMPTY THEN SELECT ALL FROM VIEW2 */
(SELECT * FROM VIEW2)   -- returns n rows
/* IF VIEW2 IS EMPTY THEN SELECT VIEW3 (=HAS ALWAYS DATA) */
(SELECT * FROM VIEW3)   -- returns 1 row

I don't need full code, just a hint in the right direction ;) . Thanks in advance.
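One direction to think in (a sketch only, assuming the three views really do expose the same columns, as stated): stack the views with UNION ALL and let each lower-priority branch return rows only when the higher-priority views are empty.

SELECT * FROM view1
UNION ALL
SELECT * FROM view2
WHERE  NOT EXISTS (SELECT NULL FROM view1)
UNION ALL
SELECT * FROM view3
WHERE  NOT EXISTS (SELECT NULL FROM view1)
AND    NOT EXISTS (SELECT NULL FROM view2);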
Categories: DBA Blogs

The JSON query results does not return the full path

Tom Kyte - Tue, 2021-01-19 20:46
Hi, I'm trying to evaluate the JSON query feature. E.g. I have the following JSON string in the table:

{
  "WO": [
    {
      "id": "1",
      "name": "WO1",
      "action": "add",
      "items": [
        { "id": "1", "author": "Tom Ask" },
        { "id": "2", "author": "Lee Frik" }
      ]
    },
    {
      "id": "3",
      "name": "WO1",
      "action": "add",
      "items": [
        { "id": "1", "author": "John A" },
        { "id": "2", "author": "Jean Pontus" }
      ]
    }
  ]
}

select d.wo_data.WO.items.id from workorder_json d;

I get the following results:

["Tom Ask","Lee Frik","John A","Jean Pontus"]

What I hope for is a return with the full path, like the following:

{
  "WO": [
    "items": [
      { "author": "Tom Ask" },
      { "author": "Lee Frik" }
    ],
    "items": [
      { "author": "John A" },
      { "author": "Jean Pontus" }
    ]
  ]
}

Is there any way to return it like the above?
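The simple dot notation deliberately returns a flattened array of values, so to keep the parent/child relationship you generally have to project it yourself. A sketch using JSON_TABLE with a NESTED PATH clause; it assumes the JSON column is called wo_data, as in the query above:

select jt.wo_id, jt.item_id, jt.author
from   workorder_json d,
       json_table(d.wo_data, '$.WO[*]'
           columns (
               wo_id  varchar2(10) path '$.id',
               nested path '$.items[*]'
               columns (
                   item_id varchar2(10) path '$.id',
                   author  varchar2(50) path '$.author'
               )
           )
       ) jt;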
Categories: DBA Blogs

Partner Webcast – Oracle Cloud Observability & Management Platform Logging Services

The Oracle Cloud Observability and Management Platform brings together a comprehensive set of management, diagnostic, and analytics services that help customers eliminate the complexity, risk, and...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Announcement: Both Of My Oracle Webinars Scheduled For February 2021 !!

Richard Foote - Mon, 2021-01-18 19:56
I’m pleased to announce that both of my highly acclaimed webinars scheduled for February are now confirmed. These webinars are a must for any Oracle DBA or Developer interested in designing, developing or maintaining high performance, highly scalable Oracle-based applications or databases. However only a few places are currently available on each webinar with numbers […]
Categories: DBA Blogs

Supplemental Defect

Jonathan Lewis - Mon, 2021-01-18 11:22

Here’s an anomaly that appeared in a question on the ODC recently about tweaking the output of dbms_metadata.get_ddl(). As the title suggests, though, this posting isn’t about the dbms_metadata package; it’s about supplemental logging and something that should not be possible and may have unfortunate side effects.

We start with a little script that creates a table, inserts some data, adds a couple of constraints, and then introduces some supplemental logging requirements. As the script header indicates I’ve only tested this on 19.3.0.0:

rem
rem     Script:         supplemental_defect.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2021
rem
rem     Last tested 
rem             19.3.0.0
rem

create table t1 (
        n1      number,
        n2      number,
        n3      number,
        n4      number
);

insert into t1 (n1, n2, n3, n4) values (1,2,3,4);
commit;

alter table t1 add constraint t1_pk primary key(n1, n2)
/

alter table t1 add constraint t1_uk unique(n3) using index (
        create index t1_uk on t1(n3, n4)
)
/

alter table t1 add supplemental log data (primary key, unique) columns
/

alter table t1 add supplemental log group t1_g1 (n1, n2) always
/

alter table t1 add supplemental log group t1_g2 (n1, n2) always
/

There’s nothing particularly special or complex about this code, and every statement runs successfully. You might notice that I’ve created two identical supplemental log groups, of course, and that’s clearly a mistake and I’d say that Oracle should have raised an error when I tried to create group t1_g2.

In fact my script is slightly worse than you might think at first sight because (n1, n2) is the primary key of the table, and I’ve added supplemental logging on the primary key already, so even supplemental log group t1_g1 really ought to raise an error as well.
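If you want to see what the data dictionary has actually recorded at this point, the standard log-group views show it; a quick sketch (run as the owner of t1):

select log_group_name, log_group_type, always, generated
from   user_log_groups
where  table_name = 'T1'
order by log_group_name;

select log_group_name, column_name, position
from   user_log_group_columns
where  table_name = 'T1'
order by log_group_name, position;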

Side note: you’ll notice that I’ve got a single-column unique constraint protected by a non-unique multi-column index, and part of my supplemental log data specifies unique columns. If you check the syntax this option can be expressed as “unique” or “unique indexes” – which leaves me wondering whether Oracle would treat this as a command relating to n3 and n4, or just to n3.

So far all I’ve done is create a table in a way that’s left me with a couple of questions – let’s see what we get when we use dbms_metadata.get_ddl() to generate code to recreate the table:

begin
        dbms_metadata.set_transform_param(dbms_metadata.session_transform,'PRETTY',true);
        dbms_metadata.set_transform_param(dbms_metadata.session_transform,'SQLTERMINATOR',true);
        dbms_metadata.set_transform_param(dbms_metadata.session_transform,'SEGMENT_ATTRIBUTES',false);
        dbms_metadata.set_transform_param(dbms_metadata.session_transform,'STORAGE', false);
        dbms_metadata.set_transform_param(dbms_metadata.session_transform,'TABLESPACE',false);
        dbms_metadata.set_transform_param(dbms_metadata.session_transform,'CONSTRAINTS',true);
end;
/

set long 20000
set heading off

select dbms_metadata.get_ddl('TABLE','T1') from dual;

================================================================

  CREATE TABLE "TEST_USER"."T1"
   (	"N1" NUMBER,
	"N2" NUMBER,
	"N3" NUMBER,
	"N4" NUMBER,
	 CONSTRAINT "T1_PK" PRIMARY KEY ("N1", "N2")
  USING INDEX  ENABLE,
	 CONSTRAINT "T1_UK" UNIQUE ("N3")
  USING INDEX  ENABLE,
	 SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS,
	 SUPPLEMENTAL LOG DATA (UNIQUE INDEX) COLUMNS,
	 SUPPLEMENTAL LOG GROUP "T1_G1" ("N1", "N2") ALWAYS,
	 SUPPLEMENTAL LOG GROUP "T1_G2" ("N1", "N2") ALWAYS
   ) ;

The “create table” statement includes 4 supplemental log clauses – notice how my original “two-part” clause for the primary key and unique constraints has been split into two – with the keyword index added to the latter.

You might note that if you try to execute this statement to recreate the table you’re going to run into an immediate problem – Oracle will create a unique single-column index to protect the unique constraint – it doesn’t know that the constraint should be protected by a two-column non-unique index.

But there’s another problem that will protect you from the indexing issue. Here’s what I get when I try to execute the statement:

	 SUPPLEMENTAL LOG GROUP "T1_G1" ("N1", "N2") ALWAYS,
	                                *
ERROR at line 12:
ORA-02261: such unique or primary key already exists in the table

The funniest thing about this error comes when you look at the error message file ($ORACLE_HOME/rdbms/mesg/oraus.msg in my case), where we can find the “Cause” and “Action” that Oracle supplies for the error:

02261, 00000, "such unique or primary key already exists in the table"
// *Cause: Self-evident.
// *Action: Remove the extra key.

It’s not too difficult to guess why the error has appeared but “Self-evident” seems a little optimistic. In fact your first guess about the error may be wrong. Obviously I could simply delete the lines that create the t1_g1 and t1_g2 logs since they appear to be redundant copies of the (primary key) supplemental log – but what if I just delete the line that creates the (primary key) supplemental log? Oracle still raises the ORA-02261 error. It’s only when I delete the declaration of the primary key (and I can leave the declaration of the (primary key) supplemental log in place) that the error stops appearing and I can recreate the table … even if I’ve left the declarations of the two supplemental log groups on (n1, n2) in place.

On the plus side – although the export/import utilities expdp and impdp make use of the dbms_metadata package they don’t run into the same problem. After I’d exported, dropped and imported the t1 table with the sqlfile=xxx.sql option this is how impdp reported the code it had run to recreate the table (edited to remove the space management bits):

CREATE TABLE "TEST_USER"."T1" 
   (    "N1" NUMBER, 
        "N2" NUMBER, 
        "N3" NUMBER, 
        "N4" NUMBER
   ) 
;

ALTER TABLE "TEST_USER"."T1" ADD CONSTRAINT "T1_PK" PRIMARY KEY ("N1", "N2")
;
ALTER TABLE "TEST_USER"."T1" ADD CONSTRAINT "T1_UK" UNIQUE ("N3")
;
ALTER TABLE "TEST_USER"."T1" ADD SUPPLEMENTAL LOG DATA (UNIQUE INDEX) COLUMNS;
ALTER TABLE "TEST_USER"."T1" ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER TABLE "TEST_USER"."T1" ADD SUPPLEMENTAL LOG GROUP "T1_G1" ("N1", "N2") ALWAYS;
ALTER TABLE "TEST_USER"."T1" ADD SUPPLEMENTAL LOG GROUP "T1_G2" ("N1", "N2") ALWAYS;

The underlying code for expdp/impdp generates constraints separately from the table and then adds the constraints to the tables – and it views supplemental logs as a variant on constraints (you can stop the supplemental log clauses appearing by using the dbms_metadata.transform_param() procedure to disable the generation of constraints) and adds them later.

You might notice in passing that the index created by impdp to support the unique constraint is NOT the index originally specified. This is a very old problem – though the nature of the problem has varied with time – so watch out if you move data from a production system to a test system using export/import.

Summary

The internal code to handle supplemental logging allows you to create redundant supplemental log groups that will cause errors if you use dbms_metadata.get_ddl() in the most straightforward way to generate code to recreate the table.

The error message that appears if you haven’t allowed for the anomaly is a little counter-intuitive.

audit once per session (unified auditing)

Tom Kyte - Mon, 2021-01-18 08:26
Hi, is it possible to audit (unified auditing) access to tables only once per session? I don't need to catch all selects from audited tables in a session; I just want to know whether a table was queried at least once. Currently it generates a huge amount of audit data, of which only a fraction is needed. Thanks, Julius
Categories: DBA Blogs
