Tom Kyte

These are the most recently asked questions on Ask Tom
Updated: 15 hours 4 min ago

EDITIONS EBR

Thu, 2024-09-05 09:26
It took me a while to pick database development over database administration, and my concern is about editions. My question is simple: if I've decided to create an editioning view over a table with N triggers, what should I do about them? Some people say all triggers must be moved to the editioning view. Others say that triggers which maintain auditing columns (created by, date of creation, updated, etc.) or populate surrogate key columns (pre-insert) should remain on the table. Kind regards, Mauricio
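For illustration, a minimal sketch of what moving a row trigger onto the editioning view can look like; the table, view, and column names here are hypothetical, and this is one pattern often described for EBR rather than a definitive recommendation:
<code>
-- Hypothetical example: base table with a _T suffix, editioning view exposed
-- under the original name, and the auditing trigger recreated on the view.
CREATE TABLE emp_t (
  emp_id     NUMBER,
  emp_name   VARCHAR2(100),
  created_by VARCHAR2(128),
  created_on DATE
);

CREATE OR REPLACE EDITIONING VIEW emp AS
  SELECT emp_id, emp_name, created_by, created_on FROM emp_t;

-- DML triggers are allowed on editioning views; this one maintains audit columns
-- for inserts issued against the view (which is how the application accesses the data).
CREATE OR REPLACE TRIGGER emp_bi
  BEFORE INSERT ON emp
  FOR EACH ROW
BEGIN
  :NEW.created_by := USER;
  :NEW.created_on := SYSDATE;
END;
/
</code>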
Categories: DBA Blogs

Big PL/SQL block passes wrong parameters for function call

Thu, 2024-09-05 09:26
While executing a big PL/SQL block, some of the numeric parameters get lost/replaced with an earlier value. There is no error message; the PL/SQL block executes, but some of the function calls receive the wrong parameter values. See the LiveSQL link: it defines a function which compares its input parameters and calls it some 11 thousand times, and at the 10,932nd call it gets mismatched values. The problem seemingly occurs above a certain program size, but it apparently doesn't hit the Diana node limit: there is no ORA-00123, the program executes, but it stores the wrong data in the database. Is there a size limit I am bumping into, or could this be an Oracle bug (one that reproduces on multiple instances and different Oracle versions)?
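For readers without the LiveSQL link, a minimal sketch (hypothetical names) of the kind of test described: a function that verifies its two arguments match, invoked thousands of times with literal pairs from one large anonymous block:
<code>
-- Hypothetical reconstruction of the test shape; the real block contains ~11,000 calls.
CREATE OR REPLACE FUNCTION check_pair (p1 NUMBER, p2 NUMBER) RETURN NUMBER IS
BEGIN
  IF p1 <> p2 THEN
    RAISE_APPLICATION_ERROR(-20001, 'Mismatch: ' || p1 || ' vs ' || p2);
  END IF;
  RETURN p1;
END;
/
DECLARE
  x NUMBER;
BEGIN
  x := check_pair(1, 1);
  x := check_pair(2, 2);
  -- ... thousands of similar literal calls ...
  x := check_pair(10932, 10932);  -- the call reported to receive a stale value
END;
/
</code>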
Categories: DBA Blogs

Extract a node value from XML stored in a BLOB, in an Oracle table with 266 million rows

Thu, 2024-09-05 09:26
Hello Tom, I have a table with around 266,573,283 rows. The table has a BLOB column that stores XML data. I need to parse the XML in that column and get the value of one node; the node may or may not exist in every row. I have tried using Python (the cx_Oracle package). The table is partitioned by the date (month and year) on which the rows were stored. I'm not able to process the XML. Is there a way or an approach to process the XML data? Note: this table will not grow any more because the application has been shut down. Row counts by year: 2018: 3,543,136; 2019: 3,369,956; 2020: 7,576,397; 2021: 82,413,536; 2022: 123,216,864; 2023: 46,453,394. Thanks & Regards, Mani R
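A minimal sketch of one SQL-only approach, assuming hypothetical table, column, partition, and node names and an AL32UTF8 character set for the stored XML; it converts the BLOB to XMLTYPE per row and extracts the node, and can be driven partition by partition:
<code>
-- Hypothetical names throughout; adjust the XPath and the charset id to your data.
SELECT t.id,
       XMLCAST(
         XMLQUERY('/root/myNode/text()'
                  PASSING XMLTYPE(t.xml_blob, NLS_CHARSET_ID('AL32UTF8'))
                  RETURNING CONTENT)
         AS VARCHAR2(200)) AS node_value
FROM   big_xml_table PARTITION (p_2021) t   -- work one partition at a time
WHERE  t.xml_blob IS NOT NULL;
</code>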
Categories: DBA Blogs

A problem when ranking regression slopes over a database link

Thu, 2024-09-05 09:26
Dear Colleagues, I have come across a problem when ranking regression slopes over a database link. I have a table containing dates and file volumes for downloads from several sources. I want to build a Top N of the sources whose volumes are growing fastest. For this, I calculate slopes for all the sources using the regr_slope() function and then rank the slopes using row_number() over (order by slope desc nulls last) as rn_top. Then I want to select rows where rn_top <= :n. The results of the query obtained over a database link differ from the results obtained locally: the locally obtained results are correct, while the remote ones show strange behavior. The regression is correct and the ranking is correct, but select * from (ranking) is NOT correct even without any "where" condition: the slopes are correct, but their ranking is not. This effect does not occur for constants, nor for max/min/avg functions, only for regr_* functions. The effect is stable for two versions of the remote server, Oracle Database 19c Enterprise Edition and Oracle Database 11g Express Edition, with Oracle Database 19c Enterprise Edition as the database from which the database link originates. For reproducing:
<code>
CREATE TABLE "TEST_003"
  ("TASK_SOURCE_ID"       NUMBER(6,0),
   "TASK_DATE_DOWNLOADED" DATE,
   "TASK_FILE_SIZE"       VARCHAR2(128));
/* historically task_file_size is varchar in the original table */
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114656',to_date('26.08.24','DD.MM.RR'),'8266');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114657',to_date('26.08.24','DD.MM.RR'),'6925');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114658',to_date('26.08.24','DD.MM.RR'),'8783');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114659',to_date('26.08.24','DD.MM.RR'),'6590');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114660',to_date('26.08.24','DD.MM.RR'),'7204');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('122994',to_date('26.08.24','DD.MM.RR'),'59904');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('120116',to_date('27.08.24','DD.MM.RR'),'35125');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114656',to_date('27.08.24','DD.MM.RR'),'8226');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114658',to_date('27.08.24','DD.MM.RR'),'8784');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114659',to_date('27.08.24','DD.MM.RR'),'6591');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114660',to_date('27.08.24','DD.MM.RR'),'7206');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114656',to_date('28.08.24','DD.MM.RR'),'8230');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNL...
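For reference, a minimal sketch of the query shape being described, using the TEST_003 table above and a hypothetical database link named RMT (the ranking is an analytic over the aggregated slopes):
<code>
SELECT *
FROM  (SELECT task_source_id,
              regr_slope(TO_NUMBER(task_file_size),
                         task_date_downloaded - DATE '2024-01-01') AS slope,
              row_number() OVER (
                ORDER BY regr_slope(TO_NUMBER(task_file_size),
                                    task_date_downloaded - DATE '2024-01-01')
                         DESC NULLS LAST) AS rn_top
       FROM   test_003@rmt          -- reportedly correct when run against the local copy
       GROUP  BY task_source_id)
WHERE  rn_top <= :n;
</code>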
Categories: DBA Blogs

Object Type Variables in Bulk Collect and FORALL

Wed, 2024-09-04 15:06
I was reading the following discussion about using object types: https://asktom.oracle.com/ords/f?p=100:11:607033327783906::::: https://docs.oracle.com/cd/B10501_01/appdev.920/a96624/10_objs.htm But I couldn't understand how object types are used in packages (mainly in bulk processing). Is it possible to use objects in bulk processing? If yes, can you please help with an example?
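SQL object types can take part in bulk processing. A minimal sketch under assumed names (an EMPLOYEES source table and an object table EMP_COPY built from a hypothetical EMP_OBJ type); treat it as an illustration rather than a complete recipe:
<code>
CREATE OR REPLACE TYPE emp_obj AS OBJECT (
  emp_id   NUMBER,
  emp_name VARCHAR2(100)
);
/
CREATE OR REPLACE TYPE emp_obj_tab AS TABLE OF emp_obj;
/
CREATE TABLE emp_copy OF emp_obj;

DECLARE
  l_emps emp_obj_tab;
BEGIN
  -- Build one object instance per row while fetching in bulk
  SELECT emp_obj(employee_id, last_name)
  BULK COLLECT INTO l_emps
  FROM   employees
  WHERE  department_id = 50;

  -- Bind whole object instances in FORALL; on some versions, referencing
  -- individual attributes of collection elements inside FORALL DML is restricted.
  FORALL i IN 1 .. l_emps.COUNT
    INSERT INTO emp_copy VALUES (l_emps(i));
END;
/
</code>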
Categories: DBA Blogs

Inlining a function

Wed, 2024-09-04 15:06
Hello Tom, I am trying to inline a function when using a MERGE statement and I am not sure whether it is working, as no error is thrown. Here is my code:
<code>
CREATE OR REPLACE PACKAGE my_pkg IS
  FUNCTION my_fnc (p1 VARCHAR2, p2 VARCHAR2) RETURN VARCHAR2;
END my_pkg;
/

CREATE OR REPLACE PACKAGE BODY my_pkg IS
  FUNCTION my_fnc (p1 VARCHAR2, p2 VARCHAR2) RETURN VARCHAR2 IS
  BEGIN
    RETURN p1 || '-' || p2;
  END my_fnc;
END my_pkg;
/

CREATE OR REPLACE PROCEDURE my_prc IS
  TYPE t_type IS TABLE OF my_tab2%ROWTYPE;
  v_nt   t_type;
  v_colx my_tab2.colx%TYPE;
BEGIN
  -- version 1
  PRAGMA INLINE (my_fnc, 'YES');
  MERGE INTO my_tab1 a
  USING (SELECT col1, col2, my_pkg.my_fnc(col3, col4) colx, col5, col6
         FROM my_tab2
         WHERE 1 = 1) b                        -- the condition doesn't matter
  ON (a.col1 = b.col1 AND a.col2 = b.col2 AND a.colx = b.colx)
  WHEN MATCHED THEN
    UPDATE SET a.col5 = b.col5, a.col6 = b.col6
  WHEN NOT MATCHED THEN
    INSERT (col1, col2, colx, col5, col6)
    VALUES (b.col1, b.col2, b.colx, b.col5, b.col6);
  COMMIT;
  --
  -- version 2
  SELECT col1, col2, my_pkg.my_fnc(col3, col4) colx, col5, col6
  BULK COLLECT INTO v_nt
  FROM my_tab2;

  FORALL i IN v_nt.FIRST .. v_nt.LAST
    PRAGMA INLINE (my_fnc, 'YES');
    v_colx := my_pkg.my_fnc(col3, col4);
    MERGE INTO my_tab1 a
    USING (SELECT v_nt(i).col1 col1, v_nt(i).col2 col2, v_colx colx,
                  v_nt(i).col5 col5, v_nt(i).col6 col6
           FROM dual) b                        -- the condition doesn't matter
    ON (a.col1 = b.col1 AND a.col2 = b.col2 AND a.colx = b.colx)
    WHEN MATCHED THEN
      UPDATE SET a.col5 = b.col5, a.col6 = b.col6
    WHEN NOT MATCHED THEN
      INSERT (col1, col2, colx, col5, col6)
      VALUES (b.col1, b.col2, b.colx, b.col5, b.col6);
END my_prc;
</code>
Now, my questions are: can either version be inlined? Version 1 could not be inlined because the pragma does not precede any of the statement types mentioned in the Oracle documentation (https://docs.oracle.com/en/database/oracle/oracle-database/19/lnpls/plsql-optimization-and-tuning.html#GUID-2E78813E-CF29-409D-9F8B-AA24B294BFA2)? Version 1 could also not be inlined because the invoked and invoking subprograms are not in the same program unit (it is not very clear what "same program unit" means)? Version 2 could be inlined because the pragma precedes an assignment statement? Or version 2 could not be inlined because the invoked and invoking subprograms are not in the same program unit? Thank you, Ionut Preda.
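For contrast, a minimal sketch (hypothetical names) of a call that the compiler can actually inline: the invoked function is declared in the same program unit as the caller, and PRAGMA INLINE immediately precedes an assignment statement that invokes it:
<code>
CREATE OR REPLACE PROCEDURE demo_inline IS
  l_result VARCHAR2(100);

  -- Local function: invoked and invoking code live in the same program unit
  FUNCTION join_vals (p1 VARCHAR2, p2 VARCHAR2) RETURN VARCHAR2 IS
  BEGIN
    RETURN p1 || '-' || p2;
  END join_vals;
BEGIN
  PRAGMA INLINE (join_vals, 'YES');
  l_result := join_vals('a', 'b');   -- assignment statement: a valid pragma target
  DBMS_OUTPUT.PUT_LINE(l_result);
END demo_inline;
/
-- With PLSQL_OPTIMIZE_LEVEL >= 2 and PLSQL_WARNINGS = 'ENABLE:ALL', compiler
-- warnings PLW-06004 / PLW-06005 report whether inlining was requested / performed.
</code>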
Categories: DBA Blogs

Oracle's RDBMS approach to sizing the database server ( versus MongoDB's approach )

Wed, 2024-09-04 15:06
Greetings, My question is about Oracle's recommended process for sizing an Oracle database server. When I have participated in sizing an Oracle server, we only estimated the table sizes using the field sizes multiplied by the total number of records. Then we added 15% of the total table size for index space. Next, we ran a few simulated runs with test data. The focus was mostly on making sure the server had enough disk space and that total CPU didn't go over 50% at peak load. That was it. The steps I mentioned were provided either by Oracle Support or by our Oracle technical contact. Is there now an official Oracle whitepaper that states how an Oracle server should be sized? I have recently been asked to help size a MongoDB server. In this case, I do have access to a MongoDB whitepaper-like sizing guide. It recommends focusing on making sure the entire "working set" fits in memory. The "working set" is the amount of space used by both the data and the indexes needed for processing "frequently accessed data", and the "frequently accessed data" is the data needed by normal processing. I still don't know how to accurately determine that size, since the guide doesn't go into much detail. Does this sound like a good way to size an Oracle database server, too? Thanks, John P.S. Here is Mongo's whitepaper-like approach: https://www.linkedin.com/pulse/mongodb-sizing-guide-sepp-renfer/
Categories: DBA Blogs

Insert Into Large Tables Takes a Long Time

Wed, 2024-09-04 15:06
Looking for a recommendation related to large table inserts in an Autonomous (ATP) database. We have a sizable history table in our database, roughly 45 GB, which stores all transactions at the end of each day. Recently, we have been experiencing performance challenges when inserting data into this table. We tried 2 approaches to fix the performance: (1) the /*+ APPEND */ hint in direct SQL, and (2) BULK COLLECT and FORALL. Both approaches work fine for smaller data sets, but when the data volume is really big, around 2 million records on certain days of the week, the inserts take more than 5 minutes to run. Since our calling application has a 5-minute limit on SQL executions, our entire package is failing. Can you please tell us whether we are missing anything in our approach, or whether we can try other things for better performance? Also, should we consider partitioning the table for faster DML? If partitioning is something we can consider, which field should we partition on? (Since the data in the table is quite random, we thought of doing a hash partition on line_id.) Could you provide any suggestions? Furthermore, does the choice of field for hash partitioning carry any significance?
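A minimal sketch of the two ideas mentioned, with hypothetical table and column names: a direct-path (APPEND) insert from a staging source, and a hash-partitioned copy of the history table keyed on line_id:
<code>
-- Direct-path insert; rows are loaded above the high-water mark and are only
-- visible after COMMIT (the session cannot re-query the table before committing).
INSERT /*+ APPEND */ INTO txn_history
SELECT * FROM txn_staging_daily;
COMMIT;

-- Hash-partitioned variant of the history table (empty copy shown here).
CREATE TABLE txn_history_part
PARTITION BY HASH (line_id) PARTITIONS 16
AS SELECT * FROM txn_history WHERE 1 = 0;
</code>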
Categories: DBA Blogs

Dynamic SQL

Wed, 2024-09-04 15:06
Hi, thank you for taking my question. I wanted to know the logic behind not being able to reference :new and :old variables in dynamic SQL: execute immediate 'insert into tablea (columna, columnb) values (:old.columna, :new.columnb)'; This will fail; I am just trying to understand the logic behind it. Thank you for your response. PS: I'm a constant browser of your site, loving it!
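A minimal sketch of the usual workaround (table names are hypothetical): inside a trigger, :NEW and :OLD are pseudorecords resolved when the trigger is compiled, not SQL bind variables, so their values have to be passed to the dynamic statement explicitly through the USING clause:
<code>
CREATE OR REPLACE TRIGGER source_tab_copy_trg
  AFTER INSERT OR UPDATE ON source_tab
  FOR EACH ROW
BEGIN
  -- The string only contains ordinary placeholders; the pseudorecord values
  -- are supplied as binds at execution time.
  EXECUTE IMMEDIATE
    'insert into tablea (columna, columnb) values (:1, :2)'
    USING :OLD.columna, :NEW.columnb;
END;
/
</code>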
Categories: DBA Blogs

How to connect SQL Developer using an Oracle wallet external password store

Wed, 2024-09-04 15:06
Hi, how can I connect SQL Developer using Oracle wallet-based security? From SQL*Plus we can connect using a command like /@sid, but how can I connect from SQL Developer (or Toad) using wallet-based security? Thanks in advance.
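A minimal sketch of an external password store setup; the directory path, TNS alias, and username below are examples. One commonly described approach for SQL Developer is to point the connection at the same client configuration SQL*Plus uses (thick/OCI driver plus a TNS connection type), so the wallet in sqlnet.ora is honored:
<code>
-- sqlnet.ora (client side; directory is an example path)
WALLET_LOCATION =
  (SOURCE = (METHOD = FILE)
    (METHOD_DATA = (DIRECTORY = /u01/app/oracle/wallet)))
SQLNET.WALLET_OVERRIDE = TRUE

-- Create the wallet and store credentials for a TNS alias (prompts for passwords)
mkstore -wrl /u01/app/oracle/wallet -create
mkstore -wrl /u01/app/oracle/wallet -createCredential MYDB app_user

-- SQL*Plus then connects without exposing the password:
sqlplus /@MYDB
</code>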
Categories: DBA Blogs

Unified audit on execute on a procedure does not record in the audit trail when errors ORA-06550 / PLS-00306 occur

Tue, 2024-09-03 20:46
Hi, I am helping developers track their failed calls to Oracle procedures from a Java app, and I thought unified auditing could do that. So I created a unified audit policy on execute on the procedure; however, the audit only records the successful execution. When the call fails with errors ORA-06550 and PLS-00306, it is not recorded in the audit trail. Here is example code to reproduce the problem:
<code>
CREATE OR REPLACE PROCEDURE output_parameter(p_input VARCHAR2) AS
BEGIN
  DBMS_OUTPUT.PUT_LINE('Input Parameter: ' || p_input);
END;
/

CREATE AUDIT POLICY Jiulu_dropit3 ACTIONS EXECUTE ON jiulusun.output_parameter;
AUDIT POLICY Jiulu_dropit3;
</code>
Log out, log back in.
-- successful execution
<code>
SQL> exec output_parameter('a');
Input Parameter: a

PL/SQL procedure successfully completed.
</code>
-- failed execution
<code>
SQL> exec output_parameter('a','b');
BEGIN output_parameter('a','b'); END;

      *
ERROR at line 1:
ORA-06550: line 1, column 7:
PLS-00306: wrong number or types of arguments in call to 'OUTPUT_PARAMETER'
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored

SELECT event_timestamp, os_username, userhost, dbusername, action_name,
       return_code, object_schema, object_name, sql_text
FROM unified_audit_trail
WHERE object_name = 'OUTPUT_PARAMETER'
@pr
==============================
EVENT_TIMESTAMP : 12-AUG-24 10.56.17.340044 AM
OS_USERNAME     : jiulusun
USERHOST        : my machine
DBUSERNAME      : JIULUSUN
ACTION_NAME     : EXECUTE
RETURN_CODE     : 0
OBJECT_SCHEMA   : JIULUSUN
OBJECT_NAME     : OUTPUT_PARAMETER
SQL_TEXT        : BEGIN output_parameter('a'); END;

PL/SQL procedure successfully completed.
</code>
So the failed execution is not in the audit trail. Is this a bug, or have I misunderstood how unified auditing works?
Categories: DBA Blogs

Options for Change Data Capture

Tue, 2024-09-03 02:46
Chris & Connor, we are working on a new project and were wondering what options are currently available/provided by Oracle for Change Data Capture. Besides the DIY approaches (materialized views, MVIEW logs mostly, triggers, etc.), does Oracle 11.2.0.4 and above provide an out-of-the-box solution? I have heard that Oracle CDC is being deprecated; is there some option available that will track all the data changes made to a table? Thanks, BC MT, MI
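For completeness, a minimal sketch of the DIY materialized view log option mentioned above; table and column names are hypothetical:
<code>
-- A materialized view log records row-level changes on the source table; the
-- SEQUENCE and INCLUDING NEW VALUES clauses keep enough detail for downstream use.
CREATE MATERIALIZED VIEW LOG ON orders
  WITH PRIMARY KEY, ROWID, SEQUENCE (order_status, order_total)
  INCLUDING NEW VALUES;

-- Captured changes accumulate in the log table (MLOG$_ORDERS here) until a
-- registered consumer (e.g. a fast-refreshable MV) purges them on refresh.
SELECT * FROM mlog$_orders;
</code>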
Categories: DBA Blogs

Prevent future dates for the inspection_date column in a table

Sun, 2024-09-01 14:26
I have a table called DAY_INSP with a column INS_DATE which allows dates in the future. I want to prevent that by only allowing today's date or older. Do I need to add a trigger or a constraint to do this? Any help would be appreciated.
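One point worth knowing before choosing: a CHECK constraint cannot reference SYSDATE, so a row-level trigger is the usual way to enforce this. A minimal sketch using the table and column names from the question:
<code>
CREATE OR REPLACE TRIGGER day_insp_no_future
  BEFORE INSERT OR UPDATE OF ins_date ON day_insp
  FOR EACH ROW
BEGIN
  -- Allow any time today or earlier; reject anything from tomorrow onwards
  IF :NEW.ins_date >= TRUNC(SYSDATE) + 1 THEN
    RAISE_APPLICATION_ERROR(-20001, 'INS_DATE cannot be a future date');
  END IF;
END;
/
</code>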
Categories: DBA Blogs

Performance issue after migrating from 10g to 19c

Thu, 2024-08-29 13:26
We are upgrading an Oracle 10g DB to 19c. For the data migration, we exported a Data Pump dump from the 10g DB and imported it into the new 19c DB. After the data migration was completed and performance testing started, we found that many SQL queries run much slower than on the 10g DB. We checked the explain plans, the tables' statistics, etc., but there was no solution. Then we tried to collect statistics for all tables. The first time we used the DBMS_STATS package, which is what Oracle suggests, but it didn't work (execution time ~43 minutes). The second time we used an ANALYZE script, and performance returned to normal (execution time ~2 seconds). My questions are: 1. What causes the execution time to slow down after migrating to 19c? 2. Why does the ANALYZE script fix the issue but DBMS_STATS does not? 3. Which parameters (or table statistics) do I need to check?
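For reference, a minimal sketch (the schema name is an example) of a typical post-migration DBMS_STATS call; comparing what it produces in DBA_TABLES / DBA_TAB_COL_STATISTICS against the ANALYZE-gathered numbers is often the quickest way to see what differs:
<code>
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'APP_OWNER',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    cascade          => TRUE,    -- include index statistics
    degree           => 4);
END;
/
</code>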
Categories: DBA Blogs

ORA-55616 / ORA-55624 Errors

Thu, 2024-08-29 13:26
Dear Tom, Oracle 19.22 / IBM AIX 7.2. We have a set of 7 tables on which Flashback Data Archive has been enabled for more than 7 years, and it was running without any issues. The databases were upgraded from 11g to 19c a couple of years back, and post-upgrade everything continued to run fine until very recently. A couple of days back, adding to the coincidence of this error, a failover and failback of the database happened. After this, the application teams started getting ORA-55616 errors. We raised an Oracle SR, but so far no positive solutions have come of it. While debugging the issue we found that the first time an insert is run in a session, the insert goes through without any issues. However, the second time an insert is run, the ORA-55616 error occurs. We were able to reproduce this error consistently. Example (this is what we did to test, but the real table has many more columns with different data types):
<code>
SQL> insert into t1 values(1,2,3);
1 row inserted.

SQL> rollback;
Rollback complete.

SQL> insert into t1 values(1,2,3);
ERROR at line 1:
ORA-55616: Transaction table needs Flashback Archiver processing

SQL> conn / as sysdba
Connected.
SQL> alter session set container=pdb1;
Session altered.

SQL> insert into t1 values(1,2,3);
1 row inserted.

SQL> rollback;
Rollback complete.

SQL> insert into t1 values(1,2,3);
ERROR at line 1:
ORA-55616: Transaction table needs Flashback Archiver processing
</code>
Because of this, we decided to disable flashback archive on that set of tables and try the operations again; they worked fine without any issues. We then decided to check whether we could re-enable the flashback archive and see if it works. When we tried to re-enable it, we ran into the ORA-55624 error; for none of the 7 tables were we able to enable flashback archive.
<code>
SQL> alter table t1 flashback archive;
alter table t1 flashback archive
*
ERROR at line 1:
ORA-55624: The table t1 cannot be enabled for Flashback Archive at this point
</code>
Neither the Oracle documentation nor Oracle Support has extensive documentation on these errors. Would you be able to help us understand why we are running into this error, and whether there is anything that explains the internal workflow that leads to it? Thanks, Amarnath
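For anyone debugging something similar, a minimal sketch of the views and package calls commonly used to inspect the association between a table and its flashback archive; the owner and table names are examples, and disassociating should only be done under Oracle Support's guidance:
<code>
-- Which tables are tracked, and where their history lives
SELECT owner_name, table_name, flashback_archive_name, archive_table_name, status
FROM   dba_flashback_archive_tables
WHERE  owner_name = 'APP_OWNER';

-- Temporarily break and restore the link between a table and its history tables
EXEC DBMS_FLASHBACK_ARCHIVE.DISASSOCIATE_FBA('APP_OWNER', 'T1');
EXEC DBMS_FLASHBACK_ARCHIVE.REASSOCIATE_FBA('APP_OWNER', 'T1');
</code>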
Categories: DBA Blogs

dbms_datapump

Thu, 2024-08-29 13:26
1. I am trying to run a DBMS_DATAPUMP export and import at schema level by creating a procedure. I have the necessary privileges on the database I am running against, such as EXP_FULL_DATABASE and IMP_FULL_DATABASE, plus read/write/execute on the DATA_PUMP_DIR directory.
2. I exported the dump file using filetype => KU$_FILE_TYPE_DUMP_FILE and it created 3-4 dump files. This was done as user ABC.
3. Now I'm trying to import those dump files into a different user XYZ using DBMS_DATAPUMP.METADATA_REMAP, passing the parameters old_value => 'ABC' and new_value => 'XYZ'.
4. When I run the import procedure as user ABC to import the dump files into XYZ, I get an 'INVALID OPERATION' error, with the line number referring to DBMS_DATAPUMP.START_JOB.
5. The input I'm passing is my dump file and it looks like this: 'my_dumpfile_%U.dmp'. Note: when running the import job, I couldn't use the ADD_FILE method to add the log file, so I skipped adding the log file.
Please help me resolve this error, and let me know if any further details are required. - akilesh
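A minimal sketch of a schema-mode import with a schema remap; the dump file pattern, directory, and schema names follow the question, everything else is illustrative:
<code>
DECLARE
  h       NUMBER;
  v_state VARCHAR2(30);
BEGIN
  h := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'SCHEMA');

  -- Dump files created by the export (the wildcard matches the multiple pieces)
  DBMS_DATAPUMP.ADD_FILE(h, 'my_dumpfile_%U.dmp', 'DATA_PUMP_DIR',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
  -- Log file: added with its own file type, before starting the job
  DBMS_DATAPUMP.ADD_FILE(h, 'my_import.log', 'DATA_PUMP_DIR',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

  DBMS_DATAPUMP.METADATA_REMAP(h, 'REMAP_SCHEMA', 'ABC', 'XYZ');

  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.WAIT_FOR_JOB(h, v_state);
  DBMS_OUTPUT.PUT_LINE('Job finished with state: ' || v_state);
END;
/
</code>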
Categories: DBA Blogs

Exposing APEX app on the Internet running on internal on-premise ORDS for supplier data from EBS

Thu, 2024-08-29 13:26
**Use Case:** We have built a supplier portal app in APEX and would now like to give our suppliers access to it.

**Best Practices for Exposing APEX Apps on the Internet:** When exposing APEX apps on the WWW/Internet that use an on-premises EBS instance/Oracle database as the source database, it is essential to follow best practices to ensure security and performance. Below are some key considerations we have accounted for:

1. **New Domain Name and DNS Record:** We have registered a new domain name and created a DNS record to access the app, e.g., supplier.XXXXXX.com.
2. **SSL and Domain Protection:** We have added SSL and full domain protection with the domain hosting service to ensure secure communication.
3. **Firewall and NAT Configuration:** We have configured firewall rules and NAT records to route traffic through internal reverse proxy servers.
4. **APEX Authentication Scheme:** We have implemented APEX authentication using Microsoft Active Directory (AD) and Azure for secure user authentication.
5. **User Authorization:** We have set up custom user authorization to control access to the APEX app.

**Additional Considerations:** We are also considering implementing a valid external IP address list on our firewall to restrict access to the domain supplier.XXXXX.com based on known external IPs from our suppliers. Here are the pros and cons of this approach:

**Pros:**
- **Enhanced Security:** Restricting access to known IP addresses reduces the risk of unauthorized access.
- **Control:** Provides better control over who can access the application.

**Cons:**
- **Maintenance:** Managing and updating the list of valid IP addresses can be time-consuming.
- **Accessibility:** Legitimate users with dynamic IP addresses may face access issues.

Thank you in advance for any additional thoughts, guidelines, or best practices.
Categories: DBA Blogs

Oracle mainframe instances

Thu, 2024-08-29 13:26
G'Day Ask Tom, I have been asked to look into curating an IBM z/OS mainframe interactive blog community on LinkedIn as an Oracle expert. It occurred to me to ask what the installed base of Oracle on IBM mainframes might be, roughly. There seems to be a conflict of interest here with IBM's own commercial database offering, DB2, so it is an open question for me. If Oracle RDBMS on the mainframe is a rarity, a blog may not return in value the effort expended. Thank you for your time, Best regards, Mike
Categories: DBA Blogs

Apache NiFi

Fri, 2024-08-23 12:06
Hi! For years we have been using Apache NiFi for other purposes, and when we want to load some data into the Oracle database, a CSV is generated in NiFi and then loaded into Oracle with an external table. I think the best thing would be to change this procedure so that NiFi accesses Oracle directly, but what do you think would be the best way? Doing an insert directly? Using AQ or TEQ queues? Thank you very much
Categories: DBA Blogs

What is the fastest way to delete data from a temporary table?

Wed, 2024-08-21 05:26
Hello Tom, I'm using a GTT in daily data-uploading procedures. I have 10 procedures that launch one after another. Each of these procedures uses one universal GTT (let's call it UNIVERSAL_GTT; the table contains 3 fields: id number, date1 date, date2 date). At the end of each procedure I run: DELETE FROM UNIVERSAL_GTT; COMMIT; but it takes too long (about 20% of all the procedures' run time). That's why I have 2 questions: 1. How can I make deletion from the GTT faster? 2. Would it be a good idea to drop and recreate UNIVERSAL_GTT 10 times each day while the procedures run?
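A minimal sketch of two common alternatives to a row-by-row DELETE on a GTT, using the table definition from the question:
<code>
-- TRUNCATE on a global temporary table removes only the current session's rows
-- and is typically far cheaper than DELETE (no per-row undo).
TRUNCATE TABLE universal_gtt;

-- Or define the GTT so that COMMIT itself empties it, removing the need for
-- any explicit cleanup at the end of each procedure.
CREATE GLOBAL TEMPORARY TABLE universal_gtt (
  id    NUMBER,
  date1 DATE,
  date2 DATE
) ON COMMIT DELETE ROWS;
</code>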
Categories: DBA Blogs
