Feed aggregator

Oracle Recognized as a Leader in Gartner Magic Quadrant for Cloud Financial Close Solutions for Oracle EPM Cloud

Oracle Press Releases - Wed, 2019-10-30 08:00
Press Release
Oracle Recognized as a Leader in Gartner Magic Quadrant for Cloud Financial Close Solutions for Oracle EPM Cloud
Oracle is the only EPM vendor to be named a Leader in both Cloud Financial Close Solutions and Cloud Financial Planning and Analysis Solutions Magic Quadrant reports

Redwood Shores, Calif.—Oct 30, 2019

Oracle has been named a Leader in Gartner’s 2019 “Magic Quadrant for Cloud Financial Close Solutions” report for the third consecutive year. Out of 10 companies evaluated, Oracle is positioned as a Leader based on its ability to execute and completeness of vision. Oracle is the only Enterprise Performance Management (EPM) vendor to be named a Leader in both Cloud Financial Planning and Analysis Solutions and Cloud Financial Close Solutions Magic Quadrant reports. A complimentary copy of the report is available here.

According to the report, “Leaders provide mature offerings that meet market demand and have demonstrated the vision necessary to sustain their market position as requirements evolve. The hallmark of Leaders is that they focus on, and invest in, their offerings to the point where they lead the market and can affect its overall direction. As a result, Leaders can be vendors to watch as you try to understand how new market offerings might evolve. Leaders typically possess a large, satisfied customer base (relative to the size of the market) and enjoy high visibility within the market. Their size and financial strength enable them to remain viable in a challenging economy. Leaders typically respond to a wide market audience by supporting broad market requirements; however, they may fail to meet the specific needs of vertical markets or other more-specialized segments.”

“As the EPM Cloud market matures, we are seeing more organizations looking for an integrated EPM suite to connect their financial and operational planning with financial close and reporting processes,” said Hari Sankar, Group Vice President, Product Management, Oracle.  “We are proud to be the only vendor acknowledged as a Leader by Gartner in these two Magic Quadrant reports related to EPM.”

Oracle EPM Cloud is the only complete EPM solution on a common platform that addresses financial and operational planning, consolidation and close, data management, reporting, and analysis processes. With native integration with the broader Oracle Cloud Applications suite, which includes Enterprise Resource Planning (ERP), Supply Chain Management (SCM), Human Capital Management (HCM) and Customer Experience (CX) applications, Oracle helps customers to stay ahead of changing expectations, build adaptable organizations, and realize the potential of the latest innovations.

Oracle’s portfolio of enterprise performance management applications has garnered industry recognition. Oracle was recently named a Leader in Gartner’s 2019 “Magic Quadrant for Cloud Financial Planning and Analysis Solutions” for Oracle EPM Cloud, making it the only vendor to be named a Leader in both of Gartner’s 2019 Magic Quadrants related to enterprise performance management.

Gartner, Magic Quadrant for Cloud Financial Close Solutions, Robert Anderson, John Van Decker, Greg Leiter, 21 October 2019

Gartner, Magic Quadrant for Cloud Financial Planning and Analysis Solutions, Robert Anderson, John Van Decker, Greg Leiter, 8 August 2019

Gartner Disclaimer
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Contact Info
Bill Rundle
Oracle
+1.650.506.1891
bill.rundle@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


Top 8 Post Limits on Tumblr (and Other Limitations) Revealed

VitalSoftTech - Tue, 2019-10-29 09:57

Do you know about post limits on Tumblr? Are you aware that the blogging platform has certain rules and regulations which social media users must abide by? Tumblr has become one of the best and most entertaining platforms out there for blogging and sharing multimedia content. It is not just immensely popular amongst the younger […]

The post Top 8 Post Limits on Tumblr (and Other Limitations) Revealed appeared first on VitalSoftTech.

Categories: DBA Blogs

Cloning Oracle Homes in 19c

Michael Dinh - Tue, 2019-10-29 09:41

You may find conflicting information in Oracle’s documentation: Cloning an Oracle Database Home shows how to use clone.pl, while the Database Upgrade Guide 19c lists it under Deprecation of the clone.pl Script.

To clone Oracle software, create a gold image using createGoldImage and then install the software from that image as usual.

DEMO for DB:

Source: /u01/app/oracle/product/19.0.0/dbhome_1
Target: /u01/app/oracle/product/19.0.0/dbhome_2

[oracle@ol7-19-rac1 ~]$ ls -l /u01/app/oracle/product/19.0.0/dbhome_2/
total 0

[oracle@ol7-19-rac1 ~]$ $ORACLE_HOME/runInstaller -createGoldImage -destinationLocation /u01/app/oracle/product/19.0.0/dbhome_2 -silent
Launching Oracle Database Setup Wizard...

[oracle@ol7-19-rac1 ~]$ ls -l /u01/app/oracle/product/19.0.0/dbhome_2/
total 3069584
-rw-r--r--. 1 oracle oinstall 3143250100 Oct 29 13:09 db_home_2019-10-29_12-59-52PM.zip

[oracle@ol7-19-rac1 ~]$ cd /u01/app/oracle/product/19.0.0/dbhome_2/
[oracle@ol7-19-rac1 dbhome_2]$ unzip -qo db_home_2019-10-29_12-59-52PM.zip

[oracle@ol7-19-rac1 dbhome_2]$ ls -ld *
drwxr-xr-x. 2 oracle oinstall 102 Oct 2 00:06 addnode
drwxr-xr-x. 3 oracle oinstall 20 Oct 2 00:35 admin
drwxr-xr-x. 6 oracle oinstall 4096 Apr 17 2019 apex
drwxr-xr-x. 9 oracle oinstall 93 Apr 17 2019 assistants
drwxr-xr-x. 2 oracle oinstall 8192 Oct 29 13:00 bin
drwxr-xr-x. 4 oracle oinstall 87 Oct 2 00:06 clone
drwxr-xr-x. 6 oracle oinstall 55 Apr 17 2019 crs
drwxr-xr-x. 3 oracle oinstall 18 Apr 17 2019 css
drwxr-xr-x. 11 oracle oinstall 4096 Apr 17 2019 ctx
drwxr-xr-x. 7 oracle oinstall 71 Apr 17 2019 cv
drwxr-xr-x. 3 oracle oinstall 20 Apr 17 2019 data
-rw-r--r--. 1 oracle oinstall 3143250100 Oct 29 13:09 db_home_2019-10-29_12-59-52PM.zip
drwxr-xr-x. 3 oracle oinstall 19 Apr 17 2019 dbjava
drwxr-xr-x. 2 oracle oinstall 66 Oct 29 12:35 dbs
drwxr-xr-x. 5 oracle oinstall 4096 Oct 2 00:06 deinstall
drwxr-xr-x. 3 oracle oinstall 20 Apr 17 2019 demo
drwxr-xr-x. 3 oracle oinstall 20 Apr 17 2019 diagnostics
drwxr-xr-x. 13 oracle oinstall 4096 Apr 17 2019 dmu
drwxr-xr-x. 4 oracle oinstall 30 Apr 17 2019 drdaas
drwxr-xr-x. 3 oracle oinstall 19 Apr 17 2019 dv
-rw-r--r--. 1 oracle oinstall 852 Aug 18 2015 env.ora
drwxr-xr-x. 3 oracle oinstall 18 Apr 17 2019 has
drwxr-xr-x. 5 oracle oinstall 41 Apr 17 2019 hs
drwxr-xr-x. 10 oracle oinstall 4096 Oct 29 13:08 install
drwxr-xr-x. 2 oracle oinstall 29 Apr 17 2019 instantclient
drwxr-x---. 13 oracle oinstall 4096 Oct 29 13:00 inventory
drwxr-xr-x. 8 oracle oinstall 82 Oct 29 13:00 javavm
drwxr-xr-x. 3 oracle oinstall 35 Apr 17 2019 jdbc
drwxr-xr-x. 6 oracle oinstall 4096 Oct 29 13:00 jdk
drwxr-xr-x. 2 oracle oinstall 4096 Oct 8 20:23 jlib
drwxr-xr-x. 10 oracle oinstall 4096 Apr 17 2019 ldap
drwxr-xr-x. 4 oracle oinstall 12288 Oct 29 13:00 lib
drwxr-x---. 2 oracle oinstall 6 Oct 2 00:10 log
drwxr-xr-x. 9 oracle oinstall 98 Apr 17 2019 md
drwxr-xr-x. 4 oracle oinstall 31 Apr 17 2019 mgw
drwxr-xr-x. 10 oracle oinstall 4096 Oct 29 13:00 network
drwxr-xr-x. 5 oracle oinstall 46 Apr 17 2019 nls
drwxr-xr-x. 8 oracle oinstall 101 Apr 17 2019 odbc
drwxr-xr-x. 5 oracle oinstall 42 Apr 17 2019 olap
drwxr-x---. 14 oracle oinstall 4096 Oct 2 00:06 OPatch
drwxr-xr-x. 7 oracle oinstall 65 Apr 17 2019 opmn
drwxr-xr-x. 4 oracle oinstall 34 Apr 17 2019 oracore
drwxr-xr-x. 6 oracle oinstall 52 Apr 17 2019 ord
drwxr-xr-x. 4 oracle oinstall 66 Apr 17 2019 ords
drwxr-xr-x. 3 oracle oinstall 19 Apr 17 2019 oss
drwxr-xr-x. 8 oracle oinstall 4096 Oct 2 00:06 oui
drwxr-xr-x. 4 oracle oinstall 33 Apr 17 2019 owm
drwxr-xr-x. 5 oracle oinstall 39 Apr 17 2019 perl
drwxr-xr-x. 6 oracle oinstall 78 Apr 17 2019 plsql
drwxr-xr-x. 6 oracle oinstall 56 Oct 29 13:00 precomp
drwxr-xr-x. 2 oracle oinstall 26 Apr 17 2019 QOpatch
drwxr-xr-x. 5 oracle oinstall 52 Apr 17 2019 R
drwxr-xr-x. 4 oracle oinstall 29 Apr 17 2019 racg
drwxr-xr-x. 15 oracle oinstall 4096 Oct 29 13:00 rdbms
drwxr-xr-x. 3 oracle oinstall 21 Apr 17 2019 relnotes
-rwx------. 1 oracle oinstall 549 Oct 2 00:06 root.sh
-rwx------. 1 oracle oinstall 786 Apr 17 2019 root.sh.old
-rw-r-----. 1 oracle oinstall 10 Apr 17 2019 root.sh.old.1
-rwx------. 1 oracle oinstall 638 Apr 18 2019 root.sh.old.2
-rw-r-----. 1 oracle oinstall 10 Apr 17 2019 root.sh.old.3
-rwxr-x---. 1 oracle oinstall 1783 Mar 8 2017 runInstaller
-rw-r--r--. 1 oracle oinstall 2927 Oct 14 2016 schagent.conf
drwxr-xr-x. 5 oracle oinstall 4096 Apr 17 2019 sdk
drwxr-xr-x. 3 oracle oinstall 18 Apr 17 2019 slax
drwxr-xr-x. 4 oracle oinstall 41 Apr 17 2019 sqldeveloper
drwxr-xr-x. 3 oracle oinstall 17 Apr 17 2019 sqlj
drwxr-xr-x. 5 oracle oinstall 4096 Oct 8 20:22 sqlpatch
drwxr-xr-x. 6 oracle oinstall 53 Oct 2 00:05 sqlplus
drwxr-xr-x. 6 oracle oinstall 54 Apr 17 2019 srvm
drwxr-xr-x. 5 oracle oinstall 45 Oct 29 13:00 suptools
drwxr-xr-x. 3 oracle oinstall 35 Apr 17 2019 ucp
drwxr-xr-x. 4 oracle oinstall 31 Apr 17 2019 usm
drwxr-xr-x. 2 oracle oinstall 33 Apr 17 2019 utl
drwxr-xr-x. 3 oracle oinstall 19 Apr 17 2019 wwg
drwxr-x---. 7 oracle oinstall 69 Apr 17 2019 xdk
[oracle@ol7-19-rac1 dbhome_2]$

DEMO for GI:

Source: /u01/app/19.0.0/grid
Target: /u01/app/19.0.0/grid5

[root@ol7-19-rac1 ~]# mkdir -p /u01/app/19.0.0/grid5
[root@ol7-19-rac1 ~]# chmod 775 /u01/app/19.0.0/grid5
[root@ol7-19-rac1 ~]# chown oracle:oinstall /u01/app/19.0.0/grid5
[root@ol7-19-rac1 ~]# ls -ld /u01/app/19.0.0/grid5/
drwxrwxr-x. 2 oracle oinstall 6 Oct 29 13:15 /u01/app/19.0.0/grid5/

[oracle@ol7-19-rac1 ~]$ echo $ORACLE_HOME
/u01/app/19.0.0/grid
[oracle@ol7-19-rac1 ~]$ $ORACLE_HOME/gridSetup.sh -creategoldimage -destinationlocation /u01/app/19.0.0/grid5 -silent
Launching Oracle Grid Infrastructure Setup Wizard...
[oracle@ol7-19-rac1 ~]$

FAILED:

[oracle@ol7-19-rac1 GridSetupActions2019-10-29_01-20-38PM]$ grep -A1 "^WARNING" gridSetupActions2019-10-29_01-20-38PM.log
WARNING:  [Oct 29, 2019 1:20:54 PM] Validation disabled for the state init
INFO:  [Oct 29, 2019 1:20:54 PM] Completed validating state <init>
--
WARNING:  [Oct 29, 2019 1:20:55 PM] Command to get the files from '/u01/app/19.0.0/grid' not owned by 'oracle' failed.
WARNING:  [Oct 29, 2019 1:20:55 PM] Following files from the source home are not owned by the current user: [/u01/app/19.0.0/grid/acfs, /u01/app/19.0.0/grid/acfs/tunables, /u01/app/19.0.0/grid/acfs/tunables/acfstunables]
INFO:  [Oct 29, 2019 1:20:55 PM] Getting the last existing parent of: /u01/app/19.0.0/grid5
--
WARNING:  [Oct 29, 2019 1:20:57 PM] Files list is null or empty.
INFO:  [Oct 29, 2019 1:20:57 PM] Completed validating state <createGoldImage>
--
WARNING:  [Oct 29, 2019 1:20:58 PM] Following files are not readable: [/u01/app/19.0.0/grid/suptools/orachk/orachk, /u01/app/19.0.0/grid/log/procwatcher/prw.sh, /u01/app/19.0.0/grid/log/procwatcher/PRW_SYS_ol7-19-rac1, /u01/app/19.0.0/grid/log/procwatcher/prwinit.ora, /u01/app/19.0.0/grid/crf/admin/run/crfmond, /u01/app/19.0.0/grid/crf/admin/run/crflogd]
INFO:  [Oct 29, 2019 1:21:00 PM] Verifying whether Central Inventory is locked by any other OUI session...
--
WARNING:  [Oct 29, 2019 1:21:05 PM] Could not create symlink: /tmp/GridSetupActions2019-10-29_01-20-38PM/tempHome_1572355263979/log/procwatcher/prw.sh.
Refer associated stacktrace #oracle.install.ivw.common.driver.job.CreateGoldImageJob:7059
--
WARNING:  [Oct 29, 2019 1:21:34 PM] Could not create symlink: /tmp/GridSetupActions2019-10-29_01-20-38PM/tempHome_1572355294593/log/procwatcher/prw.sh.
Refer associated stacktrace #oracle.install.ivw.common.driver.job.CreateGoldImageJob:7118


[oracle@ol7-19-rac1 GridSetupActions2019-10-29_01-20-38PM]$ ll /u01/app/19.0.0/grid/acfs
total 0
drwxr-xr-x. 2 root root 26 Oct  8 20:33 tunables


[oracle@ol7-19-rac1 GridSetupActions2019-10-29_01-20-38PM]$ grep -i severe gridSetupActions2019-10-29_01-20-38PM.log
SEVERE: [Oct 29, 2019 1:21:11 PM] [FATAL] [INS-32700] The gold image creation failed. Check the install log /u01/app/oraInventory/logs/GridSetupActions2019-10-29_01-20-38PM for more details.
SEVERE: [Oct 29, 2019 1:21:40 PM] [FATAL] [INS-32700] The gold image creation failed. Check the install log /u01/app/oraInventory/logs/GridSetupActions2019-10-29_01-20-38PM for more details.
[oracle@ol7-19-rac1 GridSetupActions2019-10-29_01-20-38PM]$

[oracle@ol7-19-rac1 ~]$

RESEARCH:

Bug 29220079 - Error INS-32700 Creating a GI Gold Image (Doc ID 29220079.8)	
Versions confirmed as being affected: 19.3.0	
The fix for 29220079 is first included in: 
19.3.0.0.190416 (Apr 2019) Database Release Update (DB RU) and 
20.1.0

The fix should therefore already be included, but it does not seem like it is.

[oracle@ol7-19-rac1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
29851014;ACFS RELEASE UPDATE 19.4.0.0.0 (29851014)
29850993;OCW RELEASE UPDATE 19.4.0.0.0 (29850993)
29834717;Database Release Update : 19.4.0.0.190716 (29834717)
29401763;TOMCAT RELEASE UPDATE 19.0.0.0.0 (29401763)

OPatch succeeded.
[oracle@ol7-19-rac1 ~]$

You might have to create an SR :(

UPDATE: Thanks to https://lonedba.wordpress.com/

[oracle@ol7-19-rac1 GridSetupActions2019-10-29_03-06-03PM]$ grep "Permission denied" gridSetupActions2019-10-29_03-06-03PM.log
INFO:  [Oct 29, 2019 3:06:14 PM] find: ‘/u01/app/19.0.0/grid/log/procwatcher/prw.sh’: Permission denied
INFO:  [Oct 29, 2019 3:06:14 PM] find: ‘/u01/app/19.0.0/grid/log/procwatcher/PRW_SYS_ol7-19-rac1’: Permission denied
INFO:  [Oct 29, 2019 3:06:14 PM] find: ‘/u01/app/19.0.0/grid/log/procwatcher/prwinit.ora’: Permission denied
INFO:  [Oct 29, 2019 3:06:14 PM] find: ‘/u01/app/19.0.0/grid/crf/admin/run/crfmond’: Permission denied
INFO:  [Oct 29, 2019 3:06:14 PM] find: ‘/u01/app/19.0.0/grid/crf/admin/run/crflogd’: Permission denied
[oracle@ol7-19-rac1 GridSetupActions2019-10-29_03-06-03PM]$

[oracle@ol7-19-rac1 ~]$ echo $ORACLE_HOME; cd $ORACLE_HOME/log
/u01/app/19.0.0/grid

[oracle@ol7-19-rac1 log]$ ls -l
total 4
drwxr-x---.  4 oracle oinstall   57 Oct  1 23:57 diag
drwxr-xr-t. 20 root   oinstall 4096 Oct  1 23:55 ol7-19-rac1
drwxr--r--.  3 root   root       66 Oct 25 15:10 procwatcher

[root@ol7-19-rac1 log]# chmod 775 -R ol7-19-rac1/ procwatcher/
[root@ol7-19-rac1 log]# ls -l
total 4
drwxr-xr-x.  2 oracle oinstall    6 Oct  1 23:44 crs
drwxr-x---.  4 oracle oinstall   57 Oct  1 23:50 diag
drwxrwxr-x. 20 root   oinstall 4096 Oct  1 23:47 ol7-19-rac1
drwxrwxr-x.  3 root   root       66 Oct 25 15:08 procwatcher
[root@ol7-19-rac1 log]#

[oracle@ol7-19-rac1 ~]$ . oraenv <<< +ASM1
ORACLE_SID = [+ASM1] ? The Oracle base remains unchanged with value /u01/app/oracle

[oracle@ol7-19-rac1 ~]$ $ORACLE_HOME/gridSetup.sh -creategoldimage -destinationlocation /u01/app/19.0.0/grid5 -silent
Launching Oracle Grid Infrastructure Setup Wizard...

Successfully Setup Software.
Gold Image location: /u01/app/19.0.0/grid5/grid_home_2019-10-29_04-36-47PM.zip

[oracle@ol7-19-rac1 ~]$ ll /u01/app/19.0.0/grid5/*
-rw-r--r--. 1 oracle oinstall 4426495995 Oct 29 16:46 /u01/app/19.0.0/grid5/grid_home_2019-10-29_04-36-47PM.zip
[oracle@ol7-19-rac1 ~]$

Penn State Elevates Student Success with Oracle

Oracle Press Releases - Tue, 2019-10-29 07:00
Press Release
Penn State Elevates Student Success with Oracle
University selects Oracle Student Cloud to enhance the student experience

Redwood Shores, Calif.—Oct 29, 2019

The Pennsylvania State University, one of the world’s leading higher education and research institutions, has selected Oracle Student Financial Planning Cloud to support its “One Penn State 2025” strategic vision to transform education delivery, focusing on student engagement and support. The offering will be an integral component in the university’s efforts to help reduce the cost of education while enhancing student outcomes and success.

Student Financial Planning is a key pillar in Oracle Student Cloud, a complete suite of modules designed to support the entire student lifecycle, including traditional and non-traditional learning models, with powerful automation and optimization capabilities.

Consistently ranked among the top one percent of the world’s universities, Penn State includes 24 campuses, nearly 100,000 students and 31,000 faculty and staff, all focused on creating, sharing and applying knowledge to make a positive impact on communities across the world. Acknowledging the ever-increasing student debt and dropout crises U.S. college students face, Penn State will use Oracle Student Financial Planning Cloud to help empower students to make more informed academic and financial aid decisions to accomplish their academic, professional and personal life goals.

By providing increased transparency and better management of financial aid resources, Oracle Student Financial Planning Cloud and Oracle Student Cloud will streamline financial aid processes and deliver invaluable, data-backed insights into student behaviors and successes, thus freeing administrators to focus more on supporting the academic needs of its students.

“Penn State is pleased to select Oracle’s Student Financial Planning product as a first step in transitioning our student system to the cloud and to further the institution’s strategic goals,” said Michael J. Büsges, senior director, Enterprise Projects at Penn State. “Oracle has an established track record in meeting the needs of its Higher Education customers, and we look forward to working together on this important initiative.”  

Higher education institutions today are under intense pressure to enroll best-fit students, improve outcomes, and ensure student access – and be able to do more with less. At the same time, today’s students demand modern, consumer-like experiences and engagement with institutions. With $2.9 billion in financial aid (representing 15 million automated packages) already processed through Oracle Student Financial Planning Cloud, Oracle has proven solutions to help universities create a seamless student experience that leads to students’ academic success. With Oracle Student Financial Planning, universities are able to spend more time advising students on their financial aid choices and less time packaging their aid, which leads to improved student outcomes, elevated institutional standing and greater operational efficiency. Oracle Student Financial Planning also supports traditional, continued and online educational programs, as well as administrators’ compliance and integration needs.

“Leading institutions such as Penn State know that by simplifying and optimizing their operations they will be able to use resources efficiently and maximize funding sources,” said Vivian Wong, group vice president of Higher Education Development at Oracle. “Oracle’s commitment to supporting the higher education market includes our belief that no student should sacrifice their academic goals due to cost and we have created powerful cloud solutions to help universities tackle this challenge head on.”

Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
Kristin Reeves
Oracle
+1.925.787.6744
kris.reeves@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


User-editable Application Setting

Jeff Kemp - Tue, 2019-10-29 01:49

A nice addition to APEX release 18.1 is the Application Settings feature. This allows the developer to define one or more configuration values that are relevant to a particular application. In a recent project this feature came in useful.

I had built a simple questionnaire/calculator application for a client and they wanted a small “FAQ” box on the left-hand side of the page:

I could have built this as an ordinary HTML region, but the Admin users needed to be able to modify the content later, so the content needed to be stored somewhere. I didn’t feel the users’ requirement was mature enough to design another table to store the boilerplate (not yet, at least), so I thought I’d give the Application Settings feature a go.

An Application Setting is a single value that can be set in Component Settings, and retrieved and modified at runtime via the supplied PL/SQL API (APEX_APP_SETTING). The feature is most useful for “configuration”-type data relevant to the application’s user interface. In the past I would have created a special table to store this sort of thing – and in some cases I think I still would – but in some cases using Application Settings may result in a simpler design for your applications.
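
For reference, here is a minimal sketch of the two APEX_APP_SETTING calls this post relies on (the setting name matches the one created below; the replacement value is just an illustration):

declare
  l_value varchar2(4000);
begin
  -- read the current (global) value of the setting
  l_value := apex_app_setting.get_value('FAQ_BOILERPLATE');
  -- overwrite it; the new value is immediately visible to all sessions
  apex_app_setting.set_value('FAQ_BOILERPLATE', '<p>See the user guide first.</p>');
end;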

I went to Shared Components, Application Settings and created a new Setting called “FAQ_BOILERPLATE”. Each Application Setting can have the following attributes configured:

  • Name – although this can be almost anything, I suggest using a naming standard similar to how you name tables and columns, to reduce ambiguity if you need to refer to the setting in your PL/SQL.
  • Value – at first, you would set this to the initial value; if it is changed, it is updated here. Note that the setting can only have one value at any time, and the value is global for all sessions. The value is limited to 4,000 bytes.
  • Value Required – if needed you can make the setting mandatory. In my case, I left this set to “No”.
  • Valid Values – if needed you can specify a comma-delimited list of valid values that APEX will validate against. In my case, I left this blank.
  • On Upgrade Keep Value – if you deploy the application from Dev to Prod, set this to Yes so that if a user has changed the setting your deployment won’t clobber their changes. On the other hand, set this to No if you want the value reset to the default when the application is deployed. In my case, I set this to Yes.
  • Build Option – if needed you can associate the setting with a particular build option. If the build option is disabled, an exception will be raised at runtime if the application setting is accessed.

On the page where I wanted to show the content, I added the following:

  1. A Static Content region titled “FAQ”.
  2. A hidden item in the region named “P10_FAQ_BOILERPLATE”.
  3. A Before Header PL/SQL process.

The Text content for the static content region is:

<div class="boilerplate">
&P10_FAQ_BOILERPLATE!RAW.
</div>

Note that the raw value from the application setting is trusted as it may include some embedded HTML; you would need to ensure that only “safe” HTML is stored in the setting.
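
If you can’t fully trust what gets stored, one option (a sketch, not what I did in this app) is to filter the value through APEX_ESCAPE.HTML_WHITELIST in the Before Header process, which escapes everything except a whitelist of harmless formatting tags:

-- hypothetical variant of the Before Header process shown below
:P10_FAQ_BOILERPLATE := apex_escape.html_whitelist(
    apex_app_setting.get_value('FAQ_BOILERPLATE'));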

The Before Header PL/SQL process has this code:

:P10_FAQ_BOILERPLATE := apex_app_setting.get_value('FAQ_BOILERPLATE');

Side note: a simpler, alternative design (that I started with initially) was just a PL/SQL region titled “FAQ”, with the following code:

htp.p(apex_app_setting.get_value('FAQ_BOILERPLATE'));

I later rejected this design because I wanted to hide the region if the FAQ_BOILERPLATE setting was blank.

I put a Server-side Condition on the FAQ region when “Item is NOT NULL” referring to the “P10_FAQ_BOILERPLATE” item.

Editing an Application Setting

The Edit button is assigned the Authorization Scheme “Admin” so that admin users can edit the FAQ. It redirects to another very simple page with the following components:

  1. A Rich Text Editor item P50_FAQ_BOILERPLATE, along with Cancel and Save buttons.
  2. An After Header PL/SQL process “get value” (code below).
  3. An On Processing PL/SQL process “save value” when the Save button is clicked (code below).

After Header PL/SQL process “get value”:

:P50_FAQ_BOILERPLATE := apex_app_setting.get_value('FAQ_BOILERPLATE');

On Processing PL/SQL process “save value”:

apex_app_setting.set_value('FAQ_BOILERPLATE',:P50_FAQ_BOILERPLATE);

The more APEX-savvy of you may have noticed that this design means that if an Admin user clears out the setting (setting it to NULL), since it has the Server-side Condition on it, the FAQ region will disappear from the page (by design). This also includes the Edit button which would no longer be accessible. In the event this happens, I added another button labelled “Edit FAQ” to the Admin page so they can set it again later if they want.

This was a very simple feature that took less than an hour to build, and was suitable for the purpose. Later, if they find it becomes a bit unwieldy (e.g. if they add many more questions and answers, and need to standardise the layout and formatting) I might replace it with a more complex design – but for now this will do just fine.


So Far So Good with Force Logging

Bobby Durrett's DBA Blog - Mon, 2019-10-28 18:55

I mentioned in my previous two posts that I had tried to figure out if it would be safe to turn on force logging on a production database that does a bunch of batch processing on the weekend: post1, post2. We know that many of the tables are set to NOLOGGING and some of the inserts have the append hint. We put in force logging on Friday and the heavy weekend processing ran fine last weekend.
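
If you want to try the same thing, turning force logging on and checking it is a one-liner each; a minimal sketch (requires the ALTER DATABASE privilege, and it stays in effect until you turn it off):

-- enable force logging so NOLOGGING/APPEND writes still generate redo
alter database force logging;

-- verify the current setting (YES/NO)
select force_logging from v$database;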

I used an AWR report to check the top INSERT statements from the weekend and I only found one that was significantly slower. But the table it inserts into is set for LOGGING, it does not have an append hint, and the parallel degree is set to 1. So, it is a normal insert that was slower last weekend for some other reason. Here is the output of my sqlstatsumday.sql script for the slower insert:

Day        SQL_ID        PLAN_HASH_VALUE Executions Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
---------- ------------- --------------- ---------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
2019-09-22 6mcqczrk3k5wm       472069319        129         36734.0024     20656.8462    462.098677                  0                      0             38.8160385          666208.285         1139.86923             486.323077
2019-09-29 6mcqczrk3k5wm       472069319        130         44951.6935     27021.6031    573.245664                  0                      0             21.8764885           879019.29         1273.52672             522.083969
2019-10-06 6mcqczrk3k5wm       472069319        130         9624.33742     7530.07634    264.929008                  0                      0             1.26370992          241467.023         678.458015             443.427481
2019-10-13 6mcqczrk3k5wm       472069319        130         55773.0864      41109.542    472.788031                  0                      0             17.5326031          1232828.64         932.083969             289.183206
2019-10-20 6mcqczrk3k5wm       472069319        130         89684.8089     59261.2977    621.276122                  0                      0             33.7963893          1803517.19         1242.61069             433.473282
2019-10-27 6mcqczrk3k5wm       472069319        130         197062.591     144222.595    561.707321                  0                      0             362.101267          10636602.9         1228.91603             629.839695

It averaged 197062 milliseconds last weekend but 89684 the previous one. The target table has always been set to LOGGING so FORCE LOGGING would not change anything with it.

One of the three INSERT statements that I expected to be slowed by FORCE LOGGING was faster this weekend than without FORCE LOGGING last weekend:

Day        SQL_ID        PLAN_HASH_VALUE Executions Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
---------- ------------- --------------- ---------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
2019-09-22 0u0drxbt5qtqk       382840242          1         2610257.66         391635    926539.984                  0                      0              13718.453             5483472           745816.5                3689449
2019-09-29 0u0drxbt5qtqk       382840242          1         17127212.3        1507065    12885171.7                  0                      0             14888.4595            18070434          6793555.5             15028884.5
2019-10-06 0u0drxbt5qtqk       382840242          1         3531931.07         420150    2355139.38                  0                      0             12045.0115             5004273            1692754                5101998
2019-10-13 0u0drxbt5qtqk       382840242          1         1693415.59         180730    1250325.41                  0                      0               819.7725           2242638.5           737704.5                2142812
2019-10-20 0u0drxbt5qtqk       382840242          1         5672230.17         536115    3759795.33                  0                      0             10072.9125             6149731            2332038              2806037.5
2019-10-27 0u0drxbt5qtqk       382840242          1         2421533.59         272585    1748338.89                  0                      0               9390.821           3311219.5           958592.5              2794748.5

It ran 2421533 milliseconds this weekend and 5672230 the prior one. So clearly FORCE LOGGING did not have much effect on its overall run time.

It went so well this weekend that we decided to leave FORCE LOGGING in for now to see if it slows down the mid-week jobs and the web-based front end. I was confident on Friday, but I am even more confident now that NOLOGGING writes have minimal performance benefits on this system. But we will let it bake in for a while. Really, we might as well leave it in for good if only for the recovery benefits. Then when we configure GGS for the zero downtime upgrade it will already have been there for some time.

The lesson for me from this experience and the message of my last three posts is that NOLOGGING writes may have fewer benefits than you think, or your system may be doing fewer NOLOGGING writes than you think. That was true for me for this one database. It may be true for other systems that I expect to have a lot of NOLOGGING writes. Maybe someone reading this will find that they can safely use FORCE LOGGING on a database that they think does a lot of NOLOGGING writes, but which really does not need NOLOGGING for good performance.
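
A rough way to gauge your own exposure is to look for tables created with NOLOGGING; a sketch (this ignores partition- and index-level attributes, and APPEND-hinted direct-path inserts matter too):

-- tables whose table-level attribute disables redo for direct-path writes
select owner, table_name
  from dba_tables
 where logging = 'NO'
 order by owner, table_name;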

Bobby

Categories: DBA Blogs

Basic Replication -- 10 : ON PREBUILT TABLE

Hemant K Chitale - Mon, 2019-10-28 09:05
In my previous blog post, I've shown a Materialized View that is built as an empty MV and subsequently populated by a Refresh call.

You can also define a Materialized View over an *existing*  (pre-populated) Table.

Let's say you have a Source Table and have built a Replica of it in another Schema or Database.  Building the Replica may have taken an hour or even a few hours.  You now know that the Source Table will have some changes every day and want the Replica to be updated as well.  Instead of executing, say, a TRUNCATE and INSERT into the Replica every day, you define a Fast Refresh Materialized View over it and let Oracle identify all the changes (which, on a daily basis, could be a small percentage of the total size of the Source/Replica) and update the Replica using a Refresh call.


Here's a quick demo.

SQL> select count(*) from my_large_source;

COUNT(*)
----------
72447

SQL> grant select on my_large_source to hr;

Grant succeeded.

SQL> connect hr/HR@orclpdb1
Connected.
SQL> alter session enable parallel dml;

Session altered.

SQL> create table my_large_replica
2 as select * from hemant.my_large_source
3 where 1=2;

Table created.

SQL> insert /*+ PARALLEL (8) */
2 into my_large_replica
3 select * from hemant.my_large_source;

72447 rows created.

SQL>


So, now, HR has a Replica of the Source Table in the HEMANT schema.  Without any subsequent updates to the Source Table, I create the Materialized View definition, with the "ON PREBUILT TABLE" clause.

SQL> connect hemant/hemant@orclpdb1
Connected.
SQL> create materialized view log on my_large_source;

Materialized view log created.

SQL> grant select, delete on mlog$_my_large_source to hr;

Grant succeeded.

SQL> connect hr/HR@orclpdb1
Connected.
SQL>
SQL> create materialized view my_large_replica
2 on prebuilt table
3 refresh fast
4 as select * from hemant.my_large_source;

Materialized view created.

SQL> select count(*) from hemant.my_large_source;

COUNT(*)
----------
72447

SQL> select count(*) from my_large_replica;

COUNT(*)
----------
72447

SQL>


I am now ready to add data and Refresh the MV.

SQL> connect hemant/hemant@orclpdb1
Connected.
SQL> desc my_large_source
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID_COL                                    NOT NULL NUMBER
 PRODUCT_NAME                                       VARCHAR2(128)
 FACTORY                                            VARCHAR2(128)

SQL> insert into my_large_source
2 values (74000,'Revolutionary Pin','Outer Space');

1 row created.

SQL> commit;

Commit complete.

SQL> select count(*) from mlog$_my_large_source;

COUNT(*)
----------
1

SQL>
SQL> connect hr/HR@orclpdb1
Connected.
SQL> select count(*) from hemant.my_large_source;

COUNT(*)
----------
72448

SQL> select count(*) from my_large_replica;

COUNT(*)
----------
72447

SQL>
SQL> execute dbms_mview.refresh('MY_LARGE_REPLICA','F');

PL/SQL procedure successfully completed.

SQL> select count(*) from my_large_replica;

COUNT(*)
----------
72448

SQL>
SQL> select id_col, product_name
2 from my_large_replica
3 where factory = 'Outer Space'
4 /

ID_COL
----------
PRODUCT_NAME
--------------------------------------------------------------------------------
74000
Revolutionary Pin


SQL>
SQL> select count(*) from hemant.mlog$_my_large_source;

COUNT(*)
----------
0

SQL>


Instead of rebuilding / repopulating the Replica Table with all 72,448 rows, I used the MV definition and the MV Log on the Source Table to copy over that 1 new row.

The above demonstration is against 19c.

Here are two older posts, one in March 2009 and the other in January 2012 on an earlier release of Oracle.


Categories: DBA Blogs

GRANT INHERIT PRIVILEGES

Flavio Casetta - Mon, 2019-10-28 04:58
Categories: DBA Blogs

Native Oracle DB JSON functionality as alternative for using cursor() in AOP (and APEX_JSON)

Dimitri Gielis - Sun, 2019-10-27 14:46
When using external (WEB/REST) services, you often communicate in JSON. So it's important to be able to generate JSON in the format that is expected by the external service.

In the case of APEX Office Print (AOP), we made it super simple to communicate with the AOP server from the database through our PL/SQL API. You just have to enter a SQL statement and the AOP PL/SQL API, which uses APEX_JSON behind the scenes, generates the necessary JSON that the AOP Server understands.

Here's an example of the Order data in JSON: a customer with multiple orders and multiple order lines:

As we are living in the Oracle database, we have to generate this JSON. The data is coming from different tables and is hierarchical. In SQL you can create hierarchical data by using the cursor() syntax.

Here's an example of the SQL statement that you would typically use in AOP (note the nested cursor() calls):

select
  'file1' as "filename", 
  cursor(
    select
      c.cust_first_name as "cust_first_name",
      c.cust_last_name  as "cust_last_name",
      c.cust_city       as "cust_city",
      cursor(select o.order_total      as "order_total", 
                    'Order ' || rownum as "order_name",
                cursor(select p.product_name as "product_name", 
                              i.quantity     as "quantity",
                              i.unit_price   as "unit_price"
                         from demo_order_items i, demo_product_info p
                        where o.order_id = i.order_id
                          and i.product_id = p.product_id
                      ) "order_lines"
               from demo_orders o
              where c.customer_id = o.customer_id
            ) "orders"
    from demo_customers c
    where customer_id = 1
  ) "data"
from dual

From AOP 19.3 onwards, the AOP PL/SQL API not only supports this cursor() syntax but also the native JSON functionality of the Oracle Database (version 12c and upwards).

The query above can also be written as the following using JSON support in the Oracle Database:

select 
  json_arrayagg( 
    json_object( 
      'filename' value 'file1', 
      'data'     value (
          select 
            json_arrayagg(
              json_object( 
                'cust_first_name' value c.cust_first_name, 
                'cust_last_name'  value c.cust_last_name,
                'cust_city'       value c.cust_city, 
                'orders'          value (
                    select 
                      json_arrayagg(
                        json_object(                               
                          'order_total' value o.order_total, 
                          'order_name'  value 'Order ' || rownum,
                          'order_lines' value (
                              select 
                                json_arrayagg(
                                  json_object(                               
                                    'product_name' value p.product_name, 
                                    'quantity'     value i.quantity,
                                    'unit_price'   value i.unit_price
                                  )
                                returning clob)      
                                from demo_order_items i, demo_product_info p
                               where o.order_id = i.order_id
                                 and i.product_id = p.product_id
                            )
                        )
                      returning clob)      
                      from demo_orders o
                    where o.customer_id = c.customer_id
                  )
              )
            returning clob)  
            from demo_customers c
            where c.customer_id = 1
          )
    )
  returning clob) as aop_json
  from dual 

You have to get used to this syntax and have to think a bit differently. Unlike the cursor syntax where you define the column first and give it an alias, using the JSON functions, you define the JSON object and attributes first and then map it to the correct column.

I find the cursor syntax really elegant; especially in combination with APEX_JSON, it's a really cool solution to generate the JSON you need. But I guess it's a personal choice what you prefer, and I must admit, the more I use the native JSON way, the more I like it. If performance is important you most likely want to use native database functionality as much as possible, but I go into more detail further in this post. Lino also found an issue with the cursor syntax in Oracle Database 19c, so if you are on that database release you want to look at the support document.

Before I move on with my test case, if you need more info on JSON in the database: Carsten did a nice blog post about parsing JSON in APEX, and although it's about parsing JSON and this blog post is more about generating JSON, the conclusions are similar. You can read more about APEX_JSON and the native JSON database functions in Tim's write-up on Oracle-Base.

As I was interested in the performance of both implementations, I ran a few test cases. There are different ways to test performance, e.g. using dbms_profiler, Method R Workbench, tracing, timing the results, ... Below I use Tom Kyte's script to compare two PL/SQL implementations. The interesting thing about the script is that it compares not only timings but also latches, which give you an idea of how hard the database has to work. You can download it from AskTom under the resources section:


Here's my test script:

declare
  l_sql             clob;
  l_return          blob;
  l_output_filename varchar2(100);  
  l_runs            number(5) := 1;
begin
  runStats_pkg.rs_start;
  
  -- sql example with cursor
  for i in 1..l_runs
  loop
      l_output_filename := 'cursor';
      l_sql := q'[
                select
                'file1' as "filename",
                cursor
                (select 
                    c.cust_first_name as "cust_first_name",
                    c.cust_last_name  as "cust_last_name",
                    c.cust_city       as "cust_city"
                   from demo_customers c
                  where c.customer_id = 1 
                ) as "data"
                from dual   
               ]';
      l_return := aop_api_pkg.plsql_call_to_aop (
                    p_data_type       => aop_api_pkg.c_source_type_sql,
                    p_data_source     => l_sql,
                    p_template_type   => aop_api_pkg.c_source_type_aop_template,
                    p_output_type     => 'docx',
                    p_output_filename => l_output_filename,
                    p_aop_remote_debug=> aop_api_pkg.c_debug_local); 
  end loop;  
  
  runStats_pkg.rs_middle;  
  
  -- sql example with native JSON database functionality
  for i in 1..l_runs
  loop
      l_output_filename := 'native_json';
      l_sql := q'[
                select 
                  json_arrayagg( 
                    json_object( 
                      'filename' value 'file1', 
                      'data'     value (select 
                                          json_arrayagg(
                                            json_object( 
                                              'cust_first_name' value c.cust_first_name, 
                                              'cust_last_name'  value c.cust_last_name,
                                              'cust_city'       value c.cust_city 
                                            )
                                          )  
                                          from demo_customers c
                                         where c.customer_id = 1
                                       )  
                    )
                  ) as aop_json
                  from dual 
               ]';
      l_return := aop_api_pkg.plsql_call_to_aop (
                    p_data_type       => aop_api_pkg.c_source_type_sql,
                    p_data_source     => l_sql,
                    p_template_type   => aop_api_pkg.c_source_type_aop_template,
                    p_output_type     => 'docx',
                    p_output_filename => l_output_filename,
                    p_aop_remote_debug=> aop_api_pkg.c_debug_local);                     
  end loop;    

  runStats_pkg.rs_stop;   
end;
/

I ran the script (with different l_runs settings) a few times on my 18c database, and with the above use case on my system the native JSON implementation consistently outperformed the cursor (and APEX_JSON) implementation.

Run1 ran in 3 cpu hsecs
Run2 ran in 2 cpu hsecs
run 1 ran in 150% of the time

Name                                  Run1        Run2        Diff
STAT...HSC Heap Segment Block           40          41           1
STAT...Heap Segment Array Inse          40          41           1
STAT...Elapsed Time                      4           3          -1
STAT...CPU used by this sessio           4           3          -1
STAT...redo entries                     40          41           1
STAT...non-idle wait time                0           1           1
LATCH.simulator hash latch              27          26          -1
STAT...non-idle wait count              13          12          -1
STAT...consistent gets examina          41          43           2
LATCH.redo allocation                    1           3           2
STAT...active txn count during          21          23           2
STAT...cleanout - number of kt          21          23           2
LATCH.transaction allocation             1           3           2
LATCH.In memory undo latch               1           3           2
LATCH.JS Sh mem access                   1           3           2
STAT...consistent gets examina          41          43           2
LATCH.keiut hash table modific           3           0          -3
STAT...calls to kcmgcs                  64          69           5
STAT...dirty buffers inspected           6           0          -6
STAT...workarea executions - o           2          12          10
STAT...free buffer requested            71          52         -19
STAT...lob writes unaligned             80          60         -20
STAT...lob writes                       80          60         -20
STAT...sorts (rows)                      0          20          20
STAT...execute count                    91          71         -20
STAT...sorts (memory)                    0          20          20
LATCH.active service list                0          25          25
STAT...consistent gets                 183         156         -27
STAT...consistent gets from ca         183         156         -27
STAT...consistent gets pin (fa         142         113         -29
STAT...consistent gets pin             142         113         -29
STAT...lob reads                       160         130         -30
LATCH.JS queue state obj latch           0          42          42
LATCH.object queue header oper         151         103         -48
STAT...workarea memory allocat          66          -6         -72
STAT...db block changes                431         358         -73
STAT...consistent changes              390         315         -75
LATCH.parameter table manageme          80           0         -80
STAT...undo change vector size       8,748       8,832          84
LATCH.enqueue hash chains                1          88          87
STAT...parse count (total)             100          10         -90
STAT...session cursor cache hi         171          71        -100
STAT...opened cursors cumulati         171          71        -100
STAT...free buffer inspected           126           0        -126
STAT...calls to get snapshot s         470         330        -140
STAT...db block gets from cach         958         744        -214
STAT...hot buffers moved to he         220           0        -220
STAT...redo size                    12,016      12,248         232
STAT...db block gets                 1,039         806        -233
STAT...db block gets from cach       1,029         796        -233
STAT...session logical reads         1,222         962        -260
STAT...file io wait time             5,865       6,279         414
STAT...recursive calls                 561         131        -430
LATCH.cache buffers chains           3,224       2,521        -703
STAT...session uga memory          196,456           0    -196,456
STAT...session pga memory        1,572,864           0  -1,572,864
STAT...logical read bytes from   9,928,704   7,798,784  -2,129,920

Run1 latches total versus runs -- difference and pct
        Run1        Run2        Diff       Pct
       3,853       3,180        -673    121.16%

There are many different iterations of this test, using bind variables, etc. It seems "logical" that a native DB implementation is better performance-wise than a combination of PL/SQL (APEX_JSON) and SQL (cursor). But I always recommend you just run the test in your own environment. What is true today, might be different tomorrow and a lot comes into play, so if there's one thing I learned from Tom Kyte, it's don't take things for granted, but test in your unique situation.

So, in real life using AOP, will you see a big difference? It depends on the complexity of your SQL statement and data, how many times you call a report, etc., but my guess is that in most cases there's probably not much of a difference in user experience.

A simple test would be to do "set timing on" and compare the implementations:
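
Something along these lines (a sketch: the script names are placeholders for the two queries shown earlier, and your timings will differ):

set timing on
-- SQL*Plus now prints an "Elapsed:" line after each statement;
-- run the cursor() version and the native JSON version of the
-- query and compare the two elapsed times
@aop_cursor_version.sql
@aop_native_json_version.sql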


Or if you are using AOP on an Oracle APEX page, you can run your APEX page in Debug mode and you will see exactly how long the generation of the JSON took for the data part:


Happy JSON'ing :)

Categories: Development

Basic Replication -- 9 : BUILD DEFERRED

Hemant K Chitale - Sun, 2019-10-27 10:41
A Materialized View can be created with all the target rows pre-inserted (and subsequently refreshed for changes).  This is the default behaviour.

However, it is possible to define a Materialized View without actually populating it.

You might want to take such a course of action for scenarios like :

1.  Building a number of Materialized Views along with a code migration, but not wanting to spend the time that would be required to actually populate the MVs, deferring the population to a subsequent maintenance window, after which the code and data will be referenced by the application/users

2.  Building a number of MVs in a Tablespace that is initially small but will be enlarged in the maintenance window to handle the millions of rows that will be inserted

3.  Building an MV definition without actually having all the "clean" Source Table(s) rows currently available, deferring the cleansing of data to a later date and then populating the MV after the cleansing

The BUILD DEFERRED clause comes in handy here.


Let's say that we have a NEW_SOURCE_TABLE (with many rows and/or with rows that are yet to be cleansed) and want to build an "empty" MV on it (or that this MV is one of a number of MVs being built together simply for migration of dependent code, without the data).

SQL> desc new_source_table
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                        NOT NULL NUMBER
 DATA_ELEMENT_1                                     VARCHAR2(15)
 DATA_ELEMENT_2                                     VARCHAR2(15)
 DATE_COL                                           DATE

SQL>
SQL> create materialized view log on new_source_table;
create materialized view log on new_source_table
*
ERROR at line 1:
ORA-12014: table 'NEW_SOURCE_TABLE' does not contain a primary key constraint


SQL> create materialized view log on new_source_table with rowid;

Materialized view log created.

SQL>
SQL> create materialized view new_mv
2 build deferred
3 refresh with rowid
4 as select id as id_number,
5 data_element_1 as data_key,
6 data_element_2 as data_val,
7 date_col as data_date
8 from new_source_table
9 /

Materialized view created.

SQL>


Notice that my Source Table currently does not have a Primary Key.  The MV Log can be created with the "WITH ROWID" clause in the absence of the Primary Key.
The Materialized View is also built with the ROWID as the Refresh cannot use a Primary Key.
Of course, you may well have a Source Table with a Primary Key.  In that case, you can continue to use the default of the Primary Key instead of the ROWID.

Once the Source Table is properly populated / cleansed and/or the tablespace containing the MV is large enough, the MV is first refreshed with a COMPLETE Refresh and subsequently with FAST Refreshes.

SQL> select count(*) from new_source_table;

COUNT(*)
----------
106

SQL> execute dbms_mview.refresh('NEW_MV','C',atomic_refresh=>FALSE);

PL/SQL procedure successfully completed.

SQL>


Subsequently, when one or more rows are inserted/updated in the Source Table, the next Refresh is a Fast Refresh.

SQL> execute dbms_mview.refresh('NEW_MV','F');

PL/SQL procedure successfully completed.

SQL>
SQL> select mview_name, refresh_mode, refresh_method, last_refresh_type
2 from user_mviews
3 where mview_name = 'NEW_MV'
4 /

MVIEW_NAME         REFRESH_M REFRESH_ LAST_REF
------------------ --------- -------- --------
NEW_MV             DEMAND    FORCE    FAST

SQL>


Thus, we started off with an empty MV and later used Refreshes (COMPLETE and FAST) to populate it.


Categories: DBA Blogs

PostgreSQL check_function_bodies, what is it good for?

Yann Neuhaus - Sun, 2019-10-27 03:35

One of the probably lesser known PostgreSQL parameters is check_function_bodies. If you know Oracle, then you have surely faced “invalid objects” a lot. In PostgreSQL, by default, there is nothing like an invalid object. That implies that you cannot create a function or procedure which references an object that does not yet exist.

Lets assume you want to create a function like this, but the table “t1” does not exist:

postgres=# create or replace function f1 () returns setof t1 as
$$
select * from t1;
$$ language 'sql';
ERROR:  type "t1" does not exist

PostgreSQL will not create the function as a dependent object does not exist. Once the table is there the function will be created:

postgres=# create table t1 ( a int );
CREATE TABLE
postgres=# insert into t1 values(1);
INSERT 0 1
postgres=# create or replace function f1 () returns setof t1 as
$$
select * from t1;
$$ language 'sql';
CREATE FUNCTION
postgres=# select * from f1();
a
---
1

The issue with that is that you need to follow the order in which functions get created. Especially when you need to load functions for other users, that can easily become tricky and time-consuming. This is where check_function_bodies helps:

postgres=# set check_function_bodies = false;
SET
postgres=# create or replace function f2 () returns setof t1 as
$$
select * from t2;
$$ language 'sql';
CREATE FUNCTION

The function was created although t2 did not exist. Executing the function right now of course will generate an error:

postgres=# select * from f2();
ERROR:  relation "t2" does not exist
LINE 2: select * from t2;
^
QUERY:
select * from t2;

CONTEXT:  SQL function "f2" during startup

Once the table is there all is fine:

postgres=# create table t2 ( a int );
CREATE TABLE
postgres=# insert into t2 values (2);
INSERT 0 1
postgres=# select * from f2();
a
---
2

This is very helpful when loading objects provided by an external vendor. pg_dump does this by default.
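
You can see it at the top of any dump file: pg_dump writes a preamble of SET statements before the object definitions, and check_function_bodies is among them (abridged output, assuming a database named postgres):

$ pg_dump -s postgres | head -20
--
-- PostgreSQL database dump
--

SET statement_timeout = 0;
SET lock_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SET check_function_bodies = false;
...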

The article PostgreSQL check_function_bodies, what is it good for? appeared first on Blog dbi services.

Oracle database software client 18c 32 bit win 7

Tom Kyte - Sat, 2019-10-26 15:46
I need to install the Oracle Database Client 18c on Windows 7 32-bit. I visited the Oracle website; they only offer the Instant Client without a setup file. I don't know how to configure it on Windows 7, especially the ODBC technology. I want to open the ODBC Windows 32-bit an...
Categories: DBA Blogs

ORA-00600: internal error code, arguments: [kcratr_nab_less_than_odr], [1], [1384], [1270], [2885], [], [], [], [], [], [], []

Tom Kyte - Sat, 2019-10-26 15:46
------------------------------------- Oracle Database 11.2.0.1.0 Archive Log Mode RMAN Backup is performed daily, with backup control file and backup archive log file 1 time to disk. ------------------------------------ Hi Ask Tom t...
Categories: DBA Blogs

Last run date of a function on a menu

Tom Kyte - Sat, 2019-10-26 15:46
Is it possible extract the last run time of a function run on a menu from the Oracle database?
Categories: DBA Blogs

SQL to return 12 months of this year

Tom Kyte - Sat, 2019-10-26 15:46
select (to_char(add_months (sysdate,level-10),'Month')) as Month ,to_char(TRUNC(add_months(sysdate,level-10),'month'),'mm/dd/yyyy') as firstdayofthemonth ,to_char(last_day(add_months(sysdate,level-10)),'mm/dd/yyyy') as lastdayofmonth from dual ...
Categories: DBA Blogs

Oracle - validate date format (yyyy-mm-ddThh24:mi:ssZ) in XML against XSD

Tom Kyte - Sat, 2019-10-26 15:46
Oracle version: The result of this query select * from v$version; is: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production PL/SQL Release 11.2.0.4.0 - Production "CORE 11.2.0.4.0 Production" TNS for Li...
Categories: DBA Blogs

REGR_R2 returns a value of 1 when y column contains a constant value (e.g., all rows have value of 0.0038).

Tom Kyte - Sat, 2019-10-26 15:46
I was searching a large emissions database for pairs of columns with correlated values. My first effort identified a large number of sets with an R2 value of 1 based on blindly applying the REGR_R2 function to discrete pairs of numeric data columns....
Categories: DBA Blogs

AEM Forms – No SSLMutualAuthProvider available

Yann Neuhaus - Sat, 2019-10-26 15:00

In the process of setting up the AEM Workbench to use 2-way-SSL, you will at some point need to use a Hybrid Domain and a specific Authentication Provider. Depending on the version of AEM you are using, this Authentication Provider might not be present, in which case you will never be able to complete the setup. In this blog, I will describe what we did in our case to solve this problem.

The first time we tried to set that up (WebLogic Server 12.2, AEM 6.4.0), it just wasn’t working. We opened a case with Adobe Support and, after quite some time, found out that the documentation was incomplete (#CQDOC-13273): there were missing steps and missing configuration inside AEM needed for 2-way-SSL to work. In short, everything said that 2-way-SSL was possible, but pieces were missing inside AEM to make it actually work. After discussion and investigation with the Adobe Support Engineers (#NPR-26490), they provided us with the missing piece: adobe-usermanager-ssl-dsc.jar.

When you install AEM Forms, it automatically deploys a bunch of DSCs (jar files) that provide the features of AEM Forms. A few examples:

  • adobe-pdfservices-dsc.jar
  • adobe-usermanager-dsc.jar
  • adobe-jobmanager-dsc.jar
  • adobe-scheduler-weblogic-dsc.jar

Our AEM Forms version at the time (mid-2018, AEM 6.4.0) was missing one of these DSCs, and that was the root cause of our issue. So what can you do to fix it? You just have to deploy the missing DSC, and since we are already working in the AEM Workbench to set up 2-way-SSL, that’s perfect. While the Workbench is still able to use 1-way-SSL (don’t switch your Application Server to 2-way-SSL yet, or revert it to 1-way-SSL):

  • Download or request the file “adobe-usermanager-ssl-dsc.jar” for your AEM version from Adobe Support
  • Open the AEM Workbench (run the workbench.exe file)
  • Click on “File > Login”
  • Set “Log on to” to: <AEM_HOST> – SimpleAuth (or whatever the name of your SimpleAuth is)
  • Set the Username to: administrator (or whatever other account you have)
  • Set the Password for this account
  • Click on “Login”
  • Click on “Window > Show View > Components”
  • The Components window should open (if it wasn’t already) somewhere on the screen (most probably on the left side)
  • Inside the Components window, right click on the “Components” folder and select “Install Component…”
  • Find the file “adobe-usermanager-ssl-dsc.jar” that was downloaded earlier, select it and click on “Open”
  • Right click on the “Components” folder and select “Refresh”
  • Expand the “Components” folder (if not already done) and look for the component named “SSLAuthProvider”
  • If this component isn’t started yet (there is a red square on the package), then start it using the following steps:
    • Right click on “SSLAuthProvider”
    • Select “Start Component”

Note: If the “SSLAuthProvider” component already exists, you will see an error. This is fine; the component just needs to be there and to be started/running. If that’s the case, all is good.

[Screenshots: Workbench – Open components / Refresh components / Start component]

Once the SSLAuthProvider DSC has been installed and is running, you should be able to see the SSLMutualAuthProvider in the list of custom providers when creating the Hybrid Domain in the AdminUI. Adobe was supposed to fix this in subsequent releases, but I haven’t yet had the opportunity to test an installation of AEM 6.5 from scratch. If you have this information, don’t hesitate to share!

The post AEM Forms – No SSLMutualAuthProvider available appeared first on Blog dbi services.

AEM Forms – “2-way-SSL” Setup and Workbench configuration

Yann Neuhaus - Sat, 2019-10-26 14:45

For almost two years now, I have been working with AEM (Adobe Experience Manager) Forms. The road taken by this project was full of problems because of security constraints that AEM has (or had) big trouble dealing with. In this blog, I will talk about one security aspect that brings some trouble: how to set up and use “2-way-SSL” (I will describe below why I put that in quotes) for the AEM Workbench.

I initially used AEM Forms 6.4.0 (20180228) with its associated Workbench version. I will assume that AEM Forms is already installed and working properly; in my case, it ran on a WebLogic Server (12.2) that I configured for HTTPS. So once you have that, what do you need to do to configure and use the AEM Workbench with “2-way-SSL”? First, let’s ensure that the AEM Workbench is working properly, then start with the setup.

Open the AEM Workbench and configure a new “Server”:

  • Open the AEM Workbench (run the workbench.exe file)
  • Click on “File > Login”
  • Click on “Configure...”
  • Click on the “+” sign to add a new Server
    • Set the Server Title to: <AEM_HOST> – SimpleAuth
    • Set the Hostname to: <AEM_HOST>
    • Set the Protocol to: Simple Object Access Protocol (SOAP/HTTPs)
    • Set the Server Port Number to: <AEM_PORT>
    • Click on “OK”
  • Click on “OK”
  • Set “Log on to” to the newly created Server (“<AEM_HOST> – SimpleAuth”)
  • Set the Username to: administrator (or whatever other account you have)
  • Set the Password for this account
  • Click on “Login”

[Screenshot: Workbench login – 1-way-SSL]

If everything was done properly, the login should work. The next step is to configure AEM for “2-way-SSL” communications. As mentioned at the beginning of this blog, I put that in quotes because it is 2-way-SSL, but one security layer is bypassed in the process. With the AEM Workbench in 1-way-SSL, you need to enter a username and a credential. Adding 2-way-SSL would normally just add another layer of security, where the server and client exchange their certificates and trust each other, but the user’s authentication should still be needed!

In the case of the AEM Workbench, the “2-way-SSL” setup actually bypasses the user’s authentication completely, so I do not really consider it a real 2-way-SSL setup… It might even be considered a security issue (a shame for a feature that is supposed to increase security) because, as you will see below, as soon as you have the Client SSL Certificate (and its password, obviously), you will be able to access the AEM Workbench. So protect this certificate with great care.

To configure AEM, you will then need to create a Hybrid Domain:

  • Open the AEM AdminUI (https://<AEM_HOST>:<AEM_PORT>/adminui)
  • Login with the administrator account (or whatever other account you have)
  • Navigate to: Settings > User Management > Domain Management
  • Click on “New Hybrid Domain”
    • Set the ID to: SSLMutualAuthProvider
    • Set the Name to: SSLMutualAuthProvider
    • Check the “Enable Account Locking” checkbox
    • Uncheck the “Enable Just In Time Provisioning” checkbox
    • Click on “Add Authentication”
      • Set the “Authentication Provider” to: Custom
      • Check the “SSLMutualAuthProvider” checkbox
      • Click on “OK”
    • Click on “OK”

Note: If “SSLMutualAuthProvider” isn’t available on the Authentication page, then please check this blog.

[Screenshots: New Hybrid Domain creation – steps 1 to 3]

Then you will need to create a user. In this example I will use a generic account, but it is possible to have one account per developer instead, in which case each user must have their own SSL Certificate. The user’s Canonical Name and ID must exactly match the CN used to generate the SSL Certificate that the Client will use. So if you generated an SSL Certificate for the Client with “/C=CH/ST=Jura/L=Delemont/O=dbi services/OU=IT/CN=aem-dev”, then the Canonical Name and ID to be used for the user in AEM must be “aem-dev”:

  • Navigate to: Settings > User Management > Users and Groups
  • Click on “New User”
  • On the New User (Step 1 of 3) screen:
    • Uncheck the “System Generated” checkbox
    • Set the Canonical Name to: <USER_CN>
    • Set the First Name to: 2-way-SSL
    • Set the Last Name to: User
    • Set the Domain to: SSLMutualAuthProvider
    • Set the User Id to: <USER_CN>
    • Click on “Next”
  • On the New User: 2-way-SSL (Step 2 of 3) screen:
    • Click on “Next”
  • On the New User: 2-way-SSL (Step 3 of 3) screen:
    • Click on “Find Roles”
      • Check the checkbox for the Role Name: Application Administrator (or any other valid role that you want this user to be able to use)
      • Click on “OK”
  • Click on “Finish”

[Screenshots: New User creation – steps 1 to 3]

At this point, you can configure your Application Server to handle the 2-way-SSL communications. In WebLogic Server, this is done by setting the “Two Way Client Cert Behavior” to “Client Certs Requested and Enforced” in the SSL subtab of the Managed Server(s) hosting the AEM Forms applications.

Finally, the last step is to get back to the AEM Workbench and test the 2-way-SSL communications. If you try to use the SimpleAuth Server that we defined above again, it should fail, because the Application Server now requires the Client SSL Certificate, which isn’t provided in that case. So let’s create a new “Server”:

  • Click on “File > Login”
  • Click on “Configure...”
  • Click on the “+” sign to add a new Server
    • Set the Server Title to: <AEM_HOST> – MutualAuth
    • Set the Hostname to: <AEM_HOST>
    • Set the Protocol to: Simple Object Access Protocol (SOAP/HTTPs) Mutual Auth
    • Set the Server Port Number to: <AEM_PORT>
    • Click on “OK”
  • Click on “OK”
  • Set “Log on to” to the newly created Server (“<AEM_HOST> – MutualAuth”)
  • Set the Key Store to: file:C:\Users\Morgan\Documents\AEM_Workbench\aem-dev.jks (Adapt to wherever you put the keystore)
  • Set the Key Store Password to: <KEYSTORE_PWD>
  • Set the Trust Store to: file:C:\Users\Morgan\Documents\AEM_Workbench\trust.jks (Adapt to wherever you put the truststore)
  • Set the Trust Store Password to: <TRUSTSTORE_PWD>
  • Click on “Login”

[Screenshot: Workbench login – 2-way-SSL]

In the above login screen, the KeyStore contains the SSL Certificate that was created for the Client, and the TrustStore is used to validate/trust the SSL Certificate of the AEM Server (it can be the cacerts from the AEM Workbench, for example). If you are using a self-signed SSL Certificate, don’t forget to add the trust chain to the TrustStore.

The post AEM Forms – “2-way-SSL” Setup and Workbench configuration appeared first on Blog dbi services.

match_recognize()

Jonathan Lewis - Fri, 2019-10-25 12:47

A couple of days ago I posted a note with some code (from Stew Ashton) that derived the clustering_factor you would get for an index when you had set a value for the table_cached_blocks preference but had not yet created the index. In the note I said that I had produced a simple and elegant (though massively CPU-intensive) query using match_recognize() that seemed to produce the right answer, only to be informed by Stew that my interpretation of how Oracle calculated the clustering_factor was wrong and that the query was irrelevant.  (The fact that it happened to produce the right answer in all my tests was an accidental side effect of the way I had been generating test data. With Stew’s explanation of what Oracle was doing it was easy to construct a data set that proved my query was doing the wrong thing.)

The first comment I got on the posting was a request to publish my match_recognize() query – even though it was irrelevant to the problem in hand – simply because it might be a useful lesson in what can be done with the feature; so here’s some code that doesn’t do anything useful but does demonstrate a couple of points about match_recognize(). I have to stress, though, that I’m not an expert on match_recognize() so there may be a more efficient way of using it to achieve the same result. There is certainly a simpler and more efficient way to get the same result using some fairly straightforward PL/SQL code.

Requirement

I had assumed that if you set the table_cached_blocks preference to N then Oracle would keep track of the previous N rowids as it walked an index to gather stats and only increment the “clustering factor counter” if it failed to find a match for the block address of the current rowid in the block addresses extracted from the previous N rowids. My target was to emulate a way of doing this counting.

Strategy

Rather than writing code that “looked backwards” as it walked the index, I decided to take advantage of the symmetry of the situation and write code that looked forwards (you could think of this as viewing the index in descending order and looking backwards along the descending index). I also decided that I could count the number of times I did find a match in the trail of rowids, and subtract that from the total number of index entries.

So my target was to look for patterns where I start with the block address from the current rowid, then find the same block address after zero to N-1 failures to find the block address. Since the index doesn’t exist I’ll need to emulate its existence by selecting the columns that I want in the index along with the rowid, ordering the data in index order. Then I can walk this “in memory” index looking for the desired pattern.

Here’s some code to create a table to test against:


rem
rem     Script:         clustering_factor_est_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2019
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem 

create table t1
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                                  id,
        cast(rownum as varchar2(10))            v1,
        trunc(dbms_random.value(0,10000))       rand,
        rpad('x',100,'x')                       padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6 -- > comment to avoid WordPress format issue
/

My table has 1M rows, and there’s a column called rand which has 10,000 distinct values. This is generated through Oracle’s dbms_random package and the procedure I’ve used will give me roughly 100 occurrences for each value scattered uniformly across the table. An index on this column might be quite useful but it’s probably going to have a very high clustering_factor because, on average, any two rows of a particular value are likely to be separated by 10,000 rows, and even if I look for rows with a pair of consecutive values any two rows are likely to be separated by a few hundred other rows.
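
The index itself is never created anywhere in this note, but the execution plans in the final section reference an index called t1_i1, so assume something like this has been run:

create index t1_i1 on t1(rand);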

Here’s the code that I was using to get an estimate of (my erroneous concept of) the clustering_factor with table_cached_blocks set to 32. For debugging purposes it reports the first row of the pattern each time my pattern is matched, so in this version of the code I’d have to check the number of rows returned and subtract it from the number of rows in the table (where rand is not null).


select
        first_rand, first_file_id, first_block_id, first_row_id, step_size
from    (
                select
                        rand, 
                        dbms_rowid.rowid_relative_fno(rowid) file_id,
                        dbms_rowid.rowid_block_number(rowid) block_id,
                        dbms_rowid.rowid_row_number(rowid)   row_id
                from
                        t1
                where
                        rand is not null
        )
match_recognize(
        order by rand, file_id, block_id, row_id
        measures
                count(*) as step_size,
                first(strt.rand)        as first_rand,
                first(strt.file_id)     as first_file_id,
                first(strt.block_id)    as first_block_id,
                first(strt.row_id)      as first_row_id
        one row per match
        after match skip to next row
        pattern (strt miss{0, 31} hit)
        define 
                miss as (block_id != strt.block_id or  file_id != strt.file_id),
                hit  as (block_id  = strt.block_id and file_id  = strt.file_id)
)
order by 
        first_rand, first_file_id, first_block_id, first_row_id
;

The first point to note is that the inline view is the thing I use to model an index on column rand. It simply selects the value and rowid from each row. However I’ve used the dbms_rowid package to break the rowid down into the file_id, block_id and row_id within block. Technically (to deal with partitioned tables and indexes) I also ought to be thinking about the object_id which is the first component of the full rowid. I haven’t actually ordered the rows in the in-line view as I’m going to let the match_recognize() operation handle that.

A warning as we start looking at the match_recognize() section of the query: I’ll be jumping around the clause a little bit, rather than working through it in order from start to finish.

First I need the raw data sorted in order so that any results I get will be deterministic. I have an order by clause that sorts my data by rand and the three components I’ve extracted from the rowid. (It is also possible to have a partition clause – similar to the partition clause of an analytic function – but I don’t need one for this model.)

Then I need to define the “columns” that I’m planning to output in the final query. This is the set of measures, and in the list of measures you can see a count(*) (which doesn’t quite mean what it usually does) and a number of calls to a function first() which takes an aliased column name as its input and, although the column names look familiar, the alias (strt) seems to have sprung from nowhere.

At this point I want to jump down to the define clause because that’s where the meaning of strt could have been defined. The define clause is a list of “rules” (which are also treated as “variables”) that tell us how to classify a row. We are looking for patterns, and a pattern is a set of rows that follows a set of rules in a particular order, so one of the things I need to do is create a rule that tells Oracle how to identify a row that qualifies as the starting point for a pattern. I’ve used a rule called strt to do this – except I haven’t defined it explicitly; it’s not visible in the define list, so Oracle assumes the rule is “1 = 1”, in other words every row I look at can be classified as a strt row.
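
If you wanted to make that default explicit (purely illustrative – it isn’t in the original query, but an undefined variable behaves as if defined this way), the define list could read:

        define 
                strt as (1 = 1),        -- always true: any row may start a pattern
                miss as (block_id != strt.block_id or  file_id != strt.file_id),
                hit  as (block_id  = strt.block_id and file_id  = strt.file_id)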

Now that I have a definition for what strt means I could go back to the measures – but I’ll postpone doing that for a moment and look at the other rules in my define list. I have a rule called miss which says “if either of these comparisons evaluates to true” then the row is a miss; but the predicate includes a reference to strt, which means we are comparing with the most recent row that was classified as a strt row. So a miss means we’ve found a starting row and we’re now looking at other rows, comparing the block_id and file_id of each row to check that they don’t match the block_id and file_id of the starting row.

Similarly we have a hit rule which says a hit means we’ve previously found a starting row and we’re now looking at other rows checking for rows where the current block_id and file_id match the starting block_id and file_id.

Once we’ve got a set of rules explaining how to classify rows we can specify a pattern (which means going back up the match_recognize() clause one section). Our pattern reads: “find a strt row, followed by between zero and 31 miss rows, followed by a hit row”. And that’s just a description of my back-to-front way of saying “remember the last 32 rowids and check whether the current block address matches the block address in one of those rowids”.

The last two clauses that need to be explained before I revisit the measures clause are the “one row per match” and “after match skip to next row”.

If I find a sequence of rows that matches my pattern there are several things I could do with that set of rows – for example I could report every row in that pattern along with the classification of strt/miss/hit (which would be useful if I’m looking for a few matches to a small pattern in a large data set), or (as I’ve done here) I could report just one row from the pattern and give Oracle some instruction about which one row I want reported.

Then, after I’ve found (and possibly reported) a pattern, what should I do next? Again there are several possibilities – the two most obvious, perhaps, being: “just keep going”, i.e. look at the row after the end of the pattern to see if it’s another strt row, and “go to the row after the start of the pattern you’ve just reported”. These translate into: “after match skip past last row” (the default if you don’t specify an “after match” clause) and “after match skip to next row” (which is what I’ve specified).
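
In clause form (the comments are mine):

        after match skip past last row  -- default: restart the search at the row after the matched pattern
        after match skip to next row    -- restart at the row after the strt row of the match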

Finally we get back to the measures clause – I’ve defined four “columns” with names like first_xxx, and a step_size. The step_size is defined as count(*) which – in this context – means “count the number of rows in the current matched pattern”. The other measures are defined using the first() function referencing the strt alias, which tells Oracle I want to retain the value from the first row that met the strt rule in the current matched pattern.

In summary, then my match_recognize() clause tells Oracle to

  • Sort the data by rand, file_id, block_id, row_id
  • For each row in turn
    • extract the file_id and block_id
    • take up to 32 steps down the list looking for a matching file_id and block_id
    • If you find a match, pass a row to the parent operation that consists of: the number of rows from strt to hit inclusive, and the values of rand, file_id, block_id, and row_id of the strt row.
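
As an aside, the SQL Monitor outputs in the final section come from “a version of the code that does a simple count(*) rather than listing any rows”. Here’s a sketch of what that variant might look like (my reconstruction, so treat the details as an assumption); subtracting the count from the 1M non-null rows would give the estimate:

select
        count(*)        pattern_matches
from    (
                select
                        rand,
                        dbms_rowid.rowid_relative_fno(rowid) file_id,
                        dbms_rowid.rowid_block_number(rowid) block_id,
                        dbms_rowid.rowid_row_number(rowid)   row_id
                from
                        t1
                where
                        rand is not null
        )
match_recognize(
        order by rand, file_id, block_id, row_id
        measures first(strt.rand) as first_rand
        one row per match
        after match skip to next row
        pattern (strt miss{0, 31} hit)
        define 
                miss as (block_id != strt.block_id or  file_id != strt.file_id),
                hit  as (block_id  = strt.block_id and file_id  = strt.file_id)
);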

Before you try testing the code, you might like to know about some results.

As it stands my laptop with a virtual machine running 12.2.0.1 took 1 minute and 5 seconds to complete with “miss{0, 31}” in the pattern. In fact the first version of the code had the file_id and block_id tests in the define clause in the opposite order viz:

        define 
                miss as (file_id != strt.file_id or  block_id != strt.block_id),
                hit  as (file_id  = strt.file_id and block_id  = strt.block_id)

Since the file_id for my test is the same for every row in my table (it’s a single file tablespace), this was wasting a surprising amount of CPU, leading to a run time of 1 minute 17 seconds! Neither time looks particularly good when compared to the time required to create the index, set the table_cached_blocks to 32, and gather stats on the index – in a total time of less than 5 seconds. The larger the value I chose for the pattern match, the worse the workload became, until at “miss{0, 199}” – emulating a table_cached_blocks of 200 – the time to run was about 434 seconds of CPU!
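
For reference, that sub-5-second baseline needs nothing more exotic than the standard dbms_stats calls (a sketch, using the t1_i1 index created above):

begin
        dbms_stats.set_table_prefs(
                ownname => user,
                tabname => 'T1',
                pname   => 'TABLE_CACHED_BLOCKS',
                pvalue  => '32'
        );
        dbms_stats.gather_index_stats(user, 'T1_I1');
end;
/

select  index_name, clustering_factor
from    user_indexes
where   index_name = 'T1_I1';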

A major portion of the problem is the way that Oracle is over-optimistic (or “greedy”, to use the technical term) with its pattern matching, combined with the nature of the data which (in this example) isn’t going to offer much opportunity for matching, combined with the fact that Oracle cannot determine that a row that is not a “miss” has to be a “hit”. In this context “greedy” means Oracle will try to find as many consecutive occurrences of the first rule in the pattern as possible before it tries to find an occurrence of the second rule – and when it fails to match a pattern it will “backtrack” one step and have another go, being slightly less greedy. So, for our example, the greedy algorithm will operate as follows:

  • find 31 rows that match miss, then discover the 32nd row does not match hit
  • go back to the strt and find 30 rows that match miss, then discover the 31st row does not match hit
  • go back to the strt and find 29 rows that match miss, then discover the 30th row does not match hit
  • … repeat until
  • go back to the strt and find 0 rows that match miss, then discover the 1st row does not match hit
  • go to the next row, call it strt, and repeat the above … 1M times.

From a human perspective this is a pretty stupid strategy for this specific problem – but that’s because we happen to know that “hit” = “not(miss)” (ignoring nulls, of which there are none) while Oracle has to assume that there is no relationship between “hit” and “miss”.

There is hope, though, because you can tell Oracle that it should be “reluctant” rather than “greedy” which means Oracle will consume the smallest possible number of occurrences of the first rule before testing the second rule, and so on. All you have to do is append a “?” to the count qualifier:

        pattern (strt miss{0, 31 }? hit)

Unfortunately this seemed to have very little effect on execution time (and CPU usage) in our case. Again this may be due to the nature of the data etc., but it may also be a limitation in the way that the backtracking works. I had expected a response time that would more closely mirror the human expectation, but a few timed tests suggest the code follows exactly the same type of strategy when reluctant as it does when greedy, viz:

  • find 0 rows that match miss, then discover the next row does not match hit
  • go back to the strt and find 1 row that matches miss, then discover the 2nd row does not match hit
  • go back to the strt and find 2 rows that match miss, then discover the 3rd row does not match hit
  • … repeat until
  • go back to the strt and find 31 rows that match miss, then discover the 32nd does not match hit
  • go to the next row, call it strt, and repeat the above … 1M times.

Since there are relatively few cases in our data where a pattern match will occur, both the reluctant and the greedy strategies will usually end up doing all 32 steps. I was hoping for a more human-like algorithm in which Oracle would recognise that if it has just missed on the first X rows then it need only check the (X+1)th and not go back to the beginning (strt) – but it is only my specific requirement that makes it easy to see (from a human perspective) that this makes sense; in the generic case (with more complex patterns and without the benefit of two mutually exclusive rules) the strategy of “start all over again” is probably a much safer option to code.

Plan B

Plan B was to send the code to Stew Ashton and ask if he had any thoughts on making it more efficient – I wasn’t expecting him to send a PL/SQL solution by return of post, but that’s what I got, and published in the previous post.

Plan C

It occurred to me that I don’t really mind if the predicted clustering_factor is a little inaccurate, and that the “backtracking” introduced by the variable qualifier {0,31} was the source of a huge amount of the work done, so I took a different approach which simply said: “check that the next 32 (or your preferred value of N) rows are all misses”. This required two changes – eliminate one of the defines (the “hit”) and modify the pattern definition as follows:

         pattern (strt  miss{32} )
         define
                 miss as (block_id != strt.block_id or  file_id != strt.file_id)

The change in strategy means that the result is going to be (my version of) the clustering_factor rather than the number to subtract from num_rows to derive the clustering_factor. And I’ve introduced a small error which shows up towards the end of the data set – I’ve demanded that a pattern should include exactly 32 misses, but when you’re within 32 rows of the end of the data set there aren’t enough rows left to match the pattern. So the result produced by the modified code could be as much as 32 short of the expected result. However, when I showed the code to Stew Ashton he pointed out that I could include “alternatives” in the pattern, so all I had to do was add to the pattern something which said “and if there aren’t 32 rows left, getting to the end of the data set is good enough” (technically that should be the end of the current partition, but we have only one partition).

         pattern (strt  ( miss{32} | ( miss* $) ) )

The “miss” part of the pattern now reads:  “32 misses” or “zero or more misses and then the end of file/partition/dataset”.

It’s still not great – but the time to process the 1M rows with a table_cached_blocks of 32 came down to 31 seconds.
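
Putting the pieces together – this is my reconstruction from the fragments above, so treat it as a sketch – the Plan C version counts one match for every clustering_factor increment directly:

select
        count(*)        est_clustering_factor
from    (
                select
                        rand,
                        dbms_rowid.rowid_relative_fno(rowid) file_id,
                        dbms_rowid.rowid_block_number(rowid) block_id,
                        dbms_rowid.rowid_row_number(rowid)   row_id
                from
                        t1
                where
                        rand is not null
        )
match_recognize(
        order by rand, file_id, block_id, row_id
        measures first(strt.rand) as first_rand
        one row per match
        after match skip to next row
        pattern (strt ( miss{32} | (miss* $) ))
        define
                miss as (block_id != strt.block_id or  file_id != strt.file_id)
);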

Finally

I’ll close with one important thought. There’s a significant difference in the execution plans for the two strategies – which I’m showing as outputs from the SQL Monitor report using a version of the code that does a simple count(*) rather than listing any rows:


pattern (strt miss{0, 31} hit)
==========================================================================================================================================
| Id |         Operation          | Name  |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   |  Mem  | Activity | Activity Detail |
|    |                            |       | (Estim) |       | Active(s) | Active |       | (Actual) | (Max) |   (%)    |   (# samples)   |
==========================================================================================================================================
|  0 | SELECT STATEMENT           |       |         |       |        54 |    +14 |     1 |        1 |     . |          |                 |
|  1 |   SORT AGGREGATE           |       |       1 |       |        54 |    +14 |     1 |        1 |     . |          |                 |
|  2 |    VIEW                    |       |      1M | 12147 |        54 |    +14 |     1 |     2782 |     . |          |                 |
|  3 |     MATCH RECOGNIZE SORT   |       |      1M | 12147 |        60 |     +8 |     1 |     2782 |  34MB |          |                 |
|  4 |      VIEW                  |       |      1M |   325 |        14 |     +1 |     1 |       1M |     . |          |                 |
|  5 |       INDEX FAST FULL SCAN | T1_I1 |      1M |   325 |         7 |     +8 |     1 |       1M |     . |          |                 |
==========================================================================================================================================

pattern (strt  miss{32} )
==================================================================================================================================================================
| Id |                     Operation                      | Name  |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   |  Mem  | Activity | Activity Detail |
|    |                                                    |       | (Estim) |       | Active(s) | Active |       | (Actual) | (Max) |   (%)    |   (# samples)   |
==================================================================================================================================================================
|  0 | SELECT STATEMENT                                   |       |         |       |        15 |    +16 |     1 |        1 |     . |          |                 |
|  1 |   SORT AGGREGATE                                   |       |       1 |       |        15 |    +16 |     1 |        1 |     . |          |                 |
|  2 |    VIEW                                            |       |      1M | 12147 |        15 |    +16 |     1 |     997K |     . |          |                 |
|  3 |     MATCH RECOGNIZE SORT DETERMINISTIC FINITE AUTO |       |      1M | 12147 |        29 |     +2 |     1 |     997K |  34MB |          |                 |
|  4 |      VIEW                                          |       |      1M |   325 |        16 |     +1 |     1 |       1M |     . |          |                 |
|  5 |       INDEX FAST FULL SCAN                         | T1_I1 |      1M |   325 |         9 |     +8 |     1 |       1M |     . |          |                 |
==================================================================================================================================================================

The significant line to note is operation 3 in both cases. The query with the pattern that’s going to induce backtracking reports only “MATCH RECOGNIZE SORT”. The query with the “fixed” pattern reports “MATCH RECOGNIZE SORT DETERMINISTIC FINITE AUTO”, meaning Oracle can implement it as a finite state machine with a fixed worst-case run time. When you write code that uses match_recognize(), the three magic words you want to see in the plan are “deterministic finite auto” – if you don’t see them then, in principle, your query might be one of those that could (theoretically) run forever.

Addendum

Congratulations if you’ve got this far – but please remember that I’ve had very little practice using match_recognize(); this was a fun little learning experience for me, and there may be many more things you could usefully know about the technology before you use it in production; there may also be details in what I’ve written which are best forgotten. That being the case, I’d be more than happy for anyone who wants to enhance or correct my code, descriptions and observations to comment below.

 
