Feed aggregator
What’s Taking So Long For Autoupgrade
The autoupgrade log directory is $ORACLE_BASE/admin/$ORACLE_UNQNAME/autoupgrade/log, where $ORACLE_UNQNAME is the db_unique_name.
Create the config file with the contents shown below and save it as $ORACLE_BASE/admin/$ORACLE_UNQNAME/${ORACLE_UNQNAME}_upgrade.config:
global.autoupg_log_dir=$ORACLE_BASE/admin/$ORACLE_UNQNAME/autoupgrade/log
upg1.log_dir=$ORACLE_BASE/admin/$ORACLE_UNQNAME/autoupgrade/log
upg1.dbname=$ORACLE_UNQNAME
upg1.sid=$ORACLE_SID
upg1.start_time=NOW
upg1.source_home=/oracle/app/product/11.2/dbhome_1
upg1.target_home=/oracle/app/product/12.2/dbhome_1
upg1.upgrade_node=localhost
upg1.target_version=12.2
upg1.timezone_upg=no
upg1.restoration=yes
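With the config file in place, the job discussed below would have been launched with the AutoUpgrade jar. A rough sketch of the usual analyze-then-deploy sequence follows; the location of autoupgrade.jar (downloaded from My Oracle Support or shipped under the Oracle home) is an assumption here, not something shown in this post:

$ export CONFIG=$ORACLE_BASE/admin/$ORACLE_UNQNAME/${ORACLE_UNQNAME}_upgrade.config
$ java -jar autoupgrade.jar -config $CONFIG -mode analyze
$ java -jar autoupgrade.jar -config $CONFIG -mode deploy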
Let’s take a look at the summary for autoupgrade job 102.
First, find the autoupgrade log directories and files.
$ export JOBNO=102
$ ls -l $ORACLE_BASE/admin/$ORACLE_UNQNAME/autoupgrade/log/*/*/*
-rwx------ 1 oracle dba 73349 Nov 04 12:46 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/100/autoupgrade_20191104.log
-rwx------ 1 oracle dba 233 Nov 04 12:46 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/100/autoupgrade_20191104_user.log
-rwx------ 1 oracle dba 0 Nov 04 12:46 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/100/autoupgrade_err.log
-rwx------ 1 oracle dba 71390 Nov 04 13:07 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/101/autoupgrade_20191104.log
-rwx------ 1 oracle dba 233 Nov 04 13:06 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/101/autoupgrade_20191104_user.log
-rwx------ 1 oracle dba 0 Nov 04 13:06 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/101/autoupgrade_err.log
-rwx------ 1 oracle dba 891207 Nov 04 16:01 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/102/autoupgrade_20191104.log
-rwx------ 1 oracle dba 12371 Nov 04 16:01 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/102/autoupgrade_20191104_user.log
-rwx------ 1 oracle dba 245 Nov 04 15:50 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/102/autoupgrade_err.log
-rwx------ 1 oracle dba 1118 Nov 04 12:46 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/temp/after_upgrade_pfile_ORACLE_SID.ora
-rwx------ 1 oracle dba 0 Nov 04 14:05 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/temp/ORACLE_SID.restart
-rwx------ 1 oracle dba 2236 Nov 04 15:48 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/temp/ORACLE_SID_autocompile.sql
-rwx------ 1 oracle dba 16805 Nov 04 13:52 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/temp/ORACLE_SID_catnoamd.sql
-rwx------ 1 oracle dba 3685 Nov 04 13:56 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/temp/ORACLE_SID_catnoexf.sql
-rwx------ 1 oracle dba 19753 Nov 04 13:56 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/temp/ORACLE_SID_catnorul.sql
-rwx------ 1 oracle dba 20740 Nov 04 13:54 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/temp/ORACLE_SID_emremove.sql
-rwx------ 1 oracle dba 883 Nov 04 15:48 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/temp/ORACLE_SID_objcompare.sql
-rwx------ 1 oracle dba 1118 Nov 04 12:46 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/temp/before_upgrade_pfile_ORACLE_SID.ora
-rwx------ 1 oracle dba 1168 Nov 04 14:00 ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/temp/during_upgrade_pfile_ORACLE_SID.ora

ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/100/prechecks:
total 376
-rwx------ 1 oracle dba 2781 Nov 04 12:46 ORACLE_SID_checklist.cfg
-rwx------ 1 oracle dba 9328 Nov 04 12:46 ORACLE_SID_checklist.json
-rwx------ 1 oracle dba 9962 Nov 04 12:46 ORACLE_SID_checklist.xml
-rwx------ 1 oracle dba 25980 Nov 04 12:46 ORACLE_SID_preupgrade.html
-rwx------ 1 oracle dba 10018 Nov 04 12:46 ORACLE_SID_preupgrade.log
-rwx------ 1 oracle dba 121687 Nov 04 12:46 prechecks_ORACLE_SID.log

ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/101/prechecks:
total 376
-rwx------ 1 oracle dba 2781 Nov 04 13:07 ORACLE_SID_checklist.cfg
-rwx------ 1 oracle dba 9328 Nov 04 13:07 ORACLE_SID_checklist.json
-rwx------ 1 oracle dba 9962 Nov 04 13:07 ORACLE_SID_checklist.xml
-rwx------ 1 oracle dba 25980 Nov 04 13:07 ORACLE_SID_preupgrade.html
-rwx------ 1 oracle dba 10018 Nov 04 13:07 ORACLE_SID_preupgrade.log
-rwx------ 1 oracle dba 121687 Nov 04 13:07 prechecks_ORACLE_SID.log

ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/102/dbupgrade:
total 446064
-rwx------ 1 oracle dba 13059 Nov 04 15:48 autoupgrade20191104135121ORACLE_SID.log
-rwx------ 1 oracle dba 12042 Nov 04 15:48 ORACLE_SID_autocompile20191104135121ORACLE_SID0.log
-rwx------ 1 oracle dba 551 Nov 04 15:48 ORACLE_SID_autocompile20191104135121ORACLE_SID_catcon_10617264.lst
-rwx------ 1 oracle dba 208215073 Nov 04 15:48 catupgrd20191104135121ORACLE_SID0.log
-rwx------ 1 oracle dba 7481470 Nov 04 15:46 catupgrd20191104135121ORACLE_SID1.log
-rwx------ 1 oracle dba 5527017 Nov 04 15:46 catupgrd20191104135121ORACLE_SID2.log
-rwx------ 1 oracle dba 7040784 Nov 04 15:46 catupgrd20191104135121ORACLE_SID3.log
-rwx------ 1 oracle dba 527 Nov 04 14:01 catupgrd20191104135121ORACLE_SID_catcon_17039806.lst
-rwx------ 1 oracle dba 0 Nov 04 15:23 catupgrd20191104135121ORACLE_SID_datapatch_normal.err
-rwx------ 1 oracle dba 1050 Nov 04 15:46 catupgrd20191104135121ORACLE_SID_datapatch_normal.log
-rwx------ 1 oracle dba 0 Nov 04 15:17 catupgrd20191104135121ORACLE_SID_datapatch_upgrade.err
-rwx------ 1 oracle dba 702 Nov 04 15:18 catupgrd20191104135121ORACLE_SID_datapatch_upgrade.log
-rwx------ 1 oracle dba 9877 Nov 04 15:18 during_upgrade_pfile_catctl.ora
-rwx------ 1 oracle dba 32649 Nov 04 14:01 phase.log
-rwx------ 1 oracle dba 1489 Nov 04 15:48 upg_summary.log
-rwx------ 1 oracle dba 46 Nov 04 15:48 upg_summary_report.log
-rwx------ 1 oracle dba 408 Nov 04 15:48 upg_summary_report.pl

ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/102/drain:
total 16
-rwx------ 1 oracle dba 4952 Nov 04 14:01 drain_ORACLE_SID.log

ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/102/postchecks:
total 104
-rwx------ 1 oracle dba 969 Nov 04 15:50 ORACLE_SID_checklist.cfg
-rwx------ 1 oracle dba 3202 Nov 04 15:50 ORACLE_SID_checklist.json
-rwx------ 1 oracle dba 3395 Nov 04 15:50 ORACLE_SID_checklist.xml
-rwx------ 1 oracle dba 16916 Nov 04 15:50 ORACLE_SID_postupgrade.html
-rwx------ 1 oracle dba 3383 Nov 04 15:50 ORACLE_SID_postupgrade.log
-rwx------ 1 oracle dba 14861 Nov 04 15:50 postchecks_ORACLE_SID.log

ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/102/postfixups:
total 40
-rwx------ 1 oracle dba 14864 Nov 04 16:00 postchecks_ORACLE_SID.log
-rwx------ 1 oracle dba 3262 Nov 04 15:59 postfixups_ORACLE_SID.log

ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/102/postupgrade:
total 24
-rwx------ 1 oracle dba 10177 Nov 04 16:01 postupgrade.log

ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/102/prechecks:
total 376
-rwx------ 1 oracle dba 2781 Nov 04 13:52 ORACLE_SID_checklist.cfg
-rwx------ 1 oracle dba 9328 Nov 04 13:52 ORACLE_SID_checklist.json
-rwx------ 1 oracle dba 9962 Nov 04 13:52 ORACLE_SID_checklist.xml
-rwx------ 1 oracle dba 25980 Nov 04 13:52 ORACLE_SID_preupgrade.html
-rwx------ 1 oracle dba 10018 Nov 04 13:52 ORACLE_SID_preupgrade.log
-rwx------ 1 oracle dba 121687 Nov 04 13:51 prechecks_ORACLE_SID.log

ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/102/prefixups:
total 304
-rwx------ 1 oracle dba 121708 Nov 04 14:00 prechecks_ORACLE_SID.log
-rwx------ 1 oracle dba 32727 Nov 04 14:00 prefixups_ORACLE_SID.log

ORACLE_BASE/admin/ORACLE_UNQNAME/autoupgrade/102/preupgrade:
total 8
-rwx------ 1 oracle dba 98 Nov 04 13:51 preupgrade.log

/orahome/oracle/app/admin/ORACLE_SID/autoupgrade/log/cfgtoollogs/upgrade/auto:
total 880
-rwx------ 1 oracle dba 414589 Nov 04 16:01 autoupgrade.log
-rwx------ 1 oracle dba 780 Nov 04 13:00 autoupgrade_err.log
-rwx------ 1 oracle dba 3113 Nov 04 16:01 autoupgrade_user.log
drwx------ 2 oracle dba 4096 Nov 04 14:01 config_files
drwx------ 2 oracle dba 256 Nov 04 16:01 lock
-rwx------ 1 oracle dba 12381 Nov 04 16:01 state.html
drwx------ 2 oracle dba 4096 Nov 04 16:01 status
$
Find the timing for each autoupgrade stage.
$ tail -12 `ls $ORACLE_BASE/admin/$ORACLE_UNQNAME/autoupgrade/log/$ORACLE_UNQNAME/$JOBNO/autoupgrade*.log|egrep -v 'user|err'`
2019-11-04 16:01:21.082 INFO ----------------------Stages Summary------------------------ - DispatcherOSHelper.writeStageSummary
2019-11-04 16:01:21.082 INFO SETUP           1 min - DispatcherOSHelper.writeStageSummary
2019-11-04 16:01:21.083 INFO PREUPGRADE      1 min - DispatcherOSHelper.writeStageSummary
2019-11-04 16:01:21.083 INFO PRECHECKS       1 min - DispatcherOSHelper.writeStageSummary
2019-11-04 16:01:21.083 INFO GRP             1 min - DispatcherOSHelper.writeStageSummary
2019-11-04 16:01:21.084 INFO PREFIXUPS       8 min - DispatcherOSHelper.writeStageSummary
2019-11-04 16:01:21.084 INFO DRAIN           1 min - DispatcherOSHelper.writeStageSummary
2019-11-04 16:01:21.084 INFO DBUPGRADE     108 min - DispatcherOSHelper.writeStageSummary
2019-11-04 16:01:21.085 INFO POSTCHECKS      1 min - DispatcherOSHelper.writeStageSummary
2019-11-04 16:01:21.085 INFO POSTFIXUPS      9 min - DispatcherOSHelper.writeStageSummary
2019-11-04 16:01:21.086 INFO POSTUPGRADE     1 min - DispatcherOSHelper.writeStageSummary
2019-11-04 16:01:21.086 INFO End of dispatcher instance for ORACLE_SID - AutoUpgDispatcher.run
Find the timing for the database upgrade itself.
$ tail -35 $ORACLE_BASE/admin/$ORACLE_UNQNAME/autoupgrade/log/$ORACLE_UNQNAME/102/dbupgrade/catupgrd*${ORACLE_UNQNAME}0.log

 ========== PROCESS ENDED ==========
SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

Start of Summary Report
------------------------------------------------------

Oracle Database 12.2 Post-Upgrade Status Tool           11-04-2019 15:46:14

Component                               Current         Version  Elapsed Time
Name                                    Status          Number   HH:MM:SS

Oracle Server                          UPGRADED      12.2.0.1.0  00:19:53
JServer JAVA Virtual Machine           UPGRADED      12.2.0.1.0  00:07:34
Oracle Workspace Manager               UPGRADED      12.2.0.1.0  00:02:16
OLAP Analytic Workspace                UPGRADED      12.2.0.1.0  00:00:23
Oracle OLAP API                        UPGRADED      12.2.0.1.0  00:00:23
Oracle XDK                             UPGRADED      12.2.0.1.0  00:01:15
Oracle Text                            UPGRADED      12.2.0.1.0  00:01:24
Oracle XML Database                    UPGRADED      12.2.0.1.0  00:05:28
Oracle Database Java Packages          UPGRADED      12.2.0.1.0  00:00:25
Oracle Multimedia                      UPGRADED      12.2.0.1.0  00:03:25
Spatial                                UPGRADED      12.2.0.1.0  00:09:03
Oracle Application Express             UPGRADED     5.0.4.00.12  00:23:17
Final Actions                                                    00:04:44
Post Upgrade                                                     00:00:09

Total Upgrade Time: 01:20:32

Database time zone version is 14. It is older than current release time
zone version 26. Time zone upgrade is needed using the DBMS_DST package.

Grand Total Upgrade Time:    [0d:1h:47m:34s]

End of Summary Report
------------------------------------------------------
$
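Note the warning at the end of the summary: the config above set upg1.timezone_upg=no, so the time zone upgrade was deliberately deferred. A minimal post-upgrade check of the current time zone file version (a sketch, run as SYSDBA) would be:

$ sqlplus -s / as sysdba <<'EOF'
select version from v$timezone_file;
exit
EOF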
Find the timing for datapatch.
$ cat $ORACLE_BASE/admin/$ORACLE_UNQNAME/autoupgrade/log/$ORACLE_UNQNAME/$JOBNO/dbupgrade/catupgrd*datapatch_normal.log
SQL Patching tool version 12.2.0.1.0 Production on Mon Nov 4 15:23:22 2019
Copyright (c) 2012, 2019, Oracle.  All rights reserved.

Log file for this invocation: /orahome/oracle/app/cfgtoollogs/sqlpatch/sqlpatch_14221746_2019_11_04_15_23_22/sqlpatch_invocation.log

Connecting to database...OK
Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of SQL patches:
Bundle series DBRU:
  ID 190716 in the binary registry and not installed in the SQL registry

Adding patches to installation queue and performing prereq checks...
Installation queue:
  Nothing to roll back
  The following patches will be applied:
    29757449 (DATABASE JUL 2019 RELEASE UPDATE 12.2.0.1.190716)

Installing patches...
Patch installation complete.  Total patches installed: 1

Validating logfiles...
Patch 29757449 apply: SUCCESS
  logfile: /orahome/oracle/app/cfgtoollogs/sqlpatch/29757449/23013437/29757449_apply_ORACLE_SID_2019Nov04_15_24_11.log (no errors)
SQL Patching tool complete on Mon Nov 4 15:46:00 2019
$
It might have been better to use grep with the -A option instead of tail; however, I was on AIX, where grep does not support that option.
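As a possible workaround on AIX (a sketch, untested there): awk can print everything from the first "Stages Summary" line to the end of the file, which gives the same stage timings without relying on tail counts or GNU grep:

$ awk '/Stages Summary/{found=1} found' \
    `ls $ORACLE_BASE/admin/$ORACLE_UNQNAME/autoupgrade/log/$ORACLE_UNQNAME/$JOBNO/autoupgrade*.log | egrep -v 'user|err'`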
Jordan’s Top Bank Becomes Regional Blockchain Leader with Oracle
By Guest Author, Oracle—Nov 20, 2019

Arab Jordan Investment Bank (AJIB) provides retail, corporate, and investment banking services in Jordan, Cyprus, and Qatar. Sending money between subsidiaries has historically relied on a network of correspondent banking relationships and third-party intermediaries, leading to slow and costly transactions for both AJIB and its customers. In addition, it required sharing customer information with third parties—a process involving strict regulatory requirements with occasional conflicts between different jurisdictions. AJIB aimed to reduce the costs and time required for cross-border payments, while making the entire process more secure and efficient.
In order to boost speed, security, and reliability of cross-border money transfers with distributed ledger technology, AJIB deployed Oracle Blockchain Platform in what has become the largest blockchain deployment in the Middle East.
Before using blockchain, money transfers between AJIB subsidiaries were processed by third-party intermediaries that charged fees at each stage of a cross-border transfer transaction.
“Oracle delivers an enterprise-grade blockchain platform with high resiliency, scalability and security. The built-in features, such as identity management and data encryption, made it an ideal choice given our industry requirements and compliance needs,” said Ayman Qadoumi, Deputy General Manager, Information Technology and Security, Arab Jordan Investment Bank.
With Oracle Blockchain Platform, AJIB is now able to make the same transfers in real time without paying those fees. Senders and receivers can now track money transfers while the funds are in transit, providing transparency to both parties about the exact timing and amount of the transfer.
Watch the Video
Watch this video to learn how Arab Jordan Investment Bank is using Oracle Blockchain Platform to become a leader in banking innovation.
Read More Stories from Oracle Cloud
Arab Jordan Investment Bank is one of the thousands of innovative customers succeeding in the cloud. Read about others in Stories from Oracle Cloud: Business Successes.
Urban Leaders Power the Future with Oracle
SMART CITIES EXPO WORLD CONGRESS, Barcelona—Nov 20, 2019
Data is at the core of successful smart city innovation, according to new research from Oracle and economic and urban research consultancy ESI ThoughtLab. The Building a Hyperconnected City study found that cities are drowning in data from advancements such as Internet of Things (IoT). The survey projected that there will be more than 30 billion connected devices generating data by 2020. The study notes that for cities to become truly ‘smart’, they must have a cloud infrastructure in place to extract, integrate, and analyze this data to glean the insights needed to enhance everything from citizen services to building projects.
The report surveyed 100 cities across the United States, APAC, EMEA and LATAM.
The hyper-connected multiplier effect
According to the study, the average return on investment in hyper-connected initiatives ranges from three to four percent. As cities become more interlinked, their ROI grows: cities just starting out realize a return of 1.8 percent for implementers and 2.6 percent for advancers, while hyper-connected leaders see a 5.0 percent boost. That can translate into enormous returns, ranging from $19.6 million for implementers to $40.0 million for advancers and $83 million for hyper-connected leaders.
Other key findings from the study include:
- AI, Blockchain and biometrics are increasingly pervasive: Cities are using these technologies in key urban areas, such as IT infrastructure and telecoms, mobility and transportation, payment and financial systems, and physical and digital security. City leaders need the right technology platforms and applications to implement and leverage these tools and capabilities.
- Cybersecurity requires careful planning and is expensive when not implemented properly: The study revealed that half of the 100 city leaders surveyed do not feel adequately prepared for cyberattacks.
- Smart initiatives are bolstering constituent satisfaction: While physical and digital security top the list of priorities, citizen engagement and satisfaction have risen as a top five goal. 33 percent of innovative leaders in North America have appointed Chief Citizen Experience Officers.
“The public sector, particularly at local level, is dealing with seismic technological, demographic and environmental shifts. Data is the rocket fuel for this transformation, and progressive cities are turning to cloud, data platforms, mobile applications and IoT as a way to scale and prepare for the future,” said Susan O’Connor, global director for Smart Cities, Oracle. “In contrast, not taking advantage of emerging technologies such as AI, Blockchain or virtual and augmented reality comes at a cost. Cities of the future need strategic, long-term investments in cloud data architecture, along with the right expertise to guide them through.”
Customer Commitment to Smarter Cities
“As a data-driven organization, we integrate, manage and use data to inform how we improve services for our constituents,” said Hamant Bharadia, assistant director of finance at the London Borough of Lambeth. “Oracle Cloud Applications for financial planning and payroll are an integral part of our digital strategy, setting us up for a modern way of working and engaging with our communities. They are an essential enabler for us to support innovation, improve public safety and realize our vision of making Lambeth a connected, inclusive place to thrive.”
“Approximately 50% of Buenos Aires sidewalks are in poor condition, and we previously used spreadsheets to plan the routes for our crew to fix them,” said Alejandro Naon, chief of staff of planning of the undersecretariat of pedestrian ways, City of Buenos Aires. “Today, with Oracle CX Field Service Cloud, we can identify and fix the sidewalks exponentially faster because we receive images and information in real time. Our sidewalks are safer, our workers are more productive, and we recovered our Oracle technology investment in 18 months.”
“At the foundation of our smart government innovation is Oracle Analytics Cloud. It is both the heartbeat and hub for sharing information, enabling us to deliver data-driven citizen services and engagement with maximum impact,” said Chris Cruz, director and chief information officer, San Joaquin County. “Our entities throughout San Joaquin County, such as hospitals, law enforcement, transportation and public works, now partner more effectively and are better equipped to meet the health, social, safety and economic needs of our constituents.”
Oracle's Smart City solutions transform the ways cities can harness and process the power of data through the integration of modern digital technologies and channels. The platform integrates technologies spanning cloud, digital outreach, omni-channel service, case management, mobility, social, IoT, Blockchain, and artificial intelligence while helping ensure comprehensive security and information privacy.
For more information, go to https://www.oracle.com/applications/customer-experience/industries/public-sector/
The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.
Trademarks
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
Despacito | Keyboard Performance | by Dharun at Improviser Music Studio
Despacito
Please subscribe to our channel Improviser Music Studio
Tchaikovsky | Swan Lake Keyboard Performance | by Dharun at Improviser Music Studio
My Son Dharun Performing at Improviser Music Studio
Tchaikovsky | Swan Lake
Please subscribe to our channel Improviser Music Studio
Shawn Mendes - Treat You Better | Keyboard Performance | by Dharun at Improviser Music Studio
My Son Dharun Performing at Improviser Music Studio
Shawn Mendes - Treat You Better
Please subscribe to our channel Improviser Music Studio
Rittman Mead at UKOUG TechFest 19

Like every year, December for the Oracle community means UKOUG! This time is special since the event, named TechFest19, will be hosted at The Grand Brighton Hotel, near our base office!

Let's talk about the important stuff first: we are organising a TechFest party featuring "The Chaps" on Monday 2nd Dec between 7PM and 10:30PM at our office at Platf9rm, Hove Town Hall. We'll be having a few drinks while enjoying the live music; if you are interested, register yourself here!

Now, on to the main topic: the TechFest! Rittman Mead will be well represented this time with four talks:
Data the Missing Ingredient
Monday 2nd at 14:15, Location: Stage 4
Jon Mead, our CEO, will introduce you to the concepts of Data Management, Engineering and Governance, how they should be addressed across a wide range of projects and which Oracle tools are there to help you.
Become an Equilibrista: Find the Right Balance in the Analytics Tech Ecosystem
Tuesday 3rd at 11:00, Location: Stage 2

Do you want to understand how to avoid the "Excel Chaos" in your organization? Let me show you how you can balance Centralized and Self-service analytics, taking the best of both worlds but still using a unique tool: Oracle Analytics Cloud!
Is it Corked? Wine Machine Learning Predictions with OAC
Tuesday 3rd at 15:45, Location: Stage 2
Do you love Wine? And maybe you don't dislike Analytics? Then join me in understanding how Oracle Analytics Cloud can be used for data science! We'll be analysing a Wine dataset and using OAC to create a predictive model scoring wine quality! If you are a business analyst looking to start your path into Machine Learning, this session is a kickstarter!

Picking a Good Wine for <$20 Using Oracle Autonomous, Machine Learning and Analytics Cloud
Monday 2nd at 09:00, Location: Stage 2
Using the same dataset as the talk above, Charlie Berger, Sr. Director of Product Management, will show you how to build a predictive model by performing Machine Learning directly within Oracle Autonomous Data Warehouse, all accessed through a notebook interface. I'll then show you how we can integrate such a model within OAC and show the main drivers as well as the model outcomes!
During the event, a few of the Rittman Mead folks will be around. If you see us in sessions, around the conference or during our talks, we'd be pleased to speak with you about your projects and answer any questions you might have.
5 Ways to Read Deleted Comments from the Reddit Archive
Have you ever wondered how you can restore Reddit’s deleted posts from the Reddit archive? Here’s a short guide about how to see deleted Reddit posts! If you’re a Reddit aficionado, someone who is fond of reading long, fascinating threads about different topics, then you’ve probably come across deleted comments at one point or another. […]
The post 5 Ways to Read Deleted Comments from the Reddit Archive appeared first on VitalSoftTech.
NFS Storage Configuration for Kubernetes
For one of our important customers, we are working on a major project to migrate critical applications to containers. From the implementation of the Kubernetes architecture to the deployment of applications and the administration of the platform, we are responsible for an important technological stack with new challenges for our team.
One of the challenges, both important and exciting, is the implementation and management of Kubernetes clusters on our own infrastructure rather than a managed Kubernetes service. We have deployed a Kubernetes cluster in VMs, based on VMware.
As you know, one of the challenges of containerization is storage management. Do we manage stateless or stateful applications? For stateful applications, the way the data generated by the application is stored is very important.
Therefore, based on our infrastructure, we have 2 possibilities:
- The first one is to use the certified plugin provided by VMware to create a storage class for our cluster and let it manage the persistent volumes by itself: https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/overview.html
- The second possibility is to use an NFS server. This is a simpler choice, but you have to manage your persistent volumes yourself (manually).
Here is a representative diagram of the 2 solutions:
The Kubernetes infrastructure is composed of the following:
- k8s-master
- k8s-worker1
- k8s-worker2
In addition, we have an NFS server to store our cluster data. In the next steps, we are going to expose the NFS share as a cluster object. We will create Kubernetes Persistent Volumes and Persistent Volume Claims for our application.
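The server side of the NFS setup is not covered in this post. For completeness, a minimal export configuration, assuming the same Amazon Linux 2 host and path used later in the examples, might look like the sketch below (the export options are illustrative, not a recommendation, and the worker nodes also need the NFS client package installed):

[ec2-user@ip-10-3-0-184 ~]$ sudo yum install -y nfs-utils
[ec2-user@ip-10-3-0-184 ~]$ echo '/home/ec2-user/data *(rw,sync,no_root_squash)' | sudo tee -a /etc/exports
[ec2-user@ip-10-3-0-184 ~]$ sudo systemctl enable --now nfs-server
[ec2-user@ip-10-3-0-184 ~]$ sudo exportfs -ra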
Persistent Volume Creation
Define the persistent volume at the cluster level as follows:
[ec2-user@ip-10-3-1-217 ~]$ vi create-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-demo
  labels:
    app: nfs
    type: data
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  nfs:
    path: /home/ec2-user/data
    server: ec2-3-88-194-14.compute-1.amazonaws.com
  persistentVolumeReclaimPolicy: Retain
Create the persistent volume and see the results:
[ec2-user@ip-10-3-1-217 ~]$ kubectl create -f create-pv.yaml
persistentvolume/nfs-demo created
[ec2-user@ip-10-3-1-217 ~]$ kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
nfs-demo   10Gi       RWO            Retain           Available                                   7s
Once it’s created we can create a persistent volume claim. A PVC is dedicated to a specific namespace.
First, create the nfs-demo namespace, then the PVC.
[ec2-user@ip-10-3-1-217 ~]$ kubectl create ns nfs-demo
namespace/nfs-demo created
[ec2-user@ip-10-3-1-217 ~]$ vi create-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-demo
  namespace: nfs-demo
  labels:
    app: nfs
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      app: nfs
      type: data

[ec2-user@ip-10-3-1-217 ~]$ kubectl create -f create-pvc.yaml
persistentvolumeclaim/nfs-demo created
[ec2-user@ip-10-3-1-217 ~]$ kubectl get pvc -n nfs-demo
NAME       STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-demo   Bound    nfs-demo   10Gi       RWO                           3m21s
We can see now that our persistent volume changes its status from “Available” to “Bound”.
[ec2-user@ip-10-3-1-217 ~]$ kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
nfs-demo   10Gi       RWO            Retain           Bound    nfs-demo/nfs-demo                           169m
Finally, let’s now deploy our workload, which will consume the volume claim and the persistent volume. Whatever workload API object you are using (Deployment, StatefulSet or DaemonSet), the Persistent Volume Claim is defined within the Pod specification, as follows:
[ec2-user@ip-10-3-1-217 ~]$ vi create-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: nfs-pod
spec:
  containers:
    - name: nfs-demo
      image: alpine
      volumeMounts:
        - name: nfs-demo
          mountPath: /data/nfs
      command: ["/bin/sh"]
      args: ["-c", "sleep 500000"]
  volumes:
    - name: nfs-demo
      persistentVolumeClaim:
        claimName: nfs-demo
[ec2-user@ip-10-3-1-217 ~]$ kubectl create -f create-pod.yaml
pod/nfs-pod created
[ec2-user@ip-10-3-1-217 ~]$ kubectl get pods -o wide -n nfs-demo
NAME      READY   STATUS    RESTARTS   AGE   IP              NODE                         NOMINATED NODE   READINESS GATES
nfs-pod   1/1     Running   0          9s    192.168.37.68   ip-10-3-0-143.ec2.internal
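Before writing anything, we can also quickly verify from inside the pod that the mount path is backed by the NFS export (a quick check, output not shown here):

[ec2-user@ip-10-3-1-217 ~]$ kubectl -n nfs-demo exec nfs-pod -- df -h /data/nfs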
Let’s now create an empty file in the container’s volume mount path and see if it has been created on the NFS server.
[ec2-user@ip-10-3-1-217 ~]$ kubectl -n nfs-demo exec nfs-pod touch /data/test-nfs.sh
We can now see on the NFS server that the file has been properly stored.
mehdi@MacBook-Pro: ssh -i "dbi.pem" ec2-user@ec2-3-88-194-14.compute-1.amazonaws.com
Last login: Tue Nov 19 13:35:18 2019 from 62.91.42.92

       __|  __|_  )
       _|  (     /   Amazon Linux 2 AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-2/
16 package(s) needed for security, out of 27 available
Run "sudo yum update" to apply all updates.

[ec2-user@ip-10-3-0-184 ~]$ ls -lrt data/
total 0
-rw-r--r-- 1 root root 0 Nov 19 13:42 test-nfs.sh
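The same claim can of course be consumed from a higher-level workload object instead of a bare Pod. A quick sketch of an equivalent Deployment (names here are illustrative, and replicas is kept at 1 because the volume is ReadWriteOnce):

[ec2-user@ip-10-3-1-217 ~]$ kubectl apply -n nfs-demo -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-deploy
spec:
  replicas: 1                      # RWO volume: keep a single replica
  selector:
    matchLabels:
      app: nfs-deploy
  template:
    metadata:
      labels:
        app: nfs-deploy
    spec:
      containers:
      - name: app
        image: alpine
        command: ["/bin/sh", "-c", "sleep 500000"]
        volumeMounts:
        - name: data
          mountPath: /data/nfs
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: nfs-demo      # reuses the PVC created above
EOF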
This article, NFS Storage Configuration for Kubernetes, appeared first on Blog dbi services.
Ekadantaya Vakratundaya Keyboard Performance | by Dharun at Improviser Music Studio
My Son Dharun Performing at Improviser Music Studio
Ekadantaya Vakratundaya
Please subscribe to our channel Improviser Music Studio
Oracle Expands Innovation Lab to Advance Industries
Redwood Shores, Calif.—Nov 19, 2019

Oracle is expanding its Chicago Innovation Lab, empowering more organizations to explore new technologies and strategies to bolster their digital transformation efforts. Since its successful launch last year, the Lab has helped construction organizations explore and test solutions from Oracle and the larger construction ecosystem in a simulated worksite environment. Today, Oracle is planning for an extended facility and broadening the scope of the newly named Oracle Industries Innovation Lab to feature additional partners and technologies to solve complex business issues and accelerate customer success across more verticals.
“We are at an inflection point with technology as the digital and physical worlds continue to blur for our customers across all industries,” said Mike Sicilia, senior vice president and general manager, Global Business Units, Oracle. “This expanded Lab environment gives our customers and partners a place to co-innovate with tools and technologies that yield operational improvements and empowers them to use data to create new business opportunities and revenue streams. We’re coming together to help redefine the future for these industries.”
The Lab has already welcomed more than 650 visitors, including best-in-class technology partners, customers and industry thought leaders. There, they have worked together in a realistic worksite environment to test how leading-edge solutions such as connected devices, autonomous vehicles, drones, augmented reality, visualization, and artificial intelligence tools can positively impact the construction industry. Moving forward, the Lab will also feature simulated environments including Utility and Communication solutions.
Oracle Utilities will explore new concepts driving the future of energy. Lab demonstrations and real-world modeling will range from better managing loads on the grid with distributed energy resources, such as solar, wind and electric vehicles; to using artificial intelligence, IoT and digital-twin technologies to improve network operations and speed outage restoration; to optimizing connections with smart home devices to engage and serve customers, while bolstering the health of the grid with better demand planning. The Lab will also highlight how water, gas and electric utilities can leverage the latest technology to manage and enhance their construction efforts and minimize disruptions during site enhancements, maintenance and upgrades.
Oracle Communications enables both mobile in-app and web-based digital engagement using contextual voice, HD video and screen sharing capabilities through its Oracle Live Experience Cloud. The Oracle Live Experience Cloud directly enables enterprises in the E&C industry to modernize customer experience and field service using enhanced digital engagement channels.
The use cases being demonstrated at the Lab will let customers simulate real-time collaboration on large construction models with massive amounts of data over a high speed, low latency 5G network.
The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.
Trademarks
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
Oracle, ITA Announce Wild Card Linkages Between Major College Championships and Oracle Pro Series Events
TEMPE, Ariz.—Nov 18, 2019
Oracle and the Intercollegiate Tennis Association (ITA) jointly announced today the creation of wild card linkages between major college tennis championships and the Oracle Pro Series. The champions and finalists from the Oracle ITA Masters, the ITA All-American Championships and the Oracle ITA National Fall Championships will be awarded wild card entries into Oracle Pro Series events beginning with the 2020 season.
The opportunity to earn wild card entries into Oracle Pro Series tournaments is available to college players from all five divisions (NCAA DI, DII, DIII, NAIA and Junior College). Singles and doubles champions from The All-American Championships and the Oracle ITA National Fall Championships as well as the Oracle ITA Masters singles champions will earn wild cards into Oracle Challenger level events. Singles and doubles finalists from the All-American Championships and the Oracle ITA National Fall Championships will earn wild cards into Oracle $25k tournaments. ITA Cup singles champions (from NCAA DII, DIII, NAIA and Junior College) will also earn wild card entries into Oracle $25K tournaments.
Eighteen individuals and eight doubles teams have already secured wild cards for Oracle Pro Series tournaments in 2020 following their play at the 2019 Oracle ITA Masters, 2019 ITA All-American Championships, 2019 ITA Cup, and 2019 Oracle ITA National Fall Championships. The list includes:
Oracle ITA Masters
- Men’s Singles Champion – Daniel Cukierman (USC)
- Men’s Singles Finalist – Keegan Smith (UCLA)
- Women’s Singles Champion – Ashley Lahey (Pepperdine)
- Women’s Singles Finalist – Jada Hart (UCLA)
Oracle ITA National Fall Championships
- Men’s Singles Champion – Yuya Ito (Texas)
- Men’s Singles Finalist – Damon Kesaris (Saint Mary’s)
- Women’s Singles Champion – Sara Daavettila (North Carolina)
- Women’s Singles Finalist – Anna Turati (Texas)
- Men’s Doubles Champions – Dominik Kellovsky/Matej Vocel (Oklahoma State)
- Men’s Doubles Finalists – Robert Cash/John McNally (Ohio State)
- Women’s Doubles Champions – Elysia Bolton/Jada Hart (UCLA)
- Women’s Doubles Finalists – Anna Rogers/Alana Smith (NC State)
ITA All-American Championships
- Men’s Singles Champion – Yuya Ito (Texas)
- Men’s Singles Finalist – Sam Riffice (Florida)
- Men’s Doubles Champions – Jack Lin/Jackie Tang (Columbia)
- Men’s Doubles Finalists – Gabriel Decamps/Juan Pablo Mazzuchi (UCF)
- Women’s Singles Champion – Ashley Lahey (Pepperdine)
- Women’s Singles Finalist – Alexa Graham (North Carolina)
- Women’s Doubles Champions – Jessie Gong/Samantha Martinelli (Yale)
- Women’s Doubles Finalists – Tenika McGiffin/Kaitlin Staines (Tennessee)
ITA Cup
- Men’s Division II Singles Champion – Alejandro Gallego (Barry)
- Men’s Division III Singles Champion – Boris Sorkin (Tufts)
- Men’s NAIA Singles Champion – Jose Dugo (Georgia Gwinnett)
- Men’s Junior College Singles Champion – Oscar Gabriel Ortiz (Seward County)
- Women’s Division II Singles Champion – Berta Bonardi (West Florida)
- Women’s Division III Singles Champion – Justine Leong (Claremont-Mudd-Scripps)
- Women’s NAIA Singles Champion – Elyse Lavender (Brenau)
- Women’s Junior College Singles Champion – Tatiana Simova (ASA Miami)
“This is yet another exciting step forward for all of college tennis as we build upon our ever-growing partnership with Oracle,” said ITA Chief Executive Officer Timothy Russell. “We are forever grateful to our colleagues at Oracle for both their vision and execution of these fabulous opportunities.”
Oracle is partnering with InsideOut Sports & Entertainment, led by former World No. 1 and Hall of Famer Jim Courier and his business partner Jon Venison, to manage the Oracle Pro Series. InsideOut will work with the college players and their respective coaches to coordinate scheduling with respect to their participation in the Pro Series events.
The final schedule for the 2020 Oracle Pro Series will include more than 35 tournaments, most of which will be combined men’s and women’s events. Dates and locations are listed at https://oracleproseries.com/. Follow on social media through #OracleProSeries.
The expanding partnership between Oracle and the ITA builds upon their collaborative efforts to provide playing opportunities and their goal of raising the profile of college tennis and the sport in general. Oracle supports collegiate tennis through sponsorship of the ITA, including hosting marquee events throughout the year such as the Oracle ITA Masters and the Oracle ITA Fall Championships.
Through that partnership, the ITA has been able to showcase its top events to a national audience as the Oracle ITA Masters, ITA All-American Championships and Oracle ITA National Fall Championships singles finals have been broadcast live with rebroadcasts on the ESPN family of networks.
The Intercollegiate Tennis Association (ITA) is committed to serving college tennis and returning the leaders of tomorrow. As the governing body of college tennis, the ITA oversees men’s and women’s varsity tennis at NCAA Divisions I, II and III, NAIA and Junior/Community College divisions. The ITA administers a comprehensive awards and rankings program for men’s and women’s varsity players, coaches and teams in all divisions, providing recognition for their accomplishments on and off the court. For more information on the ITA, visit the ITA website at www.itatennis.com, like the ITA on Facebook or follow @ITA_Tennis on Twitter and Instagram.
About Oracle Tennis
Oracle is committed to supporting American tennis for all players across the collegiate and professional levels. Through sponsorship of tournaments, players, ranking, organizations and more, Oracle has infused the sport with vital resources and increased opportunities for players to further their careers. For more information, visit www.oracle.com/corporate/tennis/. Follow @OracleTennis on Twitter and Instagram.
About Oracle
The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.
Trademarks
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
New Study: Only 11% of Brands Can Effectively Use Customer Data
Redwood Shores, Calif.—Nov 18, 2019
Despite all the hype around customer data platforms (CDPs), a new study conducted by Forrester Consulting and commissioned by Oracle found that brands are struggling to create a unified view of customers. The November 2019 study, “Getting Customer Data Management Right,” which includes insights from 337 marketing and advertising professionals in North America and Europe, found that brands want to unify customer data but face significant challenges in bringing different data types together.
Brands Want to Centralize Customer Data
As consumers expect more and more personalized experiences, the ability to effectively leverage customer data is shifting from a “nice-to-have” to table stakes:
- 75% of marketing and advertising professionals believe the ability to “improve the experience of our customers” is a critical or important objective when it comes to the use of customer engagement data.
- 69% believe it is important to create a unified customer profile across channels and devices.
- 64% stated that they adopted a CDP to develop a single source of truth so they could understand customers better.
Unified Customer Profiles Lead to Better Business Results
Brands that effectively leverage unified customer profiles are more likely to experience revenue growth, increased profitability and higher customer lifetime values:
- Brands that use CDPs effectively are 2.5 times more likely to increase customer lifetime value.
- When asked about the benefits of unified data management, the top two benefits were increased specific functional effectiveness (e.g., advertising, marketing, or sales) and increased channel effectiveness (e.g., email, mobile, web, social media).
The Marketing and Advertising Opportunity
While marketing and advertising professionals understand the critical role unified customer profiles play in personalizing the customer experience, the majority of brands are not able to effectively use a wide variety of data types:
- 71% of marketing and advertising professionals say a unified customer profile is important or critical to personalization.
- Only 11% of brands can effectively use a wide variety of data types in a unified customer profile to personalize experiences, provide a consistent experience across channels, and generally improve customer lifetime value and other business outcomes.
- 69% expect to increase CDP investments at their organization over the next two years.
“A solid data foundation is the most fundamental ingredient to success in today’s Experience Economy, where consumers expect relevant, timely and consistent experiences,” said Rob Tarkoff, executive vice president and general manager, Oracle CX. “At Oracle we have been helping customers manage, secure and protect their data assets for more than 40 years, and this unique experience puts us in the perfect position to help brands leverage all their customer data – digital, marketing, sales, service, commerce, financial and supply chain – to make every customer interaction matter.”
Read the full study here.
Forrester Consulting conducted an online survey of 337 professionals in North America and Europe who are responsible for customer data, marketing analytics, or marketing/advertising technology. Survey participants included decision makers director level and above in marketing or advertising roles. Respondents were offered a small incentive as a thank you for time spent on the survey. The study began in August 2019 and was completed in September 2019.
About Oracle
The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.
Trademarks
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
Fun with arrays in PostgreSQL
As you might already know, PostgreSQL comes with many, many data types. What you might not know is that you can create arrays over all of these data types quite easily. Is that important? Well, as always, it depends on your requirements, but there are use cases where arrays are great. As always, let's do some simple tests.
The following creates a very simple table with one column, which is a one-dimensional array of integers.
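The create statement itself is not visible in this extract, but based on the \d output that follows it would have been along these lines:

postgres=# create table t1 ( a int[] );
CREATE TABLE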
postgres=# \d t1
                 Table "public.t1"
 Column |   Type    | Collation | Nullable | Default
--------+-----------+-----------+----------+---------
 a      | integer[] |           |          |
To insert data into that table you would either do it like this:
postgres=# insert into t1 (a) values ( '{1,2,3,4,5,6}' );
INSERT 0 1
postgres=# select * from t1;
       a
---------------
 {1,2,3,4,5,6}
(1 row)
… or you can do it like this as well:
postgres=# insert into t1 (a) values ( ARRAY[1,2,3,4,5,6] );
INSERT 0 1
postgres=# select * from t1;
       a
---------------
 {1,2,3,4,5,6}
 {1,2,3,4,5,6}
(2 rows)
Notice that I did not specify any size of the array. Although you can do that:
postgres=# create table t2 ( a int[6] );
CREATE TABLE
… the limit is not enforced by PostgreSQL:
postgres=# insert into t2 (a) values ( '{1,2,3,4,5,6,7,8}' );
INSERT 0 1
postgres=# select * from t2;
         a
-------------------
 {1,2,3,4,5,6,7,8}
(1 row)
PostgreSQL does not limit you to one-dimensional arrays; you can go ahead and create more dimensions:
postgres=# create table t3 ( a int[], b int[][], c int[][][] );
CREATE TABLE
postgres=# \d t3
                 Table "public.t3"
 Column |   Type    | Collation | Nullable | Default
--------+-----------+-----------+----------+---------
 a      | integer[] |           |          |
 b      | integer[] |           |          |
 c      | integer[] |           |          |
Although it does look like all of the columns are one-dimensional, they are actually not:
postgres=# insert into t3 (a,b,c) values ( '{1,2,3}', '{{1,2,3},{1,2,3}}','{{{1,2,3},{1,2,3},{1,2,3}}}' );
INSERT 0 1
postgres=# select * from t3;
    a    |         b         |              c
---------+-------------------+-----------------------------
 {1,2,3} | {{1,2,3},{1,2,3}} | {{{1,2,3},{1,2,3},{1,2,3}}}
(1 row)
In reality, those array columns are not limited to one dimension: you can create as many dimensions as you like, even when you think you created only one:
postgres=# create table t4 ( a int[] );
CREATE TABLE
postgres=# insert into t4 (a) values ( '{1}' );
INSERT 0 1
postgres=# insert into t4 (a) values ( '{1,2}' );
INSERT 0 1
postgres=# insert into t4 (a) values ( '{{1,2},{1,2}}' );
INSERT 0 1
postgres=# insert into t4 (a) values ( '{{{1,2},{1,2},{1,2}}}' );
INSERT 0 1
postgres=# insert into t4 (a) values ( '{{{{1,2},{1,2},{1,2},{1,2}}}}' );
INSERT 0 1
postgres=# select * from t4;
               a
-------------------------------
 {1}
 {1,2}
 {{1,2},{1,2}}
 {{{1,2},{1,2},{1,2}}}
 {{{{1,2},{1,2},{1,2},{1,2}}}}
(5 rows)
Now that there are some rows: how can we query that? This matches the first two rows of the table:
postgres=# select ctid,* from t4 where a[1] = 1;
 ctid  |   a
-------+-------
 (0,1) | {1}
 (0,2) | {1,2}
(2 rows)
This matches the second row only:
postgres=# select ctid,* from t4 where a = '{1,2}';
 ctid  |   a
-------+-------
 (0,2) | {1,2}
(1 row)
This matches row three only:
postgres=# select ctid, * from t4 where a[1:2][1:3] = '{{1,2},{1,2}}';
 ctid  |       a
-------+---------------
 (0,3) | {{1,2},{1,2}}
(1 row)
You can even index array data types by using a GIN index:
postgres=# create index i1 ON t4 using gin (a);
CREATE INDEX
postgres=# \d t4
                 Table "public.t4"
 Column |   Type    | Collation | Nullable | Default
--------+-----------+-----------+----------+---------
 a      | integer[] |           |          |
Indexes:
    "i1" gin (a)
This does not make much sense right now, as we do not have sufficient data for PostgreSQL to consider the index, but as soon as we have more data the index will be helpful:
postgres=# insert into t4 select '{{1,2},{1,2}}' from generate_series(1,1000000);
INSERT 0 1000000
postgres=# explain select ctid,* from t4 where a = '{1,2}';
                            QUERY PLAN
------------------------------------------------------------------
 Bitmap Heap Scan on t4  (cost=28.00..32.01 rows=1 width=51)
   Recheck Cond: (a = '{1,2}'::integer[])
   ->  Bitmap Index Scan on i1  (cost=0.00..28.00 rows=1 width=0)
         Index Cond: (a = '{1,2}'::integer[])
(4 rows)
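A GIN index on an array column supports more than plain equality: the usual reason to create one is the array containment and overlap operators. A quick sketch, not from the original post (output omitted):

postgres=# select ctid, a from t4 where a @> ARRAY[1,2];   -- a contains both 1 and 2
postgres=# select ctid, a from t4 where a <@ ARRAY[1,2,3]; -- a is contained in {1,2,3}
postgres=# select ctid, a from t4 where a && ARRAY[2];     -- a shares at least one element with {2}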
In addition to that, PostgreSQL comes with many support functions for working with arrays, e.g. to get the length of an array:
postgres=# select array_length(a,1) from t4 limit 2;
 array_length
--------------
            1
            2
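A few more of those support functions, just to give a flavour (again a sketch, output omitted):

postgres=# select unnest(a) from t4 limit 5;                    -- expand an array into rows
postgres=# select array_agg(x) from generate_series(1,5) as x;  -- build an array from a set of rows
postgres=# select a || 7 from t1;                               -- append an element to each array in t1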
As I mentioned at the beginning of this post you can create arrays of all kinds of data types, not only integers:
postgres=# create table t5 ( a date[], b timestamp[], c text[], d point[], e boolean[] );
CREATE TABLE
postgres=# \d t5
                         Table "public.t5"
 Column |             Type              | Collation | Nullable | Default
--------+-------------------------------+-----------+----------+---------
 a      | date[]                        |           |          |
 b      | timestamp without time zone[] |           |          |
 c      | text[]                        |           |          |
 d      | point[]                       |           |          |
 e      | boolean[]                     |           |          |
Whatever you want. You can even create arrays over user-defined types:
postgres=# create type type1 as ( a int, b text );
CREATE TYPE
postgres=# create table t6 ( a type1[] );
CREATE TABLE
postgres=# \d t6
                Table "public.t6"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 a      | type1[] |           |          |
Quite powerful.
This article, Fun with arrays in PostgreSQL, appeared first on Blog dbi services.
EBS 12.2 ADOP Cycle Errors During Validation: Cannot open XML file for load
In some cases, an ADOP cycle will fail with validation errors like the following:
*******FATAL ERROR*******
PROGRAM :
(/test/apps/CLONE/fs1/EBSapps/appl/ad/12.0.0/patch/115/bin/txkADOPEvalSrvStatus.pl)
TIME : Wed Nov 13 15:50:36 2019
FUNCTION: TXK::XML::load_doc [ Level 1 ]
MESSAGES:
error = Cannot open XML file for load
errorno = No such file or directory
file =
/test/apps/CLONE/fs_ne/EBSapps/log/adop/6/fs_clone_20191113_153822/CLONE_test/TXK_EVAL_fs_clone_Wed_Nov_13_15_49_52_2014/ctx_files/CLONE_test.xml
*******FATAL ERROR*******
PROGRAM :
(/test/apps/CLONE/fs1/EBSapps/appl/ad/12.0.0/patch/115/bin/txkADOPPreparePhaseSynchronize.pl)
TIME : Wed Nov 13 15:50:36 2019
FUNCTION: main::validatePatchContextFile [ Level 1 ]
MESSAGES:
message = Access permission error on test
File CLONE_test.xml not readable
If you see the above stack, apply the fix suggested below:
- Validate the FND_NODES table for valid hostnames
- Validate the FND_OAM_CONTEXT_FILES table for the run and patch context files
- If a valid node does not have a valid run and patch context file in FND_OAM_CONTEXT_FILES, then it has to be loaded, either by running autoconfig from the respective file system or, if you do not want to run autoconfig, by loading the context file using the API below (see also the query sketch that follows)
$ADJVAPRG oracle.apps.ad.autoconfig.oam.CtxSynchronizer action=upload contextfile=$CONTEXT_FILE
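For the first two checks, queries along these lines can be run as the APPS user in SQL*Plus (a sketch only; the column names are from the standard FND tables and may vary slightly between versions, so adjust as needed):

SQL> select node_name, status from fnd_nodes;
SQL> select name, status from fnd_oam_context_files;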
Rerun the ADOP cycle.
Have fun with Oracle EBS 12.2
Parse Time
This is a note I started drafting in October 2012. It’s a case study from an optimizer (10053) trace file someone emailed to me, and it describes some of the high-level steps I went through to see if I could pinpoint what the optimizer was doing that fooled it into spending a huge amount of time optimising a statement that ultimately executed very quickly.
Unfortunately I never finished my notes and I can no longer find the trace file that the article was based on, so I don’t really know what I was planning to say to complete the last observation I had recorded.
I was prompted a couple of days ago to publish the notes so far because I was reminded, in a conversation with members of the Oak Table Network, of an article that Franck Pachot wrote a couple of years ago. In 12c Oracle Corp. introduced a time-reporting mechanism for the optimizer trace: if some optimisation step takes “too long” (1 second, by default) then the optimizer will write a “TIMER:” line into the trace file telling you what the operation was, how long it took to complete and how much CPU time it used. The default for “too long” can be adjusted by setting a “fix control”. This makes it a lot easier to find out where the time went if you see a very long parse time.
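For reference, the trace file discussed below is a 10053 (optimizer) trace, and the classic way to capture one for a problem statement is along these lines (a sketch only; the statement has to be hard parsed, e.g. via explain plan, while the event is set):

SQL> alter session set tracefile_identifier = 'parse_time';
SQL> alter session set events '10053 trace name context forever, level 1';
SQL> explain plan for select ... ;   -- the problem statement
SQL> alter session set events '10053 trace name context off';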
But let’s get back to the original trace file and drafted blog note. It started with a question on OTN and an extract from a tkprof output to back up a nasty performance issue.
=============================================================================================
What do you do about a parse time of 46 seconds ? That was the question that came up on OTN a few days ago – and here’s the tkprof output to demonstrate it.
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1     46.27      46.53          0          5          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.33       0.63        129      30331          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4     46.60      47.17        129      30336          0           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 144
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=30331 pr=129 pw=0 time=637272 us)
       863        863        863   VIEW  VM_NWVW_1 (cr=30331 pr=129 pw=0 time=637378 us cost=1331 size=10 card=1)

... and lots more lines of plan
According to tkprof, it takes 46 seconds – virtually all CPU time – to optimise this statement, then 0.63 seconds to run it. You might spot that this is 11gR2 (in fact it’s 11.2.0.3) from the fact that the second line of the “Row Source Operation” includes a report of the estimated cost of the query, which is only 1,331.
Things were actually worse than they seem at first sight; when we saw more of the tkprof output the following also showed up:
SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE NO_PARALLEL(SAMPLESUB)
       opt_param('parallel_execution_enabled', 'false') NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */
       NVL(SUM(C1),:"SYS_B_00"), NVL(SUM(C2),:"SYS_B_01")
FROM   (SELECT /*+ IGNORE_WHERE_CLAUSE NO_PARALLEL("VAL_000002") FULL("VAL_000002")
               NO_PARALLEL_INDEX("VAL_000002") */
               :"SYS_B_02" AS C1,
               CASE WHEN ... END AS C2
        FROM   "BISWEBB"."RECORDTEXTVALUE" SAMPLE BLOCK (:"SYS_B_21" , :"SYS_B_22") SEED (:"SYS_B_23") "VAL_000002"
        WHERE  ...
       ) SAMPLESUB

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        5      0.00       0.00          0          0          0           0
Execute      5      0.00       0.00          0          0          0           0
Fetch        5     21.41      24.14      11108      37331          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       15     21.41      24.15      11108      37331          0           5

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 144     (recursive depth: 1)
Number of plan statistics captured: 3

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=7466 pr=3703 pw=0 time=5230126 us)
   3137126    3137126    3137126   PARTITION HASH ALL PARTITION: 1 128 (cr=7466 pr=3703 pw=0 time=2547843 us cost=18758 size=131597088 card=3133264)
   3137126    3137126    3137126    TABLE ACCESS SAMPLE RECORDTEXTVALUE PARTITION: 1 128 (cr=7466 pr=3703 pw=0 time=2372509 us cost=18758 size=131597088 card=3133264)
This piece of SQL executed five times as the query was optimised, adding a further 24 seconds elapsed time and 21 CPU seconds which, surprisingly, weren’t included in the headline 46 seconds. The total time spent in optimising the statement was around 70 seconds, of which about 68 seconds were spent on (or waiting for) the CPU.
This is unusual – I don’t often see SQL statements taking more than a few seconds to parse – not since 8i, and not without complex partition views – and I certainly don’t expect to see a low cost query in 11.2.0.3 taking anything like 70 (or even 46) seconds to optimise.
The OP had enabled the 10046 and the 10053 traces at the same time – and since the parse time was sufficiently unusual I asked him to email me the raw trace file – all 200MB of it.
Since it’s not easy to process 200MB of trace, the first thing to do is extract a few headline details, and I thought you might be interested to hear about some of the methods I use on the rare occasions when I decide to look at a 10053.
My aim is to investigate a very long parse time and the tkprof output had already shown me that there were a lot of tables in the query, so I had the feeling that the problem would relate to the amount of work done testing possible join orders; I’ve also noticed that the dynamic sampling code ran five times – so I’m expecting to see some critical stage of the optimisation run 5 times (although I don’t know why it should).
Step 1: Use grep (or find if you’re on Windows) to do a quick check for the number of join orders considered. I’m just searching for the text “Join order[” appearing at the start of line and then counting how many times I find it:
[jonathan@linux01 big_trace]$ grep "^Join order\[" orcl_ora_25306.trc | wc -l
6266
That’s 6,266 join orders considered – let’s take a slightly closer look:
[jonathan@linux01 big_trace]$ grep -n "^Join order\[" orcl_ora_25306.trc >temp.txt
[jonathan@linux01 big_trace]$ tail -2 temp.txt
4458394:Join order[581]:  RECORDTYPEMEMBER[RTM]#9  RECORD_[VAL_000049]#13 ...... from$_subquery$_008[TBL_000020]#2
4458825:Join order[1]:  VM_NWVW_1[VM_NWVW_1]#0
The line of dots represents another 11 tables (or similar objects) in the join order. But there are only 581 join orders (apparently) before the last one in the file (which is a single view transformation). I’ve used the “-n” option with grep, so if I wanted to look at the right bit of the file I could tail the last few thousand lines, but my machine is happy to use vi on a 200MB file, and a quick search (backwards) through the file finds the number 581 in the following text (which does not appear in all versions of the trace file):
Number of join permutations tried: 581
So a quick grep for “join permutations” might be a good idea. (In the absence of this line I’d have got to the same result by directing the earlier grep for “^Join order\[“ to a file and playing around with the contents of the file.)
[jonathan@linux01 big_trace]$ grep -n "join permutations" orcl_ora_25306.trc
11495:Number of join permutations tried: 2
11849:Number of join permutations tried: 1
12439:Number of join permutations tried: 2
13826:Number of join permutations tried: 2
14180:Number of join permutations tried: 1
14552:Number of join permutations tried: 2
15938:Number of join permutations tried: 2
16292:Number of join permutations tried: 1
16665:Number of join permutations tried: 2
18141:Number of join permutations tried: 2
18550:Number of join permutations tried: 2
18959:Number of join permutations tried: 2
622799:Number of join permutations tried: 374
624183:Number of join permutations tried: 2
624592:Number of join permutations tried: 2
624919:Number of join permutations tried: 1
625211:Number of join permutations tried: 2
1759817:Number of join permutations tried: 673
1760302:Number of join permutations tried: 1
1760593:Number of join permutations tried: 2
1760910:Number of join permutations tried: 1
1761202:Number of join permutations tried: 2
2750475:Number of join permutations tried: 674
2751325:Number of join permutations tried: 2
2751642:Number of join permutations tried: 1
2751933:Number of join permutations tried: 2
2752250:Number of join permutations tried: 1
2752542:Number of join permutations tried: 2
3586276:Number of join permutations tried: 571
3587133:Number of join permutations tried: 2
3587461:Number of join permutations tried: 1
3587755:Number of join permutations tried: 2
3588079:Number of join permutations tried: 1
3588374:Number of join permutations tried: 2
4458608:Number of join permutations tried: 581
4458832:Number of join permutations tried: 1
The key thing we see here is that there are five sections of long searches and a few very small searches. Examination of the small search lists shows that they relate to some inline views which simply join a couple of tables. For each of the long searches we can see that the first join order in each set is for 14 "tables". This is where the work is going. But if you add up the number of permutations in the long searches you get a total of 2,873, which is a long way off the 6,266 that we found with our grep for "^Join order[", so where do the extra join orders come from? Let's take a closer look at the file where we dumped all the Join order lines; the last few lines look like this:
4452004:Join order[577]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4452086:Join order[577]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4453254:Join order[578]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4453382:Join order[578]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4454573:Join order[579]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4454655:Join order[579]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4455823:Join order[580]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4455905:Join order[580]:  RECORD_[VAL_000033]#10  from$_subquery$_017[TBL_000029]#1 ...
4457051:Join order[581]:  RECORDTYPEMEMBER[RTM]#9  RECORD_[VAL_000049]#13 ...
4458394:Join order[581]:  RECORDTYPEMEMBER[RTM]#9  RECORD_[VAL_000049]#13 ...
4458825:Join order[1]:  VM_NWVW_1[VM_NWVW_1]#0
Every single join order seems to have appeared twice, and doubling the counts we got for the sum of the permutations gets us close to the total we got for the join order search. Again we could zoom in a little closer: does the text near the start of the two occurrences of join order 581 give us any clues? We see the following just before the second one:
****** Recost for ORDER BY (using join row order) *******
The optimizer has tried to find a way of eliminating some of the cost by letting the table join order affect the order of the final output. Let’s do another grep to see how many join orders have been recosted:
[jonathan@linux01 big_trace]$ grep "Recost for ORDER BY" orcl_ora_25306.trc | sort | uniq -c
    452 ****** Recost for ORDER BY (using index) ************
   2896 ****** Recost for ORDER BY (using join row order) *******
So we've done a huge amount of recosting. Let's check the arithmetic: 452 + 2,896 + 2,873 = 6,221, which is remarkably close to the 6,266 we needed (we have ignored a few dozen join orders that were needed for the inline views, and the final error is too small for me to worry about).
We can conclude, therefore, that we did a huge amount of work costing a 14 table join a little over 6,000 times. It’s possible, of course, that we discarded lots of join orders very early on in the cost stage, so we could count the number of times we see a “Now joining” message – to complete a single pass on a 14 table join the optimizer will have to report “Now joining” 13 times.
[jonathan@linux01 big_trace]$ grep -n "Now joining" orcl_ora_25306.trc | wc -l
43989
Since the message appeared 44,000 times from 6,200 join orders we have an average of 7 steps evaluated per join order. Because of the way that the optimizer takes short-cuts I think this is a fairly strong clue that most of the join order calculations actually completed, or got very close to completing, over the whole 14 tables. (The optimizer remembers "partial results" from previous join order calculations, so it doesn't have to do 13 "Now joining" steps on every single join order.)
We still need to know why the optimizer tried so hard before supplying a plan – so let’s look for the “Best so far” lines, which the trace file reports each time the optimizer finds a better plan than the previous best. Here’s an example of what we’re looking for:
Cost: 206984.61  Degree: 1  Resp: 206984.61  Card: 0.00 Bytes: 632
***********************
Best so far:  Table#: 0  cost: 56.9744  card: 1.0000  bytes: 30
              Table#: 3  cost: 59.9853  card: 0.0000  bytes: 83
              Table#: 6  cost: 60.9869  card: 0.0000  bytes: 151
              Table#:10  cost: 61.9909  card: 0.0000  bytes: 185
              Table#: 5  cost: 62.9928  card: 0.0000  bytes: 253
              Table#: 2  cost: 65.0004  card: 0.0000  bytes: 306
              Table#: 1  cost: 122.4741  card: 0.0000  bytes: 336
              Table#: 8  cost: 123.4760  card: 0.0000  bytes: 387
              Table#: 4  cost: 125.4836  card: 0.0000  bytes: 440
              Table#: 7  cost: 343.2625  card: 0.0000  bytes: 470
              Table#: 9  cost: 345.2659  card: 0.0000  bytes: 530
              Table#:11  cost: 206981.5979  card: 0.0000  bytes: 564
              Table#:12  cost: 206982.6017  card: 0.0000  bytes: 598
              Table#:13  cost: 206984.6055  card: 0.0000  bytes: 632
***********************
As you can see, we get a list of the tables (identified by their position in the first join order examined) with details of accumulated cost. But just above this tabular display there’s a repeat of the cost that we end up with. So let’s write, and apply, a little awk script to find all the “Best so far” lines and then print the line two above. Here’s a suitable script, followed by a call to use it:
{
        if (index($0,"Best so far") != 0) {print NR m2}
        m2 = m1; m1 = $0;
}

awk -f cost.awk orcl_ora_25306.trc >temp.txt
There was a bit of a mess in the output – there are a couple of special cases (relating, in our trace file, to the inline views and the appearance of a “group by placement”) that cause irregular patterns to appear, but the script was effective for the critical 14 table join. And looking through the list of costs for the various permutations we find that almost all the options show a cost of about 206,000 – except for the last few in two of the five “permutation sets” that suddenly drop to costs of around 1,500 and 1,300. The very high starting cost explains why the optimizer was prepared to spend so much time trying to find a good path and why it kept working so hard until the cost dropped very sharply.
Side bar: I have an old note from OCIS (the precursor, or the precursor of the precursor, of MOS) that the optimizer will stop searching when the number of join orders tested * the number of "non-single-row" tables (according to the single table access path) * 0.3 is greater than the best cost so far. I even have a test script (run against 8.1.7.4, dated September 2002) that seems to demonstrate the formula. The formula may be terribly out of date by now and the rules of exactly how and when it applies may have changed (the model didn't seem to work when I ran it against 19.3), but the principle probably still holds true.
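If that old formula is anywhere near right it goes a long way towards explaining the behaviour above. As a purely illustrative bit of arithmetic (and assuming, which is my guess rather than anything taken from the trace, that all 13 of the tables joined to the first one count as "non-single-row"), the cut-off can hardly ever fire while the best cost is stuck around 206,000, but it fires almost immediately once the cost drops to the 1,500 region:

-- Illustration only, using the possibly obsolete cut-off rule
--      permutations tested * non-single-row tables * 0.3 > best cost so far
select
        round(206000 / (13 * 0.3))      permutations_needed_at_206000,
        round(  1500 / (13 * 0.3))      permutations_needed_at_1500
from
        dual
;

-- roughly 52,800 permutations would be needed before the rule could stop the
-- search at a best cost of 206,000, but only about 385 at a cost of 1,500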
At this point we might decide that we ought to look at the initial join order and at the join order where the cost dropped dramatically, and try to work out why Oracle picked such a bad starting join order and what it was about the better join order that the optimizer had missed. This might allow us to recognise some error in the statistics for either the "bad" or the "good" starting order and solve the problem by (e.g.) creating a column group or gathering some specific statistics. We might simply decide to take a good join order and pass it to the optimizer through a /*+ leading() */ hint, or take the entire outline and attach it to the query through a faked SQL Profile (or an embedded set of hints).
However, for the purposes of this exercise (and because sometimes you have to find a strategic solution rather than a “single statement” solution) I’m going to carry on working through mechanisms for dissecting the trace file without looking too closely at any of the fine detail.
The final “high-level” target I picked was to pin down why there were 5 sets of join orders. I had noticed something particular about the execution plan supplied – it showed several occurrences of the operation “VIEW PUSHED PREDICATE” so I wondered if this might be relevant. So I did a quick check near the start of the main body of the trace file for anything that might be a clue, and found the following just after the “QUERY BLOCK SIGNATURE”.
QUERY BLOCK SIGNATURE
---------------------
signature(): NULL
***********************************
Cost-Based Join Predicate Push-down
***********************************
JPPD: Checking validity of push-down in query block SEL$6E5D879B (#4)
JPPD:   Checking validity of push-down from query block SEL$6E5D879B (#4) to query block SEL$C20BB4FE (#6)
Check Basic Validity for Non-Union View for query block SEL$C20BB4FE (#6)
JPPD:     JPPD bypassed: View has non-standard group by.
JPPD:   No valid views found to push predicate into.
JPPD: Checking validity of push-down in query block SEL$799AD133 (#3)
JPPD:   Checking validity of push-down from query block SEL$799AD133 (#3) to query block SEL$EFE55ECA (#7)
Check Basic Validity for Non-Union View for query block SEL$EFE55ECA (#7)
JPPD:     JPPD bypassed: View has non-standard group by.
JPPD:   No valid views found to push predicate into.
JPPD: Checking validity of push-down in query block SEL$C2AA4F6A (#2)
JPPD:   Checking validity of push-down from query block SEL$C2AA4F6A (#2) to query block SEL$799AD133 (#3)
Check Basic Validity for Non-Union View for query block SEL$799AD133 (#3)
JPPD:     Passed validity checks
JPPD:   Checking validity of push-down from query block SEL$C2AA4F6A (#2) to query block SEL$6E5D879B (#4)
Check Basic Validity for Non-Union View for query block SEL$6E5D879B (#4)
JPPD:     Passed validity checks
JPPD:   Checking validity of push-down from query block SEL$C2AA4F6A (#2) to query block SEL$FC56C448 (#5)
Check Basic Validity for Non-Union View for query block SEL$FC56C448 (#5)
JPPD:     Passed validity checks
JPPD: JPPD:  Pushdown from query block SEL$C2AA4F6A (#2) passed validity checks.
Join-Predicate push-down on query block SEL$C2AA4F6A (#2)
JPPD: Using search type: linear
JPPD: Considering join predicate push-down
JPPD: Starting iteration 1, state space = (3,4,5) : (0,0,0)
As you can see we are doing cost-based join-predicate pushdown, and there are three targets which are valid for the operation. Notice the line that says “using search type: linear”, and the suggestive “starting iteration 1” – let’s look for more lines with “Starting iteration”
[jonathan@linux01 big_trace]$ grep -n "Starting iteration" orcl_ora_25306.trc
9934:GBP: Starting iteration 1, state space = (20,21) : (0,0)
11529:GBP: Starting iteration 2, state space = (20,21) : (0,C)
11562:GBP: Starting iteration 3, state space = (20,21) : (F,0)
12479:GBP: Starting iteration 4, state space = (20,21) : (F,C)
12517:GBP: Starting iteration 1, state space = (18,19) : (0,0)
13860:GBP: Starting iteration 2, state space = (18,19) : (0,C)
13893:GBP: Starting iteration 3, state space = (18,19) : (F,0)
14587:GBP: Starting iteration 4, state space = (18,19) : (F,C)
14628:GBP: Starting iteration 1, state space = (16,17) : (0,0)
15972:GBP: Starting iteration 2, state space = (16,17) : (0,C)
16005:GBP: Starting iteration 3, state space = (16,17) : (F,0)
16700:GBP: Starting iteration 4, state space = (16,17) : (F,C)
16877:JPPD: Starting iteration 1, state space = (3,4,5) : (0,0,0)
622904:JPPD: Starting iteration 2, state space = (3,4,5) : (1,0,0)
1759914:JPPD: Starting iteration 3, state space = (3,4,5) : (1,1,0)
2750592:JPPD: Starting iteration 4, state space = (3,4,5) : (1,1,1)
There are four iterations for state space (3,4,5) – and look at the huge gaps between their “Starting iteration” lines. In fact, let’s go a little closer and combine their starting lines with the lines above where I searched for “Number of join permutations tried:”
16877:JPPD: Starting iteration 1, state space = (3,4,5) : (0,0,0)
622799:Number of join permutations tried: 374
622904:JPPD: Starting iteration 2, state space = (3,4,5) : (1,0,0)
1759817:Number of join permutations tried: 673
1759914:JPPD: Starting iteration 3, state space = (3,4,5) : (1,1,0)
2750475:Number of join permutations tried: 674
2750592:JPPD: Starting iteration 4, state space = (3,4,5) : (1,1,1)
3586276:Number of join permutations tried: 571
4458608:Number of join permutations tried: 581
At this point my notes end and I don’t know where I was going with the investigation. I know that I suggested to the OP that the cost-based join predicate pushdown was having a huge impact on the optimization time and suggested he experiment with disabling the feature. (Parse time dropped dramatically, but query run-time went through the roof – so that proved a point, but wasn’t a useful strategy). I don’t know, however, what the fifth long series of permutations was for, so if I could find the trace file one of the things I’d do next would be to look at the detail a few lines before line 4,458,608 to see what triggered that part of the re-optimization. I’d also want to know whether the final execution plan came from the fifth series and could be reached without involving all the join predicate pushdown work, or whether it was a plan that was only going to appear after the optimizer had worked through all 4 iterations.
The final plan did involve all 3 pushed predicates (which looks like it might have come from iteration 4), so it might have been possible to find a generic strategy for forcing unconditional predicate pushing without doing all the expensive intermediate work.
Version 12c and beyond
That was then, and this is now. Something completely different might have appeared in 12c (or 19c), but one thing that is particularly helpful is that you can bet that every iteration of the JPPD state spaces would have produced a "TIMER:" line in the trace file, making it very easy to run grep -n "TIMER:" (or -nT, as I recently discovered) against the trace file to pinpoint the issue very quickly.
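If you want to generate an equivalent trace for one of your own statements, a minimal sketch might look like the following; the 10053 event is the classic optimizer trace event, and the fix_control value is simply the one I quote below, so check it against your own version before copying it:

alter session set tracefile_identifier = 'killer_parse';
alter session set "_fix_control" = '16923858:4';
alter session set events '10053 trace name context forever, level 1';

-- parse the problem statement here (an "explain plan for ..." is enough
-- to trigger the optimization and dump the trace)

alter session set events '10053 trace name context off';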
Here's an example from my "killer_parse.sql" query after setting "_fix_control"='16923858:4' (1e4 microseconds = 1/100th second) in an instance of 19c:
$ grep -nT TIMER or19_ora_21051.trc
16426   :TIMER:  bitmap access paths cpu: 0.104006 sec elapsed: 0.105076 sec
252758  :TIMER:  costing general plans cpu: 0.040666 sec elapsed: 0.040471 sec
309460  :TIMER:  bitmap access paths cpu: 0.079509 sec elapsed: 0.079074 sec
312584  :TIMER:  CBQT OR expansion SEL$765CDFAA cpu: 10.474142 sec elapsed: 10.508788 sec
313974  :TIMER:  Complex View Merging SEL$765CDFAA cpu: 1.475173 sec elapsed: 1.475418 sec
315716  :TIMER:  Table Expansion SEL$765CDFAA cpu: 0.046262 sec elapsed: 0.046647 sec
316036  :TIMER:  Star Transformation SEL$765CDFAA cpu: 0.029077 sec elapsed: 0.026912 sec
318207  :TIMER:  Type Checking after CBQT SEL$765CDFAA cpu: 0.220506 sec elapsed: 0.219273 sec
318208  :TIMER:  Cost-Based Transformations (Overall) SEL$765CDFAA cpu: 13.632516 sec elapsed: 13.666360 sec
328948  :TIMER:  bitmap access paths cpu: 0.093973 sec elapsed: 0.095008 sec
632935  :TIMER:  Access Path Analysis (Final) SEL$765CDFAA cpu: 7.703016 sec elapsed: 7.755957 sec
633092  :TIMER:  SQL Optimization (Overall) SEL$765CDFAA cpu: 21.539010 sec elapsed: 21.632012 sec
The closing 21.63 seconds of SQL optimization (line 633092) is largely the 7.76 seconds of final access path analysis (line 632935) plus the 13.67 seconds of Cost-Based Transformation time (line 318208), and that 13.67 seconds is mostly the 1.48 seconds of complex view merging (line 313974) plus the 10.51 seconds of CBQT OR expansion (line 312584). So let's try disabling OR expansion (alter session set "_no_or_expansion"=true;) and try again:
$ grep -nT TIMER or19_ora_22205.trc
14884   :TIMER:  bitmap access paths cpu: 0.062453 sec elapsed: 0.064501 sec
15228   :TIMER:  Access Path Analysis (Final) SEL$1 cpu: 0.256751 sec elapsed: 0.262467 sec
15234   :TIMER:  SQL Optimization (Overall) SEL$1 cpu: 0.264099 sec elapsed: 0.268183 sec
Not only was optimisation faster, the runtime was quicker too.
Warning – it’s not always that easy.
A day of conferences with the Swiss Oracle User Group
I'm not usually that excited by all these events around Oracle technologies (and beyond), but they are always a good place to learn new things and, maybe most important, to discover new ways of thinking. On that point, I was not disappointed.
Franck Pachot: serverless and distributed database
Franck talked about scaling out, which means avoiding monoliths. Most database servers today are exactly that kind of monolith. He advises us to think in terms of microservices. That's not so easy for the database component, but it could certainly simplify the management of different modules across different developer teams. Achieving scale-out also means getting rid of the old cluster technologies (think of RAC) and instead adopting "shared nothing": no storage sharing, no network sharing, and so on.
It also means a need for database replication, and for scaling the writes as well, which is the more complicated part. Sharding is a key point for scaling out (put the associated data where the users reside).
I discovered the CAP theorem, a very interesting result which shows that there is actually no ultimate solution. You need to choose your priorities: Consistency and Availability, Availability and Partition tolerance, or Consistency and Partition tolerance. Just remember to keep your database infrastructure adapted to your needs: a Google-like infrastructure is probably nice, but do you really need the same?
Kamran Aghayer: Transition from DBA to data engineer
Times are changing. I have known that for several years, but now it has become obvious: as a traditional DBA, I will soon be deprecated. Old-school DBA jobs will be replaced by a lot of new ones: data architect, data engineer, data analyst, data scientist, machine learning engineer, AI engineer, and so on.
Kamran focused on the Hadoop ecosystem, and on Spark in particular, from a project where he needed to archive data from Exadata to Hadoop (and he explained how Hadoop manages data through the HDFS filesystem and datanodes, a sort of ASM). He used a dedicated connector, a sort of wrapper based on external tables. This is actually also what is inside Oracle's Big Data Appliance. The task was outside the scope of a traditional DBA, as a good knowledge of the data was needed. So the traditional DBA is dead.
Stefan Oehrli – PDB isolation and security
Since Oracle announced the availability of 3 free PDBs with each container database, interest in Multitenant has increased.
We had an overview of the top 10 security risks, mostly about privileges: privilege abuse, unauthorized privilege elevation, platform vulnerabilities, SQL injection, etc. If you're already in the cloud with PaaS or DBaaS, the risks are the same.
We were shown several options for risk mitigation (a short syntax sketch follows the list):
– path_prefix: it’s some kind of chroot for the PDB
– pdb_os_credential (still a few bugs, but…): controls the OS credentials used by the PDB, for example through dbms_scheduler
– lockdown profiles: a tool for restricting database features such as queuing, partitioning, Java, OS access or altering the database. Restrictions can work by inclusion or exclusion.
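Here is a minimal, purely illustrative sketch of the syntax involved; the PDB name, profile name, password and path are invented, the CREATE PLUGGABLE DATABASE assumes OMF or an equivalent file-name convention is in place, and the clauses shown are only a small subset of what these features support:

-- PATH_PREFIX: confine the PDB's directory objects to a given directory tree
create pluggable database pdb_iso
        admin user pdb_admin identified by "SomePassword_1"
        path_prefix = '/u01/app/oracle/pdb_iso/';

-- Lockdown profile: switch off selected features and statements inside a PDB
create lockdown profile pdb_lock;
alter lockdown profile pdb_lock disable feature   = ('NETWORK_ACCESS');
alter lockdown profile pdb_lock disable statement = ('ALTER SYSTEM');
alter system set pdb_lockdown = pdb_lock;

-- pdb_os_credential is an init parameter set in a similar way; it references
-- a credential created with dbms_credential.create_credential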
We had already heard today that the classic DBA will soon be dead, and now came the second bullet. The fact is that Oracle has worked hard on autonomous features over the last 20 years, and as it was presented you realise that this is clearly true. Who cares about extent management now?
But there is still hope: the DBA of tomorrow is starting today. As the DBA role actually sits between the infrastructure team and the data scientists, there is a way to architect your career: keep a foot in the technical stuff, but become a champion in data analysis and machine learning.
Or focus on development with open source and the cloud. The DBA job can shift, so don't miss this opportunity.
Nikitas Xenakis – MAA with 19c and GoldenGate 19c: a real-world case study
Hey! Finally, the DBA is not dead yet! Some projects still need technical skills and complex architectures. The presented project was driven by downtime costs: for some kinds of business a serious downtime can kill the company, and the customer concerned by this project cannot afford more than 1 hour of global downtime.
We had an introduction to MAA (Maximum Availability Architecture; see the Oracle documentation for details).
You first need to estimate:
– the RPO: how much data you can afford to lose
– the RTO: how quickly you will be up again
– the performance you expect after the downtime: because it matters
The presented infrastructure was composed of RHEL, RAC with Multitenant (a single PDB), Active Data Guard and GoldenGate. The middleware was not from Oracle but was configured to work with Transparent Application Failover.
For sure, you still need several old-school DBAs to set up and manage this kind of infrastructure.
Luiza Nowak: Error when presenting your data
You can refer to the blog from Elisa USAI for more information.
For me, it was very surprising to discover how a presentation can be boring, confusing, or miss its point just because of inappropriate slides. Be precise, be captivating, and use graphics instead of sentences (and use those graphics well) if you want your presentation to have the expected impact.
Julian Frey: Database cloning in a multitenant environment
Back to pure DBA stuff. A quick reminder of why we need to clone, and what we need to clone (data, metadata, partial data, refreshed data only, anonymised data, etc.). And nowadays, always with GDPR compliance in mind!
Cloning before 12c was mainly done with the well-known tools: RMAN duplicate, Data Pump, GoldenGate, database links, storage cloning, or the embedded clone.pl script (I hadn't heard of that one before).
Starting with 12c, and only if you're using Multitenant, new convenient cloning tools are available: PDB snapshot copy, snapshot carousel, refreshable copy, and more.
I discovered that you can duplicate a PDB without actually putting the source PDB in read only mode: you just need to put your source PDB in begin backup mode, copy the files, generate the metadata file and create the database with resetlogs. Nice feature.
You have to know that cloning a PDB is native to Multitenant: a PDB is always a clone of something (at the very least, an empty PDB is created from PDB$SEED).
Note that snapshot copy of a PDB is limited to certain kinds of filesystems, the best known being NFS and ACFS. If you decide to go for Multitenant without actually having the option, don't forget to limit the maximum number of PDBs in your CDB: it's actually a parameter, max_pdbs. Another interesting feature is the possibility to create a PDB from a source PDB without the data (the tablespaces and tables are created, but empty).
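A rough sketch of the syntax behind those last points, with invented PDB names and assuming the statements are run from the root container and the source PDB is in a suitable open mode for cloning:

-- cap the number of user-created PDBs in the CDB
alter system set max_pdbs = 3 scope = both;

-- thin (snapshot) clone: only on snapshot-capable storage such as ACFS or a suitable NFS
create pluggable database pdb_thin from pdb_src snapshot copy;

-- structure-only clone: tablespaces and tables are created, but without the rows
create pluggable database pdb_nodata from pdb_src no data;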
Finally, and against all odds, Data Pump is still a great tool for most cases, so you'd better keep it in mind too.
Conclusion
This was a great event from great organizers, and even if pure Oracle DBA is probably not a job that makes younger people dream, jobs dealing with data are not going to disappear in the near future.
The article A day of conferences with the Swiss Oracle User Group appeared first on Blog dbi services.
Alpine Linux, Oracle Java JDK and musl?! - why it does not work...
So I decided, after nearly 6 years of running an aircraft noise measuring station (for dfld.de) on Ubuntu, to change to Alpine Linux.
This station runs Java software and needs RXTX, because the microphone is connected via USB and is read through /dev/ttyUSB0.
What is the problem with this setup?
- RXTX needs a Java build that is linked against glibc
- Alpine Linux does not ship glibc
Trying to run the JDK's java binary gives:

./java
ash: java: command not found

And this happens even if you are in the right directory and java has the execute bit set.
Alpine Linux changed to musl:
There are some other libc implementations (take a look here).
The homepage is https://www.musl-libc.org/:
A comparison with other libc implementations can be found at http://www.etalabs.net/compare_libcs.html:
There are some workarounds to get applications built with glibc running on Alpine Linux, but I did not manage to get my aircraft noise measuring station working, so I switched to Debian, because I needed a 32-bit Linux for my very old UMPC...
Library Cache Stats
In response to a comment that one of my notes references a call to a package "snap_libcache", I've posted this version of the SQL that can be run by SYS to create the package, with a public synonym and execute privileges granted to public. The package doesn't report the DLM (RAC) related activity, and is suitable only for 11g onwards (older versions require a massive decode of an index value to convert indx numbers into names).
rem
rem Script: snap_11_libcache.sql
rem Author: Jonathan Lewis
rem Dated: March 2001 (updated for 11g)
rem Purpose: Package to get snapshot start and delta of library cache stats
rem
rem Notes
rem Lots of changes needed by 11.2.x.x where x$kglst holds
rem two types - TYPE (107) and NAMESPACE (84) - but no
rem longer needs a complex decode.
rem
rem Has to be run by SYS to create the package
rem
rem Usage:
rem set serveroutput on size 1000000 format wrapped
rem set linesize 144
rem set trimspool on
rem execute snap_libcache.start_snap
rem -- do something
rem execute snap_libcache.end_snap
rem
create or replace package snap_libcache as
procedure start_snap;
procedure end_snap;
end;
/
create or replace package body snap_libcache as
cursor c1 is
select
indx,
kglsttyp lib_type,
kglstdsc name,
kglstget gets,
kglstght get_hits,
kglstpin pins,
kglstpht pin_hits,
kglstrld reloads,
kglstinv invalidations,
kglstlrq dlm_lock_requests,
kglstprq dlm_pin_requests,
-- kglstprl dlm_pin_releases,
-- kglstirq dlm_invalidation_requests,
kglstmiv dlm_invalidations
from x$kglst
;
type w_type1 is table of c1%rowtype index by binary_integer;
w_list1 w_type1;
w_empty_list w_type1;
m_start_time date;
m_start_flag char(1);
m_end_time date;
procedure start_snap is
begin
m_start_time := sysdate;
m_start_flag := 'U';
w_list1 := w_empty_list;
for r in c1 loop
w_list1(r.indx).gets := r.gets;
w_list1(r.indx).get_hits := r.get_hits;
w_list1(r.indx).pins := r.pins;
w_list1(r.indx).pin_hits := r.pin_hits;
w_list1(r.indx).reloads := r.reloads;
w_list1(r.indx).invalidations := r.invalidations;
end loop;
end start_snap;
procedure end_snap is
begin
m_end_time := sysdate;
dbms_output.put_line('---------------------------------');
dbms_output.put_line('Library Cache - ' ||
to_char(m_end_time,'dd-Mon hh24:mi:ss')
);
if m_start_flag = 'U' then
dbms_output.put_line('Interval:-  ' ||
trunc(86400 * (m_end_time - m_start_time)) ||
' seconds'
);
else
dbms_output.put_line('Since Startup:- ' ||
to_char(m_start_time,'dd-Mon hh24:mi:ss')
);
end if;
dbms_output.put_line('---------------------------------');
dbms_output.put_line(
rpad('Type',10) ||
rpad('Description',41) ||
lpad('Gets',12) ||
lpad('Hits',12) ||
lpad('Ratio',6) ||
lpad('Pins',12) ||
lpad('Hits',12) ||
lpad('Ratio',6) ||
lpad('Invalidations',14) ||
lpad('Reloads',10)
);
dbms_output.put_line(
rpad('----',10) ||
rpad('-----------',41) ||
lpad('----',12) ||
lpad('----',12) ||
lpad('-----',6) ||
lpad('----',12) ||
lpad('----',12) ||
lpad('-----',6) ||
lpad('-------------',14) ||
lpad('-------',10)
);
for r in c1 loop
if (not w_list1.exists(r.indx)) then
w_list1(r.indx).gets := 0;
w_list1(r.indx).get_hits := 0;
w_list1(r.indx).pins := 0;
w_list1(r.indx).pin_hits := 0;
w_list1(r.indx).invalidations := 0;
w_list1(r.indx).reloads := 0;
end if;
if (
(w_list1(r.indx).gets != r.gets)
or (w_list1(r.indx).get_hits != r.get_hits)
or (w_list1(r.indx).pins != r.pins)
or (w_list1(r.indx).pin_hits != r.pin_hits)
or (w_list1(r.indx).invalidations != r.invalidations)
or (w_list1(r.indx).reloads != r.reloads)
) then
dbms_output.put(rpad(substr(r.lib_type,1,10),10));
dbms_output.put(rpad(substr(r.name,1,41),41));
dbms_output.put(to_char(
r.gets - w_list1(r.indx).gets,
'999,999,990')
);
dbms_output.put(to_char(
r.get_hits - w_list1(r.indx).get_hits,
'999,999,990'));
dbms_output.put(to_char(
(r.get_hits - w_list1(r.indx).get_hits)/
greatest(
r.gets - w_list1(r.indx).gets,
1
),
'999.0'));
dbms_output.put(to_char(
r.pins - w_list1(r.indx).pins,
'999,999,990')
);
dbms_output.put(to_char(
r.pin_hits - w_list1(r.indx).pin_hits,
'999,999,990'));
dbms_output.put(to_char(
(r.pin_hits - w_list1(r.indx).pin_hits)/
greatest(
r.pins - w_list1(r.indx).pins,
1
),
'999.0'));
dbms_output.put(to_char(
r.invalidations - w_list1(r.indx).invalidations,
'9,999,999,990')
);
dbms_output.put(to_char(
r.reloads - w_list1(r.indx).reloads,
'9,999,990')
);
dbms_output.new_line;
end if;
end loop;
end end_snap;
begin
select
startup_time, 'S'
into
m_start_time, m_start_flag
from
v$instance;
end snap_libcache;
/
drop public synonym snap_libcache;
create public synonym snap_libcache for snap_libcache;
grant execute on snap_libcache to public;
You’ll note that there are two classes of data, “namespace” and “type”. The dynamic view v$librarycache reports only the namespace rows.
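For a quick cross-check of the package output, the namespace class corresponds (roughly) to what the public view shows, so a simple query like the following gives you comparable cumulative figures:

select
        namespace, gets, gethits, pins, pinhits, reloads, invalidations
from
        v$librarycache
order by
        namespace
;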
PostgreSQL 12 : Setting Up Streaming Replication
PostgreSQL 12 was released a few months ago. When setting up a replication there is no longer a recovery.conf file in the PGDATA: all the parameters that used to go into recovery.conf should now be in the postgresql.conf file, and in the cluster data directory of the standby server there should be a file named standby.signal to trigger standby mode.
In this blog I am simply building a streaming replication between two servers to show these changes. The configuration we are using is:
Primary server dbi-pg-essentials : 192.168.56.101
Standby server dbi-pg-essentials-2 : 192.168.56.102
The primary instance is up and running on the dbi-pg-essentials server.
postgres@dbi-pg-essentials:/u02/pgdata/12/PG1/ [PG12] pg12
********* dbi services Ltd. *********
STATUS               : OPEN
ARCHIVE_COMMAND      : test ! -f /u99/pgdata/12/archived_wal/%f && cp %p /u99/pgdata/12/archived_wal/%f
ARCHIVE_MODE         : on
EFFECTIVE_CACHE_SIZE : 4096MB
FSYNC                : on
SHARED_BUFFERS       : 128MB
SYNCHRONOUS_COMMIT   : on
WORK_MEM             : 4MB
IS_STANDBY           : false
*************************************
postgres@dbi-pg-essentials:/u02/pgdata/12/PG1/ [PG12]
Step 1 : Prepare the user for the replication on the primary server
For streaming replication we need a user to read the WAL stream. We can do it with a superuser, but that is not required, so we will create a user with the REPLICATION and LOGIN privileges. Contrary to the SUPERUSER privilege, the REPLICATION privilege gives very high permissions but does not allow modifying any data.
Here we will create a user named repliuser
postgres=# create user repliuser with password 'postgres' replication ;
CREATE ROLE
postgres=#
Step 2 : Prepare the authentication on the primary server
The user used for the replication should be allowed to connect for replication. We then need to adjust the pg_hba.conf file on the two servers.
postgres@dbi-pg-essentials:/u02/pgdata/12/PG1/ [PG12] grep repliuser pg_hba.conf
host    replication     repliuser       192.168.56.101/32       md5
host    replication     repliuser       192.168.56.102/32       md5
postgres@dbi-pg-essentials:/u02/pgdata/12/PG1/ [PG12]
Step 3 : Create a replication slot on the primary server
Replication slots provide an automated way to ensure that the master does not remove WAL segments until they have been received by all standbys, and that the master does not remove rows which could cause a recovery conflict even when the standby is disconnected.
psql (12.1 dbi services build)
Type "help" for help.

postgres=# SELECT * FROM pg_create_physical_replication_slot('pg_slot_1');
 slot_name | lsn
-----------+-----
 pg_slot_1 |
(1 row)

postgres=#
Step 4 : Do a backup of the primary database and restore it on the standby
From the standby server, launch the following command:
postgres@dbi-pg-essentials-2:/u02/pgdata/12/PG1/ [PG12] pg_basebackup -h 192.168.56.101 -D /u02/pgdata/12/PG1 --wal-method=fetch -U repliuser
Step 5 : Set the primary connection info for streaming on the standby side
The host name and port number of the primary, the connection user name and the password are specified in primary_conninfo. Here there is a small change: as there is no longer a recovery.conf file, primary_conninfo must now be specified in postgresql.conf.
postgres@dbi-pg-essentials-2:/u02/pgdata/12/PG1/ [PG12] grep primary postgresql.conf
primary_conninfo = 'host=192.168.56.101 port=5432 user=repliuser password=postgres'
primary_slot_name = 'pg_slot_1'         # replication slot on sending server
Step 6 : Create the standby.signal file on standby server
In the cluster data directory of the standby, create a file standby.signal
postgres@dbi-pg-essentials-2:/u02/pgdata/12/PG1/ [PG12] pwd
/u02/pgdata/12/PG1
postgres@dbi-pg-essentials-2:/u02/pgdata/12/PG1/ [PG12] touch standby.signal
Step 7 : Then start the standby cluster
postgres@dbi-pg-essentials-2:/u02/pgdata/12/PG1/ [PG12] pg_ctl start
If everything is fine, you should find the following in the server log file:
2019-11-16 17:41:21.552 CET [1590] LOG:  database system is ready to accept read only connections
2019-11-16 17:41:21.612 CET [1596] LOG:  started streaming WAL from primary at 0/5000000 on timeline 1
As confirmed by the dbi DMK tool, the master is now streaming to the standby server.
********* dbi services Ltd. *********
STATUS               : OPEN
ARCHIVE_COMMAND      : test ! -f /u99/pgdata/12/archived_wal/%f && cp %p /u99/pgdata/12/archived_wal/%f
ARCHIVE_MODE         : on
EFFECTIVE_CACHE_SIZE : 4096MB
FSYNC                : on
SHARED_BUFFERS       : 128MB
SYNCHRONOUS_COMMIT   : on
WORK_MEM             : 4MB
IS_STANDBY           : false
IS_MASTER            : YES, streaming to 192.168.56.102/32
*************************************
postgres@dbi-pg-essentials:/u02/pgdata/12/PG1/ [PG12]
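If you don't have DMK at hand, the standard catalog views report the same information; a minimal check might look like this (the column selection is just a suggestion):

-- on the primary: one row per connected WAL sender / standby
select client_addr, state, sent_lsn, replay_lsn, sync_state
from   pg_stat_replication;

-- on the primary: make sure the replication slot is actually in use
select slot_name, active, restart_lsn
from   pg_replication_slots;

-- on the standby: returns true while the server is in recovery (i.e. acting as a standby)
select pg_is_in_recovery();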
The article PostgreSQL 12 : Setting Up Streaming Replication appeared first on Blog dbi services.
