Feed aggregator

Language of APEX Office Hours sessions

Tom Kyte - 6 hours 8 min ago
I need English only, thanks
Categories: DBA Blogs

Filters order in SQL query

Tom Kyte - 6 hours 8 min ago
Hello, Ask Tom Team. I want to know how to put the filters in a SQL query in the correct order, I mean in the WHERE condition. 1. How can we help the optimizer in order to have the best performance? I know that it tries to execute the best plan ba...
Categories: DBA Blogs

Kubernetes Taints/Tolerations and Node Affinity for Dummies

Pakistan's First Oracle Blog - Thu, 2020-04-09 21:26

To guarantee which pod goes to which node in a Kubernetes cluster, the concepts of taints/tolerations and node affinity are used. With taints/tolerations, we taint a node with a specific key/value pair and then add a matching toleration to the pod manifest; any pod that does not have that toleration will not be scheduled on the tainted node. To ensure that this tolerated pod also goes only to the tainted node, we additionally add a node affinity to the pod manifest.


In other words, taints/tolerations are used to repel undesired pods, whereas node affinity is used to guide the Kubernetes scheduler to place a specific pod on a specific node.

So why do we need both taints/tolerations and node affinity? To guarantee that a pod lands on its intended node. Taints/tolerations keep undesired pods away from a node, but they do not guarantee that the desired pod will actually be placed on that node. To guarantee that, we use node affinity.

The following is a complete example with 4 deployments (red, blue, green, other) and 4 worker nodes (node01, node02, node03, node04).

We have labelled the nodes with their respective colors and added a taint with the same key/value pair. Then we added a matching toleration to each deployment. For node01, for example, the label is red and the taint is also red, so any pod that does not tolerate the red taint will not be scheduled on this node. We then added a node affinity that ensures red pods are only placed on the node labelled red. The same logic is used for the other deployments.


For Node red:

kubectl label node node01 color=red
kubectl taint node node01 color=red:NoSchedule

For Deployment red:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: red
spec:
  replicas: 1
  selector:
    matchLabels:
      color: red
  template:
    metadata:
      labels:
        color: red
    spec:
      containers:
      - name: nginx
        image: nginx
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: color
                operator: In
                values:
                - red
      tolerations:
      - key: "color"
        operator: "Equal"
        value: "red"
        effect: "NoSchedule"


For Node blue:

kubectl label node node02 color=blue
kubectl taint node node02 color=blue:NoSchedule

For Deployment blue:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue
spec:
  replicas: 1
  selector:
    matchLabels:
      color: blue
  template:
    metadata:
      labels:
        color: blue
    spec:
      containers:
      - name: nginx
        image: nginx
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: color
                operator: In
                values:
                - blue
      tolerations:
      - key: "color"
        operator: "Equal"
        value: "blue"
        effect: "NoSchedule"

For Node green:

kubectl label node node03 color=green
kubectl taint node node03 color=green:NoSchedule

For Deployment green:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: green
spec:
  replicas: 1
  selector:
    matchLabels:
      color: green
  template:
    metadata:
      labels:
        color: green
    spec:
      containers:
      - name: nginx
        image: nginx
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: color
                operator: In
                values:
                - green
      tolerations:
      - key: "color"
        operator: "Equal"
        value: "green"
        effect: "NoSchedule"


For Node Other:

kubectl label node node04 color=other

For Deployment other:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: other
spec:
  replicas: 2
  selector:
    matchLabels:
      color: other
  template:
    metadata:
      labels:
        color: other
    spec:
      containers:
      - name: nginx
        image: nginx
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: color
                operator: In
                values:
                - other
      tolerations:
      - key: "color"
        operator: "Equal"
        value: "other"
        effect: "NoSchedule"

Hope this helps!!!
Categories: DBA Blogs

Create Your First VBCS Instance in Oracle Cloud

Online Apps DBA - Thu, 2020-04-09 08:45

Are you a beginner who has just started their journey to learn Oracle Integration Cloud (OIC) and wants to get familiar with Oracle Visual Builder Cloud Service (VBCS)? If YES, then check out K21Academy’s blog post at https://k21academy.com/oic28 that tells you about: • VBCS • Setting Up Visual Builder On Oracle Cloud • […]

The post Create Your First VBCS Instance in Oracle Cloud appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Process large file (above 10MB) in Oracle Integration Cloud Service (OIC)

Online Apps DBA - Thu, 2020-04-09 08:28

Integration Cloud offers several design and modeling options to process large files. The ‘Download File’ option allows files of up to 1 GB in the Integration Cloud local file system. If you want more detail, then check out K21Academy’s blog post at https://k21academy.com/oic27 that covers: • Handling Files Less Than 1MB • Large File Handling • […]

The post Process large file (above 10MB) in Oracle Integration Cloud Service (OIC) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Build Oracle APEX application that can run on Android based tabs without Internet having own LAN

Tom Kyte - Thu, 2020-04-09 08:06
Hi, we have an Oracle APEX application running on our LAN, and we want to use the same on Android-based tabs. I'm a little confused how to go about it, as we have no Internet access on our office PCs on which the app is running and the same SOP will be appl...
Categories: DBA Blogs

Convert arbitrary length numbers from decimal to binary to decimal in PLSQL

Tom Kyte - Thu, 2020-04-09 08:06
Hi, we are storing a 100-digit number in a varchar2 field in an Oracle database table column. In PL/SQL code, we need to convert this 100-digit number to its binary equivalent (300+ digits), add some more digits to the binary and then convert this b...
Categories: DBA Blogs

Converting Unique Index to Non-Unique

Tom Kyte - Thu, 2020-04-09 08:06
Hi, TOM! Some time ago I decided to change my unique index to non-unique because of new requirements. I googled a lot but didn't find anything about conversion. Though I found solutions like creating a new non-unique index with extra cons...
Categories: DBA Blogs

BigQuery Materialized Views and Oracle Materialized Views

Pakistan's First Oracle Blog - Wed, 2020-04-08 18:33
One of the common one-to-many replication setups in Oracle databases involves, at a high level, one master transactional database which holds the transactions; an MView log is then created on the master table.


All the other reporting databases then subscribe their respective materialized views (MViews) to this log table. These MViews stay in sync with the master log table through incremental or complete refreshes. As long as it runs fine, it runs fine; but when things break, it gets ugly, and I mean ugly. The MViews on the reporting databases can lag behind the master log due to network issues or because the master database goes down. Doing a complete refresh is also a nightmare, and you have to do lots of purging and tinkering. The more subscribing MViews there are, the more hassle it is when things break.

BigQuery is Google's managed data warehousing service, which now offers materialized views. If you have managed Oracle MViews, it brings you to tears when you learn that BigQuery MViews offer the following:

Zero maintenance: A materialized view is recomputed in the background once the base table has changed. All incremental data changes from the base tables are automatically added to the materialized views. No user inputs are required.

Always fresh: A materialized view is always consistent with the base table, including BigQuery streaming tables. If a base table is modified via update, merge, partition truncation, or partition expiration, BigQuery will invalidate the impacted portions of the materialized view and fully re-read the corresponding portion of the base table. For an unpartitioned materialized view, BigQuery will invalidate the entire materialized view and re-read the entire base table. For a partitioned materialized view, BigQuery will invalidate the affected partitions of the materialized view and re-read the entire corresponding partitions from the base table. Partitions that are append-only are not invalidated and are read in delta mode. In other words, there will never be a situation when querying a materialized view results in stale data.

Smart tuning: If a query or part of a query against the source table can instead be resolved by querying the materialized view, BigQuery will rewrite (reroute) the query to use the materialized view for better performance and/or efficiency.

In my initial testing, things work like a charm and a refresh takes at most a couple of minutes. I will be posting some tests here very soon. But suffice it to say that delegating management of MView refresh to Google is reason enough to move to BigQuery.
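
As a quick illustration of how little setup is involved, here is a minimal sketch that creates a materialized view from Python using the google-cloud-bigquery client. It is only a sketch under assumed names: the project, dataset and table (my-project.sales.transactions) are made up, and application default credentials are assumed.

# Sketch: create a BigQuery materialized view via the Python client.
# "my-project.sales.transactions" is a hypothetical base table; adjust the
# names and credentials for your own environment.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

ddl = """
CREATE MATERIALIZED VIEW `my-project.sales.mv_store_totals` AS
SELECT store_id, SUM(amount) AS total_amount
FROM `my-project.sales.transactions`
GROUP BY store_id
"""
client.query(ddl).result()   # wait for the DDL job to finish

# A query that aggregates the base table the same way may be transparently
# rewritten by BigQuery to read the materialized view (the smart tuning above).
rows = client.query(
    "SELECT store_id, SUM(amount) AS total_amount "
    "FROM `my-project.sales.transactions` GROUP BY store_id"
).result()
for row in rows:
    print(row.store_id, row.total_amount)

From there, refresh and query rewrite are BigQuery's problem, not yours.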


Categories: DBA Blogs

Practical Oracle SQL Webinar by ACE Director Kim Berg Hansen

Gerger Consulting - Wed, 2020-04-08 14:26
ACE Director Kim Berg Hansen is the author of the book "Practical Oracle SQL, Mastering the Full Power of Oracle Database".



Kim Berg Hansen is known to present complicated SQL features in a very accessible way to developers so that they can apply these features in their daily work.

In this webinar, Kim will present several SQL techniques, taken from his new book Practical Oracle SQL, and show you how you can apply them in real life scenarios.

Kim will cover the following topics:
  • Tree Calculations with Recursion (Recursive subquery factoring)
  • Functions Defined Within SQL (Functions in the WITH clause)
  • Answering Top-N Questions (Analytic ranking functions)
  • Rolling Sums to Forecast Reaching Minimums (Analytic window clause, recursive subquery factoring, model clause)
  • Merging Date Ranges (Row pattern matching MATCH_RECOGNIZE)
Categories: Development

Kindly help with Data Integration Related Terms in below

Tom Kyte - Wed, 2020-04-08 13:46
Hello TOM, may I know what we mean by the below terms, with examples: Data Outages, Siloed Data, Data Automation, Data Profiling, Push Down, Data Movement. Thanks, Rajneesh
Categories: DBA Blogs

After logon Trigger

Tom Kyte - Wed, 2020-04-08 13:46
Tom, what I am trying to accomplish is basically to monitor anyone (mainly DBAs) who connects to the database using "system". I know that you have advocated using "audit connect by system" but I want to have more flexibility (for example,...
Categories: DBA Blogs

Announcing: Integration Tools Update Course

Jim Marion - Tue, 2020-04-07 21:34

PeopleSoft Integration Tools is normally taught as a five-day introductory course designed to answer questions such as:

  • What is Integration Broker?
  • How do you configure it?
  • What are Service Operations?
  • How should I integrate with PeopleSoft?

But a lot of us already know the basics of Integration Broker. So last fall, we decided to add another course to our library: PeopleTools Integration Tools Update. This is a three-day class that shows what's new with Integration Broker and covers topics such as:

  • Producing and Consuming REST,
  • Constructing Documents and understanding the role of Documents in integration,
  • Creating structured and unstructured JSON responses,
  • Understanding and implementing content negotiation,
  • Responding to REST verbs,
  • Using pre-built services such as Query Access Service,
  • Securing integration points.

As we integrate Chatbots and Cloud solutions with PeopleSoft, it is critical that we understand PeopleSoft's modern integration capabilities.

Our next class begins April 14, 2020. Register now to reserve your seat!

How to print ref cursor and nested table as out parameter in procedure using anonymous

Tom Kyte - Tue, 2020-04-07 19:26
Hi Sir, I want to print all out parameters using SQL Developer and an anonymous block. Please refer to the scripts below: CREATE TABLE EMP ( empno number(4,0) not null, ename varchar2(10 byte), job varchar2(9 byte), mgr number(4,0), hiredate da...
Categories: DBA Blogs

GoldenGate 19c error - ORA-12549: TNS:operating system resource quota exceeded

Tom Kyte - Tue, 2020-04-07 19:26
I've tried to start two Extracts. One is started and works, but when I try to start one more Extract I get this error: 2020-04-06T15:27:30.549+0300 ERROR OGG-02615 Oracle GoldenGate Capture for Oracle, e_rf.prm: Login to the database as user GGA...
Categories: DBA Blogs

Spool sql query output in single excel file but in two separate work sheets

Tom Kyte - Tue, 2020-04-07 19:26
I have written a shell script which connects to an Oracle DB (11g) and spools output into Excel (.csv / .xls). This works perfectly fine for fetching data into a single Excel file. I want to enhance my script so that it connects to the DB and executes 2 different querie...
Categories: DBA Blogs

database authentication

Tom Kyte - Tue, 2020-04-07 19:26
Could you please explain the difference between OS authentication and database authentication and how it is done, briefly... I have read about it but am unable to understand it completely (w.r.t. Oracle 9i).
Categories: DBA Blogs

MemOptimized Rowstore

Tom Kyte - Tue, 2020-04-07 19:26
I tried to create a table while learning from a tutorial: create table key_val_pair( id number constraint id_pk primary key, key_val varchar2(100), key_name varchar2(100)) segment creation immediate memoptimize for read; it is sh...
Categories: DBA Blogs

How to email from Python

Bobby Durrett's DBA Blog - Tue, 2020-04-07 15:25

I got an email asking how to send an alert email from a Python script. Here is an example based on a script I have set up:
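
A minimal sketch of such a script, using only the Python standard library and assuming a plain, unauthenticated SMTP relay (the addresses below are placeholders), could look like this:

# Minimal alert-mail sketch using only the standard library.
# Assumes an internal SMTP relay that accepts unauthenticated mail;
# the From/To addresses are placeholders.
import smtplib
from email.message import EmailMessage

def send_alert(subject, body):
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = "dba-alerts@example.com"
    msg["To"] = "oncall@example.com"
    msg.set_content(body)
    with smtplib.SMTP("PUT YOUR SMTP SERVER HERE") as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    send_alert("Database alert", "Something needs attention.")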

You need to replace “PUT YOUR SMTP SERVER HERE” with the host name of your SMTP server.

Bobby

Categories: DBA Blogs

Using a video capture usb stick with Linux / Ubuntu

Dietrich Schroff - Tue, 2020-04-07 13:04
Three weeks ago I ordered a video capture USB stick at Banggood and today it arrived:




So will this device work with my Ubuntu?

I inserted the stick on my laptop and dmesg showed:

[123483.143071] hid-generic 0003:534D:0021.0002: hiddev0,hidraw1: USB HID v1.10 Device [MACROSILICON AV TO USB2.0] on usb-0000:00:14.0-1/input4

That was all...
But there was a module missing:

modprobe uvcvideo
and after that:
[125822.751366] usb 1-1: new high-speed USB device number 19 using xhci_hcd
[125822.901442] usb 1-1: New USB device found, idVendor=534d, idProduct=0021
[125822.901449] usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[125822.901454] usb 1-1: Product: AV TO USB2.0
[125822.901458] usb 1-1: Manufacturer: MACROSILICON
[125822.901478] usb 1-1: SerialNumber: 20150130
[125822.902702] uvcvideo: Found UVC 1.00 device AV TO USB2.0 (534d:0021)
[125822.902835] uvcvideo: UVC non compliance - GET_DEF(PROBE) not supported. Enabling workaround.
[125822.904681] uvcvideo 1-1:1.0: Entity type for entity Processing 2 was not initialized!
[125822.904691] uvcvideo 1-1:1.0: Entity type for entity Camera 1 was not initialized!
[125822.911070] hid-generic 0003:534D:0021.0004: hiddev0,hidraw1: USB HID v1.10 Device [MACROSILICON AV TO USB2.0] on usb-0000:00:14.0-1/input4
Checking with
root@zerberus:~# ffmpeg -sources |grep video
Auto-detected sources for video4linux2,v4l2:
  /dev/video1 [AV TO USB2.0]
  /dev/video0 [HD WebCam: HD WebCam]
plus:
root@zerberus:~#  ffmpeg -list_formats all -i /dev/video1

[video4linux2,v4l2 @ 0x55b0d1ca38c0] Compressed:       mjpeg :          Motion-JPEG : 480x320 640x480 720x480
[video4linux2,v4l2 @ 0x55b0d1ca38c0] Raw       :     yuyv422 :           YUYV 4:2:2 : 480x320
/dev/video1: Immediate exit requested

Now I was able to open /dev/video1 with VLC:



The video came from a camera connected via TS832 --[5.8GHz]--> RC832...
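
As an alternative quick check, a single frame can also be grabbed from the device with OpenCV; this is just a sketch, assuming the opencv-python package is installed and that the stick still shows up as /dev/video1 (as in the ffmpeg output above):

# Sketch: grab one frame from the capture stick with OpenCV instead of VLC.
# Assumes opencv-python is installed and the device is /dev/video1 as above.
import cv2

cap = cv2.VideoCapture("/dev/video1")   # open the V4L2 device
ok, frame = cap.read()                  # read a single frame
if ok:
    cv2.imwrite("capture.png", frame)   # save it for a quick look
else:
    print("could not read a frame from /dev/video1")
cap.release()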




