Feed aggregator

transportable DBF Import in 12c

Tom Kyte - Mon, 2019-12-02 17:52
I'm trying to import transportable data files into Oracle DB 12.2. These files were exported as transportable from Oracle DB 11.1. I receive the following error. ORA-39123: Data Pump transportable tablespace job aborted ORA-19721: Cannot find dat...
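For reference, a transportable import along those lines looks roughly like the sketch below (the directory, dump file and datafile paths are placeholders). ORA-19721 typically indicates that a datafile listed in transport_datafiles cannot be found at the given location, so that is the first thing to verify:

$ impdp system@pdb1 directory=DATA_PUMP_DIR dumpfile=tts_exp.dmp logfile=tts_imp.log \
    transport_datafiles='/u01/app/oracle/oradata/PDB1/users01.dbf'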
Categories: DBA Blogs

How to extract table data into CSV file dynamically using generic procedure

Tom Kyte - Mon, 2019-12-02 17:52
Hi, need help on how to generate a CSV file for a given table name dynamically using a PL/SQL procedure. I understand we can use the UTL_FILE Oracle package to generate the CSV file; however, I would like to know how we can create a generic script which ...
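A minimal sketch of such a generic procedure (the directory object and all names below are made up for the example): parse a select * from the given table with DBMS_SQL, describe its columns at runtime, and write header and rows through UTL_FILE:

create or replace procedure dump_table_to_csv(
  p_tname    in varchar2,
  p_dir      in varchar2,
  p_filename in varchar2 )
is
  l_output utl_file.file_type;
  l_cursor integer default dbms_sql.open_cursor;
  l_colval varchar2(4000);
  l_status integer;
  l_colcnt number := 0;
  l_sep    varchar2(1);
  l_desc   dbms_sql.desc_tab;
begin
  l_output := utl_file.fopen( p_dir, p_filename, 'w' );
  -- describe the columns of the requested table at runtime
  dbms_sql.parse( l_cursor,
    'select * from ' || dbms_assert.sql_object_name( p_tname ),
    dbms_sql.native );
  dbms_sql.describe_columns( l_cursor, l_colcnt, l_desc );
  -- header line; every column is fetched as VARCHAR2
  for i in 1 .. l_colcnt loop
    utl_file.put( l_output, l_sep || l_desc(i).col_name );
    dbms_sql.define_column( l_cursor, i, l_colval, 4000 );
    l_sep := ',';
  end loop;
  utl_file.new_line( l_output );
  l_status := dbms_sql.execute( l_cursor );
  while dbms_sql.fetch_rows( l_cursor ) > 0 loop
    l_sep := '';
    for i in 1 .. l_colcnt loop
      dbms_sql.column_value( l_cursor, i, l_colval );
      utl_file.put( l_output, l_sep || l_colval );
      l_sep := ',';
    end loop;
    utl_file.new_line( l_output );
  end loop;
  dbms_sql.close_cursor( l_cursor );
  utl_file.fclose( l_output );
end;
/

Called as exec dump_table_to_csv('EMP','MY_DIR','emp.csv'); escaping of values that themselves contain commas or quotes is left out of the sketch.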
Categories: DBA Blogs

Multiple Schema Oracle Wallet

Tom Kyte - Mon, 2019-12-02 17:52
Dear AskTom, I have a shell script that connects as several different users to the same database. From Oracle: You can store multiple credentials for multiple databases in one client wallet. You cannot store multiple credentials (for logging i...
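The usual workaround, sketched below, is to key the credentials by TNS alias: create one alias per user in tnsnames.ora, all with identical connect data pointing to the same database, and store one credential per alias (alias names and the wallet path are made up here):

$ mkstore -wrl /u01/app/oracle/wallet -create
$ mkstore -wrl /u01/app/oracle/wallet -createCredential mydb_user1 user1 secret1
$ mkstore -wrl /u01/app/oracle/wallet -createCredential mydb_user2 user2 secret2

The shell script can then do sqlplus /@mydb_user1 and sqlplus /@mydb_user2 against the same database.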
Categories: DBA Blogs

Practical Application Performance Tuning: An nVision Case Study

David Kurtz - Mon, 2019-12-02 16:41
I gave this presentation at the UKOUG Techfest 19 conference.  It is closely based on a previous presentation about PeopleSoft nVision performance tuning, and uses the experience of a PeopleSoft project as a case study, so I am also posting here on my PeopleSoft blog.
This video was produced as a part of the preparation for this session.  The slide deck is also available on my website.

Learning about and understanding the principles and mechanics of the Oracle database is fundamentally important for both DBAs and developers. It is one of the reasons we still attend physical conferences.
This presentation tells the story of a performance tuning project for the GL reporting on a Financials system on an engineered system. It required various techniques and features to be brought to bear. Having a theoretical understanding of how the database and various features work allowed us to make reasonable predictions about whether they would be effective in our environment. Some ideas were discounted, some were taken forward.
We will look at instrumentation, ASH, statistics collection, partitioning, hybrid columnar compression, Bloom filtering, and SQL profiles. All of them played a part in the solution; some added further complications that had to be worked around, some had to be carefully integrated with the application, and some required reconfiguration of the application in order to work properly.
Ultimately, performance improvement is an experimental science, and it requires a similarly rigorous thought process.

Create a Vagrant box with Oracle Linux 7 Update 7 Server with GUI

Darwin IT - Mon, 2019-12-02 12:31
Yesterday and today I have been attending the UKOUG TechFest '19 in Brighton. And it got me eager to try things out. For instance with new Oracle DB 19c features. And therefore I should update my Vagrant boxes to be able to install one. But I realized my base box is still on Oracle Linux 7U5, and so I wanted to have a fresh, latest OL 7U7 box.
Use Oracle's base box
Now, last year I wrote about how to create your own Vagrant Base Box: Oracle Linux 7 Update 5 is out: time to create a new Vagrant Base Box. So I could create my own, but already quite some time ago I found out that Oracle supplies those base boxes.

They're made available at https://yum.oracle.com/boxes, and there are boxes for OL6, OL7 and even OL8. I want to use OL 7U7, and thus I got started with that one. It's neatly described at the mentioned link and it all comes down to:

$ vagrant box add --name <name> <url>
$ vagrant init <name>
$ vagrant up
$ vagrant ssh

And in my case:

$ vagrant box add --name ol77 https://yum.oracle.com/boxes/oraclelinux/ol77/ol77.box
$ vagrant init ol77
$ vagrant up
$ vagrant ssh

Before you do that vagrant up, you might want to edit your Vagrantfile to add a name for your VM:
BOX_NAME="ol77"
VM_NAME="ol77"
# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at
  # https://docs.vagrantup.com.

  # Every Vagrant development environment requires a box. You can search for
  # boxes at https://vagrantcloud.com/search.
  config.vm.box = BOX_NAME

  ...

  # Provider-specific configuration so you can fine-tune various
  # backing providers for Vagrant. These expose provider-specific options.
  # Example for VirtualBox:
  #
  config.vm.provider "virtualbox" do |vb|
    vb.name = VM_NAME
    # # Display the VirtualBox GUI when booting the machine
    # vb.gui = true
    #
    # # Customize the amount of memory on the VM:
    # vb.memory = "1024"
  end
  #
  ...

Otherwise your VM name in VirtualBox would be something like ol7_default_1235897983: something cryptic with a random number.

If you do a vagrant up now it will boot up nicely.

VirtualBox Guest Additions
The VirtualBox Guest Additions in the box are at version 6.12, while my VirtualBox installation already has 6.14. I found it handy to have a plugin that auto-updates them. My co-Oracle-ACE Maarten Smeets wrote about that earlier. It comes down to executing the following on a command line:
vagrant plugin install vagrant-vbguest

If you do a vagrant up now, it will update the Guest Additions. However, to be able to do so, it needs to install all kinds of kernel packages to compile the drivers. So be aware that this might take some time, and you'll need an internet connection.
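If that is not what you want on a particular box (for example when there is no internet connection), the plugin's auto-update can be switched off in the Vagrantfile with its documented option:

config.vbguest.auto_update = false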
Server with GUI
The downloaded box is a Linux server install, without a UI. This is probably fine for most of the installations you do. But I like to be able to log on to the desktop from time to time, and I want to be able to connect to it using MobaXterm and run a UI-based installer or application. A bit of X support is handy. How to do that, I found at this link.
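As a side note, for the MobaXterm part you can enable X11 forwarding over vagrant ssh in the Vagrantfile; MobaXterm's built-in X server then displays whatever you start in that session:

config.ssh.forward_x11 = true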

GUI support comes from one of the package groups that are supported by Oracle Linux 7, and this works exactly the same as in RHEL7 (wonder why that is?).

To list the available package groups, you can do:

[vagrant@localhost ~]$ sudo  yum group list
There is no installed groups file.
Maybe run: yum groups mark convert (see man yum)
Available Environment Groups:
Minimal Install
Infrastructure Server
File and Print Server
Cinnamon Desktop
MATE Desktop
Basic Web Server
Virtualization Host
Server with GUI
Available Groups:
Backup Client
Base
Cinnamon
Compatibility Libraries
Console internet tools
Development tools
E-mail server
Educational Software
Electronic Lab
Fedora Packager
Fonts
General Purpose Desktop
Graphical Administration Tools
Graphics Creation Tools
Hardware monitoring utilities
Haskell
Input Methods
Internet Applications
KDE Desktop
Legacy UNIX Compatibility
MATE
Milkymist
Network Infrastructure Server
Networking Tools
Office Suite and Productivity
Performance Tools
Scientific support
Security Tools
Smart card support
System Management
System administration tools
Technical Writing
TurboGears application framework
Web Server
Web Servlet Engine
Xfce
Done

(After having executed vagrant ssh.)
You'll find 'Server with GUI' as one of the options. This will install all the necessary packages to run Gnome. But if you want to have KDE, there's also a package group for that (for example, as sketched right below).
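Taking the group name straight from the listing above, that would be (I did not try this one myself):

[vagrant@localhost ~]$ sudo yum groupinstall 'KDE Desktop'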

To install 'Server with GUI' you would run:
[vagrant@localhost ~]$ sudo yum groupinstall 'Server with GUI'
There is no installed groups file.
Maybe run: yum groups mark convert (see man yum)
Resolving Dependencies
--> Running transaction check
---> Package ModemManager.x86_64 0:1.6.10-3.el7_6 will be installed
--> Processing Dependency: ModemManager-glib(x86-64) = 1.6.10-3.el7_6 for package: ModemManager-1.6.10-3.el7_6.x86_64
--> Processing Dependency: libmbim-utils for package: ModemManager-1.6.10-3.el7_6.x86_64
--> Processing Dependency: libqmi-utils for package: ModemManager-1.6.10-3.el7_6.x86_64
--> Processing Dependency: libqmi-glib.so.5()(64bit) for package: ModemManager-1.6.10-3.el7_6.x86_64
....
....
python-firewall noarch 0.6.3-2.0.1.el7_7.2 ol7_latest 352 k
systemd x86_64 219-67.0.1.el7_7.2 ol7_latest 5.1 M
systemd-libs x86_64 219-67.0.1.el7_7.2 ol7_latest 411 k
systemd-sysv x86_64 219-67.0.1.el7_7.2 ol7_latest 88 k

Transaction Summary
========================================================================================================================
Install 303 Packages (+770 Dependent packages)
Upgrade ( 7 Dependent packages)

Total download size: 821 M
Is this ok [y/d/N]:


It will list a whole bunch of packages with dependencies that it will install. If you're up to it, at this point you confirm with 'y'. Notice that a bit over 1000 packages will be installed, so it will be busy with that for a while.
This is because it will install the complete Gnome desktop environment.
You could also do:
[vagrant@localhost ~]$ sudo yum groupinstall 'X Window System' 'GNOME'

That will install only the minimum packages necessary to run Gnome. I did not try that yet.
When it has finished installing all the packages, the one thing left is to change the default runlevel, since obviously you want to start in the GUI by default. In most cases, at least.
This is done by:
[vagrant@localhost ~]$ sudo systemctl set-default graphical.target

I could have put that in a provision script, like I've done before. And maybe I will do that.
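A sketch of what such a provision setup could look like (the script name gui.sh is made up): a small shell script next to the Vagrantfile, wired in with config.vm.provision:

#!/bin/bash
# gui.sh -- provision script: install the GUI package group
# and boot into the graphical target by default
yum -y groupinstall 'Server with GUI'
systemctl set-default graphical.target

And in the Vagrantfile:

config.vm.provision "shell", path: "gui.sh"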
Package the box
You will have noticed that it takes quite some time to update the kernel packages for installing the latest Guest Additions and the GUI desktop. To prevent us from doing that over and over again, I thought it wise to package the box into an ol77SwGUI box (Server with GUI). I described that in my previous article last year:
vagrant package --base ol77_default_1575298630482_71883 --output d:\Projects\vagrant\boxes\OL77SwGUIv1.0.box
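The packaged box can then be added and used just like Oracle's one:

$ vagrant box add --name ol77SwGUI d:\Projects\vagrant\boxes\OL77SwGUIv1.0.box
$ vagrant init ol77SwGUI
$ vagrant up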

The result
This will deliver you a Vagrant Box/VirtualBox image with:
  • Provider: VirtualBox
  • 64 bit
  • 2 vCPUs
  • 2048 MB RAM
  • Minimal package set installed
  • 32 GiB root volume
  • 4 GiB swap
  • XFS root filesystem
  • Extra 16GiB VirtualBox disk image attached, dynamically allocated
  • Guest additions installed
  • Yum configured for Oracle Linux yum server. _latest and _addons repos enabled as well as _optional_latest, _developer, _developer_EPEL where available.
  • And as an extra addon: Server with GUI installed.
Or basically more or less what I have in my own base box. What I'm less happy with is the 16GiB extra disk image attached: I want a bigger disk for my installations, or at least for the data. I'll need to figure out what I want to do with that. Maybe I'll add an extra disk and reformat the lot with a disk-spanning Logical Volume based filesystem.
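For the record, adding such an extra disk could be done from the Vagrantfile with VirtualBox customizations; a sketch, assuming the box's disk controller is called 'SATA Controller' (check with VBoxManage showvminfo):

config.vm.provider "virtualbox" do |vb|
  # create a 50 GiB disk image and attach it to a free SATA port
  vb.customize ['createhd', '--filename', 'ol77_data.vdi', '--size', 50 * 1024]
  vb.customize ['storageattach', :id, '--storagectl', 'SATA Controller',
                '--port', 2, '--device', 0, '--type', 'hdd',
                '--medium', 'ol77_data.vdi']
end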

Real time replication from Oracle to PostgreSQL using Data Replicator from DBPLUS

Yann Neuhaus - Mon, 2019-12-02 07:15

I've done quite a few real-time logical replication projects in the past, either using Oracle GoldenGate or EDB Replication Server. Built-in logical replication in PostgreSQL (available since PostgreSQL 10) can be used as well when both the source and the target are PostgreSQL instances. While at the DOAG conference and exhibition 2019 I got in contact with people from DBPLUS, and they provide a product called “Data Replicator”. The interesting use case for me is real-time replication from Oracle to PostgreSQL, as the next project for such a setup is already in the pipeline, so I thought I'd give it a try.

The “Data Replicator” software needs to be installed on a Windows machine and all traffic will go through that machine. The following picture is stolen from the official “Data Replicator” documentation and it pretty well describes the architecture when the source system is Oracle:

As “Data Replicator” uses Oracle LogMiner, no triggers need to be installed on the source system. Installing something on a validated system can become tricky, so this alone is a huge benefit compared to some other solutions, e.g. SymmetricDS. If you know GoldenGate, the overall architecture is not that different: what GoldenGate calls the extract is the “Reader” in Data Replicator, and the replicat becomes the “Applier”.

The installation on the Windows machine is so simple that I'll just provide the screenshots without any further comments:





In the background three new services have been created and started by the installation program:

There is the replication manager, which is responsible for creating replication processes, and then there are two more services for reading from the source and writing data to the target. In addition, the graphical user interface was installed (it could also run on another Windows machine), which looks like this once you start it up:

Before connecting with the GUI you should do the basic configuration by using the “DBPLUS Replication Manager Configuration” utility:

Once that is done you can go back to the client and connect:

The initial screen does not have much content, except for the possibility to create a new replication, and I really like that: no overloaded, hard-to-understand interface, but easy and tidy. With only one choice it is easy to go forward, so let's create a new replication:

Same concept here: a very clean interface, only 5 steps to follow. My source system is Oracle 19.3 EE and all I have to do is provide the connection parameters, the admin user, and a new user/password combination I want to use for the logical replication:

Asking “Data Replicator” to create the replication user, and all is fine:

SQL> r
  1* select username,profile from dba_users where username = 'REPLUSR'

USERNAME                       PROFILE
------------------------------ ------------------------------
REPLUSR                        DEFAULT

Of course some system privileges have been granted to the user that got created:

SQL> select privilege from dba_sys_privs where grantee = 'REPLUSR';

PRIVILEGE
----------------------------------------
SELECT ANY TRANSACTION
LOGMINING
SELECT ANY DICTIONARY
SELECT ANY TABLE

Proceeding with the target database, which is PostgreSQL 12.1 in my case:

As you can see there is no option to create a user on the target. What I did is this:

postgres=# create user replusr with login password 'xxxxxxx';
CREATE ROLE
postgres=# create database offloadoracle with owner = 'replusr';
CREATE DATABASE
postgres=# 

Once done, the connection succeeds and can be saved:

That’s all for the first step and we can proceed to step two:

I have installed the Oracle sample schemas for this little demo and as I only want to replicate these I’ve changed the selection to “REPLICATE ONLY SELECTED SCHEMAS AND TABLES”.

Once more this is all that needs to be done and the next step would be to generate the report for getting an idea of possible issues:

The reported issues totally make sense, and you even get the commands to fix them, except for the complaints about the unique keys, of course (if you go for logical replication you should make sure anyway that each table contains either a primary key or at least a unique key). Once the Oracle database is in archive log mode and supplemental log data has been added, the screen looks fine (I will ignore the two warnings as they are not important for this demo):
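For completeness, the usual statements behind those two fixes look like this (“Data Replicator” itself prints the exact commands it expects):

SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> alter database add supplemental log data;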

The next step is to define the “Start Options” and when you select “automatic” you’ll have to specify the options for the transfer server:

There is a small configuration utility for that as well:

When you are happy with it, provide the details in the previous screen and complete the replication setup by providing a name in the last step:

That’s all you need to do and the replication is ready to be started:

… and then it immediately fails because we do not have a valid license. For getting a trial license you need to provide the computer ID which can be found in the information section:

Provide that to DBPLUS and request a trial license. Usually they are responding very fast:

Starting the replication once more:

You’ll see new processes on the PostgreSQL side:

postgres@centos8pg:/home/postgres/ [121] ps -ef | grep postgres
root      1248   769  0 12:58 ?        00:00:00 sshd: postgres [priv]
postgres  1252     1  0 12:58 ?        00:00:00 /usr/lib/systemd/systemd --user
postgres  1256  1252  0 12:58 ?        00:00:00 (sd-pam)
postgres  1262  1248  0 12:58 ?        00:00:00 sshd: postgres@pts/0
postgres  1263  1262  0 12:58 pts/0    00:00:00 -bash
postgres  1667     1  0 12:58 ?        00:00:00 /u01/app/postgres/product/12/db_0/bin/postgres -D /u02/pgdata/12
postgres  1669  1667  0 12:58 ?        00:00:00 postgres: checkpointer   
postgres  1670  1667  0 12:58 ?        00:00:00 postgres: background writer   
postgres  1671  1667  0 12:58 ?        00:00:00 postgres: walwriter   
postgres  1672  1667  0 12:58 ?        00:00:00 postgres: autovacuum launcher   
postgres  1673  1667  0 12:58 ?        00:00:00 postgres: stats collector   
postgres  1674  1667  0 12:58 ?        00:00:00 postgres: logical replication launcher   
postgres  2560  1667  0 14:40 ?        00:00:00 postgres: replusr offloadoracle 192.168.22.1(40790) idle
postgres  2562  1667  0 14:40 ?        00:00:00 postgres: replusr offloadoracle 192.168.22.1(40800) idle
postgres  2588  1263  0 14:40 pts/0    00:00:00 ps -ef
postgres  2589  1263  0 14:40 pts/0    00:00:00 grep --color=auto postgres

… and you’ll see LogMiner processes on the Oracle side:

LOGMINER: summary for session# = 2147710977
LOGMINER: StartScn: 2261972 (0x00000000002283d4)
LOGMINER: EndScn: 18446744073709551615 (0xffffffffffffffff)
LOGMINER: HighConsumedScn: 0
LOGMINER: PSR flags: 0x0
LOGMINER: Session Flags: 0x4000441
LOGMINER: Session Flags2: 0x0
LOGMINER: Read buffers: 4
LOGMINER: Region Queue size: 256
LOGMINER: Redo Queue size: 4096
LOGMINER: Memory LWM: limit 10M, LWM 12M, 80%
LOGMINER: Memory Release Limit: 0M
LOGMINER: Max Decomp Region Memory: 1M
LOGMINER: Transaction Queue Size: 1024
2019-11-22T14:05:54.735533+01:00
LOGMINER: Begin mining logfile for session -2147256319 thread 1 sequence 8, /u01/app/oracle/oradata/DB1/onlinelog/o1_mf_2_gxh8fbhr_.log
2019-11-22T14:05:54.759197+01:00
LOGMINER: End   mining logfile for session -2147256319 thread 1 sequence 8, /u01/app/oracle/oradata/DB1/onlinelog/o1_mf_2_gxh8fbhr_.log

In the details tab there is more information about what is currently going on:

Although it looked quite good at the beginning there is the first issue:

Oracle data type is unknown: OE.CUST_ADDRESS_TYP
Stack trace:
System.ArgumentException: Oracle data type is unknown: OE.CUST_ADDRESS_TYP
   at DbPlus.DataTypes.Oracle.OracleDataTypes.Get(String name)
   at DbPlus.Replicator.Tracking.TableCopyParameterCreator.c__DisplayClass22_0.g__MapSourceColumnType|1(TableColumn sourceColumn, String targetColumnName)
   at DbPlus.Replicator.Tracking.TableCopyParameterCreator.c__DisplayClass25_0.g__GetColumnMapping|4(TableColumn sourceColumn)
   at System.Linq.Enumerable.WhereSelectEnumerableIterator`2.MoveNext()
   at System.Linq.Enumerable.WhereEnumerableIterator`1.MoveNext()
   at System.Linq.Buffer`1..ctor(IEnumerable`1 source)
   at System.Linq.Enumerable.ToArray[TSource](IEnumerable`1 source)
   at DbPlus.Replicator.Tracking.TableCopyParameterCreator.c__DisplayClass25_0.b__5()
   at DbPlus.Replicator.Alerts.AsyncTransientErrorHandler.Execute[T](Func`1 operation)
   at DbPlus.Replicator.Tracking.TableCopyParameterCreator.GetTableCopyParameters(ReplicatedTable sourceTable)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.b__61_2(ReplicatedTable table)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.ExecuteOneWithTableLock(Func`1 source, Action`1 operation, Nullable`1 timeout)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.ExecuteAllWithTableLock(Func`1 source, Action`1 operation, Nullable`1 timeout)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.b__61_0()
   at DbPlus.Replicator.Alerts.AsyncTransientErrorHandler.Block(Action action)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.StartDataTransfer()
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.ProcessRemoteOperations()
   at DbPlus.Tasks.Patterns.TaskTemplates.c__DisplayClass0_0.<g__Run|0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at DbPlus.Tasks.Patterns.TaskGroup.Run(CancellationToken cancellationToken)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.Run()
   at DbPlus.Replicator.ComponentModel.Component.RunInternal()

As with all logical replication solutions, custom types are tricky and usually not supported. What I will do now is replicate only the “HR” and “SH” schemas, which do not contain any custom types:
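Whether a schema contains such columns can be checked up front in the dictionary, since DATA_TYPE_OWNER is only populated for object/user-defined types:

SQL> select owner, table_name, column_name, data_type from dba_tab_columns where data_type_owner is not null and owner = 'OE';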

Once again, starting the replication, next issue:

Oracle data type is unknown: ROWID
Stack trace:
System.ArgumentException: Oracle data type is unknown: ROWID
   at DbPlus.DataTypes.Oracle.OracleDataTypes.Get(String name)
   at DbPlus.Replicator.Tracking.TableCopyParameterCreator.c__DisplayClass22_0.g__MapSourceColumnType|1(TableColumn sourceColumn, String targetColumnName)
   at DbPlus.Replicator.Tracking.TableCopyParameterCreator.c__DisplayClass25_0.g__GetColumnMapping|4(TableColumn sourceColumn)
   at System.Linq.Enumerable.WhereSelectEnumerableIterator`2.MoveNext()
   at System.Linq.Enumerable.WhereEnumerableIterator`1.MoveNext()
   at System.Linq.Buffer`1..ctor(IEnumerable`1 source)
   at System.Linq.Enumerable.ToArray[TSource](IEnumerable`1 source)
   at DbPlus.Replicator.Tracking.TableCopyParameterCreator.c__DisplayClass25_0.b__5()
   at DbPlus.Replicator.Alerts.AsyncTransientErrorHandler.Execute[T](Func`1 operation)
   at DbPlus.Replicator.Tracking.TableCopyParameterCreator.GetTableCopyParameters(ReplicatedTable sourceTable)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.b__61_2(ReplicatedTable table)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.ExecuteOneWithTableLock(Func`1 source, Action`1 operation, Nullable`1 timeout)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.ExecuteAllWithTableLock(Func`1 source, Action`1 operation, Nullable`1 timeout)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.b__61_0()
   at DbPlus.Replicator.Alerts.AsyncTransientErrorHandler.Block(Action action)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.StartDataTransfer()
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.ProcessRemoteOperations()
   at DbPlus.Tasks.Patterns.TaskTemplates.c__DisplayClass0_0.<g__Run|0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at DbPlus.Tasks.Patterns.TaskGroup.Run(CancellationToken cancellationToken)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.Run()
   at DbPlus.Replicator.ComponentModel.Component.RunInternal()

Let's check which columns and tables those are:

SQL> SELECT owner, table_name, column_name from dba_tab_columns where data_type = 'ROWID' and owner in ('HR','SH');

OWNER                TABLE_NAME                     COLUMN_NAME
-------------------- ------------------------------ ------------------------------
SH                   DR$SUP_TEXT_IDX$U              RID
SH                   DR$SUP_TEXT_IDX$K              TEXTKEY

Such columns can be easily excluded:



Starting over again, next issue:

At least the schemas need to exist on the target, so:

postgres=# \c offloadoracle postgres
You are now connected to database "offloadoracle" as user "postgres".
offloadoracle=# create schema sh;
CREATE SCHEMA
offloadoracle=# create schema hr;
CREATE SCHEMA
offloadoracle=# 

Next try:

On the source side:

SQL> grant flashback any table to REPLUSR;

Grant succeeded.

SQL> 

On the target side:

offloadoracle=# grant all on schema hr to replusr;
GRANT
offloadoracle=# grant all on schema sh to replusr;
GRANT

Finally most of the tables are replicating fine now:

There are a few warnings about missing unique keys, and some tables cannot be replicated at all:

For now I am just going to exclude the failed tables as this is fine for the scope of this post:

… and my replication is fine. A quick check on the target:

offloadoracle=# select * from sh.products limit 3;
 prod_id |           prod_name           |           prod_desc           | prod_subcategory | prod_subcategory_id | prod_subcategory_desc |        prod_category        | prod_category_id |     prod_category_desc      | prod_weight_class | prod_unit_of_measure | prod_pac>
---------+-------------------------------+-------------------------------+------------------+---------------------+-----------------------+-----------------------------+------------------+-----------------------------+-------------------+----------------------+--------->
      13 | 5MP Telephoto Digital Camera  | 5MP Telephoto Digital Camera  | Cameras          |     2044.0000000000 | Cameras               | Photo                       |   204.0000000000 | Photo                       |                 1 | U                    | P       >
      14 | 17" LCD w/built-in HDTV Tuner | 17" LCD w/built-in HDTV Tuner | Monitors         |     2035.0000000000 | Monitors              | Peripherals and Accessories |   203.0000000000 | Peripherals and Accessories |                 1 | U                    | P       >
      15 | Envoy 256MB - 40GB            | Envoy 256MB - 40Gb            | Desktop PCs      |     2021.0000000000 | Desktop PCs           | Hardware                    |   202.0000000000 | Hardware                    |                 1 | U                    | P       >
(3 rows)


… confirms the data is there. As this post is already long enough, here are some final thoughts: the installation of “Data Replicator” is a no-brainer. I really like the simple interface, and setting up a replication between Oracle and PostgreSQL is quite easy. Of course you need to know the issues you can run into with logical replication (missing unique or primary keys, unsupported data types, …), but that topic is the same for all solutions. What I can say for sure is that I have never been as fast at setting up a demo replication as with “Data Replicator”. More testing to come …

The article Real time replication from Oracle to PostgreSQL using Data Replicator from DBPLUS appeared first on Blog dbi services.

AEM Forms – FIPS 140-2 Support

Yann Neuhaus - Mon, 2019-12-02 00:00

Around summer last year, one of the projects I was working on started a new integration with AEM Forms for the Digital Signatures and Reader Extensions components. It was already using AEM Forms before, but for other purposes. With this new requirement came new problems (obviously). This project was still using AEM Forms 6.4 JEE on WebLogic Server 12.2.1.3.

As mentioned a few times already, our policy is security by default, unless the customer has specific requirements that prevent us from doing that. Since we usually work for critical businesses, that's normally not a problem at all (quite the opposite). Therefore, when we install a WebLogic Server, we always apply our best practices on top of it. One of these best practices is to enable FIPS (Federal Information Processing Standards) 140-2 support and, as much as possible, full compliance. A software stack is FIPS 140-2 compliant if all its components support FIPS 140-2 and can all restrict their operations to FIPS 140-2 methods/transactions only. If a single piece of the stack in use isn't FIPS 140-2 compliant, then the whole software stack isn't.
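As a side note, enabling FIPS 140-2 mode on WebLogic 12.2.1.x essentially comes down to putting the RSA Crypto-J FIPS provider jars first on the classpath before starting the domain; a rough sketch from memory, so check the Oracle documentation for the exact jar names and locations of your version:

$ export PRE_CLASSPATH="$WL_HOME/server/lib/jcmFIPS.jar:$WL_HOME/server/lib/sslj.jar"
$ $DOMAIN_HOME/bin/startWebLogic.sh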

Alright, so why am I mentioning all that? Well, we try as much as possible to have fully FIPS 140-2 compliant installations and therefore always use restricted ciphers, encryption algorithms, protocols, and so on. In this FIPS 140-2 compliant AEM Forms installation, we tried to add Digital Signatures & Reader Extensions on PDFs, but while doing some testing in the AEM Workbench, we encountered the following error pop-up:

FIPS 140-2 Reader Extension error

The complete error stack can be seen in the AEM Workbench logs:

ALC-DSC-003-000: com.adobe.idp.dsc.DSCInvocationException: Invocation error.
	at com.adobe.idp.dsc.component.impl.DefaultPOJOInvokerImpl.invoke(DefaultPOJOInvokerImpl.java:152)
	at com.adobe.idp.dsc.interceptor.impl.InvocationInterceptor.intercept(InvocationInterceptor.java:140)
	at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
	at com.adobe.idp.dsc.interceptor.impl.DocumentPassivationInterceptor.intercept(DocumentPassivationInterceptor.java:53)
	at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
	at com.adobe.idp.dsc.transaction.interceptor.TransactionInterceptor$1.doInTransaction(TransactionInterceptor.java:74)
	at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapterBean.execute(EjbTransactionCMTAdapterBean.java:357)
	at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapterBean.doSupports(EjbTransactionCMTAdapterBean.java:227)
	at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapter_yjcxi4_ELOImpl.__WL_invoke(Unknown Source)
	at weblogic.ejb.container.internal.SessionLocalMethodInvoker.invoke(SessionLocalMethodInvoker.java:33)
	at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapter_yjcxi4_ELOImpl.doSupports(Unknown Source)
	at com.adobe.idp.dsc.transaction.impl.ejb.EjbTransactionProvider.execute(EjbTransactionProvider.java:104)
	at com.adobe.idp.dsc.transaction.interceptor.TransactionInterceptor.intercept(TransactionInterceptor.java:72)
	at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
	at com.adobe.idp.dsc.interceptor.impl.InvocationStrategyInterceptor.intercept(InvocationStrategyInterceptor.java:55)
	at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
	at com.adobe.idp.dsc.interceptor.impl.InvalidStateInterceptor.intercept(InvalidStateInterceptor.java:37)
	at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
	at com.adobe.idp.dsc.interceptor.impl.AuthorizationInterceptor.intercept(AuthorizationInterceptor.java:188)
	at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
	at com.adobe.idp.dsc.interceptor.impl.JMXInterceptor.intercept(JMXInterceptor.java:48)
	at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
	at com.adobe.idp.dsc.engine.impl.ServiceEngineImpl.invoke(ServiceEngineImpl.java:121)
	at com.adobe.idp.dsc.routing.Router.routeRequest(Router.java:131)
	at com.adobe.idp.dsc.provider.impl.base.AbstractMessageReceiver.routeMessage(AbstractMessageReceiver.java:93)
	at com.adobe.idp.dsc.provider.impl.vm.VMMessageDispatcher.doSend(VMMessageDispatcher.java:225)
	at com.adobe.idp.dsc.provider.impl.base.AbstractMessageDispatcher.send(AbstractMessageDispatcher.java:69)
	at com.adobe.idp.dsc.clientsdk.ServiceClient.invoke(ServiceClient.java:215)
	at com.adobe.workflow.engine.PEUtil.invokeAction(PEUtil.java:893)
	at com.adobe.idp.workflow.dsc.invoker.WorkflowDSCInvoker.transientInvoke(WorkflowDSCInvoker.java:356)
	at com.adobe.idp.workflow.dsc.invoker.WorkflowDSCInvoker.invoke(WorkflowDSCInvoker.java:159)
	at com.adobe.idp.dsc.interceptor.impl.InvocationInterceptor.intercept(InvocationInterceptor.java:140)
	at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
	at com.adobe.idp.dsc.interceptor.impl.DocumentPassivationInterceptor.intercept(DocumentPassivationInterceptor.java:53)
	at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
	at com.adobe.idp.dsc.transaction.interceptor.TransactionInterceptor$1.doInTransaction(TransactionInterceptor.java:74)
	at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapterBean.execute(EjbTransactionCMTAdapterBean.java:357)
	at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapterBean.doRequiresNew(EjbTransactionCMTAdapterBean.java:299)
	at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapter_yjcxi4_ELOImpl.__WL_invoke(Unknown Source)
	at weblogic.ejb.container.internal.SessionLocalMethodInvoker.invoke(SessionLocalMethodInvoker.java:33)
	at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapter_yjcxi4_ELOImpl.doRequiresNew(Unknown Source)
	at com.adobe.idp.dsc.transaction.impl.ejb.EjbTransactionProvider.execute(EjbTransactionProvider.java:143)
	at com.adobe.idp.dsc.transaction.interceptor.TransactionInterceptor.intercept(TransactionInterceptor.java:72)
	at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
	at com.adobe.idp.dsc.interceptor.impl.InvocationStrategyInterceptor.intercept(InvocationStrategyInterceptor.java:55)
	at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
	at com.adobe.idp.dsc.interceptor.impl.InvalidStateInterceptor.intercept(InvalidStateInterceptor.java:37)
	at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
	at com.adobe.idp.dsc.interceptor.impl.AuthorizationInterceptor.intercept(AuthorizationInterceptor.java:188)
	at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
	at com.adobe.idp.dsc.interceptor.impl.JMXInterceptor.intercept(JMXInterceptor.java:48)
	at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
	at com.adobe.idp.dsc.engine.impl.ServiceEngineImpl.invoke(ServiceEngineImpl.java:121)
	at com.adobe.idp.dsc.routing.Router.routeRequest(Router.java:131)
	at com.adobe.idp.dsc.provider.impl.base.AbstractMessageReceiver.invoke(AbstractMessageReceiver.java:329)
	at com.adobe.idp.dsc.provider.impl.soap.axis.sdk.SoapSdkEndpoint.invokeCall(SoapSdkEndpoint.java:153)
	at com.adobe.idp.dsc.provider.impl.soap.axis.sdk.SoapSdkEndpoint.invoke(SoapSdkEndpoint.java:91)
	at sun.reflect.GeneratedMethodAccessor753.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.axis.providers.java.RPCProvider.invokeMethod(RPCProvider.java:397)
	at org.apache.axis.providers.java.RPCProvider.processMessage(RPCProvider.java:186)
	at org.apache.axis.providers.java.JavaProvider.invoke(JavaProvider.java:323)
	at org.apache.axis.strategies.InvocationStrategy.visit(InvocationStrategy.java:32)
	at org.apache.axis.SimpleChain.doVisiting(SimpleChain.java:118)
	at org.apache.axis.SimpleChain.invoke(SimpleChain.java:83)
	at org.apache.axis.handlers.soap.SOAPService.invoke(SOAPService.java:454)
	at org.apache.axis.server.AxisServer.invoke(AxisServer.java:281)
	at org.apache.axis.transport.http.AxisServlet.doPost(AxisServlet.java:699)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
	at org.apache.axis.transport.http.AxisServletBase.service(AxisServletBase.java:327)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
	at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:286)
	at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:260)
	at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:137)
	at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:350)
	at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:25)
	at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
	at com.adobe.idp.dsc.provider.impl.soap.axis.InvocationFilter.doFilter(InvocationFilter.java:43)
	at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
	at com.adobe.idp.um.auth.filter.ParameterFilter.doFilter(ParameterFilter.java:105)
	at com.adobe.idp.um.auth.filter.CSRFFilter.invokeNextFilter(CSRFFilter.java:141)
	at com.adobe.idp.um.auth.filter.CSRFFilter.doFilter(CSRFFilter.java:132)
	at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
	at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3701)
	at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3667)
	at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:326)
	at weblogic.security.service.SecurityManager.runAsForUserCode(SecurityManager.java:197)
	at weblogic.servlet.provider.WlsSecurityProvider.runAsForUserCode(WlsSecurityProvider.java:203)
	at weblogic.servlet.provider.WlsSubjectHandle.run(WlsSubjectHandle.java:71)
	at weblogic.servlet.internal.WebAppServletContext.doSecuredExecute(WebAppServletContext.java:2443)
	at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2291)
	at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2269)
	at weblogic.servlet.internal.ServletRequestImpl.runInternal(ServletRequestImpl.java:1705)
	at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1665)
	at weblogic.servlet.provider.ContainerSupportProviderImpl$WlsRequestExecutor.run(ContainerSupportProviderImpl.java:272)
	at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:352)
	at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:337)
	at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:57)
	at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41)
	at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:644)
	at weblogic.work.ExecuteThread.execute(ExecuteThread.java:415)
	at weblogic.work.ExecuteThread.run(ExecuteThread.java:355)
Caused by: com.adobe.livecycle.readerextensions.client.exceptions.ReaderExtensionsException: ALC-RES-001-008: Unable to apply the requested usage rights to the given document.
	at com.adobe.livecycle.readerextensions.ReaderExtensionsImplementation.applyUsageRights(ReaderExtensionsImplementation.java:125)
	at com.adobe.livecycle.readerextensions.ReaderExtensionsService.applyUsageRights(ReaderExtensionsService.java:166)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.adobe.idp.dsc.component.impl.DefaultPOJOInvokerImpl.invoke(DefaultPOJOInvokerImpl.java:118)
	... 102 more
Caused by: com.adobe.livecycle.readerextensions.client.ProcessingException: ALC-RES-001-008: Unable to apply the requested usage rights to the given document.
	... 109 more
Caused by: com.adobe.internal.pdftoolkit.core.exceptions.PDFInvalidParameterException: Exception encountered when applying the signature
	at com.adobe.internal.pdftoolkit.services.digsig.SignatureManager.applyUsageRights(SignatureManager.java:1803)
	at com.adobe.livecycle.readerextensions.ReaderExtensionsImplementation.applyUsageRights(ReaderExtensionsImplementation.java:110)
	... 108 more
Caused by: com.adobe.internal.pdftoolkit.core.exceptions.PDFSignatureException: com.adobe.idp.cryptoprovider.CryptoProviderException: Unknown Error in CryptoProvider ALC-CRP-302-002 (in the operation : sign)
 Caused By: ALC-DSS-310-048 Could not sign PKCS7 data (in the operation : sign)
  Caused By: Algorithm not allowable in FIPS140 mode: SHA1/RSA(null-1)
	at com.adobe.idp.cryptoprovider.LCPKCS7Signer.sign(LCPKCS7Signer.java:128)
	at com.adobe.internal.pdftoolkit.services.digsig.digsigframework.impl.SignatureHandlerPPKLite.writeSignatureAfterSave(SignatureHandlerPPKLite.java:816)
	at com.adobe.internal.pdftoolkit.services.digsig.impl.SigningUtils.doSigning(SigningUtils.java:801)
	at com.adobe.internal.pdftoolkit.services.digsig.SignatureManager.applyUsageRights(SignatureManager.java:1797)
	... 109 more
Caused by: com.adobe.idp.cryptoprovider.CryptoProviderException: Unknown Error in CryptoProvider ALC-CRP-302-002 (in the operation : sign)
 Caused By: ALC-DSS-310-048 Could not sign PKCS7 data (in the operation : sign)
  Caused By: Algorithm not allowable in FIPS140 mode: SHA1/RSA(null-1)
	... 113 more

 

From the AEM Forms side, the error that can be seen at the same time was:

####<Jun 18, 2018 10:14:54,073 AM UTC> <Warning> <com.adobe.idp.cryptoprovider.CryptoProviderException> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '56' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <7dc476bf-0258-4e62-96d4-e9bcf5274954-000001bd> <1529316894073> <[severity-value: 16] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <ALC-DSS-310-048 Could not sign PKCS7 data (in the operation : sign)
Caused By: Algorithm not allowable in FIPS140 mode: SHA1/RSA(null-1)>
####<Jun 18, 2018 10:14:54,074 AM UTC> <Error> <com.adobe.idp.cryptoprovider.CryptoProviderException> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '56' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <7dc476bf-0258-4e62-96d4-e9bcf5274954-000001bd> <1529316894074> <[severity-value: 8] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <Unknown Error in CryptoProvider ALC-CRP-302-002 (in the operation : sign)
Caused By: ALC-DSS-310-048 Could not sign PKCS7 data (in the operation : sign)
  Caused By: Algorithm not allowable in FIPS140 mode: SHA1/RSA(null-1)>
####<Jun 18, 2018 10:14:54,079 AM UTC> <Error> <com.adobe.livecycle.readerextensions.ReaderExtensionsImplementation> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '56' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <7dc476bf-0258-4e62-96d4-e9bcf5274954-000001bd> <1529316894079> <[severity-value: 8] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <ALC-RES-001-008: Unable to apply the requested usage rights to the given document.
com.adobe.internal.pdftoolkit.core.exceptions.PDFInvalidParameterException: Exception encountered when applying the signature
        at com.adobe.internal.pdftoolkit.services.digsig.SignatureManager.applyUsageRights(SignatureManager.java:1803)
        at com.adobe.livecycle.readerextensions.ReaderExtensionsImplementation.applyUsageRights(ReaderExtensionsImplementation.java:110)
        at com.adobe.livecycle.readerextensions.ReaderExtensionsService.applyUsageRights(ReaderExtensionsService.java:166)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.adobe.idp.dsc.component.impl.DefaultPOJOInvokerImpl.invoke(DefaultPOJOInvokerImpl.java:118)
        at com.adobe.idp.dsc.interceptor.impl.InvocationInterceptor.intercept(InvocationInterceptor.java:140)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.interceptor.impl.DocumentPassivationInterceptor.intercept(DocumentPassivationInterceptor.java:53)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.transaction.interceptor.TransactionInterceptor$1.doInTransaction(TransactionInterceptor.java:74)
        at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapterBean.execute(EjbTransactionCMTAdapterBean.java:357)
        at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapterBean.doSupports(EjbTransactionCMTAdapterBean.java:227)
        at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapter_yjcxi4_ELOImpl.__WL_invoke(Unknown Source)
        at weblogic.ejb.container.internal.SessionLocalMethodInvoker.invoke(SessionLocalMethodInvoker.java:33)
        at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapter_yjcxi4_ELOImpl.doSupports(Unknown Source)
        at com.adobe.idp.dsc.transaction.impl.ejb.EjbTransactionProvider.execute(EjbTransactionProvider.java:104)
        at com.adobe.idp.dsc.transaction.interceptor.TransactionInterceptor.intercept(TransactionInterceptor.java:72)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.interceptor.impl.InvocationStrategyInterceptor.intercept(InvocationStrategyInterceptor.java:55)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.interceptor.impl.InvalidStateInterceptor.intercept(InvalidStateInterceptor.java:37)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.interceptor.impl.AuthorizationInterceptor.intercept(AuthorizationInterceptor.java:188)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.interceptor.impl.JMXInterceptor.intercept(JMXInterceptor.java:48)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.engine.impl.ServiceEngineImpl.invoke(ServiceEngineImpl.java:121)
        at com.adobe.idp.dsc.routing.Router.routeRequest(Router.java:131)
        at com.adobe.idp.dsc.provider.impl.base.AbstractMessageReceiver.routeMessage(AbstractMessageReceiver.java:93)
        at com.adobe.idp.dsc.provider.impl.vm.VMMessageDispatcher.doSend(VMMessageDispatcher.java:198)
        at com.adobe.idp.dsc.provider.impl.base.AbstractMessageDispatcher.send(AbstractMessageDispatcher.java:69)
        at com.adobe.idp.dsc.clientsdk.ServiceClient.invoke(ServiceClient.java:215)
        at com.adobe.livecycle.readerextensions.client.ReaderExtensionsServiceClient.invoke(ReaderExtensionsServiceClient.java:58)
        at com.adobe.livecycle.readerextensions.client.ReaderExtensionsServiceClient.applyUsageRights(ReaderExtensionsServiceClient.java:105)
        at com.adobe.livecycle.readerextensions.servlet.ApplyRightsServlet.applyRights(ApplyRightsServlet.java:241)
        at com.adobe.livecycle.readerextensions.servlet.ApplyRightsServlet.doOperation(ApplyRightsServlet.java:189)
        at com.adobe.livecycle.readerextensions.servlet.ApplyRightsServlet.doPost(ApplyRightsServlet.java:80)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:286)
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:260)
        at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:137)
        at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:350)
        at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:25)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at com.adobe.idp.um.auth.filter.ParameterFilter.doFilter(ParameterFilter.java:105)
        at com.adobe.idp.um.auth.filter.CSRFFilter.invokeNextFilter(CSRFFilter.java:141)
        at com.adobe.idp.um.auth.filter.CSRFFilter.doFilter(CSRFFilter.java:132)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3701)
        at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3667)
        at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:326)
        at weblogic.security.service.SecurityManager.runAsForUserCode(SecurityManager.java:197)
        at weblogic.servlet.provider.WlsSecurityProvider.runAsForUserCode(WlsSecurityProvider.java:203)
        at weblogic.servlet.provider.WlsSubjectHandle.run(WlsSubjectHandle.java:71)
        at weblogic.servlet.internal.WebAppServletContext.doSecuredExecute(WebAppServletContext.java:2443)
        at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2291)
        at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2269)
        at weblogic.servlet.internal.ServletRequestImpl.runInternal(ServletRequestImpl.java:1705)
        at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1665)
        at weblogic.servlet.provider.ContainerSupportProviderImpl$WlsRequestExecutor.run(ContainerSupportProviderImpl.java:272)
        at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:352)
        at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:337)
        at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:57)
        at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41)
        at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:644)
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:415)
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:355)
Caused By: com.adobe.internal.pdftoolkit.core.exceptions.PDFSignatureException: com.adobe.idp.cryptoprovider.CryptoProviderException: Unknown Error in CryptoProvider ALC-CRP-302-002 (in the operation : sign)
Caused By: ALC-DSS-310-048 Could not sign PKCS7 data (in the operation : sign)
  Caused By: Algorithm not allowable in FIPS140 mode: SHA1/RSA(null-1)
        at com.adobe.idp.cryptoprovider.LCPKCS7Signer.sign(LCPKCS7Signer.java:128)
        at com.adobe.internal.pdftoolkit.services.digsig.digsigframework.impl.SignatureHandlerPPKLite.writeSignatureAfterSave(SignatureHandlerPPKLite.java:816)
        at com.adobe.internal.pdftoolkit.services.digsig.impl.SigningUtils.doSigning(SigningUtils.java:801)
        at com.adobe.internal.pdftoolkit.services.digsig.SignatureManager.applyUsageRights(SignatureManager.java:1797)
        at com.adobe.livecycle.readerextensions.ReaderExtensionsImplementation.applyUsageRights(ReaderExtensionsImplementation.java:110)
        at com.adobe.livecycle.readerextensions.ReaderExtensionsService.applyUsageRights(ReaderExtensionsService.java:166)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.adobe.idp.dsc.component.impl.DefaultPOJOInvokerImpl.invoke(DefaultPOJOInvokerImpl.java:118)
        at com.adobe.idp.dsc.interceptor.impl.InvocationInterceptor.intercept(InvocationInterceptor.java:140)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.interceptor.impl.DocumentPassivationInterceptor.intercept(DocumentPassivationInterceptor.java:53)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.transaction.interceptor.TransactionInterceptor$1.doInTransaction(TransactionInterceptor.java:74)
        at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapterBean.execute(EjbTransactionCMTAdapterBean.java:357)
        at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapterBean.doSupports(EjbTransactionCMTAdapterBean.java:227)
        at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapter_yjcxi4_ELOImpl.__WL_invoke(Unknown Source)
        at weblogic.ejb.container.internal.SessionLocalMethodInvoker.invoke(SessionLocalMethodInvoker.java:33)
        at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapter_yjcxi4_ELOImpl.doSupports(Unknown Source)
        at com.adobe.idp.dsc.transaction.impl.ejb.EjbTransactionProvider.execute(EjbTransactionProvider.java:104)
        at com.adobe.idp.dsc.transaction.interceptor.TransactionInterceptor.intercept(TransactionInterceptor.java:72)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.interceptor.impl.InvocationStrategyInterceptor.intercept(InvocationStrategyInterceptor.java:55)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.interceptor.impl.InvalidStateInterceptor.intercept(InvalidStateInterceptor.java:37)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.interceptor.impl.AuthorizationInterceptor.intercept(AuthorizationInterceptor.java:188)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.interceptor.impl.JMXInterceptor.intercept(JMXInterceptor.java:48)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.engine.impl.ServiceEngineImpl.invoke(ServiceEngineImpl.java:121)
        at com.adobe.idp.dsc.routing.Router.routeRequest(Router.java:131)
        at com.adobe.idp.dsc.provider.impl.base.AbstractMessageReceiver.routeMessage(AbstractMessageReceiver.java:93)
        at com.adobe.idp.dsc.provider.impl.vm.VMMessageDispatcher.doSend(VMMessageDispatcher.java:198)
        at com.adobe.idp.dsc.provider.impl.base.AbstractMessageDispatcher.send(AbstractMessageDispatcher.java:69)
        at com.adobe.idp.dsc.clientsdk.ServiceClient.invoke(ServiceClient.java:215)
        at com.adobe.livecycle.readerextensions.client.ReaderExtensionsServiceClient.invoke(ReaderExtensionsServiceClient.java:58)
        at com.adobe.livecycle.readerextensions.client.ReaderExtensionsServiceClient.applyUsageRights(ReaderExtensionsServiceClient.java:105)
        at com.adobe.livecycle.readerextensions.servlet.ApplyRightsServlet.applyRights(ApplyRightsServlet.java:241)
        at com.adobe.livecycle.readerextensions.servlet.ApplyRightsServlet.doOperation(ApplyRightsServlet.java:189)
        at com.adobe.livecycle.readerextensions.servlet.ApplyRightsServlet.doPost(ApplyRightsServlet.java:80)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:286)
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:260)
        at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:137)
        at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:350)
        at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:25)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at com.adobe.idp.um.auth.filter.ParameterFilter.doFilter(ParameterFilter.java:105)
        at com.adobe.idp.um.auth.filter.CSRFFilter.invokeNextFilter(CSRFFilter.java:141)
        at com.adobe.idp.um.auth.filter.CSRFFilter.doFilter(CSRFFilter.java:132)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3701)
        at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3667)
        at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:326)
        at weblogic.security.service.SecurityManager.runAsForUserCode(SecurityManager.java:197)
        at weblogic.servlet.provider.WlsSecurityProvider.runAsForUserCode(WlsSecurityProvider.java:203)
        at weblogic.servlet.provider.WlsSubjectHandle.run(WlsSubjectHandle.java:71)
        at weblogic.servlet.internal.WebAppServletContext.doSecuredExecute(WebAppServletContext.java:2443)
        at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2291)
        at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2269)
        at weblogic.servlet.internal.ServletRequestImpl.runInternal(ServletRequestImpl.java:1705)
        at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1665)
        at weblogic.servlet.provider.ContainerSupportProviderImpl$WlsRequestExecutor.run(ContainerSupportProviderImpl.java:272)
        at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:352)
        at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:337)
        at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:57)
        at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41)
        at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:644)
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:415)
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:355)
Caused By: com.adobe.idp.cryptoprovider.CryptoProviderException: Unknown Error in CryptoProvider ALC-CRP-302-002 (in the operation : sign)
Caused By: ALC-DSS-310-048 Could not sign PKCS7 data (in the operation : sign)
  Caused By: Algorithm not allowable in FIPS140 mode: SHA1/RSA(null-1)
        at com.adobe.idp.cryptoprovider.LCPKCS7Signer.sign(LCPKCS7Signer.java:128)
        at com.adobe.internal.pdftoolkit.services.digsig.digsigframework.impl.SignatureHandlerPPKLite.writeSignatureAfterSave(SignatureHandlerPPKLite.java:816)
        at com.adobe.internal.pdftoolkit.services.digsig.impl.SigningUtils.doSigning(SigningUtils.java:801)
        at com.adobe.internal.pdftoolkit.services.digsig.SignatureManager.applyUsageRights(SignatureManager.java:1797)
        at com.adobe.livecycle.readerextensions.ReaderExtensionsImplementation.applyUsageRights(ReaderExtensionsImplementation.java:110)
        at com.adobe.livecycle.readerextensions.ReaderExtensionsService.applyUsageRights(ReaderExtensionsService.java:166)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.adobe.idp.dsc.component.impl.DefaultPOJOInvokerImpl.invoke(DefaultPOJOInvokerImpl.java:118)
        at com.adobe.idp.dsc.interceptor.impl.InvocationInterceptor.intercept(InvocationInterceptor.java:140)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.interceptor.impl.DocumentPassivationInterceptor.intercept(DocumentPassivationInterceptor.java:53)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.transaction.interceptor.TransactionInterceptor$1.doInTransaction(TransactionInterceptor.java:74)
        at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapterBean.execute(EjbTransactionCMTAdapterBean.java:357)
        at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapterBean.doSupports(EjbTransactionCMTAdapterBean.java:227)
        at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapter_yjcxi4_ELOImpl.__WL_invoke(Unknown Source)
        at weblogic.ejb.container.internal.SessionLocalMethodInvoker.invoke(SessionLocalMethodInvoker.java:33)
        at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapter_yjcxi4_ELOImpl.doSupports(Unknown Source)
        at com.adobe.idp.dsc.transaction.impl.ejb.EjbTransactionProvider.execute(EjbTransactionProvider.java:104)
        at com.adobe.idp.dsc.transaction.interceptor.TransactionInterceptor.intercept(TransactionInterceptor.java:72)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.interceptor.impl.InvocationStrategyInterceptor.intercept(InvocationStrategyInterceptor.java:55)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.interceptor.impl.InvalidStateInterceptor.intercept(InvalidStateInterceptor.java:37)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.interceptor.impl.AuthorizationInterceptor.intercept(AuthorizationInterceptor.java:188)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.interceptor.impl.JMXInterceptor.intercept(JMXInterceptor.java:48)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.engine.impl.ServiceEngineImpl.invoke(ServiceEngineImpl.java:121)
        at com.adobe.idp.dsc.routing.Router.routeRequest(Router.java:131)
        at com.adobe.idp.dsc.provider.impl.base.AbstractMessageReceiver.routeMessage(AbstractMessageReceiver.java:93)
        at com.adobe.idp.dsc.provider.impl.vm.VMMessageDispatcher.doSend(VMMessageDispatcher.java:198)
        at com.adobe.idp.dsc.provider.impl.base.AbstractMessageDispatcher.send(AbstractMessageDispatcher.java:69)
        at com.adobe.idp.dsc.clientsdk.ServiceClient.invoke(ServiceClient.java:215)
        at com.adobe.livecycle.readerextensions.client.ReaderExtensionsServiceClient.invoke(ReaderExtensionsServiceClient.java:58)
        at com.adobe.livecycle.readerextensions.client.ReaderExtensionsServiceClient.applyUsageRights(ReaderExtensionsServiceClient.java:105)
        at com.adobe.livecycle.readerextensions.servlet.ApplyRightsServlet.applyRights(ApplyRightsServlet.java:241)
        at com.adobe.livecycle.readerextensions.servlet.ApplyRightsServlet.doOperation(ApplyRightsServlet.java:189)
        at com.adobe.livecycle.readerextensions.servlet.ApplyRightsServlet.doPost(ApplyRightsServlet.java:80)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:286)
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:260)
        at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:137)
        at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:350)
        at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:25)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at com.adobe.idp.um.auth.filter.ParameterFilter.doFilter(ParameterFilter.java:105)
        at com.adobe.idp.um.auth.filter.CSRFFilter.invokeNextFilter(CSRFFilter.java:141)
        at com.adobe.idp.um.auth.filter.CSRFFilter.doFilter(CSRFFilter.java:132)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3701)
        at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3667)
        at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:326)
        at weblogic.security.service.SecurityManager.runAsForUserCode(SecurityManager.java:197)
        at weblogic.servlet.provider.WlsSecurityProvider.runAsForUserCode(WlsSecurityProvider.java:203)
        at weblogic.servlet.provider.WlsSubjectHandle.run(WlsSubjectHandle.java:71)
        at weblogic.servlet.internal.WebAppServletContext.doSecuredExecute(WebAppServletContext.java:2443)
        at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2291)
        at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2269)
        at weblogic.servlet.internal.ServletRequestImpl.runInternal(ServletRequestImpl.java:1705)
        at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1665)
        at weblogic.servlet.provider.ContainerSupportProviderImpl$WlsRequestExecutor.run(ContainerSupportProviderImpl.java:272)
        at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:352)
        at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:337)
        at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:57)
        at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41)
        at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:644)
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:415)
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:355)
>

 

Based on all these logs, it is pretty clear that a specific component of the AEM Forms signature process was trying to use an algorithm that is not allowed under FIPS 140-2. Since our WebLogic Servers restrict this kind of weak algorithm, the method failed on the AEM Forms server side, and the error was then propagated to the AEM Workbench.

According to the Adobe documentation, AEM Forms is supposed to be FIPS compliant. Globally, it does seem to support FIPS, but this specific piece doesn't, and therefore AEM Forms isn't fully FIPS 140-2 compliant. Since there was nothing we could do on our side to change that, we opened a SR with the Adobe Support (#160202). After almost three months spent explaining the situation and our requirements while Adobe investigated the issue on their side, they finally found the piece of code that was still using SHA-1.

Adobe then started a PoC to change this and deliver a fully (hopefully) FIPS 140-2 compliant software. The fix was tested and validated in November 2018 and it was therefore included in the next release: AEM Forms 6.4.3 (6.4 SP3), published on December 20, 2018. We installed it on day one, since it was released to address our requirement, and after that the issue was indeed gone. Therefore, if you need to run AEM Forms in a FIPS compliant environment, you should use an AEM version released after that date.

 

The article AEM Forms – FIPS 140-2 Support appeared first on Blog dbi services.

Linux ser2net: no connection to /dev/ttyUSB0

Dietrich Schroff - Sun, 2019-12-01 13:17
If you are running a Java application on a Linux box (especially on an ARM architecture) and this application accesses the serial interface (/dev/ttyUSB0, /dev/ttyUSB1 or just /dev/ttyX), then an easy way to do this is running ser2net.

For all who are not familiar with the serial port:
https://en.wikipedia.org/wiki/Serial_port



But there is one tricky thing you have to consider when using ser2net:

Inside ser2net.conf you will find some lines like this here:

15000:raw:0:/dev/ttyUSB0:9600 8DATABITS NONE 1STOPBIT
This means: on TCP port 15000 you can access the serial port /dev/ttyUSB0 (if you have a USB-to-serial adapter in place).

If this does not work, check the ports with

root@ubuntu:/home/ubuntu/dfld# netstat -lntup |grep ser2net
tcp6       0      0 :::15000                :::*                    LISTEN      1361/ser2net       
As you can see, it only listens on tcp6 (IPv6). So you have to reconfigure this to


127.0.0.1,15000:raw:0:/dev/ttyUSB0:9600 8DATABITS NONE 1STOPBIT
This restricts access to localhost only (which is a very nice security enhancement ;-) ).
And after a restart of ser2net, everything works as expected:


root@ubuntu:/home/ubuntu/dfld# netstat -lntup |grep ser2net

tcp        0      0 127.0.0.1:15000         0.0.0.0:*               LISTEN     
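
To quickly verify the whole bridge, you can open the TCP side with netcat and talk to the serial device directly (a minimal sketch; it assumes netcat is installed and that a device which answers is attached to /dev/ttyUSB0):

root@ubuntu:/home/ubuntu/dfld# nc 127.0.0.1 15000

Whatever you type is now forwarded to /dev/ttyUSB0 at 9600 8N1, and the device's output is printed back to your terminal. This is also exactly how the Java application mentioned above would use it: a plain TCP socket to 127.0.0.1:15000 instead of a native serial library.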

Documentum – FQDN Validation on RCS/CFS

Yann Neuhaus - Sun, 2019-12-01 04:00

In a previous blog, I talked about the possible usage of K8s Services in place of the default headless/pod name and the issues it brings. This one can be seen as a continuation since it is also related to the usage of K8s Services to install Documentum, but this time with another issue that is specific to an RCS/CFS. This issue & solution might be interesting for you, even if you aren't using K8s.

As mentioned in this previous blog, the installation of a Primary CS using K8s Services is possible, but it might bring you some trouble with a few repository objects. To go further with the testing, without fixing the issues on the first CS, we tried to install an RCS/CFS (a second CS for High Availability) with the exact same parameters. As a reminder, this is what has been used:

  • Primary Content Server:
    • headless/pod: documentum-server-0.documentum-server.dbi-ns01.svc.cluster.local
    • K8s Service: cs01.dbi-ns01.svc.cluster.local
  • Remote Content Server:
    • headless/pod: documentum-server-1.documentum-server.dbi-ns01.svc.cluster.local
    • K8s Service: cs02.dbi-ns01.svc.cluster.local
  • Repository & Service: gr_repo

Therefore, the Repository silent properties file contained the following on this second CS:

[dmadmin@documentum-server-1 ~]$ grep -E "FQDN|HOST" RCS_Docbase_Global.properties
SERVER.FQDN=cs02.dbi-ns01.svc.cluster.local
SERVER.REPOSITORY_HOSTNAME=cs01.dbi-ns01.svc.cluster.local
SERVER.PRIMARY_CONNECTION_BROKER_HOST=cs01.dbi-ns01.svc.cluster.local
SERVER.PROJECTED_CONNECTION_BROKER_HOST=cs02.dbi-ns01.svc.cluster.local
SERVER.PROJECTED_DOCBROKER_HOST_OTHER=cs01.dbi-ns01.svc.cluster.local
[dmadmin@documentum-server-1 ~]$

 

I started the silent installation of the Repository and after a few seconds, the installer exited. Obviously, this means that something went wrong. Checking the installation logs:

[dmadmin@documentum-server-1 ~]$ cd $DM_HOME/install/logs
[dmadmin@documentum-server-1 logs]$ cat install.log
13:42:26,225  INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - The product name is: CfsConfigurator
13:42:26,225  INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - The product version is: 16.4.0000.0248
13:42:26,225  INFO [main]  -
13:42:26,308  INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - Done InitializeSharedLibrary ...
13:42:26,332  INFO [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsInitializeImportantServerVariables - The installer is gathering system configuration information.
13:42:26,349  INFO [main] com.documentum.install.server.installanywhere.actions.DiWASilentRemoteServerValidation - Start to verify the password
13:42:29,357  INFO [main] com.documentum.install.server.installanywhere.actions.DiWASilentRemoteServerValidation - FQDN is invalid
13:42:29,359 ERROR [main] com.documentum.install.server.installanywhere.actions.DiWASilentRemoteServerValidation - Fail to reach the computer with the FQDN "cs02.dbi-ns01.svc.cluster.local". Check the value you specified. Click Yes to ignore this error, or click No to re-enter the FQDN.
com.documentum.install.shared.common.error.DiException: Fail to reach the computer with the FQDN "cs02.dbi-ns01.svc.cluster.local". Check the value you specified. Click Yes to ignore this error, or click No to re-enter the FQDN.
        at com.documentum.install.server.installanywhere.actions.DiWASilentRemoteServerValidation.setup(DiWASilentRemoteServerValidation.java:64)
        at com.documentum.install.shared.installanywhere.actions.InstallWizardAction.install(InstallWizardAction.java:73)
        at com.zerog.ia.installer.actions.CustomAction.installSelf(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.an(Unknown Source)
        at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
        at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
        ...
        at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
        at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.am(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.runNextInstallPiece(Unknown Source)
        at com.zerog.ia.installer.ConsoleBasedAAMgr.ac(Unknown Source)
        at com.zerog.ia.installer.AAMgrBase.runPreInstall(Unknown Source)
        at com.zerog.ia.installer.LifeCycleManager.consoleInstallMain(Unknown Source)
        at com.zerog.ia.installer.LifeCycleManager.executeApplication(Unknown Source)
        at com.zerog.ia.installer.Main.main(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.zerog.lax.LAX.launch(Unknown Source)
        at com.zerog.lax.LAX.main(Unknown Source)
[dmadmin@documentum-server-1 logs]$

 

On the Primary CS, the installation using the K8s Service went smoothly without error, but on the Remote CS, with the exact same setup, it failed with the message: ‘Fail to reach the computer with the FQDN “cs02.dbi-ns01.svc.cluster.local”. Check the value you specified. Click Yes to ignore this error, or click No to re-enter the FQDN.‘. So the installer binaries behave differently depending on whether it is a PCS or an RCS/CFS. Another funny thing is the message that says ‘Click Yes to ignore this error, or click No to re-enter the FQDN‘… That is obviously a GUI message being printed to the logs, but fortunately, the silent installer isn’t just waiting for an input that will never come.

I assumed that this had something to do with the K8s Services and some kind of network/hostname validation that the RCS/CFS installer performs (and which isn't done on the Primary). Therefore, I tried a few things like checking nslookup & ping and validating that the docbroker responds:

[dmadmin@documentum-server-1 logs]$ nslookup cs01.dbi-ns01.svc.cluster.local
Server: 1.1.1.10
Address: 1.1.1.10#53

Name: cs01.dbi-ns01.svc.cluster.local
Address: 1.1.1.100
[dmadmin@documentum-server-1 logs]$
[dmadmin@documentum-server-1 logs]$ ping cs01.dbi-ns01.svc.cluster.local
PING cs01.dbi-ns01.svc.cluster.local (1.1.1.100) 56(84) bytes of data.
^C
--- cs01.dbi-ns01.svc.cluster.local ping statistics ---
12 packets transmitted, 0 received, 100% packet loss, time 10999ms
[dmadmin@documentum-server-1 logs]$
[dmadmin@documentum-server-1 logs]$ dmqdocbroker -t cs01.dbi-ns01.svc.cluster.local -p 1489 -c ping
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 16.4.0110.0058
Using specified port: 1489
Successful reply from docbroker at host (documentum-server-0) on port(1490) running software version (16.4.0110.0167  Linux64).
[dmadmin@documentum-server-1 logs]$
[dmadmin@documentum-server-1 logs]$
[dmadmin@documentum-server-1 logs]$
[dmadmin@documentum-server-1 logs]$ nslookup cs02.dbi-ns01.svc.cluster.local
Server: 1.1.1.10
Address: 1.1.1.10#53

Name: cs02.dbi-ns01.svc.cluster.local
Address: 1.1.1.200
[dmadmin@documentum-server-1 logs]$
[dmadmin@documentum-server-1 logs]$ ping cs02.dbi-ns01.svc.cluster.local
PING cs02.dbi-ns01.svc.cluster.local (1.1.1.200) 56(84) bytes of data.
^C
--- cs02.dbi-ns01.svc.cluster.local ping statistics ---
12 packets transmitted, 0 received, 100% packet loss, time 10999ms
[dmadmin@documentum-server-1 logs]$
[dmadmin@documentum-server-1 logs]$ dmqdocbroker -t cs02.dbi-ns01.svc.cluster.local -p 1489 -c ping
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 16.4.0110.0058
Using specified port: 1489
Successful reply from docbroker at host (documentum-server-1) on port(1490) running software version (16.4.0110.0167  Linux64).
[dmadmin@documentum-server-1 logs]$

 

As you can see above, the result is the same for the Primary CS and the Remote one. The only thing not responding is the ping, but that's because it's a K8s Service… At this point, I assumed that the RCS/CFS installer was trying to do something like a ping, which failed, hence the error in the log and the installer stopping. To validate that, I simply updated the file /etc/hosts a little bit (as root, obviously):

[root@documentum-server-1 ~]$ cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1       localhost ip6-localhost ip6-loopback
fe00::0   ip6-localnet
fe00::0   ip6-mcastprefix
fe00::1   ip6-allnodes
fe00::2   ip6-allrouters
1.1.1.200  documentum-server-1.documentum-server.dbi-ns01.svc.cluster.local  documentum-server-1
[root@documentum-server-1 ~]$
[root@documentum-server-1 ~]$ echo '1.1.1.200  cs02.dbi-ns01.svc.cluster.local' >> /etc/hosts
[root@documentum-server-1 ~]$
[root@documentum-server-1 ~]$ cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1       localhost ip6-localhost ip6-loopback
fe00::0   ip6-localnet
fe00::0   ip6-mcastprefix
fe00::1   ip6-allnodes
fe00::2   ip6-allrouters
1.1.1.200  documentum-server-1.documentum-server.dbi-ns01.svc.cluster.local  documentum-server-1
1.1.1.200  cs02.dbi-ns01.svc.cluster.local
[root@documentum-server-1 ~]$

 

After doing that, I started the RCS/CFS installer in silent mode again (exact same command, no changes to the properties file) and this time, it was able to complete the installation without issue.

[dmadmin@documentum-server-1 ~]$ cd $DM_HOME/install/logs
[dmadmin@documentum-server-1 logs]$ cat install.log
14:01:33,199 INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - The product name is: CfsConfigurator
14:01:33,199 INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - The product version is: 16.4.0000.0248
14:01:33,199 INFO [main] -
14:01:33,247 INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - Done InitializeSharedLibrary ...
14:01:33,278 INFO [main] com.documentum.install.multinode.cfs.installanywhere.actions.DiWAServerCfsInitializeImportantServerVariables - The installer is gathering system configuration information.
14:01:33,296 INFO [main] com.documentum.install.server.installanywhere.actions.DiWASilentRemoteServerValidation - Start to verify the password
14:01:33,906 INFO [main] com.documentum.fc.client.security.impl.JKSKeystoreUtilForDfc - keystore file name is /tmp/089972.tmp/dfc.keystore
14:01:34,394 INFO [main] com.documentum.fc.client.security.internal.CreateIdentityCredential$MultiFormatPKIKeyPair - generated RSA (2,048-bit strength) mutiformat key pair in 468 ms
14:01:34,428 INFO [main] com.documentum.fc.client.security.internal.CreateIdentityCredential - certificate created for DFC <CN=dfc_MlM5tLi5T9u1r82AdbulKv14vr8a,O=EMC,OU=Documentum> valid from Tue Sep 10 13:56:33 UTC 2019 to Fri Sep 07 14:01:33 UTC 2029:
14:01:34,429 INFO [main] com.documentum.fc.client.security.impl.JKSKeystoreUtilForDfc - keystore file name is /tmp/089972.tmp/dfc.keystore
14:01:34,446 INFO [main] com.documentum.fc.client.security.impl.InitializeKeystoreForDfc - [DFC_SECURITY_IDENTITY_INITIALIZED] Initialized new identity in keystore, DFC alias=dfc, identity=dfc_MlM5tLi5T9u1r82AdbulKv14vr8a
14:01:34,448 INFO [main] com.documentum.fc.client.security.impl.AuthenticationMgrForDfc - identity for authentication is dfc_MlM5tLi5T9u1r82AdbulKv14vr8a
14:01:34,449 INFO [main] com.documentum.fc.impl.RuntimeContext - DFC Version is 16.4.0110.0058
14:01:34,472 INFO [Timer-3] com.documentum.fc.client.impl.bof.cache.ClassCacheManager$CacheCleanupTask - [DFC_BOF_RUNNING_CLEANUP] Running class cache cleanup task
...
[dmadmin@documentum-server-1 logs]$
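
A side note on this workaround: the comment at the top of the file shows that /etc/hosts is Kubernetes-managed, so a manual edit like the one above will not survive a pod restart. A more durable variant (a minimal sketch, assuming you control the pod template and that the IP behind the cs02 Service is known and stable) is to declare the same mapping through the hostAliases field of the pod spec:

spec:
  # adds "1.1.1.200  cs02.dbi-ns01.svc.cluster.local" to the pod's /etc/hosts
  hostAliases:
  - ip: "1.1.1.200"
    hostnames:
    - "cs02.dbi-ns01.svc.cluster.local"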

 

Since this obviously looks like a bug, I opened a SR with the OpenText Support (#4252205). The outcome of this ticket is that the RCS/CFS installer does indeed perform a validation that the PCS installer doesn't, and that's why the issue only appears for an RCS/CFS. At the moment, there is no way to skip this validation when using the silent installer (contrary to the GUI, which allows you to ‘click Yes‘). Therefore, OpenText decided to add a new parameter, starting with CS 16.4 P20 (end of December 2019), to control whether the FQDN validation should be done or skipped. This new parameter will be “SERVER.VALIDATE_FQDN” and it will be a Boolean. The default value will be “true”, so by default the FQDN validation will still be performed. To skip it starting with P20, just set the value to false and the RCS/CFS installer should be able to complete successfully. To be tested once the patch is out!
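
Once on P20 or later, the silent properties file for the RCS/CFS would then simply gain one line (hypothetical until the patch is out, since the parameter name comes from the SR answer; the other values are the ones shown at the beginning of this blog):

[dmadmin@documentum-server-1 ~]$ grep -E "VALIDATE|FQDN" RCS_Docbase_Global.properties
SERVER.VALIDATE_FQDN=false
SERVER.FQDN=cs02.dbi-ns01.svc.cluster.local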

 

The article Documentum – FQDN Validation on RCS/CFS appeared first on Blog dbi services.

Documentum – Usage of K8s Services to install Documentum?

Yann Neuhaus - Sun, 2019-12-01 02:00

In the past several months, we have been working extensively on setting up a CI/CD pipeline for Documentum at one of our customers. As part of this project, we are using Kubernetes pods for the Documentum components. In this blog, I will talk about an issue caused by what seemed like a good idea but, in the end, wasn't…

The goal of this project is to migrate dozens of Documentum environments and several hundred VMs into K8s pods. In order to streamline the migration and simplify the management, we thought: why not try to use K8s Services (ingress) for all the communications between the pods as well as with the outside of K8s? Indeed, we needed to take into account several interfaces outside of the K8s world, usually old software that will most probably never support containerization. These interfaces need to continue to work the way they used to, so we will need K8s Services at some point for the communications between Documentum and these external interfaces. Therefore, the idea was to try to use these exact same K8s Services to install the Documentum components.

With a StatefulSet, K8s gives each pod a DNS name through the headless service, composed in the following way: <pod_name>.<service_name>.<namespace_name>.<cluster>. The goal here was therefore to define an additional K8s Service for each Content Server: <service_name_ext>.<namespace_name>.<cluster> (a sketch of such a Service manifest follows the list below). This is what has been used:

  • Primary Content Server:
    • headless/pod: documentum-server-0.documentum-server.dbi-ns01.svc.cluster.local
    • K8s Service: cs01.dbi-ns01.svc.cluster.local
  • Remote Content Server:
    • headless/pod: documentum-server-1.documentum-server.dbi-ns01.svc.cluster.local
    • K8s Service: cs02.dbi-ns01.svc.cluster.local
  • Repository & Service: gr_repo
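
For reference, such a per-pod K8s Service can be defined by selecting the pod through the label that the StatefulSet controller automatically puts on it (a minimal sketch; the port list is illustrative and would need to cover all the docbroker/JMS ports actually used):

apiVersion: v1
kind: Service
metadata:
  name: cs01
  namespace: dbi-ns01
spec:
  # targets only the first pod of the StatefulSet
  selector:
    statefulset.kubernetes.io/pod-name: documentum-server-0
  ports:
  - name: docbroker
    port: 1489
    targetPort: 1489
  - name: jms
    port: 9080
    targetPort: 9080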

On a typical VM, you would usually install Documentum using the VM hostname. The equivalent on K8s would therefore be to use the headless/pod name. Alternatively, on a VM, you could think about using a DNS alias to install Documentum and you might think that this should work. I sure did, and therefore we tried the same kind of thing on K8s with the K8s Services directly.

Doing so for the Primary Content Server, all the Documentum silent installers completed successfully. We used “cs01.dbi-ns01.svc.cluster.local” for the following things, for example:

  • Docbroker projections
  • Repository installation
  • DFC & CS Projections
  • BPM/xCP installation

Therefore, looking into the silent properties file for the Repository for example, it contained the following:

[dmadmin@documentum-server-0 ~]$ grep -E "FQDN|HOST" CS_Docbase_Global.properties
SERVER.FQDN=cs01.dbi-ns01.svc.cluster.local
SERVER.PROJECTED_DOCBROKER_HOST=cs01.dbi-ns01.svc.cluster.local
[dmadmin@documentum-server-0 ~]$

 

At the end of our silent installation (including the Documentum silent installers + dbi services' best practices: security, JMS configuration, projections, jobs, and so on), connection to the repository was possible and D2 & DA were both working properly, so it looked like a good first step. Unfortunately, when I reviewed the repository objects later, I saw some wrong objects and a bit of a mess in the repository: that's the whole purpose of this blog, to explain what went wrong when using a K8s Service instead of the headless/pod name.

After a quick review, I found the following things that were wrong/messy:

  • dm_jms_config object
    • Expected: for a Primary Content Server, you should have one JMS config object with at least “do_mail”, “do_method” and “SAMLAuthentication” (+ “do_bpm” for BPM/xCP, the Index Agent ones, and so on)
      • JMS <FQDN>:9080 for gr_repo.gr_repo
    • Actual: the installer created two JMS objects, one with a correct name (using the FQDN provided to the installer = K8s Service) and one with a wrong name (using the pod name (short name, no domain))
      • JMS cs01.dbi-ns01.svc.cluster.local:9080 for gr_repo.gr_repo => Correct one, containing all the needed servlets (“do_mail”, “do_method”, “do_bpm” and “SAMLAuthentication”)
      • JMS documentum-server-0:9080 for gr_repo.gr_repo => Wrong one, containing all the do_* servlets but, strangely, not the SAML one (“do_mail”, “do_method” and “do_bpm” only, no “SAMLAuthentication”)
  • dm_acs_config object
    • Expected: just like for the JMS, you would expect the object to be created with the FQDN you gave to the installer
      • <FQDN>ACS1
    • Actual: the installer created the ACS config object using the headless/pod name (the full name this time, not the short name)
      • documentum-server-0.documentum-server.dbi-ns01.svc.cluster.localACS1
  • A lot of other references to the headless/pod name: dm_user, dm_job, dm_client_registration, dm_client_rights, and so on…

So in short, sometimes the Repository installer uses the FQDN provided (K8s Service) and sometimes it doesn't. So what's the point in providing a FQDN during the installation since it will ignore it for 90% of the objects anyway? In addition, it also creates two JMS config objects at the same time, with different names and different servlets. Looking at the “dm_jms_config_setup.out” log file created by the installer when it executed the JMS config object creation, you can see that it mentions the creation of only one object and yet, at the end, it says that there are two:

[dmadmin@documentum-server-0 ~]$ cat $DOCUMENTUM/dba/config/gr_repo/dm_jms_config_setup.out
/app/dctm/server/product/16.4/bin/dm_jms_admin.sh -docbase gr_repo.gr_repo -username dmadmin -action add,enableDFC,testDFC,migrate,dumpServerCache,listAll -jms_host_name cs01.dbi-ns01.svc.cluster.local -jms_port 9080 -jms_proximity 1 -webapps ServerApps -server_config_id 3d0f123450000102
2019-10-21 09:50:55 UTC:  Input arguments are: -docbase gr_repo.gr_repo -username dmadmin -action add,enableDFC,testDFC,migrate,dumpServerCache,listAll -jms_host_name cs01.dbi-ns01.svc.cluster.local -jms_port 9080 -jms_proximity 1 -webapps ServerApps -server_config_id 3d0f123450000102
2019-10-21 09:50:55 UTC:  Input parameters are: {jms_port=[9080], server_config_id=[3d0f123450000102], docbase=[gr_repo.gr_repo], webapps=[ServerApps], action=[add,enableDFC,testDFC,migrate,dumpServerCache,listAll], jms_proximity=[1], jms_host_name=[cs01.dbi-ns01.svc.cluster.local], username=[dmadmin]}
2019-10-21 09:50:55 UTC:  ======================================================================================
2019-10-21 09:50:55 UTC:  Begin administering JMS config objects in docbase gr_repo.gr_repo ...
2019-10-21 09:51:01 UTC:  The following JMS config object has been successfully created/updated in docbase gr_repo
2019-10-21 09:51:01 UTC:  --------------------------------------------------------------------------------------
2019-10-21 09:51:01 UTC:                      JMS Config Name: JMS cs01.dbi-ns01.svc.cluster.local:9080 for gr_repo.gr_repo
                      JMS Config ID: 080f1234500010a3
                      JMS Host Name: cs01.dbi-ns01.svc.cluster.local
                    JMS Port Number: 9080
             Is Disabled In Docbase: F
               Repeating attributes:
               Content_Server_Id[0] = 3d0f123450000102
        Content_Server_Host_Name[0] = documentum-server-0
    JMS_Proximity_Relative_to_CS[0] = 2
             Servlet to URI Mapping:
                          do_method = http://cs01.dbi-ns01.svc.cluster.local:9080/DmMethods/servlet/DoMethod
                 SAMLAuthentication = http://cs01.dbi-ns01.svc.cluster.local:9080/SAMLAuthentication/servlet/ValidateSAMLResponse
                            do_mail = http://cs01.dbi-ns01.svc.cluster.local:9080/DmMail/servlet/DoMail

2019-10-21 09:51:01 UTC:  --------------------------------------------------------------------------------------
2019-10-21 09:51:01 UTC:  Successfully enabled principal_auth_priv for current DFC client  in docbase gr_repo
2019-10-21 09:51:01 UTC:  Successfully tested principal_auth_priv for current DFC client  in docbase gr_repo
2019-10-21 09:51:01 UTC:  Successfully migrated content server 3d0f123450000102 to use JMS config object(s)
2019-10-21 09:51:01 UTC:  Dump of JMS Config List in content server cache, content server is gr_repo
2019-10-21 09:51:01 UTC:  --------------------------------------------------------------------------------------
2019-10-21 09:51:01 UTC:  USER ATTRIBUTES

  jms_list_last_refreshed         : Mon Oct 21 09:51:01 2019
  incr_wait_time_on_failure       : 30
  max_wait_time_on_failure        : 3600
  current_jms_index               : -1
  jms_config_id                [0]: 080f1234500010a3
                               [1]: 080f1234500010a4
  jms_config_name              [0]: JMS cs01.dbi-ns01.svc.cluster.local:9080 for gr_repo.gr_repo
                               [1]: JMS documentum-server-0:9080 for gr_repo.gr_repo
  server_config_id             [0]: 3d0f123450000102
                               [1]: 3d0f123450000102
  server_config_name           [0]: gr_repo
                               [1]: gr_repo
  jms_to_cs_proximity          [0]: 2
                               [1]: 1
  is_disabled_in_docbase       [0]: F
                               [1]: F
  is_marked_dead_in_cache      [0]: F
                               [1]: F
  intended_purpose             [0]: DM_JMS_PURPOSE_FOR_LOAD_BALANCING
                               [1]: DM_JMS_PURPOSE_DEFAULT_EMBEDDED_JMS
  last_failure_time            [0]: N/A
                               [1]: N/A
  next_retry_time              [0]: N/A
                               [1]: N/A
  failure_count                [0]: 0
                               [1]: 0

SYSTEM ATTRIBUTES


APPLICATION ATTRIBUTES


INTERNAL ATTRIBUTES


2019-10-21 09:51:01 UTC:  --------------------------------------------------------------------------------------
2019-10-21 09:51:01 UTC:  Total 2 JMS Config objects found in docbase gr_repo
2019-10-21 09:51:01 UTC:  --------------------------------------------------------------------------------------
2019-10-21 09:51:01 UTC:                      JMS Config Name: JMS cs01.dbi-ns01.svc.cluster.local:9080 for gr_repo.gr_repo
                      JMS Config ID: 080f1234500010a3
                      JMS Host Name: cs01.dbi-ns01.svc.cluster.local
                    JMS Port Number: 9080
             Is Disabled In Docbase: F
               Repeating attributes:
               Content_Server_Id[0] = 3d0f123450000102
        Content_Server_Host_Name[0] = documentum-server-0
    JMS_Proximity_Relative_to_CS[0] = 2
             Servlet to URI Mapping:
                          do_method = http://cs01.dbi-ns01.svc.cluster.local:9080/DmMethods/servlet/DoMethod
                 SAMLAuthentication = http://cs01.dbi-ns01.svc.cluster.local:9080/SAMLAuthentication/servlet/ValidateSAMLResponse
                            do_mail = http://cs01.dbi-ns01.svc.cluster.local:9080/DmMail/servlet/DoMail

2019-10-21 09:51:01 UTC:  --------------------------------------------------------------------------------------
2019-10-21 09:51:01 UTC:                      JMS Config Name: JMS documentum-server-0:9080 for gr_repo.gr_repo
                      JMS Config ID: 080f1234500010a4
                      JMS Host Name: documentum-server-0
                    JMS Port Number: 9080
             Is Disabled In Docbase: F
               Repeating attributes:
               Content_Server_Id[0] = 3d0f123450000102
        Content_Server_Host_Name[0] = documentum-server-0
    JMS_Proximity_Relative_to_CS[0] = 1
             Servlet to URI Mapping:
                          do_method = http://documentum-server-0:9080/DmMethods/servlet/DoMethod
                            do_mail = http://documentum-server-0:9080/DmMail/servlet/DoMail

2019-10-21 09:51:01 UTC:  --------------------------------------------------------------------------------------
2019-10-21 09:51:01 UTC:  Done administering JMS config objects in docbase gr_repo.gr_repo: status=SUCCESS ...
2019-10-21 09:51:01 UTC:  ======================================================================================
Program exit status = 0 = SUCCESS
Connect to docbase gr_repo.gr_repo as user dmadmin.
Start running dm_jms_config_setup.ebs script on docbase gr_repo.gr_repo
[DM_API_E_NO_MATCH]error:  "There was no match in the docbase for the qualification: dm_method where object_name='dm_JMSAdminConsole'"


dm_method dm_JMSAdminConsole object does not exist, yet.
jarFile = /app/dctm/server/product/16.4/lib/dmjmsadmin.jar
wrapper_script = /app/dctm/server/product/16.4/bin/dm_jms_admin.sh
Create dm_method dm_JMSAdminConsole object in docbase now
new dm_JMSAdminConsole dm_method object created in docbase successfully
new object id is: 100f123450001098
Begin updating JMS_LOCATION for Java Methods ...
Assign JMS_LOCATION=ANY to a_extended_properties in method object CTSAdminMethod
Assign JMS_LOCATION=ANY to a_extended_properties in method object dm_bp_transition_java
Assign JMS_LOCATION=ANY to a_extended_properties in method object dm_bp_schedule_java
Assign JMS_LOCATION=ANY to a_extended_properties in method object dm_bp_batch_java
Assign JMS_LOCATION=ANY to a_extended_properties in method object dm_bp_validate_java
Assign JMS_LOCATION=ANY to a_extended_properties in method object dm_event_template_sender
Done updating JMS_LOCATION for Java Methods ...
Begin create default JMS config object for content server
Content Server version: 16.4.0110.0167  Linux64.Oracle
Content Server ID: 3d0f123450000102
dm_jms_config type id = 030f12345000017c
jms_count = 0
wrapper_script = /app/dctm/server/product/16.4/bin/dm_jms_admin.sh
script_params =  -docbase gr_repo.gr_repo -username dmadmin -action add,enableDFC,testDFC,migrate,dumpServerCache,listAll  -jms_host_name cs01.dbi-ns01.svc.cluster.local -jms_port 9080 -jms_proximity 1 -webapps ServerApps  -server_config_id 3d0f123450000102
cmd = /app/dctm/server/product/16.4/bin/dm_jms_admin.sh  -docbase gr_repo.gr_repo -username dmadmin -action add,enableDFC,testDFC,migrate,dumpServerCache,listAll  -jms_host_name cs01.dbi-ns01.svc.cluster.local -jms_port 9080 -jms_proximity 1 -webapps ServerApps  -server_config_id 3d0f123450000102
status = 0
Finished creating default JMS config object for content server
Finished running dm_jms_config_setup.ebs...
Disconnect from the docbase.
[dmadmin@documentum-server-0 ~]$

 

In the log file above, there is no mention of “do_bpm” because this is the installation of the Repository and therefore, at that time, BPM/xCP isn't installed yet. We only install it later, switch the URLs to HTTPS, and so on. So, looking into the objects in the Repository, this is what we can see at the end of all installations (I purposely executed only the HTTP->HTTPS switch + the BPM/xCP addition but not the JMS Projections, to keep the default values added by the installer below, which are also wrong):

[dmadmin@documentum-server-0 ~]$ iapi gr_repo
Please enter a user (dmadmin):
Please enter password for dmadmin:


        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2018. OpenText Corporation
        All rights reserved.
        Client Library Release 16.4.0110.0058


Connecting to Server using docbase gr_repo
[DM_SESSION_I_SESSION_START]info:  "Session 010f12345000117c started for user dmadmin."


Connected to OpenText Documentum Server running Release 16.4.0110.0167  Linux64.Oracle
Session id is s0
API> ?,c,select count(*) from dm_server_config;
count(*)
----------------------
                     1
(1 row affected)

API> ?,c,select r_object_id, object_name, app_server_name, app_server_uri from dm_server_config order by object_name, app_server_name;
r_object_id       object_name  app_server_name  app_server_uri
----------------  -----------  ---------------  -----------------------------------------------------------------------
3d0f123450000102  gr_repo      do_bpm           https://cs01.dbi-ns01.svc.cluster.local:9082/bpm/servlet/DoMethod
                               do_mail          https://cs01.dbi-ns01.svc.cluster.local:9082/DmMail/servlet/DoMail
                               do_method        https://cs01.dbi-ns01.svc.cluster.local:9082/DmMethods/servlet/DoMethod
(1 row affected)

API> ?,c,select count(*) from dm_jms_config;
count(*)
----------------------
                     2
(1 row affected)

API> ?,c,select r_object_id, object_name from dm_jms_config order by object_name;
r_object_id       object_name
----------------  ------------------------------------------------------------
080f1234500010a3  JMS cs01.dbi-ns01.svc.cluster.local:9080 for gr_repo.gr_repo
080f1234500010a4  JMS documentum-server-0:9080 for gr_repo.gr_repo
(2 rows affected)

API> dump,c,080f1234500010a3
...
USER ATTRIBUTES

  object_name                     : JMS cs01.dbi-ns01.svc.cluster.local:9080 for gr_repo.gr_repo
  title                           :
  subject                         :
  authors                       []: <none>
  keywords                      []: <none>
  resolution_label                :
  owner_name                      : dmadmin
  owner_permit                    : 7
  group_name                      : docu
  group_permit                    : 5
  world_permit                    : 3
  log_entry                       :
  acl_domain                      : dmadmin
  acl_name                        : dm_450f123450000101
  language_code                   :
  server_config_id             [0]: 3d0f123450000102
  config_type                     : 2
  servlet_name                 [0]: do_method
                               [1]: SAMLAuthentication
                               [2]: do_mail
                               [3]: do_bpm
  base_uri                     [0]: https://cs01.dbi-ns01.svc.cluster.local:9082/DmMethods/servlet/DoMethod
                               [1]: https://cs01.dbi-ns01.svc.cluster.local:9082/SAMLAuthentication/servlet/ValidateSAMLResponse
                               [2]: https://cs01.dbi-ns01.svc.cluster.local:9082/DmMail/servlet/DoMail
                               [3]: https://cs01.dbi-ns01.svc.cluster.local:9082/bpm/servlet/DoMethod
  supported_protocol           [0]: https
                               [1]: https
                               [2]: https
                               [3]: https
  projection_netloc_enable      []: <none>
  projection_netloc_ident       []: <none>
  projection_enable            [0]: T
  projection_proximity_value   [0]: 2
  projection_targets           [0]: documentum-server-0
  projection_ports             [0]: 0
  network_locations             []: <none>
  server_major_version            :
  server_minor_version            :
  is_disabled                     : F

SYSTEM ATTRIBUTES

  r_object_type                   : dm_jms_config
  r_creation_date                 : 10/21/2019 09:51:00
  r_modify_date                   : 10/21/2019 10:49:08
  r_modifier                      : dmadmin
  r_access_date                   : nulldate
  r_composite_id                []: <none>
  r_composite_label             []: <none>
  r_component_label             []: <none>
  r_order_no                    []: <none>
  r_link_cnt                      : 0
  r_link_high_cnt                 : 0
  r_assembled_from_id             : 0000000000000000
  r_frzn_assembly_cnt             : 0
  r_has_frzn_assembly             : F
  r_is_virtual_doc                : 0
  r_page_cnt                      : 0
  r_content_size                  : 0
  r_lock_owner                    :
  r_lock_date                     : nulldate
  r_lock_machine                  :
  r_version_label              [0]: 1.0
                               [1]: CURRENT
  r_immutable_flag                : F
  r_frozen_flag                   : F
  r_has_events                    : F
  r_creator_name                  : dmadmin
  r_is_public                     : T
  r_policy_id                     : 0000000000000000
  r_resume_state                  : 0
  r_current_state                 : 0
  r_alias_set_id                  : 0000000000000000
  r_full_content_size             : 0
  r_aspect_name                 []: <none>
  r_object_id                     : 080f1234500010a3

APPLICATION ATTRIBUTES

  a_application_type              :
  a_status                        :
  a_is_hidden                     : F
  a_retention_date                : nulldate
  a_archive                       : F
  a_compound_architecture         :
  a_link_resolved                 : F
  a_content_type                  :
  a_full_text                     : T
  a_storage_type                  :
  a_special_app                   :
  a_effective_date              []: <none>
  a_expiration_date             []: <none>
  a_publish_formats             []: <none>
  a_effective_label             []: <none>
  a_effective_flag              []: <none>
  a_category                      :
  a_is_template                   : F
  a_controlling_app               :
  a_extended_properties         []: <none>
  a_is_signed                     : F
  a_last_review_date              : nulldate

INTERNAL ATTRIBUTES

  i_is_deleted                    : F
  i_reference_cnt                 : 1
  i_has_folder                    : T
  i_folder_id                  [0]: 0c0f123450000105
  i_contents_id                   : 0000000000000000
  i_cabinet_id                    : 0c0f123450000105
  i_antecedent_id                 : 0000000000000000
  i_chronicle_id                  : 080f1234500010a3
  i_latest_flag                   : T
  i_branch_cnt                    : 0
  i_direct_dsc                    : F
  i_is_reference                  : F
  i_retain_until                  : nulldate
  i_retainer_id                 []: <none>
  i_partition                     : 0
  i_is_replica                    : F
  i_vstamp                        : 4

API> dump,c,080f1234500010a4
...
USER ATTRIBUTES

  object_name                     : JMS documentum-server-0:9080 for gr_repo.gr_repo
  title                           :
  subject                         :
  authors                       []: <none>
  keywords                      []: <none>
  resolution_label                :
  owner_name                      : dmadmin
  owner_permit                    : 7
  group_name                      : docu
  group_permit                    : 5
  world_permit                    : 3
  log_entry                       :
  acl_domain                      : dmadmin
  acl_name                        : dm_450f123450000101
  language_code                   :
  server_config_id             [0]: 3d0f123450000102
  config_type                     : 2
  servlet_name                 [0]: do_method
                               [1]: do_mail
                               [2]: do_bpm
  base_uri                     [0]: https://documentum-server-0:9082/DmMethods/servlet/DoMethod
                               [1]: https://documentum-server-0:9082/DmMail/servlet/DoMail
                               [2]: https://cs01.dbi-ns01.svc.cluster.local:9082/bpm/servlet/DoMethod
  supported_protocol           [0]: https
                               [1]: https
                               [2]: https
  projection_netloc_enable      []: <none>
  projection_netloc_ident       []: <none>
  projection_enable            [0]: T
  projection_proximity_value   [0]: 1
  projection_targets           [0]: documentum-server-0
  projection_ports             [0]: 0
  network_locations             []: <none>
  server_major_version            :
  server_minor_version            :
  is_disabled                     : F

SYSTEM ATTRIBUTES

  r_object_type                   : dm_jms_config
  r_creation_date                 : 10/21/2019 09:51:01
  r_modify_date                   : 10/21/2019 10:50:20
  r_modifier                      : dmadmin
  r_access_date                   : nulldate
  r_composite_id                []: <none>
  r_composite_label             []: <none>
  r_component_label             []: <none>
  r_order_no                    []: <none>
  r_link_cnt                      : 0
  r_link_high_cnt                 : 0
  r_assembled_from_id             : 0000000000000000
  r_frzn_assembly_cnt             : 0
  r_has_frzn_assembly             : F
  r_is_virtual_doc                : 0
  r_page_cnt                      : 0
  r_content_size                  : 0
  r_lock_owner                    :
  r_lock_date                     : nulldate
  r_lock_machine                  :
  r_version_label              [0]: 1.0
                               [1]: CURRENT
  r_immutable_flag                : F
  r_frozen_flag                   : F
  r_has_events                    : F
  r_creator_name                  : dmadmin
  r_is_public                     : T
  r_policy_id                     : 0000000000000000
  r_resume_state                  : 0
  r_current_state                 : 0
  r_alias_set_id                  : 0000000000000000
  r_full_content_size             : 0
  r_aspect_name                 []: <none>
  r_object_id                     : 080f1234500010a4

APPLICATION ATTRIBUTES

  a_application_type              :
  a_status                        :
  a_is_hidden                     : F
  a_retention_date                : nulldate
  a_archive                       : F
  a_compound_architecture         :
  a_link_resolved                 : F
  a_content_type                  :
  a_full_text                     : T
  a_storage_type                  :
  a_special_app                   :
  a_effective_date              []: <none>
  a_expiration_date             []: <none>
  a_publish_formats             []: <none>
  a_effective_label             []: <none>
  a_effective_flag              []: <none>
  a_category                      :
  a_is_template                   : F
  a_controlling_app               :
  a_extended_properties         []: <none>
  a_is_signed                     : F
  a_last_review_date              : nulldate

INTERNAL ATTRIBUTES

  i_is_deleted                    : F
  i_reference_cnt                 : 1
  i_has_folder                    : T
  i_folder_id                  [0]: 0c0f123450000105
  i_contents_id                   : 0000000000000000
  i_cabinet_id                    : 0c0f123450000105
  i_antecedent_id                 : 0000000000000000
  i_chronicle_id                  : 080f1234500010a4
  i_latest_flag                   : T
  i_branch_cnt                    : 0
  i_direct_dsc                    : F
  i_is_reference                  : F
  i_retain_until                  : nulldate
  i_retainer_id                 []: <none>
  i_partition                     : 0
  i_is_replica                    : F
  i_vstamp                        : 2

API> ?,c,select r_object_id, object_name, servlet_name, supported_protocol, base_uri from dm_jms_config order by object_name, servlet_name;
r_object_id       object_name                                                   servlet_name        supported_protocol  base_uri                                                                                                                  
----------------  ------------------------------------------------------------  ------------------  ------------------  --------------------------------------------------------------------------------------------
080f1234500010a3  JMS cs01.dbi-ns01.svc.cluster.local:9080 for gr_repo.gr_repo  SAMLAuthentication  https               https://cs01.dbi-ns01.svc.cluster.local:9082/SAMLAuthentication/servlet/ValidateSAMLResponse
                                                                                do_bpm              https               https://cs01.dbi-ns01.svc.cluster.local:9082/bpm/servlet/DoMethod
                                                                                do_mail             https               https://cs01.dbi-ns01.svc.cluster.local:9082/DmMail/servlet/DoMail
                                                                                do_method           https               https://cs01.dbi-ns01.svc.cluster.local:9082/DmMethods/servlet/DoMethod
080f1234500010a4  JMS documentum-server-0:9080 for gr_repo.gr_repo              do_bpm              https               https://cs01.dbi-ns01.svc.cluster.local:9082/bpm/servlet/DoMethod
                                                                                do_mail             https               https://documentum-server-0:9082/DmMail/servlet/DoMail
                                                                                do_method           https               https://documentum-server-0:9082/DmMethods/servlet/DoMethod
(2 rows affected)

API> ?,c,select r_object_id, object_name, projection_enable, projection_proximity_value, projection_ports, projection_targets from dm_jms_config order by object_name, projection_targets;
r_object_id       object_name                                                   projection_enable  projection_proximity_value  projection_ports  projection_targets
----------------  ------------------------------------------------------------  -----------------  --------------------------  ----------------  -------------------
080f1234500010a3  JMS cs01.dbi-ns01.svc.cluster.local:9080 for gr_repo.gr_repo                  1                           2                 0  documentum-server-0
080f1234500010a4  JMS documentum-server-0:9080 for gr_repo.gr_repo                              1                           1                 0  documentum-server-0
(2 rows affected)

API> ?,c,select count(*) from dm_acs_config;
count(*)
----------------------
                     1
(1 row affected)

API> ?,c,select r_object_id, object_name, acs_supported_protocol, acs_base_url from dm_acs_config order by object_name, acs_base_url;
r_object_id       object_name                                                           acs_supported_protocol  acs_base_url
----------------  --------------------------------------------------------------------  ----------------------  ------------------------------------------------------------
080f123450000490  documentum-server-0.documentum-server.dbi-ns01.svc.cluster.localACS1  https                   https://cs01.dbi-ns01.svc.cluster.local:9082/ACS/servlet/ACS
(1 row affected)

API> exit
Bye
[dmadmin@documentum-server-0 ~]$

 

So what to do with that? Well, a simple solution is to just remove the wrong JMS config object (the second one) and redo the JMS Projections. You can live with the wrong name of the ACS config object and the other wrong references: even if it's ugly, it will work properly; it's really just the second JMS config object that might cause you some trouble. You can either script all that so it's done properly in the end, or do it manually, but obviously when you have a project with a few hundred Content Servers, a simple manual task can become a nightmare ;). Another obvious solution is to not use the K8s Service but stick with the headless/pod name. With this second solution, you might as well try to use the MigrationUtil utility to change all references to the hostname after the installation is done. That would be something interesting to test!
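
For the first solution, removing the wrong object itself is quick (a minimal sketch following the iapi session style used above; the object name is the wrong one found earlier, and the JMS Projections would still need to be redone afterwards, e.g. with dm_jms_admin.sh):

[dmadmin@documentum-server-0 ~]$ iapi gr_repo -Udmadmin -Pxxx << EOF
retrieve,c,dm_jms_config where object_name = 'JMS documentum-server-0:9080 for gr_repo.gr_repo'
destroy,c,l
EOF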

 

The article Documentum – Usage of K8s Services to install Documentum? appeared first on Blog dbi services.

Documentum – Database password validation rule in 16.4

Yann Neuhaus - Sun, 2019-12-01 00:00

A few months ago, I started working with the CS 16.4 (always using silent installation) and I had the pleasant surprise of seeing a new error message in the installation log. It's always such a pleasure to lose time on pretty stupid things like the one I will talk about in this blog.

So what’s the issue? Well upon installing a new repository, I saw an error message around the start of the silent installation. In the end, the process didn’t stop and the repository was actually installed and functional – as far as I could see – but I needed to check this deeper, to be sure that there were no problem. This is an extract of the installation log showing the exact error message:

[dmadmin@documentum-server-0 ~]$ cd $DM_HOME/install/logs
[dmadmin@documentum-server-0 logs]$ cat install.log
14:45:02,608  INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - The product name is: UniversalServerConfigurator
14:45:02,608  INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - The product version is: 16.4.0000.0248
14:45:02,608  INFO [main]  -
14:45:02,660  INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - Done InitializeSharedLibrary ...
14:45:02,698  INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerInformation - Setting CONFIGURE_DOCBROKER value to TRUE for SERVER
14:45:02,699  INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerInformation - Setting CONFIGURE_DOCBASE value to TRUE for SERVER
14:45:03,701  INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerCheckEnvrionmentVariable - The installer was started using the dm_launch_server_config_program.sh script.
14:45:03,701  INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerCheckEnvrionmentVariable - The installer will determine the value of environment variable DOCUMENTUM.
14:45:06,702  INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerCheckEnvrionmentVariable - The installer will determine the value of environment variable PATH.
14:45:09,709 ERROR [main] com.documentum.install.server.installanywhere.actions.DiWAServerValidteVariables - Invalid database user password. Valid database user password rules are:
1. Must contain only ASCII alphanumeric characters,'.', '_' and '-'.
Please enter a valid database user password.
14:45:09,717  INFO [main]  - The license file:/app/dctm/server/dba/tcs_license exists.
14:45:09,721  INFO [main] com.documentum.install.server.installanywhere.actions.DiWASilentConfigurationInstallationValidation - Start to validate docbase parameters.
14:45:09,723  INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerPatchExistingDocbaseAction - The installer will obtain all the DOCBASE on the machine.
14:45:11,742  INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerDocAppFolder - The installer will obtain all the DocApps which could be installed for the repository.
...
[dmadmin@documentum-server-0 logs]$

 

As you can see above, the error message is self-explanatory: the Database password used doesn't comply with the so-called “rules”. Seeing this kind of message, you would expect the installer to stop since the password doesn't comply; it shouldn't install the Repository. Yet it just skips the check and completes without problem.

On my side, I have always been using the same rule for passwords in Documentum: at least 1 lowercase, 1 uppercase, 1 figure, 1 special character and a total of 15 or more characters. Just comparing the password that has been used for the Database with what is printed in the log, the only reason why the password wouldn't be correct is because I put a '+' in it. In previous versions of Documentum, I often used a '+' and I never had any issues or errors with it.

So I checked with the OpenText Support (#4240691) to get more details on what is happening here. It turns out that starting with the CS 16.4, OpenText added a new password validation for the Database account and that this password must indeed only contain alphanumeric characters, '.', '_' or '-'… So they added a password validation which complains but doesn't do anything. Actually, it's even worse: the CS Team added this password validation with the CS 16.4 and enforced the rule, but only for the GUI installer. The same check was added only later to the silent installation and it was not enforced at that time. That's the reason why, if you try using the same password in the GUI, it will fail, while with the silent installation, it prints an error but still completes successfully… Therefore, with the same binaries, you have two different behaviors. That's pretty cool, right? Right? RIGHT?

In the end, a new defect (#CS-121161) has been raised and it seems the rule will be enforced in a coming patch. Therefore, if you are planning to use '+' characters in your Database passwords, consider changing them upfront to avoid a failure in the Repository installation. Looks like this time I should have stayed quiet and maybe I would have been able to use '+' for the next 10 years using the silent installations… Sorry!
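
If you want to check a candidate password against this rule before launching the installer, a simple shell test does the trick (a minimal sketch; the password value is just an example):

#!/bin/bash
# Candidate database password (example value) - the CS 16.4 rule only allows
# ASCII alphanumeric characters, '.', '_' and '-'
DB_PASSWORD='MyDbPassw0rd+15'
if [[ "${DB_PASSWORD}" =~ ^[A-Za-z0-9._-]+$ ]]; then
  echo "Password complies with the CS 16.4 database password rule"
else
  echo "Password contains characters the CS 16.4 installer rejects"
fi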

 

The article Documentum – Database password validation rule in 16.4 appeared first on Blog dbi services.

Focus on 19c NOW!

Yann Neuhaus - Fri, 2019-11-29 08:00
Introduction

For years, Oracle used the same mechanism for database versioning: a major version, represented by the first number, and then a release number, 1 for the very first edition, and a mature and reliable release 2 for production databases. Both of them have patchsets (the last number) and regular patchset updates (the date optionally displayed at the end) to remove bugs and to increase security. Jumping from release 1 to release 2 required a migration as if you were coming from an older version. Recently, Oracle broke this release pace to introduce a new versioning system based on the year of release, like Microsoft and a lot of others did. Patchsets are also replaced by release updates. Quite obvious: it's been a long time since patchsets became complete releases anyway. Lots of Oracle DBAs are now in the fog and, as a result, could make the wrong decision regarding the version to choose.

A recent history of Oracle Database versioning

Let’s focus on the versions currently running on most of customer’s databases:

  • 11.2.0.4: The terminal version of 11gR2 (long-term). 4 is the latest patchset of the 11gR2, there will never exist a 11.2.0.5. If you install the latest PSU (Patchset update) your database will precisely run on 11.2.0.4.191015 (as of the 29th of November 2019)
  • 12.1.0.2: The terminal version of 12cR1 (sort of long-term). A 12.1.0.1 existed but for a very short time
  • 12.2.0.1: first version of 12cR2 (short-term). This is the latest version with old versioning model
  • 18c: actually 12.2.0.2 – first patchset of the 12.2.0.1 (short-term). You cannot apply this patchset on top of the 12.2.0.1
  • 19c: actually 12.2.0.3 – terminal version of the 12cR2 (long-term). The next version will no more be based on 12.2 database kernel

18c and 19c also have a sort of patchset but the name has changed: we're now talking about RUs (release updates). The RU is actually the second number, 18.8 for example. Each release update can also be updated with PSUs, still the last number, for example 18.8.0.0.191015.
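
To see which version and release update an instance is actually running, you can simply query the data dictionary; a quick sketch (the BANNER_FULL column exists from 18c onwards, and the output shown is just an example):

select banner_full from v$version;
-- returns something like:
-- Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
-- Version 19.5.0.0.0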

Is there a risk to use older versions?

Actually, there is no risk using 11.2.0.4 and 12.1.0.2. These versions represent almost all the Oracle databases running in the world; few people have already migrated to 12.2 or newer versions. The risk is more related to the support provided by Oracle. With premier support only (linked to the support fees almost every customer pays each year), you have limited access to My Oracle Support once your version has left its premier support window: looking up something in the knowledge base is OK, downloading old patches is OK, but downloading the newest patches is no longer possible. And if you open an SR, the Oracle support team could ask you to buy extended support, or at least to apply the latest PSU you cannot download. If you want to keep your databases fully supported by Oracle, you'll have to ask and pay for extended support, as long as your version is still eligible for this kind of support. For sure, 11gR1, 10gR2 and older versions are no longer eligible for extended support.

Check this My Oracle Support note for fresh information about support timeline: Doc ID 742060.1

Should I migrate to 12.2 or 18c?

If you plan to migrate to 12.2 or 18c in 2020, think twice. The problem with these versions is that premier support is ending soon: before the end of 2020 for 12.2 and in the middle of 2021 for 18c. That's very short, and as you probably won't have the possibility to buy extended support (these are not terminal releases), you'll have to migrate again to 19c or a newer version in 2020 or 2021.

Why 19c is probably the only version you should migrate to?

19c is the long-term support release, meaning that premier support will last longer (until 2023) and also that extended support will be available (until 2026). If you plan to migrate to 19c in 2020, you will benefit from all the desired patches and full support for 3 years. And there is a chance that Oracle will also offer extended support for the first year or more, as they did for 11.2 and 12.1, even if that's pure assumption.

How about the costs?

You probably own perpetual licenses, meaning that the Oracle database product is yours (if you are compliant regarding the number of users or processors defined in your contract). Your licenses are not attached to a specific version, you can use 11gR2, 12c, 18c, 19c… Each year, you pay support fees: these fees give you access to My Oracle Support, for downloading patches or opening a Service Request in case of problems. But you are supposed to run a recent version of the database with this premier support. For example, as of the 29th of November 2019, the versions supported with premier support are 12.2.0.1, 18c and 19c. If you're using older versions, like 12.1.0.2 or 11.2.0.4, you should pay additional fees for extended support. Extended support is not something you have to subscribe to indefinitely, as the purpose is only to keep your database supported before you migrate to a newer version and return to premier support.

So, keeping older versions will cost you more, and in-time migration will keep your support fees as low as possible.

For sure, migrating to 19c also comes at a cost, but we’re now quite aware of the importance of migrating software and stay up to date for a lot of reasons.

Conclusion

Motivate your software vendor or your development team to validate and support 19c. The amount of work for supporting 19c compared to 18c or 12c is quite the same, as all these versions are actually 12c; the behaviour of the database will be the same for most of us. Avoid migrating to 12.2.0.1 or 18c, as you'll have to migrate again in 1 year. Keep your 11gR2 and/or 12cR1 and take extended support for one year while preparing the migration to 19c if you're not yet ready. 20c will be a kind of very first release 1: you probably won't migrate to this version if you mostly consider stability and reliability for your databases.

The article Focus on 19c NOW! appeared first on Blog dbi services.

dbvisit dbvctl process is terminating abnormally with Error Code: 2044

Yann Neuhaus - Fri, 2019-11-29 03:07

When applying archive logs on the standby, the dbvctl process can terminate abnormally with Error Code: 2044. This can happen when several large archive logs have to be applied.

Problem description

With dbvisit there are 2 ways to recover the archive logs on the standby: either using sqlplus or rman. By default the configuration is set to sqlplus. It can happen that, following a maintenance window where synchronization had to be suspended, a huge gap is faced between the primary and the standby databases and several archive logs need to be applied. The problem is even more visible if the archive log files are big. In my case there were about 34 archive logs to be applied following a maintenance activity and the size of each file was 8 GB.

Applying the archive logs on the standby failed as seen in the following output.

oracle@server_name:/u01/app/dbvisit/standby/ [DB_name] ./dbvctl -d DDC_name
=============================================================
Dbvisit Standby Database Technology (8.0.20_0_g7e6bd51b) (pid 1939)
dbvctl started on server_name: Mon Oct 28 16:19:01 2019
=============================================================
 
 
>>> Applying Log file(s) from primary_server to DB_name on standby_server:
 
 
Dbvisit Standby terminated...
Error Code: 2044
File (/u01/app/dbvisit/standby/tmp/1939.dbvisit.201910281619.sqlplus.dbv) does not
exist or is empty. Please check space and file permissions.
 
Tracefile from server: server_name (PID:1939)
1939_dbvctl_DB_name_201910281619.trc
 
oracle@server_name:/u01/app/dbvisit/standby/ [DB_name] ls -l /u01/app/dbvisit/standby/tmp/1939.dbvisit.201910281619.sqlplus.dbv
ls: cannot access /u01/app/dbvisit/standby/tmp/1939.dbvisit.201910281619.sqlplus.dbv: No such file or directory

Solution

To solve this problem, you can change the DDC configuration on the primary to use RMAN to apply the archive logs, at least until the gap is caught up. You will have to synchronize the standby configuration as well.
To use RMAN, set the APPLY_ARCHIVE_RMAN parameter to Y in the DDC configuration file.

Procedure is described below :

Backup the DDC configuration file

oracle@server_name:/u01/app/dbvisit/standby/ [DB_name] cd conf
oracle@server_name:/u01/app/dbvisit/standby/conf/ [DB_name] cp -p dbv_DDC_name.env dbv_DDC_name.env.20191028

Change the parameter

oracle@server_name:/u01/app/dbvisit/standby/conf/ [DB_name] vi dbv_DDC_name.env
oracle@server_name:/u01/app/dbvisit/standby/conf/ [DB_name] diff dbv_DDC_name.env dbv_DDC_name.env.20191028
543c543
< APPLY_ARCHIVE_RMAN = Y
---
> APPLY_ARCHIVE_RMAN = N
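
If you have many DDC configuration files to update, the same edit can be scripted instead of editing each file in vi (a sketch; the backup suffix and file path follow the example above):

sed -i.20191028 's/^APPLY_ARCHIVE_RMAN = N/APPLY_ARCHIVE_RMAN = Y/' /u01/app/dbvisit/standby/conf/dbv_DDC_name.env
grep '^APPLY_ARCHIVE_RMAN' /u01/app/dbvisit/standby/conf/dbv_DDC_name.env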

Send the configuration changes to the standby

oracle@server_name:/u01/app/dbvisit/standby/conf/ [DB_name] cd ..
oracle@server_name:/u01/app/dbvisit/standby/ [DB_name] ./dbvctl -d DDC_name -C
=============================================================
Dbvisit Standby Database Technology (8.0.20_0_g7e6bd51b) (pid 9318)
dbvctl started on server_name: Mon Oct 28 16:51:45 2019
=============================================================
 
>>> Dbvisit Standby configurational differences found between primary_server and standby_server.
Synchronised.
 
=============================================================
dbvctl ended on server_name: Mon Oct 28 16:51:52 2019
=============================================================

Apply archive log on the standby again and it will be completed successfully

oracle@server_name:/u01/app/dbvisit/standby/ [DB_name] ./dbvctl -d DDC_name
=============================================================
Dbvisit Standby Database Technology (8.0.20_0_g7e6bd51b) (pid 50909)
dbvctl started on server_name: Mon Oct 28 16:53:05 2019
=============================================================
 
 
>>> Applying Log file(s) from primary_server to DB_name on standby_server:
 
 
Next SCN required for recovery 3328390017 generated at 2019-10-28:11:57:42 +01:00.
Next log(s) required for recovery:
thread 1 sequence 77553
>>> Searching for new archive logs under /u03/app/oracle/dbvisit_arch/DB_name_SITE2... done
thread 1 sequence 77553 (1_77553_973158276.arc)
thread 1 sequence 77554 (1_77554_973158276.arc)
thread 1 sequence 77555 (1_77555_973158276.arc)
thread 1 sequence 77556 (1_77556_973158276.arc)
thread 1 sequence 77557 (1_77557_973158276.arc)
thread 1 sequence 77558 (1_77558_973158276.arc)
thread 1 sequence 77559 (1_77559_973158276.arc)
thread 1 sequence 77560 (1_77560_973158276.arc)
thread 1 sequence 77561 (1_77561_973158276.arc)
thread 1 sequence 77562 (1_77562_973158276.arc)
thread 1 sequence 77563 (1_77563_973158276.arc)
thread 1 sequence 77564 (1_77564_973158276.arc)
thread 1 sequence 77565 (1_77565_973158276.arc)
thread 1 sequence 77566 (1_77566_973158276.arc)
thread 1 sequence 77567 (1_77567_973158276.arc)
thread 1 sequence 77568 (1_77568_973158276.arc)
thread 1 sequence 77569 (1_77569_973158276.arc)
thread 1 sequence 77570 (1_77570_973158276.arc)
thread 1 sequence 77571 (1_77571_973158276.arc)
thread 1 sequence 77572 (1_77572_973158276.arc)
thread 1 sequence 77573 (1_77573_973158276.arc)
thread 1 sequence 77574 (1_77574_973158276.arc)
>>> Catalog archives... done
>>> Recovering database... done
Last applied log(s):
thread 1 sequence 77574
 
Next SCN required for recovery 3331579974 generated at 2019-10-28:16:50:18 +01:00.
Next required log thread sequence
 
>>> Dbvisit Archive Management Module (AMM)
 
Config: number of archives to keep = 0
Config: number of days to keep archives = 7
Config: diskspace full threshold = 80%
==========
 
Processing /u03/app/oracle/dbvisit_arch/DB_name_SITE2...
Archive log dir: /u03/app/oracle/dbvisit_arch/DB_name_SITE2
Total number of archive files : 1025
Number of archive logs deleted = 8
Current Disk percent full : 51%
 
=============================================================
dbvctl ended on server_name: Mon Oct 28 17:10:36 2019
=============================================================

The article dbvisit dbvctl process is terminating abnormally with Error Code: 2044 appeared first on Blog dbi services.

ODA hang/crash due to software raid-check

Yann Neuhaus - Fri, 2019-11-29 02:33

Oracle Database Appliance (ODA) is by default configured with software raid for the Operating System and the Oracle Database software file system (2 internal SSD disks). Two raid devices are configured: md0 and md1. ODAs are configured to run a raid-check every Sunday at 1am.

Analysing the problem

If the ODA is under some load during the raid-check, it can happen that the server freezes. Only the IP layer seems to still be alive: the server replies to the ping command, but the ssh layer is not available any more.
Nothing can be done with the ODA: no ssh connection, all logs and writes on the server are stuck, and an ILOM serial connection is impossible.

The only solution is to power cycle the ODA through ILOM.

The problem could be reproduced on the customer side by running 2 RMAN database backups and manually executing the raid-check.

In /var/log/messages we can see that the server hung doing the raid-check on md1:

Oct 27 01:00:01 ODA02 kernel: [6245829.462343] md: data-check of RAID array md0
Oct 27 01:00:01 ODA02 kernel: [6245829.462347] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
Oct 27 01:00:01 ODA02 kernel: [6245829.462349] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for data-check.
Oct 27 01:00:01 ODA02 kernel: [6245829.462364] md: using 128k window, over a total of 511936k.
Oct 27 01:00:04 ODA02 kernel: [6245832.154108] md: md0: data-check done.
Oct 27 01:01:02 ODA02 kernel: [6245890.375430] md: data-check of RAID array md1
Oct 27 01:01:02 ODA02 kernel: [6245890.375433] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
Oct 27 01:01:02 ODA02 kernel: [6245890.375435] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for data-check.
Oct 27 01:01:02 ODA02 kernel: [6245890.375452] md: using 128k window, over a total of 467694592k.
Oct 27 04:48:07 ODA02 kernel: imklog 5.8.10, log source = /proc/kmsg started. ==> Restart of ODA with ILOM, server froze on data-check of RAID array md1
Oct 27 04:48:07 ODA02 rsyslogd: [origin software="rsyslogd" swVersion="5.8.10" x-pid="5788" x-info="http://www.rsyslog.com"] start
Oct 27 04:48:07 ODA02 kernel: [ 0.000000] Initializing cgroup subsys cpuset
Oct 27 04:48:07 ODA02 kernel: [ 0.000000] Initializing cgroup subsys cpu
Oct 27 04:48:07 ODA02 kernel: [ 0.000000] Initializing cgroup subsys cpuacct
Oct 27 04:48:07 ODA02 kernel: [ 0.000000] Linux version 4.1.12-124.20.3.el6uek.x86_64 (mockbuild@ca-build84.us.oracle.com) (gcc version 4.9.2 20150212 (Red Hat 4.9.2-6.2.0.3) (GCC) ) #2 SMP Thu Oct 11 17:47:32 PDT 2018
Oct 27 04:48:07 ODA02 kernel: [ 0.000000] Command line: ro root=/dev/mapper/VolGroupSys-LogVolRoot rd_NO_LUKS rd_MD_UUID=424664a7:c29524e9:c7e10fcf:d893414e rd_LVM_LV=VolGroupSys/LogVolRoot rd_LVM_LV=VolGroupSys/LogVolSwap SYSFONT=latarcyrheb-sun16 LANG=en_US.UTF-8 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM pci=noaer crashkernel=256M@64M loglevel=3 panic=60 transparent_hugepage=never biosdevname=1 ipv6.disable=1 intel_idle.max_cstate=1 nofloppy nomce numa=off console=ttyS0,115200n8 console

Solution

Reduce raid check CPU and IO priority

By default the raid check is configured with low priority. Setting the priority to idle limits the CPU and IO resources used by the check.

Change NICE=low to NICE=idle in the /etc/sysconfig/raid-check configuration file.

[root@ODA02 log]# cat /etc/sysconfig/raid-check
#!/bin/bash
#
# Configuration file for /usr/sbin/raid-check
#
# options:
# ENABLED - must be yes in order for the raid check to proceed
# CHECK - can be either check or repair depending on the type of
# operation the user desires. A check operation will scan
# the drives looking for bad sectors and automatically
# repairing only bad sectors. If it finds good sectors that
# contain bad data (meaning that the data in a sector does
# not agree with what the data from another disk indicates
# the data should be, for example the parity block + the other
# data blocks would cause us to think that this data block
# is incorrect), then it does nothing but increments the
# counter in the file /sys/block/$dev/md/mismatch_count.
# This allows the sysadmin to inspect the data in the sector
# and the data that would be produced by rebuilding the
# sector from redundant information and pick the correct
# data to keep. The repair option does the same thing, but
# when it encounters a mismatch in the data, it automatically
# updates the data to be consistent. However, since we really
# don't know whether it's the parity or the data block that's
# correct (or which data block in the case of raid1), it's
# luck of the draw whether or not the user gets the right
# data instead of the bad data. This option is the default
# option for devices not listed in either CHECK_DEVS or
# REPAIR_DEVS.
# CHECK_DEVS - a space delimited list of devs that the user specifically
# wants to run a check operation on.
# REPAIR_DEVS - a space delimited list of devs that the user
# specifically wants to run a repair on.
# SKIP_DEVS - a space delimited list of devs that should be skipped
# NICE - Change the raid check CPU and IO priority in order to make
# the system more responsive during lengthy checks. Valid
# values are high, normal, low, idle.
# MAXCONCURENT - Limit the number of devices to be checked at a time.
# By default all devices will be checked at the same time.
#
# Note: the raid-check script is run by the /etc/cron.d/raid-check cron job.
# Users may modify the frequency and timing at which raid-check is run by
# editing that cron job and their changes will be preserved across updates
# to the mdadm package.
#
# Note2: you can not use symbolic names for the raid devices, such as you
# /dev/md/root. The names used in this file must match the names seen in
# /proc/mdstat and in /sys/block.
 
ENABLED=yes
CHECK=check
NICE=idle
# To check devs /dev/md0 and /dev/md3, use "md0 md3"
CHECK_DEVS=""
REPAIR_DEVS=""
SKIP_DEVS=""
MAXCONCURRENT=
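
The same change can also be done non-interactively, which is handy when you manage several ODAs (a sketch; a backup of the file is kept with a .bak suffix):

# switch the raid-check priority from low to idle and verify the result
sed -i.bak 's/^NICE=low/NICE=idle/' /etc/sysconfig/raid-check
grep '^NICE=' /etc/sysconfig/raid-check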

Change raid-check scheduling

Configure the raid-check to run in a low-activity period. For example, avoid running the raid-check during database backup windows.

[root@ODA02 ~]# cd /etc/cron.d
 
[root@ODA02 cron.d]# cat raid-check
# Run system wide raid-check once a week on Sunday at 1am by default
0 1 * * Sun root /usr/sbin/raid-check
 
[root@ODA02 cron.d]# vi raid-check
 
[root@ODA02 cron.d]# cat raid-check
# Run system wide raid-check once a week on Sunday at 1am by default
0 19 * * Sat root /usr/sbin/raid-check
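
To validate the new settings you do not have to wait for the cron job: the check can be triggered and monitored manually through the md sysfs interface (a sketch; run as root and adapt the device name to your system):

# show the raid devices and any running check
cat /proc/mdstat
# manually start a data-check on md1
echo check > /sys/block/md1/md/sync_action
# follow the progress
watch -n 5 cat /proc/mdstat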

Conclusion

These configuration changes were successfully tested on the customer environment: no crash/hang was experienced with the NICE parameter set to idle.
As per the Oracle documentation, the ODA BIOS default configuration could also be changed to use hardware raid:
ODA – configuring RAID
The question would be whether patching an ODA is still possible afterwards. If you would like to change this configuration, I would strongly recommend getting Oracle support approval first.

The article ODA hang/crash due to software raid-check appeared first on Blog dbi services.

The Many Ways To Sign-In To Oracle Cloud

Michael Dinh - Thu, 2019-11-28 09:09

When signing up for Oracle Cloud, a Cloud Account Name must be provided.

Login to Oracle Cloud Infrastructure Classic using Cloud Account Name
https://myservices-CloudAccountName.console.oraclecloud.com

Login to Oracle Cloud Infrastructure (the simplest; you need to enter the Cloud Account Name)

https://www.oracle.com/cloud/sign-in.html

Login to Oracle Cloud Infrastructure Region (you need to enter the Cloud Account Name/Cloud Tenant)

https://console.us-phoenix-1.oraclecloud.com

Login to Oracle Cloud Infrastructure Region using Cloud Account Name
https://console.us-phoenix-1.oraclecloud.com/?tenant=CloudAccountName

If you find more, then please let me know.

Good Minecraft Usernames

VitalSoftTech - Thu, 2019-11-28 08:53

You are all set to play Minecraft, but things get whacky when you cannot think of a good Minecraft username. We are here to help! Minecraft is an award-winning videogame developed by Markus Persson, a Swedish game developer. It is much like a videogame in a sandbox because the player can create, modify, and destroy his […]

The post Good Minecraft Usernames appeared first on VitalSoftTech.

Categories: DBA Blogs

Multiple Node.js Applications on Oracle Always Free Cloud

Andrejus Baranovski - Thu, 2019-11-28 08:26
What if you want to host multiple Oracle JET applications? You can do it easily on Oracle Always Free Cloud. The solution is described in the diagram below:


You should wrap the Oracle JET application into Node.js and deploy it to an Oracle Compute Instance through a Docker container. This is described in my previous post - Running Oracle JET in Oracle Cloud Free Tier.

Make sure to create the Docker container with a port different from 80. To host multiple Oracle JET apps, you will need to create multiple containers, each assigned a unique port. For example, I'm using port 5000:

docker run -p 5000:3000 -d --name appname dockeruser/dockerimage

This will map the standard Node port 3000 to port 5000, accessible internally within the Oracle Compute Instance. We can direct external traffic from port 80 to port 5000 (or any other port mapped to a Docker container) through Nginx.

Install Nginx:

yum install nginx

Go to Nginx folder:

cd /etc/nginx

Edit configuration file:

nano nginx.conf

Add a context root configuration for the Oracle JET application, to be directed to local port 5000:

location /invoicingdemoui/ {
     proxy_pass http://127.0.0.1:5000/;
}
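
A second application then simply gets its own container port and its own location block; for example (the app name, image and port here are illustrative):

docker run -p 5001:3000 -d --name secondapp dockeruser/secondimage

location /secondappui/ {
     proxy_pass http://127.0.0.1:5001/;
}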

To allow HTTP calls from Nginx to port 5000 (or another port), run this command (more about it on Stackoverflow):

setsebool -P httpd_can_network_connect 1

Reload Nginx:

systemctl reload nginx

Check Nginx status:

systemctl status nginx

That's all. Your Oracle JET app (demo URL) is now accessible from the outside.

SOA Suite 12c Stumbling on parsing Ampersands

Darwin IT - Thu, 2019-11-28 03:51

Yesterday I ran into a problem parsing XML in BPEL. A bit of context: I get messages from a JMS queue that I read 'Opaque', because I want to be able to dispatch the messages to different processes based on a generic WSDL, but with different payloads.

So after the Base64 Decode, for which I have a service, I need to parse the content to XML. Now, I used to use the oraext:parseEscapedXML() function for it. This function is known to have bugs, but I traced that down to BPEL 10g. And I'm on 12.2.1.3 now.

Still, I got exceptions like:

<bpelFault><faultType>0</faultType><subLanguageExecutionFault xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable"><part name="summary"><summary>An error occurs while processing the XPath expression; the expression is oraext:parseEscapedXML($Invoke_Base64EncodeDecodeService_decode_OutputVariable.part1/ns5:document)</summary></part><part name="code"><code>XPath expression failed to execute</code></part><part name="detail"><detail>XPath expression failed to execute.
An error occurs while processing the XPath expression; the expression is oraext:parseEscapedXML($Invoke_Base64EncodeDecodeService_decode_OutputVariable.part1/ns5:document)
The XPath expression failed to execute; the reason was: oracle.fabric.common.xml.xpath.XPathFunctionException: Expected ';'.
Check the detailed root cause described in the exception message text and verify that the XPath query is correct.
</detail></part></subLanguageExecutionFault></bpelFault>

Or:

<bpelFault><faultType>0</faultType><subLanguageExecutionFault xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable"><part name="summary"><summary>An error occurs while processing the XPath expression; the expression is oraext:parseEscapedXML($Invoke_Base64EncodeDecodeService_decode_OutputVariable.part1/ns5:document)</summary></part><part name="code"><code>XPath expression failed to execute</code></part><part name="detail"><detail>XPath expression failed to execute.
An error occurs while processing the XPath expression; the expression is oraext:parseEscapedXML($Invoke_Base64EncodeDecodeService_decode_OutputVariable.part1/ns5:document)
The XPath expression failed to execute; the reason was: oracle.fabric.common.xml.xpath.XPathFunctionException: Expected name instead of .
Check the detailed root cause described in the exception message text and verify that the XPath query is correct.
</detail></part></subLanguageExecutionFault></bpelFault>

It turns out that it was due to ampersands (&amp;) in the message. The function oraext:parseEscapedXML() is known to stumble on that.

A workaround is suggested in a forum on Integration Cloud Service (ICS): use oraext:get-content-as-string() first and feed the result to oraext:parseEscapedXML(). It turns out that this helps, although I had to fiddle around with the XPath expression to get the correct child element, since I also got the parent element surrounding the part I actually wanted to parse.
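
Applied to the expression from the faults above, that workaround looks roughly like this (a sketch using the variable from this post; the exact child element to select depends on your payload):

oraext:parseEscapedXML(oraext:get-content-as-string($Invoke_Base64EncodeDecodeService_decode_OutputVariable.part1/ns5:document))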

But then I found this blog, suggesting that it was replaced by oraext:parseXML() in 12c (it was actually introduced in 11g).
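
With that function the expression simplifies to the following (again a sketch based on the same variable):

oraext:parseXML($Invoke_Base64EncodeDecodeService_decode_OutputVariable.part1/ns5:document)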

Strange that I didn't find this earlier. Digging deeper down memory-lane, I think I must have seen the function before.  However, it shows I'm still learning all the time.

Enabling, disabling, and validating foreign key constraints in PostgreSQL

Yann Neuhaus - Thu, 2019-11-28 01:39

Constraints are an important concept in every relational database system and they guarantee the correctness of your data. While constraints are essential, there are situations when it is required to disable or drop them temporarily. The reason could be performance related, because it is faster to validate the constraints at once after a data load. The reason could also be that you need to load data and you do not know if the data is ordered in such a way that all foreign keys will validate while the data is loaded. In such a case it is required to either drop the constraints or to disable them until the data load is done; validation of the constraints is then deferred until all your data is there.

As always, let's start with a simple test case: two tables, the second one referencing the first one:

postgres=# create table t1 ( a int primary key
postgres(#                 , b text
postgres(#                 , c date
postgres(#                 );
CREATE TABLE
postgres=# create table t2 ( a int primary key
postgres(#                 , b int references t1(a)
postgres(#                 , c text
postgres(#                 );
CREATE TABLE

Two rows for each of them:

postgres=# insert into t1 (a,b,c) values(1,'aa',now());
INSERT 0 1
postgres=# insert into t1 (a,b,c) values(2,'bb',now());
INSERT 0 1
postgres=# insert into t2 (a,b,c) values (1,1,'aa');
INSERT 0 1
postgres=# insert into t2 (a,b,c) values (2,2,'aa');

Currently the two tiny tables look like this:

postgres=# \d t1
                 Table "public.t1"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           | not null | 
 b      | text    |           |          | 
 c      | date    |           |          | 
Indexes:
    "t1_pkey" PRIMARY KEY, btree (a)
Referenced by:
    TABLE "t2" CONSTRAINT "t2_b_fkey" FOREIGN KEY (b) REFERENCES t1(a)

postgres=# \d t2
                 Table "public.t2"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           | not null | 
 b      | integer |           |          | 
 c      | text    |           |          | 
Indexes:
    "t2_pkey" PRIMARY KEY, btree (a)
Foreign-key constraints:
    "t2_b_fkey" FOREIGN KEY (b) REFERENCES t1(a)

postgres=# 

Let's assume we want to load some data provided by a script. As we do not know the ordering of the data in the script, we decide to disable the foreign key constraint on the t2 table and validate it after the load:

postgres=# alter table t2 disable trigger all;
ALTER TABLE

The syntax might look a bit strange but it actually does disable the foreign key, and it would have disabled all the foreign keys if there had been more than one. It becomes clearer when we look at the table again:

postgres=# \d t2
                 Table "public.t2"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           | not null | 
 b      | integer |           |          | 
 c      | text    |           |          | 
Indexes:
    "t2_pkey" PRIMARY KEY, btree (a)
Foreign-key constraints:
    "t2_b_fkey" FOREIGN KEY (b) REFERENCES t1(a)
Disabled internal triggers:
    "RI_ConstraintTrigger_c_16460" AFTER INSERT ON t2 FROM t1 NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE FUNCTION "RI_FKey_check_ins"()
    "RI_ConstraintTrigger_c_16461" AFTER UPDATE ON t2 FROM t1 NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE FUNCTION "RI_FKey_check_upd"()

“ALL” means: please also disable the internal triggers that are responsible for verifying the constraints. One restriction of the “ALL” keyword is that you need to be a superuser to do that. Trying it with a normal user will fail:

postgres=# create user u1 with login password 'u1';
CREATE ROLE
postgres=# \c postgres u1
You are now connected to database "postgres" as user "u1".
postgres=> create table t3 ( a int primary key
postgres(>                 , b text
postgres(>                 , c date
postgres(>                 );
CREATE TABLE
postgres=> create table t4 ( a int primary key
postgres(>                 , b int references t3(a)
postgres(>                 , c text
postgres(>                 );
CREATE TABLE
postgres=> alter table t4 disable trigger all;
ERROR:  permission denied: "RI_ConstraintTrigger_c_16484" is a system trigger
postgres=> 

What you can do as a regular user is disable the user triggers:

postgres=> alter table t4 disable trigger user;
ALTER TABLE

As I do not have any user triggers, this of course does not make much sense. Coming back to our initial t1 and t2 tables: as the foreign key currently is disabled, we can insert data into the t2 table that violates the constraint:

postgres=# select * from t1;
 a | b  |     c      
---+----+------------
 1 | aa | 2019-11-27
 2 | bb | 2019-11-27
(2 rows)

postgres=# select * from t2;
 a | b | c  
---+---+----
 1 | 1 | aa
 2 | 2 | aa
(2 rows)

postgres=# insert into t2 (a,b,c) values (3,3,'cc');
INSERT 0 1
postgres=# 

There clearly is no matching parent for this row in the t1 table but the insert succeeds, as the foreign key is disabled. Time to validate the constraint:

postgres=# \d t2
                 Table "public.t2"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           | not null | 
 b      | integer |           |          | 
 c      | text    |           |          | 
Indexes:
    "t2_pkey" PRIMARY KEY, btree (a)
Foreign-key constraints:
    "t2_b_fkey" FOREIGN KEY (b) REFERENCES t1(a)
Disabled internal triggers:
    "RI_ConstraintTrigger_c_16460" AFTER INSERT ON t2 FROM t1 NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE FUNCTION "RI_FKey_check_ins"()
    "RI_ConstraintTrigger_c_16461" AFTER UPDATE ON t2 FROM t1 NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE FUNCTION "RI_FKey_check_upd"()

postgres=# alter table t2 enable trigger all;
ALTER TABLE
postgres=# \d t2
                 Table "public.t2"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           | not null | 
 b      | integer |           |          | 
 c      | text    |           |          | 
Indexes:
    "t2_pkey" PRIMARY KEY, btree (a)
Foreign-key constraints:
    "t2_b_fkey" FOREIGN KEY (b) REFERENCES t1(a)

postgres=# alter table t2 validate constraint t2_b_fkey;
ALTER TABLE
postgres=# 

Surprise, surprise, PostgreSQL does not complain about the invalid row. Why is that? If we ask the pg_constraint catalog table, the constraint is recorded as validated:

postgres=# select convalidated from pg_constraint where conname = 't2_b_fkey' and conrelid = 't2'::regclass;
 convalidated 
--------------
 t
(1 row)

It is even validated if we disable it once more:

postgres=# alter table t2 disable trigger all;
ALTER TABLE
postgres=# select convalidated from pg_constraint where conname = 't2_b_fkey' and conrelid = 't2'::regclass;
 convalidated 
--------------
 t
(1 row)

That implies that PostgreSQL will not validate the constraint when we enable the internal triggers, and that PostgreSQL will not validate all the data as long as the status is valid. What we really need to do to get the constraint validated is to invalidate it first:

postgres=# alter table t2 alter CONSTRAINT t2_b_fkey not valid;
ERROR:  ALTER CONSTRAINT statement constraints cannot be marked NOT VALID

Seems this is not the correct way of doing it. The correct way is to drop the foreign key and then re-create it in the invalid state:

postgres=# alter table t2 drop constraint t2_b_fkey;
ALTER TABLE
postgres=# delete from t2 where a in (3,4);
DELETE 2
postgres=# alter table t2 add constraint t2_b_fkey foreign key (b) references t1(a) not valid;
ALTER TABLE
postgres=# \d t2
                 Table "public.t2"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           | not null | 
 b      | integer |           |          | 
 c      | text    |           |          | 
Indexes:
    "t2_pkey" PRIMARY KEY, btree (a)
Foreign-key constraints:
    "t2_b_fkey" FOREIGN KEY (b) REFERENCES t1(a) NOT VALID

Now we have the desired state and we can insert our data:

postgres=# insert into t2(a,b,c) values (3,3,'cc');
ERROR:  insert or update on table "t2" violates foreign key constraint "t2_b_fkey"
DETAIL:  Key (b)=(3) is not present in table "t1".

Surprise, again. Creating a “not valid” constraint only tells PostgreSQL not to scan the whole table to validate if all the rows are valid. For data inserted or updated the constraint is still checked, and this is why the insert fails.

What options do we have left? The obvious one is this (see the SQL sketch after the list):

  • Drop all the foreign keys.
  • Load the data.
  • Re-create the foreign keys, but leave them invalid to avoid the costly scan of the tables. New data will still be validated.
  • Validate the constraints when there is less load on the system.
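
In SQL, for the tables of this post, that procedure would look like this (a sketch):

-- 1. drop the foreign key before the load
alter table t2 drop constraint t2_b_fkey;
-- 2. load the data
-- 3. re-create the foreign key as "not valid": new data is checked,
--    but the existing rows are not scanned yet
alter table t2 add constraint t2_b_fkey foreign key (b) references t1(a) not valid;
-- 4. validate the existing data later, when there is less load on the system
alter table t2 validate constraint t2_b_fkey;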

Another possibility would be this:

postgres=# alter table t2 alter constraint t2_b_fkey deferrable;
ALTER TABLE
postgres=# begin;
BEGIN
postgres=# set constraints all deferred;
SET CONSTRAINTS
postgres=# insert into t2 (a,b,c) values (3,3,'cc');
INSERT 0 1
postgres=# insert into t2 (a,b,c) values (4,4,'dd');
INSERT 0 1
postgres=# insert into t1 (a,b,c) values (3,'cc',now());
INSERT 0 1
postgres=# insert into t1 (a,b,c) values (4,'dd',now());
INSERT 0 1
postgres=# commit;
COMMIT

The downside of this is that it only works until the next commit, so you have to do all your work in one transaction. The key point of this post is that the assumption that the following will validate your data is false:

postgres=# alter table t2 disable trigger all;
ALTER TABLE
postgres=# insert into t2 (a,b,c) values (5,5,'ee');
INSERT 0 1
postgres=# alter table t2 enable trigger all;
ALTER TABLE
postgres=# 

This will only validate new data but it does not guarantee that all the rows satisfy the constraint:

postgres=# insert into t2 (a,b,c) values (6,6,'ff');
ERROR:  insert or update on table "t2" violates foreign key constraint "t2_b_fkey"
DETAIL:  Key (b)=(6) is not present in table "t1".
postgres=# select * from t2 where b = 5;
 a | b | c  
---+---+----
 5 | 5 | ee
(1 row)

postgres=# select * from t1 where a = 5;
 a | b | c 
---+---+---
(0 rows)

Finally: There is another way of doing it, but this directly updates the pg_constraint catalog table and this is something you should _not_ do (never update internal tables directly!):

postgres=# delete from t2 where b = 5;
DELETE 1
postgres=# alter table t2 disable trigger all;
ALTER TABLE
postgres=# insert into t2 values (5,5,'ee');
INSERT 0 1
postgres=# alter table t2 enable trigger all;
ALTER TABLE
postgres=# update pg_constraint set convalidated = false where conname = 't2_b_fkey' and conrelid = 't2'::regclass;
UPDATE 1
postgres=# alter table t2 validate constraint t2_b_fkey;
ERROR:  insert or update on table "t2" violates foreign key constraint "t2_b_fkey"
DETAIL:  Key (b)=(5) is not present in table "t1".
postgres=# 

In this case the constraint will be fully validated as it is recorded as invalid in the catalog.
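
By the way, if you want to list all foreign keys that are currently recorded as not validated (for example after such a load), a small catalog query helps (a sketch):

select conrelid::regclass as table_name, conname
  from pg_constraint
 where contype = 'f'
   and not convalidated;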

Conclusion: Do not rely on assumptions, always carefully test your procedures.

The article Enabling, disabling, and validating foreign key constraints in PostgreSQL appeared first on Blog dbi services.

Oracle Files Lawsuit against Secretary of Labor Eugene Scalia and Department of Labor plus OFCCP and OFCCP Director Craig Leen Challenging the Unauthorized U.S. Department of Labor Enforcement and Adjudicative Regime

Oracle Press Releases - Wed, 2019-11-27 13:42
Press Release
Oracle Files Lawsuit against Secretary of Labor Eugene Scalia and Department of Labor plus OFCCP and OFCCP Director Craig Leen Challenging the Unauthorized U.S. Department of Labor Enforcement and Adjudicative Regime

Washington, D.C.—Nov 27, 2019

Oracle today filed a lawsuit in U.S. District Court in Washington, D.C. challenging the legality of the system of enforcement and adjudication established by the U.S. Department of Labor and its Office of Federal Contract Compliance Programs (OFCCP) for discrimination claims against government contractors. The complaint alleges that this system was not authorized by Congress or the President and contravenes statutory authorities. 

Oracle’s complaint states that under the current system, claims against government contractors are not prosecuted in federal courts with a federal jury. Instead, the Department of Labor itself serves as investigator, prosecutor, judge, jury and appellate court, usurping the role of the Equal Employment Opportunity Commission (EEOC), the Department of Justice and the Courts.

“Oracle filed this case because it is being subjected to an unlawful enforcement action by the Labor Department utilizing a process with no statutory foundation whatsoever,” said Ken Glueck, executive vice president, Oracle.

Congress expressly declined to give agencies, such as EEOC, the broad and unfettered authority that the Department of Labor has assumed for itself to investigate, prosecute and adjudicate lawsuits entirely in-house. This system violates the U.S. Constitution and acts of Congress, including the Civil Rights Act of 1964 and the Equal Employment Opportunity Act of 1972.

Oracle recognizes the vital importance of a lawful system that investigates and prosecutes discrimination by employers, including government contractors. But the existing extra-statutory Department of Labor process results in arbitrary enforcement actions against the many employers who qualify as federal contractors, often with no evidentiary foundation and designed to do nothing more than extort concessions under a system lacking any semblance of due process. 

“It is apparent that neither Solicitor of Labor Kate O’Scannlain nor OFCCP Director Craig Leen is prepared to move back to a system where merits trump optics. Oracle brings this suit because the leadership at the Department of Labor has failed to restore balance to an unrestrained bureaucracy,” said Glueck.

“We believe strongly in maintaining a level playing field in the workplace for all of our employees and remain proud of our firm commitment to equality in our workforce. This lawsuit seeks to ensure that employers such as Oracle are likewise entitled to a level playing field when the government asserts claims of discrimination. That has not been the case with the OFCCP, resulting in enforcement actions that are meritless and defamatory to Oracle, its executives, and other government contractors,” said Glueck.

In addition to today’s lawsuit, Oracle fully intends to defend itself against the Department of Labor’s baseless enforcement action set to begin trial on December 5. The government’s case rests on false allegations, cherry-picked statistics, and erroneous and radical theories of the law. The Labor Department’s nonsensical claims underscore the need for the federal courts to declare the Department of Labor’s current enforcement system unconstitutional.

Contact Info
Deborah Hellinger
Oracle
+1.212.508.7935
deborah.hellinger@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Deborah Hellinger

  • +1.212.508.7935
