Feed aggregator

What's new in OAC5.5?

Rittman Mead Consulting - Mon, 2020-02-03 03:01

Last Friday, alongside Oracle Analytics Server (for which a blog post is coming), a new OAC version came out. Let's have a quick look at all the new features it includes!

Maps

If you use Maps often, then there is a good list of new options available to you! The very first is the possibility to associate a Map Layer with a Data Column directly in the data source definition.

Let's say you have a column City Zones in your dataset, which divides a city into customized areas based on your company's needs, and you have a map layer defining those areas geographically (e.g. with a GeoJSON file). I created an example with Verona, the city where I live. The custom GeoJSON file was created using geojson.io.
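
For reference, such a custom layer is just a GeoJSON FeatureCollection with one polygon feature per zone. Below is a minimal, purely illustrative sketch (the file name, zone property and coordinates are made up for the example; a real layer would be drawn in geojson.io and downloaded from there):

cat - <<'eot' > verona_zones.geojson
{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "properties": { "CITY_ZONE": "Centro Storico" },
      "geometry": {
        "type": "Polygon",
        "coordinates": [ [ [10.990, 45.438], [11.005, 45.438], [11.005, 45.447], [10.990, 45.447], [10.990, 45.438] ] ]
      }
    }
  ]
}
eot

The property holding the zone name (CITY_ZONE here) is typically what gets matched against the values of the City Zones data column once the layer is uploaded.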


The upload of custom shapes and their usage in OAC has been available for some time; however, you as project creator had to associate the City Zone column with the correct Map Layer for each map visualization included in your project. Now you can define the Map Layer to Data Column association once and for all at data source level, so every map using the City Zone column will automatically use the correct layer.


Another cool new feature in Maps is the AutoFocus on Data: the visualization will automatically zoom and center the map appropriately based on the dataset presented, and it rearranges itself when the filtering changes.

Pivot Tables

Another new option is available in pivot tables, where you can now set Totals and Subtotals Above and Below, as you were used to doing in Answers. Like in the "old" tool, you can set a different format for Totals and Subtotals, with coloring, background and font formatting options available. You now have full control of the layout and can make beautiful or horrible (like the below) color choices.

Visualizations

The perfect visualization is now available: the Spacer Viz! This is an empty visualization that you can add to your canvas, allowing you to optimize the layout in cases where you need extra white space.


Another new feature in this release is the Custom Background: it's now possible to define a color or an image as the background for the whole project or for a single canvas. The image can be a URL reference or uploaded from the desktop. There are also options to position the image on the screen and to auto-fit the image to the window size. Adding a custom background to the whole project means that every time a new canvas is added, it will already have the selected image/color by default.


Another addition is the Butterfly Viz. This view was already available as a plugin from the Oracle Analytics Library and now becomes native in OAC 5.5. The butterfly viz is useful when comparing two metrics across the same dimension.


By default the two metrics are on the same scale, but there is also a "Synchronized Scales" option that, when set to OFF, will show the metrics on different scales.

Datasources and Data Gateway

A new datasource definition for Oracle NetSuite is now available, allowing the connection by passing the parameters Datasource, Account ID and Role ID on top of the usual Host, Username and Password.


An enhancement has also been published for the Oracle Database connection: you can now select between a Basic and an Advanced connection. The Basic option should be used when connecting to single node databases. The Advanced option, on the other hand, is useful when connecting to RAC cluster DBs where multiple hostnames and ports need to be listed. When selecting the Advanced option we can simply add a custom connection string like the one below:

(DESCRIPTION=
      (ADDRESS_LIST= (LOAD_BALANCE=on)(FAILOVER=on)
      (ADDRESS=(PROTOCOL=tcp)(HOST=hostname1.subnet.com)(PORT=1529))
      (ADDRESS=(PROTOCOL=tcp)(HOST=hostname2.subnet.com)(PORT=1529))
      (ADDRESS=(PROTOCOL=tcp)(HOST=hostname3.subnet.com)(PORT=1529))
      ...
      (ADDRESS=(PROTOCOL=tcp)(HOST=hostnamen.subnet.com)(PORT=1529))
      )
)
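
For completeness, a full connect descriptor of this kind normally also carries a CONNECT_DATA section naming the service to connect to; a purely illustrative sketch (the service name below is made up):

(DESCRIPTION=
      (ADDRESS_LIST=(LOAD_BALANCE=on)(FAILOVER=on)
      (ADDRESS=(PROTOCOL=tcp)(HOST=hostname1.subnet.com)(PORT=1529))
      (ADDRESS=(PROTOCOL=tcp)(HOST=hostname2.subnet.com)(PORT=1529))
      )
      (CONNECT_DATA=(SERVICE_NAME=myservice.subnet.com))
)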

An option is also available to use Data Gateway with Essbase sources, making the OAC to on-premises Essbase connection only one click away by simply enabling the Use Data Gateway option on the screen. The usage of Data Gateway is also now available in BI Publisher, allowing pixel-perfect reporting from on-premises datasources.


Another option in BI Publisher is Data Chunking, which is extremely useful for big reports since it allows the report to be executed as multiple sub-jobs in parallel, with a final job consolidating the results into a single output.

Those are the new features in this release. Do you want more detailed examples of a particular feature? Let me know in the comments and I'll write about it!

Categories: BI & Warehousing

Setup a two node Postgres HA Cluster using EDB Advanced Server and EFM

Yann Neuhaus - Mon, 2020-02-03 01:00

Some time ago I had a customer looking for a two-node PostgreSQL Master/Replica solution. As we need Oracle compatibility in a later step, we decided to go with the EnterpriseDB tools. This article should give you an introduction on how to set up the environment.

Prerequisites

There are just a few things that you need to prepare.
You need (at least) three servers with:

  • EPEL repository available
  • Subscription for EDB
  • EDB repository available

To make everything work with our DMK, some folders and links are needed:

mkdir -p /u01/app/postgres/product/as11
mkdir -p /u01as11
mkdir -p /usr/edb
mkdir -p /u02/pgdata/11/PG1
ln -s /u02/pgdata/11/PG1/ /u01as11/data
ln -s /u01/app/postgres/product/as11/ /usr/edb/as11
yum install -y unzip xorg-x11-xauth screen
EDB Advanced Server Installation

Let's start with the installation of the EDB Advanced Server. This is really straightforward:

$ yum install edb-as11-server
$ chown enterprisedb:enterprisedb /u02/pgdata/11/epg1/
$ chown -R enterprisedb:enterprisedb /u01/app/
$ rm -rf /u01as11/backups/
$ passwd enterprisedb

Now you can install and configure our DMK. Make sure to adjust var::PGUSER::=::nooption::”enterprisedb” in the dmk.conf.

As soon as the installation is done, you can initialize a new primary cluster.

enterprisedb@edb1:/var/lib/edb/ [PG1] cat /etc/pgtab
PG1:/u01/app/postgres/product/as11/:/u02/pgdata/11/PG1/:5444:Y
enterprisedb@edb1:/var/lib/edb/ [PG1] dmk
enterprisedb@edb1:/var/lib/edb/ [pg950] PG1

********* dbi services Ltd. ****************

STATUS           : CLOSED

********************************************
enterprisedb@ad1:/var/lib/edb/ [PG1] sudo mkdir -p /u02/pgdata/PG1
enterprisedb@ad1:/var/lib/edb/ [PG1] sudo chown enterprisedb:enterprisedb /u02/pgdata/PG1
enterprisedb@ad1:/var/lib/edb/ [PG1] /u01/app/postgres/product/as11/bin/initdb --pgdata=/u02/pgdata/PG1/ --pwprompt --data-checksums --auth=md5
The files belonging to this database system will be owned by user "enterprisedb".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.UTF-8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are enabled.

Enter new superuser password:
Enter it again:

fixing permissions on existing directory /u02/pgdata/PG1 ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default timezone ... Europe/Berlin
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
creating edb sys ... ok
loading edb contrib modules ...
edb_redwood_bytea.sql ok
edb_redwood_date.sql ok
dbms_alert_public.sql ok
dbms_alert.plb ok
dbms_job_public.sql ok
dbms_job.plb ok
dbms_lob_public.sql ok
dbms_lob.plb ok
dbms_output_public.sql ok
dbms_output.plb ok
dbms_pipe_public.sql ok
dbms_pipe.plb ok
dbms_rls_public.sql ok
dbms_rls.plb ok
dbms_sql_public.sql ok
dbms_sql.plb ok
dbms_utility_public.sql ok
dbms_utility.plb ok
dbms_aqadm_public.sql ok
dbms_aqadm.plb ok
dbms_aq_public.sql ok
dbms_aq.plb ok
dbms_profiler_public.sql ok
dbms_profiler.plb ok
dbms_random_public.sql ok
dbms_random.plb ok
dbms_redact_public.sql ok
dbms_redact.plb ok
dbms_lock_public.sql ok
dbms_lock.plb ok
dbms_scheduler_public.sql ok
dbms_scheduler.plb ok
dbms_crypto_public.sql ok
dbms_crypto.plb ok
dbms_mview_public.sql ok
dbms_mview.plb ok
dbms_session_public.sql ok
dbms_session.plb ok
edb_bulkload.sql ok
edb_gen.sql ok
edb_objects.sql ok
edb_redwood_casts.sql ok
edb_redwood_strings.sql ok
edb_redwood_views.sql ok
utl_encode_public.sql ok
utl_encode.plb ok
utl_http_public.sql ok
utl_http.plb ok
utl_file.plb ok
utl_tcp_public.sql ok
utl_tcp.plb ok
utl_smtp_public.sql ok
utl_smtp.plb ok
utl_mail_public.sql ok
utl_mail.plb ok
utl_url_public.sql ok
utl_url.plb ok
utl_raw_public.sql ok
utl_raw.plb ok
commoncriteria.sql ok
waitstates.sql ok
installing extension edb_dblink_libpq ... ok
installing extension edb_dblink_oci ... ok
installing extension pldbgapi ... ok
snap_tables.sql ok
snap_functions.sql ok
dblink_ora.sql ok
sys_stats.sql ok
finalizing initial databases ... ok
syncing data to disk ... ok

Success. You can now start the database server using:

    /u01/app/postgres/product/as11/bin/pg_ctl -D /u02/pgdata/PG1/ -l logfile start

enterprisedb@ad1:/var/lib/edb/ [PG1]

Then adjust wal_keep_segments; afterwards the edb-as service can be enabled and started.

$ echo "wal_keep_segments=100" >> $PGDATA/postgresql.auto.conf
$ sudo systemctl enable edb-as-11.service
$ sudo systemctl start edb-as-11

To be sure everything works as expected, reboot the server (if possible).
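
A quick, minimal check after the reboot could look like the following sketch (the port is the one declared in /etc/pgtab above):

$ sudo systemctl status edb-as-11.service
$ psql -U enterprisedb -p 5444 -c "select version()" postgres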

All of the above steps should also be done on your additional nodes, but without the systemctl start.

Configuration

First, on Node 1 (Master) you need to create the replication role.

postgres=# create role replication with REPLICATION PASSWORD 'replication' login;
CREATE ROLE

Second, you need to add replication to pg_hba.conf.

local   replication    all             127.0.0.1/32            trust
host    replication    all             192.168.22.53/32        trust
host    replication    all             192.168.22.51/32        trust
host    replication    all             192.168.22.52/32        trust
host    replication    all             ::1/128                 trust
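
Do not forget to reload the configuration so that the new pg_hba.conf entries become active, for example:

$ psql -U enterprisedb -c "select pg_reload_conf()" postgres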

And last but not least, you should exchange the ssh keys of all nodes:

enterprisedb@edb1:/u01 [PG1] ssh-keygen
enterprisedb@edb1:/u01 [PG1] ssh-copy-id enterprisedb@edb1
enterprisedb@edb1:/u01 [PG1] ssh-copy-id enterprisedb@edb2
enterprisedb@edb1:/u01 [PG1] ssh-copy-id enterprisedb@edb3
Create the replica

As already mentioned, you need almost all steps done on Node 2 as well, but without starting the service.
Make sure to have all hosts in pg_hba.conf of Master.

To create the replica, run a pg_basebackup onto Node 2:

enterprisedb@edb2:/u01 [PG1] pg_basebackup -h 192.168.22.51 -U replication -p 5432 -D $PGDATA -Fp -Xs -P -R
49414/49414 kB (100%), 1/1 tablespace

Once finished, check that the recovery.conf is available and add the following lines:

enterprisedb@edb2:/u01 [PG1] echo "recovery_target_timeline = 'latest'" >> /u02/pgdata/11/PG1/recovery.conf
enterprisedb@edb2:/u01 [PG1] echo "trigger_file='/u02/pgdata/11/PG1/trigger_failover'" >> /u02/pgdata/11/PG1/recovery.conf
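
For reference, the resulting recovery.conf should then look roughly like this (pg_basebackup -R generated the first two lines; the values shown are illustrative):

standby_mode = 'on'
primary_conninfo = 'user=replication host=192.168.22.51 port=5432'
recovery_target_timeline = 'latest'
trigger_file='/u02/pgdata/11/PG1/trigger_failover'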

To test if the recovery is working, start the cluster and check the recovery status.

enterprisedb@edb2:/u01 [PG1] pgstart
enterprisedb@edb2:/u01 [PG1] psql -U enterprisedb -c "select pg_is_in_recovery()" postgres
 pg_is_in_recovery
-------------------
 t
(1 row)

enterprisedb@edb2:/u01 [PG1] sudo systemctl enable edb-as-11.service
enterprisedb@edb2:/u01 [PG1] pgstop
enterprisedb@edb2:/u01 [PG1] systemctl start edb-as-11
EDB Postgres Failover Manager (EFM)

To make our two-node setup highly available, we need to install the EDB Postgres Failover Manager on three nodes: on the two nodes hosting the Master / Replica and on a third one acting as a witness server.

Installation

The installation of EFM is straightforward as well; you have to do the following steps on all three nodes. To use EFM together with our DMK, you need to create some links.

$ sudo yum install edb-efm37
$ sudo yum install java-1.8.0-openjdk
$ sudo chown -R enterprisedb:enterprisedb /etc/edb/efm-3.7/
$ cat /etc/edb/efm-3.7/efm.nodes
$ sudo ln -s /usr/edb/efm-3.7 /usr/edb/efm
$ sudo ln -s /etc/edb/efm-3.7 /etc/edb/efm
Configuration

On the master you have to set a password for the enterprisedb user and encrypt the password using EFM.

$ psql -U enterprisedb -c "alter user enterprisedb with password '*******'" postgres
$ /usr/edb/efm/bin/efm encrypt efm

The encrypted password generated by efm encrypt will be needed in the efm.properties files.

As a next step we need an efm.properties file on Nodes 1 and 2 (the parameters to adjust are shown below).

$ cp /u01/app/postgres/local/dmk/templates/postgres/efm.properties.in /etc/edb/efm-3.7/efm.properties
$ vi /etc/edb/efm-3.7/efm.properties
  db.user=enterprisedb
  db.password.encrypted=f17db6033ef1be48ec1955d38b4c9c46
  db.port=5400
  db.database=postgres
  db.bin=/u01/app/postgres/product/as11/bin
  db.recovery.dir=/u02/pgdata/11/EPAS
  bind.address=192.168.22.51:7800
  admin.port=7809
  is.witness=false 
  virtualIp=192.168.22.55
  virtualIp.interface=enp0s8
  virtualIp.prefix=24
  virtualIp.single=true
$ sudo chown enterprisedb:enterprisedb /etc/edb/efm/efm.properties

We also need an efm.nodes file listing all nodes of the cluster.

$ cat /etc/edb/efm/efm.nodes
# List of node address:port combinations separated by whitespace.
# The list should include at least the membership coordinator's address.
192.168.22.51:7800 192.168.22.52:7800 192.168.22.53:7800
$ chown efm:efm efm.nodes
$ chmod 660 /etc/edb/efm/efm.nodes

To conclude, enable and start the efm-3.7.service.

sudo systemctl enable efm-3.7.service
sudo systemctl start efm-3.7.service

On node 3 we need to create an efm.properties file as well, but using the efm.properties_witness file of DMK as a draft.

$ cp /u01/app/postgres/local/dmk/templates/postgres/efm.properties_witness /etc/edb/efm-3.7/efm.properties

Adjust the parameters as shown in the step for nodes 1 and 2, but be careful to have:

is.witness=true

Afterwards start the efm-3.7 service on node 3 as well.

$ sudo systemctl start efm-3.7.service

In the end, you can check if everything is running as expected using EFM.

$ efm cluster-status efm
Cluster Status: efm

        Agent Type  Address              Agent  DB       VIP
        -----------------------------------------------------------------------
        Standby     192.168.22.51        UP     UP       192.168.22.55
        Master      192.168.22.52        UP     UP       192.168.22.55*
        Witness     192.168.22.53        UP     N/A      192.168.22.55

Allowed node host list:
        192.168.22.52 192.168.22.51 192.168.22.53

Membership coordinator: 192.168.22.52

Standby priority host list:
        192.168.22.51

Promote Status:

        DB Type     Address              WAL Received LSN   WAL Replayed LSN   Info
        ---------------------------------------------------------------------------
        Master      192.168.22.52                           0/110007B8
        Standby     192.168.22.51        0/110007B8         0/110007B8

        Standby database(s) in sync with master. It is safe to promote.

That's it: now you have a Master/Replica system using EDB tools.
In a next step we will have a look at setting up cluster monitoring using EDB Enterprise Manager.

The article Setup a two node Postgres HA Cluster using EDB Advanced Server and EFM appeared first on Blog dbi services.

Oracle Flashback Data Archive

Tom Kyte - Sat, 2020-02-01 15:01
Hi All, We are using Oracle Flashback Data Archive in our database to track Audit in most of the critical tables. We initially planned to store data for 6months but there are few errors / issues we face making it unstable. So we have it as retentio...
Categories: DBA Blogs

Java procedure for host calls on Unix environment

Tom Kyte - Sat, 2020-02-01 15:01
Steve, I'm looking for a Java-procedure executing host calls on a Unix environment from the Oracle-server. I know the standard way of doing it by means of ProC (for Oracle versions before 8.1) but I do not have the ProC compiler available. I do...
Categories: DBA Blogs

Audit Vault and Database Firewall licensing

Tom Kyte - Sat, 2020-02-01 15:01
Dear Experts, Imagine that a partner as 8 Database EE Embedded licenses for end users and 4 Database EE Full licenses for their own internal application. Now, they want to implement only one Audit Vault appliance to collect and audit logs for all th...
Categories: DBA Blogs

Memory parameters - simple and auto tune

Tom Kyte - Sat, 2020-02-01 15:01
Hi Tom, We have 16 GB of memory on Windows server. Database is Oracle 12.2.01. 6.6 GB is using Oraclekernel exe. We have set: SGA_MAX_SIZE = 10016 M SGA_TARGET = 6016 M PGA_AGGREGATE_LIMIT = 6000 M PGA_AGGREGATE_TARGET = 2900 M I hop...
Categories: DBA Blogs

Calling executable from Scheduler

Tom Kyte - Sat, 2020-02-01 15:01
Hi Tom I have been trying to call an executable shell script placed on AIX 7.2 from Oracle 12.2 via following code: BEGIN DBMS_SCHEDULER.create_job ( job_name => 'GEN_DAILY_CNT_FILE', job_typ...
Categories: DBA Blogs

Data Safe private endpoints

Tom Kyte - Sat, 2020-02-01 15:01
Hi, customer is using Data Safe with DB System on OCI, but it wants to use DB System on private subnet. Now is possible to use Data Safe only with a DB System on a public subnet but this is not acceptable for customer security department. When will ...
Categories: DBA Blogs

How To Pass a List Object From C# to an Oracle Stored Procedure?

Tom Kyte - Sat, 2020-02-01 15:01
Dear Sirs, I know you're probably tired from shaving your yak, but I've been doing research on this topic for a few months now with very little luck. Is there a way you can pass a list object from C# over to a stored procedure? I was able to do s...
Categories: DBA Blogs

How to find last DDL / DML timestamp for IOT tables

Tom Kyte - Sat, 2020-02-01 15:01
Hi, One of my Customer is asking for last DDL/DML timestamp change for IOT tables. They have many tables and they need to drop some of them which is not used / modified / altered frequently. I checked internally for IOT and Normal tables using OR...
Categories: DBA Blogs

Unpivoting billion rows

Tom Kyte - Sat, 2020-02-01 15:01
Hi ask Tom team, Hope you people are doing great in 2020. I am working in migration team ,our goal is to take csv file from client and ultimately transform data as per our production table structure . Requirement :- Recenty we have rec...
Categories: DBA Blogs

PL/SQL Package with no DEFINER or INVOKER rights - Part 2

Pete Finnigan - Sat, 2020-02-01 12:01
I posted about a discovery I made whilst testing for an issue in our PL/SQL code analyser checks in PFCLScan last week as I discovered that the AUTHID column in DBA_PROCEDURES or ALL_PROCEDURES or USER_PROCEDURES can be NULL; this caused....[Read More]

Posted by Pete On 28/01/20 At 03:11 PM

Categories: Security Blogs

PL/SQL That is not DEFINER or INVOKER rights - BUG?

Pete Finnigan - Sat, 2020-02-01 12:01
Note: Part 2 - PL/SQL Package with no DEFINER or INVOKER rights - Part 2 is available that takes this investigation further I always understood that PL/SQL objects in the database that are not explicitly changed to INVOKER rights....[Read More]

Posted by Pete On 24/01/20 At 03:19 PM

Categories: Security Blogs

Oracle BPEL: What's up with it?

Dietrich Schroff - Fri, 2020-01-31 15:22
Nearly 20 years ago there was a new standard for modelling business processes: BPEL (https://en.wikipedia.org/wiki/Business_Process_Execution_Language)
Oracle bought Collaxa and released the Oracle BPEL Process Manager.
In 2009 I wrote some posts about tutorials, which were from Collaxa and which were very useful for getting started with the Oracle BPEL product:
http://dietrichschroff.blogspot.com/search?q=lost+collaxa

And what is left today?

The product is still listed on the Oracle homepage:

And there is a section with some "news":


But the version number 10.1.3.4 is not really something new, and if you open the white paper you will see:

I think a whitepaper from 2014 is not really something new. 

If you take a look at the BPEL specification at OASIS (http://docs.oasis-open.org/wsbpel/2.0/OS/wsbpel-v2.0-OS.html), you can see that there has been no update to the BPEL standard for more than 10 years.

It is very sad that this concept of modelling processes in a graphical way was unsuccessful...

Provision EBS R12 On Oracle Cloud From Cloud Marketplace (OCI)

Online Apps DBA - Fri, 2020-01-31 07:04

Provision EBS R12 On Oracle Cloud From Cloud Marketplace (OCI) Do you know that you can now provision EBS (R12) On Oracle Cloud (OCI) just by clicking a button? All thanks to the availability of the EBS (R12) image on Oracle Cloud Marketplace. Check out K21Academy’s blog post at https://k21academy.com/ebscloud37 that covers: •Different Ways To […]

The post Provision EBS R12 On Oracle Cloud From Cloud Marketplace (OCI) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Can AI-based Cybersecurity Render Human Error Extinct?

Oracle Press Releases - Thu, 2020-01-30 08:00
Blog
Can AI-based Cybersecurity Render Human Error Extinct?

By Greg Jensen, Senior Director of Cloud Security—Jan 30, 2020

Numbed by yet another year of data breaches–2,013 in 2019 alone by one estimate–it is no surprise researchers widely acknowledge human error as a top-source of cyber insecurity. C-Suite executives and policy makers called “human error” their top cybersecurity risk in Oracle’s Security in the Age of AI report, released last May.

Without change, the scale and frequency of data breaches will continue to increase. To reverse the trend we must better understand how exactly, people contribute to cyber-insecurity.

How human error creates problems

In December 2019, Oracle surveyed 300 enterprise-level IT decision makers nationwide to examine concerns around cybersecurity posture and potential solutions. The survey evaluated the state of cybersecurity preparedness and trust in employing AI to address security gaps at organizations with 500+ employees. We saw key red flags from patching protocols to workforce constraints that created a cycle of vulnerability and error.

The results demonstrate that the human contribution to IT vulnerability remains undeniable. Most respondents (58%) cited human error as a leading reason for IT vulnerabilities. Yet what is  often characterized as a “technical” factor–insufficient patching–was judged just as important as human error, with 57% of those surveyed citing it as a leading reason for vulnerability. But insufficient patching–meaning either that patches are not applied frequently enough, or they are not applied consistently enough–is a factor firmly rooted in human behavior.

The problem of human error may be even bigger than these statistics suggest. Digging deeper, the survey found that nearly half (49%) of respondents cited available workforce as a constraint on patching; 62% said they would apply a wider variety of patches more frequently if they had additional personnel to do so. When asked about major contributors to failed patches, 84% of respondents pointed to human error.

Not surprisingly, a separate question found people were less trusted to keep organization IT systems safe (16%) than were machines (31%).

A role for AI.

Enterprises will never be able to hire enough humans to fix problems created by human error. Existing methods that companies and agencies employ leave considerable room for error, with 25% of all security patches deployed by organizations failing to reach every device. When U.S.-based CEOs cite cybersecurity as the number one external concern for their business in 2019, the simple act of patching (and failing) is a huge unresolved risk. Why should humans have to fix this problem alone, when autonomous capabilities are available to improve IT security?

Some vendors may place blame on customers for incorrectly configuring computing resources, but Oracle understands customers deserve more, and should have the option to do less, which is why we use autonomous technologies to secure customer systems. With the cybersecurity workforce gap growing, deploying AI is the best way to supplement human abilities and protect systems more efficiently.

Oracle’s investment of $6 billion each year to address problems like automated patching and security configuration translates into real action taken to protect our customers. Oracle created the world’s first autonomous database, a defining technology for this next generation of AI-led software and cloud services. Our self-repairing, self-tuning, self-patching autonomous technology helps protect businesses and governments from threats wherever they originate.

By taking on this responsibility, Oracle empowers enterprises to redeploy the scarce human talent of their cybersecurity workforce to address new priorities, not basic security hygiene. Enterprises that adopt autonomous technologies broadly and quickly will help eliminate an entire category of cyber risk (patching)–and stop repeating the pattern of human-error induced insecurity in the future.

Oracle Unshackles Stores from the Cash Wrap with Flexible POS Systems

Oracle Press Releases - Thu, 2020-01-30 08:00
Press Release
Oracle Unshackles Stores from the Cash Wrap with Flexible POS Systems New point of service hardware and software help retailers deliver a more engaging, full-service shopping experience with ultimate flexibility, form and function

Redwood Shores, Calif.—Jan 30, 2020

With 52 percent of shoppers reporting their favorite way to shop is still in store, retailers must continually push the boundaries of in-store experiences. For shoppers, convenience reigns supreme—they want to get in and out with the goods they were seeking, which can only happen when inventory is accessible and smart, connected associates are ready to help. Enabling brands to deliver on this promise, Oracle Retail has continued to evolve its Xstore Point-of-Service (POS) and Oracle hardware systems.  

The new Workstation 655 and 625 models feature a sleek, contemporary design that is built with a tough, but minimal footprint. The Workstation 6 models come preinstalled with the ‘Oracle Linux for MICROS’ operating system or Windows 10 IoT Enterprise. This enables retailers such as McColl’s or Burberry, to concentrate on their core mission of delivering the customer experience most aligned with their brand, without sacrificing scale or long-term growth.

“McColl’s performs more than four million customer transactions per week through 2,700 POS terminals and we need fast and reliable store systems to support our customers and store colleagues,” said Neil Hodge, information technology director, McColl’s. “We chose Oracle Retail Xstore Point-of-Service and the Oracle hardware because it is an adaptable solution capable of supporting our growth as operational requirements change.”

Oracle’s platform for modern retail allows retailers to select the format that best fits their unique brand image and customer expectations. This may be a fully mobile store, discrete fixed units or a combination, which is increasingly common. The form factor is just as important as having an integrated hardware and software point-of-service (POS) solution that provides consistency across interfaces, features and functionality. This consistency ensures a seamless experience for both the consumer and the associate, helping reduce errors, increase speed and enhance the interaction with the inclusion of rich customer data.

For example, using the customer data in Xstore on their mobile tablet, a store associate can guide a shopper to the newest shoe from their favorite brand, check inventory in the customer’s size and then complete the transaction right there in the shoe department. A seamless, convenient experience for both the shopper and associate.

As retailers explore new store formats and seek agility in their experiences, Oracle offers a hardware footprint engineered from the ground up including industry-leading peripherals, operating system, application servers, and database, all designed to run and enhance the Oracle Retail Xstore Point-of-Service experience right out of the box.

“Our latest hardware delivers a modern look and feel, and a more streamlined and resilient user-friendly footprint, which supports full feature store applications,” said Jeff Warren, vice president of strategy for Oracle Retail.  “Couple this with our hardware service offering and the result is an agile approach that enables retailers of all sizes to pick and choose the POS that is most brand-appropriate to their image, without sacrificing software features and functionality.”

The Oracle Retail Hardware portfolio is fully integrated with the Oracle Retail Xstore Point-of-Service and supporting omnichannel suite. The cohesive solution can provide retailers a wide range of benefits including faster implementation, lower total cost of ownership (TCO), a consistent user experience, and investment protection that will accommodate future upgrades.

Contact Info
Kaitlin Ambrogio
Oracle
+1 781-434-9555
kaitlin.ambrogio@oracle.com
About Oracle Retail

Oracle is the modern platform for retail. Oracle provides retailers with a complete, open, and integrated platform for best-of-breed business applications, cloud services, and hardware that are engineered to work together. Leading fashion, grocery, and specialty retailers use Oracle solutions to accelerate from best practice to next practice, drive operational agility, and refine the customer experience. For more information, visit our website www.oracle.com/retail.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kaitlin Ambrogio

  • +1 781-434-9555

Tracking Logs Inside a Documentum Container (part II)

Yann Neuhaus - Wed, 2020-01-29 18:45
Testing the log watcher

This is part II of the article. Part I is here.
All the above code has to be included in the entrypoint script so it gets executed at container start-up time, but it can also be tested more simply in a traditional repository installation.
First, we'll move the code into an executable script, e.g. entrypoint.sh, and run it in the background in a first terminal. Soon, we will notice that lots of log messages get displayed, e.g. from jobs executing in the repository:

---------------------------------------
Job Arguments:
(StandardJobArgs: docbase_name: dmtest.dmtest user_name: dmadmin job_id: 0800c35080007042 method_trace_level: 0 )
window_interval=120
queueperson=null
max_job_threads=3
 
---------------------------------------
2020-01-02T07:03:15.807617 1600[1600] 0100c3508000ab80 [DM_SESSION_I_ASSUME_USER]info: "Session 0100c3508000ab80 is owned by user dmadmin now."
Executing query to collect XCP automatic tasks to be processed:
select r_object_id from dmi_workitem where a_wq_name = 'to be processed by job' and r_auto_method_id != '0000000000000000' and (r_runtime_state = 0 or r_runtime_state = 1) order by r_creation_date asc
Report End 2020/01/02 07:03:15
2020-01-02T07:03:16.314586 5888[5888] 0100c3508000ab90 [DM_SESSION_I_SESSION_QUIT]info: "Session 0100c3508000ab90 quit."
Thu Jan 02 07:04:44 2020 [INFORMATION] [LAUNCHER 6311] Detected during program initialization: Version: 16.4.0080.0129 Linux64
2020-01-02T07:04:44.736720 6341[6341] 0100c3508000ab91 [DM_SESSION_I_SESSION_START]info: "Session 0100c3508000ab91 started for user dmadmin."
Thu Jan 02 07:04:45 2020 [INFORMATION] [LAUNCHER 6311] Detected while preparing job dm_Initialize_WQ for execution: Agent Exec connected to server dmtest: [DM_SESSION_I_SESSION_START]info: "Session 0100c3508000ab91 started for user dmadmin."
 
 
2020-01-02T07:04:45.686337 1600[1600] 0100c3508000ab80 [DM_SESSION_I_ASSUME_USER]info: "Session 0100c3508000ab80 is owned by user dmadmin now."
2020-01-02T07:04:45.698970 1597[1597] 0100c3508000ab7f [DM_SESSION_I_ASSUME_USER]info: "Session 0100c3508000ab7f is owned by user dmadmin now."
Initialize_WQ Report For DocBase dmtest.dmtest As Of 2020/01/02 07:04:45
 
---------------------------------------
Job Arguments:
(StandardJobArgs: docbase_name: dmtest.dmtest user_name: dmadmin job_id: 0800c3508000218b method_trace_level: 0 )
window_interval=120
queueperson=null
max_job_threads=3
 
---------------------------------------
Starting WQInitialisation job:
2020-01-02T07:04:45.756339 1600[1600] 0100c3508000ab80 [DM_SESSION_I_ASSUME_USER]info: "Session 0100c3508000ab80 is owned by user dmadmin now."
Executing query to collect Unassigned worktiems to be processed:
select r_object_id, r_act_def_id from dmi_workitem where a_held_by = 'dm_system' and r_runtime_state = 0 order by r_creation_date
Total no. of workqueue tasks initialized 0
Report End 2020/01/02 07:04:45
2020-01-02T07:04:46.222728 6341[6341] 0100c3508000ab91 [DM_SESSION_I_SESSION_QUIT]info: "Session 0100c3508000ab91 quit."
Thu Jan 02 07:05:14 2020 [INFORMATION] [LAUNCHER 6522] Detected during program initialization: Version: 16.4.0080.0129 Linux64
2020-01-02T07:05:14.828073 6552[6552] 0100c3508000ab92 [DM_SESSION_I_SESSION_START]info: "Session 0100c3508000ab92 started for user dmadmin."
Thu Jan 02 07:05:15 2020 [INFORMATION] [LAUNCHER 6522] Detected while preparing job dm_bpm_XCPAutoTaskMgmt for execution: Agent Exec connected to server dmtest: [DM_SESSION_I_SESSION_START]info: "Session 0100c3508000ab92 started for user dmadmin."
 
 
2020-01-02T07:05:15.714803 1600[1600] 0100c3508000ab80 [DM_SESSION_I_ASSUME_USER]info: "Session 0100c3508000ab80 is owned by user dmadmin now."
2020-01-02T07:05:15.726601 1597[1597] 0100c3508000ab7f [DM_SESSION_I_ASSUME_USER]info: "Session 0100c3508000ab7f is owned by user dmadmin now."
bpm_XCPAutoTaskMgmt Report For DocBase dmtest.dmtest As Of 2020/01/02 07:05:15
 
---------------------------------------
 

Then, from a second terminal, we’ll start and stop several idql sessions and observe the resulting output. We will notice the familiar lines *_START and *_QUIT from the session’s logs:

---------------------------------------
2020-01-02T07:09:16.076945 1600[1600] 0100c3508000ab80 [DM_SESSION_I_ASSUME_USER]info: "Session 0100c3508000ab80 is owned by user dmadmin now."
Executing query to collect XCP automatic tasks to be processed:
select r_object_id from dmi_workitem where a_wq_name = 'to be processed by job' and r_auto_method_id != '0000000000000000' and (r_runtime_state = 0 or r_runtime_state = 1) order by r_creation_date asc
Report End 2020/01/02 07:09:16
2020-01-02T07:09:16.584776 7907[7907] 0100c3508000ab97 [DM_SESSION_I_SESSION_QUIT]info: "Session 0100c3508000ab97 quit."
2020-01-02T07:09:44.969770 8080[8080] 0100c3508000ab98 [DM_SESSION_I_SESSION_START]info: "Session 0100c3508000ab98 started for user dmadmin."
2020-01-02T07:09:47.329406 8080[8080] 0100c3508000ab98 [DM_SESSION_I_SESSION_QUIT]info: "Session 0100c3508000ab98 quit."
...

So, inotifywait is pretty effective as a file watcher.

Let's see how many tail processes are currently running:

$ psg tail | grep -v gawk
PPID PID PGID SID TTY TPGID STAT UID TIME COMMAND
1 4818 24411 9328 pts/1 8723 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/agentexec/agentexec.log
1 4846 24411 9328 pts/1 8723 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000ab80
1 4850 24411 9328 pts/1 8723 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000ab7f
1 8375 24411 9328 pts/1 8723 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/agentexec/job_0800c3508000218b
1 8389 24411 9328 pts/1 8723 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000ab99
1 8407 24411 9328 pts/1 8723 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/sysadmin/Initialize_WQDoc.txt
1 8599 24411 9328 pts/1 8723 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/agentexec/job_0800c35080000386
1 8614 24411 9328 pts/1 8723 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000ab9a
1 8657 24411 9328 pts/1 8723 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000ab9b
1 8673 24411 9328 pts/1 8723 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/sysadmin/DataDictionaryPublisherDoc.txt

And after a while:

$ psg tail | grep -v gawk
PPID PID PGID SID TTY TPGID STAT UID TIME COMMAND
1 4818 24411 9328 pts/1 9132 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/agentexec/agentexec.log
1 4846 24411 9328 pts/1 9132 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000ab80
1 4850 24411 9328 pts/1 9132 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000ab7f
1 8599 24411 9328 pts/1 9132 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/agentexec/job_0800c35080000386
1 8614 24411 9328 pts/1 9132 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000ab9a
1 8657 24411 9328 pts/1 9132 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000ab9b
1 8673 24411 9328 pts/1 9132 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/sysadmin/DataDictionaryPublisherDoc.txt
1 8824 24411 9328 pts/1 9132 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/agentexec/job_0800c35080007042
1 8834 24411 9328 pts/1 9132 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000ab9c
1 8852 24411 9328 pts/1 9132 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/sysadmin/bpm_XCPAutoTaskMgmtDoc.txt

Again:

$ psg tail | grep -v gawk
PPID PID PGID SID TTY TPGID STAT UID TIME COMMAND
1 10058 24411 9328 pts/1 10252 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/agentexec/agentexec.log
1 10078 24411 9328 pts/1 10252 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/agentexec/job_0800c3508000218b
1 10111 24411 9328 pts/1 10252 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000ab9f
1 10131 24411 9328 pts/1 10252 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000ab80
1 10135 24411 9328 pts/1 10252 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000ab7f
1 10139 24411 9328 pts/1 10252 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/sysadmin/Initialize_WQDoc.txt

And again:

$ psg tail | grep -v gawk
PPID PID PGID SID TTY TPGID STAT UID TIME COMMAND
1 10892 24411 9328 pts/1 11022 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/agentexec/agentexec.log
1 10896 24411 9328 pts/1 11022 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/agentexec/job_0800c3508000218b
1 10907 24411 9328 pts/1 11022 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000aba1
1 10921 24411 9328 pts/1 11022 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000ab80
1 10925 24411 9328 pts/1 11022 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000ab7f
1 10930 24411 9328 pts/1 11022 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/sysadmin/Initialize_WQDoc.txt

And, after disabling the aggregation of new logs:

$ echo 0 > $tail_on_off
$ psg tail | grep -v gawk
PPID PID PGID SID TTY TPGID STAT UID TIME COMMAND
1 24676 24277 9328 pts/1 26096 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/agentexec/agentexec.log
1 24710 24277 9328 pts/1 26096 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000ab80
1 24714 24277 9328 pts/1 26096 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000ab7f

And eventually:

$ psg tail | grep -v gawk
PPID PID PGID SID TTY TPGID STAT UID TIME COMMAND
<nothing>

From now on, until the aggregation is turned back on, this list will be empty as no new tail commands will be started.
The number of tail commands grows and shrinks as expected, so the process clean up is working.
The agentexec.log file is often present because jobs are started frequently and therefore it regularly triggers the MODIFY event. dmadmin- and sysadmin-owned session logs come and go for each started job. The DataDictionaryPublisherDoc.txt, bpm_XCPAutoTaskMgmtDoc.txt and Initialize_WQDoc.txt files are a few of the followed jobs' logs.
In the test system used, an out-of-the-box docbase without any running applications, the output is obviously very quiet, with long pauses in between, but it can become extremely dense on a busy system. When investigating problems in such systems, it can be useful to redirect the whole output to a text file and search it at leisure with one's favorite editor. It is also possible to restrict it to as narrow a time window as desired (if docker logs is used) in order to reduce its noise, to exclude files from being watched (see the next paragraph) and even to stop aggregating new logs via the $tail_on_off file so that only the current ones are followed as long as they are active (i.e. written to).
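
When needed, the aggregation of new logs can be resumed the same way it was paused:

$ echo 1 > $tail_on_off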

Dynamically reconfiguring inotifywait via a parameter file

In the preceding examples, the inotifywait command has been passed a fixed, hard-coded subtree name to watch. In certain cases, it could be useful to be able to watch a different, arbitrary sub-directory or a list of arbitrary files in unrelated sub-directories, with a possible list of excluded files; for even more flexibility, the other inotifywait parameters could also be changed dynamically. In such cases, the running inotifywait command will have to be stopped and restarted to change its command-line parameters. One could imagine checking a communication file (like the $tail_on_off file above) inside the entrypoint's main loop, as shown below. Here the diff is relative to the preceding static parameters version:

10a11,13
> # file that contains the watcher's parameters;
> export new_inotify_parameters_file=${watcher_workdir}/inotify_parameters_file
> 
15a19,20
>    inotify_params="$1"
>  
19c24
    eval $private_inotify ${inotify_params} $watcher_workdir | gawk -v tail_timeout=$((tail_timeout * 60)) -v excluded_files="$excluded_files" -v bMust_tail=1 -v tail_on_off=$tail_on_off -v heartbeat_file=$heartbeat_file -v env_private_tail=private_tail -v FS="|" -v Apo="'" 'BEGIN {
122a128,142
> process_new_inotify_parameters() {
>    # read in the new parameters from file $new_inotify_parameters_file;
>    # first line of the $new_inotify_parameters_file must contains the non-target parameters, e.g. -m -e create,modify,attrib -r --timefmt "%Y/%m/%d-%H:%M:%S" --format "%T %e %w%f";
>    new_params=$(cat $new_inotify_parameters_file)
>    rm $new_inotify_parameters_file
>  
>    # kill current watcher;
>    pkill -f $private_inotify
> 
>    # kill the current private tail commands too;
>    pkill -f $private_tail
> 
>    # restart inotify with new command-line parameters taken from $new_inotify_parameters_file;
>    follow_logs "$new_params" &
> }
138,139c158,162
< # start the watcher;
 # default inotifywait parameters;
> # the creation of this file will trigger a restart of the watcher using the new parameters;
> cat - <<-eot > $new_inotify_parameters_file
>    --quiet --monitor --event create,modify,attrib --recursive --timefmt "%Y/%m/%d-%H:%M:%S" --format '%T|%e|%w%f' \${DOCUMENTUM}/dba/log --exclude \$new_inotify_parameters_file
> eot
143a167
>    [[ -f $new_inotify_parameters_file ]] && process_new_inotify_parameters

To apply the patch, save the first listing into a file, e.g. inotify_static_parameters.sh, the above diff output into a file, e.g. static2dynamic.diff, and use the command below to create the script inotify_dynamic_parameters.sh:

patch inotify_static_parameters.sh static2dynamic.diff -o inotify_dynamic_parameters.sh

In order not to trigger events when it is created, the $new_inotify_parameters_file must be excluded from inotifywait’s watched directories.
If, for instance, we later want to exclude jobs' logs in order to focus on more relevant logs, we could use the following inotifywait parameters:

# cat - <<-eot > $new_inotify_parameters_file
   --quiet --monitor --event create,modify,attrib --recursive --timefmt "%Y/%m/%d-%H:%M:%S" --format "%T|%e|%w%f" \${DOCUMENTUM}/dba/log --exclude '${DOCUMENTUM}/dba/log/dmtest/agentexec/job_.*' --exclude $new_inotify_parameters_file
eot

From the outside of a container:

docker exec <container> /bin/bash -c 'cat - <<-eot > $new_inotify_parameters_file
   --quiet --monitor --event create,modify,attrib --recursive --timefmt "%Y/%m/%d-%H:%M:%S" --format "%T|%e|%w%f" \${DOCUMENTUM}/dba/log --exclude "\${DOCUMENTUM}/dba/log/dmtest/agentexec/job_.*" --exclude \$new_inotify_parameters_file
eot
'

where the new --exclude option specifies a POSIX-compliant regular expression; it is quoted so the shell does not perform its variable expansions.
After at most $pause_duration seconds, this file is detected (line 34), read (line 14), deleted (line 15) and the current watcher gets restarted in the background with the new parameters (line 24).
This approach is akin to polling the parameter file and it looks a bit unsophisticated. We use events to control the tailing through the $tail_on_off file and for the heartbeat, so why not for the parameter file? The next paragraph will show just that.

Dynamically reconfiguring inotifywait via an event

A better way would be to use the watcher itself to check whether the parameter file has changed! After all, we shall put our money where our mouth is, right?
In order to do this, a dedicated watcher is set up to watch the parameter file. For technical reasons (in order to watch a file, that file must already exist; also, when the watched file is erased, inotifywait no longer reacts to any events on that file, it just sits there idly), instead of watching $new_inotify_parameters_file, it watches its parent directory. To avoid scattering files in too many locations, the parameter file will be stored along with the technical files in ${watcher_workdir} and, in order to avoid impacting the performance, it will be excluded from the main watcher (parameter --exclude \$new_inotify_parameters_file; do not forget to append it, otherwise the main watcher will raise events for nothing; it should not prevent the parameter file from being processed, though).
When a MODIFY event occurs on this file, which includes a CREATE event if the file did not pre-exist, the event is processed as shown previously.
Here is the diff compared to the preceding polled parameter file version:

143a144,151
> # the parameter file's watcher;
> param_watcher() {
>    IFS="|"
>    inotifywait --quiet --monitor --event create,modify,attrib --timefmt "%Y/%m/%d-%H:%M:%S" --format '%T|%e|%w%f' $watcher_workdir --exclude \$tail_on_off \$heartbeat_file | while read timestamp event target_file; do
>       [[ $target_file == $new_inotify_parameters_file ]] && process_new_inotify_parameters
>    done
> }
> 
162a171,173
> 
> # starts the parameter file's watcher;
> param_watcher &

The delta is quite short with just the function param_watcher() to start the watcher on the parameter file and process its signal by invoking the same function process_new_inotify_parameters() introduced in the polling version.
To apply the patch, save the above diff output into a file, e.g. dynamic2event.diff, and use the command below to create the script inotify_dynamic_parameters_event.sh from the polling version’s script inotify_dynamic_parameters.sh created above:

patch inotify_dynamic_parameters.sh dynamic2event.diff -o inotify_dynamic_parameters_event.sh

The dedicated watcher runs in the background and restarts the main watcher whenever the latter receives new parameters. Quite simple.

Dynamically reconfiguring inotifywait via signals

Yet another way to interact with inotifywait's script (e.g. the container's entrypoint) could be through signals. To this effect, we'll first need to choose suitable signals, say SIGUSR[12], define a trap handler and implement it. And while we are at it, let's also add switching the watcher on and off, as illustrated below:

2a3,5
> trap 'bounce_watcher' SIGUSR1
> trap 'toggle_watcher' SIGUSR2
> 
7a11,14
> # new global variable to hold the current tail on/off status;
> # used to toggle the watcher;
> export bMust_tail=1
> 
144,149c151,162
< # the parameter file's watcher;
< param_watcher() {
<    IFS="|"
<    inotifywait --quiet --monitor --event create,modify,attrib --timefmt "%Y/%m/%d-%H:%M:%S" --format '%T|%e|%w%f' $watcher_workdir --exclude \$tail_on_off \$heartbeat_file | while read timestamp event target_file; do
<       [[ $target_file == $new_inotify_parameters_file ]] && process_new_inotify_parameters
 # restart the watcher with new parameters;
> bounce_watcher() {
>    [[ -f $new_inotify_parameters_file ]] && process_new_inotify_parameters
>    echo "parameters reloaded"
> }
> 
> # flip the watcher on/off;
> # as file $tail_on_off is watched, modifying it will trigger the processing of the boolean bMust_tail in the gawk script;
> toggle_watcher() {
>    (( bMust_tail = (bMust_tail + 1) % 2 ))
>    echo $bMust_tail > $tail_on_off
>    echo "toggled the watcher, it is $bMust_tail now"
172,173c185,187
< # starts the parameter file's watcher;
 process_new_inotify_parameters
> 
> echo "process $$ started"
178d191
<    [[ -f $new_inotify_parameters_file ]] && process_new_inotify_parameters

To apply the patch, save the above diff output into a file, e.g. event2signals.diff, and use the command below to create the script inotify_dynamic_parameters_signals.sh from the event version’s script inotify_dynamic_parameters_event.sh created above:

patch inotify_dynamic_parameters_event.sh event2signals.diff -o inotify_dynamic_parameters_signals.sh

The loop is simpler now as the functions are directly invoked by outside signals.
To use it, just send the SIGUSR[12] signals to the container, as shown below:

# write a new configuration file:
...
# and restart the watcher:
$ docker kill --signal SIGUSR1 <container>
 
# toggle the watcher;
$ docker kill --signal SIGUSR2 <container>

If testing outside a container, the commands would be:

/bin/kill --signal SIGUSR1 pid
# or:
/bin/kill --signal SIGUSR2 pid

where pid is the pid displayed when the script starts.
We won't dwell on signals any longer; see How to stop Documentum processes in a docker container, and more (part I) for many more examples of using signals to talk to containers.

Conclusion

inotifywait is simple to configure and extremely fast, practically instantaneous, at sending notifications. Although the aggregated output looks a bit confusing when the system is loaded, there are several ways to reduce its volume and make it easier to use. It is an interesting addition for an administrator to help troubleshoot repositories and their client applications. With suitable parameters, it can be used to support any containerized or traditional installation. It is one of those no-frills tools that just work (mostly) as expected. If you ever happen to need a file watcher, consider it: you won't be disappointed and it is fun to use.

The article Tracking Logs Inside a Documentum Container (part II) appeared first on Blog dbi services.

Tracking Logs Inside a Documentum Container (part I)

Yann Neuhaus - Wed, 2020-01-29 18:44

Containers running under docker can have their stdout observed from the outside through the “docker logs” command; here is an excerpt of its usage:

docker logs --help
Usage:	docker logs [OPTIONS] CONTAINER

Fetch the logs of a container

Options:
      --details        Show extra details provided to logs
  -f, --follow         Follow log output
      --since string   Show logs since timestamp (e.g. 2013-01-02T13:23:37) or relative (e.g. 42m
                       for 42 minutes)
      --tail string    Number of lines to show from the end of the logs (default "all")
  -t, --timestamps     Show timestamps
      --until string   Show logs before a timestamp (e.g. 2013-01-02T13:23:37) or relative (e.g.
                       42m for 42 minutes)

Example of output:

docker logs --follow --timestamps container05bis
...
2019-07-10T03:50:38.624862914Z ==> /app/dctm/dba/log/docbroker.container05bis.1489.log <==
2019-07-10T03:50:38.624888183Z OpenText Documentum Connection Broker (version 16.4.0000.0248 Linux64)
2019-07-10T03:50:38.624893936Z Copyright (c) 2018. OpenText Corporation
2019-07-10T03:50:38.624898034Z 2019-07-10T05:50:38.519721 [DM_DOCBROKER_I_START]info: "Docbroker has started. Process id: 35"
2019-07-10T03:50:38.624902047Z 2019-07-10T05:50:38.521502 [DM_DOCBROKER_I_REGISTERED_PORT]info: "The Docbroker registered using port (1489)."
2019-07-10T03:50:38.624906087Z 2019-07-10T05:50:38.521544 [DM_DOCBROKER_I_LISTENING]info: "The Docbroker is listening on network address: (INET_ADDR: family: 2, port: 1489, host: container05bis (192.168.33.7, 0721a8c0))"
2019-07-10T03:50:38.624911984Z
2019-07-10T03:50:38.624915369Z ==> /app/dctm/dba/log/dmtest05bis.log <==
2019-07-10T03:50:38.625040316Z
2019-07-10T03:50:38.625050474Z ==> /app/dctm/dba/log/dmtest05bis/agentexec/agentexec.log <==
2019-07-10T03:50:38.625055299Z Wed Jul 10 03:33:48 2019 [INFORMATION] [LAUNCHER 4251] Detected during program initialization: Version: 16.4.0000.0248 Linux64
2019-07-10T03:50:38.625059065Z Wed Jul 10 03:34:18 2019 [INFORMATION] [LAUNCHER 4442] Detected during program initialization: Version: 16.4.0000.0248 Linux64
2019-07-10T03:50:38.625071866Z Wed Jul 10 03:34:48 2019 [INFORMATION] [LAUNCHER 4504] Detected during program initialization: Version: 16.4.0000.0248 Linux64
2019-07-10T03:50:38.625075268Z Wed Jul 10 03:36:18 2019 [INFORMATION] [LAUNCHER 4891] Detected during program initialization: Version: 16.4.0000.0248 Linux64
2019-07-10T03:50:38.625078688Z Wed Jul 10 03:36:49 2019 [INFORMATION] [LAUNCHER 4971] Detected during program initialization: Version: 16.4.0000.0248 Linux64
2019-07-10T03:50:38.625082182Z Wed Jul 10 03:48:18 2019 [INFORMATION] [LAUNCHER 6916] Detected during program initialization: Version: 16.4.0000.0248 Linux64
2019-07-10T03:50:38.625096886Z
2019-07-10T03:50:38.625101275Z ==> /app/dctm/wildfly9.0.1/server/DctmServer_MethodServer/logs/ServerApps.log <==
2019-07-10T03:50:38.625105098Z at io.undertow.servlet.core.DeploymentManagerImpl.start(DeploymentManagerImpl.java:511)
2019-07-10T03:50:38.625108575Z at org.wildfly.extension.undertow.deployment.UndertowDeploymentService.startContext(UndertowDeploymentService.java:101)
2019-07-10T03:50:38.625112342Z at org.wildfly.extension.undertow.deployment.UndertowDeploymentService$1.run(UndertowDeploymentService.java:82)
2019-07-10T03:50:38.625116110Z at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
2019-07-10T03:50:38.625120084Z at java.util.concurrent.FutureTask.run(FutureTask.java:266)
2019-07-10T03:50:38.625123672Z at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
2019-07-10T03:50:38.625127341Z at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
2019-07-10T03:50:38.625131122Z at java.lang.Thread.run(Thread.java:748)
2019-07-10T03:50:38.625134828Z at org.jboss.threads.JBossThread.run(JBossThread.java:320)
2019-07-10T03:50:38.625139133Z 05:34:58,050 INFO [ServerService Thread Pool -- 96] com.documentum.cs.otds.DfSessionHandler - DFC Client Successfully initialized
2019-07-10T03:50:38.625143291Z
2019-07-10T03:50:38.625146939Z ==> /app/dctm/wildfly9.0.1/server/DctmServer_MethodServer/logs/ServerApps_trace.log <==
2019-07-10T03:50:38.625150991Z
2019-07-10T03:50:38.625154528Z ==> /app/dctm/wildfly9.0.1/server/DctmServer_MethodServer/logs/dmotdsrest.log <==
2019-07-10T03:50:38.625159563Z 03:33:16,505 [ServerService Thread Pool -- 66] DFC Client Successfully initialized
2019-07-10T03:50:38.625163445Z 05:34:58,050 [ServerService Thread Pool -- 96] DFC Client Successfully initialized
2019-07-10T03:50:38.625955045Z Setting up watches. Beware: since -r was given, this may take a while!
2019-07-10T03:50:38.627044196Z Watches established.
2019-07-10T03:50:38.934648542Z
2019-07-10T03:50:38.934668467Z ==> /app/dctm/dba/log/dmtest05bis.log <==
2019-07-10T03:50:38.934673076Z Wed Jul 10 05:50:38 2019[DM_STARTUP_W_DOCBASE_OWNER_NOT_FOUND] *** warning *** : The database user, dmtest05bisc as specified by your server.ini is not a valid user as determined using the system password check api. This will likely severly impair the operation of your docbase.
2019-07-10T03:50:39.001793811Z
2019-07-10T03:50:39.001816266Z
2019-07-10T03:50:39.001821414Z OpenText Documentum Content Server (version 16.4.0000.0248 Linux64.Oracle)
2019-07-10T03:50:39.001825146Z Copyright (c) 2018. OpenText Corporation
2019-07-10T03:50:39.001828983Z All rights reserved.
2019-07-10T03:50:39.001832816Z
2019-07-10T03:50:39.005318448Z 2019-07-10T05:50:39.005068 193[193] 0000000000000000 [DM_SERVER_I_START_SERVER]info: "Docbase dmtest05bis attempting to open"
2019-07-10T03:50:39.005337158Z
2019-07-10T03:50:39.005342472Z 2019-07-10T05:50:39.005172 193[193] 0000000000000000 [DM_SERVER_I_START_KEY_STORAGE_MODE]info: "Docbase dmtest05bis is using database for cryptographic key storage"
...

If the output is too voluminous, the above command can be narrowed to a time window by adding --since and --until restrictions, e.g.:

docker logs --timestamps --follow --since 5m <container>
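
Both bounds can also be combined; here is a hedged sketch where the container name and the timestamps are placeholders (--since and --until accept either relative durations or RFC3339 timestamps):

docker logs --timestamps --since 2019-07-10T03:00:00 --until 2019-07-10T04:00:00 <container>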

Thus, everything sent to the container’s stdout can be aggregated and viewed very simply from one place, yielding a cheap console log. In particular, Documentum containers could expose their well-known log files to the outside world, e.g. the docbroker log, the content server(s) log(s) and the method server logs. To that effect, it is enough to start a “tail -F” on those files from within the entrypoint, as illustrated below:

tail -F ${DOCUMENTUM}/dba/log/docbroker.log ${DOCUMENTUM}/dba/log/dmtest.log ${DOCUMENTUM}/wildfly9.0.1/server/DctmServer_MethodServer/logs/ServerApps.log ...

The -F option guarantees that the logs continue being followed even after a possible rotation.
Admittedly, this output can be a bit hard to read because the logs are interlaced, i.e. lines or blocks of lines from different logs are displayed sorted by the time they were produced and not by their origin. Actually, this is a benefit because it makes it easier to find correlations between distinct, apparently unrelated events.
Viewing a particular log individually is still possible from outside the container, e.g.:

docker exec <container_name> /bin/bash -c "tail -f \${DOCUMENTUM}/dba/log/dmtest05bis.log"

provided the tail command exists in the container, which is not a given since there is a strong tendency to make images as stripped down as possible.
As those files are statically known (i.e. at build time), such a command could be defined as early as the buildfile and invoked in its entrypoint script.
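As a minimal sketch (the startup placeholder, the keep-alive loop and the exact log file names are illustrative assumptions, not the actual entrypoint), the end of such an entrypoint script could look like this:

#!/bin/bash
# entrypoint.sh - illustrative sketch only;
# ... start the docbroker, the repository and the method server here ...

# follow the statically known logs and merge them into the container's stdout;
tail -F ${DOCUMENTUM}/dba/log/docbroker.log \
        ${DOCUMENTUM}/dba/log/dmtest.log \
        ${DOCUMENTUM}/wildfly9.0.1/server/DctmServer_MethodServer/logs/ServerApps.log &

# keep the container alive;
while true; do sleep 60; done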
Unfortunately, the content server logs are not very verbose and the most useful messages are directed to session or ad hoc logs. The session logs are dynamically created for each new session with the session id as their name, which makes them unpredictable. Since those names are only known at run-time, the above buildfile’s “tail -F” command cannot include them and consequently they are not displayed by the “docker logs” command. The same applies to on-demand trace files with variable names, e.g. with a timestamp suffix.
So, is there a way to follow those dynamic session logs (or any dynamically named files, for that matter) anyway? An obvious way is to use a file or directory watcher to be notified of any created or modified file. If a watcher process running inside the container could wait for such conditions and signal any occurrence thereof, a listener process, also running inside the container, could receive the notification and dynamically fork a tail -F command to follow that file. Externally, “docker logs” would continue displaying whatever is sent to the container’s stdout, including the content of the newly discovered files.
Under Linux, we can use inotifywait as the file watcher. Let’s see how to set it up.

Installing inotifywait

Under a Debian Linux derivative such as Ubuntu, inotifywait can be easily installed from the inotify-tools package through the usual command:

sudo apt-get install inotify-tools

Under a Red Hat Linux derivative such as CentOS, a two-step method is to first grab the rpm package from its on-line repository and then install it; the latest release as of this writing is 3.14-9:

  curl https://download-ib01.fedoraproject.org/pub/epel/7/x86_64/Packages/i/inotify-tools-3.14-9.el7.x86_64.rpm -o inotify-tools-3.14-9.el7.x86_64.rpm
  sudo rpm -Uvh inotify-tools-3.14-9.el7.x86_64.rpm 
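
Alternatively, on machines where the EPEL repository is available, the package manager can resolve the dependency directly; a one-liner sketch, assuming yum and EPEL access:

sudo yum install -y epel-release && sudo yum install -y inotify-tools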

In order to verify that the installation was successful, just try to launch “inotifywait”:

$ inotifywait
No files specified to watch!

Good, the command exists; let’s now test it.

Testing inotifywait

The command inotifywait has the following invocation syntax:

inotifywait [-hcmrq] [-e <event>] [-t <seconds>] [--format <fmt>] [--timefmt <fmt>] <file> [ ... ]

The man page explains each of the parameters very well, so please refer to it for details.
As the whole point is to detect new or changed files whose names we don’t know at image build time, but whose location is known (e.g. a path such as ${DOCUMENTUM}/dba/log), we will be watching directories. In such a case, one parameter will be a list of directories to recursively watch for any new or modified files.
The command was designed to output to stdout any event whose occurrence it was configured to wait for.
The default output format is the following:

watched_filename EVENT_NAMES event_filename

one line per event.
EVENT_NAMES is the event that occurred, in our case mainly one of CREATE or MODIFY as requested through the -e command-line parameter.
As we watch directories, watched_filename is the name of the watched directory where the event occurred and event_filename is the created/modified file.
Here is an example of use as a test with custom formatting for a better presentation:

$ inotifywait --quiet --monitor --event create,modify,attrib --recursive --timefmt "%Y/%m/%d-%H:%M:%S" --format "%T %e %w%f" /tmp &

The command will recursively (option --recursive) watch the /tmp directory for any created or modified file (option --event …) or for a change of file attributes such as the timestamp or permissions. Thanks to the option --monitor it keeps running instead of exiting after the first event, and the trailing & sends it to the background; the events are written to stdout prefixed with a properly formatted time stamp (options --timefmt and --format).
With inotifywait running in the background, now create a dummy file in /tmp:

$ touch /tmp/xx
# the following lines get displayed:
2019/12/31-17:43:45 CREATE /tmp/xx
2019/12/31-17:43:45 ATTRIB /tmp/xx

Actually, the /tmp directory is a busy directory and a lot of file activity occurs in there very quickly:

2019/12/31-17:43:52 CREATE /tmp/hsperfdata_dmadmin/471
2019/12/31-17:43:52 MODIFY /tmp/hsperfdata_dmadmin/471
2019/12/31-17:43:52 MODIFY /tmp/hsperfdata_dmadmin/471
2019/12/31-17:44:06 MODIFY /tmp/nohup.log
2019/12/31-17:44:06 MODIFY /tmp/nohup.log
2019/12/31-17:44:06 MODIFY /tmp/nohup.log
2019/12/31-17:45:06 MODIFY /tmp/nohup.log
2019/12/31-17:45:06 MODIFY /tmp/nohup.log
2019/12/31-17:45:06 MODIFY /tmp/nohup.log
2019/12/31-17:45:22 CREATE /tmp/hsperfdata_dmadmin/510
2019/12/31-17:45:22 MODIFY /tmp/hsperfdata_dmadmin/510
2019/12/31-17:45:22 MODIFY /tmp/hsperfdata_dmadmin/510
2019/12/31-17:45:50 CREATE /tmp/runc-process785366568
2019/12/31-17:45:50 MODIFY /tmp/runc-process785366568
...

Let’s see if the directory is really watched recursively:

$ mkdir -p /tmp/dir1/dir2
> 2019/12/31-17:49:51 CREATE,ISDIR /tmp/dir1
> 2019/12/31-17:49:51 CREATE,ISDIR /tmp/dir1/dir2

We notice that the directory creations are caught too.

$ touch /tmp/dir1/dir2/xx
> 2019/12/31-17:49:59 CREATE /tmp/dir1/dir2/xx
> 2019/12/31-17:49:59 ATTRIB /tmp/dir1/dir2/xx

It works as advertised. Moreover, two events are raised here, CREATE for the file creation as it didn’t previously exist, and ATTRIB for the change of timestamp. Let’s verify this:

$ touch /tmp/dir1/dir2/xx
> 2019/12/31-17:50:01 ATTRIB /tmp/dir1/dir2/xx
$ touch /tmp/dir1/dir2/xx
> 2019/12/31-17:50:03 ATTRIB /tmp/dir1/dir2/xx

Indeed.
Let’s see if a change of attribute is also noticed:

$ chmod -r /tmp/dir1/dir2/xx
2019/12/31-18:03:50 ATTRIB /tmp/dir1/dir2/xx
$ chmod +r /tmp/dir1/dir2/xx
2019/12/31-18:03:55 ATTRIB /tmp/dir1/dir2/xx

It is; good.

Using inotify in a [containerized] Documentum installation

inotifywait will be used with the syntax shown above against the Documentum log directory ${DOCUMENTUM}/dba/log. Its output will be piped into a gawk script that will spawn “tail -F” commands when needed and keep a list of such processes so they can be killed after a timeout of inactivity, i.e. when they are following a file that did not get updated within a given time interval (let’s jokingly name this value “tail time to live”, or TTTL). This is to prevent a potentially unlimited number of such processes from hogging the system’s resources; not that they consume many CPU cycles, but it is pointless to leave hundreds of idle processes sitting in memory. So the script will start a tail command on a file upon its CREATE event, or upon a MODIFY event if the file is not already followed, and clean up existing idle tail commands. As said before, the code below could be included in a container’s entrypoint to make it run constantly in the background.

#!/bin/bash

export watcher_workdir=/tmp/watcher
export tail_on_off=${watcher_workdir}/tail_on_off
export heartbeat_file=${watcher_workdir}/heartbeat_file
export pause_duration=3

# comma-separated list of files watched but excluded from tailing;
export excluded_files="$tail_on_off,$heartbeat_file"

# make private copies of inotifywait and tail so they can be easily identified and killed from the list of running processes;
export private_inotify=~/dctm-inotifywait
export private_tail=~/dctm-tail

follow_logs() {
   # tail time to live, maximum duration in minutes an unmodified tailed file will stay tailed before the tail is killed, i.e. TTTL ;-);
   tail_timeout=1
 
   $private_inotify --quiet --monitor --event create,modify,attrib --recursive --timefmt "%Y/%m/%d-%H:%M:%S" --format "%T|%e|%w%f" ${DOCUMENTUM}/dba/log $watcher_workdir | gawk -v tail_timeout=$((tail_timeout * 60)) -v excluded_files="$excluded_files" -v tail_on_off=$tail_on_off -v heartbeat_file=$heartbeat_file -v env_private_tail=private_tail -v FS="|" -v Apo="'" 'BEGIN {
      # get the dedicated tail command from the environment;
      private_tail=ENVIRON[env_private_tail]

      # get current time;
      cmd_now = "date +\"%Y/%m/%d-%H:%M:%S\""
 
      # get the time of the next check, i.e. now + the timeout delay;
      cmd_future = "date +\"%Y/%m/%d-%H:%M:%S\" --date=\"" tail_timeout  " seconds\""
      cmd_future | getline next_check_date; close(cmd_future)
 
      # find the running private tail commands;
      # use the same FS defined in the outer gawk scope to guarantee that $0 can be split smoothly;
      cmd_tail = "pgrep -fa " private_tail " | gawk " Apo "{print $1 \"" FS "\" $NF}" Apo
 
      # get the most recent time a running idle tail command is allowed to live;
      cmd_oldest = "date +\"%Y/%m/%d-%H:%M:%S\" --date=\"" tail_timeout  " seconds ago\""
 
      # command to get the modification date of a tailed file;
      cmd_file_stamp = "date +\"%Y/%m/%d-%H:%M:%S\" --reference="
 
      # files can be excluded from tailing if specified in the parameter excluded_files;
      # convert the comma-separated list of excluded files to an associative array, which is easier to search as it is indexed by file names;
      nb_files = split(excluded_files, tmp_excluded_files, ",")
      for (;nb_files > 0; nb_files--)
         tab_excluded_files[tmp_excluded_files[nb_files]] = 0

      bMust_tail = 1
   }
   {
      # skip directories (they have a trailing /);
      if (match($3, /\/$/)) next
 
      # check if logs must be tailed;
      if (tail_on_off == $3 && ("CREATE" == $2 || "MODIFY" == $2)) {
         if ((getline bMust_tail < tail_on_off) >  0) {
            close(tail_on_off)
            system("rm " tail_on_off " 2>/dev/null")
         }
      } else 
         # tailing on ?
         if (bMust_tail && !($3 in current_tails))
            # CREATE event ?
            if ("CREATE" == $2 && !($3 in tab_excluded_files)) {
               # newly created file not in exclusion list: tail it !
               system(private_tail " --follow=name --retry " $3 " 2>/dev/null &")
               # ... and keep track of it;
               current_tails[$3] = 0
            }
            # MODIFY event ?
            else if ("MODIFY" == $2 && !($3 in tab_excluded_files)) {
               # modified file not in exclusion list nor already tailed: tail it !
               system(private_tail " --follow=name --retry " $3 " 2>/dev/null &")
               # ... and keep track of it;
               current_tails[$3] = 0
            }
 
      # clean up logic starts here;
      # only upon a heartbeat event;
      if ("ATTRIB" == $2 && $3 == heartbeat_file) {
         # time to check ?
         cmd_now | getline now; close(cmd_now)
         if (now >= next_check_date) {
            # maximum time to live for idle tails;
            cmd_oldest | getline oldest_date; close(cmd_oldest)
            # loop though all the running tail commands;
            while ((cmd_tail | getline) > 0) {
               # cannot explain this spurious entry; ignore it explicitly even though it would be skipped anyway;
               if ("$NF}" Apo == $2) continue
               tail_pid = $1
               tailed_file = $2
               # is it one of the watched files ?
               if (tailed_file in current_tails) {
                  # get the current tailed file last modification date;
                  (cmd_file_stamp tailed_file " 2>/dev/null") | getline last_modif_date; close(cmd_file_stamp tailed_file " 2>/dev/null")
                  # tailed file not updated during time to live period ?
                  if (last_modif_date <= oldest_date) {
                     # yes, kill the tailing process;
                     system("kill -9 " tail_pid " 2> /dev/null")
                     # ... and update the list of tailed files;
                     delete current_tails[tailed_file]
                  }
                  else current_tails[tailed_file] = 1
               }
               # else: it should not happen because private tail commands are only issued from here and get tracked;
            }
            close(cmd_tail)

            # resynchronize internal list with actual tailed files;
            for (f in current_tails)
               if (0 == current_tails[f])
                  # tracked file not tailed any more (this should not happen because private tail commands are killed from here only), untrack it;
                  delete current_tails[f]
               else
                  # tracked file was checked and is still alive;
                  current_tails[f] = 0
 
            # compute the next check time;
            cmd_future | getline next_check_date; close(cmd_future)
         }
      }
   }'
}
 
 
# -----------
# main;
# -----------

# create inotifywait and tail aliases so they are easily identified and stopped later;
ln -s /usr/bin/inotifywait $private_inotify 2> /dev/null
ln -s /usr/bin/tail $private_tail           2> /dev/null

# create the watcher's workdir;
mkdir -p $watcher_workdir 2> /dev/null

# enable following new or modified logs;
echo 1 > $tail_on_off
 
# start the watcher;
follow_logs &

while [ true ]; do
   # send heartbeat to inotify;
   touch $heartbeat_file
   # do something else here;
   # ...
   sleep $pause_duration
done

Admittedly, the fact that gawk relies a lot on external commands and pipes (because of its lack of libraries, but this can be arranged by extending it, see other articles in this blog such as this one) somewhat obfuscates the statements’ purpose. Also, all the logic is contained in the not-so-obvious implicit main loop which is executed each time an event is notified. Anyway, as usual, the code is only provided as an illustration.
On line 3, a “technical” sub-directory is defined and created later on line 134. It will contain the heartbeat file (file heartbeat_file, see next paragraph for more details) and a file to switch tailing on and off (file tail_on_off). This directory is watched (see the call to inotifywait on line 19).
On lines 12 and 13, private versions of the commands inotifywait and tail are defined, and later created as symlinks on lines 130 and 131; this is to facilitate their identification in the list of running processes so they can be killed if needed.
This script is encapsulated in function follow_logs() started on line 15. It is launched in the background on line 140.
On line 46 the boolean bMust_tail is initialized; it gets updated on line 53 upon a CREATE or MODIFY event on the file $tail_on_off; after it has been read into bMust_tail, the file is removed. By writing 0 or a positive number in it, aggregation of new logs is respectively disabled or enabled:

# disable following new logs;
# from within a container or in a traditional installation:
$ echo 0 > $tail_on_off
 
# or, for a container, supposing $tail_on_off is defined in the container's current user's environment:
$ docker exec container_name /bin/bash -c "echo 0 > \$tail_on_off"
 
# enable following new logs;
# from within a container or in a traditional installation:
$ echo 1 > $tail_on_off
 
# or, for a container, supposing $tail_on_off is defined in the container's current user's environment:
$ docker exec container_name /bin/bash -c "echo 1 > \$tail_on_off"

Currently running tail commands are not impacted.
Note how the $ in $tail_on_off is escaped so it is not consumed by the docker host’s shell and is passed as-is to the container.
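To make the difference concrete, here is a short comparison (container_name is a placeholder):

# escaped: the container's shell expands $tail_on_off, assuming it is set in that shell's environment;
docker exec container_name /bin/bash -c "echo 0 > \$tail_on_off"

# unescaped: the host's shell expands $tail_on_off before docker even runs, most likely to an empty
# string, so the redirection would not reach the intended file inside the container;
docker exec container_name /bin/bash -c "echo 0 > $tail_on_off"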
On lines 63 and 70, private tail commands are started when new files appear in the watched directories, or when files that are not already tailed get modified.
On line 79, a clean-up of idle (private) tail commands is performed and the internal associative array that keeps track of them is refreshed to make sure it reflects the currently running tails.
On line 142, the container entrypoint’s well-known never-ending loop is entered.
The above script can also be run stand-alone in a classic, non-containerized installation.
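For instance, assuming the script above is saved as follow_dctm_logs.sh (a hypothetical name) on such an installation, it could be started and consulted like this:

chmod +x follow_dctm_logs.sh
nohup ./follow_dctm_logs.sh > /tmp/dctm_aggregated.log 2>&1 &
# the aggregated output can then be followed like any other file;
tail -f /tmp/dctm_aggregated.log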

Keeping the gawk script busy

A problem to solve is that the gawk script runs as a coroutine of inotifywait, i.e. it is synchronized with its output. If there is no output, because no events were detected, the script blocks waiting for input, so no cleaning of idle tail commands gets done. This is not such a big deal because no new tail commands are started either, so the status quo is maintained. However, an easy work-around is possible: in order to force the script to enter the next iteration and give it a chance to perform the cleaning, we introduce a heartbeat with one watched file, e.g.:

...
while [ true ]; do
   # send heartbeat to inotify;
   touch $heartbeat_file
   # do something else here;
   # ...
   sleep $pause_duration
done


This explains the presence of the file $heartbeat_file in the list of inotifywait’s targets (see line 19 in the code above).
Now, because its timestamp is updated, at least one event is always raised periodically, an ATTRIB on the heartbeat file $heartbeat_file, as shown before. Although ATTRIB events are listened to, no action is generally taken upon them, except when they occur on $heartbeat_file, in which case their sole purpose is to trigger the script’s execution, more precisely the cleanup of inactive tail commands.

Let’s test it now.

The article Tracking Logs Inside a Documentum Container (part I) appeared first on Blog dbi services.

SSQ Unnesting

Jonathan Lewis - Wed, 2020-01-29 06:17

I hesitate to call something a bug simply because Oracle doesn’t do what I thought it would do; but when a trace file says:

“I’m not going to do X because P is not true”

followed a little later by

“I’m going to do Y because P is true”

then I think it’s safe to say there’s a bug there somewhere – even if it’s only a bug in the code that writes the trace file.

The example in this note is a feature that appeared in 12c (possibly only in 12.2) – the ability to unnest scalar subqueries in the select list and transform them into outer joins. Here’s an example to demonstrate the mechanism:

rem
rem     Script:         ssq_anomaly.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2020
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem

create table companies as
select
        rownum  id,
        dbms_random.string('U',30)      name
from
        all_objects
where
        rownum <= 1e4 -- > comment to avoid wordpress format issue
;

alter table companies add constraint com_pk primary key(id);

create table orders (
        id              number(10,0),
        id_company      number(10,0)    not null,
        date_placed     date,
        status          varchar2(1),
        items           number(3,0),
        valuation       number(6,2),
        constraint ord_fk_com foreign key(id_company) references companies,
        constraint ord_pk primary key(id)
)
/

insert into orders
select
        rownum,
        trunc(dbms_random.value(1,1e4))  id_company,
        sysdate - 100 + rownum/100        date_placed,
        chr(64+dbms_random.value(1,6))    status,
        trunc(dbms_random.value(1,11))    items,
        round(dbms_random.value(1,250),2) valuation
from
        all_objects
where
        rownum <= 1e4 -- > comment to avoid wordpress format issue
;

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'orders',
                method_opt  => 'for all columns size 1'
        );
end;
/

set serveroutput off
set linesize 180
set pagesize 60
set trimspool on

select
        /*+ 
                qb_name(main)
        */
        ord.id,
        (
        select 
                /*+ qb_name(company) */ 
                max(com.name)
        from    companies  com  
        where   com.id  = ord.id_company
        )   company,
        ord.valuation,
        ord.status
from
        orders          ord
where
        ord.date_placed > trunc(sysdate) - 1
/

select * from table(dbms_xplan.display_cursor(null,null,'alias'))
/

I’ve created an orders table with an id_company column that is declared as a foreign key to a companies table. When I’ve queried the orders table and reported the company associated with an order I’ve been a little silly and used a correlated scalar subquery in the select list to query the companies table instead of doing a simple join. In fact I’ve been more than a little silly because I’ve used an aggregate when the query is by primary key and can only return one row.

Here’s the execution plan (produced by 12.2 or 19.3)


--------------------------------------------------------------------------------
| Id  | Operation          | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |           |       |       |    33 (100)|          |
|*  1 |  HASH JOIN OUTER   |           |   143 |  8294 |    33  (22)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL| ORDERS    |   143 |  3289 |    22  (23)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| COMPANIES | 10000 |   341K|    10  (10)| 00:00:01 |
--------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$EBD4C958
   2 - SEL$EBD4C958 / ORD@MAIN
   3 - SEL$EBD4C958 / COM@COMPANY

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("COM"."ID"="ORD"."ID_COMPANY")
   2 - filter("ORD"."DATE_PLACED">TRUNC(SYSDATE@!)-1)

Note
-----
   - this is an adaptive plan

The optimizer has taken my query and turned it into a simple (outer) join between the two tables. I’ve included the Query Block / Alias information in the output so that you can see that Oracle really has generated a new query block by transforming the two separate query blocks in the original query.

Oracle has been very clever here – it has even recognised that the join is going to use a unique scan of a unique key so it has taken out the redundant aggregation step. In many cases where this type of scalar subquery unnesting is used you’re more likely to see a plan with one of the following shapes:


----------------------------------------------------------------------------------
| Id  | Operation            | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |           |       |       |    36 (100)|          |
|*  1 |  HASH JOIN OUTER     |           |   144 |   286K|    36  (28)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL  | ORDERS    |   144 |  3312 |    22  (23)| 00:00:01 |
|   3 |   VIEW               | VW_SSQ_1  | 10000 |    19M|    13  (31)| 00:00:01 |
|   4 |    HASH GROUP BY     |           | 10000 |   341K|    13  (31)| 00:00:01 |
|   5 |     TABLE ACCESS FULL| COMPANIES | 10000 |   341K|    10  (10)| 00:00:01 |
----------------------------------------------------------------------------------

-----------------------------------------------------------------------------------
| Id  | Operation             | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |           |       |       |    36 (100)|          |
|*  1 |  HASH JOIN OUTER      |           |   144 |   286K|    36  (28)| 00:00:01 |
|   2 |   JOIN FILTER CREATE  | :BF0000   |   144 |  3312 |    22  (23)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL  | ORDERS    |   144 |  3312 |    22  (23)| 00:00:01 |
|   4 |   VIEW                | VW_SSQ_1  | 10000 |    19M|    13  (31)| 00:00:01 |
|   5 |    HASH GROUP BY      |           | 10000 |   341K|    13  (31)| 00:00:01 |
|   6 |     JOIN FILTER USE   | :BF0000   | 10000 |   341K|    10  (10)| 00:00:01 |
|*  7 |      TABLE ACCESS FULL| COMPANIES | 10000 |   341K|    10  (10)| 00:00:01 |
-----------------------------------------------------------------------------------

---------------------------------------------------------------------------------
| Id  | Operation           | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |           |       |       |    34 (100)|          |
|   1 |  HASH GROUP BY      |           |   144 |  8784 |    34  (24)| 00:00:01 |
|*  2 |   HASH JOIN OUTER   |           |   144 |  8784 |    33  (22)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL| ORDERS    |   144 |  3744 |    22  (23)| 00:00:01 |
|   4 |    TABLE ACCESS FULL| COMPANIES | 10000 |   341K|    10  (10)| 00:00:01 |
---------------------------------------------------------------------------------

The first variation shows the creation of an aggregate view that is used in the join – note the generated view name vw_ssq_1 (ssq = scalar sub query). In the second variation Oracle has used a Bloom filter to reduce the volume of data passed up from the view to the hash join operator, and in the third variation Oracle has used complex view merging to handle the join before performing the aggregation.

The anomaly

Clearly the optimizer is smarter than I am with this query – it spotted that I didn’t need that max() aggregation and took it out. So maybe I should take a hint from the optimizer and edit the query to remove the max(). Here’s the plan I get if I do:


----------------------------------------------------------------------------------------
| Id  | Operation                   | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |           |       |       |   167 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID| COMPANIES |     1 |    35 |     2   (0)| 00:00:01 |
|*  2 |   INDEX UNIQUE SCAN         | COM_PK    |     1 |       |     1   (0)| 00:00:01 |
|*  3 |  TABLE ACCESS FULL          | ORDERS    |   145 |  3335 |    22  (23)| 00:00:01 |
-----------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - COMPANY / COM@COMPANY
   2 - COMPANY / COM@COMPANY
   3 - MAIN    / ORD@MAIN

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("COM"."ID"=:B1)
   3 - filter("ORD"."DATE_PLACED">TRUNC(SYSDATE@!)-1)

The optimizer doesn’t unnest the subquery (and it produces an execution plan that has a far higher cost than the unnested version). You can see from the Query Block information that the final query still consists of the two original query blocks, and the plan shows the standard “subquery in the select list” pattern – the main query block at the end of the plan with the scalar subquery(ies) above it. Even if I insert an /*+ unnest */ hint in the subquery the optimizer will not unnest the scalar subquery.

This really looks like a bug – which means I have to take a look at the CBO (10053) trace file; and here are the critical lines (from the 19.3 trace file which is significantly more informative than the 12.2 file):

...
SU: Checking validity of scalar subquery unnesting for query block COMPANY (#0)
SU: bypassed: Scalar subquery may return more than one row.
...

When we used an aggregate subquery the optimizer knew the aggregation was redundant because it was querying with equality on the primary key, so it eliminated the aggregation step from the plan; but when we don’t specify an aggregate the optimizer thinks the primary key will return more than one row!

So do we have a “documentation” bug where the trace file is simply reporting the wrong reason for bypassing unnesting, or do we have a logic bug where the optimizer makes a mistake when checking for uniqueness? (I’d like it to be the latter, and see a fix some time soon, of course.)

Conclusion

There are cases where the optimizer ought to be unnesting scalar subqueries in the select list but fails to do so for what seems to be a completely spurious reason. Unfortunately some people find it very convenient to write SQL that does primary key lookups as in-line scalar subqueries instead of joins; so if you find examples like this then (for the short term, at least) you might see some performance benefit by introducing a redundant max() operation in the scalar subquery.

Footnote

If you want to see more examples of how Oracle does, or doesn’t, handle scalar subqueries in the select list there are a couple of articles on Nenad Noveljic’s blog: one comparing how Oracle and SQL Server handle a particular case, the other looking at several other cases.

 
