Feed aggregator

Basic Replication -- 7 : Refresh Groups

Hemant K Chitale - Fri, 2019-10-11 23:24
So far, all my blog posts in this series cover "single" Materialized Views (even if I have created two MVs, they are independent of each other and can be refreshed at different schedules).

A Refresh Group is what you would define if you want multiple MVs to be refreshed to the same point in time.  This allows for
(a) data from transactions that touch multiple tables
or
(b) views of multiple tables
to be consistent in the target MVs.

For example, if you have SALES_ORDER and LINE_ITEMS tables and the MVs on these are refreshed at different times, you might see the ORDER (Header) without the LINE_ITEMs (or, worse, in the absence of Referential Integrity constraints, LINE_ITEMs without the ORDER (Header) !).

Here's a demo, using the HR DEPARTMENTS and EMPLOYEES tables with corresponding MVs built in the HEMANT schema.

SQL> show user
USER is "HR"
SQL> select count(*) from departments;

COUNT(*)
----------
27

SQL> select count(*) from employees;

COUNT(*)
----------
107

SQL>
SQL> grant select on departments to hemant;

Grant succeeded.

SQL> grant select on employees to hemant;

Grant succeeded.

SQL>
SQL> create materialized view log on departments;

Materialized view log created.

SQL> grant select, delete on mlog$_departments to hemant;

Grant succeeded.

SQL>
SQL> create materialized view log on employees;

Materialized view log created.

SQL> grant select, delete on mlog$_employees to hemant;

Grant succeeded.

SQL>
SQL>


Having created the source MV Logs, note that I have to grant privileges to the account (HEMANT) that will be reading from and deleting from the MV Logs.

Next, I set up the MVs and the Refresh Group.

SQL> show user
USER is "HEMANT"
SQL>
SQL> select count(*) from hr.departments;

COUNT(*)
----------
27

SQL> select count(*) from hr.employees;

COUNT(*)
----------
107

SQL>
SQL>
SQL> create materialized view mv_dept
2 refresh fast on demand
3 as select department_id as dept_id, department_name as dept_name
4 from hr.departments
5 /

Materialized view created.

SQL>
SQL> create materialized view mv_emp
2 refresh fast on demand
3 as select department_id as dept_id, employee_id as emp_id,
4 first_name, last_name, hire_date
5 from hr.employees
6 /

Materialized view created.

SQL>
SQL> select count(*) from mv_dept;

COUNT(*)
----------
27

SQL> select count(*) from mv_emp;

COUNT(*)
----------
107

SQL>
SQL> execute dbms_refresh.make(-
> name=>'HR_MVs',-
> list=>'MV_DEPT,MV_EMP',-
> next_date=>sysdate+0.5,-
> interval=>'sysdate+1');

PL/SQL procedure successfully completed.

SQL>
SQL> commit;

Commit complete.

SQL>


Here, I have built two MVs and then a Refresh Group called "HR_MVS".  The first refresh will be 12 hours from now and every subsequent refresh will be 24 hours after the previous one.  (The Refresh Interval must be set to something larger than the time it takes to execute the actual Refresh.)
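
To verify the definition, the group and its members can be queried from the dictionary views (a hedged sketch; I'm assuming the USER_REFRESH and USER_REFRESH_CHILDREN views expose these columns):

SQL> select rname, next_date, interval from user_refresh;

SQL> select name, type from user_refresh_children;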

However, I can manually execute the Refresh after new rows are populated into the source tables. First, I insert new rows

SQL> show user
USER is "HR"
SQL> insert into departments (department_id, department_name)
2 values
3 (departments_seq.nextval, 'New Department');

1 row created.

SQL> select department_id
2 from departments
3 where department_name = 'New Department';

DEPARTMENT_ID
-------------
280

SQL> insert into employees(employee_id, first_name, last_name, email, hire_date, job_id, department_id)
2 values
3 (employees_seq.nextval, 'Hemant', 'Chitale', 'hkc@myenterprise.com', sysdate, 'AD_VP', 280);

1 row created.

SQL> select employee_id
2 from employees
3 where first_name = 'Hemant';

EMPLOYEE_ID
-----------
208

SQL> commit;

Commit complete.

SQL>
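
At this point the changes are captured in the MV Logs; you can peek at them directly (a quick sketch; DMLTYPE$$ is one of the housekeeping columns Oracle creates in an MV Log):

SQL> select dmltype$$, count(*)
2 from hr.mlog$_employees
3 group by dmltype$$;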


Now that there are new rows, the target MVs must be refreshed together.

SQL> connect hemant/hemant@orclpdb1
Connected.
SQL> execute dbms_refresh.refresh('HR_MVS');

PL/SQL procedure successfully completed.

SQL> select count(*) from mv_dept;

COUNT(*)
----------
28

SQL> select count(*) from mv_emp;

COUNT(*)
----------
108

SQL>
SQL> select * from mv_dept
2 where dept_id=280;

DEPT_ID DEPT_NAME
---------- ------------------------------
280 New Department

SQL> select * from mv_emp
2 where emp_id=208;

DEPT_ID EMP_ID FIRST_NAME LAST_NAME HIRE_DATE
---------- ---------- -------------------- ------------------------- ---------
280 208 Hemant Chitale 12-OCT-19

SQL>


Both MVs have been Refresh'd together as an ATOMIC Transaction.  If either of the two MVs had failed to refresh (e.g. being unable to allocate an extent to grow the MV), both the INSERTs would be rolled back.  (Note : it is not necessary for both source tables to have new / updated rows; the Refresh Group works even if only one of the two tables has new / updated rows.)

Note : I have used DBMS_REFRESH.REFRESH (instead of DBMS_MVIEW.REFRESH) to execute the Refresh.
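
For comparison, a similar consistent refresh of the same two MVs could be requested through DBMS_MVIEW.REFRESH (a hedged sketch; ATOMIC_REFRESH defaults to TRUE in recent versions, and the METHOD string has one character per MV):

SQL> execute dbms_mview.refresh(-
> list=>'MV_DEPT,MV_EMP',-
> method=>'FF',-
> atomic_refresh=>TRUE);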

You can build multiple Refresh Groups, each consisting of *multiple* Source Tables from the same source database.
You would define each Refresh Group to maintain consistency of data across multiple MVs (sourced from different tables).
Besides the Refresh Group on two HR tables, I could have, within the HEMANT schema, more Refresh Groups on FINANCE schema tables as well.

(Can you have a Refresh Group sourcing from tables from different schemas ?  Try that out !)


What's the downside of Refresh Groups ?    
Undo and Redo !  Every Refresh consists of INSERT/UPDATE/DELETE operations on the MVs.  And if any one of the MVs fails to Refresh, the entire set of DMLs (across all the MVs in the Refresh Group) has to *Rollback* !


Categories: DBA Blogs

Oracle cloud: Login

Dietrich Schroff - Fri, 2019-10-11 15:01
The main problem with logging into Oracle Cloud is that there is no generic login URL.
Inside the mail Oracle sent after the registration there is a specific URL, something like:

http://app.response.oracle-mail.com/e/er?elq_mid=920.......

But this ends after some seconds at:

https://cloud.oracle.com/en_US/sign-in

There is also another login page, but this one does not work for my setup:

https://login.eu-frankfurt-1.oraclecloud.com/v1/oauth2/authorize


For this one I did not find any documentation at all, so if somebody knows how this login page can be used, please add a comment...

Patroni Operations – switchover and failover

Yann Neuhaus - Fri, 2019-10-11 09:22

In this post we will have a look at switchover and failover of a Patroni cluster, as well as at the maintenance mode Patroni offers, which gives you the opportunity to prevent an automatic failover.

Switchover

There are two possibilities to run a switchover, either in scheduled mode or immediately.

1. Scheduled Switchover
postgres@patroni1:/home/postgres/ [PG1] patronictl switchover
Master [patroni1]:
Candidate ['patroni2', 'patroni3'] []: patroni2
When should the switchover take place (e.g. 2019-10-08T11:31 )  [now]: 2019-10-08T10:32
Current cluster topology
+---------+----------+----------------+--------+---------+----+-----------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB |
+---------+----------+----------------+--------+---------+----+-----------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  2 |       0.0 |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  2 |       0.0 |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  2 |       0.0 |
+---------+----------+----------------+--------+---------+----+-----------+
Are you sure you want to schedule switchover of cluster PG1 at 2019-10-08T10:32:00+02:00, demoting current master patroni1? [y/N]: y
2019-10-08 10:31:14.89236 Switchover scheduled
+---------+----------+----------------+--------+---------+----+-----------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB |
+---------+----------+----------------+--------+---------+----+-----------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  2 |       0.0 |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  2 |       0.0 |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  2 |       0.0 |
+---------+----------+----------------+--------+---------+----+-----------+
 Switchover scheduled at: 2019-10-08T10:32:00+02:00
                    from: patroni1
                      to: patroni2
postgres@patroni1:/home/postgres/ [PG1]

That’s it. At the given time, the switchover will take place. All you see in the logfile is an entry like this:

Oct  8 10:32:00 patroni1 patroni: 2019-10-08 10:32:00,006 INFO: Manual scheduled failover at 2019-10-08T10:32:00+02:00
Oct  8 10:32:00 patroni1 patroni: 2019-10-08 10:32:00,016 INFO: Got response from patroni2 http://192.168.22.112:8008/patroni: {"database_system_identifier": "6745341072751547355", "postmaster_start_time": "2019-10-08 10:09:40.217 CEST", "timeline": 2, "cluster_unlocked": false, "patroni": {"scope": "PG1", "version": "1.6.0"}, "state": "running", "role": "replica", "xlog": {"received_location": 83886560, "replayed_timestamp": null, "paused": false, "replayed_location": 83886560}, "server_version": 110005}
Oct  8 10:32:00 patroni1 patroni: 2019-10-08 10:32:00,113 INFO: manual failover: demoting myself
Oct  8 10:32:01 patroni1 patroni: 2019-10-08 10:32:01,256 INFO: Leader key released
Oct  8 10:32:03 patroni1 patroni: 2019-10-08 10:32:03,271 INFO: Local timeline=2 lsn=0/6000028
Oct  8 10:32:03 patroni1 patroni: 2019-10-08 10:32:03,279 INFO: master_timeline=3
Oct  8 10:32:03 patroni1 patroni: 2019-10-08 10:32:03,281 INFO: master: history=1#0110/5000098#011no recovery target specified
Oct  8 10:32:03 patroni1 patroni: 2#0110/6000098#011no recovery target specified
Oct  8 10:32:03 patroni1 patroni: 2019-10-08 10:32:03,282 INFO: closed patroni connection to the postgresql cluster
Oct  8 10:32:03 patroni1 patroni: 2019-10-08 10:32:03,312 INFO: postmaster pid=11537
Oct  8 10:32:03 patroni1 patroni: 192.168.22.111:5432 - no response
Oct  8 10:32:03 patroni1 patroni: 2019-10-08 10:32:03.325 CEST - 1 - 11537 -  - @ - 0LOG:  listening on IPv4 address "192.168.22.111", port 5432
Oct  8 10:32:03 patroni1 patroni: 2019-10-08 10:32:03.328 CEST - 2 - 11537 -  - @ - 0LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
Oct  8 10:32:03 patroni1 patroni: 2019-10-08 10:32:03.339 CEST - 3 - 11537 -  - @ - 0LOG:  redirecting log output to logging collector process
Oct  8 10:32:03 patroni1 patroni: 2019-10-08 10:32:03.339 CEST - 4 - 11537 -  - @ - 0HINT:  Future log output will appear in directory "pg_log".
Oct  8 10:32:04 patroni1 patroni: 192.168.22.111:5432 - accepting connections
Oct  8 10:32:04 patroni1 patroni: 192.168.22.111:5432 - accepting connections
Oct  8 10:32:04 patroni1 patroni: 2019-10-08 10:32:04,895 INFO: Lock owner: patroni2; I am patroni1
Oct  8 10:32:04 patroni1 patroni: 2019-10-08 10:32:04,895 INFO: does not have lock
Oct  8 10:32:04 patroni1 patroni: 2019-10-08 10:32:04,896 INFO: establishing a new patroni connection to the postgres cluster
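
The prompts can also be avoided entirely; a sketch of the same scheduled switchover as a single call, assuming the flag names of patronictl around version 1.6:

postgres@patroni1:/home/postgres/ [PG1] patronictl switchover --master patroni1 --candidate patroni2 --scheduled 2019-10-08T10:32 --force
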
2. Immediate switchover

Here you start the same way as for the scheduled switchover, but the switchover takes place immediately.

postgres@patroni1:/home/postgres/ [PG1] patronictl list
+---------+----------+----------------+--------+---------+----+-----------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB |
+---------+----------+----------------+--------+---------+----+-----------+
|   PG1   | patroni1 | 192.168.22.111 |        | running |  1 |       0.0 |
|   PG1   | patroni2 | 192.168.22.112 | Leader | running |  1 |       0.0 |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  1 |       0.0 |
+---------+----------+----------------+--------+---------+----+-----------+
postgres@patroni1:/home/postgres/ [PG1] patronictl switchover
Master [patroni2]:
Candidate ['patroni1', 'patroni3'] []: patroni1
When should the switchover take place (e.g. 2019-10-08T11:09 )  [now]:
Current cluster topology
+---------+----------+----------------+--------+---------+----+-----------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB |
+---------+----------+----------------+--------+---------+----+-----------+
|   PG1   | patroni1 | 192.168.22.111 |        | running |  1 |       0.0 |
|   PG1   | patroni2 | 192.168.22.112 | Leader | running |  1 |       0.0 |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  1 |       0.0 |
+---------+----------+----------------+--------+---------+----+-----------+
Are you sure you want to switchover cluster PG1, demoting current master patroni2? [y/N]: y
2019-10-08 10:09:38.88046 Successfully switched over to "patroni1"
+---------+----------+----------------+--------+---------+----+-----------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB |
+---------+----------+----------------+--------+---------+----+-----------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  1 |           |
|   PG1   | patroni2 | 192.168.22.112 |        | stopped |    |   unknown |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  1 |       0.0 |
+---------+----------+----------------+--------+---------+----+-----------+
postgres@patroni1:/home/postgres/ [PG1] patronictl list
+---------+----------+----------------+--------+---------+----+-----------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB |
+---------+----------+----------------+--------+---------+----+-----------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  2 |       0.0 |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  2 |       0.0 |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  2 |       0.0 |
+---------+----------+----------------+--------+---------+----+-----------+
postgres@patroni1:/home/postgres/ [PG1]
Failover

In contrast to a switchover, a failover is executed automatically when the Leader node becomes unavailable for an unplanned reason.
You can only adjust some parameters to influence the failover behaviour.

The parameters for failover are also managed using patronictl, but they are not in the postgresql parameters section – they are above it. So let’s say we adjust one parameter and add another one so that it no longer uses its default.

postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl edit-config
postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl edit-config
---
+++
@@ -1,5 +1,6 @@
-loop_wait: 7
+loop_wait: 10
 maximum_lag_on_failover: 1048576
+master_start_timeout: 240
 postgresql:
   parameters:
     archive_command: /bin/true

Apply these changes? [y/N]: y
Configuration changed

Afterwards there is no need to restart the database; changes take effect immediately. So the failover can be configured according to your needs. A list of all possible parameters can be found here.
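
The same change can presumably be made non-interactively; a sketch, assuming patronictl edit-config supports the --set and --force options:

postgres@patroni1:/u01/app/postgres/local/dmk/etc/ [PG1] patronictl edit-config --set loop_wait=10 --set master_start_timeout=240 --force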

Maintenance mode

In some cases it is necessary to do maintenance on a single node and you do not want Patroni to manage the cluster, e.g. for release updates.
When Patroni is paused, it won’t change the state of PostgreSQL. For example, it will not try to start the cluster when it is stopped.

So let’s do an example. We will pause the cluster, stop the replica, upgrade from 9.6.8 to 9.6.13 and afterwards start the replica again. If we did not pause the cluster, the stopped database would be started again automatically by Patroni.

postgres@patroni1:/home/postgres/ [PG1] patronictl pause
Success: cluster management is paused
postgres@patroni1:/home/postgres/ [PG1] patronictl list
+---------+----------+----------------+--------+---------+----+-----------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB |
+---------+----------+----------------+--------+---------+----+-----------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  2 |       0.0 |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  2 |       0.0 |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  2 |       0.0 |
+---------+----------+----------------+--------+---------+----+-----------+
 Maintenance mode: on

On the replica

postgres@patroni2:/home/postgres/ [PG1] pg_ctl stop -D /u02/pgdata/96/PG1/ -m fast

postgres@patroni2:/home/postgres/ [PG1] export PATH=/u01/app/postgres/product/PG96/db_13/bin:$PATH
postgres@patroni2:/home/postgres/ [PG1] export PORT=5432
postgres@patroni2:/home/postgres/ [PG1] which pg_ctl
/u01/app/opendb/product/PG96/db_13/bin/pg_ctl

postgres@patroni2:/home/postgres/ [PG1] pg_ctl -D /u02/pgdata/96/PG1 start
server starting
postgres@patroni2:/home/postgres/ [PG1] 2019-10-08 17:25:28.358 CEST - 1 - 23192 -  - @ - 0LOG:  redirecting log output to logging collector process
2019-10-08 17:25:28.358 CEST - 2 - 23192 -  - @ - 0HINT:  Future log output will appear in directory "pg_log".

postgres@patroni2:/home/postgres/ [PG1] psql -c "select version()" postgres
                                                           version
------------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 9.6.13 dbi services build on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-36), 64-bit
(1 row)

postgres@patroni2:/home/postgres/ [PG1] patronictl resume
Success: cluster management is resumed

postgres@patroni2:/home/postgres/ [PG1] patronictl list
+---------+----------+----------------+--------+---------+----+-----------+
| Cluster |  Member  |      Host      |  Role  |  State  | TL | Lag in MB |
+---------+----------+----------------+--------+---------+----+-----------+
|   PG1   | patroni1 | 192.168.22.111 | Leader | running |  5 |       0.0 |
|   PG1   | patroni2 | 192.168.22.112 |        | running |  5 |       0.0 |
|   PG1   | patroni3 | 192.168.22.113 |        | running |  5 |       0.0 |
+---------+----------+----------------+--------+---------+----+-----------+

You can do this on the other nodes as well.

Conclusion

Switchover is quite easy, and in all the tests I have done so far it was really reliable. The same goes for failover; here you just have to think about adjusting the parameters to your needs. Waiting 5 minutes for a failover is not the best solution in every case.

Cet article Patroni Operations – switchover and failover est apparu en premier sur Blog dbi services.

v$session

Jonathan Lewis - Fri, 2019-10-11 06:29

Here’s an odd, and unpleasant, detail about querying v$session in the “most obvious” way. (And if you were wondering what made me resurrect and complete a draft on “my session id” a couple of days ago, this posting is the reason). Specifically if you want to select some information for your own session from v$session the query you’re likely to use in any recent version of Oracle will probably be of the form:


select {list for columns} from v$session where sid = to_number(sys_context('userenv','sid'));

Unfortunately that one little statement hides two anomalies – which you can see in the execution plan. Here’s a demonstration cut from an SQL*Plus session running under 19.3.0.0:


SQL> select * from table(dbms_xplan.display_cursor);

SQL_ID  gcfrzq9knynj3, child number 0
-------------------------------------
select program from V$session where sid = sys_context('userenv','sid')

Plan hash value: 2422122865

----------------------------------------------------------------------------------
| Id  | Operation                 | Name            | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                 |       |       |     1 (100)|
|   1 |  MERGE JOIN CARTESIAN     |                 |     1 |    33 |     0   (0)|
|   2 |   NESTED LOOPS            |                 |     1 |    12 |     0   (0)|
|*  3 |    FIXED TABLE FULL       | X$KSLWT         |     1 |     8 |     0   (0)|
|*  4 |    FIXED TABLE FIXED INDEX| X$KSLED (ind:2) |     1 |     4 |     0   (0)|
|   5 |   BUFFER SORT             |                 |     1 |    21 |     0   (0)|
|*  6 |    FIXED TABLE FULL       | X$KSUSE         |     1 |    21 |     0   (0)|
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("W"."KSLWTSID"=TO_NUMBER(SYS_CONTEXT('userenv','sid')))
   4 - filter("W"."KSLWTEVT"="E"."INDX")
   6 - filter((BITAND("S"."KSUSEFLG",1)<>0 AND BITAND("S"."KSSPAFLG",1)<>0 AND 
              "S"."INDX"=TO_NUMBER(SYS_CONTEXT('userenv','sid'))
              AND INTERNAL_FUNCTION("S"."CON_ID") AND "S"."INST_ID"=USERENV('INSTANCE')))

As you can see, v$session is a join of 3 separate structures – x$kslwt (v$session_wait), x$ksled (v$event_name), and x$ksuse (the original v$session as it was some time around 8i), and the plan shows two “full tablescans” and a Cartesian merge join. Tablescans and Cartesian merge joins are not necessarily bad – especially where small tables and tiny numbers of rows are concerned – but they do merit at least a brief glance.

x$ksuse is a C structure in the fixed SGA and that structure is a segmented array (which seems to be chunks of 126 entries in 19.3, and chunks of 209 entries in 12.2 – but that’s fairly irrelevant). The SID is simply the index into the array counting from 1, so if you have a query with a predicate like ‘SID = 99’ Oracle can work out the address of the 99th entry in the array and access it very quickly – which is why the SID column is reported as a “fixed index” column in the view v$indexed_fixed_column.

But we have two problems immediately visible:

  1. the optimizer is not using the “index” to access x$ksuse despite the fact that we’re giving it exactly the value we want to use (and we can see a suitable predicate at operation 6 in the plan)
  2. the optimizer has decided to start executing the query at the x$kslwt table

Before looking at why things have gone wrong, let’s check the execution plan to see what would have happened if we’d copied the value from the sys_context() call into a bind variable and queried using the bind variable – which we’ll keep as a character type to make it a fair comparison:

SQL_ID  cm3ub1tctpdyt, child number 0
-------------------------------------
select program from v$session where sid = to_number(:v1)

Plan hash value: 1627146547

----------------------------------------------------------------------------------
| Id  | Operation                 | Name            | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                 |       |       |     1 (100)|
|   1 |  MERGE JOIN CARTESIAN     |                 |     1 |    32 |     0   (0)|
|   2 |   NESTED LOOPS            |                 |     1 |    12 |     0   (0)|
|*  3 |    FIXED TABLE FIXED INDEX| X$KSLWT (ind:1) |     1 |     8 |     0   (0)|
|*  4 |    FIXED TABLE FIXED INDEX| X$KSLED (ind:2) |     1 |     4 |     0   (0)|
|   5 |   BUFFER SORT             |                 |     1 |    20 |     0   (0)|
|*  6 |    FIXED TABLE FIXED INDEX| X$KSUSE (ind:1) |     1 |    20 |     0   (0)|
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("W"."KSLWTSID"=TO_NUMBER(:V1))
   4 - filter("W"."KSLWTEVT"="E"."INDX")
   6 - filter(("S"."INDX"=TO_NUMBER(:V1) AND BITAND("S"."KSSPAFLG",1)<>0
              AND BITAND("S"."KSUSEFLG",1)<>0 AND INTERNAL_FUNCTION("S"."CON_ID") AND
              "S"."INST_ID"=USERENV('INSTANCE')))


When we have a (character) bind variable instead of a sys_context() value the optimizer manages to use the “fixed indexes”, but it has still started executing at x$kslwt, and is still doing a Cartesian merge join. The plan would be the same if the bind variable were a numeric type, and we’d still get the same plan if we replaced the bind variable with a literal number.

So problem number 1 is that Oracle only seems able to use the fixed index path for literal values and simple bind variables (plus a few “simple” functions). It doesn’t seem to use the fixed indexes for most functions (even deterministic ones) returning a value and the sys_context() function is a particular example of this.

Transitivity

Problem number 2 comes from a side-effect of something that I first described about 13 years ago – transitive closure. Take a look at the predicates in both the execution plans above. Where’s the join condition between x$ksuse and x$kslwt ? There should be one, because the underlying SQL defining [g]v$session  has the following joins:

from
      x$ksuse s,
      x$ksled e,
      x$kslwt w
where
      bitand(s.ksspaflg,1)!=0
and   bitand(s.ksuseflg,1)!=0
and   s.indx=w.kslwtsid       -- this is the SID column for v$session and v$session_wait
and   w.kslwtevt=e.indx
 

What’s happened here is that the optimizer has used transitive closure: “if a = b and b = c then a = c” to clone the predicate “s.indx = to_number(sys_context(…))” to “w.kslwtsid = to_number(sys_context(…))”. But at the same time the optimizer has eliminated the predicate “s.indx = w.kslwtsid”, which it shouldn’t do because this is 12.2.0.1 and ever since 10g we’ve had the parameter _optimizer_transitivity_retain = true — but SYS is ignoring the parameter setting.

So we no longer have a join condition between x$ksuse and x$kslwt – which means there has to be a cartesian merge join between them and the only question is whether this should take place before or after the join between x$kslwt and x$ksled. In fact, the order doesn’t really matter because there will be only one row identified in x$kslwt and one row in x$ksuse, and the join to x$ksled is simply a lookup (by undeclarable unique key) to translate an id into a name and it will take place only once whatever we do about the other two structures.

But there is a catch – especially if your sessions parameter is 25,000 (which it shouldn’t be) and the number of connected sessions is currently 20,000 (which it shouldn’t be) – the predicate against x$ksuse does a huge amount of work as it walks the entire array testing every row (and it doesn’t even do the indx test first – it does a couple of bitand() operations). Even then this wouldn’t be a disaster – we’re only talking a couple of hundredths of a second of CPU – until you find the applications that run this query a huge number of times.

We would prefer to avoid two full tablescans since the arrays could be quite large, and of the two it’s the tablescan of x$ksuse that is going to be the greater threat; so is there a way to bypass the threat?  Once we’ve identified the optimizer anomaly we’ve got a pointer to a solution. Transitivity is going wrong, so let’s attack the transitivity. Checking the hidden parameters we can find a parameter: _optimizer_generate_transitive_pred which defaults to true, so let’s set it to false for the query and check the plan:

select  /*+   opt_param('_optimizer_generate_transitive_pred','FALSE')
*/  program from  v$session where  sid = sys_context('userenv','sid')

Plan hash value: 3425234845

----------------------------------------------------------------------------------
| Id  | Operation                 | Name            | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                 |       |       |     1 (100)|
|   1 |  NESTED LOOPS             |                 |     1 |    32 |     0   (0)|
|   2 |   NESTED LOOPS            |                 |     1 |    28 |     0   (0)|
|   3 |    FIXED TABLE FULL       | X$KSLWT         |    47 |   376 |     0   (0)|
|*  4 |    FIXED TABLE FIXED INDEX| X$KSUSE (ind:1) |     1 |    20 |     0   (0)|
|*  5 |   FIXED TABLE FIXED INDEX | X$KSLED (ind:2) |     1 |     4 |     0   (0)|
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter(("S"."INDX"="W"."KSLWTSID" AND BITAND("S"."KSSPAFLG",1)<>0
              AND BITAND("S"."KSUSEFLG",1)<>0 AND "S"."INDX"=TO_NUMBER(SYS_CONTEXT('user
              env','sid')) AND INTERNAL_FUNCTION("S"."CON_ID") AND
              "S"."INST_ID"=USERENV('INSTANCE')))
   5 - filter("W"."KSLWTEVT"="E"."INDX")


Although it’s not nice to insert hidden parameters into the optimizer activity we do have a result. We don’t have any filtering on x$kslwt – fortunately this seems to be limited in size (but see footnote) to the number of current sessions (unlike x$ksuse which has an array size defined by the sessions parameter or derived from the processes parameter). For each row in x$kslwt we do an access into x$ksuse using the “index” (note that we don’t see access predicates for the fixed tables, we just have to note the operation says FIXED INDEX and spot the “index-related” predicate in the filter predicate list), so this strategy has reduced the number of times we check the complex predicate on x$ksuse rows.

It’s still far from ideal, though. What we’d really like to do is access x$kslwt by index using the known value from sys_context(‘userenv’,’sid’). As it stands the path we get from using a hidden parameter which isn’t listed as legal for the opt_param() hint is a plan that we would get if we used an unhinted query that searched for audsid = sys_context(‘userenv’,’sessionid’).

SQL_ID  7f3f9b9f32u7z, child number 0
-------------------------------------
select  program from  v$session where  audsid =
sys_context('userenv','sessionid')

Plan hash value: 3425234845

----------------------------------------------------------------------------------
| Id  | Operation                 | Name            | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                 |       |       |     1 (100)|
|   1 |  NESTED LOOPS             |                 |     2 |    70 |     0   (0)|
|   2 |   NESTED LOOPS            |                 |     2 |    62 |     0   (0)|
|   3 |    FIXED TABLE FULL       | X$KSLWT         |    47 |   376 |     0   (0)|
|*  4 |    FIXED TABLE FIXED INDEX| X$KSUSE (ind:1) |     1 |    23 |     0   (0)|
|*  5 |   FIXED TABLE FIXED INDEX | X$KSLED (ind:2) |     1 |     4 |     0   (0)|
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - filter(("S"."INDX"="W"."KSLWTSID" AND BITAND("S"."KSSPAFLG",1)<>0
              AND BITAND("S"."KSUSEFLG",1)<>0 AND "S"."KSUUDSES"=TO_NUMBER(SYS_CONTEXT('
              userenv','sessionid')) AND INTERNAL_FUNCTION("S"."CON_ID") AND
              "S"."INST_ID"=USERENV('INSTANCE')))
   5 - filter("W"."KSLWTEVT"="E"."INDX")


The bottom line, then, seems to be that if you need a query by SID against v$session to be as efficient as possible then your best bet is to load a numeric variable with the sys_context(‘userenv’,’sid’) and then select where “sid = :bindvariable”.  Otherwise query by audsid, or use a hidden parameter to affect the optimizer.
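
For example, a minimal sketch in PL/SQL (local variables arrive in the SQL as bind variables, so the fixed index path is available):

declare
        m_sid           number := to_number(sys_context('userenv','sid'));
        m_program       v$session.program%type;
begin
        select  program
        into    m_program
        from    v$session
        where   sid = m_sid;

        dbms_output.put_line(m_program);
end;
/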

Until the SYS schema follows the _optimizer_transitivity_retain parameter or treats sys_context() the same way it treats a bind variable there is always going to be some unnecessary work when querying v$session, and that excess will grow with either the number of connected sessions (if you optimize the query) or with the value of the sessions parameter.

Footnote

In (much) older versions of Oracle v$session_wait sat on top of x$ksusecst, which was part of the same C structure as x$ksuse. In newer versions of Oracle x$kslwt is a structure that is created on demand in the PGA/UGA – I hope that there’s a short cut that allows Oracle to find the waiting elements in x$ksuse[cst] efficiently, rather than requiring a walk through the whole thing, otherwise a tablescan of the (nominally smaller) x$kslwt structure will be at least as expensive as a tablescan of the x$ksuse structure.

Update (just a few minutes after posting)

Bob Bryla has pointed out in a tweet that there are many “bugs” not fixed until 19.1 for which the workaround is to set “_optimizer_transitivity_retain” to false. So maybe this isn’t an example of SYS doing something particularly strange – it may be part of a general reworking of the mechanism that still has a couple of undesirable side effects.

Bob’s comment prompted me to clone the x$ tables into real tables in a non-SYS schema and model the fixed indexes with primary keys, and I found that the resulting plan (though very efficient) still discarded the join predicate. So we may be seeing the side effects of a code enhancement relating to generating predicates that produce unique key access paths. (A “contrary” test, the one in the 2013 article I linked to, still retains the join predicate for the query that has non-unique indexes.)

 

Show null for switch items

Jeff Kemp - Fri, 2019-10-11 02:51

An application I maintain needed a checklist feature added. I wanted to show a “Yes / No” switch for a list of checklist items. Initially, when the record is created, the checklist is populated with the questions along with a NULL for the response.

I generated the switches in an ordinary Classic report using code like this:

select r.name as risk_category
      ,apex_item.switch
         (p_idx        => 10
         ,p_value      => i.response
         ,p_on_value   => 'Yes'
         ,p_on_label   => 'Yes'
         ,p_off_value  => 'No'
         ,p_off_label  => 'No'
         ,p_item_id    => 'RESPONSE_' || rownum
         ,p_item_label => i.risk_category_code || '-' || i.rci_fk
         ,p_attributes => 'data-risk="' || i.risk_category_code || '"'
         )
       ||apex_item.hidden(p_idx => 11, p_value => i.rci_fk)
       as response
      ,i.question_text
from supplier_risk_checklist_items i
join risk_categories r on r.code = i.risk_category_code
where i.sri_fk = :P10_ID
order by r.sort_order nulls last, i.sort_order nulls last, i.rci_fk

I’ve used p_idx values of 10 and 11 in order to avoid conflicting with another tabular report on this particular page. The “response” column in the report has CSS Classes set to responseSwitch (this becomes useful later when we want to write javascript targeting just these items and nothing else on the page) and its Escape special characters attribute is set to No. The report when run looks like this:

Some of the responses are “Yes”, some “No”, and some are NULL (unanswered).

The problem is that all the NULL responses are indistinguishable from the “No” responses. If the user clicks “Yes” or “No”, the response is saved correctly – but the user cannot tell which ones haven’t explicitly been answered yet.

To find a solution for this issue I started by examining the HTML being generated for each question. I noticed that the input option for the “No” value was marked as “checked”, while the hidden input item had no “value” on it. These were the ones that needed fixing.

Example 1. Notice that the displayed radio button RESPONSE_10_N is “checked”, but the associated hidden input RESPONSE_10 has no value attribute.
Example 2. In this example, the displayed radio button RESPONSE_5_N is “checked”, but that’s ok because the hidden input RESPONSE_5 has the value “No” – so we don’t want to change this one.

In the page’s Execute When Page Loads, I search for all instances of responseSwitch where the hidden input item does not have a value attribute; in each case, I find the associated input item that shows “No” and unset the “checked” property:

// workaround for generated switch items showing "No" when value is null
// search for the hidden input items without a value (i.e. null on the database)
$(".responseSwitch input[name='f10']:not([value])").each(function(i){
    var id = $(this).attr("id");
    // these will have "checked" on the "No" option; remove it
    $(".responseSwitch input#"+id+"_N").prop("checked",null);
});

This makes it clear to the user which checklist items have been answered so far, and which ones haven’t.

Note: the user is given no way to unset an answer once it has been saved; if this were a problem I would change this to use an ordinary Select list item instead of a Switch item.

OGB Appreciation Day: add an error in a PL/SQL Process to the inline notification in Oracle APEX

Dimitri Gielis - Thu, 2019-10-10 12:01
This post is part of the OGB (Oracle Groundbreakers) Appreciation Day 2019, a thank you to everyone that makes the community great, especially those people that work at keeping us all moving!

Before I give my tip on how to add an error message from your PL/SQL code in your Page Process to a notification message in Oracle APEX, I want to start with thanking some people.

What keeps me going are a few things:

  • The innovations of technology and more specifically the Oracle Database, ORDS, and Oracle APEX. I want to thank all the developers and the people behind those products. They allow me to help other people with the tools they create and keep on learning about the new features that are released. 
  • I want to thank the fantastic #orclapex (Oracle APEX) and Groundbreakers community. I believe we are a great example of how people help and support each other and are motivated to bring the technology further. Over time I got to know a lot of people, many I consider now friends.
  • I want to thank you because you read this, show your appreciation and push me forward to share more. I'm passionate about the technology I use. I love helping people with my skill set of developing software and while I learn, share my knowledge through this blog. 

So back to my tip of today... how do you show a message in the notification on a page?


You can do that with the APEX_ERROR PL/SQL API.


To see the usage yourself, create an empty page, with one region and a button that submits the page.
In the Submit Process, simulate some PL/SQL Code where you raise an error.

For example:
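
A minimal sketch of such a process (P1_VALUE is a hypothetical page item; the error is routed to the inline notification area):

begin
    -- simulate a failing business rule in the page process
    if :P1_VALUE is null then
        apex_error.add_error(
            p_message          => 'Value is required.',
            p_display_location => apex_error.c_inline_in_notification);
    end if;
end;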


That's it! Now you can get your errors in the notification message area of your Oracle APEX Page.

Categories: Development

100 Good Tumblr Usernames

VitalSoftTech - Thu, 2019-10-10 09:38

Shakespeare once said, “What is in a name?”. Our boy Shakespeare never had a Tumblr blog because, to have a hit blog, one must think of a catchy and cool Tumblr username. He would be pulling his hair out in despair had he tried to get his Tumblr blog off the ground with all the […]

The post 100 Good Tumblr Usernames appeared first on VitalSoftTech.

Categories: DBA Blogs

MESTEC Revolutionizes Manufacturing Performance with Oracle Autonomous Database

Oracle Press Releases - Thu, 2019-10-10 09:00

By Peter Schutt, Senior Director, Oracle—Oct 10, 2019

Manufacturing is a 24/7 industry where high availability is critical. MESTEC provides intelligent SaaS solutions to optimize the lifecycle from planning to execution for some of the world’s most prestigious manufacturers of submarines, missiles, micro-semiconductors, orthopedic hips, and pastry pies. Moving MESTEC’s legacy on-premises infrastructure to the Oracle Autonomous Database Cloud that has zero downtime allows the company to more strategically focus resources on innovating tools to improve manufacturing quality, cost, and delivery performance.

Using Oracle Autonomous Transaction Processing in combination with Microsoft Azure Interconnect has helped MESTEC cut its labor and infrastructure costs in half compared to an equivalent on-premises environment, and it is seeing workloads run up to 600% faster with half as many CPUs. Autonomous Transaction Processing patches, maintains, and tunes itself, providing a more secure environment that frees up resources to spend more valuable time on customer services and training. MESTEC also has greater flexibility to autoscale capacity up and down in seconds depending on demand, and can very easily and quickly onboard new customers and assume less risk with automatic disaster recovery.

MESTEC clients are seeing wide-ranging benefits from embracing autonomous technologies throughout manufacturing factories. There’s a 60% increase in labor productivity, a 50% reduction in customer complaints, and cost savings through a 20% reduction in working inventory. With MESTEC and Oracle, the factory of the future is available now.

Watch the MESTEC video

Watch this video to learn how MESTEC is innovating manufacturing with Oracle Autonomous Database.


Read More Oracle Cloud Customer Stories

MESTEC is one of the thousands of customers on its journey to the cloud. Read about others in Stories from Oracle Cloud: Business Successes

GreenGo Gives Oracle Cloud the Green Light

Oracle Press Releases - Thu, 2019-10-10 09:00

By Guest Author, Oracle—Oct 10, 2019

When GreenGo launched in 2016, the Hungarian startup sought to offer an alternative mode of transport. The first electric car-sharing service in its area, GreenGo offers services to those who regularly travel around Hungary’s bustling capital, Budapest, but are not keen on maintaining a car or paying for parking in the city. Cars can be booked through the mobile application before pickup, and once the user has stopped using the car, the fee for usage is automatically withdrawn from the user’s bank account.

In the years since GreenGo first took to the streets, the company’s growth has been unstinting. The initial fleet of 45 cars rapidly expanded to more than 300 vehicles. This high-speed expansion of the service caused GreenGo to outgrow its existing on-premises solution.

A more scalable, flexible solution was needed, especially considering GreenGo’s ambitious business plans. GreenGo opted for Oracle Cloud Infrastructure to help drive it into the future.

The system has been running in the cloud for over six months, and since then, it has been working steadily. The problems caused by the increased load have disappeared. There are no slowdowns in the application during peak periods, so the mobile application used by customers is served by a stable operating system.

“Car sharing and the ‘appification’ of mobility transform the everyday and corporate use of cars. Oracle’s cloud-based services also play an exciting role in the life of a company handling complex and large data sets, like GreenGo,” said Bálint Michaletzky, GreenGo CEO.

Looking at the road ahead, GreenGo and Oracle are also working to move the front-end layer of the system into the cloud, a transition that GreenGo would like to go live with soon.

Read More Oracle Cloud Customer Stories

GreenGo is one of the thousands of customers on its journey to cloud. Read about others in Stories from Oracle Cloud: Business Successes.

Oracle Opens Retail Innovation and Technology Center in Portugal

Oracle Press Releases - Thu, 2019-10-10 07:00
Delivers innovations in data science and AI to help retailers gain a competitive edge

PORTO, PORTUGAL—Oct 10, 2019

Competition to build retail mindshare and customer loyalty has never been more fierce. Brands must continually evolve their products and deliver a personalized approach to win the hearts and minds of customers. Delivering the technology that helps them do exactly that, Portugal Country Leader Bruno Morais, and Mike Webster, senior vice president and general manager Oracle Retail, today unveiled a new Innovation and Technology Center in Porto. The center will focus initially on delivering breakthrough retail innovations leveraging the latest technologies, including artificial intelligence and machine learning.

“Oracle Portugal is delighted to see Oracle’s Global Retail business unit’s commitment to and investment in people and technologies in Porto. Similarly, we are delighted to contribute to the development of Porto through the creation of this global technology and innovation hub,” said Morais. “As it is true for all Oracle offices, we will actively foster community support and get involved in local causes to enrich the area wherever we can.”

Porto is a vibrant city that delivers a balance of history and innovation. CBRE’s report “EMEA Tech Cities: Opportunities in Technology Hotspots” ranks Porto among the top ten fastest-growing technology clusters in EMEA. The technology team, including many developers, will be dedicated to creating bespoke integrations and enhancements to the Oracle Retail portfolio and re-usable assets that deliver increased value to customers.

In the past year, Oracle has logged roughly 13,000 person-days of development, focused on continuous innovation with Micro-Apps that integrate with the base code of the Oracle Retail portfolio.

“Porto is home to a strong pool of tech talent coveted by major tech employers across Europe. So it made sense to expand our existing, globally renowned team of Oracle Retail solution delivery experts in Porto by creating a complementary center for innovation in the area,” noted Webster. “We believe that innovation listens more than it speaks. Our consulting team works hand in hand with our retail customers to understand their unique challenges and needs and translate those into technology extensions that align with our broader roadmap. The Innovation and Technology Center will be critical in delivering these customer-driven enhancements to our global retail community.”

As an example, a fast-fashion retailer had wondered how a key season was performing and trading, both locally and globally. To address this need, Oracle Retail Consulting developed a simple but effective solution, called Lifecycle Inventory Planning (LIP), that extends and integrates to forecasting and planning solutions. LIP overcomes typical item/store/day parameter management by using a revolutionary method to set system parameters coupled with machine learning. The data science model was able to help the retailer obtain an end-of-life inventory projection based on the forecast, inventory availability, and the replenishment rule setup–improving worldwide planning. This is just one of the many innovations generated by the Porto development team to-date.

The Oracle Innovation and Technology Center is located in the Centro Empresarial Lionesa Business Resort, near the beautiful River Leça and the Leça do Balio Monastery, which is convenient for the airport, main highways, and trains. Oracle will continue to draw talent from nearby Universities specializing in Business and Information Technology from the surrounding areas.

Contact Info
Fabián Gradolph
Oracle Corporate Communications
+34 916267454
fabian.gradolph@oracle.com
About Oracle Retail

Oracle provides retailers with a complete, open, and integrated suite of best-of-breed business applications, cloud services, and hardware that are engineered to work together and empower commerce. Leading fashion, grocery, and specialty retailers use Oracle solutions to anticipate market changes, simplify operations, and inspire authentic brand interactions. For more information, visit our website at www.oracle.com/retail.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Fabián Gradolph

  • +34 916267454

Free Oracle Cloud: 13. Final things to take away

Dimitri Gielis - Thu, 2019-10-10 04:42
This post is the last post of a series of blog posts on the Best and Cheapest Oracle APEX hosting: Free Oracle Cloud.

By now we have seen how you can set up the different components from the Always Free Oracle Cloud.

During Oracle Open World I talked to the people behind the Always Free Oracle Cloud, and they told me that when your account is inactive for a specified amount of time (I forgot if it's 5 days, or a week or more?), your instance is backed up to the Object Storage. You can see it as a VM that is put in stand-by or halted and saved to disk. When you need it again, it can be restored, but that takes time and it might be annoying when you don't know this is what is happening.

If you have a production app running in the Free Oracle Cloud, be sure people use your app at least once inside the window Oracle foresees. Maybe in the future, Oracle could provide a setting where we, as developers, can specify the (in-)activity window.

I'm really impressed by this free offering of Oracle and see many use cases for development environments and small to midsize applications. I believe the limits we get in the free plan are really generous of Oracle and much more than any other cloud provider. 
Here's a quick overview of what it looks like at the time of writing:
  • 2 Autonomous Databases, each with 1 OCPU and 20 GB storage
  • 2 Compute virtual machines, each with 1/8 OCPU and 1 GB memory
  • Storage:  2 Block Volumes, 100 GB total. 10 GB Object Storage. 10 GB Archive Storage.
  • Additional Services:  Load Balancer, 1 instance, 10 Mbps bandwidth. Monitoring, 500 million ingestion data points, 1 billion retrieval data points. Notifications, 1 million delivery options per month, 1,000 emails sent per month. Outbound Data Transfer, 10 TB per month.
So what if you outgrow these limits? It means your applications are successful, so you can be proud of that :) and at that time hopefully, there's enough revenue to upgrade to a Paid Oracle Cloud plan. This can be done super easily... you click the upgrade to the paid plan button and there you go!
Oracle will copy your DB, instance, ... and you go from there.

The way that Oracle is doing the upgrade is really cool, as it means you keep your free instance. So I see myself doing some development on the free instance, then for production upgrade to a paid plan. At that time I still have the development environment. The other free service could be the TEST environment, so you have DEV, TEST both free and PROD paid.


If you didn't check it out by now, go and try out the FREE Oracle Cloud yourself by going to https://www.oracle.com/cloud/free/ :)


Thanks Oracle!
Categories: Development

Creating archived redolog-files in group dba instead of oinstall

Yann Neuhaus - Wed, 2019-10-09 15:07
Since Oracle 11g, files created by the database belong by default to the Linux group oinstall. Changing the default group after creating the central inventory is difficult. In this blog I want to show how locally created archived redo logs can be put in group dba instead of oinstall.

One of my customers had the requirement to provide read-access on archived redo to an application for logmining. To ensure the application can access the archived redo, we created an additional local archive log destination:


LOG_ARCHIVE_DEST_9 = 'LOCATION=/logmining/ARCHDEST/NCEE19C valid_for=(online_logfile,primary_role)'

and provided NFS-access to that directory for the application. To ensure that the application can access the archived redo, the remote user was part of a remote dba-group, which had the same group-id (GID) as the dba-group on the DB-server. Everything worked fine until we migrated to a new server and changed the setup to use oinstall as the default group for Oracle. The application could no longer read the files, because they were created with group oinstall:


oracle@19c:/logmining/ARCHDEST/NCEE19C/ [NCEE19C] ls -ltr
-rw-r-----. 1 oracle oinstall 24403456 Oct 9 21:21 1_32_1017039068.dbf
-rw-r-----. 1 oracle oinstall 64000 Oct 9 21:25 1_33_1017039068.dbf
-rw-r-----. 1 oracle oinstall 29625856 Oct 9 21:27 1_34_1017039068.dbf
oracle@19c:/logmining/ARCHDEST/NCEE19C/ [NCEE19C]

One possibility to work around this would have been to use the id-mapper on Linux, but there’s something better:

With the setgid bit (the group “sticky bit”) on a Linux directory we can ensure that all files created in that directory inherit the group of the directory.

I.e.


oracle@19c:/logmining/ARCHDEST/ [NCEE19C] ls -l
total 0
drwxr-xr-x. 1 oracle dba 114 Oct 9 21:27 NCEE19C
oracle@19c:/logmining/ARCHDEST/ [NCEE19C] chmod g+s NCEE19C
oracle@19c:/logmining/ARCHDEST/ [NCEE19C] ls -l
drwxr-sr-x. 1 oracle dba 114 Oct 9 21:27 NCEE19C

Whenever an archived redo is created in that directory it will be in the dba-group:


SQL> alter system switch logfile;
 
System altered.
 
SQL> exit
 
oracle@19c:/logmining/ARCHDEST/ [NCEE19C] cd NCEE19C/
oracle@19c:/logmining/ARCHDEST/NCEE19C/ [NCEE19C] ls -ltr
-rw-r-----. 1 oracle oinstall 24403456 Oct 9 21:21 1_32_1017039068.dbf
-rw-r-----. 1 oracle oinstall 64000 Oct 9 21:25 1_33_1017039068.dbf
-rw-r-----. 1 oracle oinstall 29625856 Oct 9 21:27 1_34_1017039068.dbf
-rw-r-----. 1 oracle dba 193024 Oct 9 21:50 1_35_1017039068.dbf
oracle@19c:/logmining/ARCHDEST/NCEE19C/ [NCEE19C]

To make the pre-existing files part of the dba-group, use chgrp with the newest archived log as a reference:


oracle@19c:/logmining/ARCHDEST/NCEE19C/ [NCEE19C] chgrp --reference 1_35_1017039068.dbf 1_3[2-4]*.dbf
oracle@19c:/logmining/ARCHDEST/NCEE19C/ [NCEE19C] ls -ltr
-rw-r-----. 1 oracle dba 24403456 Oct 9 21:21 1_32_1017039068.dbf
-rw-r-----. 1 oracle dba 64000 Oct 9 21:25 1_33_1017039068.dbf
-rw-r-----. 1 oracle dba 29625856 Oct 9 21:27 1_34_1017039068.dbf
-rw-r-----. 1 oracle dba 193024 Oct 9 21:50 1_35_1017039068.dbf
oracle@19c:/logmining/ARCHDEST/NCEE19C/ [NCEE19C]
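
If you have more than one archive destination under the same parent directory, the same setup can be scripted (a sketch; adjust the path to your environment):

oracle@19c:/ [NCEE19C] find /logmining/ARCHDEST -mindepth 1 -maxdepth 1 -type d -exec chgrp dba {} \; -exec chmod g+s {} \;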

Hope this helps somebody.

Cet article Creating archived redolog-files in group dba instead of oinstall est apparu en premier sur Blog dbi services.

Cursor_sharing

Jonathan Lewis - Wed, 2019-10-09 10:58

Here’s a funny little detail that I don’t think I’ve noticed before – needing only a simple demo script:


rem
rem     Script:         cursor_sharing_oddity.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2019
rem
rem     Last tested 
rem             12.2.0.1
rem

create table t1 as
select  * 
from    all_objects 
;

set serveroutput off
alter session set cursor_sharing = force;

select  *
from    t1
where
        created between date'2019-06-01' and date'2019-06-30'
;

select * from table(dbms_xplan.display_cursor);

Given that I’ve set cursor_sharing to FORCE (and flushed the shared pool just in case), what SQL do you expect to see if I pull the plan from memory, and what sort of thing do you expect to see in the Predicate Information? Probably some references to system-constructed bind variables like :”SYS_B_0″. This is what I got on 12.2.0.1:


SQL_ID  9qwa9gpg9rmjv, child number 0
-------------------------------------
select * from t1 where  created between date:"SYS_B_0" and
date:"SYS_B_1"

Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |   170 (100)|          |
|*  1 |  TABLE ACCESS FULL| T1   |  1906 |   251K|   170   (8)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(("CREATED">=TO_DATE(' 2019-06-01 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND "CREATED"<=TO_DATE(' 2019-06-30 00:00:00',
              'syyyy-mm-dd hh24:mi:ss')))


Somehow I’ve got system-generated bind variables in the SQL (and v$sql – when I checked), but the original literal values are still present (in a different form) in the predicate information. Then, when I re-ran the query changing 1st June to 15th June I got the same SQL_ID (and generated bind variables) but with child number 1 and suitably modified filter predicates.

Of course, just for completion, if I write the query using the “old-fashioned” to_date() approach I end up with a single statement with lots of system-generated bind variables that are consistent between the SQL and the Predicate Information.

SQL_ID  10sfymvwv00qx, child number 0
-------------------------------------
select * from t1 where  created between to_date(:"SYS_B_0",:"SYS_B_1")
                   and to_date(:"SYS_B_2",:"SYS_B_3")

Plan hash value: 3332582666

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |   189 (100)|          |
|*  1 |  FILTER            |      |       |       |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |  1029 |   135K|   189  (17)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(TO_DATE(:SYS_B_2,:SYS_B_3)>=TO_DATE(:SYS_B_0,:SYS_B_1))
   2 - filter(("CREATED">=TO_DATE(:SYS_B_0,:SYS_B_1) AND
              "CREATED"<=TO_DATE(:SYS_B_2,:SYS_B_3)))

If you are planning to do anything with cursor_sharing, watch out for the side effects of the “ANSI” date and time operators.

Update (3 hours later)

It turns out that I have come across this before – and written about it.

The same behaviour is shown by timestamp literals and interval literals. For details on the two types of literal here’s a link to the 12.2 SQL Language Reference Manual section on literals.

EBS 12.2 APPL_TOP Only Clone - ERRORMSG: Invalid APPS database user credentials.

Senthil Rajendran - Wed, 2019-10-09 10:28
[oracle@testebsop3app01 ~]$ perl /u01/install/APPS/fs1/EBSapps/comn/clone/bin/adcfgclone.pl appltop /u01/install/APPS/fs1/inst/apps/SATURN_testebsop3app01/appl/admin/SATURN_testebsop3app01.xml

An APPL_TOP-only clone might fail with the error below:

START: Instantiate the txkWfClone.sql...
instantiate file:
   source : /u01/install/APPS/fs1/EBSapps/appl/fnd/12.0.0/admin/template/txkWfClone.sql
   dest   : /u01/install/APPS/fs1/inst/apps/SATURN_testebsop3app01/admin/install/txkWfClone.sql
Adding execute permission to : /u01/install/APPS/fs1/inst/apps/SATURN_testebsop3app01/admin/install/txkWfClone.sql

END: Instantiated txkWfClone.sql...

START: Executing /u01/install/APPS/fs1/inst/apps/SATURN_testebsop3app01/admin/install/txkWfClone.sh -nopromptmsg
txkWfClone.sh exited with status 1
ERROR: txkWfClone.sh execution failed, exit code 1
[oracle@testebsop3app01 clone_10091511]$


If you scroll up in the log you may find some more errors:

TXK Script Name: txkChkFormsLaunchMethod.pl
Enter the APPS user password:
*******FATAL ERROR*******
PROGRAM : (/u01/install/APPS/fs1/EBSapps/appl/fnd/12.0.0/patch/115/bin/txkChkFormsLaunchMethod.pl)
TIME    : Wed Oct  9 14:50:36 2019
FUNCTION: main::validateAppsSchemaCredentials [ Level 1 ]
ERRORMSG: Invalid APPS database user credentials.
ERRORCODE = 1 ERRORCODE_END
.end std out.
stty: standard input: Inappropriate ioctl for device
stty: standard input: Inappropriate ioctl for device
*******FATAL ERROR*******
PROGRAM : (/u01/install/APPS/fs1/EBSapps/appl/fnd/12.0.0/patch/115/bin/txkChkFormsLaunchMethod.pl)
TIME    : Wed Oct  9 14:50:36 2019
FUNCTION:  [ Level 1 ]
ERRORMSG: *******FATAL ERROR*******
PROGRAM : (/u01/install/APPS/fs1/EBSapps/appl/fnd/12.0.0/patch/115/bin/txkChkFormsLaunchMethod.pl)
TIME    : Wed Oct  9 14:50:36 2019
FUNCTION: main::validateAppsSchemaCredentials [ Level 1 ]
ERRORMSG: Invalid APPS database user credentials.


The problem is that tnsnames.ora does not have an entry for the alias referenced by TWO_TASK.


From the MT (middle tier), sqlplus apps/apps@TWO_TASK will therefore fail to connect.
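
Once AutoConfig has rewritten tnsnames.ora, a quick sanity check from the middle tier – a sketch, where SATURN stands in for the alias referenced by TWO_TASK in this example:

SQL> connect apps/apps@SATURN
SQL> select name, open_mode from v$database;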

Fix:
run AutoConfig on the DB node
re-run adcfgclone.pl appltop

My SID

Jonathan Lewis - Wed, 2019-10-09 06:03

Here’s a little note that’s been hanging around as a draft for more than eight years according to the OTN (as it was) posting that prompted me to start writing it. At the time there were still plenty of people using Oracle 10g, so the question didn’t seem entirely inappropriate:

On 10g R2, when I open a sqlplus session, how can I know my session SID? I’m not a DBA, so I cannot connect as sysdba and query v$session.

In all fairly recent versions of Oracle, of course, we have the option to use the sys_context() function to get the SID, but this specific option didn’t appear until some time in the 10g timeline – so you might have spent years “knowing” that you could get the audsid through sys_context(‘userenv’,’sessionid’) while there was no equivalent way to get the sid. Now, of course, and even in the timeline of the original posting, the simplest solution to the requirement is to execute:


select sys_context('userenv','sid') from dual;

But there are a number of alternatives – which may occasionally do a better job (and sometimes are just plain silly). It’s also worth noting that even in 19c Oracle still doesn’t have access to v$session.serial# through sys_context() and, anyway, sys_context() behaves like an unpeekable bind variable – which can be a problem.

So here’s the first of several options:

select sid from v$mystat where rownum = 1;

You’ll need SYS to grant you select on v_$mystat to use this one, of course, but v$mystat is a very convenient view, giving you the session activity stats since logon for your own session – so there ought to be some mechanism in place that allows you to see some form of it anyway (ideally including the join to v$statname).
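
For reference, a sketch of that classic join – assuming you have also been granted select on v_$statname:

select
        sn.name, ms.value
from
        v$mystat        ms,
        v$statname      sn
where
        sn.statistic# = ms.statistic#
and     ms.value != 0
;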

One of the oldest ways of getting access to your session ID without having access to any of the dynamic performance views was through the dbms_support package:

variable v1 varchar2(32)
execute :v1 := dbms_support.mysid
execute dbms_output.put_line(:v1)

Again you’ll need SYS to grant you extra privileges, in this case execute on the dbms_support package – worse still, the package is not installed by default. In fact (after installing it) if you call dbms_support.package_version it returns the value: “DBMS_SUPPORT Version 1.0 (17-Aug-1998) – Requires Oracle 7.2 – 8.0.5” – which gives you some idea of how old it is. It used to be useful for the start_trace_in_session() procedure it contains, but that procedure has been superseded by many newer mechanisms. If you enable SQL tracing to see what’s happening under the covers when you call dbms_support.mysid you’ll see that the function actually runs the query I showed above against v$mystat.

Unlike dbms_support, the dbms_session package is installed automatically with the privilege to execute granted to public, and it gives you a function to generate a “unique session id”. The notes in the script $ORACLE_HOME/rdbms/admin/dbmssess.sql that creates the package say that the return value can be up to 24 bytes long, but so far the maximum I’ve seen is 12.


select dbms_session.unique_session_id from dual;
UNIQUE_SESSION_ID
--------------------------
00FF5F980001


select
        to_number(substr(dbms_session.unique_session_id,1,4),'XXXX') sid,
        to_number(substr(dbms_session.unique_session_id,5,4),'XXXX') serial#,
        to_number(substr(dbms_session.unique_session_id,9,4),'XXXX') instance
from
        dual
;

       SID    SERIAL# INSTANCE
---------- ---------- --------
       255      24472        1

As you can see, the unique_session_id can be decoded to produce three useful bits of information, and the nice thing about this call is that it gives you the session serial# at the same time as the SID. It’s possible, of course, that this query is not as efficient as it could be (it may call the function three times), so there’s some scope for writing a query that uses a non-mergeable in-line view to call the function once, then splits the result into three pieces.
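
A sketch of that idea – the no_merge hint should stop the in-line view from being merged, so that the function is called only once:

select
        to_number(substr(usid,1,4),'XXXX')      sid,
        to_number(substr(usid,5,4),'XXXX')      serial#,
        to_number(substr(usid,9,4),'XXXX')      instance
from    (
        select  /*+ no_merge */
                dbms_session.unique_session_id  usid
        from    dual
        )
;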

While we’re on the unique_session_id, the dbms_pipe package also has a “unique identifier” function, unique_session_name():

SQL> select dbms_pipe.unique_session_name from dual;

UNIQUE_SESSION_NAME
------------------------
ORA$PIPE$00FF5F980001

It doesn’t take a lot of effort to spot that the “unique session name” is the “unique session id” of dbms_session prefixed with the text “ORA$PIPE$”. It’s convenient for the dbms_pipe package to be able to generate a unique name so that one session can create a safely named pipe and tell another session about it. Anyone using pipes should take advantage of this function for its original purpose. Unlike dbms_session, you’ll need to be granted the privilege to execute this package; it’s not available to public. Interestingly the script that creates dbms_pipe says that this function could return 30 bytes – since it appears to be 9 bytes prepended to the (“could be 24 bytes”) dbms_session.unique_session_id, you have to wonder whether there’s something more subtle that could happen.

There may be many more mechanisms available as built-ins, but the last one I know of is in the dbms_debug_jdwp package (another package with execute privilege already granted to public and the ability to supply both the sid and serial#):

SQL> select
  2          dbms_debug_jdwp.current_session_id     sid,
  3          dbms_debug_jdwp.current_session_serial serial#
  4  from dual
  5  /

       SID    SERIAL#
---------- ----------
       255      24472

There is a reason why I’ve decided to resurrect this list of ways of getting at a session’s SID, but that’s the topic of another blog note.

Oracle 19c

Yann Neuhaus - Wed, 2019-10-09 05:33
Oracle 19c was released quite a while ago and some customers already run it in production. However, as it is the long-term support release, I thought I would blog about some interesting information and features around 19c to encourage people to migrate to it.

Download Oracle 19c:

https://www.oracle.com/technetwork/database/enterprise-edition/downloads
or
https://edelivery.oracle.com (search e.g. for “Database Enterprise Edition”)

Docker-Images:
https://github.com/oracle/docker-images/tree/master/OracleDatabase

Oracle provides different offerings for 19c:

On-premises:
– Oracle Database Standard Edition 2 (SE2)
– Oracle Database Enterprise Edition (EE)
– Oracle Database Enterprise Edition on Engineered Systems (EE-ES)
– Oracle Database Personal Edition (PE)

Cloud:
– Oracle Database Cloud Service Standard Edition (DBCS SE)
– Oracle Database Cloud Service Enterprise Edition (DBCS EE)
– Oracle Database Cloud Service Enterprise Edition -High Performance (DBCS EE-HP)
– Oracle Database Cloud Service Enterprise Edition -Extreme Performance (DBCS EE-EP)
– Oracle Database Exadata Cloud Service (ExaCS)

REMARK: When this blog was released, the Autonomous DB offerings provided by Oracle did not yet run on 19c (they actually ran on 18c).

Unfortunately, some promising new 19c features are only available on Exadata. If that’s the case (as for Automatic Indexing), you can still test the feature on EE after setting:


SQL> alter system set "_exadata_feature_on"=TRUE scope=spfile;

and restarting the DB.

REMARK: DO THAT ON YOUR OWN TEST SYSTEMS ONLY, AND USE INTERNAL ORACLE PARAMETERS ONLY WHEN ORACLE SUPPORT RECOMMENDS DOING SO.

Anyway, there are lots of new features and I want to share some of the more interesting ones with you, with some examples.

REMARK: You may check https://www.oracle.com/a/tech/docs/database19c-wp.pdf as well

1. Automatic Indexing (only available on EE-ES and ExaCS)

Oracle continually evaluates the executing SQL and the underlying tables to determine which indexes to automatically create and which ones to potentially remove.

Documentation:

You can use the AUTO_INDEX_MODE configuration setting to enable or disable automatic indexing in a database.

The following statement enables automatic indexing in a database and creates any new auto indexes as visible indexes, so that they can be used in SQL statements:


EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','IMPLEMENT');

The following statement enables automatic indexing in a database, but creates any new auto indexes as invisible indexes, so that they cannot be used in SQL statements:


EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','REPORT ONLY');

The following statement disables automatic indexing in a database, so that no new auto indexes are created, and the existing auto indexes are disabled:


EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','OFF');

Show a report of automatic indexing activity:


set serveroutput on size unlimited lines 200 pages 200
declare
  report clob := null;
begin
  report := DBMS_AUTO_INDEX.REPORT_LAST_ACTIVITY();
  dbms_output.put_line(report);
end;
/

In a test I ran some statements repeatedly on a table T1 (which contains 32 times the data of all_objects). The table has no index:


SQL> select * from t1 where object_id=:b1;
SQL> select * from t1 where data_object_id=:b2;

After some time indexes were created automatically:


SQL> select table_name, index_name, auto from ind;

TABLE_NAME                       INDEX_NAME                       AUT
-------------------------------- -------------------------------- ---
T1                               SYS_AI_5mzwj826444wv             YES
T1                               SYS_AI_gs3pbvztmyaqx             YES

2 rows selected.
 
SQL> select dbms_metadata.get_ddl('INDEX','SYS_AI_5mzwj826444wv') from dual;
 
DBMS_METADATA.GET_DDL('INDEX','SYS_AI_5MZWJ826444WV')
------------------------------------------------------------------------------------
CREATE INDEX "CBLEILE"."SYS_AI_5mzwj826444wv" ON "CBLEILE"."T1" ("OBJECT_ID") AUTO
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS"
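
Besides AUTO_INDEX_MODE there are further configuration settings. A sketch with illustrative values, restricting automatic indexing to one schema and keeping unused auto indexes for 90 days:

EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_SCHEMA','CBLEILE',TRUE);
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_RETENTION_FOR_AUTO','90');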

2. Real-Time Statistics (only available on EE-ES and ExaCS)

The database automatically gathers real-time statistics during conventional DML operations. The Note section of dbms_xplan.display_cursor shows when the statistics used to optimize a query were gathered during DML:


SQL> select * from table(dbms_xplan.display_cursor);
 
PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------
SQL_ID 7cd3thpuf7jxm, child number 0
-------------------------------------
 
select * from t2 where object_id=:y
 
Plan hash value: 1513984157
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       | 24048 (100)|          |
|*  1 |  TABLE ACCESS FULL| T2   |   254 | 31242 | 24048   (1)| 00:00:01 |
--------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
1 - filter("OBJECT_ID"=:Y)
 
Note
-----
- dynamic statistics used: statistics for conventional DML
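
The effect is also visible in the dictionary. A sketch, assuming the demo table T2 above (STATS_ON_CONVENTIONAL_DML is the NOTES value documented for 19c):

select table_name, num_rows, notes
from   user_tab_statistics
where  table_name = 'T2';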

3. Quarantine problematic SQL (only available on EE-ES and ExaCS)

Runaway SQL statements terminated by the Resource Manager due to excessive consumption of processor and I/O resources can now be automatically quarantined: instead of letting the SQL run until it reaches a resource plan limit, the SQL is not executed at all.

E.g. create a resource plan which limits the SQL execution time for user CBLEILE to 16 seconds:


begin
  -- Create a pending area
  dbms_resource_manager.create_pending_area();
  ...
  dbms_resource_manager.create_plan_directive(
    plan             => 'LIMIT_RESOURCE',
    group_or_subplan => 'TEST_RUNAWAY_GROUP',
    comment          => 'Terminate SQL statements when they exceed the ' ||
                        'execution time of 16 seconds',
    switch_group     => 'CANCEL_SQL',
    switch_time      => 16,
    switch_estimate  => false);
  ...
  -- Set the initial consumer group of the 'CBLEILE' user to 'TEST_RUNAWAY_GROUP'
  dbms_resource_manager.set_initial_consumer_group('CBLEILE','TEST_RUNAWAY_GROUP');
end;
/

A SQL statement with SQL_ID 12jc0zpmb85tm executed by CBLEILE runs into the 16-second limit:


SQL> select count(*) X
2 from kill_cpu
3 connect by n > prior n
4 start with n = 1
5 ;
from kill_cpu
*
ERROR at line 2:
ORA-00040: active time limit exceeded - call aborted
 
Elapsed: 00:00:19.85

So I quarantine the SQL now:


set serveroutput on size unlimited
DECLARE
  quarantine_config VARCHAR2(80);
BEGIN
  quarantine_config := DBMS_SQLQ.CREATE_QUARANTINE_BY_SQL_ID(
                         SQL_ID => '12jc0zpmb85tm');
  dbms_output.put_line(quarantine_config);
END;
/

SQL_QUARANTINE_1d93x3d6vumvs

PL/SQL procedure successfully completed.

SQL> select NAME, ELAPSED_TIME, ENABLED from dba_sql_quarantine;

NAME                                     ELAPSED_TIME                     ENA
---------------------------------------- -------------------------------- ---
SQL_QUARANTINE_1d93x3d6vumvs             ALWAYS                           YES

In another CBLEILE session:


SQL> select count(*) X
2 from kill_cpu
3 connect by n > prior n
4 start with n = 1
5 ;
from kill_cpu
*
 
ERROR at line 2:
ORA-56955: quarantined plan used
Elapsed: 00:00:00.00
 
SQL> !oerr ora 56955
56955, 00000, "quarantined plan used"
// *Cause: A quarantined plan was used for this statement.
// *Action: Increase the Oracle Database Resource Manager limits or use a new plan.

–> The SQL does not run for 16 seconds but is stopped immediately, because it is under quarantine. You can also define the plan hash value for which a SQL statement should be quarantined, and set quarantine thresholds, e.g. 20 seconds for the elapsed time: as long as the resource plan limit is below those 20 seconds the SQL remains under quarantine; if the resource plan limit is raised above 20 seconds of execution time, the SQL is executed again.
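
A sketch of setting such an elapsed-time threshold on the quarantine created above, using DBMS_SQLQ.ALTER_QUARANTINE (the value, in seconds, is illustrative):

BEGIN
  DBMS_SQLQ.ALTER_QUARANTINE(
    QUARANTINE_NAME => 'SQL_QUARANTINE_1d93x3d6vumvs',
    PARAMETER_NAME  => 'ELAPSED_TIME',
    PARAMETER_VALUE => '20');
END;
/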

4. Active Standby DML Redirect (only available with Active Data Guard)

On Active Data Guard you may allow moderate write activity. These writes are then transparently redirected to the primary database and written there first (to ensure consistency) and then the changes are shipped back to the standby. This approach allows applications to use the standby for moderate write workloads.
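
Enabling the redirect is straightforward. A sketch, assuming an Active Data Guard standby – system-wide via the 19c parameter ADG_REDIRECT_DML, or per session:

-- system-wide, on the standby:
alter system set adg_redirect_dml=true scope=both;
-- or for the current session only:
alter session enable adg_redirect_dml;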

5. Hybrid Partitioned Tables

Create partitioned tables where some partitions are inside and some partitions are outside the database (on filesystem, on a Cloud-Filesystem-service or on a Hadoop Distributed File System (HDFS)). This allows e.g. “cold” partitions to remain accessible, but on cheap storage.

Here is an example with 3 external partitions (data of 2016-2018) and 1 partition in the DB (data of 2019):


!mkdir -p /u01/my_data/sales_data1
!mkdir -p /u01/my_data/sales_data2
!mkdir -p /u01/my_data/sales_data3
!echo "1,1,01-01-2016,1,1,1000,2000" > /u01/my_data/sales_data1/sales2016_data.txt
!echo "2,2,01-01-2017,2,2,2000,4000" > /u01/my_data/sales_data2/sales2017_data.txt
!echo "3,3,01-01-2018,3,3,3000,6000" > /u01/my_data/sales_data3/sales2018_data.txt
 
connect / as sysdba
alter session set container=pdb1;
 
CREATE DIRECTORY sales_data1 AS '/u01/my_data/sales_data1';
GRANT READ,WRITE ON DIRECTORY sales_data1 TO cbleile;
 
CREATE DIRECTORY sales_data2 AS '/u01/my_data/sales_data2';
GRANT READ,WRITE ON DIRECTORY sales_data2 TO cbleile;
 
CREATE DIRECTORY sales_data3 AS '/u01/my_data/sales_data3';
GRANT READ,WRITE ON DIRECTORY sales_data3 TO cbleile;
 
connect cbleile/difficult_password@pdb1
 
CREATE TABLE hybrid_partition_table
( prod_id        NUMBER NOT NULL,
  cust_id        NUMBER NOT NULL,
  time_id        DATE NOT NULL,
  channel_id     NUMBER NOT NULL,
  promo_id       NUMBER NOT NULL,
  quantity_sold  NUMBER(10,2) NOT NULL,
  amount_sold    NUMBER(10,2) NOT NULL
)
EXTERNAL PARTITION ATTRIBUTES (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY sales_data1
  ACCESS PARAMETERS(
    FIELDS TERMINATED BY ','
    (prod_id, cust_id, time_id DATE 'dd-mm-yyyy', channel_id, promo_id, quantity_sold, amount_sold)
  )
  REJECT LIMIT UNLIMITED
)
PARTITION BY RANGE (time_id)
(
  PARTITION sales_2016 VALUES LESS THAN (TO_DATE('01-01-2017','dd-mm-yyyy')) EXTERNAL
    LOCATION ('sales2016_data.txt'),
  PARTITION sales_2017 VALUES LESS THAN (TO_DATE('01-01-2018','dd-mm-yyyy')) EXTERNAL
    DEFAULT DIRECTORY sales_data2 LOCATION ('sales2017_data.txt'),
  PARTITION sales_2018 VALUES LESS THAN (TO_DATE('01-01-2019','dd-mm-yyyy')) EXTERNAL
    DEFAULT DIRECTORY sales_data3 LOCATION ('sales2018_data.txt'),
  PARTITION sales_2019 VALUES LESS THAN (TO_DATE('01-01-2020','dd-mm-yyyy'))
);
 
insert into hybrid_partition_table values (4,4,to_date('01-01-2019','dd-mm-yyyy'),4,4,4000,8000);
 
commit;
 
SQL> select * from hybrid_partition_table where time_id in (to_date('01-01-2017','dd-mm-yyyy'),to_date('01-01-2019','dd-mm-yyyy'));
 
   PROD_ID    CUST_ID TIME_ID   CHANNEL_ID   PROMO_ID QUANTITY_SOLD AMOUNT_SOLD
---------- ---------- --------- ---------- ---------- ------------- -----------
         2          2 01-JAN-17          2          2          2000        4000
         4          4 01-JAN-19          4          4          4000        8000

2 rows selected.
 
SQL> select * from table(dbms_xplan.display_cursor);
 
PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------
SQL_ID c5s33u5kanzb5, child number 0
-------------------------------------
select * from hybrid_partition_table where time_id in
(to_date('01-01-2017','dd-mm-yyyy'),to_date('01-01-2019','dd-mm-yyyy'))
 
Plan hash value: 2612538111
 
-----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name                   | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-----------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |                        |       |       |    83 (100)|          |       |       |
|   1 |  PARTITION RANGE INLIST         |                        |   246 | 21402 |    83   (0)| 00:00:01 |KEY(I) |KEY(I) |
|*  2 |   TABLE ACCESS HYBRID PART FULL | HYBRID_PARTITION_TABLE |   246 | 21402 |    83   (0)| 00:00:01 |KEY(I) |KEY(I) |
|*  3 |    TABLE ACCESS FULL            | HYBRID_PARTITION_TABLE |       |       |            |          |KEY(I) |KEY(I) |
-----------------------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter((SYS_OP_XTNN("HYBRID_PARTITION_TABLE"."AMOUNT_SOLD","HYBRID_PARTITION_TABLE"."QUANTITY_SOLD","HYBRID_PARTITION_TABLE"."PROMO_ID","HYBRID_PARTITION_TABLE"."CHANNEL_ID","HYBRID_PARTITION_TABLE"."TIME_ID","HYBRID_PARTITION_TABLE"."CUST_ID","HYBRID_PARTITION_TABLE"."PROD_ID") AND INTERNAL_FUNCTION("TIME_ID")))
 
3 - filter((SYS_OP_XTNN("HYBRID_PARTITION_TABLE"."AMOUNT_SOLD","HYBRID_PARTITION_TABLE"."QUANTITY_SOLD","HYBRID_PARTITION_TABLE"."PROMO_ID","HYBRID_PARTITION_TABLE"."CHANNEL_ID","HYBRID_PARTITION_TABLE"."TIME_ID","HYBRID_PARTITION_TABLE"."CUST_ID","HYBRID_PARTITION_TABLE"."PROD_ID") AND INTERNAL_FUNCTION("TIME_ID")))

6. Memoptimized Rowstore

Enables fast data inserts into Oracle Database 19c from applications, such as Internet of Things (IoT) applications, which ingest small, high-volume transactions with a minimal amount of transactional overhead.
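
A minimal fast-ingest sketch (table and column names are illustrative; MEMOPTIMIZE FOR WRITE and the MEMOPTIMIZE_WRITE hint are the documented 19c syntax):

-- illustrative IoT-style table using the memoptimized rowstore for writes
create table iot_readings (
  sensor_id number,
  reading   number,
  ts        timestamp
) memoptimize for write;

-- insert through the fast-ingest path
insert /*+ MEMOPTIMIZE_WRITE */ into iot_readings values (1, 23.5, systimestamp);
commit;

Note that fast-ingest writes are buffered and flushed asynchronously, so freshly inserted rows may not be visible to queries immediately.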

7. 3 PDBs per Multitenant-DB without having to pay for the Multitenant option

Beginning with 19c you are allowed to create 3 PDBs in a container DB without requiring the Multitenant-option license from Oracle. As a single- or multi-tenant (container) DB becomes a must with Oracle 20, it is a good idea to start using the container-DB architecture with 19c already.
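
Creating an additional PDB is then a one-liner. A sketch with illustrative names (depending on your storage setup a FILE_NAME_CONVERT clause may be needed):

create pluggable database pdb2
  admin user pdbadmin identified by "difficult_password";
alter pluggable database pdb2 open;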

Please let me know your experience with Oracle 19c.

The post Oracle 19c appeared first on Blog dbi services.

NetSuite Announces New Industry Cloud Solutions to Help Mexican Organizations Grow

Oracle Press Releases - Tue, 2019-10-08 10:35
Press Release
NetSuite Announces New Industry Cloud Solutions to Help Mexican Organizations Grow
New Industry Cloud Solutions Help Organizations in Mexico Unlock Their Potential

SuiteConnect, MEXICO CITY, Mexico—Oct 8, 2019

Oracle NetSuite today announced a series of new innovations to help organizations in Mexico unlock growth and take their business to the next level. The latest innovations within the NetSuite platform include new SuiteSuccess industry cloud solutions and financial management capabilities that are designed to help organizations in Mexico drive growth, reduce costs and quickly and easily achieve the benefits of cloud computing.

“There is no such thing as a ‘one-size-fits-all’ approach to a growing business as there are as many ways to execute as there are businesses in the world,” said Gustavo Moussalli, LAD senior director, Oracle NetSuite. “Even though there is no one journey, we have learned from working with more than 18,000 organizations across the world that there are patterns and we have built those insights into SuiteSuccess. The new SuiteSuccess solutions provide a blueprint for growth that organizations operating in Mexico can take advantage of to unlock their potential.” 

SuiteSuccess is a pre-configured industry cloud solution that helps organizations achieve the benefits of the cloud in as little as 45 days. With the new SuiteSuccess solutions and add-ons, organizations in Mexico can take advantage of industry-leading practices that combine deep domain knowledge with pre-built workflows, KPIs and dashboards to achieve the visibility, control and agility needed to grow their business and unlock their potential. New SuiteSuccess solutions for Mexico include:

  • SuiteSuccess Starter: SuiteSuccess Starter helps small and rapidly growing companies manage all aspects of their business in a single system. With SuiteSuccess Starter, customers in Mexico can get up and running quickly with pre-configured KPIs, workflows, reminders, reports and value-driven dashboards for all key roles within a business from day one.
  • SuiteSuccess Financials First: Financials First includes pre-defined roles, KPIs, dashboards and workflows for finance departments. It will help organizations in Mexico automate financial processes, speed month-end close, improve reporting and achieve real-time visibility into the financial status of the organization. Financials First Standard and Premium editions are now available for customers in Mexico.
  • SuiteSuccess add-ons: New add-on modules including Inventory Management, Financial Management, Project Management, Work Orders & Assembly, Demand Planning, Fixed Asset Management and Revenue Management allow customers to maximize the value of NetSuite quickly and efficiently.

SuiteSuccess customers across all markets also have immediate access to NetSuite global capabilities to process multi-currency transactions and to take advantage of international growth opportunities. To learn more about SuiteSuccess, please visit www.netsuite.com/suitesuccess.

Contact Info
D'Nara Cush
Oracle NetSuite
(650) 506-8692
dnara.cush@oracle.com
About Oracle NetSuite

For more than 20 years, Oracle NetSuite has helped organizations grow, scale and adapt to change. NetSuite provides a suite of cloud-based applications, which includes financials / Enterprise Resource Planning (ERP), HR, professional services automation and omnichannel commerce, used by more than 18,000 customers in 203 countries and dependent territories.

For more information, please visit http://www.netsuite.com.

Follow NetSuite’s Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


Ecommerce Leader Helps Mexican Brands Tap into $12B+ Market Opportunity

Oracle Press Releases - Tue, 2019-10-08 10:33
Press Release
Ecommerce Leader Helps Mexican Brands Tap into $12B+ Market Opportunity
Ibushak Drives Efficiencies and Enhances the Customer Experience with NetSuite

SuiteConnect, MEXICO CITY, Mexico—Oct 8, 2019

Ibushak, a Mexican ecommerce provider, is using Oracle NetSuite to help Mexican brands unlock new revenue opportunities and meet changing customer expectations. With NetSuite, Ibushak has streamlined financials, optimized inventory management and gained a comprehensive view into its core business processes so that it can personalize the customer experience and drive brand loyalty.
 
Founded in 2004 by brothers Mauricio and Salomon Bouzali, Ibushak is focused on making it easier for Mexican brands to conduct business online. The company currently works with more than 300 brands in Mexico and is tapping into a thriving industry, which is expected to reach $12.77 billion by 2023. To meet increasing demand that has seen its business grow by 120 percent in the last 4 years, Ibushak needed a unified business platform that could automate processes and scale to support its future growth. After careful evaluation, Ibushak selected NetSuite over Microsoft Dynamics and SAP.
 
“Our father had a successful 30-year career in the Mexican retail industry and taught us a lot, but at the time the industry was just scratching the surface,” said Mauricio Bouzali, co-founder and CEO, Ibushak. “In a single platform, NetSuite gives us the visibility, control and agility we need to streamline operations, drive customer satisfaction and enhance decision making. NetSuite will continue to support us in our journey as we grow well beyond the 300 brands and 62,000 products we have today.”
 
NetSuite has enabled Ibushak to take advantage of an integrated platform to automate and centralize key business functions including financials, order management and inventory management systems across its eight different business units. With a unified view into its core business functions, Ibushak has been able to provide its customers with the insights required to increase satisfaction and drive brand loyalty. By eliminating manual data entry with online orders, NetSuite has enabled Ibushak to more than double the number of orders shipped without decreasing service level agreements with its customers. In addition, Ibushak has enhanced decision making through increased visibility into financials.  
 
“Technology has transformed the retail industry and consumers now expect things to happen seamlessly on their own terms, whenever and however they want,” said Gustavo Moussalli, LAD senior director, Oracle NetSuite. “Ibushak is helping brands operating in Mexico meet these changing customer expectations and unlock significant new revenue streams. With NetSuite, the Ibushak team will be able to focus on helping brands reach and engage customers in new ways as their business continues to grow.”
Contact Info
D'Nara Cush
Oracle NetSuite
(650) 506-8692
dnara.cush@oracle.com
About Ibushak

Ibushak delivers the expertise and infrastructure to help companies of any size sell their products and services on the largest ecommerce platforms in Mexico. It also has its own online marketplace, ibushak.com. The company provides an end-to-end solution for ecommerce—handling everything from digital marketing to warehousing and fulfillment.

About Oracle NetSuite

For more than 20 years, Oracle NetSuite has helped organizations grow, scale and adapt to change. NetSuite provides a suite of cloud-based applications, which includes financials / Enterprise Resource Planning (ERP), HR, professional services automation and omnichannel commerce, used by more than 18,000 customers in 203 countries and dependent territories.

For more information, please visit http://www.netsuite.com.

Follow NetSuite’s Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


Medical Service Provider Restores Vision in Communities Across Mexico

Oracle Press Releases - Tue, 2019-10-08 10:31
Press Release
Medical Service Provider Restores Vision in Communities Across Mexico
salauno Works to Give All Mexicans the Chance to See Well and Improve Their Lives with the Help of NetSuite

SuiteConnect, MEXICO CITY, Mexico—Oct 8, 2019

salauno, a nonprofit focused on delivering visual health services, is using Oracle NetSuite to support its mission of providing high-quality, accessible eye care services to all of Mexico. With NetSuite, salauno has been able to focus time and resources on its mission by increasing supply chain efficiency and gaining a single view of the financial data of its entire organization.

Founded in 2011, the company has performed more than 35,000 surgeries that have restored vision and provided low-cost, quality ophthalmologic care, including treatments for conditions like cataracts and diabetic retinopathy, to more than 300,000 patients. To ensure that its team can stay focused on its mission and reach more communities, salauno needed a unified business platform that could optimize its supply chain and financial management operations.

"In Mexico, only a third of the people who require visual correction seek treatment and this includes more than nine million who suffer from cataracts. We wanted to change that situation," said Javier Okhuysen, co-founder and director of salauno. "By providing valuable real-time information about our business, NetSuite has enabled us to streamline operations and drastically reduce costs. Which, in turn, allows us to help more people and offer new essential services to the community."

With NetSuite, salauno has been able to take advantage of a single platform to run its core business processes, including accounting, inventory management and sales. This has helped salauno increase the efficiencies across its business and offer its services at a cost up to 60 percent less. NetSuite will also help support salauno’s rapid expansion plans to reach 60 new clinics and four new surgical centers throughout Mexico by 2021.

"It’s amazing to see how many people in Mexico have fallen victim to unnecessary blindness," said Gustavo Moussalli, LAD senior director, Oracle NetSuite. "We are proud to support an organization like salauno that is helping restore the visual health of people across the country."

NetSuite supports more than 1,500 nonprofit organizations and social enterprises globally. salauno is currently benefiting from SuiteDonation. Part of the NetSuite Social Impact program, SuiteDonation helps nonprofits benefit from the power of technology by providing discounted licensing.

salauno’s mission will be showcased at SuiteConnect Mexico City. Attendees will have the chance to meet members of the salauno team and hear firsthand how the organization is changing visual health by providing high quality, low cost treatments. Attendees will also be able to learn about their eyesight with interactive colored lenses that reveal fun facts and images in the process.

Contact Info
D'Nara Cush
Oracle NetSuite
(650) 506-8692
dnara.cush@oracle.com
About salauno

salauno is a network of ophthalmological clinics empowered by leading technology in their sector. The company combines a social approach with business discipline to grow rapidly and efficiently to give the whole country the chance to see well.

For more information visit: www.salauno.com.mx or its Facebook page.

About Oracle NetSuite

For more than 20 years, Oracle NetSuite has helped organizations grow, scale and adapt to change. NetSuite provides a suite of cloud-based applications, which includes financials / Enterprise Resource Planning (ERP), HR, professional services automation and omnichannel commerce, used by more than 18,000 customers in 203 countries and dependent territories.

For more information, please visit http://www.netsuite.com.

Follow NetSuite’s Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

