Feed aggregator

ANSI flashback

Jonathan Lewis - Mon, 2020-01-27 03:45

I am seeing “traditional” Oracle SQL syntax being replaced by “ANSI”-style far more frequently than I used to – so I thought I’d flag up another reminder that you shouldn’t be too surprised if you see odd little glitches showing up in ANSI style that don’t show up when you translate to traditional syntax. If your SQL throws an unexpected error (and if it’s only a minor effort to modify the code for testing purposes) it might be a good idea to see if the problem goes away when you switch styles. Today’s little glitch is one that showed up on the Oracle-l listserver 7 years ago running 11.2.0.3, and the anomaly still exists in 19c.

As so often happens it’s a problem that appears in one of the less commonly used Oracle features – in this case flashback queries. We’ll start by creating a couple of tables, then pausing for thought (note: this code is little more than a cosmetic rewrite of the original posting on Oracle-l):


rem
rem     Script:         ansi_flashback_bug.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2020
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem

create table t1 
as
select  * 
from    all_objects
where   rownum <= 10000 -- > comment to avoid wordpress format issue
;

create table t2
as
select  *
from    t1
where   rownum <= 10 -- > comment to avoid wordpress format issue
;
 
prompt  =======================
prompt  Sleeping for 10 seconds
prompt  =======================

execute dbms_lock.sleep(10)

column current_scn new_value m_scn format a15

select  to_char(current_scn,'99999999999999') current_scn 
from    v$database
/

 
select 
        v1.object_name
from 
        t1 as of scn &m_scn v1
join 
        t2 as of scn &m_scn v2
on 
        v2.object_id = v1.object_id
/

I’ve created a couple of tables then introduced a 10 second sleep before checking the database SCN. The sleep is there because I want to be able to query the tables “as of SCN” and if I don’t pause for a little while (typically about 5 seconds) the code will probably raise Oracle error ORA-01466: unable to read data – table definition has changed.

The query I want to use references both tables as of the same SCN, using “ANSI” syntax to do the join. The query behaves perfectly reasonably when run from SQL*Plus; the problem starts to appear when I try to embed the query as a cursor in a PL/SQL procedure. First I’ll copy the SQL exactly as it is (with substitution variable) into a procedure declaration. The variable will be replaced in both places by an actual value before the procedure is created, as the subsequent check of user_source will show:


create or replace procedure p1( myscn in varchar2 ) as

        cursor c1 is 
                select  v1.object_name
                from 
                        t1 as of scn &m_scn v1
                join 
                        t2 as of scn &m_scn v2
                on 
                        v2.object_id = v1.object_id
        ;

        l_row c1%rowtype;

begin
        open c1;
        fetch c1 into l_row;
        dbms_output.put_line(l_row.object_name);
        close c1;
end;
/
 
select  text 
from    user_source
where   type = 'PROCEDURE'
and     name = 'P1'
order by 
        line
/

execute p1 ('0')

The procedure compiles successfully and the query against user_source shows it stored as follows (note, particularly, an actual value has been stored for the SCN):


procedure p1( myscn in varchar2 ) as

        cursor c1 is
                select  v1.object_name
                from
                        t1 as of scn  12670394063090 v1
                join
                        t2 as of scn  12670394063090 v2
                on
                        v2.object_id = v1.object_id
        ;

        l_row c1%rowtype;

begin
        open c1;
        fetch c1 into l_row;
        dbms_output.put_line(l_row.object_name);
        close c1;
end;

Next we recreate the procedure but replace the substitution variable with the name of the incoming formal parameter:


create or replace procedure p1( myscn in varchar2 ) as

        cursor c1 is
                select  v1.object_name
                from 
                        t1 as of scn myscn v1
                join 
                        t2 as of scn myscn v2
                on 
                        v2.object_id = v1.object_id
        ;

        l_row c1%rowtype;

begin
        open c1;
        fetch c1 into l_row;
        dbms_output.put_line(l_row.object_name);
        close c1;
end;
/
 
show errors

You’ll notice that instead of doing a test execution of the procedure I’ve called “show errors”. This is because the procedure won’t compile and reports “Warning: Procedure created with compilation errors” with the following output from the call to show errors:


Errors for PROCEDURE P1:

LINE/COL ERROR
-------- -----------------------------------------------------------------
3/9      PLS-00341: declaration of cursor 'C1' is incomplete or malformed
4/3      PL/SQL: ORA-00984: column not allowed here
4/3      PL/SQL: SQL Statement ignored
13/8     PL/SQL: Item ignored
17/2     PL/SQL: SQL Statement ignored
17/16    PLS-00320: the declaration of the type of this expression is
         incomplete or malformed

18/2     PL/SQL: Statement ignored
18/23    PLS-00320: the declaration of the type of this expression is
         incomplete or malformed

So we check to see if the same warning and list of errors appear if I switch to “traditional” Oracle syntax:


create or replace procedure p1( myscn in varchar2 ) as

        cursor c1 is
                select  v1.object_name
                from 
                        t1 as of scn myscn v1,
                        t2 as of scn myscn v2
                where 
                        v2.object_id = v1.object_id
        ;

        l_row c1%rowtype;

begin
        open c1;
        fetch c1 into l_row;
        dbms_output.put_line(l_row.object_name);
        close c1;
end;
/
 
execute p1 (&m_scn)

The answer is no. This version of the query is accepted by the PL/SQL compiler, and the call to execute it with the SCN supplied in the substitution variable produces the expected results.

Is there anything we can do to stick with ANSI style syntax? Almost invariably the answer will be yes. Here’s a little workaround in this case:

create or replace procedure p1( myscn in varchar2 ) as

        cursor c1 is
                select  v1.object_name
                from 
                        (select * from t1 as of scn myscn) v1
                join 
                        (select * from t2 as of scn myscn) v2
                on 
                        v2.object_id = v1.object_id
        ;

        l_row c1%rowtype;

begin
        open c1;
        fetch c1 into l_row;
        dbms_output.put_line(l_row.object_name);
        close c1;
end;
/
 
execute p1 (&m_scn)

We simply embed each “as of scn” clause inside an inline view and then join the views. If you enable the CBO (10053) trace before executing this version of the procedure you’ll find that the final “unparsed” SQL produced by the optimizer has, of course, been translated back into the traditional syntax.
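If you want to check that unparsed SQL yourself, one way of enabling the CBO trace around the procedure call looks like this (a sketch; the trace file appears in the session’s diagnostic trace directory):

```sql
alter session set events '10053 trace name context forever, level 1';

execute p1 (&m_scn)

alter session set events '10053 trace name context off';
```

Search the resulting trace file for the string “UNPARSED QUERY” to see the statement the optimizer finally worked with.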

Warning: it’s fairly likely that this workaround will do what you want, but it’s possible that in a few cases it may result in a different execution plan from the one you were expecting (or would get from traditional syntax).

 

Blowing Out Barriers

Floyd Teter - Sat, 2020-01-25 12:31
I was chatting with a new friend last weekend.  Turns out we're both in the enterprise apps business: I work for Oracle on HCM Cloud apps while he works at a small start-up building and selling a warehouse management system.

While we were chatting about the biz, we got into SaaS and Cloud.  My friend made a statement I really want to share:  "Cloud blew out all our barriers to entering this marketplace; we wouldn't be in business without Cloud."  When I pressed him on this, he made the following points:

  • Our entire development platform is cloud-based; no on-prem servers whatsoever
  • We track our bugs and manage our development cycles with JIRA
  • All our accounting, HR, and sales apps are SaaS with providers focused on SMB customers
  • We don't even have internal email - we have a Slack channel
  • Our product is SaaS-based and runs on two IaaS partner platforms.  Our SMB customers would not be able to afford the infrastructure required to run on prem.
  • Most of our user documentation is visual and chart-based.  Our off-shore team uses LucidChart to build that stuff. 
After hearing all this, I expressed that my mental image of his company was a small group of people in a local office sitting around a conference table with laptops and cell phones.  He corrected me:  5 people in North America, developers in Vietnam and documentation team in India - no brick and mortar offices anywhere.

I walked away from the conversation thinking: "Isn't this cool?  These guys have leveraged the cloud to build a product startup about to break into 9-digit revenue (in U.S. dollars) with zero infrastructure.  These guys get it."

This has always been the promise of the Cloud: do business without large capital investments.  It was just neat to experience yet another example of this promise fulfilled IRL.

Tell me again why you're running on-prem enterprise apps?

Oracle VM Server: OVMAPI_2005E and OVMEVT_003503D_000

Dietrich Schroff - Sat, 2020-01-25 07:16
In my OVM environment I was trying to delete a server from a pool and it did not work. The error I got was:
OVMAPI_2005E "[ServerDbImpl] xx:yy:zz:... (oraclevm)" contains a component "xx:yy:zz:..." in error. Error event: server.cluster.failure., Summary: Server Cluster Failure, Description: Failed to leave cluster OVMEVT_003503D_000 Server reboot is required.. I rebooted the server: still the same error.
I rebooted the OVM Manager: still the same error.

Then I realized that there was a red cross at my server:

The solution was: select Events from the drop-down for this server:

There I had to “Acknowledge” the events, and after that I was able to remove the server without any further error…

Solved: Restore Of Standby Control File On Standby DB In ExaCS Failing With ERROR (ORA-600, RMAN-03002)

Online Apps DBA - Sat, 2020-01-25 06:59

Fixed: Restore Of Standby Control File On Standby DB In ExaCS Failing With Error (ORA-600, RMAN-03002) Are you looking for a fix to troubleshoot the “PSDRPC returns significant error 600 & RMAN-03002: failure of restore command” error while performing a restore of Standby Control File using RMAN on the Standby Database on Exadata Cloud Service […]

The post Solved: Restore Of Standby Control File On Standby DB In ExaCS Failing With ERROR (ORA-600, RMAN-03002) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

WITH Subquery

Jonathan Lewis - Thu, 2020-01-23 14:37

Here’s another anomaly that appears when you mix and match Oracle features. In this case it’s “With” subqueries (common table expressions / CTEs) and Active Dataguard (ADG) Standby databases. The problem appeared on the Oracle-l listserver and luckily for the OP another member of the list had seen it before and could point to a relevant MOS document id which explained the issue and supplied a workaround.

The OP had their standby database opened read-only for reporting and found the following oddity in the extended SQL trace file for one of their reports:


WAIT #140196872952648: nam='db file scattered read' ela= 1588 file#=4097 block#=579715946 blocks=128 obj#=-39778567 tim=17263910670242
WAIT #140196872952648: nam='db file scattered read' ela= 1495 file#=4097 block#=579715947 blocks=128 obj#=-39778567 tim=17263910672065
WAIT #140196872952648: nam='db file scattered read' ela= 1671 file#=4097 block#=579715948 blocks=128 obj#=-39778567 tim=17263910674042
WAIT #140196872952648: nam='db file scattered read' ela= 1094 file#=4097 block#=579715949 blocks=128 obj#=-39778567 tim=17263910675443

Before pointing out the oddity (if you haven’t spotted it already) I’ll just explain a few of the numbers that are a little unusual.

  • File# = 4097: the user has parameter db_files = 4096, so this is the first Temp file.
  • Block# = 579,715,946: the database is 120TB, and the temporary tablespace is a “bigfile” tablespace so it’s okay for the file to hold more than 579M blocks.
  • Obj# < 0: negative object numbers are a characteristic of materialized CTEs: if you look at the execution plan a materialized CTE will be reported as a table with a name like SYS_TEMP_FDA106F9_E259E68. If you take the first hexadecimal number and treat it as a 32-bit signed integer you get the value that would be reported as the obj# in the trace file. (Converting to decimal and subtracting power(2,32) is one way of doing the arithmetic.)
  • tim= nnnnnnnn:  this is the timestamp (usually in microseconds), and we can see intervals of roughly 1,400 to 2,000 microseconds between these lines.
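The arithmetic for the negative object number can be done in a single query. Taking the first hexadecimal component of the temp-table name quoted above as an example, this reproduces the obj# seen in the trace lines:

```sql
select  to_number('FDA106F9', 'xxxxxxxx') - power(2, 32) as obj#
from    dual;

--       OBJ#
-- ----------
--  -39778567
```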

So here’s the oddity: in this set of 4 consecutive waits we’re waiting for multiblock reads of 128 blocks – but each read starts one block after the previous read. It’s as if Oracle is reading 128 blocks and forgetting everything after the first one. And the timestamps are significant because they tell us that this isn’t a case of Oracle spending so much time between reads that the other blocks fall off the end of the buffer cache before the query reaches them.

I think I’ve seen a pattern like this once before but it would have been quite a long time ago and I can’t find any notes I might have made about it (and it turns out that my previous experience was not relevant to this case). Fortunately another member of Oracle-l had also seen the pattern and supplied the solution through a reference to a MOS document that led to: Doc ID 2251339.1 With Subquery Factorization Temp Table Does Not Cache in Standby in 12.1.0.2.

It’s not a bug – Oracle is supposed to do this if you manage to materialize a CTE in a read-only standby database. I don’t understand exactly why there’s a problem, but thanks to some feature of how consistent reads operate, and how block SCNs are generated when you populate the blocks of the global temporary table (GTT) that is your materialized CTE, it’s possible for Oracle to produce the wrong results if it re-visits blocks that have been read into the cache from the GTT. So when Oracle does a multiblock read during a tablescan of the GTT it can use the first block it has read (presumably because it’s immediately pinned), but can’t use the remaining 127 – and so you get the odd pattern of consecutive blocks appearing at the start of consecutive multiblock reads.

This raises a couple of interesting (and nasty) questions.

  • First – does every 128-block read go to the middle of the buffer cache, pushing another 128 blocks out of the buffer cache, or does Oracle automatically read the blocks to the “cold” end of the LRU, minimising the impact on the rest of the cache? We hope it’s the latter.
  • Second – if I use a small fetch size while running my query might I find that I have to re-read the same block (with its 127 neighbours) many times, because Oracle releases any pinned blocks at the end of each fetch and has to re-acquire the blocks on the next fetch?

If anyone wants to test the second question by running a query from SQL*Plus with extended trace enabled the following simple query should answer the question:

alter session set events '10046 trace name context forever, level 8';
set arraysize 2

with v1 as (select /*+ materialize */ * from all_objects)
select object_name from v1;

Workarounds

There is a workaround to the issue – you can add the hint /*+ inline */ to the query to ensure that the CTE is not materialized. There is a bit of a catch to this, though (on top of the fact that you might then need to have two slightly different versions of the code if you want to run the query on production and standby) – if Oracle places the subquery text inline the optimizer may manage to merge it into the rest of the query and come up with a bad execution plan. Again you can probably work around this threat by extending the hint to read: /*+ inline no_merge */. Even then the optimizer could decide it has better statistics about the “real” table columns that it might have lost when it materialized the subquery, so it could still produce a different execution plan from the materialized plan.
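A sketch of what the hinted version of my little test query would look like (hint placement as described above; the exact plan you get will still depend on version and statistics):

```sql
with v1 as (
        select  /*+ inline no_merge */
                *
        from    all_objects
)
select  object_name
from    v1;
```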

As an alternative (and somewhat more brutal) workaround you could set the hidden parameter “_with_subquery” to inline either at the session or system level, or in the startup parameter file.
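As a sketch (remember that hidden/underscore parameters should only be set on the advice of Oracle Support, and should be tested carefully first):

```sql
alter session set "_with_subquery" = inline;

-- or, system-wide (requires suitable privileges):
alter system set "_with_subquery" = inline scope = both;
```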

 

Time-Series Prediction Beyond Test Data

Andrejus Baranovski - Thu, 2020-01-23 12:13
I was working on an assignment to build a large-scale time-series prediction solution. I ended up using a combination of approaches in a single solution — Prophet, ARIMA and an LSTM neural network (running on top of Keras/TensorFlow). With Prophet (Serving Prophet Model with Flask — Predicting Future) and ARIMA it is straightforward to calculate a prediction for future dates; both provide a function to return a prediction for a given future horizon. The same is not obvious with LSTM: if you are new to it, this will require a significant amount of time to research how to forecast true future dates (most of the examples only show how to predict against a test dataset).

I found one good example, though, which I followed and which helped me solve my task — A Quick Example of Time-Series Prediction Using Long Short-Term Memory (LSTM) Networks. In this post, I will show how to predict monthly shampoo sales data, mainly based on the code from the above example.

Read more - Time-Series Prediction Beyond Test Data.


PostgreSQL Meetup at CERN, Geneva, January 17, 2020

Yann Neuhaus - Thu, 2020-01-23 10:37
Registration

Last Friday (17/01/20) I had the opportunity to go to CERN in Geneva to attend a PostgreSQL meetup.
I’m really happy to write a blog about it for a lot of reasons. First of all, it was about PostgreSQL (essential these days), then because of contributors such as Oleg Bartunov, and finally because of the organizers: Franck Pachot (old & soon new colleague at dbi services) & Laetitia Avrot, whom I met at the SOUG Day in Lausanne.


After the registration process, where a lot of people were waiting to get their badge, I followed the instructions given to reach Building 503. I was rapidly lost in this labyrinth and had to ask several times to find the conference room.

Unfortunately I was late and missed the “Welcome & opening speech” at 02pm – worse, also the first minutes of Gülçin Yıldırım’s session.
I discreetly took a seat close to the door in the conference room. I was immediately impressed, as more than 100 people were already in.
There was clearly a huge interest in this event.

So the first talk was about the evolution of Fault Tolerance in PostgreSQL.
Gülçin mainly focused on the evolution of fault tolerance capabilities in PostgreSQL throughout its versions.
Robustness, replication methods, failover & switchover were the main topics.

The second talk was from Pavlo Golub, a PostgreSQL expert and developer at Cybertec (software & database partner in Austria).
After some words about the existing scheduling tools in the Postgres community, he introduced a new open source tool from Cybertec, called pg_timetable.
He went very deeply into the details & discussed some advanced topics like transaction support and cross-platform tasks.
At the end we had a successful demo.

Break

After this session, we had a break as it was 04pm. A superb king’s cake party (Galette des Rois), accompanied by excellent ciders, was offered.
We could even choose between the one from Normandy and the one from Brittany.
I tasted both and appreciated them.
By the way, I found the bean twice. I have to admit that this was a really good idea, as the atmosphere was relaxed and friendly.

At around 04:45pm, Anastasia Lubennikova started her session, a comparison of various backup/recovery solutions for PostgreSQL.
The goal was to help us choose the most appropriate tool. She covered several advanced topics in that area, such as incremental backups, archive management, backup validation and retention policies.

The next session, from Vik Fearing, was a short one – Advent of Code Using PostgreSQL.
During his talk, Vik showed some techniques to solve general-purpose programming challenges using just a single query with lateral joins.

Romuald Thion, a PostgreSQL teacher at the university of Lyon (France), followed: his talk was about some lessons learned since he progressively introduced PostgreSQL into his courses in 2018.
Since then, 400 students enrolled from second to fifth year have used PostgreSQL in databases, web, security or system administration courses.

Finally, and like an apotheosis for the last session, Oleg Bartunov, a PostgreSQL major contributor & developer, came on stage.
“All You Need Is Postgres” was the title of his session. This could soon be a hymn.
Oleg talked a lot about his long Postgres professional experience since Postgres95, about all the projects he participated in, such as GiST, GIN and SP-GiST, and also full text search, NoSQL features (hstore and jsonb). He also talked about astronomy and supernovae.
This was really fascinating and we could have listened to him for hours.

Conclusion

A drinks & food party concluded this meetup. It was a nice opportunity to meet all the speakers, and to chat with them as well as with some customers and old colleagues (Pierre Boizot).
Learn and share – this is part of the dbi services spirit and matches our values!
One last thing: I would like to address special thanks to Franck Pachot & Laetitia Avrot for the good organization and, above all, the cool atmosphere during this meetup.
Hoping for the next one…

The article PostgreSQL Meetup at CERN, Geneva, January 17, 2020 first appeared on Blog dbi services.

New Feature: Restoring Deleted Compartments In Oracle Cloud (OCI)

Online Apps DBA - Thu, 2020-01-23 07:19

New feature: Restoring Deleted Compartments In Oracle Cloud (OCI) Previously, once the Compartments were deleted in OCI, they couldn’t be restored. With the recent update, now you can restore the deleted Compartments. Check out K21Academy’s post at https://k21academy.com/oci53 that covers: ▪Steps For Restoring The Deleted Compartments ▪What Changes Take Place In The Compartments After Restoring? […]

The post New Feature: Restoring Deleted Compartments In Oracle Cloud (OCI) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

[Troubleshoot Socket:000445] Connection Rejected, Filter Blocked Socket While Accessing EBS R12 WebLogic Console On OCI

Online Apps DBA - Thu, 2020-01-23 03:06

Fixed: WebLogic Console For E-Business (EBS) R12 on Oracle Cloud (OCI) Socket Blocked Did you encounter the “The Server is not able to service this request: blocked Socket, weblogic.security.net.FilterException:” error while accessing the Weblogic console from the Oracle EBS environment created from the Oracle EBS Cloud Manager? If YES, then the blog post at https://k21academy.com/ebscloud36 […]

The post [Troubleshoot Socket:000445] Connection Rejected, Filter Blocked Socket While Accessing EBS R12 WebLogic Console On OCI appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Adding PostgreSQL extensions without being super user?

Yann Neuhaus - Wed, 2020-01-22 09:33

Usually, when you need to install a PostgreSQL extension you do this as superuser (or at least I do it like this). The downside of that is, of course, that a superuser must be available whenever a new extension is required, or that all the extensions are installed by default (e.g. in template1). Recently the question popped up internally whether you can install extensions without being superuser. The answer is: yes and no. It depends on the objects that get created by the extension and it depends on the permissions of the user that wants to install the extension. As always, let’s do some demos, as this is the easiest way to understand how things work.

Currently there are no extensions installed in my database except for the standard PL/pgSQL extension:

postgres=# \dx
                 List of installed extensions
  Name   | Version |   Schema   |         Description          
---------+---------+------------+------------------------------
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language
(1 row)

For the purpose of this post let’s create a new user without any special permissions other than login:

postgres=# create user a with login password 'a';
CREATE ROLE
postgres=# \du a
           List of roles
 Role name | Attributes | Member of 
-----------+------------+-----------
 a         |            | {}

This user, even if not granted anything, by default has permissions to create objects in the public schema (you should definitely avoid that, check here):

postgres=# \c postgres a
You are now connected to database "postgres" as user "a".
postgres=> select current_database();
 current_database 
------------------
 postgres
(1 row)
postgres=> create table tab1 ( a int );
CREATE TABLE
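If you want to take that default away (as recommended), a superuser can revoke the CREATE privilege on the public schema – a quick sketch (this is the out-of-the-box behaviour from PostgreSQL 15 onwards):

```sql
-- run as superuser, per database
revoke create on schema public from public;
```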

What this user is not able to do is to use “create extension” to install a new extension into the database:

postgres=> create extension lo;
ERROR:  permission denied to create extension "lo"
HINT:  Must be superuser to create this extension.
postgres=> 

Why is that? If we take a look at an extension’s SQL file (ltree, in this case), the first statement is this:

CREATE FUNCTION ltree_in(cstring)
RETURNS ltree
AS 'MODULE_PATHNAME'
LANGUAGE C STRICT IMMUTABLE PARALLEL SAFE;

“CREATE FUNCTION” does work as we are able to create objects in the public schema. The issue is this:

postgres=> CREATE FUNCTION ltree_in(cstring)
postgres-> RETURNS ltree
postgres-> AS 'MODULE_PATHNAME'
postgres-> LANGUAGE C STRICT IMMUTABLE PARALLEL SAFE;
ERROR:  permission denied for language c
postgres=> 

We do not have access to the language. Let’s try to grant the required privilege for using the language:

postgres=# grant USAGE on LANGUAGE c to a;
ERROR:  language "c" is not trusted
DETAIL:  GRANT and REVOKE are not allowed on untrusted languages, because only superusers can use untrusted languages.
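Whether a language is trusted can be checked in the pg_language catalog; the lanpltrusted flag explains why USAGE cannot be granted on c or internal:

```sql
select  lanname, lanpltrusted
from    pg_language
order by lanname;
```

Trusted languages (lanpltrusted = t) can be granted to ordinary users; c and internal show f.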

That does not work either, as c is untrusted. The same is true for “language internal”, as in intagg--1.1.sql:

postgres=> CREATE FUNCTION int_agg_state (internal, int4)
postgres-> RETURNS internal
postgres-> AS 'array_agg_transfn'
postgres-> PARALLEL SAFE
postgres-> LANGUAGE INTERNAL;
ERROR:  permission denied for language internal

As all the extensions shipped with standard community PostgreSQL reference either “c” or “internal” somehow, we do not have any chance to get an extension installed as user “a”. Let’s do another test and create a new database with user “a” as its owner:

postgres=> \c postgres postgres
You are now connected to database "postgres" as user "postgres".
postgres=# create database dba with owner a;
CREATE DATABASE
postgres=# \c dba a
You are now connected to database "dba" as user "a".
dba=> 

Can we install extensions now?

dba=> create extension lo;
ERROR:  permission denied to create extension "lo"
HINT:  Must be superuser to create this extension.
dba=> 

Another error message but it still does not work. We can, however, install PL/Perl as this is a trusted language:

dba=> create extension plperl;
CREATE EXTENSION
dba=> 

Actually PL/Perl is the only extension that can be installed in this scenario, which can be confirmed by a simple PL/pgSQL code block:

dba=> drop extension plperl;
DROP EXTENSION
dba=> do $$
dba$> declare
dba$>   ext record;
dba$>   exception_text text;
dba$> begin
dba$>   for ext in
dba$>       select name
dba$>         from pg_available_extensions
dba$>        order by name
dba$>   loop
dba$>     begin
dba$>        execute 'create extension '||ext.name;
dba$>        raise notice 'SUCCESS for %', ext.name;
dba$>     exception
dba$>        when others
dba$>          then get stacked diagnostics exception_text = MESSAGE_TEXT;
dba$>          raise notice '% failed with %', ext.name, exception_text;
dba$>     end;
dba$>   end loop;
dba$> end $$;
NOTICE:  adminpack failed with permission denied to create extension "adminpack"
NOTICE:  amcheck failed with permission denied to create extension "amcheck"
NOTICE:  autoinc failed with permission denied to create extension "autoinc"
NOTICE:  bloom failed with permission denied to create extension "bloom"
NOTICE:  btree_gin failed with permission denied to create extension "btree_gin"
NOTICE:  btree_gist failed with permission denied to create extension "btree_gist"
NOTICE:  citext failed with permission denied to create extension "citext"
NOTICE:  cube failed with permission denied to create extension "cube"
NOTICE:  dblink failed with permission denied to create extension "dblink"
NOTICE:  dict_int failed with permission denied to create extension "dict_int"
NOTICE:  dict_xsyn failed with permission denied to create extension "dict_xsyn"
NOTICE:  earthdistance failed with required extension "cube" is not installed
NOTICE:  file_fdw failed with permission denied to create extension "file_fdw"
NOTICE:  fuzzystrmatch failed with permission denied to create extension "fuzzystrmatch"
NOTICE:  hstore failed with permission denied to create extension "hstore"
NOTICE:  hstore_plperl failed with required extension "hstore" is not installed
NOTICE:  hstore_plperlu failed with required extension "hstore" is not installed
NOTICE:  hstore_plpython2u failed with required extension "hstore" is not installed
NOTICE:  hstore_plpython3u failed with required extension "hstore" is not installed
NOTICE:  hstore_plpythonu failed with required extension "hstore" is not installed
NOTICE:  insert_username failed with permission denied to create extension "insert_username"
NOTICE:  intagg failed with permission denied to create extension "intagg"
NOTICE:  intarray failed with permission denied to create extension "intarray"
NOTICE:  isn failed with permission denied to create extension "isn"
NOTICE:  jsonb_plperl failed with required extension "plperl" is not installed
NOTICE:  jsonb_plperlu failed with required extension "plperlu" is not installed
NOTICE:  jsonb_plpython2u failed with required extension "plpython2u" is not installed
NOTICE:  jsonb_plpython3u failed with required extension "plpython3u" is not installed
NOTICE:  jsonb_plpythonu failed with required extension "plpythonu" is not installed
NOTICE:  lo failed with permission denied to create extension "lo"
NOTICE:  ltree failed with permission denied to create extension "ltree"
NOTICE:  ltree_plpython2u failed with required extension "ltree" is not installed
NOTICE:  ltree_plpython3u failed with required extension "ltree" is not installed
NOTICE:  ltree_plpythonu failed with required extension "ltree" is not installed
NOTICE:  moddatetime failed with permission denied to create extension "moddatetime"
NOTICE:  pageinspect failed with permission denied to create extension "pageinspect"
NOTICE:  pg_buffercache failed with permission denied to create extension "pg_buffercache"
NOTICE:  pg_freespacemap failed with permission denied to create extension "pg_freespacemap"
NOTICE:  pg_prewarm failed with permission denied to create extension "pg_prewarm"
NOTICE:  pg_stat_statements failed with permission denied to create extension "pg_stat_statements"
NOTICE:  pg_trgm failed with permission denied to create extension "pg_trgm"
NOTICE:  pg_visibility failed with permission denied to create extension "pg_visibility"
NOTICE:  pgcrypto failed with permission denied to create extension "pgcrypto"
NOTICE:  pgrowlocks failed with permission denied to create extension "pgrowlocks"
NOTICE:  pgstattuple failed with permission denied to create extension "pgstattuple"
NOTICE:  SUCCESS for plperl
NOTICE:  plperlu failed with permission denied to create extension "plperlu"
NOTICE:  plpgsql failed with extension "plpgsql" already exists
NOTICE:  plpython3u failed with permission denied to create extension "plpython3u"
NOTICE:  postgres_fdw failed with permission denied to create extension "postgres_fdw"
NOTICE:  refint failed with permission denied to create extension "refint"
NOTICE:  seg failed with permission denied to create extension "seg"
NOTICE:  sslinfo failed with permission denied to create extension "sslinfo"
NOTICE:  tablefunc failed with permission denied to create extension "tablefunc"
NOTICE:  tcn failed with permission denied to create extension "tcn"
NOTICE:  tsm_system_rows failed with permission denied to create extension "tsm_system_rows"
NOTICE:  tsm_system_time failed with permission denied to create extension "tsm_system_time"
NOTICE:  unaccent failed with permission denied to create extension "unaccent"
NOTICE:  xml2 failed with permission denied to create extension "xml2"
DO

If you want to install an extension that only creates objects you are allowed to create anyway, that will succeed. Probably the best way to handle extensions is to install all the required ones by default and then grant permissions to the users who need them.
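As a sketch of that approach (assuming pg_stat_statements is already in shared_preload_libraries, and using a made-up role name app_user):

```sql
-- As a superuser: install the extension once, centrally
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Then grant access to the users who need it
-- (pg_read_all_stats exists from PostgreSQL 10 onwards)
GRANT pg_read_all_stats TO app_user;
GRANT EXECUTE ON FUNCTION pg_stat_statements_reset() TO app_user;
```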

The article Adding PostgreSQL extensions without being super user? first appeared on Blog dbi services.

Make Oracle database simple again!

Yann Neuhaus - Wed, 2020-01-22 09:30
Introduction

Let’s have a look at how to make Oracle database as simple as it was before.

Oracle database is a great piece of software. Yes, it’s quite expensive, but it’s still the reference, and most companies can find a configuration that fits their needs and their budget. Another complaint about Oracle is complexity: nothing is really simple, and you’ll need skillful DBA(s) to deploy, manage, upgrade and troubleshoot your databases. But complexity is sometimes caused by wrong decisions made without enough knowledge, mainly because some choices add significant complexity compared to others.

The goal

Why do things need to be simple?

Obviously, simplification brings:

  • easier troubleshooting
  • a setup that is more understandable to others
  • the possibility of reinstalling from scratch in case of big trouble
  • fewer bugs related to mixing multiple components
  • less work, because you probably have enough already with migration, patching, performance, …
  • more reliability, because fewer components means fewer problems
On the hardware side

Rules for simplifying on the hardware side are:

  • Choose the same hardware for all your environments (DEV/TEST/QUAL/PROD/…): same server models, same CPU family, same revision. Make only slight variations in memory amount, number of disks and processor core configuration if needed. Order all the servers at the same time. If a problem is related to hardware, you will be able to test the fix on a less important environment before going to production
  • Don’t use a SAN: a SAN is very nice, but it is not the performance guarantee you expect. Adopt local SSDs: NVMe-type SSDs have amazing speed and are a true game changer for today’s database performance. Getting rid of the SAN also means getting rid of multipathing, resource sharing, complex troubleshooting, external dependencies and so on
  • Provision very large volumes for data: dealing with space pressure is not the most interesting part of your job, and it’s time consuming. You need 4TB of disk? Order 12TB and you’ll be ready for every situation, even those not planned. For sure it’s more expensive, but adding disks afterwards is not always that simple. It reminds me of a customer case where trying to add a single disk led to a nightmare (production down for several days)
  • Consider ODA appliances (Small or Medium): even if they don’t simplify everything, at least the hardware is exactly what you need and is dedicated to Oracle database software
  • Think consolidation: Oracle database has strong database isolation, which makes consolidation easy. Consolidating to limit the number of servers you manage also simplifies your environment
  • Avoid virtualization: license considerations aside, virtualization definitely adds a layer of complexity
On the system side

Some rules are also good to know regarding the system:

  • Go for Red Hat or Oracle Linux: mainly because it’s the most common OS for Oracle databases. Releases and patches are always available for Linux first, and UNIX and Windows have been declining in popularity for Oracle databases over the past 10 years
  • Same OS: keep your operating systems strictly identical from development to production. If you decide to upgrade the OS, do it first on TEST/DEV and finish with the production servers without waiting months. And never update over the Internet, as packages can differ each time you update a system
  • Limit the number of filesystems for your databases: one big oradata and one big FRA are enough on SSD. You don’t need to slice everything as we did before, and slicing always wastes space
On the software side

A lot of things should be done, or not done regarding software:

  • Install the same Oracle version (release + patch) and use the same tree everywhere. Use OFA (/u01/…) or not but be consistent
  • Limit the Oracle versions in use: inform your software vendors that your platform cannot support versions that are too old, especially non-terminal releases like 11.2.0.3. 19c, 12.1.0.2 and possibly 11.2.0.4 are the only recommended versions to deploy
  • Don’t use ASM: ext4 is fine, and SSDs now give you maximum performance even on a classic filesystem. ASM is always tied to Grid Infrastructure, creating dependencies between the DB homes and the Grid stack and making patching much more complex
  • Don’t use RAC: most of your applications cannot correctly handle high availability anyway, and RAC is much more complex than single-instance databases. Not choosing RAC means getting rid of the interconnect, the cluster repository, Cache Fusion for SGA sharing, SAN or NAS technologies, split brains, SCAN listeners and complex troubleshooting. Replacing RAC with Data Guard or Dbvisit Standby is the new way of providing a form of high availability without high complexity
Simplify backup configuration

For sure you will use RMAN, but how to simplify backups with RMAN?

  • Use the same backup script for all the databases
  • Use the same backup frequency for every database, because when you need to restore, you’d better have a fresh backup
  • Only the retention should differ between databases
  • Back up to disk (most conveniently a big NFS share) without any specific media library (back up your /backup filesystem later with your enterprise backup tool if needed)
  • Provision a filesystem large enough that you never need to delete backups still within the retention period

Using the same backup strategy means being able to use the same restore procedure on all databases because you always need a quick restore of a broken database.

Always back up the controlfile and spfile on standby databases: the resulting backupset has a very small footprint and makes it easier to restore the standby from the primary’s database backupsets without needing a duplicate.

Consider an RMAN catalog only if you have enough databases to justify it.
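A shared backup script along those lines might look like the following sketch (paths, retention window and file name are assumptions, not the author’s actual script):

```
# rman target / cmdfile=/backup/scripts/backup_db.rman   -- same script for every database
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 14 DAYS;   # the only per-database difference
CONFIGURE CONTROLFILE AUTOBACKUP ON;
BACKUP DATABASE PLUS ARCHIVELOG FORMAT '/backup/%d/%d_%T_%U.bkp';
DELETE NOPROMPT OBSOLETE;
```

Because the retention policy is the only line that changes per database, the restore procedure stays identical everywhere.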

Simplify database management and configuration
  • Create scripts for database configuration and tablespace creation (for example: configure_SID.sql and tablespaces_SID.sql) so that you can reconfigure the same database elsewhere
  • Don’t create separate grid and oracle users if you plan to use Grid Infrastructure/ASM: as a DBA you probably manage both ASM and database instances. Instead of losing time switching between these two users, configure a single oracle user for both
  • Never use graphical tools to create a database, deploy an appliance or configure something: screenshots are far less convenient than plain text commands that are easily repeatable and scriptable
  • Use OMF: configure only db_create_file_dest and db_recovery_file_dest and Oracle will multiplex the controlfile and the redo logs in these areas. OMF also names the datafiles for you: there is no need for manual naming, and who really cares about the names of the datafiles?
  • Don’t use multitenant: multitenant is fine, but we’ve been living with non-CDB databases for years and they work like a charm. You can still use the non-CDB architecture in 19c, so multitenant is not mandatory even on this latest version. A later migration from non-CDB to pluggable database is quite easy, so you will still be able to adopt multitenant later
  • Keep your spfile clean: don’t set parameters that are unused or that merely restate the default value. Remove such parameters from the spfile using ALTER SYSTEM RESET parameter SCOPE=SPFILE SID='*';
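Spotting and resetting such parameters might look like this sketch (the parameter name is just an illustration):

```sql
-- List the parameters explicitly set in the spfile
select name, value
from   v$spparameter
where  isspecified = 'TRUE';

-- Remove one that only restates the default
alter system reset optimizer_index_cost_adj scope = spfile sid = '*';
```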
Simplify patching

Patching can also be simplified:

  • Patch once a year: you need to patch, but you don’t need to spend all your time applying every PSU each quarter
  • Start with test/dev databases and take the time to test from the application
  • Don’t wait too long to patch the other environments: production should be patched a few weeks after the first patched environment
Simplify Oracle*Net configuration

Simplifying also concerns Oracle*Net configuration:

  • Avoid configuring multiple listeners on a system: one is enough for all your databases
  • Put your Oracle*Net configuration files in /etc, because you don’t want multiple files in multiple homes
  • Keep your Oracle*Net configuration files clean and organized for better readability
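A minimal shared configuration could look like this sketch (host name and service name are made up; setting TNS_ADMIN=/etc in the environment makes every home pick it up):

```
# /etc/sqlnet.ora
NAMES.DIRECTORY_PATH = (TNSNAMES)

# /etc/tnsnames.ora
DB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = srv-db-01)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = DB1))
  )
```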
Make your database movable

Last but not least, one of the biggest mistakes is to create a strong dependency between a database and a system. How do you make your database easily movable? By configuring a standby database and using Data Guard or Dbvisit Standby. Moving your database to another server then takes only a few minutes with a single switchover operation.
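With the Data Guard broker, for example, such a move can be as simple as the following sketch (the configuration and database names are made up):

```
DGMGRL> SHOW CONFIGURATION;
DGMGRL> VALIDATE DATABASE 'db1_site2';
DGMGRL> SWITCHOVER TO 'db1_site2';
```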

Using standby databases makes your life easier in all of these situations:

  • you need to move your server to another datacenter
  • a power outage happened on one site
  • you need to update hardware or software
  • you suspect a hardware problem impacting the database

Don’t create standbys only for production databases: even development databases are a kind of production for developers. If a database cannot afford to be down for a day, it needs a standby database.

The finest configuration is not to dedicate one server to the primaries and another to the standbys, but to dispatch the primaries across two identical servers on two sites, each database having a preferred server for its primary, with the standby on the opposite server.

Conclusion

It’s so easy to increase complexity without any good reason. Simplifying is the power of saying NO: no to interesting features and configurations that are not absolutely necessary. All you need from your databases is reliability, safety, availability and performance. Simplicity helps you get there.

The article Make Oracle database simple again! first appeared on Blog dbi services.

Philosophy 23

Jonathan Lewis - Wed, 2020-01-22 08:46

It’s a long time since I wrote a note with the “philosophy” theme, but a recent complaint about Oracle prompted me to suggest the following thought:

“If you write SQL that is technically incorrect it’s not Oracle’s fault if sometimes the SQL completes without an error.”

Consider the following fragment of code:

drop table t1 purge;

create table t1 (n1 number, n2 number);
insert into t1 values (1,1);
commit;

select n1 from t1 where n2 = to_number('x');
select n1 from t1 where n2 = to_number('x') and n1 = 2;

select 
        case 
                when 
                        0 in  (select n1 from t1 where n2 = to_number('x'))
                then
                        1
                else
                        0
        end
from
        dual
;

Clearly the first query must raise an error because ‘x’ can’t be converted to a number (until Oracle supplies a format option to read it as a Roman Numeral).

Clearly the second query must raise an error because it’s just the first query with an extra predicate on the end.

Clearly the third query must raise an error because it’s going to execute a subquery that must raise an error.

Here’s the output from running the code from SQL*Plus (on 12.2.0.1).

Table dropped.


Table created.


1 row created.


Commit complete.

select n1 from t1 where n2 = to_number('x')
                                       *
ERROR at line 1:
ORA-01722: invalid number



no rows selected


CASEWHEN0IN(SELECTN1FROMT1WHEREN2=TO_NUMBER('X'))THEN1ELSE0END
--------------------------------------------------------------
                                                             0

1 row selected.

The first query fails with the expected conversion error: ORA-01722: invalid number. The second and third queries run to completion because SQL is a declarative language, not a procedural language, and the optimizer is allowed to transform your query in any way that it thinks might make it complete more quickly.

Here’s the execution plan – with predicate information – for the second query:

-------------------------------------+-----------------------------------+
| Id  | Operation          | Name    | Rows  | Bytes | Cost  | Time      |
-------------------------------------+-----------------------------------+
| 0   | SELECT STATEMENT   |         |       |       |     2 |           |
| 1   |  TABLE ACCESS FULL | T1      |     1 |    26 |     2 |  00:00:01 |
-------------------------------------+-----------------------------------+

Predicate Information:
----------------------
1 - filter(("N1"=2 AND "N2"=TO_NUMBER('x')))

The optimizer has decided that it’s more efficient to test for the constant 2 than it is to call the to_number() function, so it evaluates n1 = 2 first for each row and never has to check the second predicate because nothing gets past the first.

The explanation for the successful completion of the third query is slightly different, but again it revolves around transforming for efficiency. Oracle will (very often) convert an IN subquery to an EXISTS subquery. In my example the resulting SQL looks like this (taken from the CBO (10053) trace file, with some cosmetic enhancement):


SELECT 
        CASE
                WHEN 
                        EXISTS (SELECT 0 FROM T1 WHERE N1=0 AND N2=TO_NUMBER('x')) 
                THEN 1 
                ELSE 0 
        END  "CASEWHEN0IN(SELECTN1FROMT1WHEREN2=TO_NUMBER('X'))THEN1ELSE0END" 
FROM
        DUAL

Note how the “guaranteed to fail” subquery has an extra predicate added as the IN subquery is transformed into an EXISTS subquery and, just like the previous example, the extra predicate is applied before the to_number() predicate, and there’s no data to match the first predicate so the to_number() function never gets called and never gets a chance to raise an exception.

You could argue, of course, that the optimizer should spot the attempt to generate a constant, evaluate it (if possible) at parse time and raise the error before the runtime engine even sees the query – and it wouldn’t be too hard to make a case for that – but it would only take a few seconds of thought to create slightly more complex examples that demonstrate the point I’ve been making.

Bottom line(s):

1) Use the correct datatypes for your data.

2) Don’t assume that a statement that raises an exception in some circumstances will result in an exception if it is subsequently embedded in a more complex statement. The optimizer is non-procedural and may transform your statement in a way that bypasses your bad design.

 

Footnote:

I probably shouldn’t say this – it will only encourage someone to do the wrong thing – but for the very simplest examples you can show that the ordered_predicates hint still works:


SQL> select /*+ ordered_predicates */  n1 from t1 where n2 = to_number('x') and n1 = 2;
select /*+ ordered_predicates */  n1 from t1 where n2 = to_number('x') and n1 = 2
                                                                  *
ERROR at line 1:
ORA-01722: invalid number

[AZ-103] Roles And Responsibilities Of A Microsoft Azure Administrator

Online Apps DBA - Wed, 2020-01-22 07:07

AZ-103 | Roles and Responsibilities as a Microsoft Azure Administrator Want to know the Roles and Responsibilities of a Microsoft Azure Administrator or knowledge and role-specific skills he/she should possess for bigger & better job opportunities? Check out our blog at https://k21academy.com/az10312 which covers: ▪ What Is Microsoft Azure Certification? ▪ Why Is Azure Certification […]

The post [AZ-103] Roles And Responsibilities Of A Microsoft Azure Administrator appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Manquehue Hotels Expands Properties, Personalizes Guest Experiences with Oracle

Oracle Press Releases - Wed, 2020-01-22 07:00
Press Release
Manquehue Hotels Expands Properties, Personalizes Guest Experiences with Oracle Leading hospitality technology fuels hotel chain growth, while fostering guest loyalty

Redwood Shores, Calif.—Jan 22, 2020

For 36 years, Manquehue Hotels has been delighting guests with unique properties and top-notch service. When the group looked to expand its footprint in Chile with three new hotels, they turned to Oracle Hospitality to help support this rapid growth. With Oracle, Manquehue was able to efficiently establish internal operations at the new hotels and enhance the overall guest experience for its customers.

“As we continue to grow our hotels and chain, we need a system in place that provides us with standardized, timely and online information to better serve our customers,” said Abner Cayul, Hotel Manquehue corporate general manager. “Oracle Hospitality was the only solution that could help us expand to be one of the most prominent hotel chains in Chile. We look forward to utilizing their offerings to fuel our continued growth.”

Oracle Hospitality OPERA Property features a range of innovations, including deeper customer insights and comprehensive functionality for all areas of hotel management. With it, Hotel Manquehue can create in-depth customer profiles to tailor experiences to each customer’s specific preferences, such as suggesting certain dining venues and luxury items. A personal touch that has helped the group increase customer loyalty. Hotel Manquehue also relies on OPERA for improved internal operations for employees, offering them immediate access to 24/7 support and connection to electronic invoices for easy back-office reporting.

The chain also uses the Oracle MICROS Simphony Cloud Point of Sale system to support a wide range of food and beverage operations within the hotel, including restaurant and room service amenities. Not only does MICROS Simphony streamline employee ordering and delivery, it also provides key data on what items are most popular and when. Conversely, it provides insight into which items are not selling, providing more informed decisions on menu planning to better serve guests while improving inventory management and reducing waste.

With hoteliers increasingly focusing on restaurant operations for revenue growth, it’s imperative for them to tap the advantages of an integrated property management and point-of-sale platform. The combination of OPERA and MICROS Simphony provide hoteliers with a comprehensive view of all their operations – including food and beverage – and insightful analysis of the contributions of each. Such integration also yields one-stop access to 360-degree guest profiles, giving staff detailed information about guests’ preferences and behavior anywhere on property.

“From their favorite dining options to customized rooms, today’s guests want an unprecedented level of personalization when enjoying their visit,” said David Meltzer, Group Vice President, Oracle Hospitality. “Hotel Manquehue understands this and uses the latest technology to better understand and serve the needs and desires of their customers every step of the way. As they expand throughout Chile, Oracle will continue to provide the intelligence and tools both management and front-line employees will need to ensure seamless, unforgettable guest experiences that create loyalty and bring guests back to their properties.”

Watch the Manquehue Hotels Video

In this video, Abner Cayul, corporate general manager of Manquehue Hotels shares how Oracle Hospitality Solutions is helping to support the hotel chain’s strategic growth.

Contact Info
Christine Allen
Oracle
+1.603.743.4534
christine.allen@oracle.com
About Oracle Hospitality

Oracle Hospitality brings over 40 years of experience in providing technology solutions to independent hoteliers, global and regional chains, gaming, and cruise lines. We provide hardware, software, and services that allow our customers to act on rich data insights that deliver personalized guest experiences, maximize profitability and encourage long-term loyalty. Our solutions include platforms for property management, point-of-sale, distribution, reporting and analytics all delivered from the cloud to lower IT cost and maximize business agility. Oracle Hospitality’s OPERA is recognized globally as the leading property management platform and continues to serve as a foundation for industry innovation. Learn more about Oracle’s Hospitality Solutions here.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Christine Allen

  • +1.603.743.4534

Add a deployment to Oracle GoldenGate 19c Microservices

DBASolved - Tue, 2020-01-21 13:33

Once you have an up and running Oracle GoldenGate Microservices environment, there may come a time when you need to add another deployment to the enviornment.  Adding deployments is easily done using Oracle GoldenGate Configuration Assistant (OGGCA).  In the below video, I show you how to add a deployment to an existing ServiceManager.  Additionally, I […]

The post Add a deployment to Oracle GoldenGate 19c Microservices appeared first on DBASolved.

Categories: DBA Blogs

If you’d like your ads to show on certain sites across the Internet, you can add these websites as:

VitalSoftTech - Tue, 2020-01-21 09:53

There are certain guesses you make about what would be the right course of action while displaying an ad. If you’d like to show your ads on certain sites across the Internet what should you add them as?  The obvious way is to include those websites in your ad. The question of how you should […]

The post If you’d like your ads to show on certain sites across the Internet, you can add these websites as: appeared first on VitalSoftTech.

Categories: DBA Blogs

Index Engineering

Jonathan Lewis - Mon, 2020-01-20 10:53

This is a case study based on a question that appeared on the Oracle Developer Community forum a few days ago.

What I’m aiming to present in this note is the pattern of thinking that you should adopt in cases like this. The final suggestion in this note isn’t necessarily the best answer to the question posed (at the time of writing the OP hadn’t supplied enough information to allow anyone to come up with a best solution), but the point of the exercise is to talk about the journey and (perhaps) remind you of some of the extreme engineering you can do with indexes.

The (massaged) problem statement is as follows:

I have a table of more than 200 million rows that is used for inserts, updates and queries. I have a query on this table and want to know what index I could create to speed up the query.

The supplied definition of the table was not consistent with the names used in the query, so I’ve had to do a little editing, but table, current indexes, and query were as follows:

rem
rem     Script:         extreme_indexing.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2020
rem

create table tbl (
        r_id                    varchar2(30) not null,
        c_id                    number,
        n_id                    varchar2(40),
        created_by              varchar2(30) not null,
        last_modified_by        varchar2(30),
        c_status                char(1),
        a_action                char(1),
        r_creation_dt           timestamp(6),
        cnt                     number(38)
)
;

create        index tbl_1 on tbl(cnt, r_creation_dt, c_id, a_action, last_modified_by);  
create        index tbl_2 on tbl(cnt, c_status, r_creation_dt);  
create bitmap index tbl_3 on tbl(c_status); 

select
        /*+ index(tbl) */
        c_id,
        a_action,
        cnt,
        last_modified_by
from
        tbl
where
        c_status in(
            'N',
            'F'
        )
and     cnt <= 5 -- > comment to avoid wordpress format issue
and     r_creation_dt is not null
group by
        cnt,
        r_creation_dt,
        c_id,
        a_action,
        last_modified_by,
        c_status
order by
        r_creation_dt
fetch 
        first 1000 rows only
;


The first thing to point out is that the bitmap index tbl_3 is almost certainly a bad idea – bitmaps and transactional activity do not mix. It seems quite likely that the OP in this case had read one of the many Internet notes that make the “not totally wrong” but very misleading statement “bitmap indexes are good when you have a small number of distinct values”, and applied the principle to a column that looks like a “status” column holding only a few distinct values.

Having got that error out of the way we can start to think about the query.  It’s using the (fairly new) “Fetch first N rows” syntax, which means we may have to find a lot of data and sort it before returning a subset: performance issues can be very deceptive in cases like this because we might want a small result set but have to do a large amount of work to get it.

In this case we’re after the first 1,000 rows – which makes you think that maybe there will be a lot of data satisfying the query. So we have two targets to meet to optimise the query:

  • acquire the data we need as efficiently as possible
  • post-process the data we acquire to derive the 1,000 rows as efficiently as possible

The query is just a single table access – which means we’re either going to do a full tablescan or find a good indexed access path, we don’t have to worry about join strategies.  So the first thing to consider is the volume (and scatter) of data that matches the predicates. If there’s only a “small” amount of data where “c_status in (‘N’,’F’) and cnt <= 5” then an index on – or starting with – (c_status, cnt) may be very helpful. (Note how I’ve specified the column with the equality predicate first – that’s part of a generic strategy for creating multi-column indexes.)
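An index like that could be created as follows (a sketch; the tbl_i1 name matches the index that appears in the execution plan further down):

```sql
create index tbl_i1 on tbl(c_status, cnt);
```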

This, though, raises several questions that need to be answered:

  • How small is “small” ? In the context of 200 million rows, 100,000 is small; but if you had to visit 100,000 different blocks in the table and do 100,000 real single block reads from disc that might still be a very bad thing.
  • How many rows have status ‘N’, how many have status ‘F’, how many have cnt <= 5 ? Maybe a really tiny number of rows have cnt<=5 and lots have c_status in (‘N’,’F’) which could make this a case where ignoring the generic column-ordering strategy would be very effective.  Maybe the number of rows satisfying the individual conditions is high but the number satisfying the combination is very low.
  • Is this the ONLY combination of c_status and cnt that is of interest, or (for example) was 5 just a number picked as an example? Would different c_status values be of interest? Would some required combinations of c_status and cnt have to use completely different execution paths for the best performance?

I’m going to make some decisions in order to proceed – they may be totally wrong as far as the OP is concerned – so remember that this note is just for discussion purposes. Let’s assume that the common query is always exactly as stated. Perhaps it’s a query that runs every few minutes to clear up some outstanding work with the expectation that new rows matching the query keep appearing while older rows are processed, change status, and disappear from the result set. Let’s also assume that the result set is always “small”, and that it’s small because ‘N’ and ‘F’ are rare (even if the total number of rows with cnt <= 5 is large).

With these assumptions we could start by creating an index on (c_status, cnt), which gets us to exactly the rows we want from the table with no “throwaway” after visiting the table. Here’s the execution plan if that’s our choice of index (running on 12.2.0.1, and with an index() hint to force the use of the index when necessary):

----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |        |      1 |        |   1000 |00:00:00.03 |    1573 |     34 |       |       |          |
|*  1 |  VIEW                           |        |      1 |   1000 |   1000 |00:00:00.03 |    1573 |     34 |       |       |          |
|*  2 |   WINDOW NOSORT STOPKEY         |        |      1 |   6451 |   1000 |00:00:00.03 |    1573 |     34 |   219K|   219K|          |
|   3 |    SORT GROUP BY                |        |      1 |   6451 |   1001 |00:00:00.03 |    1573 |     34 |  1186K|   567K| 1054K (0)|
|   4 |     INLIST ITERATOR             |        |      1 |        |  13142 |00:00:00.02 |    1573 |     34 |       |       |          |
|*  5 |      TABLE ACCESS BY INDEX ROWID| TBL    |      2 |  13743 |  13142 |00:00:00.02 |    1573 |     34 |       |       |          |
|*  6 |       INDEX RANGE SCAN          | TBL_I1 |      2 |  13743 |  13142 |00:00:00.01 |      33 |     34 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=1000)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY "R_CREATION_DT")<=1000)
   5 - filter("R_CREATION_DT" IS NOT NULL)
   6 - access((("C_STATUS"='F' OR "C_STATUS"='N')) AND "CNT"<=5)

I’ve enabled rowsource_execution_statistics (alter session set statistics_level = all) and pulled my execution plan from memory. As you can see from the A-rows for the index range scan and table access by index rowid, I’ve identified and acquired exactly the rows from the table that might be relevant (all 13,142 of them), then I’ve done a sort group by of all that data, sorting in a way that means the rows will be produced in exactly the order I need for the windowing function that Oracle will use to select the 1,000 rows I want.

If you’re curious, here (courtesy of dbms_utility.expand_sql_text() but cosmetically enhanced) is the transformed SQL that was actually optimised and executed:

SELECT 
        A1.C_ID C_ID,A1.A_ACTION A_ACTION,A1.CNT CNT,A1.LAST_MODIFIED_BY LAST_MODIFIED_BY 
FROM  (
        SELECT 
                /*+ INDEX (A2) */ 
                A2.C_ID C_ID,
                A2.A_ACTION A_ACTION,
                A2.CNT CNT,
                A2.LAST_MODIFIED_BY LAST_MODIFIED_BY,
                A2.R_CREATION_DT rowlimit_$_0,
                ROW_NUMBER() OVER ( ORDER BY A2.R_CREATION_DT) rowlimit_$$_rownumber 
        FROM 
                TEST_USER.TBL A2 
        WHERE 
                (A2.C_STATUS='N' OR A2.C_STATUS='F') 
        AND     A2.CNT<=5 
        AND     A2.R_CREATION_DT IS NOT NULL 
        GROUP BY 
                A2.CNT,A2.R_CREATION_DT,A2.C_ID,A2.A_ACTION,A2.LAST_MODIFIED_BY,A2.C_STATUS
        ) A1 
WHERE 
        A1.rowlimit_$$_rownumber<=1000 
ORDER BY 
        A1.rowlimit_$_0

There are three main drawbacks to this choice of index.

  • I’ve acquired all the rows in the table that match the predicate even though I only really needed a subset
  • I’ve done a massive sort
  • I’ve created an index that includes every row in the table

Remember that the OP has a table of 200M rows, and we are assuming (pretending) that only a very small fraction of them match the initial predicates. Creating an index on 200M rows because we’re interested in only a few tens of thousands is wasteful of space and (given we have a “status” column) probably wasteful of processing resources as the status moves through several values. So I’m going to address that issue first. Let’s create a “function-based” index that ignores most of the data, and change the code to take advantage of that index – but since this is 12c, let’s do it by adding a virtual column and indexing that column.


alter table tbl add nf_r_creation_dt invisible 
        generated always as (
                case
                        when c_status in ('N','F') and cnt <= 5
                        then r_creation_dt
                end
        ) virtual
/

create index tbl_i2 on tbl(nf_r_creation_dt)
/

I’ve introduced an invisible virtual column called nf_r_creation_dt (nf_ for status N/F) which uses a CASE expression matching the original predicate to return the r_creation_dt for rows that match and null for all the other (ca. 200M) rows. So when I create an index on the column the only entries in the index are for rows that I might want to see.
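The effect of the virtual column can be mimicked in a few lines of Python (a sketch only; the sample rows and values are invented for illustration). Like an Oracle B-tree, which creates no entry when every indexed column is null, this "index" only receives entries where the CASE expression returns a value:

```python
# Invented sample rows: (c_id, c_status, cnt, r_creation_dt)
rows = [
    (1, 'N', 3, '2020-01-01'),
    (2, 'C', 1, '2020-01-02'),   # status not in ('N','F') -> virtual column is null
    (3, 'F', 9, '2020-01-03'),   # cnt > 5                 -> virtual column is null
    (4, 'F', 2, '2020-01-04'),
]

def nf_r_creation_dt(row):
    """The CASE expression behind the virtual column."""
    _, c_status, cnt, r_creation_dt = row
    return r_creation_dt if c_status in ('N', 'F') and cnt <= 5 else None

# Only rows where the expression is non-null get an index entry.
index_entries = sorted(
    (nf_r_creation_dt(r), r[0]) for r in rows if nf_r_creation_dt(r) is not None
)
print(index_entries)   # entries for c_id 1 and 4 only
```

Out of four rows, only two earn an index entry; scale that ratio up to 200M rows and the space saving is dramatic.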

I have to edit the SQL to match – which simply means changing every appearance of r_creation_dt to nf_r_creation_dt, and eliminating the original predicate giving the following text and execution plan:


select
        /*+ index(tbl) */
        c_id,
        a_action,
        cnt,
        last_modified_by
from
        tbl
where
        nf_r_creation_dt is not null
group by
        nf_r_creation_dt,
        cnt,
        c_id,
        a_action,
        last_modified_by,
        c_status
order by
        nf_r_creation_dt
fetch 
        first 1000 rows only    -- 1,000 rows in the original
/

---------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |        |      1 |        |   1000 |00:00:00.02 |   13139 |     35 |       |       |          |
|*  1 |  VIEW                          |        |      1 |   1000 |   1000 |00:00:00.02 |   13139 |     35 |       |       |          |
|*  2 |   WINDOW NOSORT STOPKEY        |        |      1 |     48 |   1000 |00:00:00.02 |   13139 |     35 | 73728 | 73728 |          |
|   3 |    SORT GROUP BY               |        |      1 |     48 |   1001 |00:00:00.02 |   13139 |     35 |  1116K|   556K|  991K (0)|
|   4 |     TABLE ACCESS BY INDEX ROWID| TBL    |      1 |   2500 |  13142 |00:00:00.02 |   13139 |     35 |       |       |          |
|*  5 |      INDEX FULL SCAN           | TBL_I2 |      1 |  13142 |  13142 |00:00:00.01 |      36 |     35 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=1000)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY "NF_R_CREATION_DT")<=1000)
   5 - filter("NF_R_CREATION_DT" IS NOT NULL)

The plan shows an index full scan on the new index. Since the index holds only those rows that might be interesting this isn't a threat. However, we still have to visit all the matching rows in the table, and that might result in more random I/O than we like. So the next step in enhancing performance is to consider adding all the columns we want to the index. There's a little problem with that: if we add the columns as they are we will go back to having an index entry for every single row in the table, so we need to use the same CASE mechanism to create more virtual columns:

alter table tbl add nf_c_status invisible 
        generated always as (
                case
                        when c_status in ('N','F') and cnt <= 5
                        then c_status
                end
        ) virtual
/

alter table tbl add nf_last_modified_by invisible 
        generated always as (
                case
                        when c_status in ('N','F') and cnt <= 5
                        then last_modified_by
                end
        ) virtual
/

alter table tbl add nf_a_action invisible 
        generated always as (
                case
                        when c_status in ('N','F') and cnt <= 5
                        then a_action
                end
        ) virtual
/

alter table tbl add nf_c_id invisible 
        generated always as (
                case
                        when c_status in ('N','F') and cnt <= 5
                        then c_id
                end
        ) virtual
/

alter table tbl add nf_cnt invisible 
        generated always as (
                case
                        when c_status in ('N','F') and cnt <= 5
                        then cnt
                end
        ) virtual
/

create index tbl_i3 on tbl(
        nf_r_creation_dt,
        nf_cnt,
        nf_c_id,
        nf_a_action,
        nf_last_modified_by,
        nf_c_status
)
;

It looks like a bit of a pain to go through all this rigmarole to get all those columns that are null most of the time but echo the original values when the rows match our original predicate; and then we have to modify the query to match:


select
        /*+ index(tbl) */
        nf_c_id,
        nf_a_action,
        nf_cnt,
        nf_last_modified_by
from
        tbl
where
        nf_r_creation_dt is not null
group by
        nf_r_creation_dt,
        nf_cnt,
        nf_c_id,
        nf_a_action,
        nf_last_modified_by,
        nf_c_status
order by
        nf_r_creation_dt
fetch 
        first 1000 rows only    -- 1,000 rows in the original
/

But the big payoff comes from the execution plan:


----------------------------------------------------------------------------------------------------
| Id  | Operation              | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |        |      1 |        |   1000 |00:00:00.01 |      74 |     12 |
|*  1 |  VIEW                  |        |      1 |   1000 |   1000 |00:00:00.01 |      74 |     12 |
|*  2 |   WINDOW NOSORT STOPKEY|        |      1 |   2500 |   1000 |00:00:00.01 |      74 |     12 |
|   3 |    SORT GROUP BY NOSORT|        |      1 |   2500 |   1001 |00:00:00.01 |      74 |     12 |
|*  4 |     INDEX FULL SCAN    | TBL_I3 |      1 |   2500 |   1003 |00:00:00.01 |      74 |     12 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=1000)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY "NF_R_CREATION_DT")<=1000)
   4 - filter("NF_R_CREATION_DT" IS NOT NULL)

Notice how the SORT GROUP BY operation is a NOSORT, and the WINDOW operation is both NOSORT and STOPKEY ?

We've got the smallest index possible, one that is modified only as rows move into, or out of, the interesting state, and when we run the query Oracle does a full scan of the index maintaining "running totals" but stops as soon as it has aggregated enough results.
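As a Python analogy (not Oracle internals; the entries are invented): because the index delivers its entries already ordered on the group-by/order-by columns, aggregation is just collapsing adjacent duplicate keys, and the scan can stop as soon as enough groups have been emitted.

```python
from itertools import groupby, islice

# Index entries arrive pre-sorted on the leading column(s),
# so equal keys are always adjacent.
entries = [('2020-01-01', 'a'), ('2020-01-01', 'a'),
           ('2020-01-02', 'b'), ('2020-01-03', 'c'),
           ('2020-01-04', 'd')]

# "SORT GROUP BY NOSORT" plus stopkey: one row per adjacent run,
# and islice() stops the scan after the first 3 groups.
first_groups = list(islice((key for key, _run in groupby(entries)), 3))
print(first_groups)
```

The duplicate '2020-01-01' entry collapses into one group, and the last entry is never examined once the stopkey is satisfied.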

tl;dr

For very special cases it’s really amazing what you can (sometimes) do – if you can modify the code – with carefully engineered indexes to minimise the work done by a query AND the work done maintaining the infrastructure needed for that query. Virtual columns are a fantastic aid, especially now that 12c allows them to be invisible.

Video : Schema Only Accounts in Oracle Database 18c Onward

Tim Hall - Mon, 2020-01-20 01:53

Today’s video is a demonstration of schema only accounts, introduced in Oracle Database 18c.

This is based on the following articles.

The star of today’s video is Paul Vallee, of Pythian and Tehama fame.

Cheers

Tim…


PostgreSQL 13: parallel vacuum for indexes

Yann Neuhaus - Mon, 2020-01-20 01:06

Because of its MVCC implementation, PostgreSQL needs a way to clean up old/dead rows, and this is the responsibility of vacuum. Up to PostgreSQL 12 this is done table by table and index by index. There are a lot of parameters to fine-tune autovacuum, but none of them allowed vacuum to run in parallel against a single relation. The only option you had to let autovacuum do more work in parallel was to increase autovacuum_max_workers so that more relations could be worked on at the same time; working against multiple indexes of the same table was not possible. Once PostgreSQL 13 is released, this will change.

When you have a look at the help of vacuum you will notice a new option:

postgres=# \h vacuum
Command:     VACUUM
Description: garbage-collect and optionally analyze a database
Syntax:
VACUUM [ ( option [, ...] ) ] [ table_and_columns [, ...] ]
VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ table_and_columns [, ...] ]

where option can be one of:

    FULL [ boolean ]
    FREEZE [ boolean ]
    VERBOSE [ boolean ]
    ANALYZE [ boolean ]
    DISABLE_PAGE_SKIPPING [ boolean ]
    SKIP_LOCKED [ boolean ]
    INDEX_CLEANUP [ boolean ]
    TRUNCATE [ boolean ]
    PARALLEL integer

and table_and_columns is:

    table_name [ ( column_name [, ...] ) ]

URL: https://www.postgresql.org/docs/devel/sql-vacuum.html

By providing a positive integer to the "PARALLEL" option you tell vacuum how many background workers should be used to vacuum the indexes of a given table (0 disables parallel processing). Let's do a small test setup to demonstrate this:

postgres=# create table t1 as select i as a, i::text as b, now() as c from generate_series(1,3000000) i;
SELECT 3000000
postgres=# create index i1 on t1(a);
CREATE INDEX
postgres=# 
postgres=# create index i2 on t1(b);
CREATE INDEX
postgres=# create index i3 on t1(c);
CREATE INDEX
postgres=# create index i4 on t1(a,b,c);
CREATE INDEX

One table, four indexes. If we go for parallel 4 we should see four background workers doing the work against the indexes:

postgres=# update t1 set a=5,b='ccc',c=now() where mod(a,5)=0;
UPDATE 600000
postgres=# vacuum (parallel 4) t1;
VACUUM

As the table and the indexes are quite small you need to be quick to catch them, but at least two parallel workers show up in the process list for the vacuum operation:

postgres 16688 15925 13 07:30 ?        00:01:07 postgres: postgres postgres [local] VACUUM
postgres 19184 15925  0 07:39 ?        00:00:00 postgres: parallel worker for PID 16688   
postgres 19185 15925  0 07:39 ?        00:00:00 postgres: parallel worker for PID 16688   

Nice. Please note that an index is only considered for parallel vacuum when it meets the min_parallel_index_scan_size criterion. A FULL vacuum does not do any parallel processing either.
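The eligibility check can be pictured like this (a Python sketch, not PostgreSQL source code; the index names and sizes are invented, while 512kB is the documented default for min_parallel_index_scan_size):

```python
# Default min_parallel_index_scan_size is 512kB.
MIN_PARALLEL_INDEX_SCAN_SIZE = 512 * 1024

# Invented sizes in bytes: three sizeable indexes and one tiny one.
index_sizes = {'i1': 117_000_000, 'i2': 117_000_000,
               'i3': 117_000_000, 'tiny_ix': 8_192}

# Only indexes at or above the threshold are candidates for a parallel
# worker; indexes below it are vacuumed by the leader process itself.
parallel_eligible = [name for name, size in index_sizes.items()
                     if size >= MIN_PARALLEL_INDEX_SCAN_SIZE]
print(parallel_eligible)
```

This is why a table full of small indexes may show fewer workers than the PARALLEL degree you asked for.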

Btw: The current maximal value is 1024:

postgres=# vacuum (parallel -4) t1;
ERROR:  parallel vacuum degree must be between 0 and 1024
LINE 1: vacuum (parallel -4) t1;

You can also see the parallel stuff on the verbose output:

postgres=# vacuum (parallel 4, verbose true) t1;
INFO:  vacuuming "public.t1"
INFO:  launched 2 parallel vacuum workers for index vacuuming (planned: 2)
INFO:  scanned index "i2" to remove 600000 row versions by parallel vacuum worker
DETAIL:  CPU: user: 0.24 s, system: 0.06 s, elapsed: 0.89 s
INFO:  scanned index "i1" to remove 600000 row versions
DETAIL:  CPU: user: 0.17 s, system: 0.10 s, elapsed: 1.83 s
INFO:  scanned index "i3" to remove 600000 row versions by parallel vacuum worker
DETAIL:  CPU: user: 0.16 s, system: 0.14 s, elapsed: 1.69 s
INFO:  scanned index "i4" to remove 600000 row versions by parallel vacuum worker
DETAIL:  CPU: user: 0.25 s, system: 0.09 s, elapsed: 1.17 s
INFO:  "t1": removed 600000 row versions in 20452 pages
DETAIL:  CPU: user: 0.17 s, system: 0.16 s, elapsed: 1.43 s
INFO:  index "i1" now contains 3000000 row versions in 14308 pages
DETAIL:  600000 index row versions were removed.
1852 index pages have been deleted, 640 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO:  index "i2" now contains 3000000 row versions in 14305 pages
DETAIL:  600000 index row versions were removed.
1851 index pages have been deleted, 640 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO:  index "i3" now contains 3000000 row versions in 14326 pages
DETAIL:  600000 index row versions were removed.
3941 index pages have been deleted, 1603 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO:  index "i4" now contains 3000000 row versions in 23391 pages
DETAIL:  600000 index row versions were removed.
5527 index pages have been deleted, 2246 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO:  "t1": found 600000 removable, 3000000 nonremovable row versions in 21835 out of 22072 pages
DETAIL:  0 dead row versions cannot be removed yet, oldest xmin: 499
There were 132 unused item identifiers.
Skipped 0 pages due to buffer pins, 237 frozen pages.
0 pages are entirely empty.
CPU: user: 0.75 s, system: 0.36 s, elapsed: 5.07 s.
INFO:  vacuuming "pg_toast.pg_toast_16392"
INFO:  index "pg_toast_16392_index" now contains 0 row versions in 1 pages
DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO:  "pg_toast_16392": found 0 removable, 0 nonremovable row versions in 0 out of 0 pages
DETAIL:  0 dead row versions cannot be removed yet, oldest xmin: 499
There were 0 unused item identifiers.
Skipped 0 pages due to buffer pins, 0 frozen pages.
0 pages are entirely empty.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
VACUUM

This article, PostgreSQL 13: parallel vacuum for indexes, first appeared on the dbi services blog.

Running the (Segment) Space Advisor - on a Partitioned Table

Hemant K Chitale - Sat, 2020-01-18 08:30
Here is a quick demo on running the Segment Space Advisor manually.

I need to start with the ADVISOR privilege

$sqlplus

SQL*Plus: Release 12.2.0.1.0 Production on Sat Jan 18 22:02:10 2020

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Enter user-name: system
Enter password:
Last Successful login time: Sat Jan 18 2020 22:00:32 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> grant advisor to hemant;

Grant succeeded.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production


I can then connect with my account to run the Advisor

$sqlplus

SQL*Plus: Release 12.2.0.1.0 Production on Sat Jan 18 22:02:35 2020

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Enter user-name: hemant
Enter password:
Last Successful login time: Sat Jan 18 2020 21:50:05 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL>
SQL> DECLARE
  l_object_id  NUMBER;
  l_task_name  VARCHAR2(32767) := 'Advice on My SALES_DATA Table';
BEGIN
  DBMS_ADVISOR.create_task (
    advisor_name => 'Segment Advisor',
    task_name    => l_task_name
  );

  DBMS_ADVISOR.create_object (
    task_name   => l_task_name,
    object_type => 'TABLE',
    attr1       => 'HEMANT',
    attr2       => 'SALES_DATA',
    attr3       => NULL,
    attr4       => NULL,
    attr5       => NULL,
    object_id   => l_object_id
  );

  DBMS_ADVISOR.set_task_parameter (
    task_name => l_task_name,
    parameter => 'RECOMMEND_ALL',
    value     => 'TRUE');

  DBMS_ADVISOR.execute_task(task_name => l_task_name);
END;
/

PL/SQL procedure successfully completed.

SQL>


I can then review the advise :

SQL> set serveroutput on
begin
  FOR cur_rec IN (SELECT f.impact,
                         o.type,
                         o.attr1,
                         o.attr2,
                         o.attr3,
                         o.attr4,
                         f.message,
                         f.more_info
                  FROM   dba_advisor_findings f, dba_advisor_objects o
                  WHERE  f.object_id = o.object_id
                  AND    f.task_name = o.task_name
                  AND    f.task_name = 'Advice on My SALES_DATA Table'
                  ORDER BY f.impact DESC)
  LOOP
    DBMS_OUTPUT.put_line('..');
    DBMS_OUTPUT.put_line('Type : ' || cur_rec.type);
    DBMS_OUTPUT.put_line('Schema : ' || cur_rec.attr1);
    DBMS_OUTPUT.put_line('Table Name : ' || cur_rec.attr2);
    DBMS_OUTPUT.put_line('Partition Name : ' || cur_rec.attr3);
    DBMS_OUTPUT.put_line('Tablespace Name : ' || cur_rec.attr4);
    DBMS_OUTPUT.put_line('Message : ' || cur_rec.message);
    DBMS_OUTPUT.put_line('More info : ' || cur_rec.more_info);
  END LOOP;
end;
/

..
Type : TABLE PARTITION
Schema : HEMANT
Table Name : SALES_DATA
Partition Name : P_2015
Tablespace Name : USERS
Message : The free space in the object is less than 10MB.
More info : Allocated Space:65536: Used Space:8192: Reclaimable Space :57344:
..
Type : TABLE PARTITION
Schema : HEMANT
Table Name : SALES_DATA
Partition Name : P_2016
Tablespace Name : USERS
Message : The free space in the object is less than 10MB.
More info : Allocated Space:65536: Used Space:1016: Reclaimable Space :64520:
..
Type : TABLE PARTITION
Schema : HEMANT
Table Name : SALES_DATA
Partition Name : P_2017
Tablespace Name : USERS
Message : The free space in the object is less than 10MB.
More info : Allocated Space:65536: Used Space:1016: Reclaimable Space :64520:
..
Type : TABLE PARTITION
Schema : HEMANT
Table Name : SALES_DATA
Partition Name : P_2018
Tablespace Name : USERS
Message : The free space in the object is less than 10MB.
More info : Allocated Space:65536: Used Space:8192: Reclaimable Space :57344:
..
Type : TABLE PARTITION
Schema : HEMANT
Table Name : SALES_DATA
Partition Name : P_2019
Tablespace Name : USERS
Message : The free space in the object is less than 10MB.
More info : Allocated Space:65536: Used Space:8192: Reclaimable Space :57344:
..
Type : TABLE PARTITION
Schema : HEMANT
Table Name : SALES_DATA
Partition Name : P_MAXVALUE
Tablespace Name : USERS
Message : The free space in the object is less than 10MB.
More info : Allocated Space:65536: Used Space:8192: Reclaimable Space :57344:

PL/SQL procedure successfully completed.

SQL>


Thus, it actually reports on each partition of the table.
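The "More info" strings are easy to post-process. Here is a small Python sketch (the regex and helper name are mine, not part of the advisor API) that pulls the three byte counts out of a finding:

```python
import re

# One "More info" string from the advisor output above.
more_info = "Allocated Space:65536: Used Space:8192: Reclaimable Space :57344:"

def parse_more_info(text):
    """Return (allocated, used, reclaimable) bytes from an advisor finding."""
    allocated, used, reclaimable = (int(n) for n in re.findall(r':\s*(\d+)', text))
    return allocated, used, reclaimable

allocated, used, reclaimable = parse_more_info(more_info)
print(allocated, used, reclaimable)  # 65536 8192 57344
```

With the numbers extracted you can sort partitions by reclaimable space to decide which ones are worth shrinking first.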


Note: Script based on a script by Tim Hall (@oraclebase) at https://oracle-base.com/dba/script?category=10g&file=segment_advisor.sql


Categories: DBA Blogs
