Feed aggregator

pgconf.eu – Welcome to the community

Yann Neuhaus - Fri, 2019-10-18 15:41

On Tuesday I started my journey to Milan to attend my first pgconf.eu, which was also my first big conference. I was really excited about what was coming up for me. How would it be to become a visible part of the community? How would it be to give my first presentation in front of so many people?

The conference started with the welcome and opening session. It took place in a huge room, big enough to give all of the participants a seat. It's really amazing how big this community is, and it is still growing. So many people from all over the world (Japan, USA, Chile, Canada...) attended this conference.

And suddenly I realized: this is the room where I have to give my session. Some really strange feelings came up. This was my first presentation at a conference, this was the main stage, and there was space for so many people! I really hoped they would make it smaller for me. But there was something else, too: anticipation.

But first I want to give you some impressions from my time at the pgconf. It was amazing to talk to one of the main developers of Patroni. I was really nervous when I just went up to him and said: "Hi, may I ask you a question?" Of course he didn't say NO. All the other ladies and gentlemen I met (the list is quite long) were just as nice and really open minded (is this because they all work with an open source database?). And of course a special thanks to Pavel Golub for the great picture, which you can find in Daniel's blog.
Besides meeting all these great people, I enjoyed some really informative and cool sessions.


Although I still hoped they were going to make the room smaller for my presentation, of course they didn't. So I had only one chance:

And I did it, and afterwards I filed it under "good experience". A huge room is not so different from a small one.

Now that I am back home, I want to say: thanks pgconf.eu and dbi services for giving me this opportunity, and thanks to the community for this warm welcome.

The post pgconf.eu – Welcome to the community appeared first on Blog dbi services.

CBO Oddities – 1

Jonathan Lewis - Fri, 2019-10-18 12:10

I’ve decided to do a little rewriting and collating so that I can catalogue related ideas in an order that makes for a better narrative. So this is the first in a series of notes designed to help you understand why the optimizer has made a particular choice and why that choice is (from your perspective) a bad one, and what you can do either to help the optimizer find a better plan, or subvert the optimizer and force a better plan.

If you’re wondering why I choose to differentiate between “help the optimizer” and “subvert the optimizer” consider the following examples.

  • A query is joining two tables in the wrong order with a hash join when you know that a nested loop join in the opposite order would be far better, because you know that the data you want is very nicely clustered and there's a really good index that would make access to that data very efficient. You check the table preferences and discover that the table_cached_blocks preference (see end notes) is at its default value of 1, so you set it to 16 and gather fresh stats on the indexes on the table (see the sketch after this list). Oracle now recognises the effectiveness of this index and changes plan accordingly.
  • The optimizer has done a surprising transformation of a query, aggregating a table before joining to a couple of other tables when you were expecting it to use the joins to eliminate a huge fraction of the data before aggregating it. After a little investigation you find that setting the hidden parameter _optimizer_distinct_placement to false stops this happening.
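
To make the first example concrete, here is a minimal sketch of the two steps involved; the schema and table names (APP_OWNER, ORDERS) are purely hypothetical:

$ sqlplus -s / as sysdba <<'SQL'
-- hypothetical schema and table, for illustration only
begin
        dbms_stats.set_table_prefs(
                ownname => 'APP_OWNER',
                tabname => 'ORDERS',
                pname   => 'TABLE_CACHED_BLOCKS',
                pvalue  => '16'
        );
        -- one way to regather stats so the clustering_factor of the
        -- table's indexes reflects the new preference
        dbms_stats.gather_table_stats(
                ownname => 'APP_OWNER',
                tabname => 'ORDERS',
                cascade => true
        );
end;
/
SQL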

You may find the distinction unnecessarily fussy, but I'd call the first example "helping the optimizer" – it gives the optimizer some truthful information about your data that is potentially going to result in better decisions in many different statements – and the second example "subverting the optimizer" – you've brute-forced it into not taking a path you didn't like, but at the same time you may have stopped that feature from appearing in other ways or in other queries. Of course, you might have minimised the impact of setting the parameter by using the opt_param() hint to apply the restriction to just this one query (as sketched below); nevertheless, it's possible that there is a better plan for the query that would have used the feature at some other point in the query if you'd managed to do something to help the optimizer rather than constraining it.
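
For completeness, the statement-level version of the second fix might look like this; the query itself is invented purely for illustration:

$ sqlplus -s / as sysdba <<'SQL'
-- hypothetical query: the hint disables the distinct placement
-- transformation for this one statement only
select  /*+ opt_param('_optimizer_distinct_placement','false') */
        t1.owner, count(distinct t2.object_type)
from    t1, t2
where   t2.id = t1.id
group by
        t1.owner;
SQL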

What’s up with the Optimizer

It’s likely that most of the articles will be based around interpreting execution plans since those are the things that tell us what the optimizer thinks will happen when it executes a statement, and within execution plans there are three critical aspects to consider –

  1. the numbers (most particularly Cost and Rows),
  2. the shape of the plan,
  3. the Predicate Information.

I want to use this note to make a couple of points about just the first of the three.

  • First – the estimates on any one line of an execution plan are "per start" of the line; some lines of an execution plan will be called many times in the course of a statement. In many cases the Rows estimate from one line of a plan will dictate the number of times that some other line of the plan will be executed – so a bad estimate of "how much data" can double up as a bad estimate of "how many times", leading to a plan that looks efficient on paper but does far too much work at run-time. A line in a plan that looks a little inefficient may be fine if it executes only once; a line that looks very efficient may be a disaster if it executes a million times. Being able to read a plan and spot the places where the optimizer has produced a poor estimate of Rows is a critical skill – and there are many reasons why the optimizer produces poor estimates. Being able to spot poor estimates depends fairly heavily on knowing the data, but if you know the generic reasons for the optimizer producing poor estimates you've got a head start for recognising and addressing the errors when they appear.
  • Second – Cost is synonymous with Time. For a given instance at a given moment there is a simple, linear relationship between the figure that the optimizer reports for the Cost of a statement (or subsection of a statement) and the Time that the optimizer reports. For many systems (those that have not run the calibrate_io procedure) the Time is simply the Cost multiplied by the time the optimizer thinks it will take to satisfy a single block read request, and the Cost is the optimizer's estimate of the I/O requirement to satisfy the statement – with a fudge factor introduced to recognise the fact that a "single block" read request ought to complete in less time than a "multiblock" read request. Generally speaking the optimizer will consider many possible plans for a statement and pick the plan with the lowest estimated cost – but there is at least one exception to this rule, and it is an unfortunate weakness in the optimizer that there are many valid reasons why its estimates of Cost/Time are poor. Of course, you will note that the values that Oracle reports for the Time column are only accurate to the second – which isn't particularly helpful when a single block read typically completes in a few milliseconds. (A rough worked example of the Cost/Time arithmetic follows this list.)
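
As a rough sketch of that Cost/Time arithmetic, assuming (purely for illustration) that the optimizer is working with a single block read time (sreadtim) of 8 milliseconds:

Time (seconds) ≈ ceiling( Cost * sreadtim / 1000 )

So a plan line with Cost = 5,000 would report Time ≈ ceiling(5,000 * 8 / 1,000) = 40 seconds, which the plan output displays as 00:00:40.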

To a large degree the optimizer’s task boils down to:

  • What’s the volume and scatter of the data I need
  • What access paths, with what wastage, are available to get to that data
  • How much time will I spend on I/O reading (and possibly discarding) data to extract the bit I want

Of course there are other considerations, like the amount of CPU needed for a sort, the potential for extra I/O when sorts or hash joins spill to disk, the time to handle a round-trip to a remote system, and RAC variations on the basic theme. But for many statements the driving issue is that any bad estimates of "how much data" and "how much (real) I/O" will lead to bad, potentially catastrophic, choices of execution plan. In the next article I'll list all the different reasons (that I can think of at the time) why the optimizer can produce bad estimates of volume and time.

References for Cost vs. Time

References for table_cached_blocks:

OAC v105.4: Understanding Map Data Quality

Rittman Mead Consulting - Fri, 2019-10-18 08:58

Last week Oracle Analytics Cloud v105.4 was announced. One of the features particularly interested me, since it reminded me of the story of an Italian couple who wanted to spend their honeymoon in Sydney, Australia, but ended up in the Sydney of Nova Scotia because of a travel agency error. For the funny people out there: don't worry, it wasn't me!

The feature is "Maps ambiguous location matches" and I wanted to write a bit about it.

#OracleAnalytics 105.4 update just about to go live and deploy on your environments. Check-out some of the new features coming. Here is a first list of selected items: https://t.co/Megqz5ekcx. Stay tuned with whole #OAC team (@OracleAnalytics,@BenjaminArnulf...) for more examples pic.twitter.com/CWpj8rC1Bf

— Philippe Lions (@philippe_lions) October 8, 2019

Btw, OAC 105.4 includes a good set of new features, like a unified Home page, the possibility to customize any DV font, and more options for security and on-premises connections, amongst others. For a full list of new features check out the related Oracle blog or videos.

Maps: a bit of History

Let's start with a bit of history. Maps have been around for a long time, first in OBIEE and later in OAC. In the earlier stages of my career I spent quite a lot of time writing HTML and JavaScript to include map visualizations within OBIEE 10g. The basic tool was called MapViewer, and the knowledge & time required to create a custom clickable or drillable map was... huge!

With the rise of OBIEE 11g and 12c the mapping capability became easier: a new "Map" visualization type was included in Answers, and all we had to do was match the geographical reference coming from one of our Subject Areas (e.g. Country Name) with the related column containing the shape information (e.g. the Country Shape).


After doing so, we were able to plot our geographical information properly: adding multiple layers, drilling capabilities and tooltips was just a matter of a few clicks.

The Secret Source: Good Maps and Data Quality

Perfect, you might think: we can easily use maps everywhere, as soon as we have any type of geo-location data available in our dataset! Well, the reality in the old days wasn't like that. Oracle at the time provided some sample maps with a certain level of granularity, covering only some countries in detail. What if we wanted to display all the suburbs of Verona? Sadly that wasn't included, so we were forced to either find a free source online or purchase one from a vendor.

The source of map shapes was only half of the problem: we always needed to create a join with a column coming from our Subject Area! Should we use the zip code? What about the address? Is the city name enough? The deeper we went into the mapping details, the more problems arose.

A common problem (as we saw before with Sydney) was using the city name. How many cities are called the same? How many regions? Is the street name correct? Data quality was, and still is, crucial to providing accurate data and not just a nice but useless map view.

OAC and the Automatic Mapping Capability

Within OAC, DV offers an Automatic Mapping capability: we only need to include in a Project a column containing a geographical reference (lat/long, country name etc.), select "Map" as the visualization type, and the tool will choose the most appropriate mapping granularity that matches our dataset.


Great! This solves all our issues! Well... not all of them! The Automatic Mapping capability doesn't have all the possible maps in it, but we can always include new custom maps using the OAC Console if we need them.

So What's New in 105.4?

All the above was available well before the latest OAC release. 105.4 adds the "Maps ambiguous location matches" feature, which means that every time we create a Map View, OAC will provide us with a Location Matches option.


If we click this option, OAC will show us a simple window where we can see:

  • How many locations matched
  • How many locations have issues
  • What the type of issue is

The type of issue can be one of:

  • No Match, in case OAC doesn't find any comparable geographical value
  • Multiple Matches, when there are multiple possible associations
  • Partial Matches, when there is a match only to part of the content

We can then take this useful information and start a process of data cleaning to raise the quality of our data visualization.

Conclusion

Maps were and still are a really important visualization type in OAC. The "Maps ambiguous location matches" feature provides a way to understand whether our visualization is representative of our dataset. So, if you want to avoid spending your honeymoon in the wrong Sydney, or if you just want to provide accurate maps on top of your dataset, use this feature available in OAC!

Categories: BI & Warehousing

In Defence of Best Practices

Tim Hall - Fri, 2019-10-18 03:38

The subject of “Best Practices” came up again yesterday in a thread on Twitter. This is a subject that rears its head every so often.

I understand all the arguments against the term “Best Practices”. There isn’t one correct way to do things. If there were it would be the only way, or automatic etc. It’s all situational etc. I really do understand all that. I’ve been in this conversation so many times over the years you wouldn’t believe it. I’ve heard all the various sentences and terms people would prefer to use rather than “Best Practice”, but here’s my answer to all that.

“Best practices are fine. Get over yourself and shut up!”

Tim Hall : 18th October 2019

I’ve said this more politely in many other conversations, including endless email chains etc.

When it comes down to it, people need guidance. A good best practice will give some context to suggest it is a starting point, and will give people directions for further information/investigation, but it’s targeted at people who don’t know enough about what they are doing and need help. Without a best practice they will do something really bad, and when shit happens they will blame the product. A good best practice can be the start of a journey for people.

I agree that the “Always do this because ‘just bloody do it!'” style of best practice is bad, but we all know that…

I just find the whole conversation so elitist. I spend half of my life Googling solutions (mostly non-Oracle stuff) and reading best practices, and some of them are really good. Some of them have definitely improved my understanding, and left me in a position where I have a working production system that would otherwise not be working.

I’m sure this post will get a lot of reactions where people try and “explain to me” why I am wrong, and what I’m not understanding about the problems with best practices. As mentioned before, I really do know all that and I think you are wrong, and so do the vast majority of people outside your elitist echo chamber. Want to test that? Try these…

  • Write a post called “Best Practices for {insert subject of your choice}”. It will get more hits than anything else you’ve ever written.
  • Submit a conference session called “Best Practices for {insert subject of your choice}”. Assuming it gets through the paper selection, you will have more bums on seats than you’ve ever had before for that same subject.

Rather than wasting your life arguing about how flawed the term “Best Practices” is, why don’t you just write some good best practices? Show the world how they should be done, and start people on a positive journey. It’s just a term. Seriously. Get over yourself!

Cheers

Tim…

PS. I hope people from yesterday’s tweets don’t think this is directed at them. It’s really not. It’s the subject matter! This really is a subject I’ve revisited so many times over the years…

Updates

Due to repeatedly having to explain myself, here come some points people have raised and my reactions. I’m sure this list will grow as people insist on “educating me” about why I’m wrong.

I prefer “standard” or “normal” to “best”. As I said at the start of the post, I’ve heard just about every potential variation of this, and I just don’t care. They are all the same thing. They are all best practices. It’s just words. Yes, I know what “best” means, but that’s irrelevant. This is a commonly used term in tech and you aren’t getting rid of it, so own it!

I’ve seen people weaponize best practices. OK. So are you saying they won’t weaponize “standard practices” or “normal practices”? They won’t ever say, “So are you telling me you went against normal practices?”. Of course they will. Stupid people/companies will do stupid things regardless of the name.

But it’s not the “best”! Did you even read my post? I’m so tired of this. It’s a best practice to never use hints in SQL. I think that’s pretty solid advice. I do use hints in some SQL, but I always include a comment to explain why. I have deviated from best practice, but documented the reason why. If a person/company wants no deviation from best practice, they can remove it and have shit performance. That’s their choice. I’ve been transparent and explained my deviation. If this is not the way you work, you are wrong, not the best practice.

Most vendor best practice documents are crap. I have some sympathy for this, but I raise tickets against bad documentation, including best practices, and generally the reception to these has been good. The last one was a couple of weeks ago and the company (not Oracle) changed the docs the same day. I always recommend raising an SR/ticket/bug against bad documentation. It doesn’t take much time and you are improving things for yourself and everyone else. I feel like you can’t complain about the quality of the docs if you never point out the faults.

In Defence of Best Practices was first posted on October 18, 2019 at 9:38 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Getting started with Pivotal Telemetry Collector

Pas Apicella - Thu, 2019-10-17 18:44
Pivotal Telemetry Collector is an automated tool that collects data from a series of Pivotal Cloud Foundry (PCF) APIs found within a foundation and securely sends that data to Pivotal. The tool collects:

  • Configuration data from the Ops Manager API.
  • Optional certificate data from the CredHub API.
  • Optional app, task and service instance usage data from the Usage Service API.

Pivotal uses this information to do the following:

  • Improve its products and services.
  • Fix problems.
  • Advise customers on how best to deploy and use Pivotal products.
  • Provide better customer support.
Steps to Run

1. Download the scripts required to run "Pivotal Telemetry Collector" using this URL from Pivotal Network

https://network.pivotal.io/products/pivotal-telemetry-collector/

2. Extract to the file system. You will notice three executables; use the right one for your OS. In my case it was the Mac OSX executable "telemetry-collector-darwin-amd64"

-rwxr-xr-x   1 papicella  staff  14877449  5 Oct 00:42 telemetry-collector-linux-amd64*
-rwxr-xr-x   1 papicella  staff  14771312  5 Oct 00:42 telemetry-collector-darwin-amd64*
-rwxr-xr-x   1 papicella  staff  14447104  5 Oct 00:42 telemetry-collector-windows-amd64.exe*

3. Make sure you have network access to your PCF env. You will need to hit the Operations Manager URL as well as the CF API and Usage Service API endpoints, as shown below

Ops Manager endpoint

$ ping opsmgr-02.haas-yyy.pez.pivotal.io
PING opsmgr-02.haas-yyy.pez.pivotal.io (10.195.1.1): 56 data bytes
64 bytes from 10.195.1.1: icmp_seq=0 ttl=58 time=338.412 ms

CF API endpoint

$ ping api.system.run.haas-yyy.pez.pivotal.io
PING api.system.run.haas-yyy.pez.pivotal.io (10.195.1.2): 56 data bytes
64 bytes from 10.195.1.2: icmp_seq=0 ttl=58 time=380.852 ms

Usage Service API endpoint

$ ping app-usage.system.run.haas-yyy.pez.pivotal.io
PING app-usage.system.run.haas-yyy.pez.pivotal.io (10.195.1.3): 56 data bytes
64 bytes from 10.195.1.3: icmp_seq=0 ttl=58 time=495.996 ms

4. Now you can use this via two options. As you would have guessed, we are using the CLI, given we have downloaded the scripts.

Concourse: https://docs.pivotal.io/telemetry/1-1/using-concourse.html
CLI: https://docs.pivotal.io/telemetry/1-1/using-cli.html

5. So to run our first collection we run the collector script as follows. More information about the CLI options can be found at this link, or by using the help option "./telemetry-collector-darwin-amd64 --help"

https://docs.pivotal.io/telemetry/1-1/using-cli.html

Script Name: run-with-usage.sh

$ ./telemetry-collector-darwin-amd64 collect --url https://opsmgr-02.haas-yyy.pez.pivotal.io/ --username admin --password {PASSWD} --env-type production --output-dir output --usage-service-url https://app-usage.system.run.haas-yyy.pez.pivotal.io/ --usage-service-client-id push_usage_service --usage-service-client-secret {PUSH-USAGE-SERVICE-PASSWORD} --usage-service-insecure-skip-tls-verify --insecure-skip-tls-verify --cf-api-url https://api.system.run.haas-yyy.pez.pivotal.io

Note: You would obtain the PUSH-USAGE-SERVICE-PASSWORD from the Ops Manager PAS tile credentials tab, as shown in the screenshot below.


6. All set, let's try it out

$ ./run-with-usage.sh
Collecting data from Operations Manager at https://opsmgr-02.haas-yyy.pez.pivotal.io/
Collecting data from Usage Service at https://app-usage.system.run.haas-yyy.pez.pivotal.io/
Wrote output to output/FoundationDetails_1571355194.tar
Success!

7. Let's extract the output TAR as follows

$ cd output/
$ tar -xvf FoundationDetails_1571355194.tar
x opsmanager/ops_manager_deployed_products
x opsmanager/pivotal-container-service_resources
x opsmanager/pivotal-container-service_properties
x opsmanager/pivotal-mysql_resources
x opsmanager/pivotal-mysql_properties
x opsmanager/cf_resources
x opsmanager/cf_properties
x opsmanager/p-compliance-scanner_resources
x opsmanager/p-compliance-scanner_properties
x opsmanager/ops_manager_vm_types
x opsmanager/ops_manager_diagnostic_report
x opsmanager/ops_manager_installations
x opsmanager/ops_manager_certificates
x opsmanager/ops_manager_certificate_authorities
x opsmanager/metadata
x usage_service/app_usage
x usage_service/service_usage
x usage_service/task_usage
x usage_service/metadata

8. Now let's view the output, which is a set of JSON files. To do that I simply use the "cat" command and pipe the result to jq, as shown below

$ cat ./output/opsmanager/ops_manager_installations | jq -r
{
  "installations": [
    {
      "additions": [
        {
          "change_type": "addition",
          "deployment_status": "successful",
          "guid": "p-compliance-scanner-a53448be03a372a13d89",
          "identifier": "p-compliance-scanner",
          "label": "Compliance Scanner for PCF",
          "product_version": "1.0.0"
        }
      ],
      "deletions": [],
      "finished_at": "2019-08-30T09:38:29.679Z",
      "id": 25,
      "started_at": "2019-08-30T09:21:44.810Z",
      "status": "failed",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [
        {
          "change_type": "deletion",
          "deployment_status": "pending",
          "guid": "p-compliance-scanner-1905a6707e4f434e315a",
          "identifier": "p-compliance-scanner",
          "label": "Compliance Scanner for PCF",
          "product_version": "1.0.0-beta.25"
        }
      ],
      "finished_at": "2019-08-08T02:10:51.130Z",
      "id": 24,
      "started_at": "2019-08-08T02:09:10.290Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-07-18T12:27:54.301Z",
      "id": 23,
      "started_at": "2019-07-18T11:31:19.781Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": [
        {
          "change_type": "update",
          "deployment_status": "successful",
          "guid": "cf-3095a0a264aa5900d79f",
          "identifier": "cf",
          "label": "Small Footprint PAS",
          "product_version": "2.5.3"
        }
      ]
    },
    {
      "additions": [],
      "deletions": [
        {
          "change_type": "deletion",
          "deployment_status": "pending",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ],
      "finished_at": "2019-07-07T00:16:31.948Z",
      "id": 22,
      "started_at": "2019-07-07T00:04:32.974Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-07-07T00:02:12.003Z",
      "id": 21,
      "started_at": "2019-07-06T23:57:06.401Z",
      "status": "failed",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": [
        {
          "change_type": "update",
          "deployment_status": "failed",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ]
    },
    {
      "additions": [
        {
          "change_type": "addition",
          "deployment_status": "successful",
          "guid": "p-compliance-scanner-1905a6707e4f434e315a",
          "identifier": "p-compliance-scanner",
          "label": "Compliance Scanner for PCF",
          "product_version": "1.0.0-beta.25"
        }
      ],
      "deletions": [],
      "finished_at": "2019-06-10T09:23:19.595Z",
      "id": 20,
      "started_at": "2019-06-10T09:10:44.431Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": []
    },
    {
      "additions": [
        {
          "change_type": "addition",
          "deployment_status": "skipped",
          "guid": "aquasec-1b94477ae275ee81be58",
          "identifier": "aquasec",
          "label": "Aqua Security for PCF",
          "product_version": "1.0.0"
        }
      ],
      "deletions": [],
      "finished_at": "2019-06-06T17:38:18.396Z",
      "id": 19,
      "started_at": "2019-06-06T17:35:34.614Z",
      "status": "failed",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": []
    },
    {
      "additions": [
        {
          "change_type": "addition",
          "deployment_status": "skipped",
          "guid": "aquasec-1b94477ae275ee81be58",
          "identifier": "aquasec",
          "label": "Aqua Security for PCF",
          "product_version": "1.0.0"
        }
      ],
      "deletions": [],
      "finished_at": "2019-06-06T17:33:18.545Z",
      "id": 18,
      "started_at": "2019-06-06T17:21:41.529Z",
      "status": "failed",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        },
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-06-04T11:15:43.546Z",
      "id": 17,
      "started_at": "2019-06-04T10:49:57.969Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        },
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-06-04T10:44:04.018Z",
      "id": 16,
      "started_at": "2019-06-04T10:17:28.230Z",
      "status": "failed",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        },
        {
          "change_type": "unchanged",
          "deployment_status": "failed",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-06-04T09:52:30.782Z",
      "id": 15,
      "started_at": "2019-06-04T09:48:45.867Z",
      "status": "failed",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        },
        {
          "change_type": "unchanged",
          "deployment_status": "failed",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-06-04T09:21:17.245Z",
      "id": 14,
      "started_at": "2019-06-04T09:17:45.360Z",
      "status": "failed",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        },
        {
          "change_type": "unchanged",
          "deployment_status": "failed",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-06-04T08:50:33.333Z",
      "id": 13,
      "started_at": "2019-06-04T08:47:09.790Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        },
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-06-04T08:32:44.772Z",
      "id": 12,
      "started_at": "2019-06-04T08:23:27.386Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        },
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-06-04T08:16:41.757Z",
      "id": 11,
      "started_at": "2019-06-04T08:13:54.645Z",
      "status": "failed",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        },
        {
          "change_type": "unchanged",
          "deployment_status": "failed",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-06-04T01:53:50.594Z",
      "id": 10,
      "started_at": "2019-06-04T01:43:56.205Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": [
        {
          "change_type": "update",
          "deployment_status": "successful",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ]
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-06-04T01:28:22.975Z",
      "id": 9,
      "started_at": "2019-06-04T01:24:52.587Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        },
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-06-03T08:37:25.961Z",
      "id": 8,
      "started_at": "2019-06-03T08:13:07.511Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": [
        {
          "change_type": "update",
          "deployment_status": "successful",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ]
    },
    {
      "additions": [
        {
          "change_type": "addition",
          "deployment_status": "successful",
          "guid": "pas-windows-72031f60ab052fa4d473",
          "identifier": "pas-windows",
          "label": "Pivotal Application Service for Windows",
          "product_version": "2.5.2"
        }
      ],
      "deletions": [],
      "finished_at": "2019-06-03T04:57:06.897Z",
      "id": 7,
      "started_at": "2019-06-03T03:52:13.705Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": []
    },
    {
      "additions": [
        {
          "change_type": "addition",
          "deployment_status": "successful",
          "guid": "pivotal-mysql-0e5d717f1c87c8095c9d",
          "identifier": "pivotal-mysql",
          "label": "MySQL for Pivotal Cloud Foundry v2",
          "product_version": "2.5.4-build.51"
        }
      ],
      "deletions": [],
      "finished_at": "2019-05-22T05:15:55.703Z",
      "id": 6,
      "started_at": "2019-05-22T04:09:49.841Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        },
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "cf-3095a0a264aa5900d79f",
          "identifier": "cf",
          "label": "Small Footprint PAS",
          "product_version": "2.5.3"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-05-22T02:12:22.934Z",
      "id": 5,
      "started_at": "2019-05-22T01:45:28.101Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        },
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "cf-3095a0a264aa5900d79f",
          "identifier": "cf",
          "label": "Small Footprint PAS",
          "product_version": "2.5.3"
        }
      ],
      "updates": []
    },
    {
      "additions": [
        {
          "change_type": "addition",
          "deployment_status": "failed",
          "guid": "cf-3095a0a264aa5900d79f",
          "identifier": "cf",
          "label": "Small Footprint PAS",
          "product_version": "2.5.3"
        }
      ],
      "deletions": [],
      "finished_at": "2019-05-22T00:23:29.844Z",
      "id": 4,
      "started_at": "2019-05-21T23:16:42.418Z",
      "status": "failed",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": []
    },
    {
      "additions": [],
      "deletions": [],
      "finished_at": "2019-05-16T01:50:50.640Z",
      "id": 3,
      "started_at": "2019-05-16T01:45:22.438Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": [
        {
          "change_type": "update",
          "deployment_status": "successful",
          "guid": "pivotal-container-service-5c28f63410227c2221c8",
          "identifier": "pivotal-container-service",
          "label": "Enterprise PKS",
          "product_version": "1.4.0-build.31"
        }
      ]
    },
    {
      "additions": [
        {
          "change_type": "addition",
          "deployment_status": "successful",
          "guid": "pivotal-container-service-5c28f63410227c2221c8",
          "identifier": "pivotal-container-service",
          "label": "Enterprise PKS",
          "product_version": "1.4.0-build.31"
        }
      ],
      "deletions": [],
      "finished_at": "2019-05-15T00:08:32.241Z",
      "id": 2,
      "started_at": "2019-05-14T23:33:58.105Z",
      "status": "succeeded",
      "unchanged": [
        {
          "change_type": "unchanged",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "updates": []
    },
    {
      "additions": [
        {
          "change_type": "addition",
          "deployment_status": "successful",
          "guid": "p-bosh-c1853604618b1b3e10fd",
          "identifier": "p-bosh",
          "label": "BOSH Director",
          "product_version": "2.5.3-build.185"
        }
      ],
      "deletions": [],
      "finished_at": "2019-05-14T23:29:47.525Z",
      "id": 1,
      "started_at": "2019-05-14T23:13:13.244Z",
      "status": "succeeded",
      "unchanged": [],
      "updates": []
    }
  ]
}
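
The installations feed is verbose, so for a quick overview of the deployment history you can flatten it with a jq filter like the following (a sketch; adjust the path to wherever you extracted the TAR):

$ cat ./output/opsmanager/ops_manager_installations | jq -r '.installations[] | "\(.id)\t\(.status)\t\(.started_at)"'
25	failed	2019-08-30T09:21:44.810Z
24	succeeded	2019-08-08T02:09:10.290Z
...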

Optionally, you should send this TAR file output with every ticket/case you create, so support has a good snapshot of what your environment looks like to help diagnose issues for you. The send command takes the path to the TAR file and an API key:

$ ./telemetry-collector-darwin-amd64 send --path output/FoundationDetails_1571355194.tar --api-key {API-KEY}

For the API-KEY, please contact your Pivotal AE or Platform Architect to request one, as the Telemetry team issues API keys to customers.


More Information 

https://docs.pivotal.io/telemetry/1-1/index.html
Categories: Fusion Middleware

Free Oracle Cloud: 14. Your Oracle Cloud Free Trial has expired (but FREE still running)

Dimitri Gielis - Thu, 2019-10-17 16:19
This post is the last post of a series of blog posts on the Best and Cheapest Oracle APEX hosting: Free Oracle Cloud.

Today I got an email saying that my Oracle Cloud account had expired. While I have an Always FREE Oracle Cloud account, when I signed up I also got some extra credits, and those lasted for a month. Those credits are no longer valid.


When you log in to your Oracle Cloud Dashboard you will get a notification at the top too, but it's nothing to worry about.


It has some consequences though: in the menu, some options are grayed out. The one I actually use is Email Delivery, which seems to be grayed out too, although normally you should be able to send 1,000 emails per month. So maybe grayed out also means "not full service".


When I checked it out, it said it's part of the paid plan. I remember some discussions at Oracle Open World where they recommended upgrading to a paid account; as long as you only use the Always FREE services, you are not charged.


So I decided to upgrade to a Paid account: Pay As You Go:


You have to provide a credit card, but that was a bit of an issue for me. Apparently Safari doesn't work really well with this screen, so I switched to Chrome. The next hiccup came when I added my AMEX card... it said it was an invalid card.


Then I used my VISA card and that seemed to work well:

Click the Start Paid Account button:


Finally, it will say your payment method will be reviewed and after that you are live.


It wasn't immediately clear to me that I had to wait for the confirmation email, but when I went to Payment Method again, I saw the review was still in progress:


And a few minutes later I got the email that my account was upgraded:


When you look at your Oracle Cloud Dashboard, there's a cost calculator, so you can see how much you have to pay. As long as I use only the Always FREE components, I expect the amount to stay at 0 :)


But the nice thing now is that you have access to all of Oracle Cloud again (e.g. Email Delivery).
Categories: Development

Funny Gamertags for Xbox That You Don’t Want to Miss

VitalSoftTech - Thu, 2019-10-17 09:49

For a regular Xbox gamer, some cool and funny Gamertags are very easy to come by. In the gaming universe of Xbox, the two essential things you need are 90% smartness and 10% luck, and you will come out the conqueror of the game. However, apart from skill and fortune, you need one […]

The post Funny Gamertags for Xbox That You Don’t Want to Miss appeared first on VitalSoftTech.

Categories: DBA Blogs

All Nippon Airways Builds a Bridge between Japan and the Rest of the World

Oracle Press Releases - Thu, 2019-10-17 09:00
Blog
All Nippon Airways Builds a Bridge between Japan and the Rest of the World

By Guest Author—Oct 17, 2019

All Nippon Airways (ANA) is the largest airline in Japan and one of the world’s leading carriers, carrying over 47 million passengers annually. Japan is already this decade’s fastest-growing major destination for tourism and by 2030, the country expects the number of international visitors to double to 60 million per year.

With many people expected to visit Tokyo for the Olympic and Paralympic Games next year, ANA wishes to bridge Japan and the rest of the world, providing a pleasant air-travel experience for all. “For ANA, this presents an opportunity to showcase our world-class services on a global stage,” says Manabu Yada, Manager, Corporate Office Procurement, Material & Services at ANA.

But the airline industry is crowded, and even at its size, ANA still needs to find ways to stand out. “The challenge we faced was finding a way to ensure our customers could trust they’re getting the best service in the market, while we remain profitable. To adapt to changes in a global business environment and make important business decisions fast, we needed to develop a system to visualize all procurement processes, comply with complex regulations, and optimize all procurement costs,” Yada said.

ANA implemented Oracle ERP Cloud to streamline how it approves invoices and manages its suppliers through automation and social collaboration. “Oracle ERP Cloud covers procurement processes and reduces required transactions. It also provides the tools to improve efficiency and productivity of our procurement team. In addition, it shows us the history of all transactions so we can reduce costs and risks of all processes and increase profitability, while staying compliant with all related regulations,” said Yada.

The airline also intends to use Oracle ERP Cloud for its 88 branches in the future and aims to cut procurement costs for indirect materials by 5%.

Watch the All Nippon Airways video to learn how it is delivering the perfect customer experience.


Read More: Stories from Oracle Cloud

All Nippon Airways is one of thousands of customers on the journey to the cloud. Read about others in Stories from Oracle Cloud: Business Successes.

Oracle Named a Leader in Digital Commerce for Ten Consecutive Evaluations

Oracle Press Releases - Thu, 2019-10-17 07:00
Press Release
Oracle Named a Leader in Digital Commerce for Ten Consecutive Evaluations
Oracle evaluated for its completeness of vision and ability to execute in Gartner Magic Quadrant

Redwood Shores, Calif.—Oct 17, 2019

Oracle has been recognized as a Leader in the Gartner Magic Quadrant for Digital Commerce for the 10th consecutive time.* The report focuses on transformational technologies and approaches in the digital commerce market that are delivering on the future needs of sellers and their customers. The report evaluates vendors based on completeness of vision and ability to execute. A complimentary copy of the report is available here.

“Our number one priority is to create agility for customers as the time to implement, adjust and innovate is paramount. That is why we designed Oracle CX Commerce to be highly customizable using cloud-based extensibility approaches and technologies,” said Ian Davis, vice president of product management, Oracle CX Commerce. “Our customers have complete flexibility whether they want a highly-curated Site Design or are looking to implement a Headless application based on our API-first solution. We do all of this in a backward-compatible way that allows Oracle to deliver more features consistently so customers can easily adopt without adding risk.”

Oracle CX Commerce (or Commerce Cloud as noted in the report) helps online businesses lower their total cost of ownership (TCO) with out-of-the-box essentials and intuitive business tools that support B2C and B2B multisite management, localization, price and promotions, merchandising, search, personalization, and rich asset management. With an API-first modern architecture, businesses can extend captivating buying experiences across any digital display—from mobile and web to voice, AR/VR, and IoT—to increase customer satisfaction with a library of predefined omnichannel journeys, including marketing, sales, returns, service and preorder.

“We consistently see our customers gaining positive momentum, and we attribute this progress to a concerted effort to minimize silos and put the customer at the center of their retail operations,” said Jeff Warren, vice president of strategy and solution management for Oracle Retail. “With commerce being the first touchpoint in most omnichannel journeys, it’s critical to help ensure the data associated with commerce interactions is seamlessly and effortlessly integrated into all aspects of the retail enterprise—from merchandise plan to stores, and supply chain fulfillment. We believe Oracle’s continued placement as a Leader in this Magic Quadrant is a testament to the value we are delivering to our customers on this front.”

Gartner, Magic Quadrant for Digital Commerce, Penny Gillespie, Christina Klock, Mike Lowndes, Sandy Shen, Jason Daigler, Yanna Dharmasthira, August 22, 2019.
*Oracle was previously listed as ATG because Oracle announced its acquisition of the company in November 2010.

Contact Info
Kimberly Guillon
Oracle
209.601.9152
kim.guillon@oracle.com
Gartner Disclaimer

Gartner does not endorse any vendor, product, or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

About Oracle CX

Oracle CX Commerce is part of Oracle Customer Experience (CX), which empowers organizations to take a smarter approach to customer experience management and business transformation initiatives. By providing a trusted business platform that connects data, experiences, and outcomes, Oracle CX Suite helps customers reduce IT complexity, deliver innovative customer experiences and achieve predictable and tangible business results.

About Oracle Retail

Oracle is the modern platform for retail. Oracle provides retailers with a complete, open, and integrated platform for best-of-breed business applications, cloud services, and hardware that are engineered to work together. Leading fashion, grocery, and specialty retailers use Oracle solutions to accelerate from best practice to next practice, drive operational agility, and refine the customer experience. Oracle Retail offers an unmatched suite of applications that empower retailers to pivot their operations to customer, adopt next-practice retail, and invest with intent. For more information, visit www.oracle.com/industries/retail/.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kimberly Guillon

  • 209.601.9152

Excel Roundup Function – A Step By Step Tutorial

VitalSoftTech - Wed, 2019-10-16 10:28

Let’s face it; it’s all fun and games when you are playing with the numbers, but as soon as the decimals start to make an appearance, things start to take a turn. But it doesn’t necessarily have to be that way. With applications like Microsoft Excel at your disposal, handling the decimals has never been […]

The post Excel Roundup Function – A Step By Step Tutorial appeared first on VitalSoftTech.

Categories: DBA Blogs

Creating a customized PostgreSQL container using buildah

Yann Neuhaus - Wed, 2019-10-16 09:09

Quite some time ago I blogged about how you could build your customized PostgreSQL container by using a Dockerfile and docker build. In the meantime Red Hat has replaced Docker in OpenShift, and SUSE has replaced Docker in CaaS as well. As a consequence there need to be other ways of building containers, and one of them is buildah. You can use buildah to build from a Dockerfile as well, but in this post we will use a simple bash script to create the container.

We start by defining four variables for PGDATA, the PostgreSQL major version, the full version string and the minor version, which will be used to create our standard installation location (these will also go into the entrypoint, see below):

#!/bin/bash
_PGDATA="/u02/pgdata"
_PGMAJOR=12
_PGVERSION=12.0
_PGMINOR="db_0"

As mentioned in the beginning, buildah will be used to create the container. For running the container we need something else, and that is podman. You can run the container buildah creates with plain Docker as well, if you want, as it is OCI-compliant, but as Red Hat does not ship Docker anymore we will use the recommended way of doing it: podman. So the natural next step in the script is to install buildah and podman:

dnf install -y buildah podman

Buildah can create containers from scratch, which means you start with a container that contains nothing except some metadata:

newcontainer=$(buildah from scratch)

Once we have the new scratch container it gets mounted, so dnf can be used to install the packages we need into the container without actually running dnf inside the container:

scratchmnt=$(buildah mount $newcontainer)
ls -la $scratchmnt
dnf install --installroot $scratchmnt --releasever 8 bash coreutils gcc openldap-devel platform-python-devel readline-devel bison flex perl-ExtUtils-Embed zlib-devel openssl-devel pam-devel libxml2-devel libxslt-devel bzip2 wget policycoreutils-python-utils make tar --setopt install_weak_deps=false --setopt=tsflags=nodocs --setopt=override_install_langs=en_US.utf8 -y

Using “buildah config” the container can be configured. Here we set the author, environment variables, the default user and the entrypoint that will be used once the container is started:

buildah config --created-by "dbi services"  $newcontainer
buildah config --author "dbi services" --label name=dbiservices $newcontainer
buildah run $newcontainer groupadd postgres
buildah run $newcontainer useradd -g postgres -m postgres
buildah config --user postgres $newcontainer
buildah config --workingdir /home/postgres $newcontainer
buildah config --env PGDATABASE="" $newcontainer
buildah config --env PGUSERNAME="" $newcontainer
buildah config --env PGPASSWORD="" $newcontainer
buildah config --env PGDATA=${_PGDATA} $newcontainer
buildah config --env PGMAJOR=${_PGMAJOR} $newcontainer
buildah config --env PGMINOR=${_PGMINOR} $newcontainer
buildah config --env PGVERSION=${_PGVERSION} $newcontainer
buildah config --entrypoint /usr/bin/entrypoint.sh $newcontainer
buildah copy $newcontainer ./entrypoint.sh /usr/bin/entrypoint.sh
buildah run $newcontainer chmod +x /usr/bin/entrypoint.sh

What follows is basically installing PostgreSQL from source code:

buildah run --user root $newcontainer mkdir -p /u01 /u02
buildah run --user root $newcontainer chown postgres:postgres /u01 /u02
buildah run --user postgres $newcontainer wget https://ftp.postgresql.org/pub/source/v${_PGVERSION}/postgresql-${_PGVERSION}.tar.bz2 -O /home/postgres/postgresql-${_PGVERSION}.tar.bz2
buildah run --user postgres $newcontainer /usr/bin/bunzip2 /home/postgres/postgresql-${_PGVERSION}.tar.bz2
buildah run --user postgres $newcontainer /usr/bin/tar -xvf /home/postgres/postgresql-${_PGVERSION}.tar -C /home/postgres/
buildah run --user postgres $newcontainer /home/postgres/postgresql-${_PGVERSION}/configure --prefix=/u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR} --exec-prefix=/u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR} --bindir=/u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin --libdir=/u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/lib --includedir=/u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/include
buildah run --user postgres $newcontainer /usr/bin/make -C /home/postgres all
buildah run --user postgres $newcontainer /usr/bin/make -C /home/postgres install
buildah run --user postgres $newcontainer /usr/bin/make -C /home/postgres/contrib install

Containers should be as small as possible, so let's do some cleanup:

buildah run --user postgres $newcontainer /usr/bin/rm -rf /home/postgres/postgresql-${_PGVERSION}.tar
buildah run --user postgres $newcontainer /usr/bin/rm -rf /home/postgres/config
buildah run --user postgres $newcontainer /usr/bin/rm -rf /home/postgres/config.log
buildah run --user postgres $newcontainer /usr/bin/rm -rf /home/postgres/config.status
buildah run --user postgres $newcontainer /usr/bin/rm -rf /home/postgres/contrib
buildah run --user postgres $newcontainer /usr/bin/rm -rf /home/postgres/GNUmakefile
buildah run --user postgres $newcontainer /usr/bin/rm -rf /home/postgres/postgresql-${_PGVERSION}
buildah run --user postgres $newcontainer /usr/bin/rm -rf /home/postgres/src
buildah run --user postgres $newcontainer /usr/bin/rm -rf /home/postgres/doc
buildah run --user postgres $newcontainer /usr/bin/rm -rf /home/postgres/Makefile
buildah run --user postgres $newcontainer /usr/bin/rm -rf /home/postgres/.wget-hsts

When you want to run PostgreSQL inside a container you do not need any of the following binaries, so these can be cleaned as well:

buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/vacuumlo
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/vacuumdb
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/reindexdb
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/pgbench
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/pg_waldump
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/pg_test_timing
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/pg_test_fsync
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/pg_standby
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/pg_restore
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/pg_recvlogical
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/pg_receivewal
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/pg_isready
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/pg_dumpall
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/pg_dump
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/pg_checksums
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/pg_basebackup
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/pg_archivecleanup
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/oid2name
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/dropuser
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/dropdb
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/createuser
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/createdb
buildah run --user postgres $newcontainer /usr/bin/rm -rf /u01/app/postgres/product/${_PGMAJOR}/${_PGMINOR}/bin/clusterdb

Last but not least, remove all the packages we do not require anymore and get rid of the dnf cache:

dnf remove --installroot $scratchmnt --releasever 8 gcc openldap-devel readline-devel bison flex perl-ExtUtils-Embed zlib-devel openssl-devel pam-devel libxml2-devel libxslt-devel bzip2 wget policycoreutils-python-utils make tar -y
dnf clean all -y --installroot $scratchmnt --releasever 8
# Clean up yum cache
if [ -d "${scratchmnt}" ]; then
rm -rf "${scratchmnt}"/var/cache/yum
fi
buildah unmount $newcontainer

Ready to publish the container:

buildah commit $newcontainer dbi-postgres

When you put all those steps into a script and run it, you should see the newly created container:

[root@doag2019 ~]$ buildah containers
CONTAINER ID  BUILDER  IMAGE ID     IMAGE NAME                       CONTAINER NAME
47946e4b4fc8     *                  scratch                          working-container
[root@doag2019 ~]$

… but now we also have a new image that can be started:

[root@doag2019 ~]$ buildah images
IMAGE NAME                                               IMAGE TAG            IMAGE ID             CREATED AT             SIZE
localhost/dbi-postgres                                   latest               dfcd3e8d5273         Oct 13, 2019 13:22     461 MB

Once we start it, the entrypoint will be executed:

#!/bin/bash
# these are the environment variables which need to be set
PGDATA=${PGDATA}/${PGMAJOR}
PGHOME="/u01/app/postgres/product/${PGMAJOR}/${PGMINOR}"
PGAUTOCONF=${PGDATA}/postgresql.auto.conf
PGHBACONF=${PGDATA}/pg_hba.conf
PGDATABASENAME=${PGDATABASE}
PGUSERNAME=${PGUSERNAME}
PGPASSWD=${PGPASSWORD}
# create the database and the user
_pg_create_database_and_user()
{
    ${PGHOME}/bin/psql -c "create user ${PGUSERNAME} with login password '${PGPASSWD}'" postgres
    ${PGHOME}/bin/psql -c "create database ${PGDATABASENAME} with owner = ${PGUSERNAME}" postgres
    ${PGHOME}/bin/psql -c "create extension pg_stat_statements" postgres
}
# start the PostgreSQL instance
_pg_prestart()
{
    ${PGHOME}/bin/pg_ctl -D ${PGDATA} -w start
}
# Start PostgreSQL without detaching
_pg_start()
{
    exec ${PGHOME}/bin/postgres "-D" "${PGDATA}"
}
# stop the PostgreSQL instance
_pg_stop()
{
    ${PGHOME}/bin/pg_ctl -D ${PGDATA} stop -m fast
}
# initdb a new cluster
_pg_initdb()
{
    ${PGHOME}/bin/initdb -D ${PGDATA} --data-checksums
}
# adjust the postgresql parameters
_pg_adjust_config() {
    if [ -z "${PGMEMORY}" ]; then MEM="128MB"
    else                          MEM=${PGMEMORY}; fi
    # PostgreSQL parameters
    echo "shared_buffers='$MEM'" >> ${PGAUTOCONF}
    echo "effective_cache_size='128MB'" >> ${PGAUTOCONF}
    echo "listen_addresses = '*'" >> ${PGAUTOCONF}
    echo "logging_collector = 'off'" >> ${PGAUTOCONF}
    echo "log_truncate_on_rotation = 'on'" >> ${PGAUTOCONF}
    echo "log_line_prefix = '%m - %l - %p - %h - %u@%d '" >> ${PGAUTOCONF}
    echo "log_directory = 'pg_log'" >> ${PGAUTOCONF}
    echo "log_min_messages = 'WARNING'" >> ${PGAUTOCONF}
    echo "log_autovacuum_min_duration = '60s'" >> ${PGAUTOCONF}
    echo "log_min_error_statement = 'NOTICE'" >> ${PGAUTOCONF}
    echo "log_min_duration_statement = '30s'" >> ${PGAUTOCONF}
    echo "log_checkpoints = 'on'" >> ${PGAUTOCONF}
    echo "log_statement = 'none'" >> ${PGAUTOCONF}
    echo "log_lock_waits = 'on'" >> ${PGAUTOCONF}
    echo "log_temp_files = '0'" >> ${PGAUTOCONF}
    echo "log_timezone = 'Europe/Zurich'" >> ${PGAUTOCONF}
    echo "log_connections=on" >> ${PGAUTOCONF}
    echo "log_disconnections=on" >> ${PGAUTOCONF}
    echo "log_duration=off" >> ${PGAUTOCONF}
    echo "client_min_messages = 'WARNING'" >> ${PGAUTOCONF}
    echo "wal_level = 'replica'" >> ${PGAUTOCONF}
    echo "wal_compression=on" >> ${PGAUTOCONF}
    echo "max_replication_slots=20" >> ${PGAUTOCONF}
    echo "max_wal_senders=20" >> ${PGAUTOCONF}
    echo "hot_standby_feedback = 'on'" >> ${PGAUTOCONF}
    echo "cluster_name = '${PGDATABASENAME}'" >> ${PGAUTOCONF}
    # note: this overrides the max_replication_slots=20 entry above,
    # as the last occurrence in postgresql.auto.conf wins
    echo "max_replication_slots = '10'" >> ${PGAUTOCONF}
    echo "work_mem=8MB" >> ${PGAUTOCONF}
    echo "maintenance_work_mem=64MB" >> ${PGAUTOCONF}
    echo "shared_preload_libraries='pg_stat_statements'" >> ${PGAUTOCONF}
    echo "autovacuum_max_workers=6" >> ${PGAUTOCONF}
    echo "autovacuum_vacuum_scale_factor=0.1" >> ${PGAUTOCONF}
    echo "autovacuum_vacuum_threshold=50" >> ${PGAUTOCONF}
    echo "archive_mode=on" >> ${PGAUTOCONF}
    echo "archive_command='/bin/true'" >> ${PGAUTOCONF}
    # Authentication settings in pg_hba.conf
    echo "host    all             all             0.0.0.0/0            md5"  >> ${PGHBACONF}
}
# initialize and start a new cluster
_pg_init_and_start()
{
    # initialize a new cluster
    _pg_initdb
    # set params and access permissions
    _pg_adjust_config
    # start the new cluster
    _pg_prestart
    # set username and password
    _pg_create_database_and_user
    # restart database with correct pid
    _pg_stop
    _pg_start
}
# check if $PGDATA exists
if [ -e ${PGDATA} ]; then
    # when $PGDATA exists we need to check if there are files
    # because when there are files we do not want to initdb
    if [ -e "${DEBUG}" ]; then
        /bin/bash
    elif [ -e "${PGDATA}/base" ]; then
        # when there is the base directory this
        # probably is a valid PostgreSQL cluster
        # so we just start it
        _pg_start
    else
        # when there is no base directory then we
        # should be able to initialize a new cluster
        # and then start it
        _pg_init_and_start
    fi
else
    # create PGDATA
    mkdir -p ${PGDATA}
    # initialize and start the new cluster
    _pg_init_and_start
fi

Starting that up using podman:

[root@doag2019 ~]$ podman run -e PGDATABASE=test -e PGUSERNAME=test -e PGPASSWORD=test --detach -p 5432:5432 localhost/dbi-postgres
f933df8216de83b3c2243860ace02f231748a05273c16d3ddb0308231004552f
[root@doag2019 ~]$ podman ps
CONTAINER ID  IMAGE                          COMMAND               CREATED             STATUS             PORTS                   NAMES
f933df8216de  localhost/dbi-postgres:latest  /bin/sh -c /usr/b...  About a minute ago  Up 59 seconds ago  0.0.0.0:5432->5432/tcp  nervous_leavitt

… and connecting from the host system:

[root@doag2019 ~]$ psql -p 5432 -h localhost -U test test
Password for user test:
psql (10.6, server 12.0)
WARNING: psql major version 10, server major version 12.
Some psql features might not work.
Type "help" for help.
test=> select version();
version
--------------------------------------------------------------------------------------------------------
PostgreSQL 12.0 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.2.1 20180905 (Red Hat 8.2.1-3), 64-bit
(1 row)
test=> \q

Once you have that scripted and ready it is a very convenient way of creating images. What I like most is that you can make changes afterwards without starting from scratch:

[root@doag2019 ~]$ podman inspect localhost/dbi-postgres
[
{
"Id": "dfcd3e8d5273116e5678806dfe7bbf3ca2276549db73e62f27b967673df8084c",
"Digest": "sha256:b2d65e569becafbe64e8bcb6d49b065188411f596c04dea2cf335f677e2db68e",
"RepoTags": [
"localhost/dbi-postgres:latest"
],
"RepoDigests": [
"localhost/dbi-postgres@sha256:b2d65e569becafbe64e8bcb6d49b065188411f596c04dea2cf335f677e2db68e"
],
"Parent": "",
"Comment": "",
"Created": "2019-10-13T11:22:15.096957689Z",
"Config": {
"User": "postgres",
"Env": [
"PGDATABASE=",
"PGUSERNAME=",
"PGPASSWORD=",
"PGDATA=/u02/pgdata",
"PGMAJOR=12",
"PGMINOR=db_0",
"PGVERSION=12.0"
],
"Entrypoint": [
"/bin/sh",
"-c",
"/usr/bin/entrypoint.sh"
],
"WorkingDir": "/home/postgres",
"Labels": {
"name": "dbiservices"
}
},
"Version": "",
"Author": "dbiservices",
"Architecture": "amd64",
"Os": "linux",
"Size": 460805033,
"VirtualSize": 460805033,
"GraphDriver": {
"Name": "overlay",
"Data": {
"MergedDir": "/var/lib/containers/storage/overlay/89de699f19781bb61eec12cf61a097a9daa31d7725fc3c078c76d0d6291cb074/merged",
"UpperDir": "/var/lib/containers/storage/overlay/89de699f19781bb61eec12cf61a097a9daa31d7725fc3c078c76d0d6291cb074/diff",
"WorkDir": "/var/lib/containers/storage/overlay/89de699f19781bb61eec12cf61a097a9daa31d7725fc3c078c76d0d6291cb074/work"
}
},
"RootFS": {
"Type": "layers",
"Layers": [
"sha256:89de699f19781bb61eec12cf61a097a9daa31d7725fc3c078c76d0d6291cb074"
]
},
"Labels": {
"name": "dbiservices"
},
"Annotations": {},
"ManifestType": "application/vnd.oci.image.manifest.v1+json",
"User": "postgres",
"History": [
{
"created": "2019-10-13T11:22:15.096957689Z",
"created_by": "dbi services",
"author": "dbiservices"
}
]
}
]

Assume we want to add a new environment variable. All we need to do is this:

[root@doag2019 ~]$ buildah containers
CONTAINER ID  BUILDER  IMAGE ID     IMAGE NAME                       CONTAINER NAME
47946e4b4fc8     *                  scratch                          working-container
[root@doag2019 ~]$ buildah config --env XXXXXXX="xxxxxxxx" 47946e4b4fc8
[root@doag2019 ~]$ buildah commit 47946e4b4fc8 dbi-postgres
Getting image source signatures
Skipping fetch of repeat blob sha256:9b74f2770486cdb56539b4a112b95ad7e10aced3a2213d33878f8fd736b5c684
Copying config sha256:e2db86571bfa2e64e6079077fe023e38a07544ccda529ba1c3bfc04984f2ac74
606 B / 606 B [============================================================] 0s
Writing manifest to image destination
Storing signatures
e2db86571bfa2e64e6079077fe023e38a07544ccda529ba1c3bfc04984f2ac74

The new image with the new variable is ready:

[root@doag2019 ~]$ buildah images
IMAGE NAME                                               IMAGE TAG            IMAGE ID             CREATED AT             SIZE
                                                                              dfcd3e8d5273         Oct 13, 2019 13:22     461 MB
localhost/dbi-postgres                                   latest               e2db86571bfa         Oct 13, 2019 13:52     461 MB
[root@doag2019 ~]$ buildah inspect localhost/dbi-postgres
...
"Env": [
"PGDATABASE=",
"PGUSERNAME=",
"PGPASSWORD=",
"PGDATA=/u02/pgdata",
"PGMAJOR=12",
"PGMINOR=db_0",
"PGVERSION=12.0",
"XXXXXXX=xxxxxxxx"
],
...

Nice. If you are happy with the image, the scratch container can be deleted.
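
Removing it is a single call; a minimal sketch, assuming the $newcontainer variable from the build script is still set (otherwise pass the name or id reported by "buildah containers"):

# remove the working container once the image has been committed
buildah rm $newcontainer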

Cet article Creating a customized PostgreSQL container using buildah est apparu en premier sur Blog dbi services.

Clustering_Factor

Jonathan Lewis - Wed, 2019-10-16 08:07

Originally drafted July 2018

“How do you find out what the clustering_factor of an index would be without first creating the index ?”

I’m not sure this is really a question worth asking or answering[1], but since someone asked it (and given the draft date I have no idea who, where, when or why), here’s an answer for simple heap tables in the good old days before Oracle made public the table_cached_blocks preference. It works by sorting the columns you want in the index together with the table rowid, and then comparing the file/block component of the rowid (cutting the relevant characters from the string representation of the rowid) with the previous one to see if the current row is in the same block as the previous row.  If the row is in a different block we count one, otherwise zero.  Finally we sum the ones.

In the demo I’ve copied a little data from view all_objects, with the intention of creating an index on object_name. After running my counting query I’ve then created the index and checked its clustering_factor to show that I’ve got a match.


rem
rem     Script: clustering_factor_est.sql
rem     Author: J.P.Lewis
rem     Dated:  July 2018
rem
create table t1
as
select  *
from    all_objects
where   rownum <= 10000
;

prompt  ====================
prompt  Tablescan with lag()
prompt  ====================

select
        sum(count_chg)
from    (
select
        case
                when substr(rowid,7,9) <> lag(substr(rowid,7,9),1,'000000000') over(order by object_name, rowid)
                        then 1
                        else 0
        end     count_chg
from t1
)
;

prompt  ======================
prompt  create index and check
prompt  ======================

create index t1_i1 on t1(object_name);

select  clustering_factor
from    user_indexes 
where   table_name = 'T1'
and     index_name = 'T1_I1'
;

Pasting the output from running the above:


Table created.

====================
Tablescan with lag()
====================

SUM(COUNT_CHG)
--------------
          3901

1 row selected.

======================
create index and check
======================

Index created.


CLUSTERING_FACTOR
-----------------
             3901

1 row selected.


This will work for a global index on a partitioned table, but will give meaningless answers for globally partitioned indexes and local indexes. Furthermore it’s not a mechanism that lends itself to calculating the clustering_factor if you’ve set the table_cached_blocks preference to a value other than 1.

[1] Given the amount of work it would take to run the query to check the clustering_factor you might as well create the index (invisible, perhaps, and nologging if that doesn’t cause problems) – which also gives you the option of setting the table_cached_blocks preference and gathering stats to see how the clustering_factor varies.

Update (shortly after posting)

It has occurred to me that if you have a “nosegment” index that you’ve been using to test whether or not the optimizer would use it IF you created it, then the one piece of information that is missing from the data dictionary for that index is its clustering_factor. (For a multicolumn index you can get a valid value for distinct_keys by creating a column group on the set of columns – which would otherwise be faked when you create the nosegment index.) This code might allow you to write a clustering_factor to the index definition that would give you a better idea of whether the optimizer would use it if you created it.  (To be tested.)
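
If that write is possible it would presumably go through dbms_stats. Here is a minimal sketch (untested, per the caveat above) using the set_index_stats procedure with the figure from the demo; clstfct is the parameter that carries the clustering_factor:

rem     a sketch only - writes the supplied clustering_factor
rem     into the data dictionary for the named index

begin
        dbms_stats.set_index_stats(
                ownname => user,
                indname => 'T1_I1',
                clstfct => 3901
        );
end;
/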


Which Version Number Corresponds Which PeopleTools Object?

David Kurtz - Tue, 2019-10-15 10:15
Recently somebody asked me "Our “CRM” values in PSLOCK and PSVERSION are growing tremendously and we don’t know why. We will be opening a case with Oracle but … do you know what that “CRM” value is related to? We are not using the CRM product in our PeopleSoft installation."
There is some documentation on the Oracle Support website, but it is not exhaustive and may not be completely up to date.
The page in the PTRef utility that describes the relationship of version numbers to PeopleTools tables is one of the few static pages in the tool.  I have now updated it with the information in the above Oracle support notes, but there are other version numbers.
In the previous blog post, I showed how to increment version numbers before updating PeopleTools objects.  I knew RDM (the Record Definition Manager) is the OBJECTTYPENAME for PSRECDEFN because I worked that out by tracing Application Designer while it saved a record change.  That remains the only completely reliable way to determine the relationship.
However, version number matching is also a useful technique, though when it does not provide a completely definitive answer, it dramatically cuts down the amount of work then necessary.
I have written a little PL/SQL code, delivered with PTRef, that extracts the maximum version number for each PeopleTools table in PeopleSoft (other than the PS%DEL tables) and stores it on a working storage table (I used the PLAN_TABLE because it is always there on an Oracle database).  Then you can compare the version number on PSLOCK/PSVERSION with the maximum version on the PeopleTools object.
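The comparison itself is then a simple query. Here is a minimal sketch that matches one collected maximum at a time (PSRECDEFN is just an illustrative choice of table):

-- which version counters match the highest version on PSRECDEFN?
SELECT v.objecttypename, v.version
FROM   psversion v
WHERE  v.version = (SELECT MAX(version) FROM psrecdefn);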
If the version number is 1, you can’t do the match because the version number has never been incremented, at least since it was last reset by the VERSION application engine.
If the version is only updated occasionally you may have some trouble with duplicate matches. In my example, 3 tables have a maximum version number of 80, while only one version number is 80.
RECNAME                   MAX        CNT
------------------ ---------- ----------
PSAEAPPLDEFN               80       3504
PSMSGNODEDEFN              80         78
PSOPRVERDFN                80       1468
I would guess that OPRVM matches PSOPRVERDFN, and the above support notes confirm this, but otherwise, you would have to check that manually with trace.
OBJECTTY    VERSION
-------- ----------
OPRVM            80
Higher version numbers are easier to match because they are less likely to have duplicate matches.
So to return to the original question, what is CRM?  In my sample system, version numbers CRM and SCATM are both 3.
OBJECTTY    VERSION
-------- ----------
CRM               3
SCATM             3
However, only PSCONTDEFN has a maximum version of 3.
RECNAME                   MAX        CNT
------------------ ---------- ----------
PSCONTDEFN                  3       7567
Again not a perfect match, but again Oracle Support Note 664848.1 confirms that CRM corresponds to PSCONTDEFN.  CRM stands for Content Registry Manager.
So the question now becomes what is updating the content definitions, and hence increasing the version number?  It turned out to be an entity registry synchronisation process that was run daily.
It is perfectly legitimate for many updated rows on a PeopleTools table to be given the same version number.  The version number does not need to be incremented again for each row being updated, but then the row-level locks on PSVERSION and PSLOCK created by their updates must be held until the update on the PeopleTools table is committed.  That can increase contention on the version number update.  The alternative is to commit after each update and then increment the version numbers again.  Many PeopleSoft processes do exactly that, and it can, in turn, lead to a massive increase in some version numbers.

Marks & Spencer Sparks, a Walkthrough

VitalSoftTech - Tue, 2019-10-15 10:02

As North Americans have their Walmart & Targets, the British have their own one-stop shop called Marks & Spencer established over one hundred years ago. It is a retail store for all home goods. It is a household name in Britain that caters to all kinds of domestic needs. A business as large as this […]

The post Marks & Spencer Sparks, a Walkthrough appeared first on VitalSoftTech.

Categories: DBA Blogs

New Study: 64% of People Trust a Robot More Than Their Manager

Oracle Press Releases - Tue, 2019-10-15 08:00
Press Release
New Study: 64% of People Trust a Robot More Than Their Manager Global research highlights how AI is changing the relationship between people and technology at work

Redwood Shores, Calif.—Oct 15, 2019

People have more trust in robots than their managers, according to the second annual AI at Work study conducted by Oracle and Future Workplace, a research firm preparing leaders for disruptions in recruiting, development and employee engagement. The study of 8,370 employees, managers and HR leaders across 10 countries, found that AI has changed the relationship between people and technology at work and is reshaping the role HR teams and managers need to play in attracting, retaining and developing talent.

AI is Changing the Relationship Between People and Technology at Work

Contrary to common fears around how AI will impact jobs, employees, managers and HR leaders across the globe are reporting increased adoption of AI at work and many are welcoming AI with love and optimism.

  • AI is becoming more prominent with 50 percent of workers currently using some form of AI at work compared to only 32 percent last year. Workers in China (77 percent) and India (78 percent) have adopted AI over 2X more than those in France (32 percent) and Japan (29 percent).
  • The majority (65 percent) of workers are optimistic, excited and grateful about having robot co-workers and nearly a quarter report having a loving and gratifying relationship with AI at work.
  • Workers in India (60 percent) and China (56 percent) are the most excited about AI, followed by the UAE (44 percent), Singapore (41 percent), Brazil (32 percent), Australia/New Zealand (26 percent), Japan (25 percent), U.S. (22 percent), UK (20 percent) and France (8 percent).
  • Men have a more positive view of AI at work than women with 32 percent of men optimistic vs. 23 percent of women.

Workers Trust Robots More Than Their Managers

The increasing adoption of AI at work is having a significant impact on the way employees interact with their managers. As a result, the traditional role of HR teams and the manager is shifting.

  • 64 percent of people would trust a robot more than their manager and half have turned to a robot instead of their manager for advice.
  • Workers in India (89 percent) and China (88 percent) are more trusting of robots over their managers, followed by Singapore (83 percent), Brazil (78 percent), Japan (76 percent), UAE (74 percent), Australia/New Zealand (58 percent), U.S. (57 percent), UK (54 percent) and France (56 percent).
  • More men (56 percent) than women (44 percent) have turned to AI over their managers.
  • 82 percent of people think robots can do things better than their managers.
  • When asked what robots can do better than their managers, survey respondents said robots are better at providing unbiased information (26 percent), maintaining work schedules (34 percent), problem solving (29 percent) and managing a budget (26 percent).
  • When asked what managers can do better than robots, workers said the top three tasks were understanding their feelings (45 percent), coaching them (33 percent) and creating a work culture (29 percent).

AI is Here to Stay: Organizations Need to Simplify and Secure AI to Stay Competitive

The impact of AI at work is only just beginning and in order to take advantage of the latest advancements in AI, organizations need to focus on simplifying and securing AI at work or risk being left behind.

  • 76 percent of workers (and 81 percent of HR leaders) find it challenging to keep up with the pace of technological changes in the workplace.
  • Workers want a simplified experience with AI at work, asking for a better user interface (34 percent), best practice training (30 percent) and an experience that is personalized to their behavior (30 percent).
  • Security (31 percent) and privacy (30 percent) are the main concerns preventing workers from using AI at work.
  • Digital natives Gen Z (43 percent) and Millennials (45 percent) are more concerned about privacy and security at work than Gen X (29 percent) and Baby Boomers (23 percent).

Supporting quotes

“The latest advancements in machine learning and artificial intelligence are rapidly reaching mainstream, resulting in a massive shift in the way people across the world interact with technology and their teams. As this study shows, the relationship between humans and machines is being redefined at work, and there is no one-size-fits-all approach to successfully managing this change. Instead, organizations need to partner with their HR organization to personalize the approach to implementing AI at work in order to meet the changing expectations of their teams around the world,” said Emily He, SVP, Human Capital Management Cloud Business Group, Oracle.

“Over the past two years we’ve found that workers have become more optimistic as they’ve adopted AI in the workplace and HR is leading the way. The 2019 study shows that AI is redefining not only the relationship between worker and manager, but also the role of a manager in an AI-driven workplace. Based on the findings, managers will remain relevant in the future if they focus on being human and using their soft skills, while leaving the technical skills and routine tasks to robots,” said Dan Schawbel, Research Director at Future Workplace.

"Our 2019 results reveal that forward looking companies are already capitalizing on the power of AI," said Jeanne Meister Founding Partner, Future Workplace. “As workers and managers leverage the power of artificial intelligence in the workplace, they are moving from fear to enthusiasm as they see the possibility of being freed of many of their routine tasks and having more time to solve critical business problems for the enterprise.”

Learn more about this global study here.

Methodology

Research findings are based on a global survey conducted by Savanta between July 2 and August 9, 2019. In total, 8,370 people completed the survey. The study was administered online and fielded in 10 different countries (and in six languages). Permanent full-time employees between 18 and 74 years old were eligible to participate. The survey targeted HR Leaders, Managers and Employees. Respondents are recruited through a number of different mechanisms, via different sources, to join the panels and participate in market research surveys. All panelists have passed a double opt-in process and complete on average 300 profiling data points prior to taking part in surveys. Respondents are invited to take part via email and are provided with a small monetary incentive for doing so.

Contact Info
Celina Bertallee
Oracle
559-283-2425
celina.bertallee@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

About Future Workplace

Future Workplace is an executive development firm dedicated to rethinking and re-imagining the workplace. Future Workplace works with heads of talent management, human resources, corporate learning, and diversity to prepare for the changes impacting recruitment, employee development, and engagement. Future Workplace is host to the 2020 Workplace Network, an Executive Council that includes 50 plus heads of Corporate Learning, Talent, and Human Resources who come together to discuss debate and share “next” practices impacting the workplace and workforce of the future. For more information, please visit: http://www.futureworkplace.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Celina Bertallee

  • 559-283-2425

Oracle Equips Case Workers and Citizens to Address Complex Societal Problems

Oracle Press Releases - Tue, 2019-10-15 07:00
Press Release
Oracle Equips Case Workers and Citizens to Address Complex Societal Problems Comprehensive case management platform brings real-time data to bear on decisions by health and human services personnel

Redwood Shores, Calif.—Oct 15, 2019

Taking a new people-centric approach to providing support and care, Oracle offers new services to citizens in need. New digital case management capabilities, now available in Oracle Customer Experience (CX) for Public Sector, provide a modern platform to help citizens, case workers, and third party social service providers deal with complex societal issues and achieve positive outcomes.

By offering real-time access to vital data, Oracle’s solution will enable state and local organizations to take a more people-centric approach to providing support and care—an approach that has not previously been possible due to the limitations of older technologies. Care providers will be able to effectively launch new services for their citizens, and to easily manage complex interactions for cases that span many years and complex, changing family circumstances.

“Today’s digitally-empowered citizens demand easy access to health and human services agencies for critical support and care. And case workers expect accessible tools that can manage complex cases that often evolve over extended periods,” said Rob Tarkoff, executive vice president of Oracle CX product development. “Until now, there has been a serious gap in the comprehensiveness and sophistication of solutions to serve many of the needs of this market. With Oracle, case workers can now have real-time snapshots of client cases along with intelligent recommendations for next steps - even in the most complex situations. This enables workers to take immediate and potentially life-saving actions.”

The Oracle solution transforms traditional ways social service agencies have managed cases and served customers through the integration of modern digital technologies and channels. The platform integrates technologies spanning cloud, digital outreach, omni-channel service, case management, mobile, social, Internet of Things, and Artificial Intelligence while ensuring maximum security and information privacy.

The digital enablement of complex case management not only serves state and local health and human service organizations, but also other government agencies in areas as diverse as immigration, taxation, benefits, and city and state citizen engagement.

Oracle CX solutions for health and human services combine decades of experience with a broad cloud service portfolio that enables current IT infrastructures with a proven path to digital transformation. It offers comprehensive digital engagement solutions for several aspects of citizen engagement including real-time personalized advice, self-service portals with embedded knowledge management, and a digital application process for benefit programs which is accessible through mobile, web, social channels, chatbots, call centers, and local offices.

For more information, go to https://www.oracle.com/applications/customer-experience/industries/public-sector/

Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
Kristin Reeves
Oracle
+1.925.787.6744
kris.reeves@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

Kristin Reeves

  • +1.925.787.6744

Top Universities Make the Grade with Oracle Cloud

Oracle Press Releases - Tue, 2019-10-15 07:00
Press Release
Top Universities Make the Grade with Oracle Cloud DePaul University and Ithaca College look to Oracle to cut costs and streamline operations

EDUCAUSE, Chicago—Oct 15, 2019

Higher education institutions are rapidly digitizing to meet industry demands for new models of teaching, learning and research. Those institutions seeking to build capacity for long-term innovation are increasingly moving to the cloud to compete in this evolving environment, as institutions across the U.S. continue to look to Oracle Cloud to increase business efficiency and improve student outcomes.

Oracle brings unparalleled advantages and opportunities to the Higher Education market, including the ability to reduce operational and student costs and help ensure fiscal sustainability.

Universities recently adopting Oracle Cloud include:

  • Ithaca College, a private liberal arts college founded over 125 years ago, will consolidate all of its applications with Oracle to operate on a single cloud platform. It is currently implementing Oracle’s E-Business Suite and HCM Cloud solutions, and is expanding to automate and migrate its financial and planning functions to the cloud. With the addition of Oracle ERP Cloud and EPM Cloud, Ithaca College expects to see an increase in staff and business efficiency, along with enhanced analysis, monitoring and management reporting throughout the planning and budgeting process.

    “Oracle’s successful track record helping higher education institutions transition to the cloud inspired us to expand our partnership with them,” said David Weil, associate vice president and chief information officer, Ithaca College. “We’re confident that Oracle Cloud will enable us to streamline our operations to even better serve our students.”

  • DePaul University, the largest Catholic university in the country, will implement Oracle ERP Cloud and HCM Cloud and integrate them with its existing Oracle PeopleSoft Campus Solutions application. The interest to move to a modern system was based on the university’s desire to adopt best practices for business processes, and new capabilities such as AI and embedded intelligence to drive better efficiency and sustainability.

    “We are proud to be the first institution using Oracle’s PeopleSoft solution to migrate to cloud in the Chicagoland area,” said Bob McCormick, vice president for Information Services of DePaul University. “At DePaul, we are looking forward to modernizing our business and demonstrating leadership in the higher education industry.”

“From financial pressures, to rising student expectations, the higher education industry is rapidly evolving. Oracle is committed to helping these institutions adapt and thrive through the change,” said Keith Rajecki, vice president of Oracle Public Sector, Education and Research. “We are proud to be partnering with Ithaca College and DePaul University to provide them with the industry’s most complete, scalable cloud to transform their operations and provide the agility they need to enhance financial planning functions.”

Oracle’s EPM Suite provides data on demand, enabling institutions to use their time more effectively and provide real business value. The solution elevates business outcomes through embedded intelligence and advanced analytics, while also helping to lower costs and gain continuous innovation.

Oracle ERP Cloud provides accelerated business outcomes and real-time data for predictive insights, improving decision-making and performance management to help drive operational excellence across the institution.

Oracle’s HCM Cloud provides a powerful single, global human resources solution that helps streamline processes, meet local compliance needs and engage workforces throughout the entire talent lifecycle.

Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
Kristin Reeves
Oracle
+1.925.787.6744
kris.reeves@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

Kristin Reeves

  • +1.925.787.6744

Michigan State University Amplifies Student Success through the Cloud

Oracle Press Releases - Tue, 2019-10-15 07:00
Press Release
Michigan State University Amplifies Student Success through the Cloud Leading research university invests in Oracle Cloud

EDUCAUSE, Chicago—Oct 15, 2019

Michigan State University, a leading higher education and research institution, has engaged Oracle to replace its mainframe student information system infrastructure. This modernization project will help the university address challenges around fiscal sustainability, transforming the student experience and building capacity for long-term institutional innovation. To reach these goals, the university charted an incremental path to the cloud and will be implementing several Oracle Cloud solutions, including Oracle Student Financial Planning (SFP). Michigan State University also selected Exadata Cloud at Customer to operate PeopleSoft Campus Solutions in one hybrid cloud environment.

“Our partnership with Michigan State confirms our investment, commitment and success in the higher education market,” said Keith Rajecki, vice president of Oracle Public Sector, Education and Research. “We offer a complete and market-leading set of applications and technology and we are continuing to invest in research and development efforts.”

For the past 160 years, Michigan State has pushed the boundaries of discovery and forged partnerships to solve the most pressing global challenges while providing life-changing opportunities to a diverse and inclusive academic community. The university joins an increasing number of higher education institutions moving to the cloud following a practical path that maps to its goals of improving student outcomes and institutional standing, achieving operational efficiency, and optimizing decision making through enhanced insights, all while also reducing compliance costs.

“With higher education facing increasing student expectations, changing fiscal realities, and the need to future-proof their institutions, the adoption of cloud technologies and applications is proving to be a key component of overall institutional success,” said Vivian Wong, group vice president of Higher Education Development at Oracle. “Oracle’s unified stack of solutions, including infrastructure through to business and purpose-built higher education applications, offer institutions, like Michigan State, the ability to choose an incremental path to cloud—delivering immediate value while also considering their unique contextual requirements,” she concluded.

Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
Kristin Reeves
Oracle
+1.925.787.6744
kris.reeves@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

Kristin Reeves

  • +1.925.787.6744

Understanding PeopleTools Object Version Numbers

David Kurtz - Mon, 2019-10-14 13:36
I was recently asked a question about PeopleSoft version numbers, but before I address that directly, I think it would be useful to explain what is their general purpose.
Caching
The PeopleSoft data model and application are mostly stored in the database in PeopleTools tables.  These tables are queried as the application executes.  For example, when you open a component, the component and pages, including all the PeopleCode, the definition of any records used, and so on have to be loaded into the component buffer.  Ultimately this information comes from the PeopleTools tables.  To save the overhead of repeatedly querying these tables, PeopleSoft caches this data locally in physical files in the application server and process scheduler domains.  The application servers also cache some of this information in memory to save visiting the local physical cache.  Application Designer also maintains a physical cache.
Over time, as the application executes, the cache files build up.  Occasionally, when it is necessary to delete the cache files and then it becomes clear just how significant is the overhead of the PeopleTools queries as a period of poor performance is seen as the application builds up fresh cache files.
Physical cache files are created in directories in the application server and process scheduler Tuxedo domains.  By default, each process maintains its own private cache.  Separate directories of cache files are created for each type of PeopleSoft server process in each domain.    Pairs of cache files are created in each directory for each object type as needed.  There is also a CACHE.LOK file in each directory that is used to ensure that only one process is accessing that cache directory concurrently.
It is possible to run with a shared physical cache, but then it is read-only and must be pre-generated.  It is very rare to see this implemented, because everyone expects to continuously deliver changes over time, and if you had a shared cache you would have to deliver an updated set of shared cache file to every domain every time you delivered a new PeopleTools object.
The cache files come in pairs.  The name of the cache files is the Object Type Name.  This corresponds to the OBJECTTYPENAME on the PSLOCK and PSVERSION tables.  The .DAT file contains the data to be cached.  The .KEY file is an index for the .DAT file, and it also holds the version number of the cached object.

-rw------- 1 psadm2 oracle 5228492 Jun 12 06:37 RDM.DAT
-rw------- 1 psadm2 oracle   69120 Jun 12 06:37 RDM.KEY
-rw------- 1 psadm2 oracle       0 Oct 26  2015 ROLM.DAT
-rw------- 1 psadm2 oracle   24192 Oct 26  2015 ROLM.KEY
-rw------- 1 psadm2 oracle       0 Oct 26  2015 RSM.DAT
-rw------- 1 psadm2 oracle   24192 Oct 26  2015 RSM.KEY
Version Numbers
Version numbers track when a cached PeopleTools object has been changed, either by Application Designer, or a change in configuration, or the application.  The version numbers are sequences generated from two PeopleTools tables, PSLOCK and PSVERSION, that hold the highest version number for each type of object.  These two tables have the same structure.
SQL> desc psversion
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 OBJECTTYPENAME                            NOT NULL VARCHAR2(8 CHAR)
 VERSION                                   NOT NULL NUMBER(38)
There are now over 100 different version numbers, each with a specific object type name, each tracking a specific PeopleTools object.  There is a global version number, with the object type name of SYS, that is incremented whenever any other version number is incremented.
I have no idea why two identical tables of version numbers were created.  I can see no reason for this, but it has been like this since the version numbers were changed (if I remember correctly) in PeopleTools 7.  In early versions of PeopleTools, not all version numbers were on both tables, but in at least PeopleTools 8.55 only one object type appears on PSVERSION and not PSLOCK.
When an object is changed, the object and global version numbers are incremented, and the incremented object version number is recorded on the object in the PeopleTools table.  The version number on the object is also stored in the physical cache files when the object is cached.  If the version on the database is higher than that in the cache file, then the PeopleSoft process knows it must query the latest version from the PeopleTools table and update the cache file.
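You can watch these counters from SQL. A quick sketch that lists the current high-water marks, global SYS counter first:

-- list the version high-water marks, with the global SYS counter on top
SELECT objecttypename, version
FROM   psversion
ORDER BY DECODE(objecttypename, 'SYS', 0, 1), version DESC;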
How to Update Version Numbers
It is not generally recommended, nor strictly speaking supported, to update PeopleTools tables directly with SQL.  Apart from the risk of updating them incorrectly, or to invalid values, you also need to ensure that the changes are picked up by PeopleTools processes and that they do not simply continue to read the cached data.  However, occasionally, it is the pragmatic way of doing something.
Here is an example from Chapter 5 of PeopleSoft for the Oracle DBA that shows how to maintain version numbers so the change is picked up by PeopleTools processes.  I want to mark alternate search key indexes as unique where there is a unique key on the record: they are necessarily unique because the unique key is a subset of their columns.  Then Application Designer will build the indexes as unique.
UPDATE psversion SET version = version + 1
WHERE objecttypename IN('SYS','RDM');

UPDATE pslock SET version = version + 1
WHERE objecttypename IN('SYS','RDM');

UPDATE psrecdefn
SET version = (
    SELECT version FROM psversion WHERE objecttypename = 'RDM')
WHERE recname = '';

UPDATE psindexdefn a
SET a.uniqueflag = 1
WHERE a.uniqueflag = 0
AND a.indextype = 3
AND EXISTS(
    SELECT 'x'
    FROM psindexdefn k
    WHERE k.recname = a.recname
    AND k.indexid = '_'
    AND k.indextype = 1
    AND k.uniqueflag = 1)
AND a.recname = '';
I am updating a PeopleTools object (PSINDEXDEFN) that doesn't have a version number, but its parent is PSRECDEFN that does have a version number.  I happen to know that object type RDM (the Record Definition Manager) generates the version number for PSRECDEFN.  I found that out by tracing Application Designer while it saved a record change.  That is the only completely reliable method to determine which sequence is used for which record.  However, I will discuss another less onerous matching method in a subsequent blog post.
I must increment the RDM and SYS version numbers and write the new RDM version number to the updated rows on PSRECDEFN.  Next time a PeopleSoft process needs to read a record definition it will check the version numbers.  The increment of the SYS object tells PeopleSoft that an object number has changed, and then it will detect that the RDM version number has changed, so it has to reload and cache objects with version numbers greater than the last cached version number for that object.

Basic Replication -- 7 : Refresh Groups

Hemant K Chitale - Fri, 2019-10-11 23:24
So far, all my blog posts in this series cover "single" Materialized Views (even if I have created two MVs, they are independent of each other and can be refreshed at different schedules).

A Refresh Group is what you would define if you want multiple MVs to be refreshed to the same point in time.  This allows for
(a) data from transactions that touch multiple tables
or
(b) views of multiple tables
to be consistent in the target MVs.

For example, if you have SALES_ORDER and LINE_ITEMS tables and the MVs on these are refreshed at different times, you might see the ORDER (Header) without the LINE_ITEMs (or, worse, in the absence of Referential Integrity constraints, LINE_ITEMs without the ORDER (Header) !).

Here's a demo, using the HR DEPARTMENTS and EMPLOYEES tables with corresponding MVs built in the HEMANT schema.

SQL> show user
USER is "HR"
SQL> select count(*) from departments;

COUNT(*)
----------
27

SQL> select count(*) from employees;

COUNT(*)
----------
107

SQL>
SQL> grant select on departments to hemant;

Grant succeeded.

SQL> grant select on employees to hemant;

Grant succeeded.

SQL>
SQL> create materialized view log on departments;

Materialized view log created.

SQL> grant select, delete on mlog$_departments to hemant;

Grant succeeded.

SQL>
SQL> create materialized view log on employees;

Materialized view log created.

SQL> grant select, delete on mlog$_employees to hemant;

Grant succeeded.

SQL>
SQL>


Having created the source MV Logs, note that I have to grant privileges to the account (HEMANT) that will be reading and deleting from the MV Logs.

Next, I set up the MVs and the Refresh Group

SQL> show user
USER is "HEMANT"
SQL>
SQL> select count(*) from hr.departments;

COUNT(*)
----------
27

SQL> select count(*) from hr.employees;

COUNT(*)
----------
107

SQL>
SQL>
SQL> create materialized view mv_dept
2 refresh fast on demand
3 as select department_id as dept_id, department_name as dept_name
4 from hr.departments
5 /

Materialized view created.

SQL>
SQL> create materialized view mv_emp
2 refresh fast on demand
3 as select department_id as dept_id, employee_id as emp_id,
4 first_name, last_name, hire_date
5 from hr.employees
6 /

Materialized view created.

SQL>
SQL> select count(*) from mv_dept;

COUNT(*)
----------
27

SQL> select count(*) from mv_emp;

COUNT(*)
----------
107

SQL>
SQL> execute dbms_refresh.make(-
> name=>'HR_MVs',-
> list=>'MV_DEPT,MV_EMP',-
> next_date=>sysdate+0.5,-
> interval=>'sysdate+1');

PL/SQL procedure successfully completed.

SQL>
SQL> commit;

Commit complete.

SQL>


Here, I have built two MVs and then a Refresh Group called "HR_MVS".  The first refresh will be 12 hours from now and every subsequent refresh will be 24 hours after the previous one.  (The Refresh Interval must be set to a value larger than the time it takes to execute the actual Refresh).
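
You can verify what DBMS_REFRESH.MAKE has registered by querying the data dictionary.  A quick check, assuming the USER_REFRESH and USER_REFRESH_CHILDREN views:

-- confirm the Refresh Group and its member MVs
SELECT rname, next_date, interval
FROM   user_refresh;

SELECT rname, name, type
FROM   user_refresh_children;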

However, I can manually execute the Refresh after new rows are populated into the source tables. First, I insert new rows

SQL> show user
USER is "HR"
SQL> insert into departments (department_id, department_name)
2 values
3 (departments_seq.nextval, 'New Department');

1 row created.

SQL> select department_id
2 from departments
3 where department_name = 'New Department';

DEPARTMENT_ID
-------------
280

SQL> insert into employees(employee_id, first_name, last_name, email, hire_date, job_id, department_id)
2 values
3 (employees_seq.nextval, 'Hemant', 'Chitale', 'hkc@myenterprise.com', sysdate, 'AD_VP', 280);

1 row created.

SQL> select employee_id
2 from employees
3 where first_name = 'Hemant';

EMPLOYEE_ID
-----------
208

SQL> commit;

Commit complete.

SQL>


Now that there are new rows, the target MVs must be refreshed together.

SQL> connect hemant/hemant@orclpdb1
Connected.
SQL> execute dbms_refresh.refresh('HR_MVS');

PL/SQL procedure successfully completed.

SQL> select count(*) from mv_dept;

COUNT(*)
----------
28

SQL> select count(*) from mv_emp;

COUNT(*)
----------
108

SQL>
SQL> select * from mv_dept
2 where dept_id=280;

   DEPT_ID DEPT_NAME
---------- ------------------------------
       280 New Department

SQL> select * from mv_emp
2 where emp_id=208;

   DEPT_ID     EMP_ID FIRST_NAME           LAST_NAME                 HIRE_DATE
---------- ---------- -------------------- ------------------------- ---------
       280        208 Hemant               Chitale                   12-OCT-19

SQL>


Both MVs have been Refresh'd together as an ATOMIC Transaction.  If either of the two MVs had failed to refresh (e.g. unable to allocate an extent to grow the MV), both the INSERTs would be rolled back.  (Note : It is not a necessary requirement that both source tables have new / updated rows; the Refresh Group works even if only one of the two tables has new / updated rows.)

Note : I have used DBMS_REFRESH.REFRESH (instead of DBMS_MVIEW.REFRESH) to execute the Refresh.
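
For comparison, a similar all-or-nothing refresh of the same two MVs could be requested through DBMS_MVIEW.  This is only a sketch, relying on its atomic_refresh parameter (which defaults to TRUE for a list of MVs):

-- an equivalent atomic refresh of both MVs via DBMS_MVIEW
execute dbms_mview.refresh(list => 'MV_DEPT,MV_EMP', atomic_refresh => TRUE);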

You can build multiple Refresh Groups, each consisting of *multiple* Source Tables from the same source database.
You would define each Refresh Group to maintain consistency of data across multiple MVs (sourced from different tables).
Besides the Refresh Group on two HR tables, I could have, within the HEMANT schema, more Refresh Groups on FINANCE schema tables as well.

(Can you have a Refresh Group sourcing from tables from different schemas ?  Try that out !)


What's the downside of Refresh Groups ?    
Undo and Redo !  Every Refresh consists of INSERT/UPDATE/DELETE operations on the MVs.  And if any one of the MVs fails to Refresh, the entire set of DMLs (across all the MVs in the Refresh Group) has to *Rollback* !


Categories: DBA Blogs
