Feed aggregator

Using Cloud Native Buildpacks (CNB) on a local registry to speed up the building of images for test purposes

Pas Apicella - Tue, 2019-06-11 20:58
I previously blogged about the CNCF project known as Cloud Native Buildpacks in the entry below.

Building PivotalMySQLWeb using Cloud Native Buildpacks (CNB)
http://theblasfrompas.blogspot.com/2019/06/building-pivotalmysqlweb-using-cloud.html

In the steps below I will show how to use a local Docker registry on your laptop or desktop to enable faster builds of your OCI-compliant images using CNBs, using the same application.

Pre Steps:

1. Ensure you have Docker CE installed. If not, use this link:

  https://hub.docker.com/search/?type=edition&offering=community

Steps:

1. Start by running a local registry on your own laptop. The guide below shows how to get a container running that will act as our local registry, and then how to verify it's running.

https://docs.docker.com/registry/

$ docker run -d -p 5000:5000 --restart=always --name registry registry:2

Verify it's running:

$ netstat -an | grep 5000
tcp6       0      0  ::1.5000               *.*                    LISTEN
tcp4       0      0  *.5000                 *.*                    LISTEN
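
You can also hit the registry's HTTP API as a sanity check; a freshly started registry returns an empty repository list from the v2 catalog endpoint:

$ curl -s http://localhost:5000/v2/_catalog
{"repositories":[]}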

2. Then pull the CNB versions of the "official" build and run images from GCR as follows. Those images exist here:

https://console.cloud.google.com/gcr/images/cncf-buildpacks-ci/GLOBAL/packs/run?gcrImageListsize=30

Here I am using the latest build/run images, which at the time of this post were "run:0.2.0-build.12".

papicella@papicella:~$ docker pull gcr.io:443/cncf-buildpacks-ci/packs/run:0.2.0-build.12
0.2.0-build.12: Pulling from cncf-buildpacks-ci/packs/run
Digest: sha256:ebd42c0228f776804f2e99733076216592c5a1117f1b3dde7688cf3bd0bbe7b9
Status: Downloaded newer image for gcr.io:443/cncf-buildpacks-ci/packs/run:0.2.0-build.12

papicella@papicella:~$ docker tag gcr.io:443/cncf-buildpacks-ci/packs/run:0.2.0-build.12 localhost:5000/run:0.2.0-build.12

papicella@papicella:~$ docker rmi gcr.io:443/cncf-buildpacks-ci/packs/run:0.2.0-build.12
Untagged: gcr.io:443/cncf-buildpacks-ci/packs/run:0.2.0-build.12
Untagged: gcr.io:443/cncf-buildpacks-ci/packs/run@sha256:ebd42c0228f776804f2e99733076216592c5a1117f1b3dde7688cf3bd0bbe7b9

papicella@papicella:~$ docker push localhost:5000/run:0.2.0-build.12
The push refers to repository [localhost:5000/run]
1315c94f2536: Layer already exists
63696cbb6c17: Layer already exists
30ede08f8231: Layer already exists
b57c79f4a9f3: Layer already exists
d60e01b37e74: Layer already exists
e45cfbc98a50: Layer already exists
762d8e1a6054: Layer already exists
0.2.0-build.12: digest: sha256:ebd42c0228f776804f2e99733076216592c5a1117f1b3dde7688cf3bd0bbe7b9 size: 1780
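
The build image can be mirrored with the same pull/tag/push pattern; a sketch, assuming the build image carries the same tag as the run image above:

# pull from GCR, retag for the local registry, push, then drop the remote alias
docker pull gcr.io:443/cncf-buildpacks-ci/packs/build:0.2.0-build.12
docker tag gcr.io:443/cncf-buildpacks-ci/packs/build:0.2.0-build.12 localhost:5000/build:0.2.0-build.12
docker push localhost:5000/build:0.2.0-build.12
docker rmi gcr.io:443/cncf-buildpacks-ci/packs/build:0.2.0-build.12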

3. Now let's use our local registry and build/run images, which will be much faster for local development.

papicella@papicella:~/pivotal/PCF/APJ/PPTX/CNCF/buildpacks.io/demos$ docker tag localhost:5000/run:0.2.0-build.12 localhost:5000/run

papicella@papicella:~/pivotal/PCF/APJ/PPTX/CNCF/buildpacks.io/demos$ docker push localhost:5000/run:latest
The push refers to repository [localhost:5000/run]
1315c94f2536: Layer already exists
63696cbb6c17: Layer already exists
30ede08f8231: Layer already exists
b57c79f4a9f3: Layer already exists
d60e01b37e74: Layer already exists
e45cfbc98a50: Layer already exists
762d8e1a6054: Layer already exists
latest: digest: sha256:ebd42c0228f776804f2e99733076216592c5a1117f1b3dde7688cf3bd0bbe7b9 size: 1780

papicella@papicella:~/pivotal/PCF/APJ/PPTX/CNCF/buildpacks.io/demos$ pack build localhost:5000/pivotal-mysql-web --path ./PivotalMySQLWeb --no-pull --publish
Using default builder image cloudfoundry/cnb:cflinuxfs3
Selected run image cloudfoundry/cnb-run:cflinuxfs3 from builder
Using build cache volume pack-cache-65bb470893c1.build
Executing lifecycle version 0.2.1
===> DETECTING
[detector] Trying group 1 out of 4 with 8 buildpacks...
[detector] ======== Results ========
[detector] pass: Cloud Foundry OpenJDK Buildpack
[detector] skip: Cloud Foundry Build System Buildpack
[detector] pass: Cloud Foundry JVM Application Buildpack
[detector] skip: Cloud Foundry Azure Application Insights Buildpack
[detector] skip: Cloud Foundry Debug Buildpack
[detector] skip: Cloud Foundry Google Stackdriver Buildpack
[detector] skip: Cloud Foundry JMX Buildpack
[detector] skip: Cloud Foundry Procfile Buildpack
===> RESTORING
[restorer] restoring cached layer 'org.cloudfoundry.openjdk:d2df8bc799b09c8375f79bf646747afac3d933bb1f65de71d6c78e7466ff8fe4'
===> ANALYZING
[analyzer] using cached layer 'org.cloudfoundry.openjdk:d2df8bc799b09c8375f79bf646747afac3d933bb1f65de71d6c78e7466ff8fe4'
[analyzer] writing metadata for uncached layer 'org.cloudfoundry.openjdk:openjdk-jre'
[analyzer] writing metadata for uncached layer 'org.cloudfoundry.jvmapplication:main-class'
===> BUILDING
[builder] -----> Cloud Foundry OpenJDK Buildpack 1.0.0-M8
[builder] -----> OpenJDK JRE 11.0.3: Reusing cached layer
[builder]
[builder] -----> Cloud Foundry JVM Application Buildpack 1.0.0-M8
[builder] -----> Main-Class Classpath: Reusing cached layer
[builder] -----> Process types:
[builder]        task: java -cp $CLASSPATH $JAVA_OPTS org.springframework.boot.loader.JarLauncher
[builder]        web:  java -cp $CLASSPATH $JAVA_OPTS org.springframework.boot.loader.JarLauncher
[builder]
===> EXPORTING
[exporter] Reusing layer 'app' with SHA sha256:b32618ed6b86fb496a4ce33db9df49fdd4ef16c5646b174b5643c8befcb7408a
[exporter] Reusing layer 'config' with SHA sha256:9538e967fa10f23b3415c382a3754ebf4c2645c20b6d76af519236c1181e7639
[exporter] Reusing layer 'launcher' with SHA sha256:04ca7957074763290a9abe6a067ce8c902a2ab51ed6c55102964e3f3294cdebd
[exporter] Reusing layer 'org.cloudfoundry.openjdk:openjdk-jre' with SHA sha256:e540f1464509ac673a25bd2f24c7dd6875f805c0dd35e9af84dd4669e2fd0c93
[exporter] Reusing layer 'org.cloudfoundry.jvmapplication:main-class' with SHA sha256:8537197b3f57d86a59397b89b4fbdd14900a602cc12961eae338b9ef2513cdc0
[exporter]
[exporter] *** Image: localhost:5000/pivotal-mysql-web:latest@sha256:f1d7a25fc5159ceb668c26b595dcffb00ef54ada31cbb52eaa8319dc143fc9d8
===> CACHING
[cacher] Reusing layer 'org.cloudfoundry.openjdk:d2df8bc799b09c8375f79bf646747afac3d933bb1f65de71d6c78e7466ff8fe4' with SHA sha256:11439713b023be71211cb83ecd56a1be63e0c0be3e4814a18cc4c71d2264dea5
Successfully built image localhost:5000/pivotal-mysql-web

papicella@papicella:~/pivotal/PCF/APJ/PPTX/CNCF/buildpacks.io/demos$ docker pull localhost:5000/pivotal-mysql-web
Using default tag: latest
latest: Pulling from pivotal-mysql-web
410238d178d0: Already exists
a00e90b544bc: Already exists
9de264eecc08: Already exists
4acedf754175: Already exists
d5a72fc0c7a1: Already exists
4066d2d744ac: Already exists
dba1ef680b99: Already exists
Digest: sha256:f1d7a25fc5159ceb668c26b595dcffb00ef54ada31cbb52eaa8319dc143fc9d8
Status: Downloaded newer image for localhost:5000/pivotal-mysql-web:latest

papicella@papicella:~/pivotal/PCF/APJ/PPTX/CNCF/buildpacks.io/demos$ docker run -m 1G -p 8080:8080 localhost:5000/pivotal-mysql-web

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.1.0.RELEASE)

2019-06-12 01:02:16.174  INFO 1 --- [           main] c.p.p.m.PivotalMySqlWebApplication       : Starting PivotalMySqlWebApplication on a018f17d6121 with PID 1 (/workspace/BOOT-INF/classes started by vcap in /workspace)
2019-06-12 01:02:16.179  INFO 1 --- [           main] c.p.p.m.PivotalMySqlWebApplication       : No active profile set, falling back to default profiles: default
2019-06-12 01:02:18.336  INFO 1 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat initialized with port(s): 8080 (http)
2019-06-12 01:02:18.374  INFO 1 --- [           main] o.apache.catalina.core.StandardService   : Starting service [Tomcat]
2019-06-12 01:02:18.375  INFO 1 --- [           main] org.apache.catalina.core.StandardEngine  : Starting Servlet Engine: Apache Tomcat/9.0.12
2019-06-12 01:02:18.391  INFO 1 --- [           main] o.a.catalina.core.AprLifecycleListener   : The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: [/layers/org.cloudfoundry.openjdk/openjdk-jre/lib:/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib]
2019-06-12 01:02:18.512  INFO 1 --- [           main] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring embedded WebApplicationContext
2019-06-12 01:02:18.512  INFO 1 --- [           main] o.s.web.context.ContextLoader            : Root WebApplicationContext: initialization completed in 2270 ms
2019-06-12 01:02:19.019  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'characterEncodingFilter' to: [/*]
2019-06-12 01:02:19.020  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'webMvcMetricsFilter' to: [/*]
2019-06-12 01:02:19.020  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'hiddenHttpMethodFilter' to: [/*]
2019-06-12 01:02:19.020  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'formContentFilter' to: [/*]
2019-06-12 01:02:19.021  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'requestContextFilter' to: [/*]
2019-06-12 01:02:19.021  INFO 1 --- [           main] .s.DelegatingFilterProxyRegistrationBean : Mapping filter: 'springSecurityFilterChain' to: [/*]
2019-06-12 01:02:19.022  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'httpTraceFilter' to: [/*]
2019-06-12 01:02:19.022  INFO 1 --- [           main] o.s.b.w.servlet.ServletRegistrationBean  : Servlet dispatcherServlet mapped to [/]
2019-06-12 01:02:19.374  INFO 1 --- [           main] o.s.s.concurrent.ThreadPoolTaskExecutor  : Initializing ExecutorService 'applicationTaskExecutor'
2019-06-12 01:02:19.918  INFO 1 --- [           main] .s.s.UserDetailsServiceAutoConfiguration :

Using generated security password: 42d4ec01-6459-4205-a66b-1b49d333121e

2019-06-12 01:02:20.043  INFO 1 --- [           main] o.s.s.web.DefaultSecurityFilterChain     : Creating filter chain: Ant [pattern='/**'], []
2019-06-12 01:02:20.092  INFO 1 --- [           main] o.s.s.web.DefaultSecurityFilterChain     : Creating filter chain: any request, [org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@47e4d9d0, org.springframework.security.web.context.SecurityContextPersistenceFilter@5e4fa1da, org.springframework.security.web.header.HeaderWriterFilter@4ae263bf, org.springframework.security.web.csrf.CsrfFilter@2788d0fe, org.springframework.security.web.authentication.logout.LogoutFilter@15fdd1f2, org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter@2d746ce4, org.springframework.security.web.authentication.ui.DefaultLoginPageGeneratingFilter@70e02081, org.springframework.security.web.authentication.ui.DefaultLogoutPageGeneratingFilter@49798e84, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@1948ea69, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@3f92c349, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@66ba7e45, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@6ed06f69, org.springframework.security.web.session.SessionManagementFilter@19ccca5, org.springframework.security.web.access.ExceptionTranslationFilter@57aa341b, org.springframework.security.web.access.intercept.FilterSecurityInterceptor@7c6442c2]
2019-06-12 01:02:20.138  INFO 1 --- [           main] o.s.b.a.e.web.EndpointLinksResolver      : Exposing 9 endpoint(s) beneath base path '/actuator'
2019-06-12 01:02:20.259  INFO 1 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
2019-06-12 01:02:20.265  INFO 1 --- [           main] c.p.p.m.PivotalMySqlWebApplication       : Started PivotalMySqlWebApplication in 4.841 seconds (JVM running for 5.646)
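
While the container is running you can sanity-check it from another terminal; the log above shows actuator endpoints are exposed (a sketch: with Spring Security enabled you may need the generated password shown in the log):

$ curl -i http://localhost:8080/actuator/health   # add -u user:<generated password> if you get a 401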



And that's it: a locally built OCI image (built very fast, all locally) running from your local image registry.

Here is how to view your local registry's catalog using HTTPie, showing the "pivotal-mysql-web" OCI image we built above:

papicella@papicella:~$ http http://localhost:5000/v2/_catalog
HTTP/1.1 200 OK
Content-Length: 63
Content-Type: application/json; charset=utf-8
Date: Wed, 12 Jun 2019 01:53:40 GMT
Docker-Distribution-Api-Version: registry/2.0
X-Content-Type-Options: nosniff

{
    "repositories": [
        "pivotal-mysql-web",
        "run",
        "sample-java-app"
    ]
}
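
You can list the tags of an individual repository through the same v2 API; the expected output for the image we pushed above looks like this:

$ http http://localhost:5000/v2/pivotal-mysql-web/tags/list
{
    "name": "pivotal-mysql-web",
    "tags": [
        "latest"
    ]
}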


More Information

1. Cloud Native Buildpacks: an Industry-Standard Build Process for Kubernetes and Beyond.
https://content.pivotal.io/blog/cloud-native-buildpacks-for-kubernetes-and-beyond

2. buildpacks.io Home Page
https://buildpacks.io/

Categories: Fusion Middleware

Slides and My Impressions from May 17th AZORA Meetup

Bobby Durrett's DBA Blog - Tue, 2019-06-11 18:44

We have the slides from the two talks at our May 17th AZORA Meetup.

Here are Stephen Andert’s slides: Networking is NOT just cables and fiber!

Here are Doug Hood’s slides in three parts:

  1. Using SQL and PLSQL for Mid-Tier Database Caching
  2. Oracle TimesTen Scaleout – World’s Fastest OLTP DB
  3. Oracle In-Memory Data Processing

I wanted to share my impressions of the two talks and the meeting as well as share the slides. Of course, these are my opinions shaped by my own experiences and not universally true of everyone!

This meetup had two very different types of talks. I thought of Stephen’s talk as a career development or soft skills sort of talk. I have been to a number of talks like that at national Oracle user group conferences such as Collaborate. They help balance out the purely technical Oracle database content. Once Stephen got into his talk, I really started to appreciate the quality of the content. To me he was talking about keeping in touch with people in an intentional but sincere way. I like the idea of planning on contacting people a certain number of times per year for example.

Years ago, in my first job I worked for Campus Crusade for Christ (now Cru) and I raised money to support myself and my family. I networked with people that I met through churches and friends and family members. It was different than networking as part of a DBA career because I was directly asking for money instead of making career-oriented connections. But the core idea that I remember from Stephen’s talk applied then. Stephen’s idea was to genuinely seek to help the folks in your network without focusing on what they could do for you. In my CCC days the support raising training told us that we were not “fundraising” but instead “friend raising”. I had some great experiences meeting people and getting to know them and I think it was best when my focus was on how to inspire and encourage the people I met rather than to anxiously think about whether they could give money to support what I did.

The other less serious connection I saw between Stephen’s presentation and my Cru days is that Stephen has a hand-written database setup to keep track of his people. Back in the day I had a Paradox database from Borland running on MS-DOS to do a lot of the same things. So, hearing Stephen talk about his contact database was a blast from the past for me.

I am not really doing much in the way of networking myself these days. I write this blog. I speak at conferences every couple of years or so. I help with the local Oracle user group AZORA. But I am not intentionally making and maintaining relationships with other technical people in the way Stephen described so his talk gave me something to think about.

Doug Hood’s talk was at the other end of the spectrum with some cool technology. Doug spoke on several things and with a lot of detail so I cannot do a good job of summarizing what he said. Check the slides for more details. But I do want to write down my impressions. Listening to Doug’s talk reminded me of some of the computer science study that I have been doing on my own recently. I have sort of gone back to school as an independent learner. When Doug talked about the memory hierarchy and caching it tied right back to the assembly language and algorithms study I have been doing.

Doug presented some cool hardware that puts persistent memory close enough to the CPU that it changes the way we think about memory hierarchy. What if you replace your RAM with persistent RAM that did not get cleared when you power off your computer? Evidently in some architectures (maybe all the modern ones these days I don’t know) the RAM is closely connected to the CPU and does not have to be accessed over the bus in the way I/O is. So, persistent RAM would be much faster than some solid-state disk being read over the bus no matter how fast the SSD is. Anyway, see Doug’s slides. I am sure that I am butchering the details, but I am giving my impression and my memory so keep that in mind.

In general database work and database performance has a lot to do with caching. I have heard a little bit about how algorithms can be designed to work well with CPU caches. I recently read a chapter about the B-Tree data structure that is used in databases and it was a big contrast to the other data structures I had studied because it took disk reads and memory accesses into consideration. Anyway, at a high level I took away from Doug’s talk notions about memory and caching and different ways people can tweak the memory hierarchy to get better database performance.

I had to leave a little early to head for the mountains for the weekend but as always, I valued the time I spent at AZORA, my local Oracle user group. I appreciate Stephen and Doug stepping up and giving their presentations. I hope that my links and the way I characterized their talks are accurate enough. I am sure that I made mistakes, but I got something out of my time and appreciate their efforts.

AZORA is taking a break for the hot Arizona summer but coming back strong with our next meeting on September 27th. The details are being finalized so keep an eye on our Meetup page.

Bobby

Categories: DBA Blogs

RAMBleed DRAM Vulnerabilities

Oracle Security Team - Tue, 2019-06-11 12:00

On June 11th, security researchers published a paper titled “RAMBleed: Reading Bits in Memory without Accessing Them”.  This paper describes attacks against Dynamic Random Access Memory (DRAM) modules that are already susceptible to Rowhammer-style attacks.

The new attack methods described in this paper are not microprocessor-specific; they leverage known issues in DRAM memory.  These attacks only impact DDR4 and DDR3 memory modules; older-generation DDR2 and DDR1 memory modules are not vulnerable to these attacks.

While the RAMBleed issues leverage RowHammer, RAMBleed is different in that confidentiality of data may be compromised: RAMBleed uses RowHammer as a side channel to discover the values of adjacent memory. 

Please note that successfully leveraging RAMBleed exploits requires that the malicious attacker be able to locally execute malicious code against the targeted system. 

At this point in time, Oracle believes that:

  • All current and many older families of Oracle x86 (X5, X6, X7, X8, E1) and Oracle SPARC servers (S7, T7, T8, M7, M8) employing DDR4 DIMMs are not expected to be impacted by RAMBleed.  This is because Oracle only employs DDR4 DIMMs that have implemented the Target Row Refresh (TRR) defense mechanism against RowHammer.  Oracle’s memory suppliers have stated that these implementations have been designed to be effective against RowHammer. 
  • Older systems making use of DDR3 memory are also not expected to be impacted by RAMBleed because they are making use of a combination of other RowHammer mitigations (e.g., pseudo-TRR and increased DIMM refresh rates in addition to Error-Correcting Code (ECC)).  Oracle is currently not aware of any research that would indicate that the combination of these mechanisms would not be effective against RAMBleed. 
  • Oracle Cloud Infrastructure (OCI) is not impacted by the RAMBleed issues because OCI servers only use DDR4 memory with built-in defenses as previously described.  Exadata Engineered Systems use DDR4 memory (X5 family and newer) and DDR3 memory (X4 family and older).
  • Finally, Oracle does not believe that additional software patches will need to be produced to address the RAMBleed issues, as these memory issues can only be addressed through hardware configuration changes.  In other words, no additional security patches are expected for Oracle product distributions.
For more information about Oracle Corporate Security Practices, see https://www.oracle.com/corporate/security-practices/

See APEX Debug info in PL/SQL and SQL Developer

Dimitri Gielis - Tue, 2019-06-11 07:05
When developing Oracle APEX apps I like to instrument my code with APEX_DEBUG and/or Logger.

With our APEX Office Print (AOP) PL/SQL API and APEX Plug-in we did the same: we use APEX_DEBUG and Logger behind the scenes to allow you to see what is going on. But when I tried to view the APEX debug messages in SQL Developer, I didn't see any. Christian Neumueller of the APEX Dev team gave me the answer: APEX debug buffers its output, to reduce the I/O overhead. Buffering is disabled for LEVEL9, but the other levels only write:
 - after 1000 records
 - at the end of request processing
 - when you detach the session

This explained perfectly what was happening for me.

Here's a quick example when you want to debug the AOP PL/SQL API from PL/SQL and SQL Developer. The code downloads an Interactive Report of page 200 to Excel and stores it in a table.

declare
  l_return          blob;
  l_output_filename varchar2(100) := 'output';
begin
  apex_session.create_session(p_app_id => 498, p_page_id => 200, p_username => 'DIMI');

  apex_debug.enable(p_level => apex_debug.c_log_level_info);
  -- for more details, use: c_log_level_app_trace
  apex_debug.message(p_message => 'Debug enabled.');

  l_return := aop_api_pkg.plsql_call_to_aop (
    p_data_type       => aop_api_pkg.c_source_type_rpt,
    p_data_source     => 'report1',
    p_template_type   => aop_api_pkg.c_source_type_apex,
    p_template_source => 'aop_template_ir_customers.xlsx',
    p_output_type     => aop_api_pkg.c_excel_xlsx,
    p_output_filename => l_output_filename,
    p_aop_url         => apex_app_setting.get_value('AOP_URL'),
    p_api_key         => apex_app_setting.get_value('AOP_API_KEY'),
    p_app_id          => 498,
    p_page_id         => 200);

  insert into aop_output (output_blob, filename)
  values (l_return, l_output_filename);
  commit;

  dbms_output.put_line('To view debug messages:');
  dbms_output.put_line('select * from apex_debug_messages where session_id = '
    || apex_util.get_session_state('APP_SESSION') || ' order by message_timestamp');

  apex_session.detach;
end;

Running the SQL statement to view the debug messages:
select * from apex_debug_messages where session_id = 16458652970080 order by message_timestamp

Et voila... the APEX debug info is available straight away :)


Categories: Development

Redo Dumps

Jonathan Lewis - Tue, 2019-06-11 06:53

A thread started on the Oracle-L list-server a few days ago asking for help analysing a problem where a simple “insert values()” (that handled millions of rows per day) was running very slowly. There are many reasons why this might happen, ranging from the trivial (someone has locked the table in exclusive mode), through the slightly subtle (we’re trying to insert a row that collides on a uniqueness constraint with an uncommitted insert from another session) to the subtle (Oracle has to read through the undo to check current versions of blocks against read-consistent versions) ending up at the esoteric (the ASSM space management blocks are completely messed up again).

A 10046 trace of a session doing an insert showed only that there was a lot of time spent on single block reads. Unfortunately, since this was on an Exadata system the waits were reported as “cell single block physical read”. Unfortunately the parameters to this wait event are “cellhash#”, “diskhash#”, and “bytes”, and we don’t see the file_id and block_id, which can be very helpful for a case like this. The only information we got from the trace file was that the object_id was for the table where rows were being inserted.

Before digging into exotic debugging methods, the OP supplied us with a 1-second session report from Tanel Poder’s snapper script:


------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>  SID @INST, USERNAME  , TYPE, STATISTIC                                                 ,         DELTA, HDELTA/SEC,    %TIME, GRAPH       , NUM_WAITS,  WAITS/SEC,   AVERAGES
> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>   1070  @2, SYSADMIN, STAT, session logical reads                                     ,         13865,      7.73k,         ,             ,          ,           ,      14.1k total buffer visits
>   1070  @2, SYSADMIN, STAT, user I/O wait time                                        ,           141,      78.65,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, non-idle wait time                                        ,           141,      78.65,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, non-idle wait count                                       ,         12230,      6.82k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, physical read total IO requests                           ,          6112,      3.41k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, physical read requests optimized                          ,          6111,      3.41k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, physical read total bytes optimized                       ,      50069504,     27.93M,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, physical read total bytes                                 ,      50069504,     27.93M,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, cell physical IO interconnect bytes                       ,      50069504,     27.93M,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, gcs messages sent                                         ,             3,       1.67,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, db block gets                                             ,         13860,      7.73k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, db block gets from cache                                  ,         13860,      7.73k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, db block gets from cache (fastpath)                       ,          7737,      4.32k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, consistent gets                                           ,             1,        .56,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, consistent gets from cache                                ,             1,        .56,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, consistent gets pin                                       ,             1,        .56,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, consistent gets pin (fastpath)                            ,             1,        .56,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, logical read bytes from cache                             ,     113541120,     63.34M,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, physical reads                                            ,          6111,      3.41k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, physical reads cache                                      ,          6111,      3.41k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, physical read IO requests                                 ,          6112,      3.41k,         ,             ,          ,           ,      8.19k bytes per request
>   1070  @2, SYSADMIN, STAT, physical read bytes                                       ,      50069504,     27.93M,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, db block changes                                          ,            11,       6.14,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, free buffer requested                                     ,          6112,      3.41k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, hot buffers moved to head of LRU                          ,           958,     534.39,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, free buffer inspected                                     ,          6144,      3.43k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, shared hash latch upgrades - no wait                      ,             7,        3.9,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, blocks decrypted                                          ,          6110,      3.41k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, redo entries                                              ,          6120,      3.41k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, redo size                                                 ,        465504,    259.67k,         ,             ,          ,           ,          ~ bytes per user commit
>   1070  @2, SYSADMIN, STAT, redo entries for lost write detection                     ,          6110,      3.41k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, redo size for lost write detection                        ,        464756,    259.25k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, redo subscn max counts                                    ,             7,        3.9,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, file io wait time                                         ,       1408659,    785.78k,         ,             ,          ,           ,   230.47us bad guess of IO wait time per IO request
>   1070  @2, SYSADMIN, STAT, gc current blocks received                                ,             3,       1.67,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, gc local grants                                           ,          6116,      3.41k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, ASSM cbk:blocks examined                                  ,         12366,       6.9k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, ASSM gsp:L1 bitmaps examined                              ,          2478,      1.38k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, ASSM gsp:L2 bitmaps examined                              ,             1,        .56,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, ASSM gsp:reject db                                        ,         12388,      6.91k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, STAT, buffer is pinned count                                    ,           230,      128.3,         ,             ,          ,           ,       1.63 % buffer gets avoided thanks to buffer pin caching
>   1070  @2, SYSADMIN, STAT, cell flash cache read hits                                ,          6723,      3.75k,         ,             ,          ,           ,          ~ per execution
>   1070  @2, SYSADMIN, TIME, background cpu time                                       ,        365192,   203.71ms,    20.4%, [##        ],          ,           ,
>   1070  @2, SYSADMIN, TIME, background elapsed time                                   ,       1273623,   710.45ms,    71.0%, [########  ],          ,           ,      28.95 % unaccounted time
>   1070  @2, SYSADMIN, WAIT, gc current block busy                                     ,           629,   350.87us,      .0%, [          ],         3,       1.67,   209.67us average wait
>   1070  @2, SYSADMIN, WAIT, cell single block physical read                           ,       1557638,   868.88ms,    86.9%, [WWWWWWWWW ],      6746,      3.76k,    230.9us average wait
>  
> --  End of Stats snap 1, end=2019-05-18 12:58:58, seconds=1.8

My first step was simply to read down the list (using a very small font to get the entire width on screen without wrapping) to see if anything stood out as unusual. The report showed two things I rarely see in the session stats:


blocks decrypted                                          ,          6110
redo entries for lost write detection                     ,          6110

These stats tell me that there are two “uncommon” features enabled: db_lost_write_protect, and block level encryption. (So whatever else is going on, it’s just possible that mixing in two rarely used – and therefore less frequently tested – features may be confusing the issue.)

Lost write protection means Oracle writes a “block read record” (BRR) to the redo log every time it reads a block from disc, so I decided to follow up the 6,110 figure to see what other stats reported similar values.


physical read total IO requests                           ,          6112
physical read requests optimized                          ,          6111
physical reads                                            ,          6111
physical reads cache                                      ,          6111
physical read IO requests                                 ,          6112
free buffer requested                                     ,          6112
redo entries                                              ,          6120
redo entries for lost write detection                     ,          6110
gc local grants                                           ,          6116
cell flash cache read hits                                ,          6723

There’s nothing particularly surprising here – basically we see all the blocks being read as single block reads, into cache. All the necessary global cache (gc) grants are local so it’s possible the table of interest has been remastered to this node. The value for “cell flash cache read hits” looks a little odd as the cache is hit more frequently than blocks are read – but dynamic performance views are not read-consistent and this session is hammering away like crazy so this might just be a side effect of the time to gather the data for the report.

We can chase the redo a little further – the number of redo entries is slightly larger than the number of blocks read, so (even though small inconsistencies are not necessarily meaningful) this might tell us something:


redo entries                                              ,          6120
redo size                                                 ,        465504
redo entries for lost write detection                     ,          6110
redo size for lost write detection                        ,        464756 
db block changes                                          ,            11

The number of “redo entries” that were NOT for lost write detection is 10, totalling 748 bytes (not a lot – so indicative of “non-user” activity). The number of “db block changes” is 11 (close enough to 10), and generally it’s changes to db blocks that require redo to be generated. The final significant number is the one that isn’t there – there’s no undo generated, so no user-change to data. This system is working like crazy achieving absolutely nothing at this point.

The next point to ponder is what sort of work it is doing – so let’s check how the physical reads turn into buffer gets.


session logical reads                                     ,         13865
db block gets                                             ,         13860
db block gets from cache                                  ,         13860
db block gets from cache (fastpath)                       ,          7737
consistent gets                                           ,             1
consistent gets from cache                                ,             1
consistent gets pin                                       ,             1
consistent gets pin (fastpath)                            ,             1
hot buffers moved to head of LRU                          ,           958

buffer is pinned count                                    ,           230 

The unusual thing you notice with these figures is that virtually every buffer get is a current get. We’ve also got a number of blocks pinned – this might just be the segment header block, or the segment header and level 2 bitmap block that we keep revisiting. Finally we can see a lot of hot buffers being moved to the head of the LRU; since our session has been doing a lot of work for a long time it seems likely that those buffers are ones that our session is keeping hot – and for a big insert that shouldn’t really be happening unless, perhaps, we were managing to do a lot of maintenance of (well-clustered) indexes.

I’ve isolated the (new in 12.2) “ASSM gsp (get space)” statistics from this output – they’re all about handling blocks, but I wanted to look at them without being distracted by other stats.


ASSM cbk:blocks examined                                  ,         12366 
ASSM gsp:L1 bitmaps examined                              ,          2478 
ASSM gsp:L2 bitmaps examined                              ,             1 
ASSM gsp:reject db                                        ,         12388

We can see that we’ve examined 2,478 “level 1” bitmap blocks. A level 1 block holds the basic “bitmap” that records the state of a number of data blocks (typically 128 blocks once the object gets very large) so our session has worked its way through 2,478 maps trying to find a data block that it could use to insert a row. The “reject db” statistic tells us about data blocks that have been examined and rejected (presumably because the row we want to insert is too large to fit, or maybe because there are no free ITL (interested transaction list) entries available in the block). So we seem to be spending all our time searching for somewhere to insert rows. This shouldn’t really be happening – it’s a type of problem that Oracle has been worrying away at for quite some time: how do you avoid “losing” space by updating bitmap blocks too soon on inserts without going to the opposite extreme and leaving bitmap blocks that claim the space is free when it’s in use by uncommitted transactions.

Note to self: I don’t know how we managed to reject more blocks (12,388) than we’ve examined (12,366) but possibly it’s just one of those timing glitches (the error is less than one fifth of one percent); possibly it’s something to do with the reject count including some of the L1 bitmap blocks.

Clearly there’s something funny going on with space management – and we need to look at a few blocks that are exhibiting the problems. But how do we find a few suitable blocks? And that’s where, finally, we get to the title of the piece.

We are in the lucky position of having “lost write protection” enabled – so the redo log file will hold lots of “block read records”. We can’t get the file and block addresses we need from the “cell physical read” wait events so let’s ask the redo log to supply them. We just have to pick a log file (online or archived) and tell Oracle to dump some of it – and we can probably get away with a fairly small dump since we want just a single type of redo record over a short period of time. Here’s an example showing the format of two slightly different commands you could execute:


alter system dump logfile '/u01/app/oracle/oradata/orcl12c/redo03.log'
        rba min 2781    1
        rba max 2781    1000
        layer 23 opcode 2
;

alter system dump redo 
        scn min 19859991 scn max 19964263 
        layer 23 opcode 2
;

The first command is to dump a log file by name – but you may have to fiddle around a bit to find the names of an archived log file because if you choose this option you need to know the sequence number (sequence# in v$log_history) of the file if you want to restrict the size of the dump. The second command simply dumps redo for (in this example) an SCN range – and it’s easy to query v$log_history to find dates, times, and SCN ranges – Oracle will work out for itself which file it has to access. In both cases I’ve restricted the dump to just those redo records that contain change vectors of type BRR (block read records) which is what the layer 23 opcode 2 line is about.
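
As a convenience, here's a sketch of that v$log_history lookup wrapped in a shell here-document (the connect string is illustrative):

sqlplus -s / as sysdba <<'EOF'
-- map log sequence numbers to SCN ranges and times for the dump commands above
select sequence#, first_change#, next_change#, first_time
from   v$log_history
order  by sequence#;
EOF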

Here’s an example of a redo record that contains nothing but a single BRR. (It’s from a single-block read, a multi-block read would produce a redo record with multiple change vectors, one vector for each block read.)


REDO RECORD - Thread:1 RBA: 0x000add.00000019.01b0 LEN: 0x004c VLD: 0x10 CON_UID: 2846920952
SCN: 0x00000000025a7c13 SUBSCN:  1 05/23/2019 10:42:51
CHANGE #1 CON_ID:3 TYP:2 CLS:6 AFN:9 DBA:0x00407930 OBJ:40 SCN:0x00000000001a1e2a SEQ:2 OP:23.2 ENC:0 RBL:0 FLG:0x0000
 Block Read - afn: 9 rdba: 0x00407930 BFT:(1024,4225328) non-BFT:(1,31024)
              scn: 0x00000000001a1e2a seq: 0x02
              flags: 0x00000006 ( dlog ckval )
              where: qeilwh05: qeilbk

If you’re wondering about the two interpretations of the rdba (relative datablock address), one is for BFTs (big file tablespaces) and one for non-BFTs. The other thing you’ll notice about the interpretations is that neither file number (1024 or 1) matches the afn (absolute file number). In smaller, non-CDB databases you will probably find that the afn matches the file number in the non-BFT interpretation, but I happen to be testing on a PDB and the first file in my SYSTEM tablespace happens to be the 9th file created in the CDB – connecting as SYS in my PDB I can compare the absolute and “relative” file number very easily:


SQL> select file#, rfile#, name from v$datafile;

     FILE#     RFILE# NAME
---------- ---------- ----------------------------------------------------------------
         9          1 /u01/app/oracle/oradata/orcl12c/orcl/system01.dbf
        10          4 /u01/app/oracle/oradata/orcl12c/orcl/sysaux01.dbf
        11          9 /u01/app/oracle/oradata/orcl12c/orcl/undotbs01.dbf
        12         12 /u01/app/oracle/oradata/orcl12c/orcl/users01.dbf
        13         13 /u01/app/oracle/oradata/orcl12c/orcl/APEX_1991375173370654.dbf
        14         14 /u01/app/oracle/oradata/orcl12c/orcl/APEX_1993195660370985.dbf
        22         22 /u01/app/oracle/oradata/orcl12c/orcl/test_8k_assm.dbf
        23         23 /u01/app/oracle/oradata/orcl12c/orcl/test_8k.dbf

8 rows selected.

For bigfile tablespaces the “relative” file number is a complete fake and simply reports 1024 – you’re only allowed one file in a bigfile tablespace, so there is no “relativity” involved. (Unless you’re working at CERN and storing data about particle collisions in the LHC.)

The key point to remember when reading BRRs then, is that you should take the file number from the afn and the block number from the (appropriate) interpretation of the rdba. For the example above I would issue: “alter database dump datafile 9 block 31024;”
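
The arithmetic behind those interpretations is simple: for a non-BFT (smallfile) tablespace the top 10 bits of the 32-bit rdba are the relative file number and the bottom 22 bits are the block number, while for a BFT the entire value is the block number. Shell arithmetic reproduces the figures in the BRR above:

$ echo $(( 0x00407930 >> 22 ))        # 1       : non-BFT relative file number
$ echo $(( 0x00407930 & 0x3FFFFF ))   # 31024   : non-BFT block number
$ echo $(( 0x00407930 ))              # 4225328 : BFT block number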

Finally

The originator of the thread hasn’t yet made any public response to the suggestion of dumping and reviewing blocks – possibly they’ve started a private conversation with Stefan Koehler who had suggested a strategy that examined function calls rather than block contents – so we’re unable to do any further analysis on what’s going on behind the scenes.

What we would be looking for is any indication that Oracle is repeatedly re-reading the same bitmap blocks and the same data blocks (by a simple check of block addresses); and if that is the case we would want to get some clue about why that might be happening by examining the contents of the data blocks that are subject to repeated reads without changing their status in the bitmap from “space available” to “full”.  As it is we just have to wait for the OP to tell us if they’ve made any further progress.

 

Dbvisit Standby 9 Installation on Linux (and Vagrant)

Tim Hall - Tue, 2019-06-11 03:45

The folks at Dbvisit recently released version 9 of their Dbvisit standby product.

It’s been a while since I last played with the product, so I downloaded the free trial and gave it a whirl.

I have to admit I forgot just how easy it is to work with. It feels pretty much like “unzip and go”. The result of my playtime was this article.

I also knocked up a Vagrant build, so I can easily recreate it. You can find that here.

I stuck to a basic configuration of a single instance primary (node1) and standby (node2), with the console on a separate VM (console). If you want to try something more exotic, or you are using Windows, you can get more information from the Installing Dbvisit Standby documentation.

Cheers

Tim…

PS. This isn’t a sponsored post. I’ve known the folks at Dbvisit for years so I keep an eye on what they are doing.


[Part I] Oracle and Microsoft Cloud: Interconnect Overview

Online Apps DBA - Tue, 2019-06-11 02:15

Oracle and Microsoft Cloud: Interconnect Overview Oracle has announced a Cloud interoperability partnership between Microsoft and Oracle Cloud. This cross-cloud interlink enables customers to migrate and run mission-critical enterprise workloads across Microsoft Azure and Oracle Cloud Infrastructure (OCI). Check our latest blog post at https://k21academy.com/oci38 to know more about: ▪Network Connectivity Between Oracle and Microsoft […]

The post [Part I] Oracle and Microsoft Cloud: Interconnect Overview appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Creating An ATP Instance With The OCI Service Broker

OTN TechBlog - Mon, 2019-06-10 08:03

We recently announced the release of the OCI Service Broker for Kubernetes, an implementation of the Open Service Broker API that streamlines the process of provisioning and binding to services that your cloud native applications depend on.

The Kubernetes documentation lays out the following use case for the Service Catalog API:

An application developer wants to use message queuing as part of their application running in a Kubernetes cluster. However, they do not want to deal with the overhead of setting such a service up and administering it themselves. Fortunately, there is a cloud provider that offers message queuing as a managed service through its service broker.

A cluster operator can setup Service Catalog and use it to communicate with the cloud provider’s service broker to provision an instance of the message queuing service and make it available to the application within the Kubernetes cluster. The application developer therefore does not need to be concerned with the implementation details or management of the message queue. The application can simply use it as a service.

Put simply, the Service Catalog API lets you manage services within Kubernetes that are not deployed within Kubernetes.  Things like messaging queues, object storage and databases can be deployed with a set of Kubernetes configuration files, without needing knowledge of the underlying API or tools used to create those instances, thus simplifying the deployment and making it portable to virtually any Kubernetes cluster.

The current OCI Service Broker adapters that are available at this time include:

  • Autonomous Transaction Processing (ATP)
  • Autonomous Data Warehouse (ADW)
  • Object Storage
  • Streaming

I won't go into too much detail in this post about the feature, as the introduction post and GitHub documentation do a great job of explaining service brokers and the problems that they solve. Rather, I'll focus on using the OCI Service Broker to provision an ATP instance and deploy a container which has access to the ATP credentials and wallet.  

To get started, you'll first have to follow the installation instructions on GitHub. At a high level, the process involves:

  1. Deploy the Kubernetes Service Catalog client to the OKE cluster
  2. Install the svcat CLI tool
  3. Deploy the OCI Service Broker
  4. Create a Kubernetes Secret containing OCI credentials
  5. Configure Service Broker with TLS
  6. Configure RBAC (Role Based Access Control) permissions
  7. Register the OCI Service Broker

Once you've installed and registered the service broker, you're ready to use the ATP service plan to provision an ATP instance. I'll go into details below, but the overview of the process looks like so:

  1. Create a Kubernetes secret with a new admin and wallet password (in JSON format)
  2. Create a YAML configuration for the ATP Service Instance
  3. Deploy the Service Instance
  4. Create a YAML config for the ATP Service Binding
  5. Deploy the Service Binding, which results in the creation of a new Kubernetes secret containing the wallet contents
  6. Create a Kubernetes secret for Microservice deployment use containing the admin password and the wallet password (in plain text format)
  7. Create a YAML config for the Microservice deployment which uses an initContainer to decode the wallet secrets (due to a bug which double encodes them) and mounts the wallet contents as a volume

Following that overview, let's take a look at a detailed example. The first thing we'll have to do is make sure that the user we're using with the OCI Service Broker has the proper permissions.  If you're using a user that is a member of the group devops then you would make sure that you have a policy in place that looks like this:

Allow group devops to manage autonomous-database in compartment [COMPARTMENT_NAME]

The next step is to create a secret that will be used to set some passwords during ATP instance creation.  Create a file called atp-secret.yaml and populate it similarly to the example below.  The values for password and walletPassword must be in the format of a JSON object as shown in the comments inline below, and must be base64 encoded.  You can use an online tool for the base64 encoding, or use the command line if you're on a Unix system (echo '{"password":"Passw0rd123456"}' | base64).
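
A minimal sketch of that secret, using stringData so Kubernetes does the base64 encoding for you (key names and passwords here are illustrative; if you encode by hand with echo ... | base64, put the results under data: instead):

cat > atp-secret.yaml <<'EOF'
# sketch only: key names and passwords are illustrative placeholders
apiVersion: v1
kind: Secret
metadata:
  name: atp-secret
stringData:
  password: '{"password":"Passw0rd123456"}'
  walletPassword: '{"walletPassword":"Passw0rd123456"}'
EOF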

Now create the secret via: kubectl create -f atp-secret.yaml.

Next, create a file called atp-instance.yaml and populate as follows (updating the name, compartmentId, dbName, cpuCount, storageSizeTBs, and licenseType as necessary).  The parameters are detailed in the full documentation (link below).  Note, we're referring to the previously created secret in this YAML file.
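
A rough sketch of the shape, written as a shell here-document; the class and plan names and the secret reference are assumptions, so confirm them against svcat get classes and the service broker documentation:

cat > atp-instance.yaml <<'EOF'
# sketch only: class/plan names, parameter spelling and the secret
# reference are assumptions taken from the prose above
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: atp-demo
spec:
  clusterServiceClassExternalName: atp-service   # assumed class name
  clusterServicePlanExternalName: standard       # assumed plan name
  parameters:
    name: atp-demo
    compartmentId: ocid1.compartment.oc1..changeme
    dbName: atpdemo
    cpuCount: 1
    storageSizeTBs: 1
    licenseType: BYOL
  parametersFrom:
    - secretKeyRef:        # the admin password created in step 1
        name: atp-secret
        key: password
EOF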

Create the instance with: kubectl create -f atp-instance.yaml. This will take a bit of time, but in about 15 minutes or less your instance will be up and running. You can check the status via the OCI console UI, or with the command svcat get instances, which will return a status of "ready" when the instance has been provisioned.

Now that the instance has been provisioned, we can create a binding.  Create a file called atp-binding.yaml and populate it as such:
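
A sketch of the binding, following the same pattern (field values are assumptions; the secret name matches the one queried below):

cat > atp-binding.yaml <<'EOF'
# sketch only: field values are assumptions following the instance example
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: atp-demo-binding
spec:
  instanceRef:
    name: atp-demo            # the ServiceInstance created above
  secretName: atp-demo-binding
  parametersFrom:
    - secretKeyRef:           # the wallet password from the step 1 secret
        name: atp-secret
        key: walletPassword
EOF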

Note that we're once again using a value from the initial secret that we created in step 1. Apply the binding with: kubectl create -f atp-binding.yaml and check the binding status with svcat get bindings, looking again for a status of "ready". Once it's ready, you'll be able to view the secret that was created by the binding via: kubectl get secrets atp-demo-binding -o yaml where the secret name matches the 'name' value used in atp-binding.yaml. The secret will look similar to the following output:

This secret contains the contents of your ATP instance wallet; next we'll mount these as a volume inside the application deployment.  Let's create a final YAML file called atp-demo.yaml and populate it like below.  Note, there is currently a bug in the service broker that double encodes the secrets, so it's currently necessary to use an initContainer to get the values properly decoded.

Here we're creating a basic Alpine Linux container just to test the service instance. Your application deployment would use a Docker image with your application, but the format and premise would be nearly identical to this. Create the deployment with kubectl create -f atp-demo.yaml and once the pod is in a "ready" state we can launch a terminal and test things out a bit:

Note that we have 3 environment variables available in the instance:  DB_ADMIN_USER, DB_ADMIN_PWD and WALLET_PWD.  We also have a volume available at /db-demo/creds containing all of our wallet contents that we need to make a connection to the new ATP instance.
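
Inside the pod you can verify the wiring with a couple of commands (a sketch; substitute your pod's name, and note the exact wallet file names may vary):

$ kubectl exec -it <pod-name> -- sh
/ # echo $DB_ADMIN_USER
/ # ls /db-demo/creds                 # tnsnames.ora, sqlnet.ora, cwallet.sso, ...
/ # export TNS_ADMIN=/db-demo/creds   # point an Oracle client at the wallet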

Check out the full instructions for more information or background on the ATP service broker. The ability to bind to an existing ATP instance is scheduled as an enhancement to the service broker in the near future, and some other exciting features are planned.

PeopleTools Table Reference Generator

David Kurtz - Sun, 2019-06-09 15:46
Like many other PeopleSoft professionals, I spend a lot of time looking at the PeopleTools tables because they contain meta-data about the PeopleSoft application. Much of the application is stored in PeopleTools tables. Some provide information about the Data Model. Many of my utility scripts reference the PeopleTools tables, some of them update them too. Therefore, it is very helpful to be able to understand what is in these tables. In PeopleSoft for the Oracle DBA, I discussed some tables that are of regular interest. I included the tables that correspond to the database catalogue and that are used during the PeopleSoft login procedure. The tables that are maintained by the process scheduler are valuable because they contain information about who ran what process when, and how long they ran for.
I am not the only person to have started to document the PeopleTools tables on their website or blog; most people have picked a few tables that are of particular interest. I wanted to produce a complete reference.  However, with over 4000 PeopleTools tables and views, it is simply not viable to do even a significant number of them manually.  So, I wrote some SQL and PL/SQL to dynamically generate a page for each PeopleTools table and view and put the generated pages on my website.  If you use Google to search for a PeopleTools name you will probably find the reference page.
I have now revisited that code and made a number of changes and improvements.
  • I have used a later version of PeopleTools to generate the reference on my website.  The list of PeopleTools tables is no longer defined in PeopleTools by object security, so I have used an independent and somewhat broader definition: Table or View records that are either owned by PPT or whose SQLTABLENAME is the same as the record name.
  • There is much less reliance on static pages.  There are now only 3 such pages; everything else is generated.  Instead, additional data is loaded from static scripts into the PLAN_TABLE, which should always be present in an Oracle database and so doesn't have to be created.  Since it is a global temporary table, there is no problem with debris being left behind or interference with other processes.  That data is then combined with data in the PeopleTools tables during the generation process.
  • The utility has been rewritten as a PL/SQL package that should be created in the PeopleSoft Owner ID schema (usually SYSADM).
  • The generated HTML is simply tidier, and more consistent.
The source is available on GitHub at https://github.com/davidkurtz/PTRef, so you can download and generate your own reference on your own current version of PeopleTools. An example of the generated output can be found on my website.
The idea of the original PeopleTools tables reference was that people could contribute additional descriptions and information that were not in the PeopleTools tables.  That can still happen, and indeed should be easier, by making changes to the scripts that load the additional data and uploading new versions to GitHub.


Dell Boomi Training: Day 8 Review/Introduction & Q/As

Online Apps DBA - Sun, 2019-06-09 09:00

[Q/A] Dell Boomi: Day 8: Cache & APIs.  Dell Boomi is a business unit acquired by Dell in 2010 that specializes in cloud-based integration, API management, and Master Data Management. Dell Boomi AtomSphere is a multi-tenant, cloud-based integration platform that facilitates data and application integration. It is a leading iPaaS which helps to […]

The post Dell Boomi Training: Day 8 Review/Introduction & Q/As appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

CPU percent

Jonathan Lewis - Sun, 2019-06-09 08:31

A recent post on the ODC General Database forum asked for an explanation of the AWR report values “%Total CPU” and “%Busy CPU” under the “Instance CPU” label, and how the “%Busy CPU” could be greater than 100%.  Here’s a text reproduction of the relevant sample supplied:

Host CPU
  CPUs  Cores  Sockets  Load Average Begin  Load Average End  %User  %System  %WIO  %Idle
     2      2        1                0.30              1.23   10.7      5.6   5.3   77.7

Instance CPU
  %Total CPU  %Busy CPU  %DB Time waiting for CPU (Resource Manager)
        29.8      133.8                                          0.0

The answer is probably “It’s 12.1 and it’s a programmer error”.

  • Note that the Host CPU %Idle is not consistent with the three usage figures:  10.7 + 5.6 + 5.3 = 21.6 whereas 100 – 77.7 = 22.3.
  • So let’s run with 22.3% and see what else we can notice: 29.8 / 22.3 = 1.3363 – that’s pretty close (when expressed as a percentage) to 133.8%
Hypothesis:

Someone did the division the wrong way round when trying to work out the percentage of the host’s non-idle CPU that could be attributed to the instance. In this example the “%Busy CPU” should actually report 100 * 22.3 / 29.8 = 74.8%
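A quick sanity check with bc reproduces both the reported (upside-down) figure and the corrected one:

$ echo 'scale=2; 100 * 29.8 / 22.3' | bc
133.63
$ echo 'scale=2; 100 * 22.3 / 29.8' | bc
74.83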

Note – the difference between 133.8 and 133.63 can be attributed to the fact that the various figures reported in this bit of the AWR are rounded to the nearest 1 decimal place.

Note 2 – I don’t think this error is present in 11.2.0.4 or 12.2.0.1

Google Jamboard ?!

Dietrich Schroff - Sun, 2019-06-09 04:24
By reorganising my Google Drive I ran into "Jamboard"...
This is a tiny Google web application where you can do some fast sketches, but the functionality is really limited.

[Q/A] 1Z0-932 Oracle Cloud Infrastructure Architect Certification Day 2: IAM (Compartments, Policies, Users, Groups)

Online Apps DBA - Sun, 2019-06-09 01:14

[Q/A] 1Z0-932 Oracle Cloud Infra Architect Day 2: IAM (Compartments, Policies, Users, Groups).  IAM in OCI carries only 10% weightage in the 1Z0-932 certification exam, but it is one of the most important topics, as IAM controls who has access to which OCI resources (Network, Storage, Compute, Database, etc.). It is important that you understand what is […]

The post [Q/A] 1Z0-932 Oracle Cloud Infrastructure Architect Certification Day 2: IAM (Compartments, Policies, Users, Groups) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Building PivotalMySQLWeb using Cloud Native Buildpacks (CNB)

Pas Apicella - Sun, 2019-06-09 00:28
Not heard of Cloud Native Buildpacks? If not, this post will show you why you might want to take a look. If you want to make your developers more productive with Kubernetes, Cloud Native Buildpacks are worth your attention.

https://buildpacks.io/

Until cloud platforms such as Heroku and Pivotal Cloud Foundry incorporate the Buildpack v3 lifecycle, the fastest way to try Cloud Native Buildpacks is via the pack CLI, which integrates with your local Docker daemon. Here is an example that takes the Pivotal MySQLWeb application and creates an OCI-compliant image from it.

Pre Steps:

1. Install pack using this link

  https://buildpacks.io/docs/install-pack/

2. Ensure you have Docker CE installed if not use this link

  https://hub.docker.com/search/?type=edition&offering=community

Steps:

1. I am using Pivotal MySQLWeb, which I have packaged using Maven and then exploded the JAR onto the file system to avoid compilation. You can still use plain source code and Cloud Native Buildpacks will work, but in this example I skip the Maven compilation step by using an exploded, already-compiled JAR, which is what a build service on a cloud platform would do in any case.
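If you want to reproduce that exploded layout yourself, the steps look roughly like this; the JAR file name is an assumption based on typical Maven defaults, so check your own target directory:

$ mvn -DskipTests package                     # run in the PivotalMySQLWeb source checkout
$ mkdir PivotalMySQLWeb && cd PivotalMySQLWeb # directory later passed to pack --path
$ jar -xf ../target/PivotalMySQLWeb-<version>.jar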

Let's start by using "pack" to create our image as per below

papicella@papicella:~/pivotal/PCF/APJ/PPTX/CNCF/buildpacks.io/demos$ pack build pivotal-mysql-web --path ./PivotalMySQLWeb

Using default builder image cloudfoundry/cnb:cflinuxfs3
Pulling image index.docker.io/cloudfoundry/cnb:cflinuxfs3
cflinuxfs3: Pulling from cloudfoundry/cnb
18d7ea8d445c: Pull complete
18d0be9dc457: Pull complete
f5407c34df38: Pull complete
35c61e03e6bf: Pull complete
40d144c93ada: Pull complete
4f4fb700ef54: Pull complete
0432ec3bb9f8: Pull complete
3731e128636c: Pull complete
1bab066bbafe: Pull complete
4cc53e89f635: Pull complete
4fd62e90f994: Pull complete
dc9fa77b2cd2: Pull complete
3cd4ed6e9bbf: Pull complete
a525f8221dc8: Pull complete
f01bc40f59c5: Pull complete
1f9842b1696d: Pull complete
3e15eeb884d5: Pull complete
3c0f59c7956f: Pull complete
c3e6214340d9: Pull complete
6955f2c8bfad: Pull complete
5112994886a0: Pull complete
e19195f86112: Pull complete
07fb5cd454f2: Pull complete
Digest: sha256:197439e9ccc699daa6431bd7154c80b3b0ce75b072792a0e93edd6779756f3bc
Status: Downloaded newer image for cloudfoundry/cnb:cflinuxfs3
Selected run image cloudfoundry/cnb-run:cflinuxfs3 from builder
Pulling image cloudfoundry/cnb-run:cflinuxfs3
cflinuxfs3: Pulling from cloudfoundry/cnb-run
0a25bf28c5eb: Pull complete
7216becd0525: Pull complete
Digest: sha256:f9605c5af04b2ba04918879f2bf9d37c55620ae28e73b94e9926cd97bbf8fe96
Status: Downloaded newer image for cloudfoundry/cnb-run:cflinuxfs3
Using build cache volume pack-cache-1f2556cf858e.build
Executing lifecycle version 0.2.1
===> DETECTING
[detector] Trying group 1 out of 4 with 8 buildpacks...
[detector] ======== Results ========
[detector] pass: Cloud Foundry OpenJDK Buildpack
[detector] skip: Cloud Foundry Build System Buildpack
[detector] pass: Cloud Foundry JVM Application Buildpack
[detector] skip: Cloud Foundry Azure Application Insights Buildpack
[detector] skip: Cloud Foundry Debug Buildpack
[detector] skip: Cloud Foundry Google Stackdriver Buildpack
[detector] skip: Cloud Foundry JMX Buildpack
[detector] skip: Cloud Foundry Procfile Buildpack
===> RESTORING
[restorer] cache '/cache': metadata not found, nothing to restore
===> ANALYZING
===> BUILDING
[builder] -----> Cloud Foundry OpenJDK Buildpack 1.0.0-M8
[builder] -----> OpenJDK JRE 11.0.3: Contributing to layer
[builder]        Downloading from https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.3%2B7/OpenJDK11U-jre_x64_linux_hotspot_11.0.3_7.tar.gz
[builder]        Verifying checksum
[builder]        Expanding to /layers/org.cloudfoundry.openjdk/openjdk-jre
[builder]        Writing JAVA_HOME to shared
[builder]
[builder] -----> Cloud Foundry JVM Application Buildpack 1.0.0-M8
[builder] -----> Main-Class Classpath: Contributing to layer
[builder]        Writing CLASSPATH to launch
[builder] -----> Process types:
[builder]        task: java -cp $CLASSPATH $JAVA_OPTS org.springframework.boot.loader.JarLauncher
[builder]        web:  java -cp $CLASSPATH $JAVA_OPTS org.springframework.boot.loader.JarLauncher
[builder]
===> EXPORTING
[exporter] Exporting layer 'app' with SHA sha256:b32618ed6b86fb496a4ce33db9df49fdd4ef16c5646b174b5643c8befcb7408a
[exporter] Exporting layer 'config' with SHA sha256:9538e967fa10f23b3415c382a3754ebf4c2645c20b6d76af519236c1181e7639
[exporter] Exporting layer 'launcher' with SHA sha256:04ca7957074763290a9abe6a067ce8c902a2ab51ed6c55102964e3f3294cdebd
[exporter] Exporting layer 'org.cloudfoundry.openjdk:openjdk-jre' with SHA sha256:e540f1464509ac673a25bd2f24c7dd6875f805c0dd35e9af84dd4669e2fd0c93
[exporter] Exporting layer 'org.cloudfoundry.jvmapplication:main-class' with SHA sha256:8537197b3f57d86a59397b89b4fbdd14900a602cc12961eae338b9ef2513cdc0
[exporter]
[exporter] *** Image: index.docker.io/library/pivotal-mysql-web:latest@8957afa91f464e2c0adc24968c31613148b9905ff1fb90ec59ff84e165d939ac
===> CACHING
[cacher] Caching layer 'org.cloudfoundry.openjdk:d2df8bc799b09c8375f79bf646747afac3d933bb1f65de71d6c78e7466ff8fe4' with SHA sha256:11439713b023be71211cb83ecd56a1be63e0c0be3e4814a18cc4c71d2264dea5
Successfully built image pivotal-mysql-web

2. Inspect the docker image on your laptop as shown below

papicella@papicella:~/pivotal/PCF/APJ/PPTX/CNCF/buildpacks.io/demos$ docker image inspect pivotal-mysql-web

[
    {
        "Id": "sha256:8957afa91f464e2c0adc24968c31613148b9905ff1fb90ec59ff84e165d939ac",
        "RepoTags": [
            "pivotal-mysql-web:latest"
        ],
        "RepoDigests": [],
        "Parent": "",
        "Comment": "",
        "Created": "2019-06-05T05:25:58Z",
        "Container": "",
        "ContainerConfig": {

...

3. Run the docker image as shown below

papicella@papicella:~/pivotal/PCF/APJ/PPTX/CNCF/buildpacks.io/demos$ docker run --rm -p 8080:8080 pivotal-mysql-web

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.1.0.RELEASE)

2019-06-05 05:30:43.005  INFO 1 --- [           main] c.p.p.m.PivotalMySqlWebApplication       : Starting PivotalMySqlWebApplication on 5d21f8f32ba4 with PID 1 (/workspace/BOOT-INF/classes started by vcap in /workspace)
2019-06-05 05:30:43.009  INFO 1 --- [           main] c.p.p.m.PivotalMySqlWebApplication       : No active profile set, falling back to default profiles: default
2019-06-05 05:30:44.662  INFO 1 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat initialized with port(s): 8080 (http)
2019-06-05 05:30:44.686  INFO 1 --- [           main] o.apache.catalina.core.StandardService   : Starting service [Tomcat]
2019-06-05 05:30:44.687  INFO 1 --- [           main] org.apache.catalina.core.StandardEngine  : Starting Servlet Engine: Apache Tomcat/9.0.12
2019-06-05 05:30:44.698  INFO 1 --- [           main] o.a.catalina.core.AprLifecycleListener   : The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: [/layers/org.cloudfoundry.openjdk/openjdk-jre/lib:/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib]
2019-06-05 05:30:44.793  INFO 1 --- [           main] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring embedded WebApplicationContext
2019-06-05 05:30:44.794  INFO 1 --- [           main] o.s.web.context.ContextLoader            : Root WebApplicationContext: initialization completed in 1736 ms
2019-06-05 05:30:45.130  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'characterEncodingFilter' to: [/*]
2019-06-05 05:30:45.131  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'webMvcMetricsFilter' to: [/*]
2019-06-05 05:30:45.131  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'hiddenHttpMethodFilter' to: [/*]
2019-06-05 05:30:45.131  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'formContentFilter' to: [/*]
2019-06-05 05:30:45.132  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'requestContextFilter' to: [/*]
2019-06-05 05:30:45.132  INFO 1 --- [           main] .s.DelegatingFilterProxyRegistrationBean : Mapping filter: 'springSecurityFilterChain' to: [/*]
2019-06-05 05:30:45.133  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'httpTraceFilter' to: [/*]
2019-06-05 05:30:45.134  INFO 1 --- [           main] o.s.b.w.servlet.ServletRegistrationBean  : Servlet dispatcherServlet mapped to [/]
2019-06-05 05:30:45.436  INFO 1 --- [           main] o.s.s.concurrent.ThreadPoolTaskExecutor  : Initializing ExecutorService 'applicationTaskExecutor'
2019-06-05 05:30:45.851  INFO 1 --- [           main] .s.s.UserDetailsServiceAutoConfiguration :

Using generated security password: 3823aef6-6f72-4f5f-939d-bbd3d57ec2fa

2019-06-05 05:30:45.931  INFO 1 --- [           main] o.s.s.web.DefaultSecurityFilterChain     : Creating filter chain: Ant [pattern='/**'], []
2019-06-05 05:30:45.967  INFO 1 --- [           main] o.s.s.web.DefaultSecurityFilterChain     : Creating filter chain: any request, [org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@2e140e59, org.springframework.security.web.context.SecurityContextPersistenceFilter@26ae880a, org.springframework.security.web.header.HeaderWriterFilter@25a73de1, org.springframework.security.web.csrf.CsrfFilter@652ab8d9, org.springframework.security.web.authentication.logout.LogoutFilter@17814b1c, org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter@54f66455, org.springframework.security.web.authentication.ui.DefaultLoginPageGeneratingFilter@58399d82, org.springframework.security.web.authentication.ui.DefaultLogoutPageGeneratingFilter@49a71302, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@4c03a37, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@3c017078, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@298d9a05, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@5cd61783, org.springframework.security.web.session.SessionManagementFilter@771db12c, org.springframework.security.web.access.ExceptionTranslationFilter@5f303ecd, org.springframework.security.web.access.intercept.FilterSecurityInterceptor@73ab3aac]
2019-06-05 05:30:46.000  INFO 1 --- [           main] o.s.b.a.e.web.EndpointLinksResolver      : Exposing 9 endpoint(s) beneath base path '/actuator'
2019-06-05 05:30:46.096  INFO 1 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
2019-06-05 05:30:46.101  INFO 1 --- [           main] c.p.p.m.PivotalMySqlWebApplication       : Started PivotalMySqlWebApplication in 3.654 seconds (JVM running for 4.324)

4. Browse to localhost:8080 to invoke the application just to be sure it worked
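If you prefer the command line, a quick status check confirms the container is answering (the exact code returned will depend on the app's Spring Security setup; note the generated password in the startup log above):

$ curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080/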

5. Publish the OCI compliant image to your registry. In this example I am using Docker Hub as shown below

papicella@papicella:~/pivotal/PCF/APJ/PPTX/CNCF/buildpacks.io/demos$ pack build pasapples/pivotal-mysql-web:latest --publish --path ./PivotalMySQLWeb --no-pull

Using default builder image cloudfoundry/cnb:cflinuxfs3
Selected run image cloudfoundry/cnb-run:cflinuxfs3 from builder
Using build cache volume pack-cache-a4a78257c7be.build
Executing lifecycle version 0.2.1
===> DETECTING
[detector] Trying group 1 out of 4 with 8 buildpacks...
[detector] ======== Results ========
[detector] pass: Cloud Foundry OpenJDK Buildpack
[detector] skip: Cloud Foundry Build System Buildpack
[detector] pass: Cloud Foundry JVM Application Buildpack
[detector] skip: Cloud Foundry Azure Application Insights Buildpack
[detector] skip: Cloud Foundry Debug Buildpack
[detector] skip: Cloud Foundry Google Stackdriver Buildpack
[detector] skip: Cloud Foundry JMX Buildpack
[detector] skip: Cloud Foundry Procfile Buildpack
===> RESTORING
[restorer] cache '/cache': metadata not found, nothing to restore
===> ANALYZING
===> BUILDING
[builder] -----> Cloud Foundry OpenJDK Buildpack 1.0.0-M8
[builder] -----> OpenJDK JRE 11.0.3: Contributing to layer
[builder]        Downloading from https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.3%2B7/OpenJDK11U-jre_x64_linux_hotspot_11.0.3_7.tar.gz
[builder]        Verifying checksum
[builder]        Expanding to /layers/org.cloudfoundry.openjdk/openjdk-jre
[builder]        Writing JAVA_HOME to shared
[builder]
[builder] -----> Cloud Foundry JVM Application Buildpack 1.0.0-M8
[builder] -----> Main-Class Classpath: Contributing to layer
[builder]        Writing CLASSPATH to launch
[builder] -----> Process types:
[builder]        task: java -cp $CLASSPATH $JAVA_OPTS org.springframework.boot.loader.JarLauncher
[builder]        web:  java -cp $CLASSPATH $JAVA_OPTS org.springframework.boot.loader.JarLauncher
[builder]
===> EXPORTING
[exporter] Exporting layer 'app' with SHA sha256:b32618ed6b86fb496a4ce33db9df49fdd4ef16c5646b174b5643c8befcb7408a
[exporter] Exporting layer 'config' with SHA sha256:9538e967fa10f23b3415c382a3754ebf4c2645c20b6d76af519236c1181e7639
[exporter] Exporting layer 'launcher' with SHA sha256:04ca7957074763290a9abe6a067ce8c902a2ab51ed6c55102964e3f3294cdebd
[exporter] Exporting layer 'org.cloudfoundry.openjdk:openjdk-jre' with SHA sha256:e540f1464509ac673a25bd2f24c7dd6875f805c0dd35e9af84dd4669e2fd0c93
[exporter] Exporting layer 'org.cloudfoundry.jvmapplication:main-class' with SHA sha256:8537197b3f57d86a59397b89b4fbdd14900a602cc12961eae338b9ef2513cdc0
[exporter]
[exporter] *** Image: index.docker.io/pasapples/pivotal-mysql-web:latest@sha256:c862eda516289c2daa29580c95b74b4d72eca9caf941a3a6ac2bf2bd886057e5
===> CACHING
[cacher] Caching layer 'org.cloudfoundry.openjdk:d2df8bc799b09c8375f79bf646747afac3d933bb1f65de71d6c78e7466ff8fe4' with SHA sha256:11439713b023be71211cb83ecd56a1be63e0c0be3e4814a18cc4c71d2264dea5
Successfully built image pasapples/pivotal-mysql-web:latest


At this point you have an OCI-compliant image sitting in your registry, ready to be consumed for your K8s application needs, all from source code or, as in this example, a pre-compiled Java application. And let's not forget this supports a polyglot programming model, so it works for Node.js, Python, and anything else that buildpacks support.
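Anyone (or any cluster) can now pull and run the published image straight from Docker Hub:

$ docker pull pasapples/pivotal-mysql-web:latest
$ docker run --rm -p 8080:8080 pasapples/pivotal-mysql-web:latest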

More Information

1. Cloud Native Buildpacks: an Industry-Standard Build Process for Kubernetes and Beyond.

2. buildpacks.io Home Page

Categories: Fusion Middleware

Dell Boomi Training: How to check number of Connection Licenses in Boomi Account

Online Apps DBA - Sat, 2019-06-08 03:10

Connection Licenses in Boomi Account.  Our new blog post, Dell Boomi: How to check the number of Connection Licenses in a Boomi Account, at https://k21academy.com/dellboomi21 is ready to explain: ✔ How to check the number of Connection Licenses in a Boomi Account, ✔ Step-wise guidance to perform the same, ✔ Types of Environments (PROD & TEST) & much […]

The post Dell Boomi Training: How to check number of Connection Licenses in Boomi Account appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

RAC Installation Logs

Michael Dinh - Fri, 2019-06-07 12:24

Note to self on the locations of the installation and DB creation logs for a 2-node RAC.
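The logs are spread across several directories, so one quick way to see everything written during a recent install window is a find across all of them (paths as per the listings below; adjust the age filter to suit):

$ find /u01/app/oraInventory/logs \
       /u01/app/12.1.0.2/grid/cfgtoollogs \
       /u01/app/oracle/cfgtoollogs \
       -type f -mmin -180 2>/dev/null | sort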

Oracle Universal Installer logs for GI/DB:

[oracle@racnode-dc1-1 logs]$ pwd; ls -lhrt
/u01/app/oraInventory/logs
total 2.3M
-rw-r----- 1 oracle oinstall    0 Jun  7 16:39 oraInstall2019-06-07_04-39-01PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 16:43 oraInstall2019-06-07_04-39-01PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall  121 Jun  7 16:43 oraInstall2019-06-07_04-39-01PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall  11K Jun  7 16:43 AttachHome2019-06-07_04-39-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall  544 Jun  7 16:43 silentInstall2019-06-07_04-39-01PM.log
-rw-r----- 1 oracle oinstall  12K Jun  7 16:44 UpdateNodeList2019-06-07_04-39-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall 8.0K Jun  7 16:44 UpdateNodeList2019-06-07_04-39-01PM.log
-rw-r----- 1 oracle oinstall 2.8K Jun  7 16:44 oraInstall2019-06-07_04-39-01PM.out
-rw-r----- 1 oracle oinstall 1.1M Jun  7 16:44 installActions2019-06-07_04-39-01PM.log
-rw-r----- 1 oracle oinstall    0 Jun  7 16:57 oraInstall2019-06-07_04-57-13-PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 16:57 oraInstall2019-06-07_04-57-35-PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall    0 Jun  7 16:57 oraInstall2019-06-07_04-57-35-PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 16:58 UpdateNodeList2019-06-07_04-57-35-PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall 8.8K Jun  7 16:58 UpdateNodeList2019-06-07_04-57-13-PM.log
-rw-r----- 1 oracle oinstall  153 Jun  7 17:06 oraInstall2019-06-07_04-57-13-PM.out
-rw-r----- 1 oracle oinstall    0 Jun  7 17:06 oraInstall2019-06-07_05-06-42PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 17:06 oraInstall2019-06-07_05-06-42PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 17:07 UpdateNodeList2019-06-07_05-06-42PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall   33 Jun  7 17:07 oraInstall2019-06-07_05-06-42PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 17:07 UpdateNodeList2019-06-07_05-06-42PM.log
-rw-r----- 1 oracle oinstall   33 Jun  7 17:07 oraInstall2019-06-07_05-06-42PM.out
-rw-r----- 1 oracle oinstall   47 Jun  7 17:09 time2019-06-07_05-09-01PM.log
-rw-r----- 1 oracle oinstall    0 Jun  7 17:09 oraInstall2019-06-07_05-09-01PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 17:13 oraInstall2019-06-07_05-09-01PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall   29 Jun  7 17:14 oraInstall2019-06-07_05-09-01PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 17:14 AttachHome2019-06-07_05-09-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall  507 Jun  7 17:14 silentInstall2019-06-07_05-09-01PM.log
-rw-r----- 1 oracle oinstall  14K Jun  7 17:15 UpdateNodeList2019-06-07_05-09-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall 9.5K Jun  7 17:15 UpdateNodeList2019-06-07_05-09-01PM.log
-rw-r----- 1 oracle oinstall  496 Jun  7 17:15 oraInstall2019-06-07_05-09-01PM.out
-rw-r----- 1 oracle oinstall 1.1M Jun  7 17:15 installActions2019-06-07_05-09-01PM.log
[oracle@racnode-dc1-1 logs]$

silentInstall*.log

[oracle@racnode-dc1-1 logs]$ grep successful silent*.log

silentInstall2019-06-07_04-39-01PM.log:The installation of Oracle Grid Infrastructure 12c was successful.

silentInstall2019-06-07_05-09-01PM.log:The installation of Oracle Database 12c was successful.

[oracle@racnode-dc1-1 logs]$

installActions*.log

[oracle@racnode-dc1-1 logs]$ grep "Using paramFile" install*.log

installActions2019-06-07_04-39-01PM.log:INFO: Using paramFile: /u01/stage/12.1.0.2/grid/install/oraparam.ini

installActions2019-06-07_05-09-01PM.log:Using paramFile: /u01/stage/12.1.0.2/database/install/oraparam.ini

[oracle@racnode-dc1-1 logs]$

Run root script after installation:
$GRID_HOME/root.sh

[oracle@racnode-dc1-1 install]$ pwd; ls -lhrt root*.log
/u01/app/12.1.0.2/grid/install
-rw------- 1 oracle oinstall 7.4K Jun  7 16:51 root_racnode-dc1-1_2019-06-07_16-44-37.log
[oracle@racnode-dc1-1 install]$

Run configToolAllCommands:
$GRID_HOME/cfgtoollogs/configToolAllCommands RESPONSE_FILE=/u01/stage/rsp/configtoolallcommands.rsp

[oracle@racnode-dc1-1 oui]$ pwd; ls -lhrt
/u01/app/12.1.0.2/grid/cfgtoollogs/oui
total 1.2M
-rw-r----- 1 oracle oinstall    0 Jun  7 16:39 oraInstall2019-06-07_04-39-01PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 16:43 oraInstall2019-06-07_04-39-01PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall  121 Jun  7 16:43 oraInstall2019-06-07_04-39-01PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall  11K Jun  7 16:43 AttachHome2019-06-07_04-39-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall  544 Jun  7 16:43 silentInstall2019-06-07_04-39-01PM.log
-rw-r----- 1 oracle oinstall  12K Jun  7 16:44 UpdateNodeList2019-06-07_04-39-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall 8.0K Jun  7 16:44 UpdateNodeList2019-06-07_04-39-01PM.log
-rw-r----- 1 oracle oinstall 2.8K Jun  7 16:44 oraInstall2019-06-07_04-39-01PM.out
-rw-r----- 1 oracle oinstall 1.1M Jun  7 16:44 installActions2019-06-07_04-39-01PM.log
-rw-r--r-- 1 oracle oinstall    0 Jun  7 16:57 configActions2019-06-07_04-57-10-PM.err
-rw-r--r-- 1 oracle oinstall  13K Jun  7 17:06 configActions2019-06-07_04-57-10-PM.log
-rw------- 1 oracle oinstall    0 Jun  7 17:06 oraInstall2019-06-07_05-06-42PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 17:06 oraInstall2019-06-07_05-06-42PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 17:07 UpdateNodeList2019-06-07_05-06-42PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall   33 Jun  7 17:07 oraInstall2019-06-07_05-06-42PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 17:07 UpdateNodeList2019-06-07_05-06-42PM.log
-rw------- 1 oracle oinstall   33 Jun  7 17:07 oraInstall2019-06-07_05-06-42PM.out
[oracle@racnode-dc1-1 oui]$

dbca

[oracle@racnode-dc1-1 dbca]$ pwd; ls -lhrt
/u01/app/oracle/cfgtoollogs/dbca
total 116K
-rwxrwxr-x 1 oracle oinstall    0 Jun  7 17:02 trace.log_OraGI12Home1_2019-06-07_05-02-52-PM.lck
drwxrwxr-x 3 oracle oinstall 4.0K Jun  7 17:02 _mgmtdb
-rwxrwxr-x 1 oracle oinstall 105K Jun  7 17:03 trace.log_OraGI12Home1_2019-06-07_05-02-52-PM
drwxr-x--- 2 oracle oinstall 4.0K Jun  7 17:23 hawk
[oracle@racnode-dc1-1 dbca]$

dbca _mgmtdb

[oracle@racnode-dc1-1 _mgmtdb]$ pwd; ls -lhrt
/u01/app/oracle/cfgtoollogs/dbca/_mgmtdb
total 19M
-rwxrwxr-x 1 oracle oinstall    0 Jun  7 16:58 trace.log.lck
-rwxrwxr-x 1 oracle oinstall  18M Jun  7 16:59 tempControl.ctl
-rwxrwxr-x 1 oracle oinstall  349 Jun  7 16:59 CloneRmanRestore.log
-rwxrwxr-x 1 oracle oinstall  596 Jun  7 16:59 cloneDBCreation.log
-rwxrwxr-x 1 oracle oinstall    0 Jun  7 17:00 rmanUtil
-rwxrwxr-x 1 oracle oinstall 2.1K Jun  7 17:00 plugDatabase.log
-rwxrwxr-x 1 oracle oinstall  428 Jun  7 17:01 dbmssml_catcon_12271.lst
-rwxrwxr-x 1 oracle oinstall 3.5K Jun  7 17:01 dbmssml0.log
-rwxrwxr-x 1 oracle oinstall  396 Jun  7 17:01 postScripts.log
-rwxrwxr-x 1 oracle oinstall    0 Jun  7 17:01 lockAccount.log
-rwxrwxr-x 1 oracle oinstall  442 Jun  7 17:01 catbundleapply_catcon_12348.lst
-rwxrwxr-x 1 oracle oinstall 3.9K Jun  7 17:01 catbundleapply0.log
-rwxrwxr-x 1 oracle oinstall  424 Jun  7 17:01 utlrp_catcon_12416.lst
-rwxrwxr-x 1 oracle oinstall 9.2K Jun  7 17:02 utlrp0.log
-rwxrwxr-x 1 oracle oinstall  964 Jun  7 17:02 postDBCreation.log
-rwxrwxr-x 1 oracle oinstall  737 Jun  7 17:02 OraGI12Home1__mgmtdb_creation_checkpoint.xml
-rwxrwxr-x 1 oracle oinstall  877 Jun  7 17:02 _mgmtdb.log
-rwxrwxr-x 1 oracle oinstall 1.1M Jun  7 17:02 trace.log
-rwxrwxr-x 1 oracle oinstall 1.3K Jun  7 17:02 DetectOption.log
drwxrwxr-x 2 oracle oinstall 4.0K Jun  7 17:03 vbox_rac_dc1

[oracle@racnode-dc1-1 _mgmtdb]$ tail _mgmtdb.log
Completing Database Creation
DBCA_PROGRESS : 68%
DBCA_PROGRESS : 79%
DBCA_PROGRESS : 89%
DBCA_PROGRESS : 100%
Database creation complete. For details check the logfiles at:
 /u01/app/oracle/cfgtoollogs/dbca/_mgmtdb.
Database Information:
Global Database Name:_mgmtdb
System Identifier(SID):-MGMTDB
[oracle@racnode-dc1-1 _mgmtdb]$

dbca hawk

[oracle@racnode-dc1-1 hawk]$ pwd; ls -lhrt
/u01/app/oracle/cfgtoollogs/dbca/hawk
total 34M
-rw-r----- 1 oracle oinstall    0 Jun  7 17:16 trace.log.lck
-rw-r----- 1 oracle oinstall    0 Jun  7 17:16 rmanUtil
-rw-r----- 1 oracle oinstall  18M Jun  7 17:17 tempControl.ctl
-rw-r----- 1 oracle oinstall  384 Jun  7 17:17 CloneRmanRestore.log
-rw-r----- 1 oracle oinstall 2.8K Jun  7 17:20 cloneDBCreation.log
-rw-r----- 1 oracle oinstall    8 Jun  7 17:20 postScripts.log
-rw-r----- 1 oracle oinstall    0 Jun  7 17:21 CreateClustDBViews.log
-rw-r----- 1 oracle oinstall    6 Jun  7 17:21 lockAccount.log
-rw-r----- 1 oracle oinstall 1.3K Jun  7 17:22 postDBCreation.log
-rw-r----- 1 oracle oinstall  511 Jun  7 17:23 OraDB12Home1_hawk_creation_checkpoint.xml
-rw-r----- 1 oracle oinstall  24K Jun  7 17:23 hawk.log
-rw-r----- 1 oracle oinstall  16M Jun  7 17:23 trace.log

[oracle@racnode-dc1-1 hawk]$ tail hawk.log
DBCA_PROGRESS : 73%
DBCA_PROGRESS : 76%
DBCA_PROGRESS : 85%
DBCA_PROGRESS : 94%
DBCA_PROGRESS : 100%
Database creation complete. For details check the logfiles at:
 /u01/app/oracle/cfgtoollogs/dbca/hawk.
Database Information:
Global Database Name:hawk
System Identifier(SID) Prefix:hawk
[oracle@racnode-dc1-1 hawk]$

How to Install Oracle Java in Oracle Cloud Infrastructure

OTN TechBlog - Fri, 2019-06-07 11:10
Oracle Java Support and Updates Included in Oracle Cloud Infrastructure

We recently announced that Oracle Java, Oracle’s widely adopted and proven Java Development Kit, is now included with Oracle Cloud Infrastructure subscriptions at no extra cost.

In this blog post, I show how to install Oracle Java on Oracle Linux running in an OCI compute shape, using RPMs from the yum servers available within OCI.

Installing Oracle Java

The Oracle Java RPMs are in the ol7_oci_included repository on the Oracle Linux yum server accessible from within OCI.

To enable this repository:

$ sudo yum install -y --enablerepo=ol7_ociyum_config oci-included-release-el7

As of this writing, the repository contains Oracle Java 8, 11, and 12.

$ yum list jdk*
Loaded plugins: langpacks, ulninfo
Available Packages
jdk-11.0.3.x86_64        2000:11.0.3-ga         ol7_oci_included
jdk-12.0.1.x86_64        2000:12.0.1-ga         ol7_oci_included
jdk1.8.x86_64            2000:1.8.0_211-fcs     ol7_oci_included

To install Oracle Java 12, version 12.0.1:

$ sudo yum install jdk-12.0.1

To confirm the Java version:

$ java -version
java version "12.0.1" 2019-04-16
Java(TM) SE Runtime Environment (build 12.0.1+12)
Java HotSpot(TM) 64-Bit Server VM (build 12.0.1+12, mixed mode, sharing)

Multiple JDK versions and setting the default

If you install multiple versions of the JDK, you may want to set the default version using alternatives. For example, let’s first install Oracle Java 8:

$ sudo yum install -y jdk1.8

The alternatives command shows that two programs provide java:

$ sudo alternatives --config java

There are 2 programs which provide 'java'.

  Selection    Command
-----------------------------------------------
*+ 1           /usr/java/jdk-12.0.1/bin/java
   2           /usr/java/jdk1.8.0_211-amd64/jre/bin/java
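As an aside, if you'd rather not answer the prompt interactively, alternatives can be pointed straight at the desired binary (the path below is taken from the listing above):

$ sudo alternatives --set java /usr/java/jdk1.8.0_211-amd64/jre/bin/java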

Choosing selection 2 sets the default to JDK 1.8 (Oracle Java 8):

$ java -version
java version "1.8.0_211"
Java(TM) SE Runtime Environment (build 1.8.0_211-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.211-b12, mixed mode)

Conclusion

Oracle Cloud Infrastructure includes Oracle Java, with support and updates, at no additional cost. By providing Oracle Java RPMs on OCI's yum servers, Oracle has greatly simplified installation.

Build and Deploy a Golang Application Using Oracle Developer Cloud

OTN TechBlog - Fri, 2019-06-07 10:36

Golang recently became a trending programming language in the developer community. This blog will help you develop, build, and deploy your first Golang-based REST application using Docker and Kubernetes on Oracle Developer Cloud.

Before getting our first Golang application up and running, let’s examine Golang a little.

What is Golang?

Golang, or Go for short, is an open source, statically typed, compiled, general-purpose programming language. It is fast and supports concurrency and cross-platform compilation. To learn more about Go, visit the following link:

https://golang.org/

Let’s get set and Go

To develop, build, and deploy a Golang-based application, you’ll need to create the following files on your machine:

  • main.go - Contains the Go application code and the listener
  • Dockerfile - Builds the Docker image for the Go application code
  • gorest.yml – A YAML file that deploys the Docker image of the Go application on Oracle Container Engine for Kubernetes

Here are the code snippets for the files mentioned above.

main.go

This file imports the required packages and defines the handler() function for incoming requests, which is registered in the main() function, where the HTTP listener port is also defined. As the name suggests, the errorHandler() function comes into play when an error occurs.

package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello %s!", r.URL.Path[1:])
	fmt.Println("RESTfulServ. on:8093, Controller:", r.URL.Path[1:])
}

func main() {
	http.HandleFunc("/", handler)
	fmt.Println("Starting Restful services...")
	fmt.Println("Using port:8093")
	err := http.ListenAndServe(":8093", nil)
	log.Print(err)
	errorHandler(err)
}

func errorHandler(err error) {
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
}

Dockerfile

This Dockerfile pulls the latest Go Docker image from DockerHub, creates an app folder in the container, and then adds all the application files on the build machine (from the Git repository clone) to the app folder in the container. Next, it makes the app directory the working directory and runs the go build command to build the Go code; the resulting main binary is executed by the CMD instruction.

FROM golang:latest
RUN mkdir /app
ADD . /app/
WORKDIR /app
RUN go build -o main .
CMD ["/app/main"]

gorest.yml

The script shown below defines the Kubernetes service and deployment, including the respective names, ports, and Docker image that will be downloaded from the DockerHub registry and deployed on the Kubernetes cluster. In the script, we defined the service and deployment as gorest-se, the port as 8093, and the container image as <DockerHub username>/gorest:1.0

kind: Service
apiVersion: v1
metadata:
  name: gorest-se
  labels:
    app: gorest-se
spec:
  type: NodePort
  selector:
    app: gorest-se
  ports:
  - port: 8093
    targetPort: 8093
    name: http
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: gorest-se
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: gorest-se
        version: v1
    spec:
      containers:
      - name: gorest-se
        image: abhinavshroff/gorest:1.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8093

Create a Git repository in the Oracle Developer Cloud Project

To create a Git repository in the Developer Cloud project, navigate to the Project Home page and then click the +Create Repository button, found on the right-hand side of the page. In the New Repository dialog, enter GoREST for the repository Name and select Empty Repository for the Initial Content option, as shown. Then, click the Create button.

You should now see the GoREST.git repository created in the Repositories tab on the Project Home page. Click the Clone dropdown and then click the copy icon, as shown in the screen shot, to copy the Git repository HTTPS URL. Keep this URL handy.

Push the code to the Git Repository

Now, in your command prompt window, navigate to the GoREST application folder and execute the following Git commands to push the application code to the Git repository you created.

Note: You need to have the Git CLI installed on your development machine to execute Git commands. Also, you’ll be using the Git URL that you just copied from the Repositories tab, as previously mentioned.

git init

git add --all

git commit -m "First commit"

git remote add origin <git repository url>

git push origin master

Your GoREST.git repository should have the structure shown below.

Configure the Build Job

In Developer Cloud, select Builds in the left navigation bar to display the Builds page. Then click the +Create Job button. 

In the New Job dialog, enter BuildGoRESTAppl for the Name and select a Template that has the Docker runtime. Then click the Create button. This build job will build the Docker image for the Go REST application code in the Git repository and push it to the DockerHub registry.

In the Git tab, select Git from the Add Git dropdown, select GoREST.git as the Git repository and, for the branch, select master.

In the Steps tab, use the Add Step dropdown to add Docker login, Docker build, and Docker push steps.

In the Docker login step, provide your DockerHub Username and Password. Leave the Registry Host empty, since we’re using DockerHub as the registry.

In the Docker build step, enter <DockerHub Username>/gorest for the Image Name and 1.0 for the Version Tag. The full image name shown is <DockerHub Username>/gorest:1.0

In the Docker push step, enter <DockerHub Username>/gorest for the Image Name and 1.0 for the Version Tag. Then click the Save button.

To create another build job for deployment, navigate to the Builds page and click the +Create Job button. 

In the New Job dialog enter DeployGoRESTAppl for the Name, select the template with Kubectl, then click the Create button. This build job will deploy the Docker image built by the BuildGoRESTAppl build job to the Kubernetes cluster.

The first thing you’ll do to configure the DeployGoRESTAppl build job is to specify the repository where the code is found and select the branch you’ll be working on. To do this, in the Git tab, select Git from the Add Git dropdown, select GoREST.git as the Git repository and, for the branch, select master.

In the Steps tab, select OCIcli from the Add Step dropdown. Take a look at this blog link to see how and where to get the values for the OCIcli configuration. Then, select Unix Shell from the Add Step dropdown and, in the Unix Shell build step, enter the following script.

mkdir -p $HOME/.kube
oci ce cluster create-kubeconfig --cluster-id --file $HOME/.kube/config --region us-ashburn-1
export KUBECONFIG=$HOME/.kube/config
kubectl create -f gorest.yml
sleep 30
kubectl get services gorest-se
kubectl get pods
kubectl describe pods

When you’re done, click the Save button.

Create the Build Pipeline

Navigate to the Pipelines tab in the Builds page. Then click the +Create Pipeline button.

In the Create Pipeline dialog, you can enter the Name as GoApplPipeline. Then click the Create button.

Drag and drop the BuildGoRESTAppl and DeployGoRESTAppl build jobs and then connect them.

Double click the link that connects the build jobs and select Successful as the Result Condition. Then click the Apply button.

Then click on the Save button.

Click the Build button, as shown, to run the build pipeline. The BuildGoRESTAppl build job will be executed first and, if it is successful, then the DeployGoRESTAppl build job that deploys the container on the Kubernetes cluster on Oracle Cloud will be executed next.

After the jobs in the build pipeline finish executing, navigate to the Jobs tab and click the link for the DeployGoRESTAppl build job.  Then click the Build Log icon for the executed build.

You should see messages that the service and deployment were successfully created.  Search the log for the gorest-se service and deployment that were created on the Kubernetes cluster, and find the public IP address and port to access the microservice, as shown below.

Enter the IP address and port that you retrieved from the log, into the browser using the format shown in the following URL:

http://<retrieved IP address>:<retrieved port>/<your name>

You should see the “Hello <your name>!” message in your browser.
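You can also test it with curl; given the handler in main.go, whatever you put on the path comes back in the greeting:

$ curl http://<retrieved IP address>:<retrieved port>/World
Hello World!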

So, you’ve seen how Oracle Developer Cloud can help you manage the complete DevOps lifecycle for your Golang-based REST applications and how out-of-the-box support for Build and Deploy to Oracle Container Engine for Kubernetes makes it easier.

To learn more about other new features in Oracle Developer Cloud, take a look at the What's New in Oracle Developer Cloud Service document and explore the links it provides to our product documentation. If you have any questions, you can reach us on the Developer Cloud Slack channel or in the online forum.

Happy Coding!

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle

PS360 enhancement: Added report of DDL models

David Kurtz - Fri, 2019-06-07 09:08
I have written several blogs and presentations recently about how and how not to collect statistics in PeopleSoft.
  • Managing Cost-Based Optimizer Statistics for PeopleSoft recommends
    • If you are going to continue to use DBMS_STATS in the DDL model then
      • Do not specify ESTIMATE_PERCENT because it disables the hash-based number-of-distinct-values calculation, forcing it to go back to the COUNT(DISTINCT ...) method that requires a sort, and may not produce accurate values because it only samples data.
      • Do not specify METHOD_OPT='FOR ALL INDEXED COLUMNS SIZE 1' because this will not collect histograms on indexed columns, and will not update column statistics on unindexed columns.
      • Do specify FORCE=>TRUE so that you can lock and delete statistics on temporary records (a sketch of such an amended call follows this list).
    • However, the alternative is to use the GFCSTATS11 package to collect these statistics.  This package is controlled by a metadata table, so you can define statistics-collection behaviour for specific records.
  • How Not to Collect Optimizer Statistics in an Application Engine Program
    • This blog explains why you should not explicitly code DBMS_STATS calls into Application Engine programs.
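Putting the DDL model recommendations above together, an amended DDL model call might be reduced to something like the sketch below, where [DBNAME] and [TBNAME] are the standard PeopleTools DDL model placeholders; your exact parameters may differ:

DBMS_STATS.GATHER_TABLE_STATS(ownname=>[DBNAME], tabname=>[TBNAME], force=>TRUE);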
This has prompted me to add a new report to the PS360 utility that simply reports the various DDL models for Oracle.  Thus during a health check, I can see how statistics are collected during batch processes.

PS360 can be download from https://github.com/davidkurtz/ps360.



