Feed aggregator

DSS and BI

Dylan Wan - Fri, 2007-11-30 14:09

I found a very old book, called Decision Support Systems: An Organizational Perspective, in a library last weekend. It was written by Peter Keen, an author of several popular books which help many business managers and users understand the value of information technology. His DSS book drew my attention because he is also the author of my textbook Networks in Action.

The DSS book uses a very typical and conventional categorization system which puts IT systems into three types:

Transactional System, Structured Decision System, and Decision Support System.

These categories are created based on the classification of decisions into structured, unstructured, and partially structured decisions. His focus is the third category, DSS. Peter believes that a DSS should assist in solving semi-structured problems. A DSS should support, not replace, the managers.

I feel that the above is a very good framework for viewing the role of an analytics app. A BI analytics application should be a DSS solution. However, BI analytics apps can do much more than just decision support; BI may also help with structured decision making.

BI is not just a collection of reports. The design of a BI analytics app needs to consider what business decisions need to be made and what kind of information is helpful for making those decisions.

Categories: BI & Warehousing

DBUA FAILS WITH unable to extend rollback segment ODMA_RBS01

Madhu Thatamsetty - Thu, 2007-11-29 21:10
--Clip--
BEGIN
  2  dbms_prvtaqis.upgrade_rulesub_msgs;
  3  END;
  4  /
BEGIN
*
ERROR at line 1:
ORA-01562: failed to extend rollback segment number 12
ORA-01650: unable to extend rollback segment ODMA_RBS01 by 384 in tablespace ODMA_RBS
ORA-06512: at "SYS.DBMS_PRVTAQIS", line 3507
ORA-06512: at "SYS.DBMS_PRVTAQIS", line 3515
ORA-06512: at line 2
--End Clip--
If the Oracle Applications used database is having ...
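
The post is cut off above, but the generic remedy for an ORA-01650 is simply to give the tablespace named in the error room to grow. A minimal sketch, with hypothetical datafile paths and sizes (adjust to your own environment):

-- add a second datafile to the rollback tablespace named in the error
ALTER TABLESPACE ODMA_RBS ADD DATAFILE '/u01/oradata/ODMA/odma_rbs02.dbf' SIZE 500M;

-- or let the existing datafile grow on demand
ALTER DATABASE DATAFILE '/u01/oradata/ODMA/odma_rbs01.dbf' AUTOEXTEND ON NEXT 100M MAXSIZE 2000M;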

Boring life.

Moans Nogood - Thu, 2007-11-29 16:57
As my 18-year old daughter Christine said yesterday when she arrived at the scene: "There's ALWAYS something going on in this house."

Here's what happened:

We have insufficient pressure on the cold water in our house these days. It means that something must be done, so CarpenterTorben called on Grethe's brother, who can fix water things.

It required that we break up a bit of the floor in our entre/foyer/whatever it's called in our house and dig a big hole on the outside in order to get to the water pipes. Torben and I joked about an idea: Why not tear down the whole house and build a new one where there's plenty of space for pipes, wires, and such?

Well, one should be careful about what one wishes for.

Grethe's brother arrived around 1400 hours yesterday and within minutes I had complaints from my wife via text messages and phone calls that he was leaving dust everywhere, and how on Earth were we supposed to get it cleaned in time for the weekend?

So when Anette called the third time I was tempted not to answer, but I did.

This time she wasn't worried about the dust. She was in our bedroom upstairs with Viktor and Melina (2 and 9 years old) and the staircase was blocked due to fire and thick smoke downstairs.

Grethe's brother had been cutting a water pipe when a spark ignited the styrofoam used for insulation under the floor. Since he had cut the water supply he acted quickly and took a towel, dipped it in the toilet, and tried to put out the fire that way. Didn't work. It kept creeping further and further under the floor through the styrofoam.

So he called for fire fighters while Anette shut doors upstairs and opened the windows in the bedroom where she was, ready to throw the kids out of the window into either blankets or arms of the people gathered under the window. She even had the wherewithal to look for tape that could seal the door from the smoke. She then called me, CarpenterTorben and others. Then she made sure the kids were dressed warmly. Cool lady under pressure!

Turns out I can drive 180 km/h on a bike path and cross lots of red lights if I have to (due to the rush hour traffic blocking the roads). I found out later that Torben had done the same in an attempt to get quickly from our new office to Kratvej.

When I arrived there were already seven fire trucks, ambulances and police cars on site. Anette and the kids had been saved out of the window by a big, strong fire fighter, and the fire had pretty much been put out.

Anette, Viktor and Melina were taken to the ER, had oxygen and came home again in a taxi, still with no shoes. All shoes are kind of rubbish. They slept in NabooPeter's house, Christine slept in the Garage and I slept (of course!) on the couch in the living room, just in case somebody wanted to sneak in and steal my laptop or other important things.

So nobody got hurt, and we didn't lose any dear possessions (bar all our shoes), which is very nice.

Oh, the Emergency Service company that took over when the fire fighters left wouldn't listen to CarpenterTorben, who kept saying that it was still burning somewhere underneath the floor. But suddenly they were convinced, too, and the fire fighters had to show up a second time. This time Torben had to break down the floor in the entre/foyer with a huge drill hammer (or whatever it's called) so they could get down to the styrofoam. Otherwise, the fire would have spread underneath the floors to the kitchen and living room (and possibly to the oak table!). Good man, this Torben. I think he saved my house yesterday.

We can't live in the house for some days - it has to be cleaned due to the smoke, particles and such. But Jytte, who runs the local restaurant, has made an apartment upstairs ready for us, and all the neighbours are ready to house us, too.

So we wished for a lot of cold water with high pressure. We got that, courtesy of the Ballerup fire brigade.

We also wished for easy access to the water pipes. We got that.

Anette wanted new shoes. Check.

I never liked the white colour of the entre/foyer walls. Man, it's black now.

Viktor always loved fire trucks. Check.

Women dream about being rescued out of a burning building through the window by a big, strong fire fighter. Check.

Tomorrow we'll have the traditional, Danish Christmas lunch in Miracle. We have things to talk about now.

Mogens

PS: And I forgot to mention that it was CarpenterTorben's birthday that day!

After 9.2.0.8 patchset adgrants.sql fails "O/S Message: No such file or directory"

Madhu Thatamsetty - Wed, 2007-11-28 21:40
SYMPTOM:
sqlplus "/ as sysdba" @adgrants.sql

SQL*Plus: Release 9.2.0.8.0 - Production on Sun Nov 18 16:14:16 2007
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Connected to:
Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.8.0 - Production

O/S Message: No such file or directory
SQL*PLUS ...

Converting MySQL "on update current_timestamp" to Oracle

Hampus Linden - Wed, 2007-11-28 15:20
Another short, simple SQL tip for all of you out there in the process of converting old MySQL schemas to Oracle.

MySQL has got a built-in feature to automatically update a column to the current timestamp whenever the row is updated, just by using the default clause. The "on update current_timestamp" feature can be quite handy if you have lazy developers who can't be bothered writing full insert statements. :)

The MySQL create table statement would be something like this:
create table p (
id int,
a varchar(10),
d timestamp DEFAULT CURRENT_TIMESTAMP
ON UPDATE CURRENT_TIMESTAMP,
constraint p_pk primary key (id)
);

Not difficult to do in Oracle either, but we need a trigger to assist.
Example:
SQL> alter session set nls_Date_format='HH24:MI:SS';

Session altered.

SQL> create table p (id number, a varchar2(10), d date default sysdate,
constraint p_pk primary key (id));

Table created.

SQL> insert into p(id,a) values(1,'test');

1 row created.

SQL> host cat p_test.sql
CREATE OR REPLACE TRIGGER p_d_trig
BEFORE UPDATE ON p
FOR EACH ROW
BEGIN
select sysdate into :new.d from dual;
END p_d_trig;
/

SQL> @p_test

Trigger created.

SQL> select * from p;

ID A D
---------- ---------- --------
1 test 21:15:05

SQL> update p set a='foo' where id=1;

1 row updated.

SQL> select * from p;

ID A D
---------- ---------- --------
1 foo 21:16:44

SQL>
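
For what it's worth, the same trigger can also be written with a direct assignment rather than a SELECT from dual; this is just a stylistic variant of the trigger shown above:

CREATE OR REPLACE TRIGGER p_d_trig
BEFORE UPDATE ON p
FOR EACH ROW
BEGIN
  -- assign the date directly; no round trip to dual needed
  :new.d := sysdate;
END p_d_trig;
/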

Safe Harbor Statement

Chris Grillone - Wed, 2007-11-28 12:53
The postings on this blog are intended to outline general product direction. They are intended for information purposes only, and may not be incorporated into any contract. They are not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.

Data Auditing in Oracle Applications - Audit Trail

Aviad Elbaz - Wed, 2007-11-28 02:01
Following my last post about Audit users in Oracle Applications, in this post I’m going to talk about the Audit Trail - the data auditing feature in Oracle Applications.
The AuditTrail feature enables us to know who changed what, and when, on each table we would like to audit.
For example: some of the profiles in Oracle Applications are very critical and important for normal activity of the system (like MO: Operating Unit).
We might be interested to know who changed the value of these critical profiles, and Audit Trail can certainly help us with that.

Now I’ll show a step-by-step demo of how to start Audit Trail on a table; I will continue with my previous example on the profiles table.
The table we would like to audit is FND_PROFILE_OPTION_VALUES.

1) Logon to system with System Administrator responsibility.

2) Make sure the profile “AuditTrail:Activate” is set to Yes

3) Check the audited table's owner
select owner
from dba_tables
where table_name like 'FND_PROFILE_OPTION_VALUES';

The owner is APPLSYS.

4) Check that audit is enabled for APPLSYS user
Go to: Security -> AuditTrail -> Install -> Query for APPLSYS




5) Define new Audit Group
Since auditing is enabled on groups of tables, we should define a new Audit Group which will contain the FND_PROFILE_OPTION_VALUES table.

Find the table’s application name by this query:
select fav.application_name
from fnd_application_vl fav, fnd_tables ft
where fav.application_id = ft.application_id
and ft.table_name = 'FND_PROFILE_OPTION_VALUES';

The query result: Application Object Library

Navigate to: Security -> AuditTrail -> Groups

Application: Application Object Library
Audit Group: AC FND Profile Values Audit
Group State: Enable Requested


At this level, the columns that will be audited are the columns of the primary key or the first unique index on the audited table.
You can add as many audited columns as you want.

6) Check/Add column to audit
Navigate to: Security -> AuditTrail -> Tables
Query for FND_PROFILE_OPTION_VALUES table.


Add column PROFILE_OPTION_VALUE to be audited.


7) Run the “AuditTrail Update Tables” concurrent program to enable the audit
Navigate to: Requests -> Run -> Single Request -> choose request name: “AuditTrail Update Tables” -> Submit


None of the definitions we made up to this point will take effect until we execute this concurrent program.
This program creates all the objects (triggers and procedures) required for audit, as well as views to retrieve the audited data.

This request should be executed each time we make changes in audit definitions, to generate new audit objects.

To make sure it works, we can check if all audit objects were created:


All audit object names will be the first 26 characters of the audited table name plus a suffix (_A, _AC, _AD...).
We can also run the report “AuditTrail Report for Audit Group Validation” to make sure all audit objects were created successfully.
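
From SQL*Plus, a quick check is a query along these lines against DBA_OBJECTS (the LIKE pattern simply follows the naming convention just described):

select owner, object_name, object_type
from   dba_objects
where  object_name like 'FND_PROFILE_OPTION_VALUES_A%'
order  by object_name;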


Now let’s make a test to see how it works:
To test the audit on the profiles table, we will change the value of a profile (any profile) and check the shadow table (fnd_profile_option_values_a) for the audit information.

Navigate to: Profile -> System, and update the profile “FND: Diagnostics” (just for testing... you can choose any profile) to Yes.

To see the audited data of the last change we can run this query:
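
A query along these lines works; the audit column names below are the standard shadow-table columns and are assumed here:

select audit_timestamp,
       audit_user_name,
       audit_transaction_type,
       profile_option_value
from   applsys.fnd_profile_option_values_a
order  by audit_timestamp desc;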


The change was audited as expected...

As you can see, the “U” in the audit_transaction_type column indicates an update, and the profile_option_value column contains the value before the update.

The audit_transaction_type could be:
1) U - update
2) D - delete
3) I - insert


How to disable Audit Trail?

The AuditTrail can be stopped in one of 3 ways:
1) Disable Prepare for Archive
2) Disable Interrupt Audit
3) Disable Purge Table

I've tried only the third one… It deletes the data from the shadow table and drops all the audit objects from the database.

1) Login to application and choose the System Administrator responsibility.
2) Security -> AuditTrail -> Groups
3) Query for your group
4) Update the Group State field to: “Disable – Purge Table”.
5) Run the “AuditTrail Update Tables” concurrent program to apply the changes.

That’s all about auditing…

You can read more at "Oracle Applications System Administrator’s Guide - Security Release 11i" - Chapter 5 - User and Data Auditing

To get my posts directly to your email, you can register for email subscription by using the box at the right side bar.

You are welcome to leave a comment.

Aviad

Categories: APPS Blogs

AMR Research: The Manufacturing Operations Software Application Market Sizing Report, 2006-2011

Chris Grillone - Tue, 2007-11-27 10:28
Published: October 29, 2007
Analyst: Alison Smith

"While SAP has placed its manufacturing operations bets on delivering performance visibility, an integration stack, and a strong network of manufacturing partners, Oracle is taking a different strategy, adding deep manufacturing operations management capabilities that are tightly integrated with core-ERP financial accounting functions, resource planning, and its Daily Business Intelligence. Oracle’s functionality for manufacturing execution rivals that offered by best-of-breed competitors and includes centralized creation of routings and workflows that accommodate local customization, part- and lot-level track and trace and genealogy, in-process quality tracking, work-in-process (WIP) tracking, marriage of as-built record with the as-planned product design, and of course, EAM functionality that can be linked directly to real-time asset status data via Oracle’s Sensor Edge server technology."

Starting UNIX Enterprise Servers

Mark Vakoc - Mon, 2007-11-26 21:34
Starting and stopping an enterprise server is an easy task. Start the appropriate service on Windows, run STRNET in the appropriate system library on iSeries, and execute the RunOneWorld.sh script on UNIX based platforms. Simple enough, right? While the first two examples are as easy as they sound, configuring the proper environment in order to start the UNIX based servers has become an art form of its own. This article will detail the steps, fixes, and other details of how Server Manager starts the enterprise server on UNIX based platforms.

The various install permutations of apps releases and introduction of the platform pack have resulted in different environments for the UNIX servers. Prior to 8.11SP1 an installer was used to install the enterprise server code. This installer created a script, named .oneworld, in the operating system user's home directory and modified the user's .profile to call this script. The .oneworld script defined several environment variables used by the RunOneWorld.sh and other scripts including the EVRHOME and SYSTEM variables. These environment variables must be properly set in order for the startup scripts to operate properly.

8.11SP1 introduced the platform pack, an improved installation program used to install the enterprise server and database files. The platform pack did away with the .oneworld script, instead favoring a more robust script named enterpriseone.sh located in the $EVRHOME/SharedScripts directory.

The $EVRHOME environment variable refers to the installation path of the enterprise server. Underneath this directory would be, among other things, the pathcodes and system directory containing the tools release.

The user's .profile was modified to call the enterpriseone.sh script during login.
Since Server Manager supports all application releases from 8.9 through 8.12 the startup and shutdown logic must accommodate both of these conventions. In all cases the corresponding environment setup script (.oneworld or enterpriseone.sh) must be called prior to calling RunOneWorld.sh or EndOneWorld.sh.

8.97.0.0 Server Manager

The 8.97.0.0 release of Server Manager contained several issues around properly setting up the environment prior to invoking the start/stop scripts. Most of these issues have been addressed by 8.97.0.1 and as such all UNIX based installations should immediately upgrade server manager to 8.97.0.1.

8.97.0.1 Server Manager

Server Manager starts/stops the enterprise server by calling the scripts startEntServer.sh/stopEntServer.sh located in the $EVRHOME/SharedScripts location. The scripts are dynamically created each time the server is started or stopped. If that directory does not exist (as it may not in 8.9, 8.10, or 8.11) it will be created. The startEntServer.sh and stopEntServer.sh scripts will do the following tasks:
  • Include the .oneworld or enterpriseone.sh script, depending on release
  • Call the RunOneWorld.sh or EndOneWorld.sh script
Including the appropriate environment scripts resolved most of the startup issues discovered.

There remains one pending issue for IBM UDB users: prior to 8.11SP1 the installation instructions required the user to add a call to the db2profile script used to set up the environment needed for the UDB binaries and environment variables. The documentation instructed the user to add this call to the operating system user's .profile. The startEntServer.sh/stopEntServer.sh scripts did not directly include the db2profile script call, resulting in an enterprise server started using server manager failing to connect to UDB. The jde.log files for the kernel processes that establish a database connection would include numerous errors indicating that libodbc.[sl or so, depending on platform] failed to load.

8.97.0.next Server Manager

The next tools release will include some fixes to address the UDB issue mentioned above. As the startEntServer.sh and stopEntServer.sh files are created, during start or shutdown, the management agent will parse the user's .profile looking for a call to the db2profile script. If found it will add a call to this script thus properly setting up the UDB environment.

If you are using 8.97.0.1 Server Manager in a UDB environment you can easily workaround this issue by adding a call to the db2profile into the .oneworld script rather than the .profile.

Depending on how the user is set up, how the db2profile script is called, and the virtually unlimited permutations, we realize that our .profile parsing logic may not always find the db2profile script. There may also be other environment setup that we did not anticipate. Since the startEntServer.sh and stopEntServer.sh scripts are dynamically created it is not possible to modify these files directly. To address this, a check for the scripts startExtras.sh or stopExtras.sh has been added to the dynamically created scripts. We do not deliver these scripts; however, if present they will be invoked. This permits the administrator to add any additional setup each time the server is started or stopped. These scripts are located in $EVRHOME/SharedScripts and are available for all apps releases.

The JVM Issue

The 8.96 tools release introduced Java based kernel processes using a bundled JVM for the metadata kernel. 8.97 makes further use of this capability for both the BI Publisher and Server Manager kernel processes. The required JVM is bundled with the tools release (except iSeries where it is built-in to the OS).

The server manager managed home agent also includes a bundled JVM (again, except for iSeries). The 8.12 platform pack (and the included enterpriseone.sh) shell script properly configures the LD_LIBRARY_PATH or SHLIB_PATH (depending on platform) to properly define the directories in which shared libraries should be loaded.

The scripts created by the installers/platform packs prior to 8.12 did not have these additional environment configuration parameters needed to properly startup the enterprise server. With Server Manager the startup scripts are invoked by a Java process (our bundled JVM, based on 1.5). The JVM will modify the LD_LIBRARY_PATH or SHLIB_PATH to include locations specific to the 1.5 based JDK we deliver. This will conflict with the 1.4 JDKs delivered with the enterprise server tools release.

If you installed your enterprise server using the 8.12 or later platform pack stop reading; this issue will not affect you. If you initially registered your enterprise server using the 8.97.0.2 release or later stop reading; we have fixed the issue.

If you installed your enterprise server using 8.11SP1 or earlier AND registered your enterprise server using 8.97.0.1 or earlier server manager you may still have an issue. We modified, during the registration process, the .oneworld or enterpriseone.sh script to properly setup the LD_LIBRARY_PATH/SHLIB_PATH for you. However, there was a flaw that you may have to manually correct. If you look in these files you will see a line that was added by server manager that defines the JVM_LIB environment variable, and the LD_LIBRARY_PATH/SHLIB_PATH are modified to include it. The flaw, however, is in the order added. We appended the JVM_LIB environment variable to the existing LD_LIBRARY_PATH/SHLIB_PATH. Instead it should have been prepended to it.

To resolve this issue edit the appropriate file and ensure that LD_LIBRARY_PATH is redefined to be something like:
LD_LIBRARY_PATH=$SYSTEM/lib:$JVM_LIB:...:$LD_LIBRARY_PATH
rather than
LD_LIBRARY_PATH=$SYSTEM/lib:...:$LD_LIBRARY_PATH:$JVM_LIB

Summary

If you registered your enterprise server using 8.97.0.2 or later server manager don't worry -- all these issues have been resolved.

If you registered your enterprise server using 8.97.0.1 or earlier server manager and your enterprise server was installed using 8.11SP1 or earlier platform pack/installer you may need to modify the LD_LIBRARY_PATH/SHLIB_PATH to move the JVM_LIB definition.

If you use UDB and you are using 8.97.0.1 or earlier server manager and your enterprise server was installed using the 8.11 or earlier installer you may need to add a call to db2profile to the .oneworld shell script.

Installing Server Manager for Non-English Machines

Mark Vakoc - Mon, 2007-11-26 18:01
An issue has been discovered that prevents the successful installation of Server Manager when the user that performs the installation is configured for something other than English.  The problem relates to messages that are output by the embedded container.  The installer is looking for particular words, in English, and the container is outputting that text localized.  So even though the installation was successful, the installer thinks it failed and begins to uninstall.

A workaround is to change the current user's language, in Windows, back to English and perform the installation.  Also make sure the 'Language for non-Unicode programs' on the advanced tab is configured to English as well.  These settings can be found in the 'Regional and Language Options' control panel option in Windows.

Once the installation is complete you may return to using the desired language of choice.

The Management Kernel

Mark Vakoc - Mon, 2007-11-26 12:04
Tools release 8.97 adds two new kernel definitions to the enterprise server: the BI Publisher (XML Publisher) and Server Manager kernels. Like the metadata kernel introduced in 8.96 these new kernels are Java based kernels that start their own JVM instance to run Java code rather than the traditional C based code of kernels past. SM will automatically add and configure these kernels into the JDE.INI when changing the tools release to 8.97. This post will focus on the server manager kernel.


The server manager kernel (number 32 for those keeping count) operates on all platforms and loads a JVM upon startup. This JVM will in turn load the management agent. This is the same codebase as that used by the managed home agent and is used to provide runtime information about the enterprise server to the management console. It uses a couple of INI settings, configured automatically, to provide it with the instance name and managed home location associated with the enterprise server instance. From the managed home location it will read the agent.properties discussed in previous posts to obtain the connection details for the management console. It also follows the same connection logic used to establish communications as outlined in that post.


The management console uses this kernel to obtain all the runtime information about the server such as the active process list. It also uses this agent to expose and provide the log files that are active for the processes that appear in the process list.

This kernel definition is a singleton; that is, there may only be a single process active, and it must be configured to start automatically (again, all of this is configured automatically). In fact the management console will not permit configuring more than one process for this kernel. The underlying network communications used by the management agents is different than the JDENET communications used by other kernels. As such the singleton can properly handle any amount of load, and introducing additional processes is not necessary and will cause unexpected results.

The SAW Kernel

Although SAW has been replaced with 8.97, there are still portions of the SAW infrastructure that are used by server manager. On the enterprise server the SAW kernel is still used. The Management Kernel (Java) exposes the runtime information using the JMX standard. It obtains much of this information by sending JDENET messages to the SAW kernel.

It is advisable to continue to run multiple SAW processes. The information exposed by the management kernel is collected periodically, approximately every 30 seconds. After that elapsed time JDENET messages are sent to the SAW kernels to collect the information. Having multiple SAW kernels will ensure that the information update to the management kernel occurs quickly and reduces the likelihood of any of these messages timing out.

Troubleshooting

In reality there is very little that needs to be known about this kernel. That said if the runtime information isn't available for a running enterprise server it may be desirable to investigate the log files for the kernel. There are two sets of log files of interest: the jde/jdedebug.log and the java log files.

The first place to check is the server manager jde.log. Since the process list isn't available (that's what we're troubleshooting) we'll have to randomly go through the log files exposed on the management page for the enterprise server until we find the management kernel. A successful startup is shown below. If there are any errors about starting up the JVM, loading the SM classes, or any other content in this log file this is the first place to look.

If there are no problems indicated in the JDE.LOG you may look at the java based logs. Since this is standard E1 Java code, logging is configured in the jdelog.properties file contained within the system/classes directory. For each log defined in the jdelog.properties you will see a corresponding log file created for each java based kernel process. Since multiple processes can be defined for the other java kernel types (metadata and xml publisher) the process id is added to the filename, as shown below:


Summary

The management kernel provides server manager powerful new monitoring capabilities. The operation of the management kernel can generally be ignored and should only be of concern should runtime information not be available for a particular enterprise server in the management console web application.

How to Import the PUTTY Settings from One Machine to Another Machine

Madan Mohan - Mon, 2007-11-26 00:15
Most DBAs work 24x7 and find it difficult and time consuming to configure the whole list of server settings in PuTTY. I went through the Google hits and found one workaround to import your PuTTY settings.


Work Around
************

1. Run the command --> regedit /e "%userprofile%\desktop\putty.reg" HKEY_CURRENT_USER\Software\SimonTatham at the command prompt.

2. Copy putty.exe and putty.reg onto the target machine.

3. Right-click putty.reg and choose the option "Merge"; this will import the settings into the target machine's registry, and after that you can see all the server details which you defined earlier on the source machine.

Note:- Simon Tatham is the person behind the PuTTY software.

Copy and Compress the Datafiles using multiple Processes.

Madan Mohan - Fri, 2007-11-23 00:35
Following are the scripts used for copying and compressing the Datafiles within the Same Server.

Scripts
*********

1. copy_process.sh ------> This file contains all the functions used for the copy and compress.

2. copy_file_process.sh ------> This file contains the commands used for copying, compressing and uncompressing. It is called by copy_process.sh.

3. worker_no ------> Defines the number of workers (processes) for the whole run. This value can be changed dynamically by using the command: echo 4 > worker_no.


copy_process.sh
****************

#! /usr/bin/ksh
##################################################################################
# bkp_dir_path is the destination (TO) directory
# src_path is the source (FROM) directory
# worker_pid is the worker pid file
# worker_no is the number of workers; it can be adjusted while the script is running
# example: to set 3 workers, perform the following before running the script: echo 3 > worker_no (i.e. write 3 into the worker_no file)

function Gen_Env
{
bkp_dir_path="/tmp/to_data"
src_path="/tmp/from_data"
Log_Date=$(date +"%d%m%y")
worker_pid=/tmp/copy_process_
worker_no=/tmp/worker_no
worker_max_count=0
COPY_FILE_PROCESS=/tmp/copy_file_process.sh
File_List=$(cd $src_path; ls -l *.dbf |grep -v cntrl | awk '{print $9}')
}

function Copy_Phase
{
num_copy_workers=`cat $worker_no`
if [[ $worker_max_count -lt $num_copy_workers ]]; then
worker_max_count=$num_copy_workers
fi

worker=1
while [[ $worker -le $num_copy_workers ]]
do
worker_file=${worker_pid}${worker}.pid
if [[ ! -s $worker_file ]]; then
echo "Busy" > $worker_file
$COPY_FILE_PROCESS $src_path $datafile $worker $worker_file $bkp_dir_path &
echo "Copy Assigned to Worker pid file: $worker"
Copy_Assigned=Yes
return
fi
let worker=$worker+1
done
sleep 5
}

function Check_Final_Copy
{
worker=1
while [[ $worker -le $worker_max_count ]]
do
worker_file=${worker_pid}${worker}.pid
if [[ -s $worker_file ]]; then
echo "Background Copy is still Running... $(date)"
sleep 60
CheckFinal=No
return
else
rm -f $worker_file
fi
let worker=$worker+1
done
CheckFinal=Yes
}

function main
{
Gen_Env
echo "Total number of Datafiles in the Source Instance"
src_dbf_count=`ls $src_path|wc -l`
echo $src_dbf_count
date > $bkp_dir_path/time.log
for datafile in `echo $File_List`
do
echo "Copy $datafile : $(date)"
echo "Wait for the Next Worker ... $(date)"
Copy_Assigned=No
while [[ $Copy_Assigned == 'No' ]]
do
Copy_Phase
done
done

CheckFinal=No
while [[ $CheckFinal == 'No' ]]
do
Check_Final_Copy
done
date >> $bkp_dir_path/time.log
echo "**********************************************"
echo " Copy Process Completed Successfully"
echo "**********************************************"
echo "Number of Data files copied to the target "
target_dbf_count=`ls $bkp_dir_path|wc -l`
echo $target_dbf_count
}
main

################### End of Copy_process.sh ###########


copy_file_process.sh
**********************

#!/usr/bin/ksh
##############################################################
## Copy and compress script
## Phase - I = Copy the files from Source to Destination
## Phase - II = Compress the Destination Files.
## Phase - III = Uncompress the Destination File. (if required)
##
##
##############################################################
s_path=$1
filename=$2
worker_no=$3
statusfile=$4
d_path=$5
statuss=`cat $statusfile`
##############################################################
if [[ $statuss == 'Busy' ]]; then
echo "$$" >$statusfile
## Phase I
echo "Copying $filename by worker $worker_no"
cp $s_path/$filename $d_path
sleep 5

# Phase II
echo "Zipping $filename by worker $worker_no"
/usr/bin/gzip $d_path/$filename
sleep 5

#Phase III
#echo "Unzipping $filename.gz by worker $worker_no"
#/usr/bin/gunzip $s_path/$filename.gz

>$statusfile
else
echo "Process is not Busy"
fi

echo '*********************'


################### END of copy_file_process.sh #############


Execution Syntax
*****************

1. Need to update the directory structure for the below variables within the script "copy_process.sh".

a) bkp_dir_path ---> Where to backup the datafiles.
b) src_path ---> Location of source data files.
c) COPY_FILE_PROCESS ---> Location of "copy_file_process.sh script.


Syntax
********

nohup <location of copy_process.sh>/copy_process.sh &

PLS-00436 removed in Oracle 11g

Adrian Billington - Thu, 2007-11-22 02:00
Oracle relaxes the long-standing FORALL implementation restriction in 11g. November 2007
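
To illustrate the restriction in question (the table and column names below are just an example, not taken from the article): before 11g, referencing a field of a record collection inside a FORALL statement raised PLS-00436, whereas in 11g code like the following compiles:

DECLARE
  TYPE emp_tab IS TABLE OF employees%ROWTYPE;
  l_emps emp_tab;
BEGIN
  SELECT * BULK COLLECT INTO l_emps FROM employees;

  -- referencing l_emps(i).salary and l_emps(i).employee_id here
  -- raised PLS-00436 prior to 11g
  FORALL i IN 1 .. l_emps.COUNT
    UPDATE employees
    SET    salary      = l_emps(i).salary * 1.1
    WHERE  employee_id = l_emps(i).employee_id;
END;
/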

Subprogram inlining in 11g

Adrian Billington - Thu, 2007-11-22 02:00
Oracle extends the scope of its PL/SQL compiler optimisation. November 2007
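
As a quick reminder of what the feature looks like (the procedure names below are illustrative, not from the article): inlining can be enabled globally with PLSQL_OPTIMIZE_LEVEL, or requested for a specific call with PRAGMA INLINE.

ALTER SESSION SET plsql_optimize_level = 3;   -- let the compiler inline automatically

CREATE OR REPLACE PROCEDURE demo IS
  PROCEDURE log_step (p_msg VARCHAR2) IS
  BEGIN
    dbms_output.put_line(p_msg);
  END;
BEGIN
  PRAGMA INLINE (log_step, 'YES');            -- request inlining of the call that follows
  log_step('step 1');
END;
/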

SQL plan enhancements in 10g

Adrian Billington - Thu, 2007-11-22 02:00
New plan features in 10g make SQL performance investigations much more simple. December 2004 (updated November 2007)
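
For illustration, one of the better known 10g plan features (whether or not it is the focus of the linked article) is DBMS_XPLAN.DISPLAY_CURSOR combined with the GATHER_PLAN_STATISTICS hint, which shows actual row counts next to the plan:

SELECT /*+ gather_plan_statistics */ count(*) FROM dual;

SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));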

Things I learned at Oracle Open World in SFO 2008

Moans Nogood - Tue, 2007-11-20 15:53
I just came back from a few days at OOW, and it was fun. I shared a biggish apartment with Anjo Kolk, Krister (Sweden) and Oliver (Danish CSC), and it was beautiful to see the beer bottles (good beers, mind you!) gradually filling up the kitchen table allocated for that purpose.

On Sunday, November 11, I was invited to a seven-hour briefing for Oracle ACE Directors (I am such a thing). It was mostly about the Fusion Middle Ware (MW) and in the end a bit about the 11g database.

I have three observations:

1. During the MW presentations I saw more acronyms than in my entire military career.

2. 'Oracle' was the only word with less than seven letters in all those slides.

3. The best thing that can happen to any product is to be bought by Oracle. Turns out that the purchase itself will transfer the product overnight from being worth-, use- and hopeless to being an absolutely state-of-the-art, best-of-beer product.

Interesting to learn that MW is database agnostic and Apps server agnostic. This obviously generates some interesting discussions inside Oracle.

And, man, do things change in the MW: Forget SOA, here comes SCA. Forget hub-and-spoke - it's just SO yesterday. Forget Portal - here comes WebCenter.

I am tempted to quote the standup comedian Billy Connolly, who said some years ago: "... and it will all change tomorrow, so f.... stay awake!"

Apart from that, it was a good day with knowledgeable presenters, and I learned a lot. Thanks to the Oracle ACE ladies (Emily & Victoria) for setting this up.

Monday, Tuesday and Wednesday I simply set up a virtual office at the tex-mex restaurant Chevy's and met a bunch of friends from inside and outside Oracle during those days. It was good, and it generated a lot of good ideas.

There were 1600 presentations in total. 100 of these were database-related. Interesting.

Upgrading the Recovery Catalog Database from 9i to 10g

Madan Mohan - Mon, 2007-11-19 19:18
The RMAN catalog upgrade is the same as a normal database upgrade and can be accomplished in two ways.

a) Upgrade the Database from 9i to 10g
- Connect to the rman catalog database.
- Issue the rman command "upgrade catalog"; this upgrades the catalog from 09.02.00 to 10.02.00.03.

b) Fresh install of a 10g Database / use an existing 10g Database
- Create the rman user and grant create session, recovery_catalog_owner and create type to the rman user (see the sketch below).
- Export/import the RMAN schema.
- Issue the rman command "upgrade catalog".
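
A minimal sketch of the user creation and grants in option b (the tablespace name and password are placeholders):

create user rman identified by rman
  default tablespace rman_data
  temporary tablespace temp
  quota unlimited on rman_data;

grant create session, recovery_catalog_owner, create type to rman;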


Note:- You will encounter the below warning or error message if you have not upgraded the catalog after the database version upgrade.
connected to target database: DSSPROD (DBID=1021024992)
connected to recovery catalog database
PL/SQL package RMAN.DBMS_RCVCAT version 09.02.00 in RCVCAT database is too old

Solution
**********
1. Connect to the recovery catalog database as the rman user.
2. Issue the rman command "upgrade catalog" twice.

Debugging with the APEX repository

Anthony Rayner - Mon, 2007-11-19 06:56
Introduction...
I was recently working on an APEX application on a familiar 'Form on a Table' wizard-built page and encountering an ORA-00957 Duplicate Column Name error, meaning that a column name was specified more than once in the INSERT.

So the problem must be that I had more than one page item bound to the same database column. This page had got quite bloated with lots of other business logic, so I started by turning off regions (the old 'process of elimination') and debugging to see if there was anything obvious, but nothing sprang up. Then I thought: if only there was a way to view all the page items on my page which were bound to the same database column. Well, there is, using the APEX repository.

For those of you who are not familiar with the repository, it is basically a set of views that expose all the APEX application metadata. For further information see Have a clean up, utilising the APEX repository which contains links to loads more information.


How...
In the repository there is a view called APEX_APPLICATION_PAGE_DB_ITEMS which is what we are interested in. It is described in the APEX_DICTIONARY view as...

'Identifies Page Items which are associated with Database Table Columns. This view represents a subset of the items in the APEX_APPLICATION_PAGE_ITEMS view.'

We can then run the following query to return all the items for a specific application / page bound to a db column more than once.
SELECT   db_column_name,
SUM(1) Duplicates
FROM apex_application_page_db_items
WHERE page_id = :page_id
AND application_id = :app_id
GROUP BY db_column_name
HAVING SUM(1) > 1
ORDER BY SUM(1) DESC
(Note: This view does not contain conditional rendering information, so if this was required you would need to join to APEX_APPLICATION_PAGE_ITEMS on ITEM_ID).


Conclusion...
So if ever you think, I wish I could see this information about this page / report or whatever, you probably can, just take a look into the repository. I wonder if there would be any scope for an APEX debugging framework that defines sets of processes linked to common 'ORA' errors. So for this example, it would simply be:

Error: ORA-00957 - Duplicate column name
  • Step 1: Run the following query, binding in your page and application id.
    SELECT   db_column_name,
    SUM(1) Duplicates
    FROM apex_application_page_db_items
    WHERE page_id = :page_id
    AND application_id = :app_id
    GROUP BY db_column_name
    HAVING SUM(1) > 1
    ORDER BY SUM(1) DESC

  • Step 2: Investigate all rows returned from the query and unbind items which should not be bound to the database column.

  • Step 3: Retest your page.


That would be nice.
Anthony.

Categories: Development

Survey results

Anthony Rayner - Mon, 2007-11-19 06:53
Following the recent survey I conducted on this blog, 'What would you like to read more about on my blog?', the results were:

Developer Tips - 54%
AJAX General - 45%
AJAX with JSON - 45%
BI Publisher Integration - 33%
Access Migration - 0%

I will thus be focusing my efforts on Developer Tips and AJAX related posts in the near future. I was quite surprised at the 0% interest in 'Access Migration' and would have thought it would be quite popular, as Access is one of the main platforms that the systems APEX applications replace are built on. Or maybe it's just that developers aren't using the migration functionality built into APEX / SQL Developer and are just doing it without looking at these. Interesting.

Thank you all for your feedback. More to come shortly.
Anthony.

Categories: Development
