Feed aggregator

DR-Setup....

Ayyu's Blog - Thu, 2008-01-24 01:25
Categories: DBA Blogs

DR-Setup....

Ayyappa Yelburgi - Thu, 2008-01-24 01:25
Contents: Architecture; Database Synchronization Options; Setup No-Data-Divergence; Setup Primary Database; Setup Standby Database; Start Managed Standby Recovery; Protect Primary Database; Cancel Managed Standby Recovery; Activating a Standby Database; Backup Standby Database; Database Switchover; Database Failover; Automatic Archive Gap Detection; Background Managed Recovery; Delayed Redo.

How to shrink Oracle Context Index

Pawel Barut - Wed, 2008-01-23 14:50
On one of the databases I found that one of the Context indexes was getting bigger and bigger every day: around 5-10% growth per day for the table DR$MY_TEXT_I$I, which contains tokens (words). The number of records indexed stayed almost the same, but the index growth was unexpectedly big. This was due to the fact that quite many rows were modified every day, which caused those rows to be re-indexed during index synchronization (CTX_DDL.SYNC_INDEX). So I decided to reclaim this space.
Method 1. This was my first thought:
  • Drop Index ... / Create Index ...
  • or Alter Index ... Rebuild
But those methods have some disadvantages:
  • users cannot perform text searches during the operation,
  • it does not solve the problem for long, as the index will still grow ...
So I had to find the root cause and eliminate it.
Method 2. While looking for the root cause I found out that the index had never been optimized, so it kept old (obsolete) data. My solution was:
SQL> exec ctx_ddl.optimize_index('MY_TEXT_I', 'FULL', 120);
SQL> alter table DR$MY_TEXT_I$I enable row movement;
SQL> alter table DR$MY_TEXT_I$I shrink space cascade;

With this method the table DR$MY_TEXT_I$I shrank to 30% of its original size.
Let me explain why this worked. With ctx_ddl.optimize_index the Context index was internally optimized: information about old documents was completely deleted and the index was internally minimized. Remember, the third parameter limits the time (in minutes) allowed for the optimization. For a really big index it can take hours, but you can run the optimization many times until your index is fully optimized.
Then I simply shrank the table (shrink space cascade). Note that this operation requires row movement to be enabled on the table first.

But this was a one-time operation. To avoid the problem in the future I scheduled a job to run ctx_ddl.optimize_index on a regular basis (a sketch of such a job follows below). Now the table has grown a little, but it is no longer growing so fast.
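
A minimal sketch of such a job, assuming DBMS_SCHEDULER (available since 10g); the job name, schedule and 60-minute time cap are illustrative, not taken from the actual setup:

begin
  dbms_scheduler.create_job(
    job_name        => 'OPTIMIZE_MY_TEXT_I',   -- hypothetical job name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin ctx_ddl.optimize_index(''MY_TEXT_I'', ''FULL'', 60); end;',
    repeat_interval => 'FREQ=DAILY; BYHOUR=2', -- illustrative: run nightly at 2 AM
    enabled         => TRUE);
end;
/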

Foot note:
  • It was tested on Oracle EE 10gR2 (10.2.0.3),
  • I'm not sure if row movement is supported by Oracle for the Context Index table DR$<index>$I. It worked for me, and I have not experienced any problems since turning it on.


Cheers, Paweł
Categories: Development

Publishing Gadgets

Oracle EPM Smart Space - Wed, 2008-01-23 11:24

In the world of Gadgets and Widgets, getting a new gadget is as simple as downloading and installing one from a gadget site, message board or blog. This is great for making a framework pervasive, but it would wreak havoc on the complex IT communities that implement EPM or BI solutions. Security is an obvious concern, but maintaining and understanding what users have installed also becomes impossible in a scenario like this. When designing Smart Space we often debated this topic, wanting to keep the idea of gadgets open and pervasive while maintaining control for customers. This is how the idea of gadget publishing came to be.

Gadget publishing gives an administrator the ability to control what gadgets his or her users can download and install. It also allows the administrator to recommend gadgets to users, giving administrators some assurance as to what gadgets are installed on a user's desktop. Gadget publishing is done using the same technology we use to deploy and maintain the Smart Space client (often referred to as the framework): ClickOnce. Going into detail on ClickOnce would warrant an additional post, so I will keep it short for now. ClickOnce helps to provide a safe and secure environment for users to download and install the gadgets while providing automatic updates. The gadgets are also digitally signed with a certificate, so users are assured of their safety when installing.

I am a simple guy, so I will put this in simple terms. When Smart Space is installed in an environment, the administrator essentially runs a 'store' where they maintain an inventory of gadgets and other components. Users can go to this store and download from the store's inventory with confidence.

Here is how it is done. The administrator will first have to install the Smart Space Administration Utility. This utility will probably only be installed by one or two people and does require administrative privileges to install. The installation files for it get laid down when the Smart Space server is installed and can usually be accessed from the following URL: http://[ServerName]:17080/SmartSpace/Installations/AdminUtility/Setup.msi. (For installation and setup help, check out the Administrator's Guide for Smart Space.) Once it is installed and configured, you should be able to open the administration utility and get to the Sign & Publish tab.

From this tab you can do the following: publish or remove the framework (the Smart Space client), publish or remove gadgets, download already published gadgets, sign gadgets, etc. Most of the time you are going to just sign and publish gadgets, but if this is a first-time installation you will probably sign and publish both the framework and the gadgets. In either case the process is the same, but it is key to make sure the framework is properly published; otherwise users will not be able to get any gadgets. The utility should be pretty self-explanatory, and there is plenty of help in the Administrator's Guide, but I will run through the steps for publishing a gadget.

  • From the Sign & Publish tab click Add.
  • Browse for a framework or gadget file to publish. (.SmartSpaceGadget or .SmartSpaceFramework)

    These files are compressed files containing all the bits needed for the ClickOnce installations and deployments. An administrator will find the framework and default gadgets in the directory where they installed the Administration Utility (usually C:\Program Files\Oracle EPM\Oracle EPM Smart Space Administration Utility).
  • Once the file is on the list, verify that the action dropdown says Publish.
  • Click Apply.

This will move the gadget file to the server and place all the bits in the correct location so that users will see it as available to them in the Smart Space palette on their client.

To summarize, gadgets can be a great end-user tool, but in the world of enterprise software there needs to be some level of control.

Categories: Development

LOG_ARCHIVE_DEST_n

Ayyu's Blog - Tue, 2008-01-22 21:28
Categories: DBA Blogs

LOG_ARCHIVE_DEST_n

Ayyappa Yelburgi - Tue, 2008-01-22 21:28
Syntax:

LOG_ARCHIVE_DEST_[1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10] =
    "null_string" |
    ((SERVICE=service | LOCATION=location)
     [AFFIRM | NOAFFIRM]
     [ALTERNATE=destination | NOALTERNATE]
     [ARCH | LGWR]
     [DELAY[=minutes] | NODELAY]
     [DEPENDENCY=destination | NODEPENDENCY]
     [MANDATORY | OPTIONAL]
     [MAX_FAILURE=count | NOMAX_FAILURE]
     [QUOTA_SIZE=blocks | NOQUOTA_SIZE]
     [QUOTA_USED=blocks | NOQUOTA_USED] […]
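
A hypothetical pair of destination settings consistent with the syntax above (the path and service names are made up for illustration):

LOG_ARCHIVE_DEST_1 = 'LOCATION=/u01/oradata/arch MANDATORY'
LOG_ARCHIVE_DEST_2 = 'SERVICE=standby_db LGWR AFFIRM OPTIONAL DELAY=30'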

Data Guard 9i Log Transportation on RAC

Ayyu's Blog - Tue, 2008-01-22 21:23
Categories: DBA Blogs

Data Guard 9i Log Transportation on RAC

Ayyappa Yelburgi - Tue, 2008-01-22 21:23
PURPOSE: This article gives an overview of how to create a Data Guard configuration on Real Application Clusters (RAC). The configuration you can find here is for a Physical Standby Database. SCOPE & APPLICATION: You can see which initialisation parameters you have to use / change and how the log transport is organized in a RAC Data Guard environment. NOTE: The Data Guard […]

Creating a Data Guard Configuration

Ayyu's Blog - Tue, 2008-01-22 21:14
Categories: DBA Blogs

Creating a Data Guard Configuration

Ayyappa Yelburgi - Tue, 2008-01-22 21:14
1) Ensure the Primary database is in ARCHIVELOG mode:

SQL> archive log list
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            /export/home/oracle/temp/oracle/arch
Oldest online log sequence     7
Current log sequence           9

SQL> alter database close;
Database altered.

SQL> alter database archivelog;
Database […]

Improvement of AUTO sampling statistics gathering feature in Oracle 11g

Inside the Oracle Optimizer - Tue, 2008-01-22 17:34
Optimizer statistics in Oracle are managed via a PL/SQL package, dbms_stats. It provides several PL/SQL procedures to gather statistics for a table, schema, or database. For example, gather_table_stats is used to gather statistics on a table. This procedure has an estimate_percent parameter, which specifies the sampling percentage for statistics gathering. Users can specify any number between 0 and 100 for this parameter. For example, suppose you have a table BIGT; you can specify a 1% sampling percentage as follows:

exec dbms_stats.gather_table_stats(null, 'BIGT', 
estimate_percent => 1);


It is not always easy for users to pick the appropriate sampling percentage. If the specified sampling percentage is too high, it can take longer to gather statistics. On the contrary, if the data is skewed and the specified sampling percentage is too low, the resulting statistics can be inaccurate.

For this reason, Oracle introduced the AUTO value for the estimate_percent parameter. For example, you can gather statistics on BIGT as follows:

exec dbms_stats.gather_table_stats(null, 'BIGT', 
estimate_percent => dbms_stats.auto_sample_size);


The advantage of using AUTO sample size over a fixed number is two-fold. First, when AUTO sample size is specified, the system automatically determines the appropriate sampling percentage. Second, AUTO sample size is more flexible than a fixed sampling percentage. A fixed sampling percentage that was good at some point in time may no longer be appropriate after the data distribution in the table has changed. On the other hand, when the AUTO value is used, Oracle adjusts the sample size as the data distribution changes.

When AUTO is used, Oracle picks a sample size for which the statistics quality is good enough. However, it does not work very well in cases of extreme skew in the data. In Oracle 11g, we improved the behavior of the AUTO value. First, AUTO sampling now generates deterministic statistics. Second, and more importantly, AUTO sampling generates statistics that are almost as accurate as 100% sampling, but takes much less time. To illustrate these merits, we compare the performance of a fixed sampling percentage, AUTO sample size in Oracle 10g, and AUTO sample size in Oracle 11g.
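
As a quick sanity check (using standard dictionary columns; this query is not from the original text), you can compare the sample size Oracle actually used against the table's row count after gathering:

SELECT sample_size, num_rows
FROM   user_tables
WHERE  table_name = 'BIGT';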

We used the standard TPC-D data generator to generate a Lineitem table. The Lineitem table is about 230G and contains 1.8 billion rows with 16 columns. The schema of the lineitem table is as follows:

Column Name       Column Type
l_shipdate        date
l_orderkey        number
l_discount        number
l_extendedprice   number
l_suppkey         number
l_quantity        number
l_returnflag      varchar2
l_partkey         number
l_linestatus      varchar2
l_tax             number
l_commitdate      date
l_receiptdate     date
l_shipmode        varchar2
l_linenumber      number
l_shipinstruct    varchar2
l_comment         varchar2
Table 1 gives the elapsed time of gathering statistics on the Lineitem table by different sampling percentages.

Sampling Percentage            Elapsed Time (sec)
1% sampling                    797
100% sampling (Compute)        18772
Auto sampling in Oracle 10g    2935
Auto sampling in Oracle 11g    1908

Table 1: Statistics gathering time on 230G TPC-D Lineitem Table Using Different Estimate Percentages

We also compare the quality of the statistics gathered using different estimate percentages. Among all the statistics of a column, number of distinct values (NDV) is the one whose accuracy used to be an issue. We define the accuracy rate of NDV of a column as follows:


accuracy rate = 1 - (|estimated NDV - actual NDV|)/actual NDV.
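
For example, if sampling estimated 225 million distinct values for a column whose actual NDV is 450 million, the accuracy rate would be 1 - |225M - 450M|/450M = 50%, which is the figure reported for orderkey with 1% sampling in Table 2 below (the 225M estimate is illustrative, inferred from that figure).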


The accuracy rate ranges from 0% to 100%. The higher the accuracy rate is, the more accurate the gathered statistics are. Since 100% sampling always leads to an accuracy rate of 100%, we do not report it. We focus on the columns that have at least one accuracy rate below 99.9% across the different estimate percentages. Table 2 shows the accuracy rates of these columns.

Column Name      Actual NDV     Auto Sampling in Oracle 11g    1% Sampling
orderkey         450,000,000    98.0%                          50%
comment          181,122,127    98.60%                         4.60%
partkey          60,000,000     99.20%                         98.20%
suppkey          3,000,000      99.60%                         99.90%
extendedprice    3,791,320      99.60%                         94.30%

Table 2: Accuracy Rate of Gathering NDV LineItem Using Different Estimate Percentages

In short, AUTO sampling in Oracle 11g is almost 10 times faster than 100% sampling (1908 vs. 18772 seconds) but yields statistics of similar quality (accuracy rate close to 100%).


Categories: DBA Blogs, Development

Oracle Discoverer Query Prediction functionality and Performance

Aviad Elbaz - Tue, 2008-01-22 12:21

Lately we noticed that our Discoverer reports were running very slowly.
Actually, the problem wasn't the reports' queries but the query prediction, which for some reason took very long.

What is Query Prediction in Discoverer?

“Discoverer includes functionality to predict the time required to retrieve the information in a Discoverer query.
The query prediction appears before the query begins, enabling Discoverer users to decide whether or not to run the query.
This is a powerful facility that enables Discoverer users to control how long they wait for large reports.”
(from the Oracle documentation)

The query prediction is the elapsed time during which the following message appears at the bottom left of the Discoverer Desktop window: "Determining query time estimate".

For each report we tested, we found that query prediction accounts for 30%-50% (!!!) of the report's total run time.

The next phase was to start a SQL trace on a Discoverer session to see what actually happens when running a Discoverer report.

This is the relevant section from the SQL Trace:

SELECT QS_ID, QS_COST, QS_ACT_CPU_TIME,
       QS_ACT_ELAP_TIME, QS_EST_ELAP_TIME,
       QS_CREATED_DATE, QS_OBJECT_USE_KEY,
       QS_NUM_ROWS,QS_STATE
FROM [EUL_USER].EUL5_QPP_STATS WHERE  QS_COST IS NOT NULL
AND    QS_OBJECT_USE_KEY = :OBJECTUSEKEY
ORDER BY QS_CREATED_DATE DESC


As you can see, the query prediction functionality tries to retrieve statistics from EUL5_QPP_STATS, and this query takes 35 seconds (the total time for this report is 55 seconds).

Query prediction is based, among other things, on the query prediction statistics table, EUL5_QPP_STATS.
This table records query prediction statistics while Discoverer reports run.

There is no logic in a query run time estimate taking longer than the report's query itself…

Since the query prediction functionality is important to our users, we avoided disabling it (by setting QPPEnable to 0).
Furthermore, I found that we had statistics data in this table from 7 years ago… 
There is no point in keeping such old statistics…

I tried to find information about purging EUL5_QPP_STATS and found this: "How to delete old query prediction statistics" in the Oracle® Business Intelligence Discoverer Administration Guide 10g Release 2 (10.1.2.1).

There is a SQL script at [ORACLE_HOME]\discoverer\util\eulstdel.sql that deletes all query prediction statistics created before a specified date.
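
In essence, what the script does boils down to something like the following (an illustrative sketch, not the script itself; QS_CREATED_DATE is the creation-date column visible in the trace above):

DELETE FROM EUL5_QPP_STATS
WHERE  QS_CREATED_DATE < SYSDATE - 90;  -- purge statistics older than 90 days
COMMIT;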

Great!
I executed this script on my database, gave 90 days as the parameter, and it deleted 460,000 (of 468,000) rows.
I ran a Discoverer report again, but query prediction still took too long, same as before.
I checked the explain plan and the cost of the above SQL, and it remained the same.
I tried gathering statistics on the EUL5_QPP_STATS table and rebuilding its indexes, but the cost became even higher… (more than 103, something like 800…).

I had no choice but to rebuild the EUL5_QPP_STATS table (by export, drop table and import).

After recreating the EUL5_QPP_STATS table I ran a Discoverer report again, and query prediction took an insignificant amount of time, almost nothing…  :-)

This is from the trace I took afterwards:

SELECT QS_ID, QS_COST, QS_ACT_CPU_TIME,
       QS_ACT_ELAP_TIME, QS_EST_ELAP_TIME,
       QS_CREATED_DATE, QS_OBJECT_USE_KEY,
       QS_NUM_ROWS,QS_STATE
FROM [EUL_USER].EUL5_QPP_STATS WHERE  QS_COST IS NOT NULL
AND    QS_OBJECT_USE_KEY = :OBJECTUSEKEY
ORDER BY QS_CREATED_DATE DESC


The elapsed time for this SQL dropped to 0.05 seconds!! (it was 35 seconds before)


The SQL cost dropped from 103 to 31!

I checked this issue on Discoverer Desktop 10g (10.1.2.2), but it is relevant to the web tools (Discoverer Viewer and Discoverer Plus) as well, since the query prediction functionality exists in these tools just as in the client version.

You are welcome to leave a comment.

Aviad

Categories: APPS Blogs

Oracle SOA Suite 11g - Technology Preview

Peeyush Tugnawat - Tue, 2008-01-22 06:59

Get started with the new technology preview release of SOA Suite 11g!

Refer to this link for details:

Oracle SOA Suite 11g - Technology Preview

Regular Tasks Of a DBA

Ayyu's Blog - Sun, 2008-01-20 20:34
Categories: DBA Blogs

Regular Tasks Of a DBA

Ayyappa Yelburgi - Sun, 2008-01-20 20:34
1. Regular monitoring of the free space in the database (a sample query follows below)
2. Taking logical backups of important tables
3. Checking the locks on the database
4. Checking the long-running queries on your database
5. Analysing the performance of your database
6. Investigate wait statistics
7. Tablespace usage
8. Ensure connectivity to Oracle
9. Lock contention
10. Extent failure
11. Alert logs
12. Redo logs
13. Check if all the instances […]
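
For item 1, a minimal sketch of a free-space check, assuming the standard DBA_FREE_SPACE dictionary view (the query itself is illustrative, not part of the original list):

SELECT tablespace_name,
       ROUND(SUM(bytes)/1024/1024) AS free_mb  -- free space per tablespace, in MB
FROM   dba_free_space
GROUP  BY tablespace_name
ORDER  BY free_mb;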

R12 Suitability Matching functionality

RameshKumar Shanmugam - Sat, 2008-01-19 18:02
Suitability Matching is new functionality available in both Employee and Manager Self Service.
To explain it in plain language: this function helps an employee find opportunities in the organization that suit his or her role and competencies. The employee can compare the various competencies that are desired and essential for a particular role.
For Manager Self Service, this function helps managers find employees and applicants based on competency and role.


List of Options available in the Manager Self Service




In Manager Self Service, let's take the example of searching for an employee/contingent worker/applicant based on a competency profile and viewing the result in a graph to analyse the competencies for the job.

Start with the option 'Find Suitable People by Competency'

Add the list of competencies that you want to search for.



Once you select the Continue button, you will see a list of people that match your competency requirements.



Click the Graph People by Competency button, which will help you view the graph based on the minimum, maximum and personal proficiency levels.



Try this out!!!
Categories: APPS Blogs

Oracle’s clusterware real time priority oddity

Christian Bilien - Sat, 2008-01-19 12:16
The CSS processes use both the interconnect and the voting disks to monitor remote nodes. A node must be able to access strictly more than half of the voting disks at any time (this is the reason for the odd number of voting disks), which prevents split brain. Let's just recall that split brains are […]
Categories: DBA Blogs
