DBA Blogs

Why is the cost for TABLE ACCESS BY INDEX ROWID so high for only one row

Tom Kyte - Sat, 2019-10-19 15:45
Dear Tom, I have a problem with a query on a table that has a function-based index. Create index: <code> create index customer_idx_idno on Customer (lower(id_no)); --- id_no varchar2(40) </code> <b>Query 1:</b> execution time 0.031s but cost 5,149, 1 row ...
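A minimal sketch of the kind of predicate that can actually use that function-based index (the table and index come from the excerpt; the bind variable name is an assumption):

select *
from   Customer
where  lower(id_no) = lower(:p_id_no);   -- expression matches the lower(id_no) index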
Categories: DBA Blogs

How to get unique values/blanks across all columns

Tom Kyte - Sat, 2019-10-19 15:45
Hi, I have a wide table with 200-odd columns. The requirement is to pivot the columns and display the unique values and the count of blanks within each column. <code>CREATE TABLE example( c1 VARCHAR(10), c2 VARCHAR(10), c3 VARCHAR(10) ); / INSERT ...
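A minimal sketch of one way to do this with UNPIVOT, using the three-column example table from the excerpt (a real solution would list all 200 columns, or generate this SQL dynamically from USER_TAB_COLUMNS):

select col_name,
       count(distinct col_value)   as distinct_values,
       count(*) - count(col_value) as blank_count
from   example
unpivot include nulls (col_value for col_name in (c1, c2, c3))
group by col_name;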
Categories: DBA Blogs

Elastic search using Oracle 19c

Tom Kyte - Sat, 2019-10-19 15:45
Team, very recently we got a question from our customer: "Can I replace Elasticsearch with Oracle 19c, or with any earlier version of the Oracle database?" Any inputs/directions on that please - kindly advise.
Categories: DBA Blogs

Add NULLABLE column with DEFAULT values to a large table

Tom Kyte - Sat, 2019-10-19 15:45
I'm adding a number of NULLABLE columns with DEFAULT values to a large table, e.g. <code> alter table big_table add (col1 varchar2(1) default 0, col2 varchar2(1) default 0); </code> It's taking a long time to do because Oracle is e...
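One commonly suggested workaround (a sketch only; whether it is appropriate depends on your Oracle version and on whether existing rows really need the value) is to add the columns without a default and set the default afterwards, so existing rows stay NULL and only new inserts pick up the value:

alter table big_table add (col1 varchar2(1), col2 varchar2(1));
-- existing rows remain NULL; only rows inserted from now on get the default
alter table big_table modify (col1 default '0', col2 default '0');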
Categories: DBA Blogs

Create table from select with changes in the column values

Tom Kyte - Sat, 2019-10-19 15:45
Hello, at work we have an update script that takes around 20 hours, and some of the most demanding queries are updates where we change some values, something like: <code> UPDATE table1 SET column1 = DECODE(table1.column1,null,null,'no...
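A minimal sketch of the CTAS pattern often used instead of a very large UPDATE (the column list and the DECODE result are placeholders, since the original statement is truncated):

create table table1_new
as
select decode(column1, null, null, 'new_value') as column1,   -- placeholder expression
       column2, column3                                        -- remaining columns, hypothetical names
from   table1;
-- then swap table1_new into place and re-create indexes, constraints and grants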
Categories: DBA Blogs

Data archival and purging for OLTP database.

Tom Kyte - Sat, 2019-10-19 15:45
Hi Tom, I need your suggestion regarding a data archival and purging solution for an OLTP database. Currently we are planning the approach below. The database size is 150 GB and we plan to run the jobs monthly. 1) Generate flat files from table based on...
Categories: DBA Blogs

Job to end in case connection not establishing with utl_http.begin_request

Tom Kyte - Sat, 2019-10-19 15:45
I am tracking around 17,000 orders through a web service called from PL/SQL to a destination server. I am running multiple jobs in batches (500 orders per job) that invoke the web service to get the order status, so around 34 jobs are running (17000/500) ...
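A minimal sketch of bounding the wait with UTL_HTTP.SET_TRANSFER_TIMEOUT so each job can fail fast instead of hanging (the URL and the 30-second limit are assumptions):

declare
  l_req   utl_http.req;
  l_resp  utl_http.resp;
begin
  utl_http.set_transfer_timeout(30);   -- give up after 30 seconds
  l_req  := utl_http.begin_request('http://destination.example.com/orderstatus');  -- placeholder URL
  l_resp := utl_http.get_response(l_req);
  -- ... read and process the response here ...
  utl_http.end_response(l_resp);
exception
  when utl_http.transfer_timeout or utl_http.request_failed then
    -- log the failure and let the job end instead of waiting indefinitely
    raise;
end;
/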
Categories: DBA Blogs

Inserting values into a table with '&'

Tom Kyte - Sat, 2019-10-19 15:45
Hi, I want to insert values into a table as follows: create table test (name varchar2(35)); insert into test values ('&Vivek'); I tried the escape character '\' but the system still asks for a value for the substitution variable. I also did a...
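The & is a client-side substitution marker in SQL*Plus / SQL Developer, so one simple fix (a sketch) is to switch substitution off for the script:

set define off
insert into test values ('&Vivek');
commit;
set define on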
Categories: DBA Blogs

Basic Replication -- 8 : REFRESH_MODE ON COMMIT

Hemant K Chitale - Sat, 2019-10-19 09:26
So far, in previous posts in this series, I have demonstrated Materialized Views that are set to REFRESH ON DEMAND.

You can also define a Materialized View that is set to REFRESH ON COMMIT -- i.e. every time DML against the Source Table is committed, the MV is also immediately updated.  Such an MV must be in the same database  (you cannot define an ON COMMIT Refresh across two databases  -- to do so, you have to build your own replication code, possibly using Database Triggers or external methods of 2-phase commit).
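For completeness, a minimal sketch (not part of the demonstration below) of the trigger-over-a-database-link approach mentioned above; the table names and the link name remote_db are hypothetical:

create or replace trigger trg_replicate_to_remote
after insert on my_source_tbl                  -- hypothetical local table
for each row
begin
  insert into my_remote_copy@remote_db         -- hypothetical remote table over a DB link
    (id_col, data_col)
  values (:new.id_col, :new.data_col);
end;
/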

Here is a quick demonstration, starting with a Source Table in the HEMANT schema and then building a FAST REFRESH MV in the HR schema.

SQL> show user
USER is "HEMANT"
SQL> create table hemant_source_tbl (id_col number not null primary key, data_col varchar2(30));

Table created.

SQL> grant select on hemant_source_tbl to hr;

Grant succeeded.

SQL> create materialized view log on hemant_source_tbl;

Materialized view log created.

SQL> grant select on mlog$_hemant_source_tbl to hr;

Grant succeeded.

SQL>
SQL> grant create materialized view to hr;

Grant succeeded.

SQL> grant on commit refresh on hemant_source_tbl to hr;

Grant succeeded.

SQL>
SQL> grant on commit refresh on mlog$_hemant_source_tbl to hr;

Grant succeeded.

SQL>


Note : I had to grant the CREATE MATERIALIZED VIEW privilege to HR for this test case. Also, as the MV is to Refresh ON COMMIT, two additional object-level grants on the Source Table and the Materialized View Log are required as the Refresh is across schemas.

SQL> connect hr/HR@orclpdb1
Connected.
SQL> create materialized view hr_mv_on_commit
2 refresh fast on commit
3 as select id_col as primary_key_col, data_col as value_column
4 from hemant.hemant_source_tbl;

Materialized view created.

SQL>


Now that the Materialized View is created successfully, I will test DML against the table and check that an explicit REFRESH call (e.g. DBMS_MVIEW.REFRESH or DBMS_REFRESH.REFRESH) is not required.

SQL> connect hemant/hemant@orclpdb1
Connected.
SQL> insert into hemant_source_tbl values (1,'First');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from hr.hr_mv_on_commit;

PRIMARY_KEY_COL VALUE_COLUMN
--------------- ------------------------------
1 First

SQL> connect hr/HR@orclpdb1
Connected.
SQL> select * from hr_mv_on_commit;

PRIMARY_KEY_COL VALUE_COLUMN
--------------- ------------------------------
1 First

SQL>


The Materialized View in the HR schema was refreshed immediately, without an explicit REFRESH call.

Remember : An MV that is to REFRESH ON COMMIT must be in the same database as the Source Table.




Categories: DBA Blogs

Funny Gamertags for Xbox That You Don’t Want to Miss

VitalSoftTech - Thu, 2019-10-17 09:49

For a regular gamer on Xbox, some cool and funny Gamertags are very easy to come by. In the gaming universe of Xbox, the two essential things you need are 90% smarts and 10% luck, and you will come out the conqueror of the game. However, apart from skill and fortune, you need one […]

The post Funny Gamertags for Xbox That You Don’t Want to Miss appeared first on VitalSoftTech.

Categories: DBA Blogs

Excel Roundup Function – A Step By Step Tutorial

VitalSoftTech - Wed, 2019-10-16 10:28

Let’s face it; it’s all fun and games when you are playing with the numbers, but as soon as the decimals start to make an appearance, things start to take a turn. But it doesn’t necessarily have to be that way. With applications like Microsoft Excel at your disposal, handling the decimals has never been […]

The post Excel Roundup Function – A Step By Step Tutorial appeared first on VitalSoftTech.

Categories: DBA Blogs

Marks & Spencer Sparks, a Walkthrough

VitalSoftTech - Tue, 2019-10-15 10:02

As North Americans have their Walmart & Targets, the British have their own one-stop shop called Marks & Spencer established over one hundred years ago. It is a retail store for all home goods. It is a household name in Britain that caters to all kinds of domestic needs. A business as large as this […]

The post Marks & Spencer Sparks, a Walkthrough appeared first on VitalSoftTech.

Categories: DBA Blogs

Basic Replication -- 7 : Refresh Groups

Hemant K Chitale - Fri, 2019-10-11 23:24
So far, all my blog posts in this series cover "single" Materialized Views (even if I have created two MVs, they are independent of each other and can be refreshed at different schedules).

A Refresh Group is what you would define if you want multiple MVs to be refreshed to the same point in time.  This allows for
(a) data from transactions that touch multiple tables
or
(b) views of multiple tables
to be consistent in the target MVs.

For example, if you have SALES_ORDER and LINE_ITEMS tables and the MVs on these are refreshed at different times, you might see the ORDER (Header) without the LINE_ITEMs (or, worse, in the absence of Referential Integrity constraints, LINE_ITEMs without the ORDER (Header) !).

Here's a demo, using the HR DEPARTMENTS and EMPLOYEES tables with corresponding MVs built in the HEMANT schema.

SQL> show user
USER is "HR"
SQL> select count(*) from departments;

COUNT(*)
----------
27

SQL> select count(*) from employees;

COUNT(*)
----------
107

SQL>
SQL> grant select on departments to hemant;

Grant succeeded.

SQL> grant select on employees to hemant;

Grant succeeded.

SQL>
SQL> create materialized view log on departments;

Materialized view log created.

SQL> grant select, delete on mlog$_departments to hemant;

Grant succeeded.

SQL>
SQL> create materialized view log on employees;

Materialized view log created.

SQL> grant select, delete on mlog$_employees to hemant;

Grant succeeded.

SQL>
SQL>


Having created the source MV Logs, note that I have to grant privileges to the account (HEMANT) that will be reading from and deleting from the MV Logs.

Next, I set up the MVs and the Refresh Group.

SQL> show user
USER is "HEMANT"
SQL>
SQL> select count(*) from hr.departments;

COUNT(*)
----------
27

SQL> select count(*) from hr.employees;

COUNT(*)
----------
107

SQL>
SQL>
SQL> create materialized view mv_dept
2 refresh fast on demand
3 as select department_id as dept_id, department_name as dept_name
4 from hr.departments
5 /

Materialized view created.

SQL>
SQL> create materialized view mv_emp
2 refresh fast on demand
3 as select department_id as dept_id, employee_id as emp_id,
4 first_name, last_name, hire_date
5 from hr.employees
6 /

Materialized view created.

SQL>
SQL> select count(*) from mv_dept;

COUNT(*)
----------
27

SQL> select count(*) from mv_emp;

COUNT(*)
----------
107

SQL>
SQL> execute dbms_refresh.make(-
> name=>'HR_MVs',-
> list=>'MV_DEPT,MV_EMP',-
> next_date=>sysdate+0.5,-
> interval=>'sysdate+1');

PL/SQL procedure successfully completed.

SQL>
SQL> commit;

Commit complete.

SQL>


Here, I have built two MVs and then a Refresh Group called "HR_MVS".  The first refresh will be 12 hours from now and every subsequent refresh will be 24 hours after the previous one.  (The Refresh Interval must be set to a value larger than the time taken to execute the actual Refresh.)
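If the schedule later needs adjusting, it can be changed without rebuilding the group, e.g. with DBMS_REFRESH.CHANGE (a sketch; the new values are arbitrary):

execute dbms_refresh.change(name=>'HR_MVS', next_date=>sysdate+1/24, interval=>'sysdate+2');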

However, I can manually execute the Refresh after new rows are populated into the source tables. First, I insert new rows

SQL> show user
USER is "HR"
SQL> insert into departments (department_id, department_name)
2 values
3 (departments_seq.nextval, 'New Department');

1 row created.

SQL> select department_id
2 from departments
3 where department_name = 'New Department';

DEPARTMENT_ID
-------------
280

SQL> insert into employees(employee_id, first_name, last_name, email, hire_date, job_id, department_id)
2 values
3 (employees_seq.nextval, 'Hemant', 'Chitale', 'hkc@myenterprise.com', sysdate, 'AD_VP', 280);

1 row created.

SQL> select employee_id
2 from employees
3 where first_name = 'Hemant';

EMPLOYEE_ID
-----------
208

SQL> commit;

Commit complete.

SQL>


Now that there are new rows, the target MVs must be refreshed together.

SQL> connect hemant/hemant@orclpdb1
Connected.
SQL> execute dbms_refresh.refresh('HR_MVS');

PL/SQL procedure successfully completed.

SQL> select count(*) from mv_dept;

COUNT(*)
----------
28

SQL> select count(*) from mv_emp;

COUNT(*)
----------
108

SQL>
SQL> select * from mv_dept
2 where dept_id=280;

DEPT_ID DEPT_NAME
---------- ------------------------------
280 New Department

SQL> select * from mv_emp
2 where emp_id=208;

DEPT_ID EMP_ID FIRST_NAME LAST_NAME HIRE_DATE
---------- ---------- -------------------- ------------------------- ---------
280 208 Hemant Chitale 12-OCT-19

SQL>


Both MVs have been Refresh'd together as an ATOMIC Transaction.  If either of the two MVs had failed to refresh (e.g. being unable to allocate an extent to grow the MV), both INSERTs would have been rolled back.  (Note : it is not necessary for both source tables to have new / updated rows; the Refresh Group works even if only one of the two tables has new / updated rows.)

Note : I have used DBMS_REFRESH.REFRESH (instead of DBMS_MVIEW.REFRESH) to execute the Refresh.
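For comparison, a roughly equivalent call through DBMS_MVIEW would be the following sketch; atomic_refresh=>TRUE is what keeps the listed MVs in a single transaction:

execute dbms_mview.refresh(list=>'MV_DEPT,MV_EMP', atomic_refresh=>TRUE);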

You can build multiple Refresh Groups, each consisting of *multiple* Source Tables from the same source database.
You would define each Refresh Group to maintain consistency of data across multiple MVs (sourced from different tables).
Besides the Refresh Group on two HR tables, I could have, within the HEMANT schema, more Refresh Groups on FINANCE schema tables as well.

(Can you have a Refresh Group sourcing from tables from different schemas ?  Try that out !)


What's the downside of Refresh Groups ?    
Undo and Redo !  Every Refresh consists of INSERT/UPDATE/DELETE operations on the MVs.  And if any one of the MVs fails to Refresh, the entire set of DMLs (across all the MVs in the Refresh Group) has to *Rollback* !


Categories: DBA Blogs

100 Good Tumblr Usernames

VitalSoftTech - Thu, 2019-10-10 09:38

Shakespeare once said, “What is in a name?”. Our boy Shakespeare never had a Tumblr blog because, to have a hit blog, one must think of a catchy and cool Tumblr username. He would be pulling his hair out in despair had he tried to get his Tumblr blog off the ground with all the […]

The post 100 Good Tumblr Usernames appeared first on VitalSoftTech.

Categories: DBA Blogs

Funny Twitter Names

VitalSoftTech - Tue, 2019-10-08 09:45

If there is anyone who knows about the importance of having a funny Twitter name, it is the people who already use Twitter. Twitter is one of the most popular social media platforms. This microblogging website amounts up to an average of 330 Million active users every month over the globe. Fun Fact: Jack Dorsey, […]

The post Funny Twitter Names appeared first on VitalSoftTech.

Categories: DBA Blogs

Urdu in AWS

Pakistan's First Oracle Blog - Fri, 2019-10-04 00:24
Urdu is arguably one of the most beautiful and poetic languages on the planet. AWS Translate now supports Urdu along with 31 other languages, which is awesome.



AWS Translate is growing by leaps and bounds and has matured quite a lot over the last few months. There are now hundreds of translations and it's now available in all regions.


Amazon Translate is a text translation service that uses advanced machine learning technologies to provide high-quality translation on demand. You can use Amazon Translate to translate unstructured text documents or to build applications that work in multiple languages.


Amazon Translate provides translation between a source language (the input language) and a target language (the output language). A source language-target language combination is known as a language pair.


As with other AWS products, there are no contracts or minimum commitments for using Amazon Translate.



Categories: DBA Blogs

How to do a quick health check of AWS RDS database

Pakistan's First Oracle Blog - Fri, 2019-10-04 00:16
Just because a database is on AWS RDS doesn't mean that it won't run slowly or get stuck. So when your users complain about the slowness of your RDS database, do the following quick health check:
1- From the AWS console, in the RDS section, go to your database and then to the Logs and Events tab. From Logs, check the alert log for Oracle, the Error log for SQL Server, the postgres log for PostgreSQL, and the error log for MySQL. Check for any errors or warnings and proceed accordingly for that database engine.


2- If you don't see any errors or warnings, or if you simply want to check further, first check which database instance type you are using. For example, for one of my test Oracle databases, it is db.r4.4xlarge.


Go to https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.DBInstanceClass.html and check specifications of this instance type.

For instance, for db.r4.4xlarge, the specifications are:

Instance Class:        db.r4.4xlarge
vCPU:                  16
ECU:                   53
Memory (GiB):          122
VPC Only:              Yes
EBS Optimized:         Yes
Max. Bandwidth (Mbps): 3,500
Network Performance:   Up to 10 Gbps


So this db.r4.4xlarge has a max bandwidth (throughput) of 437.5 MB/s  (3500 Mbps/8 = 437.5 MB/s). The throughput limit is separate for read and write, which means you’ll get 437.5 MB/s for read and 437.5 MB/s for write.


3- Now go to the Monitoring tab of this RDS instance in your console and check Read Throughput and Write Throughput to see if your instance is touching the above threshold - in this case, 437.5 MB/s. If yes, then you know that I/O is the issue and you may need to either tune the SQLs responsible or increase the instance size.


4- Similarly, from the same Monitoring tab, check CPU usage, freeable memory, and free storage space to make sure no other threshold is being reached.
5- Also check the Disk Queue Depth. The disk queue depth is the number of I/O requests waiting to be serviced. Time spent waiting in the queue is a component of latency and service time. Ideally a disk queue depth of 15 or less is fine, but if you notice latency greater than 10 milliseconds accompanied by a high disk queue depth, that could indicate a performance problem.


6- Last but not least, reviewing and tuning your SQLs is the biggest optimization gain you can achieve whether your database is in RDS or not.


Hope it helps.
Categories: DBA Blogs

How to rename an oracle scheduler job

Flavio Casetta - Thu, 2019-10-03 14:07
Categories: DBA Blogs

How Can you Use Audience Targeting to Show your Ads?

VitalSoftTech - Thu, 2019-10-03 10:10

Audience targeting has been a crucial tool in an advertiser’s arsenal for a long time. You can use audience targeting to show your ads to the exact people you’re targeting. What’s more, you can show it to them at the exact time you want. As advertisers, we believe that this is one of the core […]

The post How Can you Use Audience Targeting to Show your Ads? appeared first on VitalSoftTech.

Categories: DBA Blogs
