Monday, December 18, 2006

Trace Analyzer

Trace Analyzer is used to interpret raw trace files. It is similar to TKPROF, but its output is more readable. It generates a report with details such as: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plans, explain plans, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

I particularly like its ability to show the values of bind variables used in any SQL statement.
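
To produce such a trace in the first place, event 10046 can be enabled for the current session. A minimal sketch (level 12 captures both bind values and wait events; the tracefile identifier is optional and illustrative):

-- Tag the trace file name so it is easy to find in udump
ALTER SESSION SET tracefile_identifier = 'my_trace';

-- Level 12 = bind values (level 4) + wait events (level 8)
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

-- ... run the statements to be traced ...

ALTER SESSION SET EVENTS '10046 trace name context off';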

For a given raw SQL trace generated by event 10046, it provides:
  1. Trace identification including actual start and completion time, host name, instance, size, RDBMS version, etc.
  2. SQL statements count, user and internal, total and unique.
  3. Time summary: elapsed, cpu, non-idle wait events, idle wait events, for user (recursive and non-recursive) and for internal.
  4. Call summary for user and internal, with elapsed, cpu, logical reads, physical reads and similar performance details.
  5. Summary of wait events, classified by non-idle and idle, for user and for internal (if event 10046 level 8 or 12 was used to generate the trace).
  6. Top 20 SQL statements in terms of importance for SQL tuning analysis.
  7. List of all unique SQL statements with one-line performance summary per statement.
  8. Gaps of no trace activity found on file.
  9. List of transactions found (commits and rollbacks).
  10. Oracle errors if any.
  11. I/O core waits including schema objects affected (tables, indexes, partitions), when traced with level 8 or 12.
  12. Top 5 hot blocks, indicating the schema objects (level 8 or 12).
  13. Latch wait summary, by name and specific parameters (level 8 or 12).
  14. Non-default initialization parameters.

For every SQL statement in the trace, it provides:
  1. Cursor header with SQL statement text, hash value, length, line on trace, depth, user, etc.
  2. Oracle errors if any.
  3. Call summary (parse, execute and fetch totals).
  4. Non-idle and idle wait events (if traced with level 8 or 12).
  5. I/O and latch waits summary (if level 8 or 12).
  6. Details of the first execution and the top 10 executions of the SQL statement.
  7. List of bind variable values for the first and top 10 executions, if the trace was generated using event 10046 level 4 or 12.
  8. Cumulative row source plan for all executions of SQL statement.
  9. Detailed explain plan if Trace Analyzer is executed on the same instance where the trace was generated, and if the SQL statement made the Top 20 list.
  10. Table, index and partition details including row count, CBO statistics and indexed columns if the SQL statement generated an explain plan.

For the installation script and usage notes, visit http://www.metalink.oracle.com.

Friday, December 15, 2006

Renaming table when MV log is associated

Recently, I faced a problem renaming a table. The table had a materialized view log associated with it, and I wanted to move the old version of the table aside and put a new version in its place.

But I was getting the following error while renaming the old table:

ORA-26563: renaming this table is not allowed

One of the reasons for this error is that the table is used by a materialized view, and a materialized view log is associated with the table to track data changes.
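
A minimal reproduction sketch (the table name and structure are illustrative):

-- A table with a materialized view log cannot simply be renamed
CREATE TABLE old_tab (id NUMBER PRIMARY KEY);
CREATE MATERIALIZED VIEW LOG ON old_tab;

RENAME old_tab TO old_tab_bkup;
-- ORA-26563: renaming this table is not allowed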

Following are the steps that need to be taken when renaming such a table:

1. Begin the table reorganization
EXEC DBMS_MVIEW.BEGIN_TABLE_REORGANIZATION(USER,'OLD_TAB');

2. Rename the old table to a backup name

RENAME OLD_TAB TO NEW_BKUP;

3. Rename the new table to the original table name

RENAME NEW_TAB TO OLD_TAB;

4. End the table reorganization after renaming the new table to the original name

EXEC DBMS_MVIEW.END_TABLE_REORGANIZATION (USER,'OLD_TAB');
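
Afterwards, it can be verified that the materialized view log is still associated with the table; a quick check, assuming the same names as above:

SELECT master, log_table
FROM   user_mview_logs
WHERE  master = 'OLD_TAB';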

Thursday, December 14, 2006

Automatic SQL Tuning

As we are aware, the manual SQL tuning process is complex and time consuming. Oracle 10g introduces automatic SQL tuning to ease this process; to use the feature, we use the SQL Tuning Advisor.

Here, I will give a brief introduction to Automatic SQL Tuning and the simplest way of using the SQL Tuning Advisor.

The query optimizer can run in two modes:

1. Normal mode: In this mode, the optimizer generates an access plan from the already available statistics in a very short time, so it may not have time to consider all of the information available.

2. Tuning mode: In this mode, the optimizer generates recommendations after performing extra analysis of the query. This process can be time and resource consuming, so it should be used only for long-running, high-load queries.

Automatic SQL tuning advisor performs following analysis:

· Statistics Analysis: The optimizer recommends gathering statistics for objects whose statistics are stale or missing.

· SQL Profiling: The optimizer generates a SQL profile with additional statistics specific to the query, which can be saved and used later. This analysis generates a recommendation to use the SQL profile. A SQL profile, once accepted, is stored in the data dictionary and used to generate the access plan for the query when the optimizer runs in normal mode (a sketch of accepting a profile follows this list).

· Access Path Analysis: In this analysis, the advisor recommends new indexes which may enhance the performance of the query.

· SQL Structure Analysis: The advisor suggests changes to the SQL query itself to improve performance. The developer should analyze the recommendations to check whether they fit the query. For example, the advisor may suggest replacing UNION with UNION ALL; if the query can never return duplicate records, UNION ALL will be faster.
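
If the advisor recommends a SQL profile, it can be accepted with DBMS_SQLTUNE.ACCEPT_SQL_PROFILE. A minimal sketch, using the task name from the manual example later in this post (the profile name is illustrative):

DECLARE
  l_profile_name VARCHAR2(30);
BEGIN
  l_profile_name := DBMS_SQLTUNE.ACCEPT_SQL_PROFILE(
    task_name => 'my_sql_tuning_task',
    name      => 'my_sql_profile');
END;
/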

The SQL Tuning Advisor can work on various inputs such as ADDM findings, high-load SQL statements, the cursor cache, SQL tuning sets, or a manually specified statement. Here only the cursor cache and manually specified statement methods are covered.

Using SQL Tuning Advisor

For a query in the cursor cache:

1. Create a SQL tuning task

DECLARE
  l_sql_tune_task_id VARCHAR2(100);
BEGIN
  l_sql_tune_task_id := DBMS_SQLTUNE.create_tuning_task (
    sql_id      => 'dn7gnkw06r2yu',
    scope       => DBMS_SQLTUNE.scope_comprehensive,
    time_limit  => 60,
    task_name   => 'dn7gnkw06r2yu_tuning_task',
    description => 'Tuning task for statement dn7gnkw06r2yu.');

  DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id);
END;
/

2. Execute a SQL tuning task

BEGIN
DBMS_SQLTUNE.EXECUTE_TUNING_TASK( task_name =>'dn7gnkw06r2yu_tuning_task' );
END;
/

The status of the task can be checked with the following query:

SELECT status FROM USER_ADVISOR_TASKS WHERE task_name = 'dn7gnkw06r2yu_tuning_task';

The execution progress of the SQL Tuning Advisor can be checked in the V$ADVISOR_PROGRESS view.

SELECT sofar, totalwork FROM V$ADVISOR_PROGRESS WHERE user_name = 'HR' AND task_name = 'dn7gnkw06r2yu_tuning_task';

3. Display the results of a SQL tuning task

SET LONG 1000
SET LONGCHUNKSIZE 1000
SET LINESIZE 100
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK ('dn7gnkw06r2yu_tuning_task') FROM DUAL;

4. Implement recommendations as appropriate

For manually supplied query:

1. Create a SQL tuning task

DECLARE
  my_task_name VARCHAR2(30);
  my_sqltext   CLOB;
BEGIN
  my_sqltext := 'SELECT /*+ ORDERED */ * ' ||
                'FROM employees e, locations l, departments d ' ||
                'WHERE e.department_id = d.department_id AND ' ||
                'l.location_id = d.location_id AND ' ||
                'e.employee_id < :bnd';

  my_task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK(
    sql_text    => my_sqltext,
    bind_list   => sql_binds(anydata.ConvertNumber(100)),
    user_name   => 'HR',
    scope       => 'COMPREHENSIVE',
    time_limit  => 60,
    task_name   => 'my_sql_tuning_task',
    description => 'Task to tune a query on a specified employee');
END;
/

2. Execute a SQL tuning task

BEGIN
DBMS_SQLTUNE.EXECUTE_TUNING_TASK( task_name =>'my_sql_tuning_task' );
END;
/

The status of the task can be checked with the following query:

SELECT status FROM USER_ADVISOR_TASKS WHERE task_name = 'my_sql_tuning_task';

The execution progress of the SQL Tuning Advisor can be checked in the V$ADVISOR_PROGRESS view.

SELECT sofar, totalwork FROM V$ADVISOR_PROGRESS WHERE user_name = 'HR' AND task_name = 'my_sql_tuning_task';

3. Display the results of a SQL tuning task

SET LONG 1000
SET LONGCHUNKSIZE 1000
SET LINESIZE 100
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK( 'my_sql_tuning_task')
FROM DUAL;

4. Implement recommendations as appropriate
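
Once the recommendations have been implemented (or rejected), the tuning task can be removed. A minimal cleanup sketch:

BEGIN
  DBMS_SQLTUNE.DROP_TUNING_TASK(task_name => 'my_sql_tuning_task');
END;
/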

Wednesday, December 13, 2006

COUNT(*) Vs. COUNT(1)

The COUNT(*) vs. COUNT(1) debate has been going on for years now, so I thought of running an experiment of my own. Here are the results:

I ran the test against an indexed table and a non-indexed table. Both tables have about 6 million rows, with a row width of approximately 4,000 bytes.

1. COUNT(1) is internally converted to COUNT(*).

SQL> EXPLAIN PLAN FOR
2 select max(REG_COUNT) from f_reg
3 group by MG_BKEY
4 having COUNT(1) > 1;

Explained.

SQL> SET LINESIZE 130
SQL> SET PAGESIZE 0
SQL> SELECT * FROM table(DBMS_XPLAN.DISPLAY);

Plan hash value: 272420531

-----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 6 | 50738 (5)| 00:10:09 |
|* 1 | FILTER | | | | | |
| 2 | HASH GROUP BY | | 1 | 6 | 50738 (5)| 00:10:09 |
| 3 | TABLE ACCESS FULL| F_REG | 6632K| 37M| 49776 (4)| 00:09:58 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter(COUNT(*)>1)

15 rows selected.

DBMS_XPLAN shows the plan and the predicates used by Oracle to execute a query. You can see above that the filter "COUNT(1) > 1" has been converted to "COUNT(*) > 1".

2. There is no difference in COUNT(*) and COUNT(1).

Case a: Test on indexed table

set autotrace traceonly
select count(1) from f_reg;

Execution Plan
----------------------------------------------------------
Plan hash value: 622071604

--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 4429 (3)| 00:00:54 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | INDEX FAST FULL SCAN| F_REG_PK | 6632K| 4429 (3)| 00:00:54 |
--------------------------------------------------------------------------

Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
19808 consistent gets
0 physical reads
0 redo size
414 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed


select count(*) from f_reg;

Execution Plan
----------------------------------------------------------
Plan hash value: 622071604

--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 4429 (3)| 00:00:54 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | INDEX FAST FULL SCAN| F_REG_PK | 6632K| 4429 (3)| 00:00:54 |
--------------------------------------------------------------------------

Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
19808 consistent gets
0 physical reads
0 redo size
414 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed


Case b: Test on non-indexed table

set autotrace traceonly
select count(1) from f_reg_1_bkup;

Execution Plan
----------------------------------------------------------
Plan hash value: 2458129084

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 48989 (1)| 00:09:48 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| F_REG_1_BKUP | 6632K| 48989 (1)| 00:09:48 |
---------------------------------------------------------------------------


Statistics
----------------------------------------------------------
330 recursive calls
0 db block gets
221666 consistent gets
176232 physical reads
0 redo size
414 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
4 sorts (memory)
0 sorts (disk)
1 rows processed



select count(*) from f_reg_1_bkup;

Execution Plan
----------------------------------------------------------
Plan hash value: 2458129084

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 48989 (1)| 00:09:48 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| F_REG_1_BKUP | 6632K| 48989 (1)| 00:09:48 |
---------------------------------------------------------------------------


Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
221585 consistent gets
175908 physical reads
0 redo size
414 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

You can see there is no difference at all in statistics or timings for the indexed table. For the non-indexed table there is a slight difference in consistent gets (marginally fewer for COUNT(*)) and no difference in elapsed time.

P.S. The tests were carried out in Oracle 10g.

Parameterized View

I always used to wonder why Oracle didn't provide parameterized views, similar to parameterized cursors, until I found a workaround.

Following is an example showing the use of a parameterized view:

CREATE OR REPLACE VIEW test_vw AS
SELECT empno, ename, sal
FROM emp
WHERE empno = TO_NUMBER(USERENV('client_info'));

In a stored procedure, you will have to call the following procedure before executing the view query, as in this example:

exec dbms_application_info.set_client_info(7934);

select * from test_vw ;

And the output is:

EMPNO ENAME SAL
---------- ---------- ----------
7934 MILLER 13040
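
A minimal sketch of wrapping this in a stored procedure (the procedure name is illustrative):

CREATE OR REPLACE PROCEDURE show_emp (p_empno IN NUMBER)
AS
BEGIN
  -- Set the "parameter" consumed by the view's WHERE clause
  DBMS_APPLICATION_INFO.SET_CLIENT_INFO(TO_CHAR(p_empno));
  FOR r IN (SELECT empno, ename, sal FROM test_vw) LOOP
    DBMS_OUTPUT.PUT_LINE(r.empno || ' ' || r.ename || ' ' || r.sal);
  END LOOP;
END show_emp;
/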

Wednesday, October 18, 2006

DDL Triggers

A trigger is a block of PL/SQL, Java, or C code that executes, or fires, when a particular event occurs within the database.

Triggers are useful for implementing code that must be executed whenever a pre-defined event occurs. By utilizing triggers, scheduling and data integrity problems can be eliminated, because the trigger fires whenever the triggering event occurs; you need not remember to schedule or code an activity to perform the logic in the trigger.

There are various events on which a trigger can be written, such as:
1.System events
.....a. Database startup and shutdown
.....b. Server error message events
2.User events
.....a. User logon and logoff
.....b. DDL statements (CREATE, ALTER, and DROP)
.....c. DML statements (INSERT, DELETE, and UPDATE)

Here we would be concentrating on triggers invoked by DDL statements.

As a DBA, you are monitoring a database and suddenly realize that a particular table is occupying lots of space. The table is not one of the application tables, and you don't know who created it.

Or: your application is working fine, and one day users start complaining that it is broken. After searching for hours, you discover that someone altered a table. If ALTER had been prevented for that user, this wouldn't have happened.

In such scenarios, DDL triggers can be used. Use DDL triggers when you want to do the following:
• You want to prevent certain changes to your database schema.
• You want something to occur in the database in response to a change in your database schema.
• You want to record changes or events in the database schema.

DDL Triggers are supported in Oracle and SQL Server 2005 but not in DB2.

DDL Triggers in Oracle:

Following is the list of events supported in Oracle:

BEFORE / AFTER ALTER
BEFORE / AFTER CREATE
BEFORE / AFTER DROP
BEFORE / AFTER RENAME
BEFORE / AFTER ANALYZE
BEFORE / AFTER ASSOCIATE STATISTICS
BEFORE / AFTER DISASSOCIATE STATISTICS
BEFORE / AFTER AUDIT
BEFORE / AFTER NOAUDIT
BEFORE / AFTER COMMENT
BEFORE / AFTER DDL
BEFORE / AFTER GRANT
BEFORE / AFTER REVOKE
BEFORE / AFTER TRUNCATE
AFTER SUSPEND

Example:

CREATE OR REPLACE TRIGGER bcs_trigger
BEFORE CREATE
ON SCHEMA
BEGIN
  -- Record who created what, and when (assumes a ddl_log table; see sketch below)
  INSERT INTO ddl_log
  SELECT ora_sysevent, ora_dict_obj_owner,
         ora_dict_obj_name, NULL, USER, SYSDATE
  FROM dual;
END bcs_trigger;
/
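
The trigger above assumes a ddl_log table already exists; a minimal sketch of a matching table (column names are illustrative, chosen to line up with the six values inserted):

CREATE TABLE ddl_log (
  operation  VARCHAR2(30),
  obj_owner  VARCHAR2(30),
  obj_name   VARCHAR2(128),
  extra_info VARCHAR2(4000),
  fired_by   VARCHAR2(30),
  fired_on   DATE
);

To actually prevent a schema change in Oracle, as discussed above, the trigger can simply raise an error. A minimal sketch:

CREATE OR REPLACE TRIGGER prevent_alter
BEFORE ALTER ON SCHEMA
BEGIN
  RAISE_APPLICATION_ERROR(-20001,
    'You must disable trigger PREVENT_ALTER to alter objects in this schema!');
END prevent_alter;
/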

DDL Triggers in SQL Server 2005:

Following are a few of the events supported in SQL Server 2005:

Create_Table
Alter_Table
Drop_Table
Create_View
Alter_View
Drop_View
Create_Synonym
Drop_Synonym
Create_Function
Alter_Function
Drop_Function
Create_Procedure
Alter_Procedure
Drop_Procedure

Example:

CREATE TRIGGER safety
ON DATABASE
FOR DROP_TABLE, ALTER_TABLE
AS
PRINT 'You must disable Trigger "safety" to drop or alter tables!'
ROLLBACK
;

Using DDL triggers, you can prevent or log changes to your database. As a DBA or database developer, it may be important to know if and when something in your database has changed; in some cases, you may even want to prevent changes by any user altogether.

If you have naming conventions in your database, DDL triggers can enforce them by preventing improperly named tables, views, stored procedures, and even table columns.

You may also want to perform other actions in response to a DDL event; DDL triggers allow you both to respond to changes and to prevent them altogether.

Dangers of Bind Variables

We have seen and heard many advantages of using bind variables in a query, one of which is that a hard parse is done only when a query is executed for the first time; the plan created then is reused for subsequent executions.

But there is a catch. Plan reuse is advantageous when the data is not skewed, but if a column is heavily skewed, it becomes a disadvantage.

Look at the example given below:

DROP TABLE t;

CREATE TABLE t
AS
SELECT 1 id, a.* FROM all_objects a;

UPDATE t SET id = 99 WHERE ROWNUM = 1;

CREATE INDEX t_idx ON t(id);

ANALYZE TABLE t COMPUTE STATISTICS
FOR TABLE
FOR ALL INDEXES
FOR ALL INDEXED COLUMNS SIZE 2;

variable n number;

exec :n := 1;

select * from t n_was_1 where id = :n;

Explain Plan from v$sql_plan

OPERATION OPTIONS
SELECT STATEMENT
TABLE ACCESS FULL

exec :n := 99;

select * from t n_was_99 where id = :n;

Explain Plan from v$sql_plan

OPERATION OPTIONS
SELECT STATEMENT
TABLE ACCESS BY INDEX ROWID

These plans are exactly what we would expect them to be.

Now we execute the same queries again (soft parse):

exec :n := 99;

select * from t n_was_1 where id = :n;

OPERATION OPTIONS
SELECT STATEMENT
TABLE ACCESS FULL

exec :n := 1;

select * from t n_was_99 where id = :n;

OPERATION OPTIONS
SELECT STATEMENT
TABLE ACCESS BY INDEX ROWID


As you can see, both queries reused the plans created at the time of the hard parse, even though the new bind values would favour the opposite plans.
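
The plans shown above can be pulled from the cursor cache with a query along these lines (a sketch; the SQL_TEXT filter is illustrative):

SELECT s.sql_text, p.operation, p.options
FROM   v$sql s, v$sql_plan p
WHERE  s.address      = p.address
AND    s.hash_value   = p.hash_value
AND    s.child_number = p.child_number
AND    s.sql_text     LIKE 'select * from t n_was%';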

Use hints like FIRST_ROWS(n)

Search queries usually fetch lots of records when a proper search condition is not provided. If a query fetches thousands of rows, most of them are of no use to the end user. In such scenarios, the number of records should be limited with a ROWNUM predicate, or the hint /*+ FIRST_ROWS(n) */ should be used, where n is the number of rows required.


Note: The hint /*+ FIRST_ROWS */ does not know how many rows the query is expected to return and generally favours index access over full-table scans. Therefore /*+ FIRST_ROWS(n) */ should be used instead of /*+ FIRST_ROWS */.

Let's do a small experiment:

DROP TABLE t;

CREATE TABLE t AS SELECT ROWNUM id, ao.* FROM ALL_OBJECTS ao;

ANALYZE TABLE t
COMPUTE statistics;

SET timing on
SET autotrace traceonly
spool first_rows

prompt 'SELECT * FROM t;'
SELECT * FROM t;

prompt 'SELECT /*+ FIRST_ROWS */ * FROM t;'
SELECT /*+ FIRST_ROWS */ * FROM t;

prompt 'SELECT /*+ FIRST_ROWS_100 */ * FROM t;'
SELECT /*+ FIRST_ROWS_100 */ * FROM t;

prompt 'SELECT /*+ FIRST_ROWS(100) */ * FROM t;'
SELECT /*+ FIRST_ROWS(100) */ * FROM t;

prompt 'SELECT * FROM t WHERE ROWNUM <= 100;'
SELECT * FROM t WHERE ROWNUM <= 100;

spool off
exit


And the results are:

CASE 1:
'SELECT * FROM t;'
88430 rows selected.
Elapsed: 00:00:03.04
Cost = 286

CASE 2:
'SELECT /*+ FIRST_ROWS */ * FROM t;'
88430 rows selected.
Elapsed: 00:00:03.03
Cost = 286

CASE 3:
'SELECT /*+ FIRST_ROWS_100 */ * FROM t;'
88430 rows selected.
Elapsed: 00:00:03.02
Cost = 286

CASE 4:
'SELECT /*+ FIRST_ROWS(100) */ * FROM t;'
88430 rows selected.
Elapsed: 00:00:03.02
Cost = 4

CASE 5:
'SELECT * FROM t WHERE ROWNUM <= 100;'
100 rows selected.
Elapsed: 00:00:00.00
Cost = 4

After looking at the above results, it is clear that the best option is limiting the result set with ROWNUM (case 5), and the next best is the /*+ FIRST_ROWS(n) */ hint (case 4), which at least brings the optimizer cost down.