Tuesday, June 30, 2009

Tools for tuning the database

In the previous segment, we talked about various aspects of performance tuning. Oracle provides many tools to monitor these aspects. These are

1. Oracle Enterprise Manager: This is a web-based tool provided by Oracle to manage databases efficiently. OEM helps with all kinds of DBA activities, including creating tables, managing tablespaces, and so on. In addition, OEM provides an interface to various performance advisors.
2. Oracle Diagnostics Pack: ADDM and AWR are the most widely used of the many diagnostic tools Oracle provides in this pack.
3. Oracle Database Tuning Pack: This pack automates the tuning process using the SQL Tuning Advisor and the SQL Access Advisor.
4. Oracle Real Application Testing : This consists of Database Replay and SQL Performance Analyzer.

Apart from these advanced tools, there are many basic tools available, such as tkprof. Though tkprof gives you a wide range of information for detecting the offending queries and bottlenecks, one needs to be well versed in reading a tkprof report, because the report is text based.
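As a rough sketch of the tkprof workflow (file names below are hypothetical), the raw trace is first generated by enabling SQL trace for the session; the resulting file is written to user_dump_dest:

```sql
-- enable SQL trace for the current session
ALTER SESSION SET sql_trace = TRUE;
-- ... run the statements to be analyzed ...
ALTER SESSION SET sql_trace = FALSE;
```

The raw file is then formatted from the OS prompt, for example: tkprof orcl_ora_1234.trc report.txt sys=no sort=exeela (the trace file name here is just an example; sort=exeela lists statements by elapsed execution time).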

In the coming parts we will get to know all these tools one by one.

Thursday, June 25, 2009

Performance Tuning – Overview

The first and foremost question that comes to mind is: what is the need for tuning? This is the world of fast food, fast bikes, fast music, fast cars; in short, fast everything. So why not a fast database as well? Today everyone is in some kind of race, and to achieve his or her goal one needs to win that race. To win a race, one needs to either run the fastest of all or make the horse (or car) run the fastest of all. Performance tuning helps applications run faster.

Is tuning all about making the application run faster? No. Speed is of course one of the major things we achieve during (and after) tuning, but tuning also means achieving the desired output in the best possible way, in the least possible time, while utilizing resources optimally.

Tuning has many aspects, which include (but are not restricted to):

1. Database design: This is the first and most important, yet most ignored, part of tuning. Poor database design often results in poor database performance as well.
2. Application tuning: It is often claimed that application performance can be improved by as much as 80% just by writing efficient SQL.
3. Memory tuning: Database buffers should be properly sized. As far as possible, the use of automated memory parameters is advised. Pinning large or frequently used objects in memory may also help.
4. Disk I/O tuning: Data files should be properly sized and laid out to provide optimum disk throughput. Disk sorts, full table scans (FTS), row chaining, etc. also need to be checked.
5. Eliminating database contention: Database locks, latches and wait events should be watched carefully.
6. Operating system tuning: OS CPU, I/O and memory usage should also be monitored.
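As a small illustration of point 5, current waiters can be checked in the v$session_wait view; a minimal sketch (the idle-event filter here is simplistic and just for illustration):

```sql
-- sessions currently waiting, longest waiters first
SELECT sid, event, seconds_in_wait
  FROM v$session_wait
 WHERE event NOT LIKE 'SQL*Net%'
   AND event <> 'rdbms ipc message'
 ORDER BY seconds_in_wait DESC;
```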

If one takes care of all the above aspects, good application performance is practically guaranteed.

In the next segment we will look at the tools available for tuning.

Monday, June 22, 2009

Performance Tuning

The term itself is vast enough to write a book on, and many have already been written. Then why am I writing a blog on performance tuning? The answer is simple: for quick checks. Books are tedious when you need to search for something quickly. So I am trying to consolidate things and put them in one place that is easy to find. You can say it is a kind of quick reference for when I am in the middle of something and do not want to go searching on the net or in a book. And it is also easily accessible.

I am not planning out in advance how I am going to write this. I may consolidate multiple topics in one post, or one topic may stretch across two, three or more posts if required.

Let me know if you find anything useful, if any topic helps you in a hard time, or if you find any topic of no use to you.

Friday, November 16, 2007

Oracle Listener Security

You are working on a critical issue on your database. You log in to your database and get the error "ORA-12541: TNS:no listener". Now you start wondering how the listener got stopped. You have not stopped it. It means "SOMEONE ELSE HAS STOPPED THE LISTENER".

This shows that you need to secure your listener from unauthorized users.

You can protect your listener with a password so that only the authorized users can start and stop it and change the configuration.

To protect a listener with a password, perform the following as the oracle user:

$ lsnrctl
LSNRCTL> change_password
Old password: {press enter here}
New password: {enter new password}
Reenter new password: {reenter password}

You will get the following message once done:

Password changed for LISTENER
The command completed successfully

Once you get the message, just save the configuration in the listener.ora file.

LSNRCTL> save_config
Saved LISTENER configuration parameters.
Listener Parameter File D:\oracle\product\10.2.0\db_1\network\admin\listener.ora
Old Parameter File D:\oracle\product\10.2.0\db_1\network\admin\listener.bak
The command completed successfully

That's it, and you have a password protected listener.

P.S. In Oracle 9i you need to give the following command before saving the configuration in the listener.ora file.

LSNRCTL> set password
Password:{the password you chose}
The command completed successfully

In Oracle 10g this is not required. The reason: in Oracle 10g, operating system authentication has been implemented for the listener. The listener checks whether you are part of the privileged dba group, and if so it grants you access to change the password, save the configuration, stop the listener, etc.

Now you are safe from unauthorized shutdowns of the listener and other kinds of security breaches.

Auditing ALTER SYSTEM commands

Wondering who is changing the system date? Who is changing that parameter? Who is issuing all those ALTER SYSTEM commands which are causing havoc for others?

The simplest solution to the problem is: "DO NOT PROVIDE THE DBA PASSWORD TO A PERSON WHO IS NOT A DBA". Looks simple, right? Wrong. In development environments it is not always possible to restrict DBA access to one or two people. And it is very difficult to track down someone who is issuing ALTER SYSTEM commands; triggers will not work here.

These commands cannot be stopped, but they can at least be monitored. To monitor them, you need to enable AUDITing. Again, this can hit performance, so use it only when it is really, really important.

The first step is to enable auditing for the database. To do so, follow these steps:

1. Set the parameter audit_trail=db,extended in the spfile.
2. Connect as the SYS user.
3. Start auditing for a user, either
        a.) for a particular command he fires (whether the command is successful or not), or
        b.) for ALL the commands fired by him:
            AUDIT ALL BY abc BY ACCESS;
4. Shut down the database.
5. Start up the database.
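The steps above can be sketched in SQL as follows (the user name abc is from the example above; note that on 10g the parameter value is spelled db_extended, while db,extended is the 11g form):

```sql
-- step 1: set the audit trail in the spfile (takes effect after a bounce)
ALTER SYSTEM SET audit_trail=db,extended SCOPE=SPFILE;

-- step 3 a.): audit one particular command fired by abc
AUDIT ALTER SYSTEM BY abc BY ACCESS;

-- step 3 b.): audit ALL commands fired by abc
AUDIT ALL BY abc BY ACCESS;
```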

That's it. Now you can monitor any command fired by the user. Details can be checked in the DBA_AUDIT_STATEMENT view, which gives you the username, operating system username, host name, terminal, timestamp and SQL text, along with other information.
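A query along these lines pulls those details (the column names are from DBA_AUDIT_STATEMENT; the audited user name is illustrative):

```sql
SELECT username, os_username, userhost, terminal,
       timestamp, sql_text
  FROM dba_audit_statement
 WHERE username = 'ABC'
 ORDER BY timestamp;
```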

So folks, try it and let me know how many users you have caught red-handed doing harm to your database.

Friday, September 21, 2007

Virtual Indexes

Recently I visited one of our clients for performance tuning of a few processes. Tuning SQL queries requires testing different index alternatives, which is difficult in a production environment where lots of people are connected to the database and the tables are large.

Virtual indexes help in such situations. Since a virtual index does not use any segment, neither the time required to create the index nor disk space is a concern.

Command to create a virtual index; the NOSEGMENT clause is what makes it virtual (table and column names here are just an example):

CREATE INDEX ix_emp_virtual ON emp (ename) NOSEGMENT;

To make the virtual index available to the optimizer, the hidden parameter _use_nosegment_indexes needs to be set to true.

ALTER SESSION SET "_use_nosegment_indexes" = TRUE;

The USER_INDEXES view does not keep information about virtual indexes; only the USER_OBJECTS view keeps a record of them.
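A quick way to confirm this, assuming a virtual index named IX_EMP_VIRTUAL was created:

```sql
-- present in USER_OBJECTS ...
SELECT object_name, object_type
  FROM user_objects
 WHERE object_name = 'IX_EMP_VIRTUAL';

-- ... but absent from USER_INDEXES
SELECT index_name
  FROM user_indexes
 WHERE index_name = 'IX_EMP_VIRTUAL';
```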

Tuesday, February 06, 2007

Auditing SYS users

The SYS user is the most powerful and important user of any Oracle database. Let's take a look at how a DBA can keep it in check if it is being misused.

Oracle has provided auditing of the SYS account since version 7, but from 9iR2 onwards it has become more powerful in many ways.

How to Audit SYS account

Mainly there are only two parameters which need to be set to audit the SYS account: audit_file_dest and audit_sys_operations. The former sets the location for the audit files and the latter switches SYS auditing ON/OFF.

The audit files are created under the path provided in the parameter audit_file_dest, with the name ora_<pid>.aud. The file usually contains the following information:

Audit file /u01/app/oracle/admin/tcolap1/adump/ora_14725.aud
Oracle Database 10g Enterprise Edition Release - Production
With the Partitioning, OLAP and Data Mining options
ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1
System name: Linux
Node name: hostname.domainname
Release: 2.6.9-34.ELsmp
Version: #1 SMP Fri Feb 24 16:54:53 EST 2006
Machine: i686
Instance name: tcolap1
Redo thread mounted by this instance: 1
Oracle process number: 25
Unix process pid: 14725, image: oracle@hostname.domainname (TNS V1-V3)

Tue Feb 6 14:11:14 2007

Along with all the information related to the session, like 'connect' and all the valid commands, it provides server-related information as well, such as the OS, version, machine, etc. It does not store invalid commands.

Invalid connection as SYSDBA

The audit files also store invalid login information. To check invalid logins to the system, the following command can be used:

grep -i "status:" *.aud | grep -i -v "status: 0"

Parameters to set

alter system set audit_file_dest='/u04/app/oracle/oradata/saigon/audit' scope=spfile;
alter system set audit_sys_operations=true scope=spfile;

Bounce the database and you are done.

View the Audit Information

Audit information can be viewed in text or XML format, depending on the setting of the audit_trail parameter.
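For example, to have the audit trail written as XML files instead, one could set something along these lines (the exact value syntax varies slightly between 10g and 11g, so check the parameter reference for your release):

```sql
ALTER SYSTEM SET audit_trail='XML,EXTENDED' SCOPE=SPFILE;
```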

Monday, December 18, 2006

Trace Analyzer

Trace Analyzer is used to interpret raw trace files. It is similar to tkprof, but its output is more readable. It generates a report with details such as: time summary, call summary (parse, execute, fetch), identification of the top SQL, row source plans, explain plans, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

I like it for its ability to give the values of the bind variables used in any SQL statement.
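The raw trace itself is typically generated with event 10046; a sketch for the session to be traced (level 12 captures both bind values and wait events):

```sql
-- level 4 = binds, level 8 = waits, level 12 = binds + waits
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
-- ... run the statements to be analyzed ...
ALTER SESSION SET EVENTS '10046 trace name context off';
```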

For a given raw SQL trace generated by event 10046, it provides:
  1. Trace identification including actual start and completion time, host name, instance, size, RDBMS version, etc.
  2. SQL statements count, user and internal, total and unique.
  3. Time summary: elapsed, cpu, non-idle wait events, idle wait events, for user (recursive and non-recursive) and for internal.
  4. Call summary for user and internal, with elapsed, cpu, logical reads, physical reads and similar performance details.
  5. Summary of wait events, classified by non-idle and idle for user and for internal (if EVENT 10046 level 8 or 12 was used generating trace).
  6. Top 20 SQL statements in terms of importance for SQL tuning analysis.
  7. List of all unique SQL statements with one-line performance summary per statement.
  8. Gaps of no trace activity found on file.
  9. List of transactions found (commits and rollbacks).
  10. Oracle errors if any.
  11. I/O core waits including schema objects affected (tables, indexes, partitions), when traced with level 8 or 12.
  12. Top 5 hot blocks, indicating the schema objects (level 8 or 12).
  13. Latch wait summary, by name and specific parameters (level 8 or 12).
  14. Non-default initialization parameters.

For every SQL statement included in the trace, it includes:
  1. Cursor header with SQL statement text, hash value, length, line on trace, depth, user, etc.
  2. Oracle errors if any.
  3. Call summary (parse, execute and fetch totals).
  4. Non-idle and idle wait events (if traced with level 8 or 12).
  5. I/O and latch waits summary (if level 8 or 12).
  6. First execution and top 10 for particular SQL statement.
  7. List of bind variables values for first and top 10 executions if trace was generated using EVENT 10046 level 4 or 12.
  8. Cumulative row source plan for all executions of SQL statement.
  9. Detailed explain plan if Trace Analyzer is executed on same instance where trace was generated, and if SQL statement made the Top 20 list.
  10. Table, index and partition details including row count, CBO statistics and indexed columns if the SQL statement generated an explain plan.

To download the installation script and usage notes, visit http://www.metalink.oracle.com.