[DBA Daily Notes] ORA-00054 Resource busy when dropping table

Here are a few things you can do:

  • Kill the session from the GUI

Open Toad and log on as a system administrator. You could also do this in SQL Developer.

 

Now you have an overview of the sessions that might be involved in locking the tables you want to drop. Simply identify those sessions and kill the ones that are holding the table locks.

 

  • Check v$session and kill the session from the command line.

However, sometimes even after we kill the existing sessions, the error still does not go away. I ran my drop table script in Toad, which gives the details of which table is being locked and the type of the lock, and then went to the Locks tab to identify the SID. You would see a window similar to the one below.

 

For example, on the above screen, my PEGASYS_OBLIGATI table is locked with Lock Type DML. This happened because I was doing a data refresh that was interrupted in the middle. But SID 24 did not show up in any of the sessions in the session window, so I could not kill the session to get rid of the lock. Instead, I ran the following SQL statement to find the serial number needed to kill the dead session holding the lock on my table:

 

> select * from v$session where blocking_session is not null;

 

After running the SQL statement, I found the serial number for the corresponding SID.
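
In case the screenshot above is unclear, here is a sketch of that lookup (all columns named here are real v$session columns; the inner query returns the SIDs of the blocking sessions, so the outer query lists the blockers with their serial numbers):

> select sid, serial#, username, status from v$session where sid in (select blocking_session from v$session where blocking_session is not null);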

 

> alter system kill session '42,789';

 

Now we are all set. The lock is released and the drop can proceed.

 

If you want to know more about v$session, check the link below:

 

http://docs.oracle.com/cd/B19306_01/server.102/b14237/dynviews_2088.htm

 


Export a table to a csv file and import the csv file into the database as a table

Today I will talk about exporting a table to a csv file. This method is especially useful for advanced data manipulation within the csv file, and I will also talk about how to import the csv data back into the database as a table: I will show a Java code snippet to explain importing the csv data, and the tradeoff of using SQL Developer instead. My environment is based on an Oracle database, so all the utilities I use here target Oracle.

Export a table to a csv file:

Method 1. Use Oracle SQL Developer. Please take a look at the video below:

Note: This is good for a small table without much data. However, if you are dealing with a large table, I would recommend you try method 2.

Method 2. Generate the csv file from the command line. Here is a sample sql file I used to generate the csv file.

set pause off
set echo off
set verify off
set heading off
set linesize 5000
set feedback off
set termout off
spool FY11_276.csv
set colsep ','
set pagesize 0
select 'BOOK_MONTH_ID','BOOK_MONTH_CODE','BOOK_MONTH_ORDER','QUARTER_NUMBER','BOOK_MONTH_NUMBER','BOOK_MONTH_NAME','QUARTER_NAME', 'DEL_COL' from dual;
select BOOK_MONTH_ID,BOOK_MONTH_CODE,BOOK_MONTH_ORDER,QUARTER_NUMBER,BOOK_MONTH_NUMBER,BOOK_MONTH_NAME,QUARTER_NAME, null from claire_sample.book_month;

spool off 

Note: You will notice that the second select statement has a last column called DEL_COL. This column absorbs the trailing spaces after the final column-separator comma; otherwise the csv file generated from the command line would have a huge last column filled with spaces. You might need to manually delete this column if you later use Oracle SQL Developer to import the csv file back into the database.
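
While the import half of this post is still to come, here is a minimal sketch of pulling the csv back in with an Oracle external table. It assumes a database directory (csv_dir, a hypothetical name) already points at the folder holding the spooled file; the column types are guesses, skip 1 drops the spooled header row, and the del_col column catches the trailing filler:

create table book_month_ext (
  book_month_id     number,
  book_month_code   varchar2(20),
  book_month_order  number,
  quarter_number    number,
  book_month_number number,
  book_month_name   varchar2(30),
  quarter_name      varchar2(30),
  del_col           varchar2(4000)
)
organization external (
  type oracle_loader
  default directory csv_dir
  access parameters (
    records delimited by newline
    skip 1
    fields terminated by ','
    missing field values are null
  )
  location ('FY11_276.csv')
);

-- materialize it as a regular table, leaving the filler column behind
create table book_month_copy as
  select book_month_id, book_month_code, book_month_order, quarter_number,
         book_month_number, book_month_name, quarter_name
  from book_month_ext;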

To be continued…

How to back up a table and import data from a dump file

There are two approaches to back up a table:

1: Back up a table under the existing schema by creating a new table with the tablename_bck naming convention.

   Simply run the following query under the current schema:

   create table <tablename>_bck as select * from <tablename>;

2: Export the table to a dump file.

    Open a command line tool and use the Oracle Data Pump utility expdp:

expdp sample/samplepwd@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log
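
Note that expdp can only write to a database directory object, so TEST_DIR must already exist. A minimal sketch of creating it (the OS path here is hypothetical):

create or replace directory test_dir as 'C:\test_dir';
grant read, write on directory test_dir to sample;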

3: Import the dump file into the database.

impdp sample/samplepwd@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log

More thoughts:

    I would also like to note the older utility tools exp and imp.

    The syntax is slightly different: exp and imp do not use a reusable database directory, so you must explicitly specify the full path where your dump file is saved.

exp sample/samplepwd@db10g tables=EMP,DEPT file="C:\EMP_DEPT.dmp" log="C:\expEMP_DEPT.log"
imp sample/samplepwd@db10g tables=EMP,DEPT file="C:\EMP_DEPT.dmp" log="C:\impEMP_DEPT.log"

  Error: ORA-39143: dump file "M:\SAMPLE.dmp" may be an original export dump file

    

     This error occurs whenever you try to use the Data Pump Import client (impdp) on a dump file that was created with the original Export client (exp). Such files must be imported with the original imp client instead.

Back up existing data and load a data dump into the database

Export the current data as a backup dump.

   Method 1: Use utility exp

  • Log in to the database server.
  • Start a Windows command prompt: click on Start > Run.
  • Type cmd in the dialog box and click OK.
  • Change directory to the root C:\

Type in the following command:

exp <user_toexport>/<user_toexport_password> file=<directory>\dumpFileName.dmp log=<directory>\logFileName.log owner=<user_toexport> buffer=10000

Press [Enter]

The backup dump file will be found in the directory you specified.

For example, the following command exports the sample schema's data from the SAMPLE database:

exp sample/sample file=C:\sample.dmp log=C:\sample.log owner=sample buffer=100000

   Method 2: Use Data Pump expdp

  • Log in to the database server.
  • Start a Windows command prompt: click on Start > Run.
  • Type cmd in the dialog box and click OK.
  • Type in the following command to connect to the SAMPLE database:

SQLPLUS system/<system password>

Press [Enter]


Execute the following commands to create a database directory. This directory must point to a valid directory on the same server as the database:

SQL> CREATE or REPLACE DIRECTORY <directory_name> AS '<directory\folder>\';

Directory created.

SQL> GRANT READ, WRITE on directory <directory_name> to <user_toexport>;

e.g.  CREATE or REPLACE DIRECTORY imp_dir as 'D:\db_dump';

GRANT READ, WRITE on directory imp_dir to bisbtm;

  • Create the physical folder on disk that the directory object points to.

  • Type in the following command:

expdp <user_toexport>/<user_toexport_password> directory=<directory_name> dumpfile=dumpFileName.dmp

e.g. expdp sample/sample directory=imp_dir dumpfile=samp.dmp

Press [Enter]

The backup dump file will be found in the directory you specified.

To be continued…

Improve Database performance by analyzing tables

A little background on why we need to analyze tables, indexes, and clusters:

         Sometimes tables change: for example, a regular ETL process may constantly modify table structures or contents. The statistics Oracle collected the last time the tables were analyzed can then be out of sync with the current data, and this information is used by the optimizer when planning queries. Queries planned against stale statistics can run slowly, and database performance can degrade because the changed statistics were never re-collected.

       What statistics does Oracle collect during analysis?

         When the Oracle ANALYZE TABLE command is run, Oracle collects statistics on the number of rows, the number of empty data blocks, the number of blocks below the high water mark, the average data block free space, the average row length, and the number of chained rows. ANALYZE TABLE can be used to collect statistics on a specific table.
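
For reference, these figures land in real columns of user_tables, so you can query them after an analyze; a sketch (the table name is hypothetical):

select num_rows, blocks, empty_blocks, avg_space, avg_row_len, chain_cnt
from user_tables
where table_name = 'BOOK_MONTH';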

          When using Oracle ANALYZE TABLE, all domain indexes marked LOADING or FAILED are skipped.

          Oracle also calculates the PCT_DIRECT_ACCESS statistic for index-organized tables when using Oracle ANALYZE TABLE.

         Note: If the table has function-based indexes, create them before analyzing the table with Oracle ANALYZE TABLE so that statistics are gathered for them.

        Two options for analyzing tables: Compute Statistics vs. Estimate Statistics

            Compute Statistics    Estimate Statistics
Method      Full table scan       Sampling
Accuracy    High                  Depends on the sample
Cost        High                  Low

        Compute Statistics analysis runs a full table scan against the entire Oracle table. Upon completion of the command, the data dictionary is updated with highly accurate statistics for the cost-based optimizer to use. Estimate Statistics analysis, by contrast, takes samples from the table, and the statistics derived from those samples are stored in the data dictionary.

       Note: You need to weigh the time and database resources against the accuracy of the statistics when deciding which method to use. For small to medium tables, use analyze table table_name compute statistics. For larger tables, take a sample (for example, analyze table table_name estimate statistics sample 20 percent).
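
As a concrete sketch of both forms (the table names are hypothetical):

analyze table book_month compute statistics;
analyze table big_fact_table estimate statistics sample 20 percent;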

         Utilize TOAD for table analysis

          Log in to Toad with your database connection. Select the table you want to analyze and right-click on Analyze Table.

  

    Once you get to the analyze window, select the tables you want to analyze and click on the green arrow to start analyzing. By default the method is estimate statistics, as shown in the picture below.

The success information is displayed after the analysis.

You can also change the mode to compute statistics by switching to the Options tab and modifying the Analyze Functions as shown below:

              Utilize SQLPLUS for table analysis

             Log in to SQLPLUS with your database connection as below:
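
In case the screenshot is missing, the session looks roughly like this (the table name is hypothetical; "Table analyzed." is the feedback SQLPLUS prints):

SQL> analyze table studentinfo compute statistics;

Table analyzed.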

            A better solution: Utilize DBMS_STATS to collect statistics

Here is a code snippet showing how to invoke the dbms_stats package in 10g. Just note that the Oracle ANALYZE TABLE commands are now considered old-fashioned; the dbms_stats package is used more and more because it provides higher-quality information about tables and indexes.

BEGIN
  -- site-specific ETL refresh step that runs before gathering statistics
  FMIS_SYNCH.SYNCH_REF_TABLES_WITH_FMIS;

  DBMS_STATS.GATHER_SCHEMA_STATS
  (
    ownname          => 'yourowner',
    options          => 'GATHER AUTO',
    estimate_percent => DBMS_STATS.auto_sample_size,
    method_opt       => 'for all columns size repeat',
    cascade          => true,
    degree           => 12
  );
END;
/
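
If only one table changed, a single-table variant is a lighter touch; a sketch (the owner and table name are hypothetical):

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS
  (
    ownname          => 'yourowner',
    tabname          => 'BOOK_MONTH',
    estimate_percent => DBMS_STATS.auto_sample_size,
    cascade          => true
  );
END;
/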

For more detailed information on the features Oracle 11g Database offers here, please take a look at the guru's blog below:

http://www.dba-oracle.com/art_builder_dbms_stats.htm

Dropping Tables (Options and Recovery)

This blog post is mainly about analyzing the drop table commands in Oracle SQL. You might have seen drop table commands like the ones below:

   1. drop table table_name;
   2. drop table table_name cascade constraints;
   3. drop table table_name purge;

and wondered what Oracle Database does behind the scenes for each of these commands?

Basically, if no keyword is specified, as in the first command, Oracle Database performs the following operations:

  • All rows from the table are dropped.

  • All table indexes and domain indexes are dropped, as well as any triggers defined on the table, regardless of who created them or whose schema contains them. If the table is partitioned, then any corresponding local index partitions are also dropped.

  • All the storage tables of nested tables and LOBs of the table are dropped.

  • When you drop a range-, hash-, or list-partitioned table, then the database drops all the table partitions. If you drop a composite-partitioned table, then all the partitions and subpartitions are also dropped.

The first question is: why am I getting the error ORA-02449: unique/primary keys in table referenced by foreign keys?

Answer: This Oracle error is raised when you attempt to drop a table whose unique or primary keys are referenced by foreign keys in another table; in other words, the table is the parent in a parent-child relationship established between two tables through a foreign key. Oracle does not allow you to drop a table referenced by the foreign keys of other tables unless you either specify the CASCADE CONSTRAINTS option in the DROP TABLE statement or remove the foreign keys first.

The solution and workaround when you want to drop a table referenced by child tables is to use the CASCADE CONSTRAINTS option in the DROP TABLE statement.

That is why we use: 2. drop table table_name cascade constraints;
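
Here is a tiny sketch that reproduces and then fixes the error (the table and column names are made up):

create table dept_t (deptno number primary key);
create table emp_t (empno number, deptno number references dept_t);

drop table dept_t;                      -- fails with ORA-02449
drop table dept_t cascade constraints;  -- succeeds; emp_t keeps its rows but loses the foreign key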

Now the question is: what if I accidentally drop a table and want to recover it?

Here comes the keyword PURGE, which differentiates a normal drop command from a purged one. See the table below for a comparison of the two statements.

                     DROP TABLE without PURGE    DROP TABLE with PURGE
Releases the space   No                          Yes
Recoverable          Yes                         No

Unless you specify the PURGE clause, the DROP TABLE statement does not result in space being released back to the tablespace for use by other objects, and the space continues to count toward the user’s space quota. Additionally, the first command only results in the to-be-dropped table and its dependent objects being placed into the recycle bin. Specify PURGE if you want to drop the table and release the space associated with it in a single step.

Note: You cannot roll back a DROP TABLE statement with the PURGE clause, nor can you recover a table after dropping it with the PURGE clause. So be cautious and intentional.

So how could we recover the tables in the recycle bin?

One feature of Oracle 10g is the recycle bin, which allows you to recover dropped tables. With the recycle bin, tables you drop do not actually get deleted. Instead, Oracle renames the table and its associated objects to a system-generated name that begins with BIN$.

Look at the following example to get a better understanding:

Let us first create a test table called studentinfo that contains one row, as below.

Now we drop this table and look in the recycle bin to find it.

You can see the object name has changed because of the drop.

You can still query it with this new object name just as you did before, because the table data is still there. And if you want to undo the drop action, you can use the FLASHBACK command like below:
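
A sketch of the whole round trip shown in the screenshots (the column list is made up):

create table studentinfo (id number, name varchar2(30));
insert into studentinfo values (1, 'Claire');
commit;

drop table studentinfo;

-- the dropped table now appears under a BIN$ name
select object_name, original_name, type from recyclebin;

-- undo the drop
flashback table studentinfo to before drop;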

The next time we drop the same table, Oracle will assign it a new object name. So you can see that after a table is dropped, it is just renamed: it is still part of your tablespace, and this space counts against your user tablespace quota. The space is not released until you purge the table out of the recycle bin.
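
Purging looks like this (using the hypothetical table above):

purge table studentinfo;   -- release the space held by this one dropped table
purge recyclebin;          -- empty your entire recycle bin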

So now comes my last question: will dropped tables stay in the recycle bin forever if no action is ever taken on them?

The answer is no: they stay only until the tablespace runs out of space or you reach your user quota on the tablespace. At that point, Oracle purges objects one at a time, starting with the ones dropped the longest time ago, until there is enough space for the current operation.

Oracle OPatch Failure & OPatch Rollback

OPatch Failure

I encountered an OPatch failure during my Oracle Application Server 10g upgrade. The reason was that a previous patch installation had failed partway through. The failure message is displayed below:

      Interim Patch is holding the lock from xxxxxxx

It looked like patch number xxxxxxx acquired the patch lock during the last unsuccessful installation but never released it, so when the new patch tried to acquire the same lock, it was put on hold. This lock information is generated during the patch run and stored in the patch_locked file under the directory:

       $ORACLE_HOME/.patch_storage

If an installation is interrupted and fails, the lock is not released and this file is not removed, so subsequent patch installations run into the lock issue caused by the leftover file.

    Delete this file and then reinstall the patch. You should be good to go!

OPatch Rollback

Sometimes after you apply patches, incompatibilities arise in the system, and the new upgrade needs to be rolled back to restore the original environment.

Here are two useful commands for checking the OPatch version and the list of patches that have been applied:

opatch version

opatch lsinventory

Suppose my OAS Home is under /oracle/app/10gR3mid

Suppose you put your opatch folder temporarily under /tmp, and under the tmp folder there is a patch folder: 10031961.

Run the rollback command in each subfolder of 10031961; suppose we first roll back 10011970 (the command marked in red in the original screenshot, presumably: opatch rollback -id 10011970).

Now let us look at the before and after results from running opatch lsinventory.

Before we apply the patch rollback, you can see the patch number exists in the red box.

After we apply the rollback, it is gone.