Monday, 7 December 2015

Creating controlfile fails with ORA-01503, ORA-01161


Problem Description

While creating a clone database I modified the controlfile creation script as below, because I wanted to change the database name to SIT1.

I had backed up the control file from my production database using the query below.
ALTER DATABASE BACKUP CONTROLFILE TO TRACE AS '/temp/debashis.trc';

So I copied the contents from my production controlfile trace, renamed the database to SIT1 and changed the locations of the database files as per the SIT environment, like below:

STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "SIT1" RESETLOGS FORCE LOGGING ARCHIVELOG
    MAXLOGFILES 32
    MAXLOGMEMBERS 3
    MAXDATAFILES 500
    MAXINSTANCES 8
    MAXLOGHISTORY 6135
LOGFILE
  GROUP 1 '/u02/oradata/SIT1/redo01.log'  SIZE 1024M BLOCKSIZE 512,
  GROUP 2 '/u02/oradata/SIT1/redo02.log'  SIZE 1024M BLOCKSIZE 512,
  GROUP 3 '/u02/oradata/SIT1/redo03.log'  SIZE 1024M BLOCKSIZE 512
-- STANDBY LOGFILE
DATAFILE
  '/u06/oradata/SIT1/system01.dbf',
  '/u06/oradata/SIT1/sysaux01.dbf',
  '/u01/oradata/SIT1/undo_ts01.dbf',
  '/u02/oradata/SIT1/users01.dbf',
  '/u05/oradata/SIT1/orabpel.dbf',
  '/u06/oradata/SIT1/retek_index01.dbf',
  '/u01/oradata/SIT1/retek_data01.dbf',
  '/u05/oradata/SIT1/lob_data01.dbf',
  '/u02/oradata/SIT1/retek_part01_d01.dbf',
'/u06/oradata/SIT1/retek_index07.dbf'
CHARACTER SET AL32UTF8
;


And I got the below error:
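The original error output was captured as a screenshot; it looked roughly like the following (PROD is used here only as a placeholder for the production database name):

CREATE CONTROLFILE REUSE DATABASE "SIT1" RESETLOGS FORCE LOGGING ARCHIVELOG
*
ERROR at line 1:
ORA-01503: CREATE CONTROLFILE failed
ORA-01161: database name PROD in file header does not match given name of SIT1
ORA-01110: data file 1: '/u06/oradata/SIT1/system01.dbf'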


After some googling, I found the following.

Cause of The Problem
The controlfile creation script was changed; more specifically, the database name was changed, while the datafile headers still carry the old database name, which is why ORA-01161 is raised.

Solution of the Problem
To create the controlfile for the new database name you must use the keyword 'SET'
in the CREATE CONTROLFILE command.
The SET DATABASE clause is what allows the
name of the database to be changed. So the script will be like below:

 STARTUP NOMOUNT
CREATE CONTROLFILE SET DATABASE "SIT1" RESETLOGS FORCE LOGGING ARCHIVELOG
    MAXLOGFILES 32
    MAXLOGMEMBERS 3
    MAXDATAFILES 500
    MAXINSTANCES 8
    MAXLOGHISTORY 6135
LOGFILE
  GROUP 1 '/u02/oradata/SIT1/redo01.log'  SIZE 1024M BLOCKSIZE 512,
  GROUP 2 '/u02/oradata/SIT1/redo02.log'  SIZE 1024M BLOCKSIZE 512,
  GROUP 3 '/u02/oradata/SIT1/redo03.log'  SIZE 1024M BLOCKSIZE 512
-- STANDBY LOGFILE
DATAFILE
  '/u06/oradata/SIT1/system01.dbf',
  '/u06/oradata/SIT1/sysaux01.dbf',
  '/u01/oradata/SIT1/undo_ts01.dbf',
  '/u02/oradata/SIT1/users01.dbf',
  '/u05/oradata/SIT1/orabpel.dbf',
  '/u06/oradata/SIT1/retek_index01.dbf',
  '/u01/oradata/SIT1/retek_data01.dbf',
  '/u05/oradata/SIT1/lob_data01.dbf',
  '/u02/oradata/SIT1/retek_part01_d01.dbf'
CHARACTER SET AL32UTF8
;

 SQL> @controlfile.ctl
ORACLE instance started.

Total System Global Area 8551575552 bytes
Fixed Size 2236744 bytes
Variable Size 6006247096 bytes
Database Buffers 2533359616 bytes
Redo Buffers 9732096 bytes

Control file created.
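At this point the clone still has to be opened. A typical follow-up, assuming no media recovery is needed and that the temporary tablespace is named TEMP (the tempfile path and size below are only placeholders for the SIT1 layout), would be:

ALTER DATABASE OPEN RESETLOGS;
-- tempfiles are not recorded in CREATE CONTROLFILE, so add one back
ALTER TABLESPACE TEMP ADD TEMPFILE '/u02/oradata/SIT1/temp01.dbf' SIZE 2048M REUSE;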

Tuesday, 8 September 2015

Monitor mount point in File System

How to monitor a mount point in the file system.

Sometimes a requirement comes up to monitor a specific mount point that is critical for the project's business flows to run smoothly. Whenever the usage reaches a threshold level, the monitoring should alert us so that we can take necessary action, like removing files, zipping files, or moving files to another mount point to free up space.

Please find below a script which does exactly that.

Please note: here I am monitoring the mount point /u01, which has 217 GB; when its usage reaches 210 GB (about 97%) the script sends an alert mail to the addresses in the EMAIL list.

As an example, I have created the below script in the file Filemonitor.ksh:

#!/bin/ksh
# Percentage-used threshold above which an alert mail is sent
THRESHOLD="97"
# Recipients of the alert mail
EMAIL="dekhatai@in.xyz.com"
# Mount points we do not need to monitor; remove an entry from this list
# if you want that mount point to be monitored as well
EXCLUDE="/usr|/var|/tmp|/home|/proc|/opt|/admin|/tools|/var/adm/ras/livedump|/u11|/u12|/u13|/u14|/u21|/u22|/u23|/temp|/mksysbimg|/staging|/u02|/u04|/arch"

# On AIX, df -k prints the mount point in column 7 and %Used in column 4;
# NR>1 skips the header line so it is never compared against the threshold.
df -k | awk 'NR>1 {print $7"\t"$4}' | egrep -v "(${EXCLUDE})" | while read LINE; do
  PERC=`echo $LINE | awk '{print $2}' | cut -d"%" -f1`
  if [ $PERC -ge $THRESHOLD ]; then
    echo "${PERC}% - threshold reached for ${LINE} on `hostname`.thesource.ca. Please release space to avoid any batch related issues." | mail -s "Space Alert: ${LINE} - reached threshold level on `hostname` server" $EMAIL
  fi
done

Schedule the shell script in cron so that it runs every minute (change the schedule as per your requirement):

* * * * * /staging/debashis/Filemonitor.ksh  
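Before relying on cron, you can make the script executable and run it once by hand to check that the alert mail is delivered (same path as in the cron entry):

chmod +x /staging/debashis/Filemonitor.ksh
/staging/debashis/Filemonitor.ksh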


Thanks..
Hope it will help you..Enjoy..

Tuesday, 21 July 2015

Audit DDL Statements from alert.log file from Oracle Database

Audit of DDL

Every time a user attempts an action in the database where auditing is enabled, the Oracle kernel checks whether an audit record should be created or updated (in the case of a session record) and generates the record. With DDL logging enabled, the DDL changes are also written to the database's alert.log file.

Example-----

In 11g you can record your DDL operations by setting the parameter enable_ddl_logging to TRUE:

ALTER SYSTEM SET enable_ddl_logging=TRUE SCOPE=BOTH;

 Example:
I have tried it with the CREATE and DROP commands; you can also do it with ALTER, RENAME, COMMENT and TRUNCATE, and you will see all of them in the alert*.log file.

1. I connected to the debashis schema.
2. Enabled DDL logging using the above command.
3. Created a table debashiskhatai.
4. Dropped the table.
5. Disabled DDL logging. (The whole session is sketched below.)
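A minimal sketch of that session (the password is a placeholder, and the ALTER SYSTEM statements would normally be run by a user with the ALTER SYSTEM privilege):

SQL> CONN debashis/password
SQL> ALTER SYSTEM SET enable_ddl_logging=TRUE SCOPE=BOTH;
SQL> CREATE TABLE debashiskhatai (regid NUMBER);
SQL> DROP TABLE debashiskhatai;
SQL> ALTER SYSTEM SET enable_ddl_logging=FALSE SCOPE=BOTH;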

 

Now if you go and check the alert log file you will find those 4 DDL commands there:

In alert.log file:
__________________________________________________________________________

ALTER SYSTEM SET enable_ddl_logging=TRUE SCOPE=BOTH;

 create table debashiskhatai(regid number);

drop table debashiskhatai;

ALTER SYSTEM SET enable_ddl_logging=false SCOPE=both;
__________________________________________________________________________________

Path of the database's alert log file: use the command below; the directory shown for background_dump_dest is where the alert*.log file lives.

SQL>show parameter background
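The output will look something like this; the VALUE shown here is just an illustrative ADR trace path, yours will differ:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
background_dump_dest                 string      /u01/app/oracle/diag/rdbms/orcl/orcl1/trace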

 
Now when I opened the alert_orcl1.log file, I found all 4 of those DDL statements in my alert log file.

 

Done !! yipeee...

Hope you enjoy it ..... :)