
Oracle Export and Import Backup and Restore-12


Export and Import Backup and Restore

Logical backup: normal export/import and Data Pump

Data Pump has many features, such as PARALLEL, jobs, NETWORK_LINK, REMAP_DATAFILE and REMAP_TABLESPACE, EXCLUDE, INCLUDE, VERSION, and ESTIMATE_ONLY.

It runs mostly on the DB server itself. It accesses data through direct path and external tables, using a master process and worker processes, whereas normal export/import uses the conventional path.

Data Pump advantages: faster, filtering options, remaps, compression, parallel execution, restartable jobs, self-tuning, etc.
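For example, a minimal sketch of a Data Pump export that uses a few of these features (the directory object exp_dir, the dump/log names, and the schema muthu are placeholders):

$ expdp system/manager directory=exp_dir dumpfile=muthu_%U.dump logfile=muthu.log schemas=muthu parallel=2 exclude=statistics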

Data Pump Export/Import | Normal Export/Import
Filtering of data and metadata, remaps, etc. | Not much filtering
PARTITION_OPTIONS can be changed during import | No
Two processes at the DB server level (master and worker) | Runs at both the DB and client level
Data Pump jobs run on the server and can be started, detached from, and resumed later | No
Data Pump parameters can be altered at run time | No
Data Pump will recreate the user | The DBA needs to create the user
Does not use the BUFFER parameter | Uses the BUFFER parameter
expdp/impdp are self-tuning; parameters like BUFFER and RECORDLENGTH are not required | No
Data Pump writes metadata to the dump as XML documents rather than as DDL | DDL format
expdp/impdp use parallel execution rather than a single stream | Single thread
expdp/impdp can create a group of dump files rather than a single dump | Single file
Can move between lower and higher versions and vice versa | No
Compresses data, if enabled | Does not compress
A schema import recreates the user and grants the security privileges as well | No
Sequential media such as tapes and pipes are not supported | Supports tapes and pipes, and can even compress the data
Export is about 5 times faster than normal export | Slower
Import is about 20 times faster than normal import | Slower
Even if the client is cancelled/closed, the job keeps running in the background |
Can import a table using either APPEND or TRUNCATE | Append only
Works on the same OS and across platforms |
Can be used for upgrade and downgrade from 10.1 onwards | Can be used for upgrades prior to 10.1
Cannot move tables with XML columns from 11.1 to 10.2 | Can move tables with XML columns from 11.1 to 10.2
Can transfer data over a database link without a dump file | No
Uses DB directory object paths | Uses client-specified paths

 

These are utilities. The following are the keywords:

Export – EXP

Import – IMP

Backup Types for both:

Full DB
Tablespace
Users / Schema
Table

 

Methods of Backup & Recovery:

Method | Comments
CLI (Command Line Interface) | Parameters passed on the command line
Interactive mode | The utility prompts and we enter the values
Parfile | Save the parameters as a file and run it
OEM | GUI

 

CLI

$ exp user/pass parameter1 parameter2 parameter3 ...

Who can take the export/import: users with the EXP_FULL_DATABASE and IMP_FULL_DATABASE privileges.
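For example, a hedged sketch of granting these roles (the user name muthu is a placeholder):

SQL> grant exp_full_database, imp_full_database to muthu;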

 

Export parameters:

 

No | Export Parameter | Comments
1 | Full=y/n | All objects or only particular objects
2 | Owner=<username> | Particular user
3 | Tables=<user.tablename> | Particular table
4 | Tablespaces=<tablespacename> | Particular tablespace
5 | Transport_tablespace=y/n | Used for different DB names, migration
6 | Grants=y/n | Privilege backup
7 | Indexes=y/n | Index backup
8 | Constraints=y/n | Constraint backup
9 | Rows=y/n | Structure + data backup
10 | Consistent=y/n | Export is consistent to a single SCN
11 | Compress=y/n | Consolidates extents into one on import
12 | Direct=y/n | Conventional or direct path
13 | Statistics=<none> | If enabled, the export will be slower
14 | Feedback=<number> | e.g. 100 gives feedback for every 100 rows
15 | File=<dumpname.dump> | Binary dump file
16 | Log=<logname.log> | Export information is logged here

 

Only one of the first four parameters (full, owner, tables, tablespaces) can be used at a time.
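A quick sketch of the four mutually exclusive modes (user, table, and tablespace names are placeholders):

$ exp system/manager full=y file=/opt/full.dump log=/opt/full.log
$ exp system/manager owner=muthu file=/opt/muthu.dump log=/opt/muthu.log
$ exp system/manager tables=muthu.emp file=/opt/emp.dump log=/opt/emp.log
$ exp system/manager tablespaces=tools file=/opt/tools.dump log=/opt/tools.log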

Conventional path: data is read from the data files into the DB buffer cache and then exported. This is slower.

Direct path: data is read from the data files through the OS cache and exported directly, bypassing the DB buffer cache. This is faster.
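For example, a direct path export of a single table might look like this (a sketch; the table name is a placeholder):

$ exp system/manager tables=muthu.emp direct=y file=/opt/emp_direct.dump log=/opt/emp_direct.log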

Import parameters:

 

No | Import Parameter | Comments
1 | Full=y/n | All objects or only particular objects
2 | Fromuser=<export user> | Export user name
3 | Touser=<import user> | Import user name
4 | Tables=<tablename> | Particular table
5 | Tablespaces=<tablespacename> | Particular tablespace
6 | Transport_tablespace=y/n | Used for different DB names, migration
7 | Grants=y/n | Privilege backup
8 | Indexes=y/n | Index backup
9 | Constraints=y/n | Constraint backup
10 | Rows=y/n | Structure + data backup
11 | Statistics=<none> | If enabled, the import will be slower
12 | Feedback=<number> | e.g. 100 gives feedback for every 100 rows
13 | File=<dumpname.dump> | Binary dump file
14 | Log=<logname.log> | Import information is logged here
15 | Show=y/n | Generates a log of the dump contents without importing
16 | Ignore=y/n | Ignores creation errors and appends the data (duplicates possible)
17 | Commit=y/n | Y commits after each array insert, reducing rollback/undo usage

 

Note: Import loads a lot of data and the archive log destination may fill up; we need to monitor this and may sometimes need to change the log mode.

We need to verify the log file for import/export successes and failures.
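One quick way to scan the log for errors on Linux/Unix (a hedged example; the log path is a placeholder):

$ grep -E "ORA-|EXP-|IMP-" /opt/full_imp.log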

 

Export / Import Workouts and Scenarios

CLI method – Full DB export

$ exp system/manager file=/opt/full.dump full=y log=/opt/full.log

$ imp system/manager file=/opt/full.dump full=y log=/opt/full_imp.log

Import only particular tables and users, etc.

$ imp system/manager file=/opt/full.dump fromuser=muthu touser=rajesh log=/opt/full_imp.log ignore=y

$ imp system/manager file=/opt/full.dump fromuser=muthu tables=emp touser=rajesh log=/opt/full_imp.log

(Workout): Export only particular tables and users, and then import them (one possible solution is sketched below).

$ Exp

$ Imp
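One possible solution, sketched with placeholder names (schema muthu, table emp, target user rajesh):

$ exp system/manager owner=muthu file=/opt/muthu.dump log=/opt/muthu.log
$ imp system/manager file=/opt/muthu.dump fromuser=muthu touser=rajesh log=/opt/muthu_imp.log
$ exp system/manager tables=muthu.emp file=/opt/emp.dump log=/opt/emp.log
$ imp system/manager file=/opt/emp.dump fromuser=muthu touser=rajesh tables=emp log=/opt/emp_imp.log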

 

Export and Import tablespace

Check for tablespace transport violations. SYS-owned tables should not exist in the tablespace; if any do, we need to move them.

Violation Check

SQL> exec dbms_tts.transport_set_check('tools', true);   -- checks for SYS-owned tables in the tablespace

SQL> select * from transport_set_violations;

SQL> alter tablespace tools read only;

$ exp "sys/sys as sysdba" file=/opt/tools_tbs.dump tablespaces=tools transport_tablespace=y log=/opt/tools_tbs.log

Parfile:

userid=sys/sys
file=/opt/backup/export/muthu_new_tbs.dump
tablespaces=muthu_new
transport_tablespace=y
grants=y
log=/opt/backup/export/muthu_new_tbs.log

 

Only SYS can export a transportable tablespace.

Note: We should back up the data files as well, since the tablespace data lives in the data files.

$ mkdir -p /backup/export/Muthu

$ cd  /backup/export/Muthu

$ cp tools01.dbf /backup/export/Muthu

SQL> alter tablespace tools read write;

 

Import: by default, the tablespace is imported as read only, even if it was read write when it was backed up.

$ imp "sys/sys as sysdba" file=/opt/tools_tbs.dump tablespaces=tools transport_tablespace=y log=/opt/imp_tools_tbs.log datafiles=/backup/export/Muthu/tools01.dbf

SQL> alter tablespace tools read write;

 

Parfile:

userid=sys/sys
file=/opt/backup/export/muthu_new_tbs.dump
tablespaces=muthu_new
transport_tablespace=y
grants=y
log=/opt/backup/export/muthu_new_tbs.log
datafiles=/opt/backup/export/muthu_new_01.dbf

$ imp parfile=/opt/backup/common/muthu_imp.par

 

Interactive Mode

We manually enter the parameters one by one when prompted.

$ exp    (the utility then prompts for the username, password, file name, etc.)

 

Parfile Method

Open a text editor, write all the parameters, and save the file with a .par extension.

====================

userid=system/manager
file=/opt/full.dump
full=y
log=/opt/full.log

====================

Save it as /opt/full.par

$ exp parfile=/opt/full.par

$ imp parfile=/opt/full.par

OEM –  GUI Method

To get help on the exp/imp parameters, use: exp help=y

 

Data Pump Export/Import – New feature from 10g

It is a utility.

Keywords: expdp / impdp

It backs up everything just like a normal export/import, but it is faster.

Syntax:  $ expdp user/pass parameter1 parameter2 ...

Who can take the export/import: users with the DATAPUMP_EXP_FULL_DATABASE and DATAPUMP_IMP_FULL_DATABASE privileges.
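For example (a sketch; the user name is a placeholder and the role names are the ones given above):

SQL> grant datapump_exp_full_database, datapump_imp_full_database to muthu;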

Export Parameters

No | Data Pump Export Parameter | Comments
1 | Attach=<jobname> | Attach to a running job; it can be paused and resumed
2 | Content=<all/metadata_only/data_only> | Structure and/or data
3 | Compression=<all/metadata_only/data_only/none> | In 10g only metadata can be compressed; in 11g both metadata and data
4 | Directory=<dirname> | Database-level directory object
5 | Dumpfile=<name.dump> | Binary dump file name
6 | Estimate=blocks/statistics | Estimate the dump size using blocks or statistics
7 | Estimate_only=y/n | Calculate the dump file size only, without exporting
8 | Exclude=<table/index/constraint/profile etc.> | e.g. exclude=index:"like 'IND_ENO%'"
9 | Flashback_time=to_timestamp('2015-09-20 10:00:00') | Takes the data as of that time
10 | Flashback_scn=<SCN> | Either time or SCN can be used
11 | Full=y/n |
12 | Filesize=<bytes> | The dump is split into files of the given size, e.g. a 6 GB export into two 3 GB files
13 | Include=<table/index/constraint/profile etc.> | e.g. include=table:"in ('EMP')"
14 | Job_name=<jobname> | Defaults to SYS_<operation>_<mode>_NN
15 | Logfile=<logname.log> | Defaults to export.log / import.log
16 | Nologfile=y/n | No log file will be generated
17 | Network_link=<dblinkname> | Objects can be exported between DB servers over a database link
18 | Parallel=<number of worker processes> |
19 | Parfile=<filename.par> | Save the parameters in a file and run it
20 | Query=muthu.emp:"where salary>2000" | A condition can be given
21 | Schemas=<username> |
22 | Status=<seconds> | Displays the job status every given number of seconds
23 | Tables=<user.tablename> | Particular table
24 | Tablespaces=<tablespacename> |
25 | Transport_full_check=y/n | Checks whether a table and its indexes are in different tablespaces
26 | Transport_tablespaces=<tablespace name> | Backs up only metadata; the job cannot be resumed if stopped
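A couple of hedged examples combining some of these parameters (all names and paths are placeholders):

$ expdp system/manager directory=exp_dir full=y estimate_only=y estimate=blocks logfile=estimate.log
$ expdp system/manager directory=exp_dir dumpfile=full_%U.dump logfile=full.log full=y parallel=2 filesize=3GB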

 

For a DB link, we need a net service name and a TNS entry in order to do the export/import.

SQL> create database link test_ln connect to muthu identified by pass using 'testDB';   -- testDB is the net service name

SQL> select * from emp@test_ln;

If we use estimate_only, content=metadata_only, or transport_tablespaces, the query parameter cannot be used.
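A hedged sketch of importing directly over the database link created above, with no dump file involved (directory and names are placeholders):

$ impdp system/manager network_link=test_ln schemas=muthu directory=exp_dir logfile=muthu_nl_imp.log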

 

Import Parameters

No | Data Pump Import Parameter | Comments
1 | Attach=<jobname> | Attach to a running job; it can be paused and resumed
2 | Content=<all/metadata_only/data_only> | Structure and/or data
3 | Directory=<dirname> | Database-level directory object
4 | Dumpfile=<name.dump> | Binary dump file name
5 | Estimate=blocks/statistics | Estimate the size using blocks or statistics
6 | Exclude=<table/index/constraint/profile etc.> | e.g. exclude=index:"like 'IND_ENO%'"
7 | Flashback_time=to_timestamp('2015-09-20 10:00:00') | Takes the data as of that time
8 | Flashback_scn=<SCN> | Either time or SCN can be used
9 | Full=y/n |
10 | Include=<table/index/constraint/profile etc.> | e.g. include=table:"in ('EMP')"
11 | Job_name=<jobname> | Defaults to SYS_<operation>_<mode>_NN
12 | Logfile=<logname.log> | Defaults to export.log / import.log
13 | Parallel=<number of worker processes> |
14 | Parfile=<filename.par> | Save the parameters in a file and run it
15 | Query=muthu.emp:"where salary>2000" | A condition can be given
16 | Schemas=<username> |
17 | Status=<seconds> | Displays the job status every given number of seconds
18 | Tables=<user.tablename> | Particular table
19 | Tablespaces=<tablespacename> |
20 | Transport_full_check=y/n | Checks whether a table and its indexes are in different tablespaces
21 | Transport_tablespaces=<tablespace name> | Backs up only metadata; the job cannot be resumed if stopped
22 | Remap_schema=muthu:rajesh | Import into a different schema
23 | Remap_table=emp:empnew | 11g feature; rename the table in the destination
24 | Remap_datafile='/opt/tools.dbf':'/data/tools.dbf' | Rename data files in the destination
25 | Skip_unusable_indexes=y/n | Skip indexes that are in an unusable state
26 | Table_exists_action=skip/append/truncate/replace | With skip, the table is skipped if it already exists
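A hedged example of an import using a few of the remap options (all names are placeholders):

$ impdp system/manager directory=exp_dir dumpfile=full.dump logfile=remap_imp.log remap_schema=muthu:rajesh remap_table=muthu.emp:empnew table_exists_action=truncate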

 

How to detach from and re-attach to a Data Pump job

Detach: press Ctrl+C, then at the prompt: Export> exit_client

Re-attach:

$ expdp system/manager attach=sys_export_full_01

Export> continue_client

Find the job name

SQL> select owner_name, job_name, operation, job_mode, state from dba_datapump_jobs;

exit | exit_client
The job status can be seen only in the log file | We can re-attach and check the job status at any point

 

How to stop and resume a Data Pump job

Export> stop_job

Export> exit

$ expdp system/manager attach=sys_export_full_01

Export> start_job

Note: We can change parameters while the job is running (press Ctrl+C to enter interactive mode).

Example: Export> status=300

How to terminate a Data Pump job

Export> kill_job

Export and import data with Data Pump

 

$ mkdir -p /backup/export/Muthu

SQL> conn system/manager

SQL> create directory exp_dir as '/backup/export/Muthu';

SQL> conn system/sys as sysdba

SQL> grant read, write on directory exp_dir to system;

SQL> select * from dba_directories;

SQL> select * from all_directories;

SQL> drop directory exp_dir;

By default, if no directory is specified, the dump is saved in the default DATA_PUMP_DIR location (the dpdump directory).
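To check where the default location points (a small example; DATA_PUMP_DIR is the standard default directory object):

SQL> select directory_name, directory_path from dba_directories where directory_name = 'DATA_PUMP_DIR';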

 

Full DB

$ expdp system/manager dumpfile=full.dump full=y logfile=full.log directory=exp_dir

$ impdp system/manager dumpfile=full.dump full=y logfile=imp_full.log directory=exp_dir

 

Schema

Same schema name

$ impdp system/manager dumpfile=full.dump schemas=muthu logfile=Schema.log directory=exp_dir

Different schema name

$ impdp system/manager dumpfile=full.dump remap_schema=muthu:rajesh logfile=schema1.log directory=exp_dir

 

Table

$ impdp system/manager dumpfile=full.dump tables=muthu.emp logfile=tbl.log directory=exp_dir

How to view the export jobs and what has been backed up

SQL> select * from dba_datapump_jobs;

Best practice: use the parfile method, since it stores the scripts and they can be verified later.

$ expdp parfile=/opt/Muthu.par

Open a text editor, write the parameters, and save the file as a script.

Parameter help: $ expdp help=y and $ impdp help=y
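A sample expdp parfile along the same lines (a sketch; paths and names are placeholders):

====================
userid=system/manager
directory=exp_dir
dumpfile=muthu.dump
logfile=muthu.log
schemas=muthu
====================

$ expdp parfile=/opt/Muthu.par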

(Workout): Export only particular tables and users with Data Pump, and then import them (one possible solution is sketched below).
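One possible solution with Data Pump, sketched with placeholder names (schema muthu, table emp, target user rajesh):

$ expdp system/manager directory=exp_dir dumpfile=muthu_emp.dump logfile=muthu_emp.log tables=muthu.emp
$ impdp system/manager directory=exp_dir dumpfile=muthu_emp.dump logfile=muthu_emp_imp.log remap_schema=muthu:rajesh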

