Wednesday, August 8, 2018

FileSystem Monitoring shell script

#!/bin/bash -xv
#
export PATH=/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/oracle/bin:/mnt/mnt02/app/oracle/product/12.2.0.1/db_1/bin:/mnt/mnt02/app/oracle/product/12.2.0.1/db_1/OPatch

# To debug uncomment the below line
#set -x
#
# Set Environment Variables
#
TODAY="at $(date '+%H:%M on %d-%b-%y')"
OutputFilename=$(date +"%b_%d_%Y".html)
LastReport=/tmp/LastReport.txt
NowReport=/tmp/NowReport.txt
CurDate=`date +%D\ %T\ %Z`
#
# Set Alert Type according to Percentage
#
CriticalPercentage=90
WarningPercentage=85
NormalPercentage=70
#
# Get IP Address set from hosts file
#
IPADDRESS=`head -n 1 /etc/hosts | awk ' { print $1 }'`
#
# Remove Output File, if it already exists
#
if [ -f /tmp/${OutputFilename} ]; then
 rm -f /tmp/${OutputFilename}
fi
if [ -f ${NowReport} ]; then
 mv ${NowReport} ${LastReport}
 cp /dev/null ${NowReport}
fi
#
# Find out Difference Previous and Current Report for each partition
#
df -Ph | grep -vE "^Filesystem|tmpfs|cdrom" | awk 'NR>0 && NF==6' | awk '{print $3}' > ${NowReport}
if [ -f ${LastReport} ]; then
 DiffValue=(`awk '{ getline getdifference < "/tmp/LastReport.txt"; $1 -= getdifference; print }' /tmp/NowReport.txt`)
fi
#
# Defining HTML Table Format & Fields
#
(
 echo '<HTML><HEAD><TITLE>Disk Usage Statistics</TITLE></HEAD>'
 echo '<BODY>'
 echo '<H3><font color="#5555E7" face="verdana" size="3">Disk Usage Report for server - '$(uname -n) for ${IPADDRESS}'</font></H3>'
 echo '<P style="color:#5555E7" face="verdana" size="3">Report Generated '${TODAY}'</P>'
 echo '<TABLE BORDER=1 CELLSPACING=0 CELLPADDING=2 bordercolor=#DEDEE2>'
echo '<style>
 body {
     font-family: verdana;
         font: 16px verdana, sans-serif;
}
tbody th { text-align:right; background: #F8F8FA; color:red}
</style>'
# echo '<TR BGCOLOR=\"#BBFFFF\"> <TH><font color="#5555E7"><font face="verdana"><b>Filesystem</b></font></TH> <TH>Total</TH> <TH>Disk Used</TH>
#<TH>Available</TH> <TH>Percentage Info</TH> <TH>Mounted On</TH> <TH>Critical Alert</TH>
#<TH>Report Date</TH> <TH>12 Hrs Difference </TH><//TR>'
echo '<TR BGCOLOR=\"#4863A0\"> <TH><font color="#5555E7" face="verdana" size="2"><b>Filesystem</b></font></TH> <TH><font color="#5555E7" face="verdana" size="2"><b>Total</b></font></TH> <TH><font color="#5555E7" face="verdana" size="2"><b>Disk Used</b></font></TH><TH><font color="#5555E7" size="2" face="verdana"><b>Available</b></font></TH> <TH><font color="#5555E7" size="2" face="verdana"><b>Percentage Info</b></font></TH> <TH><font color="#5555E7" size="2" face="verdana"><b>Mounted On</b></font></TH> <TH><font color="#5555E7" size="2" face="verdana"><b>Critical Alert</b></font></TH></TR>'

#
# Extract Disk Usage Information
# Suppress Listing of "Filesystem, tmpfs and cdrom" Information
#
ArryCount=0
df -Ph | grep -vE "^Filesystem|tmpfs|cdrom" | awk 'NR>0 && NF==6'|sort|while read FileSystem Size DiskUsed DiskFree DiskPercentUsed MountPoint
 do
 PERCENT=${DiskPercentUsed%%%}
#
# Calculate the Difference between previous run and current run
#
 TDiffValue=(`awk '{ getline getdifference < "/tmp/LastReport.txt"; $1 -= getdifference; print }' /tmp/NowReport.txt`)
#
# Verify if disk usage is greater than or equal to the set threshold limit - 90%
#
 if [[ ${PERCENT} -ge ${CriticalPercentage} ]];
 then
 COLOR=red
 CRITICALALERT="Yes, Notify"
 elif [ ${PERCENT} -ge ${WarningPercentage} ] && [ ${PERCENT} -lt ${CriticalPercentage} ];
 then
 COLOR=orange
 CRITICALALERT=No
 else
 COLOR=green
 CRITICALALERT=NA
 fi
 echo '<TR><TD>'$FileSystem'</TD><TD ALIGN=RIGHT>'$Size'</TD>'
 echo '<TD ALIGN=RIGHT>'$DiskUsed'</TD><TD ALIGN=RIGHT>'$DiskFree'</TD>'
 echo '<TD><TABLE BORDER=0 CELLSPACING=3 CELLPADDING=0>'
 echo '<TR><TD WIDTH='$((2 * $PERCENT))' BGCOLOR="'$COLOR'"></TD>'
 echo '<TD WIDTH='$((2 * (100 - $PERCENT)))' BGCOLOR="gray"></TD>'
 echo '<TD><FONT FONT-WEIGHT="bold" SIZE=-1
COLOR="'$COLOR'">'$DiskPercentUsed'</FONT></TD>'
 echo '<TR></TABLE><TD>'$MountPoint'</TD>'
 echo '<TD><FONT font-weight="bold">'$CRITICALALERT'</FONT></TD>'
# echo '<TD><FONT font-weight="bold">'`date`'</TD></FONT>'
 echo '<TD><FONT font-weight="bold">'${TDiffValue[ArryCount]}' (in bytes)</FONT></TD></TR>'
 echo $DiskUsed >> `hostname`.usage.txt
 ArryCount=$((ArryCount + 1))
 done
 echo '</TABLE>'
 echo '</P><BR>'
 echo '<TABLE BORDER=1 CELLSPACING=3 CELLPADDING=0>'
 echo '<TR><TH FONT font-weight="bold">Legend Information</TH></TR>'
 echo '<TR><TD FONT color="white" BGCOLOR="RED">Critical Alert</TD></TR>'
 echo '<TR><TD FONT color="white" BGCOLOR="ORANGE">Warning Alert</TD></TR>'
 echo '<TR><TD FONT color="white" BGCOLOR="GREEN">No Action Alert</TD></TR>'
 echo '</TABLE>'
 echo '<P><FONT font-weight="bold">Report Generated by DBA Team</FONT></P>'
 echo '</BODY></HTML>'
) | tee `hostname`_${0##*/}.html
#
# Sending E-Mail Notification
#
(
 echo To: "#DL-OracleSYSDBA@XXXX.com"
 echo From: oracle@XXXXX.localdomain
 echo "Content-Type: text/html; "
 echo Subject: Disk Usage Report for server `hostname` 'for' $IPADDRESS
 echo
 cat `hostname`_${0##*/}.html
) | sendmail -t
echo -e "Report Generation is Completed... \n\a"
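The "12 Hrs Difference" value assumes the script runs twice a day; a crontab entry such as the following would schedule it (the script path is a placeholder, not from the original post):

```shell
# Run the disk-usage report at 06:00 and 18:00 every day
# (adjust the path to wherever this script is saved)
0 6,18 * * * /home/oracle/scripts/disk_usage_report.sh > /dev/null 2>&1
```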

Sample Output


Different types of PATCH in Oracle

  • Interim Patch
  • Diagnostic Patch
  • Bundle Patch (BP)
  • Patch Set Update (PSU)
  • Security Patch Update (SPU)



How to Clear Cache on Linux?

Every Linux system offers three options to clear caches without interrupting any processes or services.
1. Clear PageCache only.
# sync; echo 1 > /proc/sys/vm/drop_caches
2. Clear dentries and inodes.
# sync; echo 2 > /proc/sys/vm/drop_caches
3. Clear PageCache, dentries and inodes.
# sync; echo 3 > /proc/sys/vm/drop_caches
Explanation 
sync flushes the file system buffers. Commands separated by ";" run sequentially: the shell waits for each command to terminate before executing the next one. As mentioned in the kernel documentation, writing to drop_caches cleans the cache without killing any application or service; the echo command does the job of writing to the file.
These commands must be run as the root user.
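The steps above can be wrapped in a small helper. Note that a plain `sudo echo 3 > /proc/sys/vm/drop_caches` does not work for non-root users, because the redirection is performed by the unprivileged shell, not by sudo. A minimal sketch (the function name and the optional target argument are illustrative only; the target is overridable here just so the sketch can be exercised without root):

```shell
#!/bin/bash
# drop_caches LEVEL [TARGET]
#   LEVEL  - 1 (page cache), 2 (dentries and inodes) or 3 (both), as above
#   TARGET - file to write to; defaults to the kernel interface
drop_caches() {
  local level="$1"
  local target="${2:-/proc/sys/vm/drop_caches}"
  sync                       # flush dirty file system buffers first
  echo "$level" > "$target"  # the actual cache drop; needs root for /proc
}
```

As root, `drop_caches 3` reproduces option 3 above; a non-root user would instead run the write itself privileged, e.g. `echo 3 | sudo tee /proc/sys/vm/drop_caches`.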




Tuesday, August 7, 2018

History of Exadata

Oracle's first Exadata was released in 2008, marketed as hardware and software engineered to work together. Since then we have seen six releases of the Exadata machine, each improving the combination of hardware and software. The latest, the X5-2, was released in 2015.

V1: The first Exadata was released in 2008 and labeled V1. It was a combination of HP hardware and Oracle software. No flash cache was used in this version. At that time Exadata was built specifically for data warehousing. The product was not widely accepted, as it suffered from overheating.



V2: Oracle launched Exadata V2 in 2009, a combination of Sun hardware and Oracle software. In V2 Oracle upgraded the storage cells with 384 GB of Exadata smart cache. Many customers moved from V1 to V2 because of the overheating problem in Exadata V1.

X2: This is the third edition of Oracle Exadata, released in 2010. It came in two versions: X2-2, with two-socket database servers, and X2-8, with eight-socket database servers. This version supports both OLTP and data warehouse workloads, and offers major improvements over the older Exadata in CPUs, memory, compression, Smart Scan, and more.

X3: This is the fourth edition of Exadata, released in 2012. There is not much difference between X2 and X3 apart from CPU and RAM: this version has more RAM and CPU power than the previous one.

X4: It was released in 2013. In this version processing power was increased to 2×12-core CPUs, and memory could be upgraded to 512GB per compute node. Flash and disk storage were also increased, and a new model of high-capacity disk was introduced: the 600GB disks were retired in favor of 1.2TB, 10,000 RPM disks in a smaller form factor (2.5" vs 3.5"). The other big change with the X4-2 was the introduction of an active/active InfiniBand network connection: Oracle broke the bonded connection and utilized each InfiniBand port independently, allowing increased throughput across the InfiniBand fabric.


X5: Announced in early 2015, this is the sixth generation of Exadata. The X5-2 was a dramatic change in the platform, removing the high-performance disk option in favor of an all-flash NVMe (Non-Volatile Memory Express) model. High-capacity disk sizes remain the same at 4TB per disk, while the flash cards doubled in size to 6.4TB per storage server. Memory stayed consistent with a base of 256GB, upgradeable to 768GB, and the CPU core count increased to 18 cores per socket. With X5, customers can purchase the X5-2 in any configuration required: a base rack begins with two compute nodes and three storage servers.

Exadata Terms

HP - High Performance
HC - High Capacity
IOPS - IOs Per Second
LUN - Logical Unit Number
MS - Management Server
RS - Restart Server
Cellsrv – Cell Server
SATA - Serial Advanced Technology Attachment
SAS - Serial Attached SCSI
PDU - Power Distribution Unit
ILOM - Integrated Lights Out Manager
IB - InfiniBand
EHCC - Exadata Hybrid Columnar Compression
CU - Compression Unit
UDMs - User Defined Metrics
RDS - Reliable Data Socket protocol
SNMP - Simple Network Management Protocol
iDB - Intelligent Database protocol
EM - Enterprise Manager
EMGC - Enterprise Manager Grid Control
DBRM - Database Resource Manager

Friday, August 3, 2018

Blocking Session-2

set serveroutput on
BEGIN
  dbms_output.enable(1000000);
  FOR do_loop IN (SELECT session_id, a.object_id, xidsqn, oracle_username,
                         b.owner owner, b.object_name object_name,
                         b.object_type object_type
                    FROM v$locked_object a, dba_objects b
                   WHERE xidsqn != 0
                     AND b.object_id = a.object_id)
  LOOP
    dbms_output.put_line('.');
    dbms_output.put_line('Blocking Session : '||do_loop.session_id);
    dbms_output.put_line('Object (Owner/Name): '||do_loop.owner||'.'||do_loop.object_name);
    dbms_output.put_line('Object Type : '||do_loop.object_type);
    FOR next_loop IN (SELECT sid FROM v$lock
                       WHERE id2 = do_loop.xidsqn
                         AND sid != do_loop.session_id)
    LOOP
      dbms_output.put_line('Sessions being blocked : '||next_loop.sid);
    END LOOP;
  END LOOP;
END;
/

Output
=====
.
Blocking Session : 4929
Object (Owner/Name): SYS.OBJ$
Object Type : TABLE



PL/SQL procedure successfully completed.

Wednesday, August 1, 2018

Blocking Sessions

SELECT /*+ RULE */ 
 LPAD('--->',DECODE(A.request,0,0,5))||A.SID SID, B.serial#,
DECODE(TRUNC(SYSDATE - LOGON_TIME), 0, NULL, TRUNC(SYSDATE - LOGON_TIME) || ' Days' || ' + ') || 
TO_CHAR(TO_DATE(TRUNC(MOD(SYSDATE-LOGON_TIME,1) * 86400), 'SSSSS'), 'HH24:MI:SS') LOGON, 
b.status, b.SCHEMANAME SCHEMA, 
DECODE(o.owner || '.' || o.object_name, '.', NULL, o.owner || '.' || o.object_name) OBJECT, o.object_type,
 b.osuser, b.machine, b.module, b.program, 
 DECODE(BLOCK, 0, NULL, 'BLOCKER' ) || DECODE(request, 0, NULL, '-->WAITER' ) BLOCKER,
 DECODE (A.lmode, 0, 'None', 1, 'Null', 2, 'Row-S (SS)', 3, 'Row-X (SX)', 4, 
'Share', 5, 'S/Row-X (SSX)', 6, 'Exclusive', A.lmode) lmode,
DECODE(TRUNC(ctime/86400), 0, TO_CHAR(TO_DATE(ctime, 'SSSSS'), 'HH24:MI:SS'), 
TRUNC(ctime/86400) || ' Days + ' || TO_CHAR(TO_DATE(ctime - (TRUNC(ctime/86400))*86400 , 'SSSSS'), 'HH24:MI:SS')) TIME, 
A.TYPE, 'alter system kill session ' || '''' || a.SID || ', ' || b.serial# ||'''' || ' immediate;' kill_session, 
DECODE(object_type, NULL, NULL, 'Dbms_Rowid.rowid_create(1, ' || row_wait_obj# || ', ' 
|| row_wait_file# ||', ' || row_wait_block#||', ' || row_wait_row# ||')') row_id 
 FROM v$lock A, v$session b, dba_objects o
 WHERE A.SID = b.SID 
  AND (lmode = 0 OR BLOCK = 1) 
  AND o.object_id (+) = DECODE(b.ROW_WAIT_OBJ#, -1, NULL, b.ROW_WAIT_OBJ#)
 ORDER BY A.id1,A.request;

Calculate Export dump size

The following query calculates how much table data each schema occupies in your database.


SELECT owner, ROUND(SUM(size_mb)) MBytes FROM 
(
SELECT owner, segment_name, segment_type, partition_name, ROUND(bytes/(1024*1024),2) SIZE_MB, tablespace_name 
FROM DBA_SEGMENTS 
WHERE SEGMENT_TYPE IN ('TABLE', 'TABLE PARTITION', 'TABLE SUBPARTITION') 
--AND TABLESPACE_NAME LIKE 'COSTE%' 
--AND SEGMENT_NAME LIKE 'OE_ORDER_LINES_ALL%' 
--AND partition_name LIKE 'USAGE_FCT_NEW%'
--AND OWNER = 'TARGET_DW' 
--AND ROUND(bytes/(1024*1024),2) > 1000)
) 
GROUP BY owner 
ORDER BY MBytes DESC;

Create a super user like DBA but with no administration power

Summary 

Sometimes you need to create a database user that has access to v$session, v$lock, dba_free_space and other dictionary views in order to monitor performance issues, tune SQL, and perform other DBA tasks. 

But you don't want it to have the real power to change things that a DBA has. 

The quick and easy workaround is to grant the select_catalog_role role to the new user.

create user dba_monitor identified by dba_monitor;
grant connect, select_catalog_role to dba_monitor;

Now with the new user you can use tools like TOAD or Grid Control to monitor the database as a DBA, but with no real power to change things.

When did a user change his/her password?

SELECT NAME, ptime AS "LAST TIME CHANGED", ctime "CREATION TIME", ltime "LOCKED" 
FROM USER$ 
WHERE ptime IS NOT NULL 
ORDER BY ptime DESC;

Explain plan and statistics with SQL*Plus

SQL> set timing on
SQL> set autotrace traceonly explain
SQL> select * from dual;
Elapsed: 00:00:00.00

Execution Plan
----------------------------------------------------------
Plan hash value: 272002086

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |     2 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

SQL> set autotrace traceonly statistics
SQL> /

Elapsed: 00:00:00.01

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          2  consistent gets
          2  physical reads
          0  redo size
        522  bytes sent via SQL*Net to client
        524  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
SQL> set autotrace traceonly explain
SQL> /
Elapsed: 00:00:00.00

Execution Plan
----------------------------------------------------------
Plan hash value: 272002086

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |     2 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Environment settings for SQL*PLUS - SET

Summary 

The SQL*Plus environment is controlled by a long list of system settings. You can change them with the SET command, as shown in the following list:
    * SET AUTOCOMMIT OFF - Turns off the auto-commit feature.
    * SET FEEDBACK OFF - Stops displaying the "27 rows selected." message at the end of the query output.
    * SET HEADING OFF - Stops displaying the header line of the query output.
    * SET LINESIZE 256 - Sets the number of characters per line when displaying the query output.
    * SET NEWPAGE 2 - Sets 2 blank lines to be displayed on each page of the query output.
    * SET NEWPAGE NONE - Sets for no blank lines to be displayed on each page of the query output.
    * SET NULL 'null' - Asks SQL*Plus to display 'null' for columns that have null values in the query output.
    * SET PAGESIZE 60 - Sets the number of lines per page when displaying the query output.
    * SET TIMING ON - Asks SQL*Plus to display the command execution timing data.
    * SET WRAP OFF - Turns off the wrapping feature when displaying query output.

Memory Notification: Library Cache Object loaded into SGA

Summary 
Oracle 10g introduced a new heap-checking mechanism together with a new messaging system. This mechanism reports memory allocations above a threshold in the alert.log, along with a trace file in the udump directory. In your alert.log you may notice messages like this:

Memory Notification: Library Cache Object loaded into SGA
Heap size 58689K exceeds notification threshold (51200K)
This message means that the threshold set by the hidden parameter _kgl_large_heap_warning_threshold has been exceeded.

In certain situations this can be very helpful, informing you when large allocations have been made in the SGA heap (shared pool). The notification mechanism lets you troubleshoot memory allocation problems before they appear as the infamous ORA-4031.

If you don't have ORA-04031 ("unable to allocate x bytes of shared memory") problems and don't want the message to appear in the alert log, you can increase the hidden parameter _kgl_large_heap_warning_threshold. 

The default limit is set at 2048K. To find the value of this parameter, execute:

SELECT * FROM (
SELECT a.ksppinm AS parameter,
       a.ksppdesc AS description,
       b.ksppstvl AS session_value,
       c.ksppstvl AS instance_value
FROM   x$ksppi a,
       x$ksppcv b,
       x$ksppsv c
WHERE  a.indx = b.indx
AND    a.indx = c.indx
AND    a.ksppinm LIKE '/_%' ESCAPE '/'
ORDER BY a.ksppinm) 
WHERE parameter IN ('_kgl_large_heap_warning_threshold')
The parameter's description reads: "maximum heap size before KGL writes warnings to the alert log".

Workaround 
If you want the message to disappear from the alert.log, increase the hidden parameter to something bigger, for example:

ALTER SYSTEM SET "_kgl_large_heap_warning_threshold" = 89428800 SCOPE=SPFILE ;

Tuesday, July 3, 2018

ORA-00054: resource busy and acquire with NOWAIT specified

ORA-00054: resource busy and acquire with NOWAIT specified
Cause: The NOWAIT keyword forced a return to the command prompt because a resource was unavailable for a LOCK TABLE or SELECT FOR UPDATE command.
Action: Try the command after a few minutes or enter the command without the NOWAIT keyword.


Example:
SQL> alter table emp add (mobile varchar2(15));
*
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified


How to avoid the ORA-00054:
    - Execute DDL at off-peak hours, when database is idle.
    - Execute DDL in maintenance window.
    - Find and Kill the session that is preventing the exclusive lock.


Other Solutions:

Solution 1:
In Oracle 11g you can set ddl_lock_timeout, i.e. allow DDL to wait for the object to become available; simply specify how long you would like it to wait:
 
SQL> alter session set ddl_lock_timeout = 600;
Session altered.

SQL> alter table emp add (mobile varchar2(15));
Table altered. 


Solution 2:
Also in 11g, you can mark your table as read-only to prevent DML:
SQL> alter table emp read only;
Table altered.

SQL> alter table emp add (mobile varchar2(15));
Table altered.


Solution 3 (for 10g):
DECLARE
 MYSQL VARCHAR2(250) := 'alter table emp add (mobile varchar2(15))';
 IN_USE_EXCEPTION EXCEPTION;
 PRAGMA EXCEPTION_INIT(IN_USE_EXCEPTION, -54);
BEGIN
 WHILE TRUE LOOP
  BEGIN
   EXECUTE IMMEDIATE MYSQL;
   EXIT;
  EXCEPTION
   WHEN IN_USE_EXCEPTION THEN 
    NULL;
  END;
  DBMS_LOCK.SLEEP(1);
 END LOOP;
END;
/


Solution 4: 

Step 1: Identify the session which is locking the object
select a.sid, a.serial#
from v$session a, v$locked_object b, dba_objects c 
where b.object_id = c.object_id 
and a.sid = b.session_id
and OBJECT_NAME='EMP';

Step 2: kill that session using
alter system kill session 'sid,serial#'; 

Reference: http://docs.oracle.com/cd/B19306_01/server.102/b14219/e0.htm

Monday, June 11, 2018

Different ways to get dumpfile version - EXPDP

Let's suppose we have a dump file and no clue which database, or at least which database version, it was exported from (assume we don't have a log file).

We got a request to import it into another database. As per the compatibility matrix you can import only into higher versions, so we need to know the current dump file version.

In this situation, the following methods will help you.

Method # 1
+++++++

For classic export dump files on Unix systems, you can use the command below:

$ cat <dump_file_name> | head | strings
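Method 1 can be wrapped so that only the header region of the file is read, which is faster on multi-gigabyte dumps. For classic export files the first printable strings typically include a version banner such as EXPORT:V10.02.01 (the banner format and the helper name here are assumptions to verify against your own dump files):

```shell
#!/bin/bash
# Print the printable strings from the first 4 KB of a dump file;
# the export version banner, if present, appears among these lines.
dump_header_strings() {
  head -c 4096 "$1" | strings | head -n 10
}
```

Usage: `dump_header_strings expdat.dmp`, then look for the version banner in the output.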



Method # 2
+++++++

From Oracle 10g onwards there is another way: the DBMS_DATAPUMP.GET_DUMPFILE_INFO procedure reads the dump file header, where this information is stored. The pre-defined stored procedure below wraps it.


Here is the procedure script; for further details, refer to MOS note 462488.1.

CONNECT system/manager 

CREATE OR REPLACE PROCEDURE show_dumpfile_info(
  p_dir  VARCHAR2 DEFAULT 'DATA_PUMP_DIR',
  p_file VARCHAR2 DEFAULT 'EXPDAT.DMP')
AS
-- p_dir        = directory object where dump file can be found
-- p_file       = simple filename of export dump file (case-sensitive)
  v_separator   VARCHAR2(80) := '--------------------------------------' ||
                                '--------------------------------------';
  v_path        all_directories.directory_path%type := '?';
  v_filetype    NUMBER;                 -- 0=unknown 1=expdp 2=exp 3=ext
  v_fileversion VARCHAR2(15);           -- 0.1=10gR1 1.1=10gR2 (etc.)
  v_info_table  sys.ku$_dumpfile_info;  -- PL/SQL table with file info
  type valtype  IS VARRAY(23) OF VARCHAR2(2048);
  var_values    valtype := valtype();
  no_file_found EXCEPTION;
  PRAGMA        exception_init(no_file_found, -39211);

BEGIN

-- Dump file details:
-- ==================
-- For Oracle10g Release 2 and higher:
--    dbms_datapump.KU$_DFHDR_FILE_VERSION        CONSTANT NUMBER := 1;
--    dbms_datapump.KU$_DFHDR_MASTER_PRESENT      CONSTANT NUMBER := 2;
--    dbms_datapump.KU$_DFHDR_GUID                CONSTANT NUMBER := 3;
--    dbms_datapump.KU$_DFHDR_FILE_NUMBER         CONSTANT NUMBER := 4;
--    dbms_datapump.KU$_DFHDR_CHARSET_ID          CONSTANT NUMBER := 5;
--    dbms_datapump.KU$_DFHDR_CREATION_DATE       CONSTANT NUMBER := 6;
--    dbms_datapump.KU$_DFHDR_FLAGS               CONSTANT NUMBER := 7;
--    dbms_datapump.KU$_DFHDR_JOB_NAME            CONSTANT NUMBER := 8;
--    dbms_datapump.KU$_DFHDR_PLATFORM            CONSTANT NUMBER := 9;
--    dbms_datapump.KU$_DFHDR_INSTANCE            CONSTANT NUMBER := 10;
--    dbms_datapump.KU$_DFHDR_LANGUAGE            CONSTANT NUMBER := 11;
--    dbms_datapump.KU$_DFHDR_BLOCKSIZE           CONSTANT NUMBER := 12;
--    dbms_datapump.KU$_DFHDR_DIRPATH             CONSTANT NUMBER := 13;
--    dbms_datapump.KU$_DFHDR_METADATA_COMPRESSED CONSTANT NUMBER := 14;
--    dbms_datapump.KU$_DFHDR_DB_VERSION          CONSTANT NUMBER := 15;
-- For Oracle11gR1:
--    dbms_datapump.KU$_DFHDR_MASTER_PIECE_COUNT  CONSTANT NUMBER := 16;
--    dbms_datapump.KU$_DFHDR_MASTER_PIECE_NUMBER CONSTANT NUMBER := 17;
--    dbms_datapump.KU$_DFHDR_DATA_COMPRESSED     CONSTANT NUMBER := 18;
--    dbms_datapump.KU$_DFHDR_METADATA_ENCRYPTED  CONSTANT NUMBER := 19;
--    dbms_datapump.KU$_DFHDR_DATA_ENCRYPTED      CONSTANT NUMBER := 20;
-- For Oracle11gR2:
--    dbms_datapump.KU$_DFHDR_COLUMNS_ENCRYPTED   CONSTANT NUMBER := 21;
--    dbms_datapump.KU$_DFHDR_ENCRIPTION_MODE     CONSTANT NUMBER := 22;
-- For Oracle12cR1:
--    dbms_datapump.KU$_DFHDR_COMPRESSION_ALG     CONSTANT NUMBER := 23;

-- For Oracle10gR2: KU$_DFHDR_MAX_ITEM_CODE       CONSTANT NUMBER := 15;
-- For Oracle11gR1: KU$_DFHDR_MAX_ITEM_CODE       CONSTANT NUMBER := 20;
-- For Oracle11gR2: KU$_DFHDR_MAX_ITEM_CODE       CONSTANT NUMBER := 22;
-- For Oracle12cR1: KU$_DFHDR_MAX_ITEM_CODE       CONSTANT NUMBER := 23;

-- Show header output info:
-- ========================

  dbms_output.put_line(v_separator);
  dbms_output.put_line('Purpose..: Obtain details about export ' ||
        'dumpfile.        Version: 18-DEC-2013');
  dbms_output.put_line('Required.: RDBMS version: 10.2.0.1.0 or higher');
  dbms_output.put_line('.          ' ||
        'Export dumpfile version: 7.3.4.0.0 or higher');
  dbms_output.put_line('.          ' ||
        'Export Data Pump dumpfile version: 10.1.0.1.0 or higher');
  dbms_output.put_line('Usage....: ' ||
        'execute show_dumpfile_info(''DIRECTORY'', ''DUMPFILE'');');
  dbms_output.put_line('Example..: ' ||
        'exec show_dumpfile_info(''MY_DIR'', ''expdp_s.dmp'')');
  dbms_output.put_line(v_separator);
  dbms_output.put_line('Filename.: ' || p_file);
  dbms_output.put_line('Directory: ' || p_dir);

-- Retrieve Export dumpfile details:
-- =================================

  SELECT directory_path INTO v_path FROM all_directories
   WHERE directory_name = p_dir
      OR directory_name = UPPER(p_dir);

  dbms_datapump.get_dumpfile_info(
           filename   => p_file,       directory => UPPER(p_dir),
           info_table => v_info_table, filetype  => v_filetype);

  var_values.EXTEND(23);
  FOR i in 1 .. 23 LOOP
    BEGIN
      SELECT value INTO var_values(i) FROM TABLE(v_info_table)
       WHERE item_code = i;
    EXCEPTION WHEN OTHERS THEN var_values(i) := '';
    END;
  END LOOP;

  dbms_output.put_line('Disk Path: ' || v_path);

  IF v_filetype >= 1 THEN
    -- Get characterset name:
    BEGIN
      SELECT var_values(5) || ' (' || nls_charset_name(var_values(5)) ||
        ')' INTO var_values(5) FROM dual;
    EXCEPTION WHEN OTHERS THEN null;
    END;
    IF v_filetype = 2 THEN
      dbms_output.put_line(
         'Filetype.: ' || v_filetype || ' (Original Export dumpfile)');
      dbms_output.put_line(v_separator);
      SELECT DECODE(var_values(13), '0', '0 (Conventional Path)',
        '1', '1 (Direct Path)', var_values(13))
        INTO var_values(13) FROM dual;
      dbms_output.put_line('...Characterset ID of source db..: ' || var_values(5));
      dbms_output.put_line('...Direct Path Export Mode.......: ' || var_values(13));
      dbms_output.put_line('...Export Version................: ' || var_values(15));
    ELSIF v_filetype = 1 OR v_filetype = 3 THEN
      SELECT SUBSTR(var_values(1), 1, 15) INTO v_fileversion FROM dual;
      SELECT DECODE(var_values(1),
                    '0.1', '0.1 (Oracle10g Release 1: 10.1.0.x)',
                    '1.1', '1.1 (Oracle10g Release 2: 10.2.0.x)',
                    '2.1', '2.1 (Oracle11g Release 1: 11.1.0.x)',
                    '3.1', '3.1 (Oracle11g Release 2: 11.2.0.x)',
                    '4.1', '4.1 (Oracle12c Release 1: 12.1.0.x)',
        var_values(1)) INTO var_values(1) FROM dual;
      SELECT DECODE(var_values(2), '0', '0 (No)', '1', '1 (Yes)',
        var_values(2)) INTO var_values(2) FROM dual;
      SELECT DECODE(var_values(14), '0', '0 (No)', '1', '1 (Yes)',
        var_values(14)) INTO var_values(14) FROM dual;
      SELECT DECODE(var_values(18), '0', '0 (No)', '1', '1 (Yes)',
        var_values(18)) INTO var_values(18) FROM dual;
      SELECT DECODE(var_values(19), '0', '0 (No)', '1', '1 (Yes)',
        var_values(19)) INTO var_values(19) FROM dual;
      SELECT DECODE(var_values(20), '0', '0 (No)', '1', '1 (Yes)',
        var_values(20)) INTO var_values(20) FROM dual;
      SELECT DECODE(var_values(21), '0', '0 (No)', '1', '1 (Yes)',
        var_values(21)) INTO var_values(21) FROM dual;
      SELECT DECODE(var_values(22),
                    '1', '1 (Unknown)',
                    '2', '2 (None)',
                    '3', '3 (Password)',
                    '4', '4 (Password and Wallet)',
                    '5', '5 (Wallet)',
        var_values(22)) INTO var_values(22) FROM dual;
      SELECT DECODE(var_values(23),
                    '2', '2 (None)',
                    '3', '3 (Basic)',
                    '4', '4 (Low)',
                    '5', '5 (Medium)',
                    '6', '6 (High)',
        var_values(23)) INTO var_values(23) FROM dual;
      IF v_filetype = 1 THEN
        dbms_output.put_line(
           'Filetype.: ' || v_filetype || ' (Export Data Pump dumpfile)');
        dbms_output.put_line(v_separator);
        dbms_output.put_line('...Database Job Version..........: ' || var_values(15));
        dbms_output.put_line('...Internal Dump File Version....: ' || var_values(1));
        dbms_output.put_line('...Creation Date.................: ' || var_values(6));
        dbms_output.put_line('...File Number (in dump file set): ' || var_values(4));
        dbms_output.put_line('...Master Present in dump file...: ' || var_values(2));
        IF dbms_datapump.KU$_DFHDR_MAX_ITEM_CODE > 15 AND v_fileversion >= '2.1' THEN
          dbms_output.put_line('...Master in how many dump files.: ' || var_values(16));
          dbms_output.put_line('...Master Piece Number in file...: ' || var_values(17));
        END IF;
        dbms_output.put_line('...Operating System of source db.: ' || var_values(9));
        IF v_fileversion >= '2.1' THEN
          dbms_output.put_line('...Instance Name of source db....: ' || var_values(10));
        END IF;
        dbms_output.put_line('...Characterset ID of source db..: ' || var_values(5));
        dbms_output.put_line('...Language Name of characterset.: ' || var_values(11));
        dbms_output.put_line('...Job Name......................: ' || var_values(8));
        dbms_output.put_line('...GUID (unique job identifier)..: ' || var_values(3));
        dbms_output.put_line('...Block size dump file (bytes)..: ' || var_values(12));
        dbms_output.put_line('...Metadata Compressed...........: ' || var_values(14));
        IF dbms_datapump.KU$_DFHDR_MAX_ITEM_CODE > 15 THEN
          dbms_output.put_line('...Data Compressed...............: ' || var_values(18));
          IF dbms_datapump.KU$_DFHDR_MAX_ITEM_CODE > 22 AND v_fileversion >= '4.1' THEN
            dbms_output.put_line('...Compression Algorithm.........: ' || var_values(23));
          END IF;
          dbms_output.put_line('...Metadata Encrypted............: ' || var_values(19));
          dbms_output.put_line('...Table Data Encrypted..........: ' || var_values(20));
          dbms_output.put_line('...Column Data Encrypted.........: ' || var_values(21));
          dbms_output.put_line('...Encryption Mode...............: ' || var_values(22));
        END IF;
      ELSE
        dbms_output.put_line(
           'Filetype.: ' || v_filetype || ' (External Table dumpfile)');
        dbms_output.put_line(v_separator);
        dbms_output.put_line('...Database Job Version..........: ' || var_values(15));
        dbms_output.put_line('...Internal Dump File Version....: ' || var_values(1));
        dbms_output.put_line('...Creation Date.................: ' || var_values(6));
        dbms_output.put_line('...File Number (in dump file set): ' || var_values(4));
        dbms_output.put_line('...Operating System of source db.: ' || var_values(9));
        IF v_fileversion >= '2.1' THEN
          dbms_output.put_line('...Instance Name of source db....: ' || var_values(10));
        END IF;
        dbms_output.put_line('...Characterset ID of source db..: ' || var_values(5));
        dbms_output.put_line('...Language Name of characterset.: ' || var_values(11));
        dbms_output.put_line('...GUID (unique job identifier)..: ' || var_values(3));
        dbms_output.put_line('...Block size dump file (bytes)..: ' || var_values(12));
        IF dbms_datapump.KU$_DFHDR_MAX_ITEM_CODE > 15 THEN
          dbms_output.put_line('...Data Compressed...............: ' || var_values(18));
          IF dbms_datapump.KU$_DFHDR_MAX_ITEM_CODE > 22 AND v_fileversion >= '4.1' THEN
            dbms_output.put_line('...Compression Algorithm.........: ' || var_values(23));
          END IF;
          dbms_output.put_line('...Table Data Encrypted..........: ' || var_values(20));
          dbms_output.put_line('...Encryption Mode...............: ' || var_values(22));
        END IF;
      END IF;
      dbms_output.put_line('...Internal Flag Values..........: ' || var_values(7));
      dbms_output.put_line('...Max Items Code (Info Items)...: ' ||
                  dbms_datapump.KU$_DFHDR_MAX_ITEM_CODE);
    END IF;
  ELSE
    dbms_output.put_line('Filetype.: ' || v_filetype);
    dbms_output.put_line(v_separator);
    dbms_output.put_line('ERROR....: Not an export dumpfile.');
  END IF;
  dbms_output.put_line(v_separator);

EXCEPTION
  WHEN no_data_found THEN
    dbms_output.put_line('Disk Path: ?');
    dbms_output.put_line('Filetype.: ?');
    dbms_output.put_line(v_separator);
    dbms_output.put_line('ERROR....: Directory Object does not exist.');
    dbms_output.put_line(v_separator);
  WHEN no_file_found THEN
    dbms_output.put_line('Disk Path: ' || v_path);
    dbms_output.put_line('Filetype.: ?');
    dbms_output.put_line(v_separator);
    dbms_output.put_line('ERROR....: File does not exist.');
    dbms_output.put_line(v_separator);
END;

/

Sample Output

SET serveroutput on SIZE 1000000



Friday, June 8, 2018

ORA-02374, ORA-12899, ORA-02372 during Data Pump expdp/impdp from 12c WE8MSWIN1252 into 12c AL32UTF8

PROBLEM

During a data pump expdp/impdp from 12c WE8MSWIN1252 into 12c AL32UTF8, I received several errors in the data pump import log file:

07-JUN-18 17:00:12.953: ORA-02374: conversion error loading table "XXX"."ABC"
07-JUN-18 17:00:12.953: ORA-12899: value too large for column ABC_NDC (actual: 22, maximum: 11)
07-JUN-18 17:00:12.953: ORA-02372: data for row: ABC_NDC : 0X'9F9F9F9F9F9F9F9F9F9F9F'


07-JUN-18 22:46:02.663: KUP-11007: conversion error loading table "XXX"."ABC"
07-JUN-18 22:46:02.663: ORA-12899: value too large for column ABC_TOB (actual: 5, maximum: 4)
07-JUN-18 22:46:02.663: KUP-11009: data for row: ABC_TOB : 0X'303136A0'



CAUSE

The cause of these errors is that I am migrating data from a database with a single-byte character set (WE8MSWIN1252) into one with a multi-byte character set (AL32UTF8). During conversion, some single-byte characters are "expanded" into multi-byte sequences, and if a column was already filled to its byte limit, ORA-12899 is raised, showing the maximum column size in bytes and the actual number of bytes needed to store the converted value.
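You can see this expansion outside the database. The rejected row above was eleven 0x9F bytes; in WE8MSWIN1252 that byte is a single character, but in UTF-8 it needs two bytes, so 11 bytes become 22, exactly the "actual: 22, maximum: 11" in the error. A quick sketch with iconv (assuming a Linux box with GNU iconv available):

```shell
# 0x9F (octal 237) in WE8MSWIN1252 is the single-byte character 'Y-diaeresis'.
# Convert it to UTF-8 and count the resulting bytes.
printf '\237' | iconv -f WINDOWS-1252 -t UTF-8 | wc -c
# Each such source byte becomes 2 bytes, so 11 bytes grow to 22,
# overflowing a VARCHAR2(11 BYTE) column.
```

The same applies to the second error: the trailing 0xA0 byte in '303136A0' also becomes a two-byte UTF-8 sequence, pushing 4 bytes to 5.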

There are several solutions to this problem: increase the column sizes (or switch them to character-length semantics), truncate the data before the import, stick to the same character set, or pre-create the tables with modified column sizes in the target database before the import.
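The pre-create option can be sketched as follows. This is only an illustration: the table and column names are taken from the error messages above, and you would adjust the statement to your own schema.

```sql
-- Sketch: in the target (AL32UTF8) database, define the affected column
-- with character-length semantics, so 11 characters fit regardless of
-- how many bytes each one occupies after conversion.
ALTER TABLE "XXX"."ABC" MODIFY (ABC_NDC VARCHAR2(11 CHAR));
```

With pre-created tables, run impdp with TABLE_EXISTS_ACTION=APPEND (or TRUNCATE) so Data Pump loads the data into the modified definitions instead of recreating the tables.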

If you have a lot of rows with conversion errors in your import log file, there is another option: the Database Character Set Scanner utility (csscan), which must be installed in the source database.

Csscan is installed by running the csminst.sql script located in $ORACLE_HOME/rdbms/admin. This script creates a user "csmig". It is a good idea to first modify the following line in the script so that csmig does not write into the SYSTEM tablespace:

alter user csmig default tablespace SYSTEM quota unlimited on SYSTEM

In my case, I replaced SYSTEM with SYSAUX.
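The modified line in csminst.sql then reads:

```sql
alter user csmig default tablespace SYSAUX quota unlimited on SYSAUX;
```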

Let’s install csscan into our source database:

$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus /nolog

SQL*Plus: Release 12.2.0.1.0 Production on Fri Jun 8 11:28:41 2018

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

SQL> conn / as sysdba
Connected.
SQL> set TERMOUT ON
SQL> set ECHO ON
SQL> spool csminst.log
SQL> @csminst.sql

After this you should check the csminst.log file for errors. 

Now that csscan is installed, you can use it to check specific schemas or tables for data conversion problems during the migration to another character set. I executed it as follows (you will need the password of the SYS user):

$ csscan \"sys as sysdba\" LOG=/tmp/csscan.log USER=XXX CAPTURE=Y TOCHAR=AL32UTF8 ARRAY=1024000 PROCESS=3

This will analyze all character data stored in the tables of the "XXX" schema. Csscan creates three log files; the one with the ".err" extension lists every affected row with its ROWID. You can count the affected rows using the UNIX "grep" command:

$ grep -i "exceed column size" /tmp/csscan.log.err | wc -l

38

So, in my case, 38 rows from the "XXX" schema will have conversion problems during the migration to the AL32UTF8 character set.

You can remove csscan by dropping the user "csmig". More information on csscan can be found in My Oracle Support document ID 1297961.1.
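Removing csscan afterwards is a single statement; CASCADE also drops the objects that csminst.sql created in the csmig schema:

```sql
DROP USER csmig CASCADE;
```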

For reference, the source database uses byte length semantics and the WE8MSWIN1252 character set:

SQL> select VALUE, ISDEFAULT from v$parameter where NAME = 'nls_length_semantics';

VALUE           ISDEFAULT
--------------- --------------------
BYTE            TRUE

SQL> select VALUE from nls_database_parameters where PARAMETER = 'NLS_CHARACTERSET';

VALUE
---------------
WE8MSWIN1252