Oracle GoldenGate step-by-step configuration

To be addressed:

1. How to enable monitoring
2. Regular maintenance tasks
3. DB refresh: how to handle it?

Prerequisites:

Source:

Establish network connectivity between source and target
Open the firewall between source and target
Allow Exadata to send outbound traffic to the target host on ports 7801-7820 << needs to be done only by Exadata admins
Install the GoldenGate software on the source
Create mgr.prm with the required settings
Create the GLOBALS file and allowoutputdirectory if required
Create a test table on the source server
Configure a test source-to-target replication
Check undo retention and how to retain the before image of a row long enough for an SCN-based load of the source tables
Set up OEM monitoring
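The mgr.prm mentioned above can start from a minimal sketch like the following. The port numbers match the 7801-7820 range above; the retention and lag thresholds are example values to adjust per environment:

```
PORT 7801
DYNAMICPORTLIST 7802-7820
AUTOSTART EXTRACT *
AUTORESTART EXTRACT *, RETRIES 3, WAITMINUTES 5, RESETMINUTES 60
PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS, MINKEEPDAYS 3
LAGREPORTMINUTES 30
LAGCRITICALMINUTES 60
```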


Target:

Establish network connectivity between source and target
Open the firewall between source and target
Allow Exadata to send outbound traffic to the target host on ports 7801-7820 << needs to be done only by Exadata admins
Install the GoldenGate software on the target
Create mgr.prm with the required settings
Create the GLOBALS file and allowoutputdirectory if required
Create a test table on the target server
Configure a test source-to-target replication
Set up OEM monitoring


Query to check object existence in the source:

Check if the source dirdat folder has sufficient space to hold the trail files.
Check if all the tables requested for replication are present in the database.

set echo on
set head off
spool /tmp/a.log

@/tmp/query1.sql
select owner from dba_objects where object_name='xxx';
spool off
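The dirdat free-space check above can be scripted. A small sketch; the directory path and the required-megabytes threshold are placeholder values:

```shell
#!/bin/sh
# Check that a directory has at least NEED_MB megabytes of free space.
check_free_space() {
  dir=$1
  need_mb=$2
  # df -Pk prints available space in KB in column 4 of the second line
  avail_kb=$(df -Pk "$dir" | awk 'NR==2 {print $4}')
  avail_mb=$((avail_kb / 1024))
  if [ "$avail_mb" -ge "$need_mb" ]; then
    echo "OK: ${avail_mb} MB free in $dir"
  else
    echo "WARN: only ${avail_mb} MB free in $dir (need ${need_mb} MB)"
  fi
}

# Example: replace /tmp with the GoldenGate dirdat path and 1 with the space you expect the trails to need
check_free_space /tmp 1
```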

-----Source Configuration-----

1-Create the installation directory to receive the Oracle GoldenGate software
cd /u01/app/oracle
mkdir ogg
Copy the software zip file to the ogg location
Unzip the software zip file

2-Start the GoldenGate Software Command Interface (GGSCI). Create the default empty subdirectories
./ggsci 
Create Subdirs 
exit
If a directory already exists, the installation leaves the contents of that directory alone. 
You have successfully installed Oracle GoldenGate on Linux


3- Configure the Environment
----verify that LOG_MODE is set to ARCHIVELOG. 
SQL> SELECT log_mode FROM v$database;

LOG_MODE
------------
ARCHIVELOG

-----If the DB is in NOARCHIVELOG mode, follow the steps below after obtaining approval from the application team and management-----

SQL> shutdown immediate
SQL> startup mount
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;
SQL> SELECT log_mode FROM v$database;

LOG_MODE
------------
ARCHIVELOG

------Verify that supplemental logging and forced logging are set properly.--------

SQL> SELECT force_logging, supplemental_log_data_min FROM v$database;

FOR SUPPLEME
--- --------
YES YES

---If the output is NO, follow the steps below-----
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
SQL> ALTER DATABASE FORCE LOGGING;
SQL> ALTER SYSTEM SWITCH LOGFILE;
SQL> show parameter enable_goldengate_replication
SQL> alter system set enable_goldengate_replication=TRUE scope=both;  << enable the parameter if not already enabled
SQL> select force_logging from v$database;
SQL> alter database force logging;  << enable it if not enabled, especially if you use schematrandata

Create a common user and a local user if not already present

SQL> create user GG_USER identified by GG_USER ;  
SQL> GRANT CREATE SESSION TO GG_USER; 
SQL> GRANT DBA to GG_USER; 
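On a multitenant database, the common user above is created in the root container with a C## prefix. A sketch, using the documented DBMS_GOLDENGATE_AUTH package in place of a blanket DBA grant; the user name and password are placeholders:

```
SQL> CREATE USER c##gg_user IDENTIFIED BY gg_pass CONTAINER=ALL;
SQL> GRANT CREATE SESSION, RESOURCE TO c##gg_user CONTAINER=ALL;
SQL> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('C##GG_USER', CONTAINER=>'ALL');
```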


Test connectivity:
connect GG_USER/GG_USER@localhost:1521/ltru1

Create the credential store

ggsci > DBLOGIN USERID GG_USER PASSWORD GG_USER  << check if the DB login is successful
ggsci > ADD CREDENTIALSTORE  << if the credential store does not exist, Oracle creates an empty dircrd directory with this command
ggsci > INFO CREDENTIALSTORE

-------ggsci > ALTER CREDENTIALSTORE ADD USER GG_USER@DTRU1 PASSWORD oracle ALIAS GGUSERDTRU1  << container database
-------ggsci > ALTER CREDENTIALSTORE ADD USER GG_USER PASSWORD oracle ALIAS GG_USER  << container database

ggsci > ALTER CREDENTIALSTORE ADD USER GG_USER PASSWORD GG_USER ALIAS GG_USER  << non-container database
ggsci > DBLOGIN USERIDALIAS GG_USER  << non-container database

ggsci > DBLOGIN USERIDALIAS GG_USER  << test dblogin
ggsci > DBLOGIN USERID test@pdb1, PASSWORD test_pass  << test dblogin

***********************

Add trandata for all the tables  or schematrandata


ggsci > DBLOGIN USERIDALIAS FAEMBTR1
-------ggsci > ADD SCHEMATRANDATA pdb1.test ALLCOLS  << takes care of current, future, and altered tables; reference doc below
-------ggsci > ADD TRANDATA pdb1.test.tcustmer ALLCOLS  << only if a few tables are to be replicated; must be repeated every time a table is added/altered


Check if supplemental logging has been enabled when you use the add trandata command.

ggsci > info trandata schemaname.tablename 

SQL >select owner, log_group_name, table_name, log_group_type, always, generated 
from dba_log_groups
where owner = 'TP_DATA_SIT'
and log_group_name like 'GGS%';

Check if supplemental logging is enabled at the schema level
ggsci > info schematrandata dtru1.TP_DATA_SIT

---------Need to check for DEFGEN----

Take a list of tables for the initial load if it is not provided by the app team.
set feedback off;
set echo off
set pagesize 0
set linesize 500
spool /tmp/atables.log

select 'TABLE ' || owner || '.' || table_name || ';' from dba_tables where owner in ('TP_AUDIT_TRG','TP_DATA_TRG','TP_EXTN_TRG','TP_METADATA_TRG','TP_STAGE_TRG','TP_USERS_TRG','TP_REPORTS_TRG');


-------------------------------------------------
---:      Create manager process
Step 1 : Create extract process : 


Configure Classic Extract:
==========================
vi eddec1.prm

EXTRACT EDDEC1
--SOURCECATALOG PDBNAME <<< only for a multitenant database
LOGALLSUPCOLS
UPDATERECORDFORMAT FULL
USERIDALIAS GG_USER
--userid GG_USER, password GG_USER
EXTTRAIL ./dirdat/et
TRANLOGOPTIONS EXCLUDEUSER gg_user
TRANLOGOPTIONS DBLOGREADER
TRANLOGOPTIONS BUFSIZE 4096000
TRANLOGOPTIONS DBLOGREADERBUFSIZE 4096000
TABLE STAGING2.*;

ggsci>DBLOGIN USERID GG_USER PASSWORD GG_USER
ggsci>ADD EXTRACT EDDEC1, TRANLOG , begin now
--ADD EXTRACT EXTDDEC1, TRANLOG , begin now
ggsci>ADD EXTTRAIL ./dirdat/et, EXTRACT EDDEC1, megabytes 20

---ADD EXTTRAIL ./dirdat/ex, EXTRACT EXTDDEC1, megabytes 10

ggsci > add trandata schema.tablename ALLCOLS   -------------------------------------NEED TO CHECK


Configure Integrated extract:
============================
vi  edtru1.prm

EXTRACT EDTRU1
ExtTrail ./dirdat/dtru1/inc_load/ic
SETENV (ORACLE_HOME="/u01/app/oracle/product/12.1.0/dbhome_1")
SETENV (ORACLE_SID="C1DTRU1")
SETENV (NLS_LANG=AMERICAN_AMERICA.UTF8)
USERIDALIAS  GG_USER
SOURCECATALOG dtru1
REPORTCOUNT EVERY 10 MINUTES, RATE
BR, BRDIR ./dirdat/BR
TABLE PT_DATA_SIT.*;
TABLE PT_METADATA_SIT.*;
--GETTRUNCATES
--STATOPTIONS REPORTFETCH
--FETCHOPTIONS USESNAPSHOT, USELATESTVERSION
--FETCHOPTIONS MISSINGROW REPORT
--DDL INCLUDE MAPPED
--DDLOPTIONS REPORT
--Table EDAA_RPT.*;
--TABLE EDAA_CACHE.*;
--TABLEEXCLUDE ODS.ODS_CLM_PPO_MED_KO_NEW;



ggsci > DBLOGIN USERID GG_USER PASSWORD ggddec1  << for setups without a credential-store alias
ggsci > DBLOGIN USERIDALIAS GG_USER << Login to root container
ggsci > ADD EXTRACT EDTRU1, INTEGRATED TRANLOG, BEGIN NOW
ggsci > ADD EXTTRAIL ./dirdat/dtru1/inc_load/ic, EXTRACT EDTRU1, megabytes 2000
ggsci > register extract edtru1 database      <<<<< For NONCDB database
ggsci > dblogin userid GG_USER password oracle  << login to root container to execute the below command.
----------------ggsci > REGISTER EXTRACT EDTRU1 DATABASE CONTAINER (dtru1)   <<< for CDB
ggsci > start extract EDTRU1


Step 2 : create pump process:

Configure Pump:
=================

vi PDTRU1.prm

EXTRACT PDTRU1
RMTHOST 192.30.111.177, MGRPORT 7801, TIMEOUT 30
RMTTRAIL /opt/ogg/dirdat/legacy/tp/inc_load/ic
--CACHEMGR CACHEDIRECTORY ./dirdat/dirtmp
PASSTHRU
--GETTRUNCATES
TABLE PT_DATA_SIT.*;
TABLE PT_METADATA_SIT.*;



ggsci > DBLOGIN USERID GG_USER PASSWORD GG_USER  << for setups without a credential-store alias
ggsci > DBLOGIN USERIDALIAS GGUSER 
ggsci > add extract PDTRU1, EXTTRAILSOURCE ./dirdat/dtru1/inc_load/ic, begin NOW
ggsci > ADD RMTTRAIL /opt/ogg/data/dirdat/legacy/tp/inc_load/ic, extract PDTRU1, seqno 1000 rba 0    <<< check or estimate the number of trail files the initial load can produce, and keep the seqno higher than that to avoid repeating position column values

ggsci > start extract PDTRU1

In case the pump is already running but you want to reposition it so the initial-load and incremental-load remote trail files do not overlap in sequence number (which would cause duplicate log position values):

ggsci > delete RMTTRAIL /opt/ogg/data/dirdat/facets/inc_load/ic , extract UT1_PMP
ggsci > ADD RMTTRAIL /opt/ogg/data/dirdat/facets/inc_load/ic  , SEQNO 200 RBA 0 , EXTRACT UT1_PMP
ggsci > Alter extract UT1_PMP , extseqno 0 extrba 0
ggsci > info UT1_PMP detail 



Step 3 : Prepare and initiate the initial load :

col current_scn format 9999999999999999
select current_scn from v$database;   << Note the SCN
CURRENT_SCN
-----------
    2040227
                
GGSCI > EDIT PARAMS INLOAD1

Paste the following contents to INLOAD1.PRM:

extract INLOAD1
USERIDALIAS GGUSER
USERIDALIAS GG_USER << multitenant DB
ENCRYPTTRAIL AES256
RMTHOST 192.30.111.173, MGRPORT 7801
RMTFILE /opt/ogg/data/dirdat/legacy/tp/ini_load/ii, MEGABYTES 500
TABLE dtru1.PT_DATA_SIT.* , SQLPREDICATE 'AS OF SCN 2040227';
TABLE dtru1.PT_METADATA_SIT.* , SQLPREDICATE 'AS OF SCN 2040227';



TABLE PDBNAME.STAGING2.TEST10 , SQLPREDICATE 'AS OF SCN 2040227'; <<<< for container database only
--TABLE STAGING3.QPNY_CLM_PRF_5010 , SQLPREDICATE "WHERE BEG_SERV_DATE_YEAR>='20150101' and BEG_SERV_DATE_YEAR<'20170101'";
--TABLE STAGING3.QPNY_CLM_PRF_5010 , SQLPREDICATE "WHERE BEG_SERV_DATE_YEAR>='20170101'";
--TABLEEXCLUDE HR.EMP_DETAILS_VIEW;


Initial load for big partitioned tables:
-- TCPBUFSIZE 10000000 , tcpflushbytes 10000000  ( to set them to 10 MB; how to calculate the value is in the commands_goldengate doc)

EXTRACT INILOAD1
USERIDALIAS GGUSER
ENCRYPTTRAIL AES256
RMTHOST 192.30.111.173, MGRPORT 7801, tcpbufsize 10000000, tcpflushbytes 10000000
RMTFILE /opt/ogg/data/dirdat/legacy/tp/ini_load/ii, MEGABYTES 2000


table east.milltblea, SQLPREDICATE "WHERE COLA < 499294558289218750";
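The 10 MB TCPBUFSIZE value above is typically sized from the bandwidth-delay product (link bandwidth times round-trip time). A quick sketch of the arithmetic; the 1 Gbit/s bandwidth and 80 ms round trip are assumed example values:

```shell
#!/bin/sh
# TCPBUFSIZE sizing: bandwidth-delay product in bytes.
# bytes = (bandwidth_bits_per_sec / 8) * rtt_seconds
bandwidth_bps=1000000000   # assumed 1 Gbit/s link
rtt_ms=80                  # assumed 80 ms round-trip time
bufsize=$(( bandwidth_bps / 8 * rtt_ms / 1000 ))
echo "$bufsize"            # 10000000 bytes, i.e. the 10 MB used above
```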


add extract INLOAD1 SOURCEISTABLE

SELECT 'TABLE DTRU1.'||OWNER||'.'||TABLE_NAME||', SQLPREDICATE ''AS OF SCN 9428942694650'';' FROM DBA_TABLES WHERE OWNER LIKE 'TP_%_SIT%';

Start the initial-load extract:
ggsci > start extract inext1   << either use this command or the shell command below to start the extract
$ nohup ./extract paramfile dirprm/inext1.prm reportfile dirrpt/inext1.rpt &


Step 4 : Start the replicat for the initial-load files sent by the initial-load extract.
Create a one-time replicat process:

Replicat RLOAD1
USERIDALIAS GGUSER
SPECIALRUN       
END RUNTIME
EXTFILE /opt/ogg/data/dirdat/facets/ini_load/ii     
MAP FACT.*, TARGET FACT.*;

start replicat RLOAD1

For a non-coordinated replicat:
ADD REPLICAT RLOAD1, EXTTRAIL /opt/GoldenGate/dirdat/fact/inc_load/tf
ALTER REPLICAT RLOAD1, EXTSEQNO 0, EXTRBA 0


Oracle:
dblogin userid GG_USER PASSWORD GG_USER
ADD CHECKPOINTTABLE GG_USER.GGSCHKPT;
add replicat rfac2, exttrail /ogg/oggdir/dirdat/ic, checkpointtable GG_USER.GGSCHKPT
START REPLICAT RFAC2 , AFTERCSN 38270077089

While dropping and recreating, we don't need to specify the checkpoint table again; Oracle should be able to pick it up.
Co-ordinated replicat :
add replicat RLOAD1 exttrail /u01/app/ogg/product/12.3.0.1.2/dirdat/QSAW8/rf coordinated maxthreads 10

ALTER REPLICAT RLOAD1, EXTSEQNO 0, EXTRBA 0

Big data :
Replicat file:

Replicat icae1
targetdb libfile libggjava.so set property=dirprm/caehdfs_rba.props
reportcount every 1 minutes , rate
grouptransops 5000
MAP dbo.*, TARGET landing_cae.*, THREADRANGE (1-5);


Step 5 : Start the incremental load using AFTERCSN so there is no data loss or collision.

After completion of the initial load, start the incremental load:

start replicat replcdd, aftercsn 2040227

Reference:
https://blog.dbi-services.com/performing-an-initial-load-with-goldengate-1-file-to-replicat/
http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/12c/r1/pdb/pdb_basics/pdb_basics.html
http://www.vitalsofttech.com/oracle-12c-managing-common-and-local-users-roles-and-privileges-in-cdb-and-pdb/
https://dbasolved.com/2016/08/12/to-trandata-or-to-schematrandata-that-is-the-goldengate-questions-of-the-day/




