Best Practices for Cloning the Applications Manager Oracle data to Another AM Instance

Document ID : KB000088260
Last Modified Date : 14/04/2018
Issue:
 
We have seen many installation and performance issues result from copying the Oracle data from an existing Applications Manager (AM) instance and importing it into a new AM instance.  These issues are primarily caused by carrying over historical or source-specific data from the original installation, and by not following the procedures outlined in the AM Documentation ‘Copying and Moving Applications Manager Instances’.  We recommend following the best practices below when cloning an AM database to another AM instance.
 

 
Resolution:
 
The following information is meant to supplement the AM Documentation and provide additional guidance, especially when cloning the Oracle data from one AM installation and using it to install AM in another location.
 
Definition of terms used in this document:
Source:  The original AM instance.   This is the instance you are taking the data from.
Target:  The new AM instance.  This is the instance you are moving the data to.

Be sure to read the ‘Before You Begin’ section before you start the process.  In addition to the items listed there, you should also disable auditing if you have it enabled.
 
Disable Auditing:
If you have the DBA user group assigned to you, you will see an Audit tab on the Agents window for the Automation Engine/Remote Agent.  If the ‘Auditing Enabled’ box is checked, clear it; ensure that the box is not selected for any Agent.
 
When cloning an instance, we recommend that you purge all the historical data by executing the following SQL statements:
 
truncate table so_users_log;
delete from so_print_log;
delete from so_job_history;
commit;
 
Something to keep in mind is that if you use the Oracle TRUNCATE command all rows from the table will be removed. The operation cannot be rolled back and no triggers will be fired. As such, TRUNCATE is faster and doesn't use as much undo space as a DELETE would.  However, both the so_print_log and the so_job_history tables have delete triggers associated with them, so the cleanest way to purge the data in those tables is to use the DELETE command to allow the delete triggers to execute and purge associated data as well.
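If you want to confirm for yourself which delete triggers are present before choosing between TRUNCATE and DELETE, you can query the Oracle data dictionary.  This is a read-only check against the standard USER_TRIGGERS view; the table names match those used above:

```sql
-- List any DELETE triggers defined on the two history tables.
SELECT trigger_name, table_name, triggering_event, status
FROM   user_triggers
WHERE  table_name IN ('SO_PRINT_LOG', 'SO_JOB_HISTORY')
AND    triggering_event LIKE '%DELETE%';
```

If this returns rows with STATUS = ENABLED, use DELETE rather than TRUNCATE on those tables so the triggers can purge the associated data.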
 
If there is a significant amount of data in the tables you are trying to delete, consider using a WHERE clause and performing incremental deletes.  The following SQL statements can help you gather information about the data in those tables so you can determine how far back to start.

 
Determine the oldest record in the so_job_history table:
                    select min(to_char(so_job_finished, 'yyyymmdd hh24:mi:ss')) from so_job_history;
 
Based on what is returned you can decide how many days back from today’s date you want to start deleting data from the history table.
 
Incremental delete statement example:
          delete from so_job_history where so_job_finished < sysdate - 120;
          commit;
 
This statement will delete all rows from the so_job_history table where the so_job_finished date is older than 120 days from today’s date.
 
Remember that after performing a DELETE operation you need to COMMIT or ROLLBACK the transaction to make the change permanent or to undo it.
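For very large history tables, deleting everything in one statement can consume a great deal of undo space.  One common approach is to delete in batches and commit between batches; the following is a sketch of that pattern using a hypothetical batch size of 10,000 rows and the same 120-day cutoff as above (adjust both to suit your environment):

```sql
-- Sketch: batched delete from so_job_history, committing every 10,000 rows.
-- Row-level delete triggers still fire for each deleted row.
BEGIN
  LOOP
    DELETE FROM so_job_history
    WHERE  so_job_finished < SYSDATE - 120
    AND    ROWNUM <= 10000;
    EXIT WHEN SQL%ROWCOUNT = 0;  -- stop when no rows remain in range
    COMMIT;
  END LOOP;
  COMMIT;
END;
/
```

Because each batch is committed separately, an interruption leaves the work already done in place rather than forcing one large rollback.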
 
In addition, you can also purge the following data from the source data as it is not applicable to the target instance:
  • You should have ensured there were no jobs in the backlog when you created the import.  To ensure there are no records in those tables from the source installation, you can run the following SQL statements:
delete from so_job_queue;
truncate table aw_job_queue_activity;
commit;
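Before purging, you may want to confirm how many backlog rows actually came across from the source.  This read-only check counts the rows in both tables in one query:

```sql
-- Count any leftover backlog rows from the source instance.
SELECT (SELECT COUNT(*) FROM so_job_queue)          AS job_queue_rows,
       (SELECT COUNT(*) FROM aw_job_queue_activity) AS activity_rows
FROM   dual;
```

If both counts are zero, the purge statements above are a no-op and can be skipped.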
 
  • aw_reports_history – this is data used for the CALC_HISTORY_STATISTICS job based on the information in the other tables truncated above.  Use the following SQL statement to delete the existing data from the source installation in that table:
truncate table aw_reports_history;
commit;
 
  • Drop any backup tables you may have generated, as they will not be necessary on the new system.  You can run the following SQL statement to get a listing of all the tables you have imported into the new instance as well as the number of rows in each table:
SELECT
   table_name,
   to_number (extractvalue(xmltype(dbms_xmlgen.getxml('SELECT count(*) c FROM '||table_name)),'/ROWSET/ROW/C')) count
FROM
   user_tables
ORDER by
   table_name;
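Once you have identified your backup tables from the listing above, you can generate the DROP statements rather than typing them by hand.  The naming pattern below (a “_BAK” suffix) is only an assumption for illustration; substitute whatever convention you actually used when creating the backups:

```sql
-- Generate DROP statements for backup tables.
-- ASSUMPTION: backup tables end in _BAK; adjust the LIKE pattern to your naming.
SELECT 'DROP TABLE ' || table_name || ' PURGE;' AS drop_stmt
FROM   user_tables
WHERE  table_name LIKE '%\_BAK' ESCAPE '\'
ORDER BY table_name;
```

Review the generated statements carefully before running them, as DROP TABLE ... PURGE cannot be rolled back.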
 
          Contact support if you need help identifying any tables that are not AM default tables.