How to manually partition the UIM database for Oracle

Document ID : KB000071942
Last Modified Date : 05/03/2018

The following document describes a manual process for partitioning tables in a UIM database hosted on Oracle 11gR2 Enterprise Edition or above.

When enabling partitioning in an environment with a very large database, it is advisable to manually partition the largest tables before allowing the data_engine to partition the smaller ones; otherwise, the data_engine may time out during maintenance periods if it attempts to partition the tables on its own.

We generally recommend enabling partitioning wherever possible; even on a smaller database, it will improve the performance of data maintenance. On a larger database (over 10GB in size) this is especially important, as data maintenance on an unpartitioned database can take many hours to complete.

While it is difficult to pinpoint an exact number of rows that constitutes a "large" table and makes it a candidate for manual partitioning, a rule of thumb is to manually partition any table with more than 50 million rows.
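To get approximate row counts per table, you can query the Oracle data dictionary. This is a quick way to spot candidates; note that dba_tables.num_rows is only as current as the last optimizer statistics gathering, and the query below assumes the UIM schema owner is uimuser (adjust to your environment):

-- Approximate row counts per RN_QOS_DATA table; num_rows reflects
-- the last statistics gathering, not a live count.
select table_name, num_rows
  from dba_tables
 where owner = upper('uimuser')
   and table_name like 'RN_QOS_DATA%'
 order by num_rows desc nulls last;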

To determine the top 20 largest RN_QOS_DATA tables, you may run the following query as SYS/SYSDBA on the Oracle Database:

select *
  from (select owner, segment_name, segment_type, tablespace_name,
               round(bytes/1024/1024/1024,1) "size_in_GB"
          from dba_segments
         where owner = upper('uimuser')
         order by bytes desc)
 where segment_name like '%RN_QOS_DATA%'
   and segment_type = 'TABLE'
   and rownum < 21
 order by 5 desc;


************ WARNING: Use at your own risk ************

Please review this and all queries in this article with your DBA before using them.

We also highly recommend that these queries and their results be tested in a Test/DEV environment before running them in Production.


Below is the full process and script for manually partitioning the tables associated with a QoS definition.

************ IMPORTANT WARNING ************
Note that once you begin this process, you have 14 days to complete it. As soon as you begin the next step, there is no turning back. If the process is not completed within 14 days, you risk losing one day's worth of data for each additional day until the process is completed.
1. Open the data_engine probe in Raw Configure mode (select the probe, hold down the SHIFT key, and right-click to open it in Raw Configure mode).
2. Change the value of data_management_partition from "no" to "yes" and restart the data_engine. This commits a change to the database that is needed for the next steps.
3. Open Raw Configure again, set data_management_active to "no", and restart the data_engine again.
4. From this point on, it is fine to leave the data_engine active; there is no need to deactivate it.
5. Determine which table to partition first. The example below uses RN_QOS_DATA_0035, but this will depend on table size. (It is advisable to start with the largest tables, as mentioned above.)
6. The number of this table (e.g. 0035), with the leading zeroes stripped, is the value to assign to the lQosDefId variable in the following script. It also matches the qos_def_id associated with this table in S_QOS_DEFINITION.
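Before running the maintenance script, it is worth confirming which QoS definition the table number corresponds to. A simple sanity check (using select * to avoid depending on specific column names, which may vary between UIM versions):

-- Confirm that qos_def_id 35 corresponds to the QoS you expect
-- before partitioning RN_QOS_DATA_0035.
select * from s_qos_definition where qos_def_id = 35;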


set serveroutput on size 30000;

declare
   lQosDefId number default 35;   -- change this number to reflect which RN table you wish to partition
   lMode varchar2(127) default 'all';
   lStartTime timestamp default systimestamp;
   lLogLevel number default 5;
   lRawCompressedRows number;
   lHistoryCompressedRows number;
   lRawShrunkRows number;
   lHistoryShrunkRows number;
   lErrorMessage varchar2(1024);
   lReturnCode number;
begin
   spn_de_DataMaint(lQosDefId, lMode, lStartTime, lLogLevel, lRawCompressedRows, lHistoryCompressedRows, lRawShrunkRows, lHistoryShrunkRows, lErrorMessage, lReturnCode);
   dbms_output.put_line('lRawCompressedRows=' || to_char(lRawCompressedRows));
   dbms_output.put_line('lHistoryCompressedRows=' || to_char(lHistoryCompressedRows));
   dbms_output.put_line('lRawShrunkRows=' || to_char(lRawShrunkRows));
   dbms_output.put_line('lHistoryShrunkRows=' || to_char(lHistoryShrunkRows));
   dbms_output.put_line('lErrorMessage=' || lErrorMessage);
   dbms_output.put_line('lReturnCode=' || to_char(lReturnCode));
end;
/

Once you have run this script against all the tables you wish to partition, re-enable data_engine maintenance as mentioned above, by setting data_management_active back to "yes" and restarting the data_engine. On the next maintenance run, the data_engine will partition the remaining tables automatically.
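To verify which RN_QOS_DATA tables have been partitioned, you can query the Oracle data dictionary (again assuming the schema owner is uimuser):

-- Lists partitioned RN_QOS_DATA tables along with their
-- partitioning scheme and partition count.
select table_name, partitioning_type, partition_count
  from dba_part_tables
 where owner = upper('uimuser')
   and table_name like 'RN_QOS_DATA%'
 order by table_name;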
Sample data_engine.cfg (values will vary by environment):
   loglevel = 0
   logfile = data_engine.log
   logsize = 200
   locale = English
   data_management_active = no
   data_management_time_spec = DTSTART:20151104T004000|RRULE:FREQ=DAILY;INTERVAL=1
   data_management_timeout = 7200
   auto_reindex = yes
   hub_bulk_size = 20
   thread_count_insert = 0
   data_management_compress = yes
   delete_raw_samples = 181
   delete_history_samples = 720
   raw_data_extra = 15
   daily_avg_age = 744
   daily_avg_age_extra = 15
   number_of_subpartitions = 5
   data_management_partition = yes
   provider = SQLOLEDB
   database = CA_UIM831
   user = sa
   password = N2FFHiT3wTN2eVnIS2i8bA==
   parameters = Network Library=dbmssocn;Language=us_english
   min_free_space = 10
   monitor_interval = 300
   alarm_severity = 5
   mysql_buffer_increase = 5000
   mysql_buffer_size = 5000
   log_bulk_stats = 0
   log_inserted_rows = 1
   log_lsv_rows = 0
   table_maintenance_mode = 0
   table_maintenance_online_mode = 0
   table_maintenance_loglevel = 0
   statistics_age = 24
   statistics_pattern = RN_QOS_DATA%
   statistics_loglevel = 0
   qos_probes = no
   lsv_sleep = 3
   bucket_flush_size = 5000
   bucket_flush_time = 5
   show_admin_upgrade_primary = yes
   server =,1433
   port = 1433
   servicename = 
   db_plugin = Microsoft
   data_engine_id = 1
   threads = 0
   index_frag_low = 5
   index_frag_high = 30
   compress_indexes_only = no
   index_pattern =