If a specific table is in question, you can easily set up the command to check from the data rows to the index or from the index to the data rows. Here is an example of the commands to use - first to check from the data to the index, then from the index to the data - using table ACE in database 143:
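The exact DBUTLTY keyword syntax can vary by release, so the control statements below are only a sketch built from the options discussed in this article (DBID, table name, and the DATAINDEX/INDEXDATA scan direction); verify the parameter keywords against the DBUTLTY reference for your release before running them:

```
VERINDEX DBID=143,TABLE=ACE,OPTION=DATAINDEX
VERINDEX DBID=143,TABLE=ACE,OPTION=INDEXDATA
```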
In addition, you can add another parameter (KEYNAME=kkkkk) if you want to check a specific key name for this table. Otherwise, all keys for the table are processed by default.
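For example, to check only a single key, you might code something like the following (the key name ACEKEY is hypothetical, and the keyword syntax should be confirmed against your DBUTLTY reference):

```
VERINDEX DBID=143,TABLE=ACE,KEYNAME=ACEKEY,OPTION=INDEXDATA
```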
It is important to note that when scanning with the DATAINDEX option, each data row is locked until all of the keys that refer to that row have been processed, and then the row is unlocked. With the INDEXDATA option, there is no locking, so a data row could be deleted or moved (due to compression settings) before the index entry is updated, producing an error. In that case, it is recommended to run the job a second time for just that key name to determine whether a true problem exists or the error was due to a timing issue.
If you want to easily scan all the tables in a database, you must create the VERINDEX commands for each table. For some databases, identifying the tables and then coding the commands could be time-consuming. Attached to this case is a sample z/OS job stream (TEC1032014.zip) that takes a parameter of one or more comma-separated DBIDs and automatically generates and runs the commands. This job will:
- Use Rexx to create a series of SQL commands to get all the table names;
- Run the DBSQLPR program to generate the individual DBUTLTY commands. This program will run twice, first to generate the list of commands to SYSOUT for your reference (or to manually capture into a file), and then a second time to create the DBUTLTY command file. The commands will:
> Produce a CXX report for each DBID
> Run the VERINDEX function for each table in each DBID (both DATAINDEX and INDEXDATA)
> Produce an Index Population Report (Type=A) for each table and key in each DBID
- Finally, run the DBUTLTY commands generated by the DBSQLPR program
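As a rough illustration of the generation step, a DBSQLPR query can concatenate literal text around catalog values to build the DBUTLTY statements. The catalog table and column names below (SYSADM.TABLES, TBLNAME, DBID) are placeholders, not confirmed names - substitute the catalog objects available at your site, and note that the attached job stream already handles this for you:

```sql
-- Placeholder catalog names; adjust to your site's Datacom SQL catalog.
SELECT 'VERINDEX DBID=143,TABLE=' || TBLNAME || ',OPTION=DATAINDEX'
  FROM SYSADM.TABLES
 WHERE DBID = 143;
```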
Note that if you have tables with a large number of records and many keys, this process could run for a while. In one test of a table with over 2 million rows and 12 keys, the job ran for over 90 minutes but used only about 4 CPU minutes, as the majority of the work was done in the MUF.