The SFS system has several file spaces that contain a large number of files and authorizations. This message is issued and the file space is not backed up. What needs to be done to resolve this problem?
There are two ways to implement a solution, and HiDRO's virtual storage may need to be increased for either one. From the output of the FC file you can determine which filepool contains the offending file space.
For our example, the filepool is SYSD and the file space is TPFDATA.
Create a file named SFSCORE COMMAND on SYBMON's 191 disk and populate it with these records:
B U <UNIT> USER (<USERID> RR <PASSWD>) TO TAPE CORE 30M OSF -
VM3.TEST.FULL.BACKUP ( OTAPE FSN *
The OSF option may or may not be needed, depending on your site.
Add a record to the SELECT file for the job or jobs that are failing:
I: SYSD * * * SFSCORE
This record tells HiDRO, when it creates the job, to use the SFSCORE COMMAND file for the syntax when creating tasks for filepool SYSD. Note the CORE 30M specified in the SFSCORE COMMAND file.
If your default OPCORE is 5M, then when HiDRO carves out the storage for filepool SYSD, all file spaces in that filepool will instead be backed up with 30M of storage. Make sure that HiDRO has enough vstor to accommodate the additional 30M while this filepool is being backed up. If there is only one offending file space, the SELECT record could instead be:
I: SYSD * TPFDATA * SFSCORE
This would only use 30M of vstor for this one file space while it is being backed up.
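The scoping difference between the two SELECT records above can be illustrated with a small sketch. Note the field meanings here are assumptions inferred from the two examples (filepool in the second field, file space in the fourth, '*' as a wildcard); the actual HiDRO matching rules may differ:

```python
# Hypothetical illustration of how an "I:" SELECT record narrows which
# filepool/file space backup tasks pick up the SFSCORE COMMAND file.
# Field positions and wildcard semantics are assumptions for illustration.
def record_matches(record, filepool, filespace):
    """Return True if the SELECT record applies to this filepool/file space."""
    fields = record.split()
    # fields[0] is the record type ("I:"); "*" is assumed to match anything.
    rec_pool, rec_space = fields[1], fields[3]
    pool_ok = rec_pool == "*" or rec_pool == filepool
    space_ok = rec_space == "*" or rec_space == filespace
    return pool_ok and space_ok

# "I: SYSD * * * SFSCORE" applies to every file space in filepool SYSD:
assert record_matches("I: SYSD * * * SFSCORE", "SYSD", "TPFDATA")
assert record_matches("I: SYSD * * * SFSCORE", "SYSD", "OTHERFSP")
# "I: SYSD * TPFDATA * SFSCORE" applies only to the TPFDATA file space:
assert record_matches("I: SYSD * TPFDATA * SFSCORE", "SYSD", "TPFDATA")
assert not record_matches("I: SYSD * TPFDATA * SFSCORE", "SYSD", "OTHERFSP")
```

The narrower record is why only the one file space is backed up with the enlarged 30M of vstor.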
After completing the changes to the SELECT file, do a generate (PF4) and verify that the task for the specified filepool/file space has the correct OPCORE added to it.
The second choice is to XEDIT the SFSCTL file for the job or jobs on SYBMON's 191 disk and add 'CORE 30M' to it. This will cause *all* of the SFS filepools to be backed up using 30M of virtual storage.
30M may not be enough virtual storage. When choosing a larger amount, keep in mind that EVERY file space/filepool would need that much storage; if MAXTAPE is 3, the possibility exists that all three 30M tasks could be running at the same time, requiring 90M. Over-estimate the vstor so that, as the file spaces grow, this does not become a problem in the future.
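The worst-case sizing above is simple arithmetic: the per-task CORE value times the MAXTAPE concurrency limit, plus some headroom for growth. A minimal sketch (the function name and the 1.5x headroom factor are our own illustrative choices, not HiDRO settings):

```python
def worst_case_vstor_mb(core_mb, maxtape, headroom=1.5):
    """Peak vstor if every concurrent tape task runs with the enlarged
    CORE value, padded by a headroom factor for future file-space growth.
    The headroom factor is an assumption; pick one that fits your site."""
    return core_mb * maxtape * headroom

# CORE 30M with MAXTAPE 3: all three tasks at once need 90M;
# with 1.5x headroom, plan HiDRO's vstor for about 135M.
print(worst_case_vstor_mb(30, 3, headroom=1.0))  # -> 90
print(worst_case_vstor_mb(30, 3))                # -> 135.0
```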