Hiperbatch and the Data Lookaside Facility

Hiperbatch is a z/OS® performance enhancement that works with the Data Lookaside Facility (DLF) to let batch jobs and started tasks share access to a data set (also called a data object). IBM Z Workload Scheduler provides control information to DLF concerning which operations are allowed to connect to which DLF object, and which data sets are eligible for Hiperbatch.

Within IBM Z Workload Scheduler, a data set that is eligible for Hiperbatch is treated as a resource. Using the RESOURCES panel, you can define data sets with the DLF attribute. The DLF exit sample, EQQDLFX, can then make the following decisions about DLF processing:
  • Is this data set eligible for Hiperbatch?
  • Should this operation be connected to this data object?
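The two decisions above can be illustrated with a minimal sketch. This is Python for illustration only: the real EQQDLFX exit is a z/OS assembler exit routine, and all names and data structures below are hypothetical.

```python
# Illustrative sketch only: the real EQQDLFX exit is a z/OS assembler
# exit routine. Data set names, job names, and structures are hypothetical.

# Data sets defined with the DLF attribute in the RESOURCES panel
# (hypothetical sample data).
DLF_ELIGIBLE = {"PROD.SALES.DAILY", "PROD.INVENTORY.MASTER"}

# Operations authorized to connect to each DLF object
# (hypothetical mapping: data set name -> set of job names).
AUTHORIZED = {"PROD.SALES.DAILY": {"SALESJOB", "RPTJOB"}}

def is_hiperbatch_eligible(dsname: str) -> bool:
    """Decision 1: is this data set eligible for Hiperbatch?"""
    return dsname in DLF_ELIGIBLE

def may_connect(job: str, dsname: str) -> bool:
    """Decision 2: should this operation be connected to this data object?"""
    return is_hiperbatch_eligible(dsname) and job in AUTHORIZED.get(dsname, set())
```

In this sketch, eligibility is a property of the data set alone, while connection also depends on which operation is asking, mirroring the two separate questions the exit answers.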

IBM Z Workload Scheduler issues enqueues on the job and data set name to notify the DLF exit that the job to be scheduled will use Hiperbatch. When the job ends, IBM Z Workload Scheduler checks whether the same data set will be used by the immediate successor operation or by any other ready operation. If so, IBM Z Workload Scheduler does not purge the data object. Otherwise, it initiates purge processing of the data object (that is, it removes the object from Hiperspace). For details about installing IBM Z Workload Scheduler Hiperbatch support, see Customization and Tuning.
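The end-of-job keep-or-purge decision can be sketched as follows. Again, this is an illustrative Python model under assumed data structures; the actual logic is internal to the scheduler.

```python
# Illustrative sketch of the end-of-job purge decision: keep the data
# object if any ready operation (including the immediate successor)
# still uses the data set, otherwise purge it from Hiperspace.
# All names and structures here are hypothetical.

def should_purge(dsname: str, ready_operations: list) -> bool:
    """Return True if no ready operation still needs the data object,
    so purge processing can remove it from Hiperspace."""
    return not any(dsname in op.get("data_sets", ()) for op in ready_operations)

# Example: a ready successor still uses the object, so it is kept.
ready = [{"job": "RPTJOB", "data_sets": {"PROD.SALES.DAILY"}}]
keep_it = should_purge("PROD.SALES.DAILY", ready)   # False: object is retained
purge_it = should_purge("PROD.OTHER.FILE", ready)   # True: object is purged
```

Keeping the object alive across consecutive users is the point of Hiperbatch: successor operations read the data from Hiperspace instead of re-reading it from DASD.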

Note: The controller can create DLF objects on any system in the controller's global resource serialization (GRS) ring, but operations that need to connect to the object must run on the same system as the controller.