The following seven strategies improve batch performance, which in turn reduces cycle time.
- Ensuring the system is properly configured
- A properly configured system has adequate resources, so that the flow of work is not inhibited. These resources include:
- Adequate CPU to avoid CPU queuing
- Memory so the system does not page
- A powerful enough I/O subsystem
- A DB2 Data Sharing environment with adequate resources
- Beyond resource considerations, ensure that key subsystems and other software are set up properly, as follows:
- Workload Manager (WLM) set up to provide for good batch throughput
- DB2 subsystems optimally tuned for batch
- Implementing data in memory (DIM)
- Implementing data in memory (DIM) techniques is a complex task, but one likely to yield good results. It means appropriately exploiting the available software facilities to eliminate unnecessary I/Os, such as repeated reads of the same data. These facilities include:
- DB2 buffer pools
- Virtual I/O (VIO) in Central Storage
- Queued Sequential Access Method (QSAM) buffering
- Batch LSR Subsystem (BLSR) exploiting VSAM Local Shared Resources buffering
- DFSORT Large Memory Object sorting
- To be able to implement DIM techniques, you need spare memory, spare CPU capacity, and a batch workload that is I/O-intensive
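The core DIM idea — serve repeated reads of the same data from memory instead of re-driving I/O — can be sketched as follows. This is a hypothetical illustration in Python (the `BlockCache` class and its counters are invented for this sketch), analogous in spirit to a DB2 buffer pool or BLSR buffering, not an implementation of those facilities:

```python
import io

# Hypothetical sketch: a tiny read cache. Repeated reads of the same
# block are served from memory instead of issuing another I/O, which is
# the essence of data-in-memory techniques.
class BlockCache:
    def __init__(self, file_obj, block_size=4096):
        self.file = file_obj
        self.block_size = block_size
        self.cache = {}          # block number -> bytes
        self.physical_reads = 0  # I/Os actually issued
        self.logical_reads = 0   # read requests from the application

    def read_block(self, block_no):
        self.logical_reads += 1
        if block_no not in self.cache:   # miss: one real I/O
            self.file.seek(block_no * self.block_size)
            self.cache[block_no] = self.file.read(self.block_size)
            self.physical_reads += 1
        return self.cache[block_no]

# Ten logical reads of the same block cost only one physical read.
data = io.BytesIO(b"x" * 4096 * 8)
cache = BlockCache(data)
for _ in range(10):
    cache.read_block(3)
print(cache.logical_reads, cache.physical_reads)  # 10 1
```

Note the trade-off stated above: the cache spends memory (and some CPU for lookups) to remove I/Os, which only pays off when the workload is I/O-intensive.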
- Optimizing I/O
- Optimizing I/O means ensuring that the I/O processing done by batch jobs is performed as quickly as possible, through techniques such as:
- The correct use of DS8000 Cache functions
- Exploiting modern Channel Programming functions such as zHPF and MIDAWs
- Using the fastest FICON channels
- Using HyperPAV to minimize IOSQ time
- I/O optimization is most important when the batch workload is I/O-intensive. Where there is little spare CPU capacity, I/O optimization could simply replace an I/O bottleneck with a CPU bottleneck.
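The hardware features above (zHPF, MIDAWs, HyperPAV) cannot be demonstrated off the mainframe, but the underlying principle — fewer, larger I/O requests cost less per byte than many small ones — can be illustrated with simple buffer sizing. This is an analogy only; the function and sizes are invented for the sketch:

```python
import io

# Count how many read requests it takes to consume the same data with
# different buffer sizes. Fewer, larger requests mean less per-request
# overhead, which is the same principle behind blocking and zHPF.
def count_reads(data, buffer_size):
    src = io.BytesIO(data)
    reads = 0
    while src.read(buffer_size):
        reads += 1
    return reads

payload = b"r" * 1_000_000
small = count_reads(payload, 4_096)     # many small requests
large = count_reads(payload, 262_144)   # far fewer large requests
print(small, large)  # small is many times larger than large
```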
- Increasing parallelism
- Increasing parallelism means running more things alongside each other, such as:
- Reducing the elapsed time of a set of jobs by running more of them alongside each other.
- Performing I/O in parallel, using techniques such as Stripes.
- Performing DB2 queries using CPU Query Parallelism to use multiple TCBs.
- Cloning batch jobs to work against subsets of the data.
- To be able to run more work in parallel, you need adequate resources, most importantly spare CPU capacity and memory, and adequate I/O bandwidth.
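The cloning technique above can be sketched as splitting the input into subsets and running identical workers against them alongside each other. This is a hypothetical Python sketch (`process_subset` and `run_cloned` are invented names); threads stand in for the cloned jobs here, whereas on z/OS each clone would be a separate job with real CPU parallelism:

```python
from concurrent.futures import ThreadPoolExecutor

def process_subset(records):
    # Stand-in for the real batch logic applied to one partition.
    return sum(r * r for r in records)

def run_cloned(records, clones=4):
    # Split the input into roughly equal subsets, one per clone.
    size = (len(records) + clones - 1) // clones
    subsets = [records[i:i + size] for i in range(0, len(records), size)]
    # Run the clones alongside each other, then merge their results.
    with ThreadPoolExecutor(max_workers=clones) as pool:
        partials = list(pool.map(process_subset, subsets))
    return sum(partials)

data = list(range(10_000))
print(run_cloned(data) == sum(r * r for r in data))  # True
```

As the bullet above notes, this only helps if there is spare CPU and I/O bandwidth for the extra concurrent work, and it requires that the per-subset results can be merged correctly.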
- Reducing the impact of failures
- Reducing the impact of failures means minimizing any prolongation of the batch workload's run time caused by failures. Actions include:
- Making batch jobs more reliable by fixing problems in the code.
- Ensuring that recovery procedures are effective and swift.
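One common way to make recovery swift is checkpoint/restart: the job records its progress as it runs, so a rerun after a failure resumes where it left off instead of reprocessing the whole input. The following is a hypothetical Python sketch of that pattern (the checkpoint file name, `run_job`, and the simulated abend are all invented for illustration):

```python
import json
import os

CHECKPOINT = "job.ckpt"  # assumed checkpoint file name for this sketch

def load_checkpoint():
    # Resume point: the first record not yet processed.
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)["next_record"]
    return 0

def save_checkpoint(next_record):
    with open(CHECKPOINT, "w") as f:
        json.dump({"next_record": next_record}, f)

def run_job(records, fail_at=None):
    processed = []
    for i in range(load_checkpoint(), len(records)):
        if fail_at is not None and i == fail_at:
            raise RuntimeError("simulated abend")
        processed.append(records[i] * 2)  # stand-in for real work
        save_checkpoint(i + 1)            # progress survives a failure
    return processed

# First run fails partway; the rerun processes only the remainder.
records = list(range(10))
try:
    run_job(records, fail_at=6)
except RuntimeError:
    pass
remainder = run_job(records)  # resumes at record 6
print(remainder)  # [12, 14, 16, 18]
os.remove(CHECKPOINT)
```

The design trade-off is checkpoint frequency: checkpointing after every record (as here) minimizes rework after a failure but adds overhead to every record processed.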
- Increasing operational effectiveness
- Increasing operational effectiveness means ensuring that scheduling policies are executed accurately and promptly, for example:
- Using an automatic scheduler such as Tivoli Workload Scheduler.
- Using the NetView® automation product.
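What an automatic scheduler contributes to cycle time is releasing each job the moment its predecessors complete, with no operator delay. A minimal sketch of that dependency ordering, using Python's standard `graphlib` (the job names here are invented for illustration, not taken from any real schedule):

```python
from graphlib import TopologicalSorter

# Hypothetical batch schedule: each job lists its predecessors.
dependencies = {
    "EXTRACT": [],
    "SORT":    ["EXTRACT"],
    "UPDATE":  ["SORT"],
    "REPORT":  ["SORT"],
}

# A scheduler derives a valid run order from the dependencies, so
# predecessors always complete before their successors are released.
order = list(TopologicalSorter(dependencies).static_order())
print(order)
```

A product such as Tivoli Workload Scheduler does far more than this (calendars, resources, restart handling), but the dependency-driven release shown here is the core of executing a scheduling policy promptly.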
- Improving application efficiency
- Improving application efficiency means looking at ways to make the applications process more efficiently, such as:
- Replacing obviously long sequential searches by hash searches or binary searches.
- Replacing self-written processing with DFSORT processing as this is optimized for speed.
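The first replacement above — a sequential search swapped for a binary search on sorted keys — turns O(n) probes into O(log n). A small sketch using Python's standard `bisect` module (the function names are invented for the example):

```python
from bisect import bisect_left

def sequential_search(keys, target):
    # May scan the entire list before finding (or missing) the target.
    for i, k in enumerate(keys):
        if k == target:
            return i
    return -1

def binary_search(sorted_keys, target):
    # Halves the search space on each probe; requires sorted keys.
    i = bisect_left(sorted_keys, target)
    if i < len(sorted_keys) and sorted_keys[i] == target:
        return i
    return -1

keys = list(range(0, 1_000_000, 2))  # sorted even numbers
print(binary_search(keys, 123_456) == sequential_search(keys, 123_456))  # True
```

For half a million keys, the binary search needs about 19 probes where the sequential search may need hundreds of thousands; the prerequisite is that the keys are kept sorted.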
Ref: IBM Redbooks