Teradata+Material

The following seven strategies improve batch performance, which in turn reduces cycle time.

  • Ensuring the system is properly configured
    A properly configured system has adequate system resources, so that work flow is not inhibited. This includes:
    1. Adequate CPU to avoid CPU queuing
    2. Enough memory so the system does not page
    3. A powerful enough I/O subsystem
    4. A DB2 Data Sharing environment with adequate resources
    As well as resource considerations, ensure that important systems and other software are set up properly:
    5. Workload Manager (WLM) set up to provide good batch throughput
    6. DB2 subsystems optimally tuned for batch
    • Implementing data in memory (DIM)
    Implementing data in memory (DIM) techniques is a complex task, but one likely to yield good results. It means appropriately exploiting the available software facilities to eliminate unnecessary I/Os, such as repeated reads of the same data. These facilities include:
    1. DB2 buffer pools
    2. Virtual I/O (VIO) in Central Storage
    3. Queued Sequential Access Method (QSAM) buffering
    4. Batch LSR Subsystem (BLSR), exploiting VSAM Local Shared Resources buffering
    5. DFSORT Large Memory Object sorting
    To implement DIM techniques, you need spare memory, spare CPU capacity, and an I/O-intensive batch workload.
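The core DIM idea, eliminating repeated reads of the same data, can be sketched in ordinary Python (a conceptual analogue, not a z/OS facility; the record layout and "physical read" here are made up for illustration):

```python
io_count = {"reads": 0}   # counts simulated physical I/Os
cache = {}                # the "data in memory"

def read_reference_record(key):
    """Return a reference record, physically reading it only once."""
    if key not in cache:
        io_count["reads"] += 1                     # simulated physical I/O
        cache[key] = {"key": key, "value": key * 2}
    return cache[key]

# A batch run that looks up the same small set of keys many times:
for _ in range(1000):
    for key in range(10):
        read_reference_record(key)

print(io_count["reads"])  # 10 physical reads instead of 10,000
```

The buffer pools and buffering options listed above apply the same trade: spend memory (and some CPU for cache management) to remove repeated I/Os.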
    • Optimizing I/O
    Optimizing I/O means ensuring that the I/O processing done by batch jobs is performed as quickly as possible, for example by:
    1. Using DS8000 Cache functions correctly
    2. Exploiting modern Channel Programming functions such as zHPF and MIDAWs
    3. Using the fastest FICON channels
    4. Using HyperPAV to minimize IOSQ time
    I/O optimization matters most when the batch workload is I/O-intensive. Where there is little spare CPU capacity, I/O optimization could simply replace an I/O bottleneck with a CPU bottleneck.
    • Increasing parallelism
    Increasing parallelism means running more things alongside each other, such as:
    1. Reducing the elapsed time of a set of jobs by running more of them alongside each other
    2. Performing I/O in parallel, using techniques such as striping
    3. Performing DB2 queries with CPU Query Parallelism to use multiple TCBs
    4. Cloning batch jobs to work against subsets of the data
    To run more work in parallel, you need adequate resources: most importantly spare CPU capacity and memory, and adequate I/O bandwidth.
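The cloning technique can be sketched in Python (a conceptual analogue: real batch clones would be independent jobs, each given its own key range or data subset, rather than threads in one process):

```python
from concurrent.futures import ThreadPoolExecutor

def process_partition(records):
    """One 'clone' of the batch job, working on its own subset of the data."""
    return sum(r * r for r in records)   # stand-in for real processing

def run_cloned(data, clones=4):
    # Partition the data, one subset per clone, much as a cloned job
    # stream might split work by key range.
    partitions = [data[i::clones] for i in range(clones)]
    with ThreadPoolExecutor(max_workers=clones) as pool:
        return sum(pool.map(process_partition, partitions))

data = list(range(1000))
print(run_cloned(data))
```

The essential design point survives the translation: each clone must be able to work on its subset without contending with the others, or the parallelism buys nothing.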
    • Reducing the impact of failures
    Reducing the impact of failures means ensuring that any prolongation of the batch run time by failures is minimized. For example:
    1. Making batch jobs more reliable by fixing problems in the code
    2. Ensuring that recovery procedures are effective and swift
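One common recovery technique is checkpoint/restart: a failed job resumes from its last commit point instead of rerunning from the start. A minimal sketch, assuming a simple file-based checkpoint (on z/OS this would typically use DB2 commit points and a restartable job design instead):

```python
import json
import os
import tempfile

CHECKPOINT = os.path.join(tempfile.gettempdir(), "batch_checkpoint.json")

def load_checkpoint():
    """Return the index of the first unprocessed record (0 on a fresh run)."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)["next"]
    return 0

def save_checkpoint(next_index):
    with open(CHECKPOINT, "w") as f:
        json.dump({"next": next_index}, f)

def run_batch(records, commit_interval=100):
    processed = 0
    for i in range(load_checkpoint(), len(records)):
        # ... process records[i] here ...
        processed += 1
        if (i + 1) % commit_interval == 0:
            save_checkpoint(i + 1)     # commit point: a restart resumes here
    if os.path.exists(CHECKPOINT):
        os.remove(CHECKPOINT)          # clean completion: no restart needed
    return processed
```

After a mid-run failure, rerunning the job skips the records already committed, so a failure late in the window costs only the uncommitted work.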
    • Increasing operational effectiveness
    Increasing operational effectiveness means ensuring scheduling policies are executed accurately and promptly, for example:
    1. Using an automatic scheduler such as Tivoli Workload Scheduler
    2. Using the NetView® automation product
    • Improving application efficiency
    Improving application efficiency means looking for ways to make applications process data more efficiently, such as:
    1. Replacing long sequential searches with hash searches or binary searches
    2. Replacing hand-written processing with DFSORT processing, which is optimized for speed
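The search replacement can be illustrated in Python; the account keys below are made up, and the point is the complexity difference (O(n) sequential scan versus O(log n) binary search or O(1) average-case hash lookup), not the data:

```python
from bisect import bisect_left

# Hypothetical sorted reference file of account keys
accounts = sorted(f"ACCT{i:05d}" for i in range(0, 100000, 7))

def linear_find(key):
    # O(n): the kind of long sequential search worth replacing
    for a in accounts:
        if a == key:
            return True
    return False

def binary_find(key):
    # O(log n): binary search over the sorted list
    i = bisect_left(accounts, key)
    return i < len(accounts) and accounts[i] == key

account_set = set(accounts)   # O(1) average-case hash lookup: key in account_set
```

For a file of n keys probed m times, the sequential version does on the order of n*m comparisons; the binary version does m*log2(n), which is why this is one of the cheapest application-level wins.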

    Ref: IBM Redbooks
