GDG stands for Generation Data Group. I explain the GDG life cycle in the following steps:

Top 5 GDG Prime Points

  1. Create a GDG
  2. Create a GDG model
  3. Create the first generation
  4. Create a new generation
  5. Delete the GDG

1. Create a GDG

Create the GDG base. The following JCL, which calls IDCAMS, does it. LIMIT(15) is the number of generations you wish to keep; NOEMPTY means that when the limit is exceeded only the oldest generation is uncataloged, and SCRATCH physically deletes a generation data set when it is rolled off. Of course, change the GDG name to suit your requirements:

//STEP1 EXEC PGM=IDCAMS  
//SYSPRINT DD SYSOUT=*  
//SYSIN DD *   
DEFINE GDG(NAME(STEWART.APPLY.LOG.GDGROUP1) -   
LIMIT(15) -   
NOEMPTY -   
SCRATCH)
/*

2. Create a GDG Model

Create a model (template) data set for the individual generations. Here the default DCB attributes for the Apply or Capture log are used. The model must be cataloged so that later DCB=dsname references can locate it. Note that the space in this example is rather small (5 tracks), so you may want to increase it.

//STEP020 EXEC PGM=IEFBR14 
//GDGMODEL DD DSN=STEWART.APPLY.LOG.GDMODEL1, 
// DISP=(NEW,CATLG,DELETE), 
// UNIT=SYSDA, 
// SPACE=(TRK,5), 
// DCB=(LRECL=1024,RECFM=VB,BLKSIZE=6144,DSORG=PS)


3. Create the First Generation


As a test, you can create your first generation data set by running this JCL example. Each time you run this step, a new generation data set is created in your group.

//STEP010 EXEC PGM=IEBGENER 
//SYSPRINT DD SYSOUT=* 
//SYSIN DD DUMMY 
//SYSUT1 DD *  
TEST DATA LINE 1
TEST DATA LINE 2
/* 
//SYSUT2 DD DSN=STEWART.APPLY.LOG.GDGROUP1(+1),
// DISP=(NEW,CATLG,DELETE),
// SPACE=(TRK,5), 
// DCB=STEWART.APPLY.LOG.GDMODEL1

After you have created several generations of log files an ISPF 3.4 display of your GDG would look like this:

STEWART.APPLY.LOG.GDGROUP1
STEWART.APPLY.LOG.GDGROUP1.G0003V00
STEWART.APPLY.LOG.GDGROUP1.G0004V00
STEWART.APPLY.LOG.GDGROUP1.G0005V00
.
.
.
STEWART.APPLY.LOG.GDMODEL1

The ISPF 3.4 screen header also tells you how many generations exist.

For example, the header Data Sets Matching STEWART.APPLY.LOG.GDGROUP1.G* Row 1 of 15 indicates that there are 15 generations of the data set.
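In your own JCL you can also refer to generations relatively: (0) is the newest cataloged generation, (-1) is the one before it, and (+1) allocates a new one, as in the examples above. As a minimal sketch (assuming the GDG defined in Step 1), this IEBGENER step prints the most recent log generation to SYSOUT:

//READLOG EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN DD DUMMY
//SYSUT1 DD DSN=STEWART.APPLY.LOG.GDGROUP1(0),DISP=SHR
//SYSUT2 DD SYSOUT=*

Relative numbers are resolved once per job, so every step in the same job sees the same (0) and (+1).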

4. New Generation GDG

Because the Apply or Capture log file is not referenced by a DD statement, you must use IEBGENER to copy your app.log or cap.log file to the GDG. Place this step ahead of your Apply or Capture JCL.

This step creates a new generation in your group. Consider using the LOGREUSE=N parameter when you start Capture or Apply so that each generation of the log is unique to the specific Capture or Apply run.

//COPYLOG EXEC PGM=IEBGENER 
//SYSPRINT DD SYSOUT=*
//SYSUT1 DD DSN=STEWART.DSN9.STEVE1.APP.LOG,
// DISP=SHR
//SYSUT2 DD DSN=STEWART.APPLY.LOG.GDGROUP1(+1),
// DISP=(NEW,CATLG,DELETE),
// SPACE=(TRK,5),
// DCB=STEWART.APPLY.LOG.GDMODEL1
//SYSIN DD DUMMY
//SYSOUT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*

5. Delete GDG


If you need to delete your GDG, delete the individual generation data sets (G0003V00, G0004V00, and so on) and then run this IDCAMS job to delete the GDG base. The FORCE keyword deletes the base even if generations are still cataloged.

//STEP010 EXEC PGM=IDCAMS 
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
  DELETE (STEWART.APPLY.LOG.GDGROUP1) GDG FORCE
/*

Finally, this example GDG was created with 15 generations via the LIMIT(15) parameter. If you wish to change the number of generations, run this IDCAMS ALTER example, which increases the limit to 50. In the ALTER statement, use the GDG name you created in Step 1.

//STEP010 EXEC PGM=IDCAMS 
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
  ALTER STEWART.APPLY.LOG.GDGROUP1 LIMIT(50)
/*

The maximum value for LIMIT is 255 for a standard GDG. (Extended GDGs, available from z/OS 2.2, allow up to 999.)
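To confirm the new limit, or any of the base attributes such as NOEMPTY and SCRATCH, you can list the GDG base catalog entry. A minimal sketch, assuming the GDG name from Step 1:

//STEP010 EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
  LISTCAT ENTRIES(STEWART.APPLY.LOG.GDGROUP1) GDG ALL
/*

The listing's attributes section shows LIMIT along with the EMPTY/NOEMPTY and SCRATCH/NOSCRATCH settings.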

Reference: IBM documentation.
