3 Key points of the ACQUIRE BIND option in DB2

  • The ACQUIRE bind option specifies that resources for the packages in the plan are to be acquired when the application first accesses them.

ACQUIRE
( USE )
( ALLOCATE )  ==> Deprecated

  • Used on BIND PLAN and REBIND PLAN

This behaviour is the default, regardless of whether you specify ACQUIRE(USE), the deprecated ACQUIRE(ALLOCATE), or neither option. The ACQUIRE option does not affect page, row, LOB, or XML locks.

(USE)
Acquires table space locks only when the application program first uses them.
(ALLOCATE)
This option has no effect. ALLOCATE is deprecated. If you specify ACQUIRE(ALLOCATE) DB2® issues a warning and uses ACQUIRE(USE).

Process           Default value
BIND PLAN         USE
BIND PACKAGE      N/A
REBIND PLAN       USE
REBIND PACKAGE    N/A

  • There is no ACQUIRE option for packages. A package always acquires resources when it first uses them, as if you specified ACQUIRE(USE).
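As a hedged sketch, a batch BIND PLAN job that states ACQUIRE(USE) explicitly might look like the following; the subsystem name (DSN1), plan name (MYPLAN), and collection (MYCOLL) are hypothetical and site-specific:

```jcl
//BINDPLN  EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
 DSN SYSTEM(DSN1)
 BIND PLAN(MYPLAN)    -
      PKLIST(MYCOLL.*) -
      ACQUIRE(USE)     -
      RELEASE(COMMIT)
 END
/*
```

Because ACQUIRE(USE) is the default, omitting the option has the same effect.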

Ref: IBM

How to know your version of DB2 in Mainframe

A common question for mainframe DB2 programmers is where to find the version of DB2 that a subsystem is running. The post below answers this question.

Note that each DB2 subsystem can run a different version of DB2.

As DB2 goes through V8, V9, and now V10, with different mode levels (compatibility mode, enabling-new-function mode, and new-function mode), it is important to know which version is running in the subsystem you are working in. Many shops have different subsystems on different versions, especially in their test environments.

It is very important for developers to know this so they do not mistake what looks like an SQL coding error for one, when it is actually caused by the subsystem not being on the right version to handle a particular function.

You can use either of the following two ways to get version information:

  • Choose DISPLAY GROUP from the DB2I menu

DISPLAY GROUP (DB2)
The DB2® command DISPLAY GROUP displays information about the data sharing group to which a DB2 subsystem belongs.

-DISPLAY GROUP

DISPLAY GROUP DETAIL displays the DB2 subsystem and group mode (conversion mode, enabling new function mode, or DB2 Version 8 new-function mode or later).

Abbreviation: -DIS GROUP

  • Issue the following GETVARIABLE statement (the GETVARIABLE function was introduced in V8):

SELECT GETVARIABLE('SYSIBM.VERSION')
FROM SYSIBM.SYSDUMMY1

A string in the format PPPVVRRM comes back, where:

  • PPP is the product string, set to DSN.
  • VV is a two-digit version identifier, such as 09.
  • RR is a two-digit release identifier, such as 01.
  • M is a one-digit maintenance-level identifier, such as 5.
You can execute either of these in the subsystem you’re currently working in.

Remember that each subsystem could be under different versions and releases.
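As a sketch, you can also split the returned string into its parts with SUBSTR; the column aliases here are illustrative only:

```sql
SELECT SUBSTR(GETVARIABLE('SYSIBM.VERSION'), 1, 3) AS PRODUCT_ID,
       SUBSTR(GETVARIABLE('SYSIBM.VERSION'), 4, 2) AS VERSION_ID,
       SUBSTR(GETVARIABLE('SYSIBM.VERSION'), 6, 2) AS RELEASE_ID,
       SUBSTR(GETVARIABLE('SYSIBM.VERSION'), 8, 1) AS MAINT_LEVEL
FROM SYSIBM.SYSDUMMY1
```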

2 Top features of IBM DB2 CHECK DATA utility

  1. The CHECK DATA option checks for violations of the referential integrity rules between two tables.
  2. It also checks whether data values conform to the data validation constraints applied to the table space that is specified with the utility command.

For example, if two tables EMP and DEPT are joined on the field dept_no, the utility checks whether the values in both tables correspond with each other. If the utility detects any error, it reports the error and can possibly resolve it.

The utility can be run for one or more table spaces, or for a specific partition of a table space. If the utility encounters any violation, it deletes the invalid rows of data or copies them to an exception table.

The table space being checked is placed in the CHECK-pending status. After reporting and solving the errors, the CHECK-pending status is reset. The CHECK DATA option runs after a conditional restart or when the base tables do not correspond to each other.

Related: DB2 LOAD and UNLOAD with SHRLEVEL and PARALLEL

Some scenarios in which this utility works are:

  • When a table is loaded without the ENFORCE CONSTRAINTS option. This option of the LOAD utility enforces the constraints that were defined when the table was created.
  • When a check constraint is added to an existing table and the existing data violates it.
  • When the CHECK DATA utility detected violations of the data integrity rules and deleted the erroneous rows during a previous execution; you can run the utility again to verify that the problem is fixed.
  • When you need to perform partial recovery of the table spaces that are referentially joined.
  • When the reliability of data that binds the table spaces referentially needs to be checked.
  • When the CHECK-pending status of the table space is to be removed for future access and use of data.

//SYSIN DD *
CHECK DATA TABLESPACE DSN9D41A.DSN9S41D
SCOPE ALL
FOR EXCEPTION IN DSN9410.DESIG
USE DSN9410.DESIG_EXCPTN
DELETE YES
/*

The above CHECK DATA statement works on the DSN9D41A.DSN9S41D table space. The utility checks that the table DSN9410.DESIG meets all the referential constraints. If a violation is encountered, the utility deletes the improper rows and puts them into the exception table DSN9410.DESIG_EXCPTN.
Note: The CHECK DATA utility can run in parallel with the DIAGNOSE, MERGECOPY, MODIFY, REPORT, and STOSPACE utilities.

Read more at:

Srinimf-DB2 Load utility part-1

Srinimf-DB2 LOAD utility part-2

Top features of SAS 9.2 for Mainframe projects ( 1 of 5)

How many ways can you invoke SAS on the mainframe?


You can invoke SAS with any of the following methods:

  • in interactive mode under TSO using the SAS CLIST
  • in interactive mode under TSO using the SASRX exec
  • in batch mode with the SAS cataloged procedure
  • by logging on to SAS directly and bypassing the TSO terminal monitor program.
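For batch mode, a minimal job sketch using the SAS cataloged procedure might look like this, assuming your site's procedure is named SAS (job card details are site-specific):

```jcl
//SASJOB   JOB (ACCT),'SAS BATCH',CLASS=A
//STEP1    EXEC SAS
//SYSIN    DD *
  DATA _NULL_;
    PUT 'SAS BATCH STEP RAN';
  RUN;
/*
```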

Overview of SAS Library Engines
SAS provides different engines that enable you to access and, in most cases, to update files of different types and different formats.

A native library engine is an engine that accesses forms of SAS files that are created and processed only by SAS.

The V9 Engine
The default Base SAS engine for SAS libraries is V9. The V9 engine creates libraries in the V9 format, and it can also read and write libraries created using the V7 and V8 engines.

The V9 engine is the appropriate choice for most applications because it supports the full SAS data set functionality. The V9 engine also exploits the random access capabilities of disk devices to achieve greater performance than is possible with sequential engines.

The V9 engine is the default engine in most cases, but you can change the specified default engine with the ENGINE system option. The V9 engine can be used only for the types of devices that support it.

Use BASE as the engine name if you write programs that create new SAS libraries and you want to create the libraries in the latest available format. In SAS®9, BASE is an alias for V9, and it will be an alias for newer engines in subsequent releases of SAS.
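A minimal LIBNAME sketch showing both names; the librefs and data set names are hypothetical:

```sas
/* Explicitly request the V9 engine */
LIBNAME MYLIB V9 'USERID.SAS.LIBRARY';

/* BASE is an alias for the latest engine (V9 in SAS 9) */
LIBNAME MYLIB2 BASE 'USERID.SAS.LIBRARY2';
```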

The V9TAPE Engine
The sequential engine for SAS libraries is V9TAPE. The V9TAPE engine creates sequential libraries in the V9TAPE format, and it can also read and write libraries created using the V7TAPE and V8TAPE engines.

The V9TAPE engine provides a way to store files on devices such as tape that do not support random access. Some of the uses of the V9TAPE engine on z/OS include:

  • archiving SAS files to tape for long-term storage.
  • transporting SAS files between your z/OS system and another z/OS system or CMS system via tape.
  • sending SAS data, via a pipe connection, for immediate consumption by another job running in parallel.

In contrast to the V9 engine, V9TAPE has the following limitations:

  1. does not support indexing, compression of observations, or audit trail capabilities
  2. does not support direct access to individual observations (using the POINT= or KEY= options in the SET or MODIFY statements)
  3. provides limited support for the following types of SAS library members: ACCESS, CATALOG, PROGRAM, and VIEW. You can move or transport these member types, but you cannot use the V9TAPE engine to access the information within these members.
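As a hedged sketch of the archiving use case, the data set name, disposition, and unit name below are assumptions and site-dependent:

```sas
/* Write a copy of a data set to a sequential tape library */
LIBNAME ARCH V9TAPE 'USERID.ARCHIVE.SASLIB'
        DISP=(NEW,CATLG) UNIT=TAPE;

DATA ARCH.MASTER;
  SET WORK.MASTER;   /* WORK.MASTER is a hypothetical source */
RUN;
```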

Compatibility Engines
Overview of Compatibility Engines
SAS provides various compatibility engines for the purpose of processing libraries that were created by previous versions of SAS.

The type of engine that should be used depends on the engine format of the library. In most cases, SAS can detect the engine format and automatically select the appropriate engine to use.

However, if you are using SAS 9.2 to create a new library or new members that will be processed by a prior version of SAS, you need to explicitly specify (on a LIBNAME statement or function) an engine that creates a library or members in a format that can be processed by the prior version of SAS. For more information about cross-release compatibility and migration, see the Migration focus area at support.sas.com/migration.

The following Base SAS engine library formats can be read and written by SAS 9.2:

V9 library
Libraries created by the default Base SAS engine in V8 or V7 are identified by SAS 9.2 as being in V9 format.

V6 library
These libraries were created using the default Base SAS engine in V6 or using the V6 compatibility engine under a later version of SAS.

Specifying one of the following compatibility engines has the indicated effect:

V8- creates a V9 library but does not allow creation of members with format names longer than 8 bytes.

V7- has the same effect as V8.

V6- creates a V6 format library.
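For example, to create a library that a V6 session can later read (libref and names hypothetical), you might write:

```sas
/* The V6 compatibility engine writes the library in V6 format */
LIBNAME OLDLIB V6 'USERID.V6.LIBRARY';

DATA OLDLIB.CUSTOMER;
  SET WORK.CUSTOMER;
RUN;
```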

The following sequential engine library formats can be read and written by SAS 9.2:

V9TAPE library- Libraries created by the default sequential engine in V8 or V7 are identified by SAS 9.2 as being in V9TAPE format.

V6TAPE library- These libraries were created using the default sequential engine in V6 or using the V6TAPE compatibility engine under a later version of SAS.

Mainframe: How SORTWKxx datasets increases sort performance in JCL


The use of more work data sets increases the maximum amount of data DFSORT can process in a single sort application.

  • Any valid ddname of the form SORTWKdd or SORTWKd can now be used for DASD work data sets (for example, SORTWK01, SORTWKC3, SORTWK2, SORTWK#5, SORTWKA, SORTWKXY and so on).
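A hedged sort job sketch with several SORTWK data sets; the data set names and space values are illustrative only:

```jcl
//SORTSTEP EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=USERID.INPUT.FILE,DISP=SHR
//SORTOUT  DD DSN=USERID.OUTPUT.FILE,
//            DISP=(NEW,CATLG,DELETE),UNIT=SYSDA,
//            SPACE=(CYL,(100,50),RLSE)
//SORTWK01 DD UNIT=SYSDA,SPACE=(CYL,(50,25))
//SORTWK02 DD UNIT=SYSDA,SPACE=(CYL,(50,25))
//SORTWKA3 DD UNIT=SYSDA,SPACE=(CYL,(50,25))
//SYSIN    DD *
  SORT FIELDS=(1,10,CH,A)
/*
```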

Limit of number of input datasets for sort JCL

The upper limit for the number of input data sets that can be specified and used for a Blockset merge application has been raised from 16 to 100. The use of more merge input data sets increases the maximum amount of data DFSORT can process in a single merge application.

Maximum limit for work datasets in DFSORT JCL

The maximum number of work data sets is 255.

Why to avoid SELECT COUNT(*) in your SQL query


Many queries in programs execute SQL Select Count(*) statements just to see if any rows exist for particular data.

  • Many times it doesn’t matter whether there is one row or one million rows; it just matters if any rows exist. When this is the case, using Select Count(*) is the most expensive way to check because it will count up all rows.

Related – Complex SQL Queries for project and interviews

Queries should be coded so that they ensure stopping after getting a hit on the first row (unless of course they actually need the total number of rows).

The inefficient way to write the query:

MOVE ZEROS TO HOST-VARIABLE1.
EXEC SQL
 SELECT COUNT(*)
 INTO :HOST-VARIABLE1
 FROM TABLE1
 WHERE COLUMN1 = :HOST-VARIABLE-X
 AND COLUMN2 = :HOST-VARIABLE-Y
 END-EXEC

IF HOST-VARIABLE1 > ZERO
SET ROWS-FOUND TO TRUE
END-IF

Best practice is:

MOVE ZEROS TO HOST-VARIABLE1.
EXEC SQL
 SELECT 1
 INTO :HOST-VARIABLE1
 FROM TABLE1
 WHERE COLUMN1 = :HOST-VARIABLE-X
 AND COLUMN2 = :HOST-VARIABLE-Y
 FETCH FIRST 1 ROW ONLY
END-EXEC

IF SQLCODE = 0
 SET ROWS-FOUND TO TRUE
END-IF

Related topics:
SQL tuning refresher guide

 

DB2 Error codes -204, -205, -206

The error code -204 says:

The object name is undefined. To resolve it, check that the DB2 object name is correct.

The error code -205 says:

Invalid column name. In other words, the column name is not defined on that table. To resolve it, check whether that column was created during table creation.

The error code -206 says:

Similar to error -204, the name is not valid in the context where it is used, such as a column name in a SELECT, INSERT, DELETE, or MERGE statement.

 

How a record in a KSDS is randomly accessed by primary key


As you’ve already learned, a KSDS consists of a data component and an index component. The primary purpose of the index component is to locate the records in the data component by their key values. To increase the efficiency of this process, the index component consists of an index set and a sequence set.

The sequence set is the lowest level of the index. It's used to determine which control interval a specific record is stored in. In contrast, the index set consists of records at one or more levels that point to the sequence set records. The top level of the index set always consists of just one record.

The entries in the top index set record contain the highest key values stored in each of the sequence set records, and the entries in the sequence set records contain the highest key value stored in each of the control intervals. To access a record, VSAM searches from the index set to the sequence set to the control interval that contains the desired record. Then VSAM reads that control interval into virtual storage and searches the keys of the records until the specific record is found.

Each sequence set record also contains a free pointer to free control intervals in the control area, and there is control information within each control interval that identifies the free space in the interval. By using the free pointers and control information, VSAM is able to manage the free space as it adds records to a KSDS. VSAM also updates the records in the index and sequence sets as it processes those additions. As a result, the data set can always be used for both sequential and random processing by primary key.

To improve performance, the systems programmer tries to define a production KSDS so all of the index set is brought into internal storage areas called buffers when the data set is processed so that the information can be accessed more quickly. This is one of several ways that the performance of a KSDS can be fine-tuned.
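The data and index components described above are created together when the KSDS is defined. A hedged IDCAMS sketch follows; the data set names, key length, record size, and space values are illustrative only:

```jcl
//DEFKSDS  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE CLUSTER (NAME(USERID.MASTER.KSDS) -
         INDEXED -
         KEYS(10 0) -
         RECORDSIZE(100 100) -
         CYLINDERS(10 5)) -
       DATA (NAME(USERID.MASTER.KSDS.DATA)) -
       INDEX (NAME(USERID.MASTER.KSDS.INDEX))
/*
```

INDEXED tells IDCAMS to build a KSDS, and KEYS(10 0) defines a 10-byte primary key starting at offset 0.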

Hierarchical file system (HFS) in Mainframe UNIX (1 of 2)

Related: UNIX basic commands – part 1

Files in a UNIX environment are organized into a hierarchical structure. If this looks familiar to you, that's because both the DOS and Windows operating systems use a hierarchical file organization too. All files within a hierarchical file system (HFS) are members of a directory. Each directory is, in turn, a member of another directory at a higher level in the hierarchy. At the highest level is the root directory.

The root directory can be considered analogous to the master catalog in OS/390 and will typically contain only other directories (also called subdirectories).

Directories and their subordinate files can be considered analogous to partitioned data sets and their members. That’s because each directory contains information about itself as well as the files it holds, including their names, their size, the date they were created or modified, and other relevant information. However, unlike PDS data sets, directories are not considered files themselves. So whereas a PDS can be treated as a sequential file in some cases, directories do not have the same capabilities.

In fact, the entire HFS structure, from the root directory on down, is stored as a single data set on an IBM mainframe. OS/390 then manages the hierarchical files through its own HFS facility. As a result, HFS files can be used in both the UNIX System Services and OS/390 environments. This makes it possible for application programs that are designed to run in a UNIX environment to handle files as they normally would under OS/390. For example, Websphere Application Server for OS/390 uses HFS files to store and retrieve information for web applications.

It also means that HFS files can be copied to and from sequential, partitioned, or partitioned extended data sets.

Although most HFS files store data, some can be executable modules and still others can consist of UNIX shell scripts. Executable modules, or programs, are similar to the compiled and linked programs found in OS/390 load libraries. UNIX shell scripts are similar to procedures. They consist of a series of UNIX commands (and, optionally, scripting commands) that are executed in order whenever the file is invoked.

UNIX file:

/usr/data/r1/va5001/LP/master.file
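As noted above, HFS files can be copied to and from MVS data sets; a hedged sketch using the TSO OGET command, where the target data set name is hypothetical:

```
OGET '/usr/data/r1/va5001/LP/master.file' 'USERID.MASTER.FILE' TEXT
```

The TEXT operand converts between the EBCDIC line format of the data set and the byte-stream format of the HFS file.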

ISPF basic editor commands for Software developers

COMMAND FUNCTION
<  Data shift left

Shifts a single line of program source code to the left without affecting the program labels or comments; i.e., data from column one to the first blank, and data following several blanks, are not moved. May be specified with a number identifying the distance to move (default 2)

<<  Block data shift left

All of the lines in the block are affected as if you typed individual data shift left commands. May be specified with a number identifying the distance to move (default 2)

>  Data shift right

As for data shift left but the opposite direction. May be specified with a number identifying the distance to move (default 2)

>>  Block data shift right
( Column shift left

Works similarly to data shift left but moves everything within the bounds, nothing stays fixed in place. May be specified with a number identifying the distance to move (default 2)

(( Block column shift left
) Column shift right
)) Block column shift right
A After

used with copy, move, or paste to specify the line after which the copied/moved lines are to be inserted

B Before

used with copy, move, or paste to specify the line before which the copied/moved lines are to be inserted

Bnds Display bounds above this line

Displays the current boundary positions, which can be changed by typing < and > in the new boundary positions that you require

C Copy

Copies this line either to another place within the current file (using a, b, or o to identify destination) or to another file (using create, replace or cut commands). Can be specified with a number to indicate that multiple lines are to be copied

CC Block copy
Cols Display the column ruler above this line
D Delete

Deletes this line from the file. Can be specified with a number to indicate that following lines are also to be deleted

DD Block Delete
F Display the first excluded line

Can be specified with a number to display more than one excluded line. This command is only valid on excluded lines

I Insert a new line after this one

Can be specified with a number to insert multiple lines

L Display the last excluded line

Can be specified with a number to display more than one excluded line. This command is only valid on excluded lines

Lc Convert all text on this line to lower case

Can be specified with a number to convert more than one line to lower case

Lcc Block convert to lower case
M Move

Works the same as copy except that the lines are removed from their current location

MM Block move

Mask Display the mask line above this one

The mask defines the default content for inserted lines

O Overlay (used with copy and move to specify the line into which the copied/moved line is to be inserted – only spaces are replaced). Can be specified with a number to indicate that following lines are also to be overlaid
OO Block overlay (the lines to be copied/moved are inserted into the block as many times as they will fit)
R Repeat – create a duplicate of this line

Can be specified with a number to indicate that additional duplicate lines are to be produced

RR Block repeat

Can be specified with a number to indicate that multiple duplicates of the block are to be produced

S Show the excluded line that has the least indentation

Can be specified with a number to display more than one excluded line. When multiple lines are displayed, they may not be together. This command is only valid on excluded lines

Tabs Show the tab settings above this line

Hardware tabs positions are indicated by asterisks (*) and software tabs by hyphens (-) or underscores (_)

TE Text Entry mode – allows bulk insert following this line

You can start entering data without paying any attention to lines as the text will wrap automatically. Press the enter key to exit from text entry mode

TF Text flow – flows the text between the margins for this line and following lines until a blank line is found, the indentation changes, or a special character (period, colon, ampersand, less than, or form feed) is found in the first column
TJ Text Join – merges this line with the following one
TS Text split – splits this line in two

You need to position the cursor at the position on the line where you want the split to occur

UC Convert all text on this line to upper case

Can be specified with a number to convert multiple lines

UCC Block convert to upper case
X Exclude this line from the display

Can be specified with a number to exclude multiple lines. This command is useful when you need to view two blocks of data that are in different locations within the file, just exclude the intervening data from the display

XX Block exclude
. label assignment

You can assign a label to any non-excluded line by typing a period followed by the label name. The label can then be used to identify the line in primary commands. You cannot start labels with “z” as these labels are reserved for system use