DB2 Error codes -204, -205, -206

The error code -204 says:

The object name is undefined. Resolution: check that the DB2 object name is correct.

The error code -205 says:

Invalid column name. In other words, the column name is not defined on that table. Resolution: check whether that column was defined when the table was created.

The error code -206 says:

Similar to error -205, the column name is invalid in the context of a SELECT, INSERT, DELETE, or MERGE statement.
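To make these codes concrete, here is a minimal COBOL sketch with embedded SQL that tests SQLCODE after a query. The EMPLOYEE table, its columns, and the host variables are names invented for this illustration:

           EXEC SQL INCLUDE SQLCA END-EXEC.
       01  WS-EMP-ID            PIC S9(09) COMP.
       01  WS-EMP-NAME          PIC X(30).
      * Query the (hypothetical) EMPLOYEE table and check SQLCODE
           EXEC SQL
               SELECT EMP_NAME INTO :WS-EMP-NAME
               FROM   EMPLOYEE
               WHERE  EMP_ID = :WS-EMP-ID
           END-EXEC
           EVALUATE SQLCODE
               WHEN 0     CONTINUE
               WHEN -204  DISPLAY 'OBJECT NAME IS UNDEFINED'
               WHEN -205  DISPLAY 'COLUMN NOT DEFINED ON THE TABLE'
               WHEN -206  DISPLAY 'COLUMN INVALID IN THIS CONTEXT'
               WHEN OTHER DISPLAY 'UNEXPECTED SQLCODE: ' SQLCODE
           END-EVALUATE

Note that for static SQL, a -204 or -205 is normally caught at bind time; the run-time check above matters most for dynamic SQL or packages bound with VALIDATE(RUN).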

 

How a record in a KSDS is randomly accessed by primary key


As you’ve already learned, a KSDS consists of a data component and an index component. The primary purpose of the index component is to locate the records in the data component by their key values. To increase the efficiency of this process, the index component consists of an index set and a sequence set.

The sequence set is the lowest level of the index. It's used to determine which control interval a specific record is stored in. In contrast, the index set consists of records at one or more levels that point to the sequence set records. In the simplest case, the index set has only a top level, which consists of just one record.

The entries in the index set record contain the highest key values stored in each of the sequence set records, and the entries in the sequence set records contain the highest key values stored in each of the control intervals. To access a record, VSAM searches from the index set to the sequence set to the control interval that contains the desired record. Then VSAM reads that control interval into virtual storage and searches the keys of the records until the specific record is found.

A free pointer in each sequence set record points to free control intervals in the control area. There is also control information within each control interval that identifies the free space in the interval. By using the free pointers and control information, VSAM is able to manage the free space as it adds records to a KSDS. VSAM also updates the records in the index and sequence sets as it processes those additions. As a result, the data set can always be used for both sequential and random processing by primary key.

To improve performance, the systems programmer tries to define a production KSDS so that the entire index set is brought into internal storage areas called buffers when the data set is processed, allowing the information to be accessed more quickly. This is one of several ways that the performance of a KSDS can be fine-tuned.
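To make the random access concrete, here is a minimal COBOL sketch (the file, key, and field names are invented for this illustration) that reads one KSDS record randomly by its primary key:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. KSDSREAD.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT MASTER-FILE ASSIGN TO MASTFILE
               ORGANIZATION IS INDEXED
               ACCESS MODE IS RANDOM
               RECORD KEY IS MF-CUST-KEY
               FILE STATUS IS MF-STATUS.
       DATA DIVISION.
       FILE SECTION.
       FD  MASTER-FILE.
       01  MASTER-RECORD.
           05  MF-CUST-KEY      PIC X(05).
           05  MF-CUST-DATA     PIC X(75).
       WORKING-STORAGE SECTION.
       01  MF-STATUS            PIC X(02).
       PROCEDURE DIVISION.
           OPEN INPUT MASTER-FILE
      *    Supply the primary key, then let VSAM do the index search
           MOVE '00123' TO MF-CUST-KEY
           READ MASTER-FILE
               INVALID KEY DISPLAY 'RECORD NOT FOUND: ' MF-CUST-KEY
           END-READ
           CLOSE MASTER-FILE
           STOP RUN.

Behind that single READ, VSAM performs the index set and sequence set search described above; the program only supplies the key value.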

Hierarchical file system (HFS) in Mainframe UNIX (1 of 2)


Files in a UNIX environment are organized into a hierarchical structure. If this looks familiar to you, that's because the DOS and Windows operating systems use a hierarchical file organization too. All files within a hierarchical file system (HFS) are members of a directory. Each directory is, in turn, a member of another directory at a higher level in the hierarchy. At the highest level is the root directory.

The root directory can be considered analogous to the master catalog in OS/390 and will typically contain only other directories (also called subdirectories).


Directories and their subordinate files can be considered analogous to partitioned data sets and their members. That’s because each directory contains information about itself as well as the files it holds, including their names, their size, the date they were created or modified, and other relevant information. However, unlike PDS data sets, directories are not considered files themselves. So whereas a PDS can be treated as a sequential file in some cases, directories do not have the same capabilities.

In fact, the entire HFS structure, from the root directory on down, is stored as a single data set on an IBM mainframe. OS/390 then manages the hierarchical files through its own HFS facility. As a result, HFS files can be used in both the UNIX System Services and OS/390 environments. This makes it possible for application programs that are designed to run in a UNIX environment to handle files as they normally would under OS/390. For example, WebSphere Application Server for OS/390 uses HFS files to store and retrieve information for web applications.

It also means that HFS files can be copied to and from sequential, partitioned, or partitioned extended data sets.
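As a brief illustration (the data set name here is invented, and the path is the sample UNIX file shown below), the TSO OGET and OPUT commands copy between an HFS file and an MVS data set:

OGET '/usr/data/r1/va5001/LP/master.file' 'VA5001.MASTER.FILE'
OPUT 'VA5001.MASTER.FILE' '/usr/data/r1/va5001/LP/master.file'

OGET copies from the HFS file into the data set; OPUT copies in the other direction.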
Although most HFS files store data, some can be executable modules and still others can consist of UNIX shell scripts. Executable modules, or programs, are similar to the compiled and linked programs found in OS/390 load libraries. UNIX shell scripts are similar to procedures. They consist of a series of UNIX commands (and, optionally, scripting commands) that are executed in order whenever the file is invoked.

UNIX file:

/usr/data/r1/va5001/LP/master.file

ISPF basic editor commands for Software developers

COMMAND FUNCTION
<  Data shift left

Shifts a single line of program source code to the left without affecting the program labels or comments; i.e., data from column one to the first blank, and data following several blanks, are not moved. May be specified with a number identifying the distance to move (default 2)

<<  Block data shift left

All of the lines in the block are affected as if you typed individual data shift left commands. May be specified with a number identifying the distance to move (default 2)

>  Data shift right

As for data shift left, but in the opposite direction. May be specified with a number identifying the distance to move (default 2)

>>  Block data shift right
( Column shift left

Works similarly to data shift left, but moves everything within the bounds; nothing stays fixed in place. May be specified with a number identifying the distance to move (default 2)

(( Block column shift left
) Column shift right
)) Block column shift right
A After

Used with copy, move, or paste to specify the line after which the copied/moved lines are to be inserted

B Before

Used with copy, move, or paste to specify the line before which the copied/moved lines are to be inserted

Bnds Display bounds above this line

Displays the current boundary positions, which can be changed by typing < and > in the new boundary positions that you require

C Copy

Copies this line either to another place within the current file (using A, B, or O to identify the destination) or to another file (using the CREATE, REPLACE, or CUT commands). Can be specified with a number to indicate that multiple lines are to be copied

CC Block copy
Cols Display the column ruler above this line
D Delete

Deletes this line from the file. Can be specified with a number to indicate that following lines are also to be deleted

DD Block Delete
F Display the first excluded line

Can be specified with a number to display more than one excluded line. This command is only valid on excluded lines

I Insert a new line after this one

Can be specified with a number to insert multiple lines

L Display the last excluded line

Can be specified with a number to display more than one excluded line. This command is only valid on excluded lines

Lc Convert all text on this line to lower case

Can be specified with a number to convert more than one line to lower case

Lcc Block convert to lower case
M Move

Works the same as copy except that the lines are removed from their current location

MM Block move

Mask Display the mask line above this one

The mask defines the default content for inserted lines

O Overlay (used with copy and move to specify the line into which the copied/moved line is to be inserted – only spaces are replaced). Can be specified with a number to indicate that following lines are also to be overlaid
OO Block overlay (the lines to be copied/moved are inserted into the block as many times as they will fit)
R Repeat – create a duplicate of this line

Can be specified with a number to indicate that additional duplicate lines are to be produced

RR Block repeat

Can be specified with a number to indicate that multiple duplicates of the block are to be produced

S Show the excluded line that has the least indentation

Can be specified with a number to display more than one excluded line. When multiple lines are displayed, they may not be together. This command is only valid on excluded lines

Tabs Show the tab settings above this line

Hardware tab positions are indicated by asterisks (*) and software tabs by hyphens (-) or underscores (_)

TE Text Entry mode – allows bulk insert following this line

You can start entering data without paying attention to line boundaries, as the text will wrap automatically. Press the Enter key to exit from text entry mode

TF Text flow – flows the text between the margins for this line and following lines until a blank line is found, the indentation changes, or a special character (period, colon, ampersand, less than, or form feed) is found in the first column
TJ Text Join – merges this line with the following one
TS Text split – splits this line in two

You need to position the cursor at the point on the line where you want the split to occur

UC Convert all text on this line to upper case

Can be specified with a number to convert multiple lines

UCC Block convert to upper case
X Exclude this line from the display

Can be specified with a number to exclude multiple lines. This command is useful when you need to view two blocks of data that are in different locations within the file; just exclude the intervening data from the display

XX Block exclude
. Label assignment

You can assign a label to any non-excluded line by typing a period followed by the label name. The label can then be used to identify the line in primary commands. You cannot start labels with “Z”, as these labels are reserved for system use
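For example (a hedged illustration; the label names are invented), type .TOP in the line-command area of one line and .BOT on a later line, then enter a primary command such as:

DELETE ALL .TOP .BOT

to delete every line in that labelled range.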

5 Essential features of COBOL REDEFINES phrase


REDEFINES is one of the most frequently used clauses in COBOL. It has many advantages if you use it properly.

REDEFINES is possible at two levels:

  • At group level  (01 level)
  • At field level

The REDEFINES clause allows you to define the same data field or record in more than one way, as shown in this example:

03  MY-DATE          PIC 9(08).
03  MY-DATE-R        REDEFINES MY-DATE.
    05  MY-YEAR      PIC 9(04).
    05  MY-MONTH     PIC 9(02).
    05  MY-DAY       PIC 9(02).

In this example, MY-DATE and MY-DATE-R refer to the same data but are described differently.

Rules for REDEFINES:

MY-DATE is numeric. MY-DATE-R is a group item, so it is inherently alphanumeric, generally equivalent to X(8), except that if you move MY-DATE-R to an edited field, the result will be as if the receiving field were alphanumeric and not edited.

This capability is very useful, particularly when handling numeric fields or fields that may or may not be numeric. In many systems, referencing a numeric field that contains non-numeric characters can cause a program abort. Having an alphanumeric definition of that field enables you to examine it before using it and to deal with it if it is not numeric.
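As a minimal sketch of that technique (MY-DATE-X and WS-NEXT-DATE are names invented for this illustration), you can test the alphanumeric redefinition with a class condition before doing arithmetic:

03  MY-DATE          PIC 9(08).
03  MY-DATE-X        REDEFINES MY-DATE PIC X(08).

    IF MY-DATE-X IS NUMERIC
        COMPUTE WS-NEXT-DATE = MY-DATE + 1
    ELSE
        DISPLAY 'MY-DATE IS NOT NUMERIC: ' MY-DATE-X
    END-IF

If the field arrives containing spaces or other garbage, the NUMERIC test fails and the program can handle the bad data instead of abending with a data exception.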

When you redefine a 01 record with another 01, either record can be larger, smaller, or the same size as the other. But when you redefine field-1 with field-2 at a higher level number, field-2 must not be larger than field-1.

Tip: Indenting the PICTURE of the redefining field and its subordinate entries visually separates the redefinition from the field it is redefining. This is useful when trying to determine the record size or the position of fields within the record.

You can redefine a field any number of times, but each REDEFINES must immediately follow the field being redefined or a previous redefinition of the field. You can also redefine a field that is subordinate to a redefinition.

Here is an example:

03  MY-DATE          PIC 9(08).
03  MY-DATE-R        REDEFINES MY-DATE.
    05  MY-YEAR      PIC 9(04).
    05  MY-CCYY      REDEFINES MY-YEAR.
        07  MY-CC    PIC 9(02).
        07  MY-YY    PIC 9(02).
    05  MY-MONTH     PIC 9(02).
    05  MY-DAY       PIC 9(02).

As with group-level items, redefining fields can be a source of errors if you are not careful when constructing your definitions.

Mainframe tip: Resolving a TSO session that will not let you do anything

If by mistake you try to browse a migrated dataset, your TSO session will not allow you to do anything and you will get the error below.

[TSO error-1]

To come out of this, press Ctrl+Shift+A twice; the message below will then appear:

[TSO error-2]

Press Y, then press any key and press Enter. Your TSO session will be free for your use.

Mainframe COBOL Debugging mode ‘D’ in 7th Column

To get a DISPLAY for each PARA, we can use a single line of code instead of 2-3 lines. If you are writing new code, this tip is useful.

Instead of:

SOURCE-COMPUTER. IBM-370.

code the SOURCE-COMPUTER paragraph as:

SOURCE-COMPUTER. IBM-370 WITH DEBUGGING MODE.

[Debugging]

And give ‘D’ in the 7th column of the sentence which u want to display.

[Debugging-2]
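Putting it together, here is a minimal sketch (the paragraph name and message are invented for this example). The DISPLAY line has 'D' in column 7, so it is compiled only when WITH DEBUGGING MODE is specified; otherwise it is treated as a comment:

       ENVIRONMENT DIVISION.
       CONFIGURATION SECTION.
       SOURCE-COMPUTER. IBM-370 WITH DEBUGGING MODE.

       PROCEDURE DIVISION.
       1000-PROCESS-PARA.
      D    DISPLAY 'ENTERED 1000-PROCESS-PARA'.
      *    ... normal processing for the paragraph ...

To turn the debugging displays off, simply remove WITH DEBUGGING MODE from the SOURCE-COMPUTER paragraph; the D lines then drop out of the compiled program with no other change.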

Informatica Interview Questions- Part (1 of 2)


The following interview questions were asked at CGI for Informatica developers.

  • What types of testing do you do in a project?

A) Unit testing, system testing, integration testing

  • In how many ways can we do performance tuning in Informatica?

A) – Optimize the target. Enables the Integration Service to write to the targets efficiently.
– Optimize the source. Enables the Integration Service to read source data efficiently.
– Optimize the mapping. Enables the Integration Service to transform and move data efficiently.
– Optimize the transformation. Enables the Integration Service to process transformations in a mapping efficiently.
– Optimize the session. Enables the Integration Service to run the session more quickly.
– Optimize the grid deployments. Enables the Integration Service to run on a grid with optimal performance.
– Optimize the PowerCenter components. Enables the Integration Service and Repository Service to function optimally.
– Optimize the system. Enables PowerCenter service processes to run more quickly.

  • What is the pmcmd command?

pmcmd is a built-in command line program. There are 4 built-in command line utilities:

  • infacmd
  • infasetup
  • pmcmd
  • pmrep

Functions of pmcmd:

– Scheduling a workflow

pmcmd scheduleworkflow -service informatica-integration-Service -d domain-name -u user-name -p password -f folder-name -w workflow-name

We cannot specify scheduling options here.

– Starting a workflow

pmcmd startworkflow -service informatica-integration-Service -d domain-name -u user-name -p password -f folder-name -w workflow-name

– Starting a workflow from a task:

pmcmd starttask -service informatica-integration-Service -d domain-name -u user-name -p password -f folder-name -w workflow-name -startfrom task-name

– Stopping a specified task instance:

pmcmd stoptask -service informatica-integration-Service -d domain-name -u user-name -p password -f folder-name -w workflow-name task-name

– Aborting a workflow or a task:

pmcmd abortworkflow -service informatica-integration-Service -d domain-name -u user-name -p password -f folder-name -w workflow-name

pmcmd aborttask -service informatica-integration-Service -d domain-name -u user-name -p password -f folder-name -w workflow-name task-name

  • What are the different problems you will face in production support?
    – Expiration of passwords in databases
    – Late feeds from upstream
    – Incorrect source fields
    – Special characters and duplicate values
    – Environment issues, such as the databases or the repository being down
    – Space issues on the server
  • What are the different scheduler tools used in data warehousing?

– Control-M, Autosys, IBM Maestro

  • What is pushdown optimization in Informatica?

The pushdown optimization option improves performance by enabling processing to be pushed down to the relational database, maximizing flexibility, minimizing data movement, and providing optimal performance for both data-intensive and process-intensive transformations.

  • How is the Source Analyzer connected to the Repository?

Source Analyzer ==> Repo Server ==> TCP/IP ==> Repo Agent ==> Repository

  • What are the differences between a connected lookup and an unconnected lookup?

Connected lookup

  • Part of mapping data flow
  • Returns multiple values

  • Executed for every record passing through the transformation

  • More visible as lookup values are used

  • Default values are used

Unconnected lookup

  • Separate from mapping data flow

  • Returns only one value

  • Only executed when lookup function is called

  • Less visible

  • Default values are ignored

Mainframe tip to cut the first 10 lines and last 10 lines from a 10,000-line dataset

I have a dataset with 10,000 lines. I want to cut the first 10 lines and the last 10 lines and paste them into another dataset.

When I cut the first 10 lines and then again the last 10 lines, only the last 10 lines are pasted into the new dataset.

Is there any way out other than doing a cut & paste twice?

The answer to the above question is YES:

  1. First cut the first 10 lines, and issue CUT APPEND
  2. Then cut the last 10 lines, and again issue CUT APPEND
  3. When you PASTE, you get both