DB2 V11- Copy Data From One Table to Another

Below are the steps we need to follow. First, we need to create a new table.


Now the new table TELE is created. I want to copy some old data into this new table.


Now the data is copied into the new table.

The CREATE TABLE statement example creates a table which, at first, is empty. The table has columns for last names, first names, and phone numbers, but does not have any rows.

The INSERT statement fills the newly created table with data that is selected from the DA31.EMP table: the names and phone numbers of employees in department D21.
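The two statements can be sketched as follows; the column names and sizes are assumptions for illustration, since the original screenshots are not reproduced here:

```sql
-- Create the empty table (column names and sizes assumed).
CREATE TABLE TELE
  (LASTNAME   VARCHAR(15) NOT NULL,
   FIRSTNAME  VARCHAR(12) NOT NULL,
   PHONENO    CHAR(4));

-- Populate it from DA31.EMP with the employees of department D21.
INSERT INTO TELE
  SELECT LASTNAME, FIRSTNAME, PHONENO
    FROM DA31.EMP
   WHERE WORKDEPT = 'D21';
```

The INSERT runs the SELECT and stores its result rows directly in TELE, so no intermediate file is needed.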

This way, we can copy data from one table to another. There is no need to use any tool or the LOAD utility; it is a simple SQL query.

SORT-Overlay Command In JCL

I came home from the office during rush hour and decided to write something on SORT OVERLAY. In recent interviews, I found some candidates confused when asked about OVERLAY. The details below are excellent examples for understanding OVERLAY.

* Skip the first three records
* Keep records that meet the specified field-to-field or
* field-to-constant selection criteria
INCLUDE COND=(7,5,PD,LE,22,5,PD,OR,25,3,CH,EQ,C'J69')
* Add a sequence number to the remaining records.
* Sort the records.
SORT FIELDS=(20,8,ZD,A,11,2,CH,D)
* Produce one record for each unique sort key with totals
* for the specified summary fields
SUM FIELDS=(45,4,BI,8,2,PD)
* Produce output records displaying the sort and sum fields
* intermixed with constants and arithmetic operations.
OUTREC BUILD=(8C'0',20,8,ZD,M11,4X,11,2,4C' ',
* Produce three output data sets from one pass over the input

Detailed Explanation for Overlay:

Using OVERLAY, you reformat each record by specifying just the items that overlay specific columns.
OVERLAY lets you change specific existing columns without affecting the entire record.

Consider this example: OUTREC OVERLAY=(45:45,8,TRAN=LTOU)

It starts at position 45, takes a length of 8 bytes, and translates lower case to upper case. This example clearly shows how we can use OVERLAY. It is a frequently asked interview question at IBM, Tech Mahindra, CTS, and Dell, because almost all financial projects use SORT to produce sorted reports.

OVERLAY can be applied in both INREC and OUTREC formatting.
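As a sketch, assuming an 80-byte FB input file with a lower-case name in positions 45-52, the control statements below copy the file and upper-case only those 8 bytes, leaving the rest of each record untouched:

```
  OPTION COPY
  OUTREC OVERLAY=(45:45,8,TRAN=LTOU)
```

Because OVERLAY names only the columns to change, no other positions need to be listed, unlike BUILD, which must describe the whole output record.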


DB2-JOINS Interview Questions

In interviews you will definitely be asked a question on joins. I am giving one of the basic questions here.

  • What is join?

When you want to see information from multiple tables, you can use a SELECT statement. SELECT statements can retrieve and join column values from two or more tables into a single row. The retrieval is based on a specified condition, typically of matching column values.

The main ingredient of a join is, typically, matching column values in rows of each table that participates in the join. The result of a join associates rows from one table with rows from another table. Depending on the type of join operation, some rows might be formed that contain column values in one table that do not match column values in another table.

DB2 for z/OS supports different kinds of joins. Those are:

Inner join - Combines each row of the left table with each row of the right table, keeping only the rows in which the join condition is true.

Outer join – Includes the rows that are produced by the inner join, plus the missing rows, depending on the type of outer join:

Left outer join
Includes the rows from the left table that were missing from the inner join.
Right outer join
Includes the rows from the right table that were missing from the inner join.
Full outer join
Includes the rows from both tables that were missing from the inner join.


We can specify joins in the FROM clause of a query. Data from the rows that satisfy the search conditions are joined from all the tables to form the result table.
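As a sketch of the join types above, using hypothetical tables EMP (LASTNAME, WORKDEPT) and DEPT (DEPTNO, DEPTNAME):

```sql
-- Inner join: only employees whose department exists in DEPT.
SELECT E.LASTNAME, D.DEPTNAME
  FROM EMP E
 INNER JOIN DEPT D
    ON E.WORKDEPT = D.DEPTNO;

-- Left outer join: every employee; DEPTNAME comes back as NULL
-- when there is no matching department row.
SELECT E.LASTNAME, D.DEPTNAME
  FROM EMP E
  LEFT OUTER JOIN DEPT D
    ON E.WORKDEPT = D.DEPTNO;
```

A RIGHT OUTER JOIN or FULL OUTER JOIN in the same FROM clause would likewise preserve the unmatched rows of the right table, or of both tables.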

Interview Questions on Joins:-

1) In the case of a left outer join, what happens when a row of the left table has no matching row in the right table?

The above is a good summary of joins (reference: IBM documentation).

15 Rapidly Growing IT JOBS

Open Source Jobs

Open source experience and contributions continue to rate highly with IT employers, especially those with significant big data and cloud initiatives on the books. Think along the lines of the aforementioned Hadoop, of course, but don’t forget about the broader ecosystem around it, which includes projects such as Cassandra, Hive, and Pig. Linux is another place to look, and the NoSQL world (see also: MongoDB) offers open source opportunities as well.

Security, Security, Security

Job titles range from security architect to the relatively recent executive suite addition of the CSO or CISO. There’s never been a better time to be a qualified security pro — at least from the standpoint of employability. Cyber-security was the fastest-growing skill area in terms of employer demand on tech jobs site Dice from January 2014 through January 2015, with its close cousin information security not too far behind.

JIRA [Project Management, Agile, DevOps]

Interest in IT pros experienced with the project management and issue-tracking software from Atlassian spiked 69% on Dice in the 12 months ending Jan. 31, 2015, as more organizations adopted either Agile development approaches or a full-blown DevOps makeover.


SAP HANA's job market is growing globally.

Mainframe Jobs

Demand for mainframe skills is still growing.

Hadoop [Bigdata Jobs]

Demand for Hadoop skills grew 54% on Dice during the past 12 months, as did employer interest in big data expertise as a broader category. Renda, of Eliassen, noted that developers with chops in languages such as Java who add Hadoop experience will be particularly attractive to recruiters and hiring managers going forward.

Control-M [Mainframe Jobs]

In the Informatica and mainframe areas, demand for Control-M skills is increasing.



Teradata [Big Data, Analytics]

In concert with rising demand for technical and analytical skills elsewhere in the big data universe, Dice found mounting employer interest in experience with specific platforms — in this case, Teradata’s big data, data warehousing, and analytics applications and services. Teradata saw a 33% increase in advertised positions on Dice between January 2014 and January 2015, making it the ninth fastest-growing skill area on the site.

CICS DB2 Jobs

CICS and DB2 jobs in the mainframe area are increasing globally.

Data Warehouse Jobs


Cloud Computing Jobs

NoSQL [Big Data, Web Application Development]

NoSQL database know-how is another quick ticket to job interviews these days. Demand for this skill category jumped 49% from January 2014 to January 2015, according to Dice. There’s a range of NoSQL options, but the open source MongoDB has proven popular as a recruiting search term. Job search site Indeed.com lists MongoDB as the No. 2 keyword in terms of growth on its site during the past several years.

Android Development [Mobile, Software]

Renda from Eliassen said that mobile remains one of four foundational technologies — alongside cloud, social, and analytics — driving the hiring needs of the recruiting firm’s clients this year. As a result, Android developers (don’t worry, iOS fans — you’re up next) are regularly in the hiring crosshairs.

iOS Development [Mobile, Software]

Of course, Renda included iOS developers on his list of commonly sought job titles among Eliassen’s mobile clients, too. Like Android, iOS has been one of the fastest-growing (third overall) terms on Indeed.com job listings over the past several years.

Puppet Labs [Automation & Configuration Management, DevOps, Cloud]

Employer interest in IT pros who know Puppet Labs’ flagship automation and configuration management tool shot skyward, increasing 67% from January 2014 to January 2015. This was due, in part, to growing interest in DevOps — or at least DevOps-like approaches to building, testing, delivering, and maintaining software.

Python [Software Development, Big Data]

The Python programming language’s rising popularity among developers corresponds with increasing interest among hiring companies, too. Python cracked the top 10 fastest-growing skills areas on Dice during the months ending Jan. 31, 2015, with a 29% bump on ads for this skill posted on the jobs site.

Java [Software Development, Cloud, Mobile]

Java may not have the new-and-shiny appeal of some other skills and technologies here, but sometimes “boring” is a strong suit. Java continues to be mainstream in enterprise IT environments, and there’s a corresponding demand among employers. In fact, as of January 2015, Dice had some 16,000 available positions requiring or requesting Java skills on any given day.

Data Analysis [Big Data, Analytics]

Data-driven businesses absolutely need IT pros who can mine, manage, and store massive data sets, whether with Hadoop or other technologies. But they have an equally pressing need for people who can make actionable business sense of the data, too — otherwise, there’s little real value, especially to the C-Suite and on non-technical teams. Whether listed under “data analyst” or comparable title, the ability to derive business value from the numbers is going to be a pressing concern for hiring managers going forward.

“Analytics are embedded everywhere,” said McGarrity of Mondo. “It’s critical to have the resources who can analyze and drive strategy from the data and numbers.”



UX/UI Design [Software Development, Mobile]

According to Eliassen’s Renda, top-notch user experience and user interface (UX/UI) designers rate highly alongside developers on hiring wish lists. The reason? Employers are obsessed with creating first-rate Web and mobile apps that appeal to younger consumers — Millennials, especially — as they grow older and their buying power increases.

.NET Development [Software Development]

Like Java in some respects (and unlike it in others), .NET has been a mainstay on many IT resumes for both desktop application development and, in the case of ASP.NET, Web application development as well. You won’t find it on the “fastest-growing” lists because it’s been around a while, but it remains very much in demand, something unlikely to change in the near future. Employers responding to a January 2015 survey on Dice listed .NET as one of the toughest skills or roles to fill, ranking it No. 3 (No. 1 was software developer/software engineer, and No. 2 was Java).

Virtualization [Virtualization, Cloud]

As far as buzzwords and IT trends go, virtualization seems bland these days. But virtualization skills remain keenly sought after by employers, according to Renda, particularly when they intersect with cloud computing experience. Both Eliassen and Dice point to VMware as the one of the most common platform-specific searches, though obviously it’s not the only game in town.

COBOL-Use Of Key Phrase In Occurs Clause


In COBOL, internal arrays (tables) play a prominent role, so knowing how to define them is very important. We can define single-dimension and multi-dimension tables, and we can also define a variable number of entries in a table (OCCURS DEPENDING ON).

The size of a table (the OCCURS count) is limited only by compiler storage limits, and a table can have a maximum of 7 dimensions.

In earlier posts I described the INDEXED BY phrase. In this post I want to show you how we can define the KEY phrase in a table.

Key Definition  

The KEY phrase in an OCCURS clause states that the table is sorted by the sequence of keys named data-name-3. Each of the fields named by data-name-3 must be part of the table—either items subordinate to this data item or possibly the data-name of the table itself. The purpose of the KEY clause is to define the table’s ordering when you use the SEARCH statement with the ALL phrase (as described later in this chapter).
Syntax:

ASCENDING/DESCENDING KEY IS data-name-3

Data-name-3 can be qualified, but it must not be subscripted even though it is a table element, since the reference is really to the entire table.  Each key must be described as either ASCENDING or DESCENDING. This defines the order, either from lowest to highest (ASCENDING) or highest to lowest (DESCENDING) in which the table is arranged. The order is determined by either numeric comparison for a numeric data-name-3 or non-numeric comparison for other operands. The keys are listed in order of their significance from major to minor.

The presence of a KEY clause does not mean that the table is necessarily in sorted order, nor does it cause the table to be sorted. It merely states the expected order of the table. The SEARCH statement (discussed later in this chapter) works correctly only if the table is in the order specified by the keys.

Here is an example of a table defined with two keys:
05 NAME      PIC X(20).
05 CITY      PIC X(20).
05 STATE     PIC XX.
05 ZIP       PIC 9(5).

Important interview question:

How does sorting occur?

This example shows a table (MAIL-LIST) whose elements are (expected to be) in order by STATE (from lowest to highest). Elements of the table with the same STATE will be in order by CITY (from lowest to highest).
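A complete definition consistent with that description might look like the sketch below; the OCCURS count, the entry name MAIL-ENTRY, the index name MAIL-IDX, and the search arguments WS-STATE/WS-CITY are illustrative assumptions:

```cobol
       01  MAIL-LIST.
           03  MAIL-ENTRY OCCURS 100 TIMES
                   ASCENDING KEY IS STATE CITY
                   INDEXED BY MAIL-IDX.
               05  NAME      PIC X(20).
               05  CITY      PIC X(20).
               05  STATE     PIC XX.
               05  ZIP       PIC 9(5).
      *    SEARCH ALL (binary search) works only because the keys
      *    above declare the table's expected order:
           SEARCH ALL MAIL-ENTRY
               AT END DISPLAY 'NOT FOUND'
               WHEN STATE (MAIL-IDX) = WS-STATE
                AND CITY  (MAIL-IDX) = WS-CITY
                   DISPLAY NAME (MAIL-IDX)
           END-SEARCH
```

If the table were loaded out of STATE/CITY order, this SEARCH ALL could silently return wrong results, since the KEY phrase declares the order but does not enforce it.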

Interview Question:

INDEXED BY and KEY IS are different phrases in the context of OCCURS. We use INDEXED BY to reference table entries through an index (for example, with SET, SEARCH, or direct subscripting). The KEY IS phrase does not sort the table; it declares the order the table is expected to be in, so that SEARCH ALL can perform a binary search.

Tough COBOL Interview Questions

Recently I conducted some interviews and asked many experienced mainframe programmers the file-handling questions below. Many of them were confused and could not answer. But these are very basic questions that everyone should always remember.

After years of programming, we forget these things.

Everyone knows about sequential file handling, but people are confused about relative files and indexed files.

Relative files:

Access mode is Random

Organization is Relative  ==> This declares that the file is a relative file, whose records are addressed by relative record number.

To access relative files, we need to give above information in SELECT statement of COBOL.


Read File1 Key is xyz

Close File

We can read relative files in the above fashion. Each time, we need to supply the key value (the relative record number).
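A minimal sketch of random access to a relative file; the file name, key name, and record number are assumptions:

```cobol
       SELECT REL-FILE ASSIGN TO RELFILE
           ORGANIZATION IS RELATIVE
           ACCESS MODE IS RANDOM
           RELATIVE KEY IS WS-REC-NUM.
      *    ...
           OPEN INPUT REL-FILE
           MOVE 25 TO WS-REC-NUM
           READ REL-FILE
               INVALID KEY DISPLAY 'RECORD 25 NOT FOUND'
           END-READ
           CLOSE REL-FILE
```

The INVALID KEY branch fires when no record exists at the supplied relative record number, so every random READ should handle it.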

Indexed Files


Access mode is Random

Organization is Indexed

To access Indexed files, we need to give above information in SELECT statement of COBOL.

The difference between ACCESS MODE IS RANDOM and ACCESS MODE IS DYNAMIC:

ACCESS MODE IS RANDOM means we retrieve the one record matching a key value.

Whereas with ACCESS MODE IS DYNAMIC, we can first establish a position dynamically (with START) and then read records sequentially from that point.

We use the START verb in dynamic access mode.

The ORGANIZATION IS RELATIVE clause is used to specify that this is a relative file.

The ACCESS MODE clause is optional. If this clause is omitted, the default mode is SEQUENTIAL. A relative file with ACCESS MODE SEQUENTIAL operates logically very much like an ORGANIZATION IS SEQUENTIAL file, except that you can delete records (see DELETE below) and position to a given location within the file (see START below).

If ACCESS MODE IS RANDOM, the target record for every I-O statement is specified by the value of data-name-1.

ACCESS MODE IS DYNAMIC is the most flexible. By using specific syntax in the I-O statements, you can process the file sequentially or randomly at any time. The downside is that you must specify which type of access you want for each read operation. See the individual I-O statements that follow for more details.

data-name-1 must be an integer numeric item, and its PICTURE cannot contain the P symbol. Obviously, it must be large enough to contain the highest record number you will need. It also must not be contained in any of the 01 records under the FD for this file. Some implementations may impose more restrictions on the format of data-name-1.
START Statement 

The START statement is used with relative files to logically position the file to a specific RECORD. The file must be declared with SEQUENTIAL or DYNAMIC access mode and opened in INPUT mode or I-O mode. The START statement does not modify the contents of the record area. The result of a successful START statement is that the following READ NEXT statement will input the record located by the START statement.
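A sketch of START followed by READ NEXT, for a file declared with DYNAMIC access; the file and data names are assumptions:

```cobol
      * Position to the first record at or after number 100,
      * then read forward sequentially from there.
           MOVE 100 TO WS-REC-NUM
           START REL-FILE KEY IS >= WS-REC-NUM
               INVALID KEY DISPLAY 'NO RECORD AT OR AFTER 100'
           END-START
           READ REL-FILE NEXT RECORD
               AT END DISPLAY 'END OF FILE'
           END-READ
```

Note that START only positions the file; the record area is unchanged until the following READ NEXT actually inputs the located record.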


START file-name-1 [KEY relation-condition-1 data-name-1]

[INVALID KEY imperative-statement-1]

[NOT INVALID KEY imperative-statement-2]

[END-START]


The word KEY is for documentation only.

relation-condition-1 can be any of the following:

IS EQUAL TO / IS =
IS GREATER THAN / IS >
IS GREATER THAN OR EQUAL TO / IS >=

The words IS, TO, and THAN are for documentation only.

data-name-1 must be the data item specified in the RELATIVE KEY phrase of the ACCESS MODE clause in the associated file-control SELECT statement.

If the KEY phrase is not specified, relation IS EQUAL TO is implied.

The INVALID KEY phrase causes imperative-statement-1 to be executed if the START was unsuccessful because no record in file-name-1 satisfies the relation condition.

The NOT INVALID KEY phrase causes imperative-statement-2 to be executed if the START was successful.

The END-START phrase ends the scope of the START statement.
DELETE Statement

The DELETE statement removes a record from the RELATIVE file. The RELATIVE file must be opened in I-O mode. The DELETE statement does not modify the contents of the record area. For RELATIVE files in SEQUENTIAL mode, the record deleted is the record obtained by the preceding READ statement, which must have been successful. For RELATIVE files in RANDOM or DYNAMIC mode, the record to be deleted is the record number determined by the value of the RELATIVE KEY data item associated with file-name-1.

DELETE file-name-1 RECORD

[INVALID KEY imperative-statement-1]

[NOT INVALID KEY imperative-statement-2]

[END-DELETE]


The words RECORD and KEY are for documentation only.

The INVALID KEY and NOT INVALID KEY phrases must not be used with a RELATIVE file in SEQUENTIAL access mode.

The INVALID KEY phrase causes imperative-statement-1 to be executed if the DELETE was unsuccessful because the record was not available.  The NOT INVALID KEY phrase causes imperative-statement-2 to be executed if the DELETE was successful.

The END-DELETE phrase ends the scope of the DELETE statement.
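A sketch of a random-mode DELETE; the file, key name, and record number are assumptions:

```cobol
      * File must be open in I-O mode; the record to delete is
      * selected by the RELATIVE KEY data item.
           MOVE 42 TO WS-REC-NUM
           DELETE REL-FILE RECORD
               INVALID KEY DISPLAY 'RECORD 42 NOT FOUND'
               NOT INVALID KEY DISPLAY 'RECORD 42 DELETED'
           END-DELETE
```

In SEQUENTIAL access mode, by contrast, no key is moved: the record removed is the one obtained by the preceding successful READ.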

How do the different OPEN modes work with the different ACCESS modes?

ACCESS MODE SEQUENTIAL: OPEN EXTEND is allowed.
ACCESS MODE RANDOM: OPEN EXTEND is an invalid access/open combination.
ACCESS MODE DYNAMIC: OPEN EXTEND is an invalid access/open combination.

Domain Knowledge-Credit Card Processing

This domain is also called PCI (Payment Card Industry). It covers debit, credit, prepaid, e-purse, ATM, and POS cards and the associated businesses.


The payment card industry consists of all the organizations which store, process and transmit cardholder data, most notably for debit cards and credit cards. The security standards are developed by the Payment Card Industry Security Standards Council, which develops the Payment Card Industry Data Security Standards used throughout the industry. Individual card brands establish compliance requirements that are used by service providers and have their own compliance programs. Major card brands include American Express, Discover Financial Services, Japan Credit Bureau, MasterCard Worldwide and Visa International. Most companies use member banks that connect and accept transactions from the card brands. Not all card brands use member banks; American Express, for example, acts as its own bank.

As of 2014, the United States uses a magnetic stripe on a card to process transactions, and its security relies on the holder’s signature and visual inspection of the card to check for features such as a hologram. This system will be outmoded and replaced by EMV in 2015. EMV is a global standard for inter-operation of integrated circuit cards (IC cards or “chip cards”) and IC card capable point of sale (POS) terminals and automated teller machines (ATMs), for authenticating credit and debit card transactions. It has enhanced security features, but is still susceptible to fraud.

A credit card allows its holder to buy goods and services based on the holder’s promise to pay for them.

• Its usage started in the 1920s in the US for selling fuel to automobile owners, and it became usable by general customers when Diners Club was launched in the early 1950s.

• In 1958, the Bank of America issued the BankAmericard in the state of California; this is known to be the first successful modern credit card.
Credit Card v/s Debit Card:

Both debit cards and credit cards (and other cards, like smart cards) are plastic money that serve as a medium of payment. With a credit card, the customer (credit card holder) can buy goods and services at a point of sale (POS) from merchant establishments (provided such arrangements exist) without making a prior payment. This credit facility is provided by the issuer bank to the customer for a specific period.
• However, in the case of debit cards, the customer (debit card holder) can buy goods and services by automatically debiting the payments to card holder’s banks account.

• In case of a credit card, the card holder uses credit line by making drawings within a specified or sanctioned limit and makes payment on receiving the bill along with the applicable charges and interests.

• In the case of debit cards, the card holder uses the balance in his/her own bank account, and payment is made immediately on purchase.

Domain Knowledge in Segregated Fund

A Segregated Fund or Seg Fund is a type of investment fund administered by Canadian insurance companies in the form of individual, variable life insurance contracts offering certain guarantees to the policyholder, such as reimbursement of capital upon death. As required by law, these funds are fully segregated from the company’s general investment funds, hence the name. A segregated fund is synonymous with the U.S. insurance industry’s “separate account” and related insurance and annuity products.

A segregated fund is an investment fund that combines the growth potential of a mutual fund with the security of a life insurance policy. Segregated funds are often referred to as “mutual funds with an insurance policy wrapper”.

Like mutual funds, segregated funds consist of a pool of investments in securities such as bonds, debentures, and stocks. The value of the segregated fund fluctuates according to the market value of the underlying securities.

Segregated funds do not issue units or shares; therefore, a segregated fund investor is not referred to as a unitholder. Instead, the investor is the holder of a segregated fund contract. Contracts can be registered (held inside an RRSP or TFSA) or non-registered (not held inside an RRSP or TFSA). Registered investments qualify for annual tax-sheltered RRSP or TFSA contributions. Non-registered investments are subject to tax payments on the capital gains each year and capital losses can also be claimed.

Other information:

Segregated funds are sold as deferred variable annuity contracts and can be sold only by licensed insurance representatives. Segregated funds are owned by the life insurance company, not the individual investors, and must be kept separate (or “segregated”) from the company’s other assets. Segregated funds are made up of underlying assets that are purchased via the Life assurance companies. Investors do not have ownership share. Segregated Funds have guarantees and run for a period. Should the investor leave before the end date, he/she may be penalized.

Transfer Agency:

Definition from Investopedia- A trust company, bank or similar financial institution assigned by a corporation to maintain records of investors and account balances and transactions, to cancel and issue certificates, to process investor mailings and to deal with any associated problems (i.e. lost or stolen certificates).

Latest Mainframe Interview Questions




- Difference between COMP and COMP-3?

COMP is a binary field; it is efficient for arithmetic, especially when aligned on word boundaries (SYNC). COMP-3 is packed decimal; it is efficient for decimal computations and for conversion to displayable form.
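As a quick sketch of the storage difference (the field names are illustrative):

```cobol
      * COMP (binary): 9(1)-9(4) -> 2 bytes, 9(5)-9(9) -> 4 bytes.
       01  WS-COUNT    PIC S9(4) COMP.
      * COMP-3 (packed): (digits + sign nibble) / 2, rounded up.
      * S9(7) COMP-3 -> (7 + 1) / 2 = 4 bytes.
       01  WS-AMOUNT   PIC S9(7) COMP-3.
```

So a 7-digit signed amount takes 4 bytes packed, versus 7 bytes in DISPLAY format.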

- Redefines advantages?

REDEFINES is possible only at the same level number. And, since redefined fields share the same space, by definition they have the same size. Although recent compilers (COBOL 85 and later) will permit redefined fields to be larger or smaller than the initial definition, and automatically adjust both definitions to the largest field, it’s convention (and good programming practice) to define them to be the same size, by adding a FILLER field if necessary.
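A minimal REDEFINES sketch (the names are illustrative): the same 8 bytes are viewed both as one number and as character sub-fields:

```cobol
       01  WS-DATE-NUM                  PIC 9(8).
       01  WS-DATE-CHAR REDEFINES WS-DATE-NUM.
           05  WS-YYYY                  PIC X(4).
           05  WS-MM                    PIC X(2).
           05  WS-DD                    PIC X(2).
```

Both definitions describe the same storage, so moving 20150131 to WS-DATE-NUM makes WS-YYYY hold '2015' with no extra move.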

- In COMP-3, how many digits can we define?

The maximum in COBOL is 18 digits (PIC S9(18)), which occupies 10 bytes in COMP-3: (18 digits + sign) / 2.

- Segregated Fund ?

This is an insurance/mutual fund domain question. It is a combination of a life insurance policy and a growth (mutual) fund.

  • How to read the last record in a COBOL file?
  • We can use a temporary array in COBOL: load the file into the array, keep it in DESCENDING order, and the first entry of the array then becomes the last record of the input file.

-COBOL logic:

PERFORM READ-PARA UNTIL EOF = 'Y'
DISPLAY TEMP.

READ-PARA.
    READ IN-FILE
        AT END MOVE 'Y' TO EOF
        NOT AT END MOVE INREC TO TEMP
    END-READ.

Each successful READ overwrites TEMP, so when end-of-file is reached, TEMP holds the last record.

-If we know the key value, we can read the last record directly.


- How can we override a COND value in a PROC?

On the EXEC statement that invokes the PROC, code COND.procstepname=(5,GT)
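A sketch of the override (the PROC name MYPROC and step name PSTEP1 are assumptions); the COND applies only to that one step inside the PROC:

```jcl
//STEP1   EXEC MYPROC,COND.PSTEP1=(5,GT)
```

Here PSTEP1 is bypassed when 5 is greater than the return code of any preceding step, which overrides whatever COND the PROC itself coded on that step.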

- What is GDG?

We use GDGs to store multiple generations (versions) of a dataset; for example, weekly, monthly, and quarterly files.

- What is IEFBR14?

It is a do-nothing program. It is used in JCL to create or delete a dataset through the DD statement's DISP parameter.


- How to extract duplicate records from a file?

//SORTXSUM DD DSN=datasetname,
//            SPACE=(CYL,(1,4),RLSE),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=800)
//SYSIN    DD *


- What is EXCEPTION join?

This is very rarely used in DB2 SQL; however, some older DB2 for i (iSeries) releases support this join. A left outer join retrieves the matching rows of both tables plus the non-matching rows of the left table.

A left exception join, in contrast, returns only the non-matching rows of the left table.

- What is join?

It is a basic question. We use joins to get data from multiple tables in a single query.

- What is COALESCE function in DB2?

COALESCE returns the first non-NULL value in its argument list. For example, COALESCE(BONUS, 0) returns the value of BONUS, or 0 when BONUS is NULL.
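A sketch against a hypothetical EMP table with a nullable BONUS column:

```sql
-- Substitute 0 for NULL so the addition never returns NULL.
SELECT EMPNO,
       SALARY + COALESCE(BONUS, 0) AS TOTAL_PAY
  FROM EMP;
```

Without the COALESCE, any row with a NULL BONUS would produce a NULL TOTAL_PAY, since arithmetic with NULL yields NULL.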

DB2 Four Isolation Levels-RR,RS,CS,UR


DB2 offers four locking isolation levels: Repeatable Read (RR), Read Stability (RS), Cursor Stability (CS), and Uncommitted Read (UR).

Each of these isolation levels allows the user and application to control the number and duration of read (Share) locks held within a unit of work. When you set the appropriate isolation level, based on a particular application’s requirement, lock resources can be minimized, and the user/program concurrency can be increased. Take the following example:
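The example (originally shown as a screenshot) is evidently a query retrieving employees whose last names begin with S; a minimal sketch with assumed table and column names:

```sql
SELECT LASTNAME, PHONENO
  FROM EMP
 WHERE LASTNAME LIKE 'S%';
```

The isolation-level descriptions below refer back to the rows and pages this query touches.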


Repeatable Read

RR means that the same query can be executed multiple times within the same unit of work, and the results of the query will be identical every time (repeatable). A Share lock will be set and will stay on each row or page until the query or logical unit of work has completed. All accessed rows or pages are locked, even if they do not satisfy the predicate. For table scans, this would encompass each row/page within the table. For other queries not processing table scans, this would encompass any rows or pages that meet the predicate criteria of the SQL statement. In the example above, this would be all rows or pages containing last names that begin with S.

All Share locks with RR are held until a commit takes place. These share locks would effectively prevent updates, inserts, or deletes (X locks) from occurring on any of the rows/pages from any other process until a commit is executed.

Note: Most query tools on the market have their default isolation level set to RR, which is not good. This causes many problems in environments where users, analysts, developers, and others query the data often during the day. Many times users leave their workstations while a query running in the background is applying and holding locks on the data being retrieved.

This is a common reason for many -911 SQLCODE errors.

Read Stability

RS is very much like RR, except that it allows inserts from other users. It can at times lock more rows/pages because locks are taken and held on data even when DB2 goes to stage 2 processing to further check predicates. If there is a stage 2 predicate and the data does not fit the predicate criteria, the RS lock is still placed and held.

Cursor Stability

CS sets a Share lock on each row or page processed, and the moment the cursor moves on to another row or page, it releases the lock. So at any one time, there is only one lock being held, either on a row or a page of data. This obviously allows good concurrency and some data integrity. Almost all batch COBOL programs in IT shops today are bound with the locking parameter CS. This is because as these programs execute cursor processing, they have no need to reread any data processed. The Share locks get freed up as the query moves through the cursor, and the query has data integrity as it processes each current row or page. This bind parameter, along with another bind parameter, CURRENTDATA(NO), provides an opportunity for avoiding locks altogether. With these two bind parameters together, DB2 can test whether a row or page has committed data on it, and if it has, DB2 will not have to obtain any lock.

Uncommitted Read

UR means that no Share locks are placed on any rows or pages processed by this query, and it does not matter if other processes hold locks on any of the data being retrieved. This can improve efficiency because it reduces overall processing time. But the one issue in using UR is that if some other process has applied updates to data being retrieved, UR will return the updated data from the buffer before the other process has executed a commit. If for some reason the other process does a rollback of its updates, then this UR process has read updated data that was never committed.

Even with the issue of possibly picking up non-committed data, there are definitely times when UR can be used: