Writing a Transaction File to a Master File in COBOL - 8 Tips


COBOL - master file - transaction file: questions on these topics are asked in many COBOL interviews. Here are some easy ideas for your sure success.

Master files contain the entire data of a particular application. For example, a master file may contain the entire data about the employees, payroll, or other applications of a company. Although the data stored in master files is permanent to some extent, there may be some random changes in this data. These changes are grouped together and stored in a file called the transaction file. The transaction file contains information about all the transactions that have to be applied on the data stored in the master file.

For example, the Employee file containing information about all the employees of a company is a master file. When new employees join the company or some employees leave the company, the information in the Employee file needs to be updated. Instead of applying these changes on the master file, they can be grouped together in a transaction file and applied on the master file together as a batch.

A transaction file is also a sequential file, which can be ordered or unordered. This file can contain three types of operations that can be applied to the master file:

  1. Inserting records
  2. Modifying records
  3. Deleting records
  • If both the master and transaction files are unordered, inserting records at the end of the file is the only operation that can be performed on the master file. This is done by opening the master file in EXTEND mode, reading the records from the transaction file, and writing them to the master file.
  • If both the master and transaction files are ordered, the operations can either be applied in place on the master file or a new master file can be created.
  • If all the operations in the transaction file modify existing records in the master file, there is no need to create a new master file. In this case, the master file is opened in I-O mode, the record to be modified is read, the changes are applied, and the record is rewritten in place.
  • It is not possible to add or delete records in place, because insertion or deletion cannot take place in the middle of an ordered set of records. To solve this problem, the concept of a new master file is used. In this case, any operation from the transaction file requires three files: the old master file and the transaction file, both opened in INPUT mode, and the new master file, opened in OUTPUT mode. After all the operations are performed, the new master file replaces the old one (see the sketch after this list).
  • If the master file is ordered and the transaction file is unordered, the transaction file first has to be sorted on the key field of the master file. The sorted transaction file can then be read and the operations performed on the master file.
  • If the master file is unordered and the transaction file is ordered, the master file can be sorted on the key field, which is unique for each record. The operations can then be performed and the updated file obtained.
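
The classic way to apply an ordered transaction file to an ordered master file is the sequential-update (balance line) pattern. Below is a minimal COBOL sketch of it; the file names, record layouts, key field, and transaction codes ('I' insert, 'U' update, 'D' delete) are all hypothetical, and both input files are assumed to be sorted on the key.

       IDENTIFICATION DIVISION.
       PROGRAM-ID. MASTUPDT.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT OLD-MASTER ASSIGN TO OLDMAST.
           SELECT TRANS-FILE ASSIGN TO TRANFILE.
           SELECT NEW-MASTER ASSIGN TO NEWMAST.
       DATA DIVISION.
       FILE SECTION.
       FD  OLD-MASTER.
       01  OLD-REC.
           05  OLD-KEY       PIC X(05).
           05  OLD-DATA      PIC X(75).
       FD  TRANS-FILE.
       01  TRAN-REC.
           05  TRAN-KEY      PIC X(05).
           05  TRAN-CODE     PIC X(01).
           05  TRAN-DATA     PIC X(75).
       FD  NEW-MASTER.
       01  NEW-REC.
           05  NEW-KEY       PIC X(05).
           05  NEW-DATA      PIC X(75).
       PROCEDURE DIVISION.
           OPEN INPUT  OLD-MASTER TRANS-FILE
                OUTPUT NEW-MASTER
           PERFORM READ-OLD
           PERFORM READ-TRAN
           PERFORM UNTIL OLD-KEY = HIGH-VALUES
                     AND TRAN-KEY = HIGH-VALUES
               EVALUATE TRUE
      *            No transaction for this master record: copy it.
                   WHEN OLD-KEY < TRAN-KEY
                       WRITE NEW-REC FROM OLD-REC
                       PERFORM READ-OLD
      *            Key not on the master: 'I' inserts a new record.
                   WHEN TRAN-KEY < OLD-KEY
                       IF TRAN-CODE = 'I'
                           MOVE TRAN-KEY  TO NEW-KEY
                           MOVE TRAN-DATA TO NEW-DATA
                           WRITE NEW-REC
                       END-IF
                       PERFORM READ-TRAN
      *            Keys match: 'U' rewrites the record; 'D' drops
      *            it by simply not writing it to the new master.
                   WHEN OTHER
                       IF TRAN-CODE = 'U'
                           MOVE TRAN-KEY  TO NEW-KEY
                           MOVE TRAN-DATA TO NEW-DATA
                           WRITE NEW-REC
                       END-IF
                       PERFORM READ-OLD
                       PERFORM READ-TRAN
               END-EVALUATE
           END-PERFORM
           CLOSE OLD-MASTER TRANS-FILE NEW-MASTER
           GOBACK.
       READ-OLD.
           READ OLD-MASTER
               AT END MOVE HIGH-VALUES TO OLD-KEY
           END-READ.
       READ-TRAN.
           READ TRANS-FILE
               AT END MOVE HIGH-VALUES TO TRAN-KEY
           END-READ.

Note how a 'D' transaction deletes by omission: the matching old-master record is read past and never written to the new master.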

ALTER TABLE table_name DROP COLUMN column_name


We can use the ALTER statement in a DB2 SQL query to drop a column:

ALTER TABLE table_name
DROP COLUMN column_name

However, there are many other conditions that determine whether a column can be dropped. DROP COLUMN is not allowed if any of the following is true:

  • The containing table space is not a universal table space.
  • The table is a created global temporary table.
  • The table is a system-period temporal table.
  • The table is a history table.
  • The table is an archive-enabled table.
  • The table is an archive table.
  • The table has an edit procedure or a validation exit procedure.
  • The table contains check constraints.
  • The table is a materialized query table.
  • The table is referenced in a materialized query table definition.
  • The column is defined as a security label column.
  • The column is an XML column.
  • The column is a DOCID column.
  • The column is a hidden ROWID column.
  • The column is defined as ROWID GENERATED BY DEFAULT, and the table contains a hidden ROWID column.
  • The column is a ROWID column on which there is a dependent LOB column.
  • The column is part of the table partitioning key.
  • The column is part of the hash key.
  • All of the remaining columns in the table are hidden.
  • A view that is dependent on the table has INSTEAD OF triggers.
  • A trigger is defined on the table.
  • Any of the following objects are dependent on the table:
  • Extended indexes
  • Row permissions
  • Column masks
  • Inline SQL table functions

When you issue the SQL to drop a column, it is not executed immediately. Let us see what happens:

ALTER TABLE DROP COLUMN is considered a pending definition change. At the time that the ALTER statement is executed, semantic validation and authorization checking are performed as usual. However, the drop is not applied to the current definition or data at the time of the ALTER (that is, the catalog and data are untouched). An entry is recorded in the SYSIBM.SYSPENDINGDDL catalog table for the pending drop column, and the table space is placed in an advisory REORG-pending (AREOR) state.
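
As a sketch, assuming a hypothetical table EMP with a column PHONENO:

-- The drop is recorded as a pending change, not applied immediately
ALTER TABLE EMP DROP COLUMN PHONENO;

-- The pending entry can be seen in the catalog
SELECT * FROM SYSIBM.SYSPENDINGDDL;

-- A subsequent REORG of the table space materializes the drop.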

1 More Addition in DB2 V11: Global Variables


Traditionally within a relational database system, most interactions between an application and the DBMS are in the form of SQL statements within a connection.

To share information between SQL statements within the same application context, the application that issued the SQL statements has to do this work by copying the values from the output arguments, such as host variables, of one statement to the input host variables of another. Similarly, when applications issue host-language calls to another application, host variables need to be passed among applications as input or output parameters for the applications to share common variables. Furthermore, SQL statements that are defined and contained within the DBMS, such as the SQL statements in trigger bodies, cannot access this shared information.


These restrictions limit the flexibility of relational database systems and, thus, the ability of users of such systems to implement complex, interactive models within the database itself. Users of such systems are forced to put supporting logic inside their applications to access and transfer user application information and internal database information within a relational database system. Ensuring the security of the information that is transferred and accessed is also left to the user to enforce in their application logic.

How to create global variables?

CREATE VARIABLE BATCH_START_TS TIMESTAMP
DEFAULT CURRENT TIMESTAMP;
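
After creation, a global variable can also be assigned explicitly with a SET statement. A minimal sketch, with an arbitrary value:

SET BATCH_START_TS = CURRENT TIMESTAMP - 1 DAY;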


The new SYSIBM.SYSVARIABLES table includes one row for each global variable that is created.

The new SYSIBM.SYSVARIABLEAUTH table includes one row for each privilege of each authorization ID that has privileges on a global variable.

The SYSIBM.SYSVARIABLES_TEXT table is an auxiliary table for the DEFAULTTEXT column of the SYSIBM.SYSVARIABLES table.
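
To see which global variables exist in a subsystem, you can query this catalog table directly (SELECT * is used here to avoid guessing at individual column names):

SELECT * FROM SYSIBM.SYSVARIABLES;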

How can a global variable be accessed in different contexts?

-- Initial execution of the SQL
SELECT BATCH_START_TS, CURRENT TIMESTAMP
FROM SYSIBM.SYSDUMMY1;

-- Result set from the initial execution
BATCH_START_TS              CURRENT TIMESTAMP
2013-08-02-14.59.46.423414  2013-08-02-14.59.46.423414
-- Second execution of the same SQL statement in the same SPUFI session
SELECT BATCH_START_TS, CURRENT TIMESTAMP
FROM SYSIBM.SYSDUMMY1;

-- Result set from the second execution
BATCH_START_TS              CURRENT TIMESTAMP
2013-08-02-14.59.46.423414  2013-08-02-14.59.46.424678

-- Third execution of the same SQL statement in the same SPUFI session
SELECT BATCH_START_TS, CURRENT TIMESTAMP
FROM SYSIBM.SYSDUMMY1;

-- Result set from the third execution
-- BATCH_START_TS again returns 2013-08-02-14.59.46.423414 (the value
-- captured when the variable's default was first evaluated), while
-- CURRENT TIMESTAMP advances again.

Cashback In Credit Or Debit Cards


A Cashback reward program is an incentive program operated by credit card companies where a percentage of the amount spent is paid back to the card holder. Many credit card issuers, particularly those in the United Kingdom and United States, run programs to encourage use of the card where the card holder is given points, air miles or a monetary amount. This last benefit, a monetary amount, is usually known as cashback or cash back reward.


Where a card issuer operates such a scheme, card holders typically receive between 0.5% and 2% of their net expenditure (purchases minus refunds) as an annual rebate, which is either credited to the credit card account or paid to the card holder separately.[1]

When accepting payment by credit card, merchants typically pay a percentage of the transaction amount in commission to their bank or merchant services provider. Merchants are not allowed to charge a higher price when a credit card is used as opposed to other methods of payment, so there is no penalty for a card holder to use their credit card. The credit card issuer is sharing some of this commission with the card holder to incentivise them to use the credit card when making a payment.

Rewards-based credit card products like cash back are most beneficial to consumers who pay off their credit card statement every month. Rewards-based products generally carry a higher annual percentage rate; if the balance is not paid in full every month, the extra interest eclipses any rewards earned. Most consumers do not know that rewards-based credit cards charge higher fees to the vendors who accept them, without the vendors receiving any notification.

What is the benefit for the card issuer?

When merchants accept payment via credit card, they are required to pay a percentage of the transaction amount as a fee to the credit card company. If the card holder has a participating cash back rewards program, the credit card issuer is simply sharing some of the merchant fees with the consumer. The goal is to incentivize people to use their credit cards when making payments rather than cash, which earns them no rewards. The more a consumer uses a credit card, the more merchant fees the credit card company can earn.

Start Tuning Your DB2 SQL Query (Part-1)


  1. Check the predicates. Whether it is one query or multiple queries in a program, check every predicate in every query to ensure that they are indexable, stage 1, and as simple and straightforward as possible.
  2. If there is a Distinct or Group By in the query, make sure it is needed and then look at the Explain to see if it is causing a sort to take place. If the Distinct or Group By is needed, maybe there is another way to rewrite the query to handle a duplicates issue that will not cause a sort.
  3. Execute an Explain (a sketch follows this list). In the Explain output, check the following: Are any tablespace scans occurring? Are any sorts occurring? If the query has a Union, Distinct, Group By, or Order By in it, does it need to be there?
  4. If there is a join involved, what is the order of tables being processed? DB2 should be selecting the table that will be filtered the most as the starting table. If it is not selecting the table being filtered the most, then check the columns of the predicates and make sure there are enough statistics on these columns to help the optimizer. To determine which table is going to be filtered the most, you must know the values coming in at runtime. You can execute Select count(*) statements to figure this out.
  5. All correlated subqueries should use an index and, if possible, should process with indexonly = yes. A correlated subquery is a subquery that references a column from the outer table.
  6. Any nested loop join operations should have their tables processed using an index with matching columns. If the starting (composite) table is showing a tablespace scan, this may not be much of an issue because it is scanned only once. But for a joined table, any tablespace scan will cause that table to be scanned numerous times.
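
A minimal sketch of point 3, assuming PLAN_TABLE already exists under your authorization ID and using hypothetical table and column names:

EXPLAIN PLAN SET QUERYNO = 100 FOR
SELECT EMPNO, LASTNAME
FROM EMP
WHERE WORKDEPT = 'D11';

-- Inspect the access path: ACCESSTYPE 'R' means a tablespace scan,
-- 'I' means index access; MATCHCOLS shows matching index columns;
-- METHOD shows the join method.
SELECT QBLOCKNO, PLANNO, TNAME, ACCESSTYPE, MATCHCOLS, METHOD
FROM PLAN_TABLE
WHERE QUERYNO = 100
ORDER BY QBLOCKNO, PLANNO;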

1 More Addition to Improve DB2 Insert Performance


1 More Addition in DB2 V11 to improve insert performance:

Having to index every data row affects performance and the size of the index. When creating an index, it is useful to exclude one or more values from being indexed, such as values that will never be used in a query, for example NULL, blank, and 0.

DB2 11 NFM can improve insert performance for rows with NULL key values by offering the option of excluding NULL keys from indexes.

The CREATE INDEX statement is extended with an EXCLUDE NULL KEYS clause, and the RUNSTATS utility then collects statistics only on non-NULL values.
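
A minimal sketch with hypothetical table and column names:

CREATE INDEX IX_ORD_SHIPDT
ON ORDERS (SHIP_DATE)
EXCLUDE NULL KEYS;

-- Rows where SHIP_DATE is NULL are not indexed, so inserts of
-- not-yet-shipped orders avoid maintenance on this index.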

All table statistics derived from an index are adjusted by the number of excluded NULL values. Therefore the table statistics will be the same whether they were derived from a table scan, an EXCLUDE NULL KEYS index, or a non-EXCLUDE NULL KEYS index (or INCLUDE NULL KEYS index).

After converting existing indexes to EXCLUDE NULL indexes, monitor application performance. Insert performance should improve and query performance difference should be minimal.

DB2-New Built-in Functions In Version 11

ARRAY_AGG

The ARRAY_AGG function returns an array in which each value of the input set is assigned to an element of the array. ARRAY_AGG can be invoked in the following situations:

  • The select list of a SELECT INTO statement
  • The select list of a fullselect in the definition of a cursor that is not scrollable
  • The select list of a scalar fullselect as a source data item for a SET assignment-statement (or SQL PL assignment-statement)
  • A RETURN statement in an SQL scalar function
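
A minimal sketch of the SET assignment context, assuming a hypothetical ordinary array type PHONELIST, an SQL PL variable PHONES of that type, and a table EMP:

-- CREATE TYPE PHONELIST AS VARCHAR(12) ARRAY[20];
SET PHONES = (SELECT ARRAY_AGG(PHONENO)
              FROM EMP
              WHERE WORKDEPT = 'D11');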

ARRAY_DELETE

The ARRAY_DELETE function deletes elements from an array. This function can be specified only in the following specific contexts:

  • As a source value for a SET assignment-statement (or SQL PL assignment-statement) or VALUES INTO statement
  • As the value to be returned in a RETURN statement in an SQL scalar function

ARRAY_FIRST

The ARRAY_FIRST function returns the minimum array index value of an array. The data type of the result is the data type of the array index, which is INTEGER for an ordinary array. If the array expression is not null and the array is not empty, the value of the result is the minimum array index value, which is 1 for an ordinary array.

ARRAY_LAST

The ARRAY_LAST function returns the maximum array index value of an array. The data type of the result is the data type of the array index, which is INTEGER for an ordinary array. If the array expression is not null and the array is not empty, the value of the result is the maximum array index value, which is the cardinality of the array for an ordinary array.

ARRAY_NEXT

The ARRAY_NEXT function returns the next larger array index value, relative to a specified array index value.

ARRAY_PRIOR
The ARRAY_PRIOR function returns the next smaller array index value, relative to a specified array index value.

CARDINALITY
The CARDINALITY function returns the number of elements in an array. The data type of the result is BIGINT.

The result of the CARDINALITY function is as follows:

  • For an ordinary array, the result is the highest array index for which the array has an assigned element. Elements that have been assigned the null value are considered to be assigned elements.
  • For an associative array, the result is the actual number of unique array index values that are defined in array-expression.
  • For an empty array, the result is 0.
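
A sketch tying the index functions together, reusing the hypothetical PHONES array variable from the ARRAY_AGG example above (N, I1, and I2 are assumed INTEGER SQL PL variables):

SET N  = CARDINALITY(PHONES);    -- number of elements
SET I1 = ARRAY_FIRST(PHONES);    -- 1 for an ordinary array
SET I2 = ARRAY_LAST(PHONES);     -- equals the cardinality here
SET I1 = ARRAY_NEXT(PHONES, I1); -- 2, the next larger index value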

MAX_CARDINALITY

The MAX_CARDINALITY function returns the maximum number of elements that an array can contain. This value is the cardinality that was specified in the CREATE TYPE statement for an ordinary array type.

The result of the MAX_CARDINALITY function is as follows:

  • For an ordinary array, the result is the maximum number of elements that the array can contain.
  • For an associative array, the result is the null value.

TRIM_ARRAY

The TRIM_ARRAY function deletes elements from the end of an ordinary array. It can be invoked only in the following contexts:

  • A source value for a SET assignment-statement or SQL PL assignment-statement, or a VALUES INTO statement
  • The value that is returned in a RETURN statement in an SQL scalar function
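
A one-line sketch, again with the hypothetical PHONES variable, that deletes the last two elements:

SET PHONES = TRIM_ARRAY(PHONES, 2);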

Ref: IBM

ISPF-Commands For Senior Software Developers (3 Of 5)


21) I am not able to delete a dataset that resides on a tape. Why?

A. You cannot delete or rename a tape dataset; you can only uncatalog it.

ISPF Editor

22) How do I see only those lines that contain a particular string?

A. Type 'X ALL' and then 'F search-string ALL'.

23) Now I want to see only those lines that contain a particular value in the display I get after doing an 'X ALL, F ALL'.

A. Type 'F new-value ALL NX'. 'NX' restricts the search to the displayed (non-excluded) lines.

24) How do I go to a particular line number?

A. Use the 'L line-number' command. This is the Locate command.

25) I want to find a search-string that is not a prefix/suffix of another word, i.e., an instance of the search-string that is a whole word by itself. How can I do this?

A. Use the keyword WORD at the end of the FIND command. E.g., 'F job WORD' finds the whole word 'job'.

26) How do I specify the direction of search, i.e., how do I search for a string in the backward or forward direction (with respect to the current cursor position)?

A.

Direction                        Keyword    Example
Backward                         PREV       F 'job' PREV finds the string 'job' in the backward direction.
Forward                          (default)  F 'job' finds the string 'job' in the forward direction; no keyword is needed.
From the beginning of the file   FIRST      F 'job' FIRST finds the first instance of the string 'job' in the file.
Count of all instances in file   ALL        F 'job' ALL finds the first instance of the string 'job' and gives the total number of instances in the file.

27) How do I search for a value, say '0980312', stored in COMP-3 format?

A. Type F X'0980312'.

28) How do I do a case-sensitive find for a given search-string?

A. Enclose the search-string within quotes and put the letter C in front of the string. E.g., F C'FindMe' will find FindMe strings only if the case matches.

29) How do I find any non-blank character?

A. Use the picture string '^'. E.g., "F P'^' 36" will find the next non-blank character in column 36.

30) How do I find any numeric character?

A. Use the picture string '#'. E.g., 1. "F P'#'" finds the next numeric character. 2. "F 'PAGE ##'" finds the next instance of the string 'PAGE' followed by a blank and two numeric characters.

IMSDB Interview Questions (Part-2)


1) How does CICS access an IMS DB database?

A. Access is through the CICS command-level interface.

2) How do REXX execs access IMS DB?

A. REXX execs can invoke IMS functions by using the IMS adapter for REXX.

3) What is the ICAL call in IMS DB?

A. The IMS Call (ICAL) call is used by an IMS TM-managed application program to send a synchronous processing request to a service that runs in a z/OS or distributed environment.

4) What is DFSDDLT0 in IMS DB?

A.

DFSDDLT0 is an IMS application program test tool that issues DL/I calls to IMS based on control statement information. You can use it to verify and debug DL/I calls independently of application programs. You can run DFSDDLT0 using any PSB. You can also use DFSDDLT0 as a general-purpose database utility program.

The functions that DFSDDLT0 provides include the following:

  • Issuing any valid DL/I call against any database using the following:
      • Any segment search argument (SSA) or PCB, or both
      • Any SSA or AIB, or both
  • Comparing the results of a call to expected results. This includes the contents of selected PCB fields, the data returned in the I/O area, or both.
  • Printing the control statements, the results of calls, and the results of comparisons, but only when the output is useful, such as after an unequal compare.
  • Dumping DL/I control blocks, the I/O buffer pool, or the entire batch region.
  • Punching selected control statements into an output file to create new test data sets. This simplifies the construction of new test cases.
  • Merging multiple input data sets into a single input data set using a SYSIN2 DD statement in the job control language (JCL). You can specify the final order of the merged statements in columns 73 to 80 of the DFSDDLT0 control statements.
  • Sending messages to the z/OS system console (with or without a reply).

5) In how many ways can application programs interact with IMS DB?

A.

Application programs can interact with IMS DB in two ways:

  • Applications that use the Data Language/Interface (DL/I) database call interface
  • Java applications that use the IMS Universal drivers for Java Database Connectivity (JDBC) access or DL/I access

6) What is a direct access IMS DB program?

A. A direct access program accesses, for every input transaction, segments in one or more database records. These accesses are based on database record and segment identification. This identification is essentially derived from the transaction input and is normally the root-key value and additional (key) field values of dependent segments. For more complex transactions, segments can be accessed in several IMS databases concurrently.

7) What is a sequential access IMS DB program?

A. A sequential access program accesses sequentially selected segments of all of a consecutive subset of a particular database. The sequence is usually determined by the key of the root segment. A sequential access program can also access other databases, but those accesses are direct, unless the root keys of both databases are the same.

8) What is the role of a full-function database?

A. IMS provides full-function database types to support direct and sequential access application programs.

9) What is the role of a Fast Path database?

A. IMS provides Fast Path database types for direct access application programs.

10) Are test PSBs and production PSBs different?

A. Yes. At execution time, each application program uses one PSB, and it is usually a different PSB from those used by other application programs. The PSB identifies, through its DB PCBs, the databases that the application program is authorized to process.

5 More Frequently Asked DB2 SQL Queries


I collected these SQL queries from my friends; for DB2 interviews, you need to practice them many times.

Query#1

1) What are the different clauses of a SELECT statement?

A.

select clause: Enumerates the column names and related constructs

from clause: Describes the target(s) from which the data will be gathered and how multiple targets should be joined

where clause: Describes conditions known as predicates that the target data must meet in order to be included or considered for the query results

group by clause: Describes how nonaggregated data will be handled in the presence of related aggregated data (sums, averages, and so on)

having clause: Optional equivalent to the where clause, which is used as criteria to judge groups formed in the group by clause

order by clause: Provides optional ordering to the otherwise unordered sets of data and results
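
Here is a single query that exercises every clause listed above (EMPLOYEE and its columns follow the DB2 sample database, but treat the names as illustrative):

SELECT WORKDEPT, SUM(SALARY) AS DEPTSAL   -- select clause
FROM EMPLOYEE                             -- from clause
WHERE JOB <> 'PRES'                       -- where clause
GROUP BY WORKDEPT                         -- group by clause
HAVING SUM(SALARY) > 100000               -- having clause
ORDER BY DEPTSAL DESC;                    -- order by clause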

Query#2

2) What are the frequently referred DB2 System tables?

A. SYSIBM.SYSTABLES
Information about all tables in the database

SYSIBM.SYSINDEXES
Information about the indexes on all tables

SYSIBM.SYSVIEWS
Information about all views in the database

Query#3

3) Sample SQL query for SYSTABLES?

A.

select name, creator
from sysibm.systables
where creator = 'FUZZY'

Result:

NAME      CREATOR
ACT       FUZZY
ADEFUSR   FUZZY
CATALOG   FUZZY

Query#4

4) How does Group By work in a SELECT statement?

A. Group By groups rows by the non-aggregated columns; aggregate functions such as SUM are then computed once per group.

select workdept, sum(salary) as newsalary
from employee
group by workdept

Query#5

5) How does Having work in a SELECT statement?

A. You can use the having clause to apply criteria to aggregated groups. Think of the having clause as similar to the where clause, but applying only to the aggregate groups after they are formed by the group by clause.

select workdept, sum(salary) DeptSal
from employee
group by workdept
having sum(salary) > 100000

WORKDEPT     DEPTSAL
D11          646620.00
A00          354250.00
E11          317140.00
C01          308890.00
D21          358680.00
E21          282520.00

These five questions are asked in many interviews. Keep reading my blog for more posts like this!