2 More Ideas on COBOL Debugging


COBOL debugging is a feature built into the COBOL language itself. Let us go into the details of this concept. Many COBOL programmers ask: if interactive debugging tools exist, why use COBOL debugging lines? It is a fair question, but the feature has some advantages of its own, described below.

How to implement COBOL debugging?

Any program line with a D character in the indicator area is a debugging line. If the SOURCE-COMPUTER paragraph in the Configuration Section of the Environment Division contains the phrase WITH DEBUGGING MODE, the debugging lines are treated as regular COBOL source code.

IDENTIFICATION DIVISION.
PROGRAM-ID. PROGRAM-NAME.

ENVIRONMENT DIVISION.
CONFIGURATION SECTION.
SOURCE-COMPUTER. HAL-2001 WITH DEBUGGING MODE.

If the WITH DEBUGGING MODE phrase is not present, the debugging lines are treated as comments.

In this way, you can insert debugging statements into your program and selectively turn them off or on by excluding or including the WITH DEBUGGING MODE phrase. Of course, you must recompile your program to remove or add the phrase, but you don’t need to remove any of the debugging lines.

Example for COBOL debugging:

A debugging line often contains a DISPLAY statement that shows the location of the statement. It also may contain the contents of critical data items that need to be monitored to find a particular problem. Here is an example:

001000 PERFORM A-200 THRU A-200-EXIT.

004000 A-200.
004010 some statements

004200 COMPUTE MAX-INTEREST = …

004201D DISPLAY "A-200-EXIT; MAX-INTEREST = ", MAX-INTEREST.

004210 A-200-EXIT.

004220 EXIT.

The debugging line (4201) is compiled and is a normal program statement when WITH DEBUGGING MODE is specified. In that case, the DISPLAY statement will be executed. If WITH DEBUGGING MODE is not specified, line 4201 is a comment.

Debugging lines are not restricted to the Procedure Division. They are allowed anywhere in the source program after the SOURCE-COMPUTER paragraph. You might define a file to which you write debugging information.
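
For example, debugging lines can declare such a file and write to it. Here is a minimal sketch (the file name, ddname, and record layout are hypothetical, not from the original post):

000100 ENVIRONMENT DIVISION.
000110 CONFIGURATION SECTION.
000120 SOURCE-COMPUTER. HAL-2001 WITH DEBUGGING MODE.
000130 INPUT-OUTPUT SECTION.
000140 FILE-CONTROL.
000150D    SELECT DEBUG-FILE ASSIGN TO DEBUGOUT.
000200 DATA DIVISION.
000210 FILE SECTION.
000220DFD  DEBUG-FILE.
000230D01  DEBUG-RECORD                 PIC X(80).
000300 PROCEDURE DIVISION.
000310D    OPEN OUTPUT DEBUG-FILE.
000320*    ...normal processing...
000330D    MOVE 'A-200 REACHED' TO DEBUG-RECORD.
000340D    WRITE DEBUG-RECORD.
000350D    CLOSE DEBUG-FILE.

When WITH DEBUGGING MODE is removed, every one of these D-lines becomes a comment, so the debug file is never defined, opened, or written.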

Advantages:

  1. If the program is not compiled WITH DEBUGGING MODE, the 'D' lines are treated as comments, so no debugging output is produced and spool space is saved.
  2. Even someone with little debugging experience can use this technique to test COBOL programs.

ISPF-Commands For Senior Software Developers (4 Of 5)


31. How do I find any non-numeric character?

A. Use picture string '-'. E.g., "f p'-'" finds the next non-numeric character.

32. How do I find any alphabetic character?

A. Use picture string '@'.

33. How do I find any uppercase alphabetic character?

A. Use picture string '>'.

34. How do I find any lowercase alphabetic character?

A. Use picture string '<'.

35. How do I find any non-display character?

A. Use picture string '.'. E.g., "f p'.'" finds the next instance of a non-display character.

36. How can I split a line in ISPF editor?

A. Key in 'TS' in columns 1-6, position the cursor at the point where you want to split the line, and press ENTER. The text from the cursor position onward moves to the next line; the portion before the cursor is retained on the original line.

TS----  This is first portion. It's been split here.
                               ^ Cursor is placed here.

Upon hitting ENTER, it splits into 2 lines as below:

------  This is first portion.
------                         It's been split here.   <-- next line

37. How do I join 2 lines?

A. You can do this by using the overlay command. For example:

-------- This is line-1.

-------- This is line-2.

Say you want to join these 2 lines. First, shift the text of the 2nd line to the column position where it should appear in the 1st line. Then key in 'M' in columns 1-6 of the 2nd line and 'O' in columns 1-6 of the 1st line.

----O--- This is line-1.

----M---                       This is line-2.

Upon hitting ENTER, line-2 joins line-1:

-------- This is line-1. This is line-2.

NOTE: If you key in ‘C’ instead of ‘M’ in the 2nd line, it joins the 2nd line to the first line and also retains the 2nd line.

38. How do I see the value of a field stored in COMP-3 format?

A. Type 'HEX ON' on the command line and go to the location of the field; read its value from the two hex lines displayed below it.
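
For example, a PIC S9(5) COMP-3 field containing +12345 occupies three bytes, X'12345C'. With HEX ON, the editor might render those bytes roughly like this (a hypothetical rendering; read each byte vertically across the two hex lines — in the character row, X'5C' happens to display as '*' in EBCDIC while the other two bytes are non-display):

..*
135
24C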

39. How do I get rid of the 4 to 5 message lines displayed at the beginning of the file in ISPF edit?

A. Use ‘RESET’ command.

40. How do I stop the standard numbers on Col 73-80 in the ISPF editor?

A. Type 'NUM OFF'. Some clients use columns 73-80 to mark their changes; in that case you have to have NUM OFF as the option.

Reduce SQL Complexity With Grouping Sets in DB2 V11



The GROUPING SETS option can be thought of as the union of two or more groups of rows into a single result set. It is logically equivalent to the union of multiple subselects with the group by clause in each subselect corresponding to one grouping set. This is similar to the DB2 for Linux, UNIX, and Windows and DB2 for IBM System i® support for grouping-sets and super-group specifications.

The following example shows an SQL statement that uses the GROUP BY clause with the GROUPING SETS option. The result set is shown first, followed by the query itself.

WORKDEPT  EDLEVEL  SEX   SUM_SALARY  AVG_SALARY  COUNT
A00       NULL     NULL  204250      40850       5
B01       NULL     NULL  41250       41250       1
C01       NULL     NULL  118890      29722.5     4
D11       NULL     NULL  258350      25835       10
D21       NULL     NULL  143250      28650       5
E01       NULL     NULL  40175       40175       1
E11       NULL     NULL  82250       27416.67    3
E21       NULL     NULL  124570      24914       5
NULL      14       NULL  157570      26261.67    6
NULL      15       NULL  27380       27380       1
NULL      16       NULL  332655      27721.25    12

SQL query:

SELECT WORKDEPT, EDLEVEL, SEX, SUM(SALARY) as SUM_SALARY, AVG(SALARY) as
AVG_SALARY, COUNT(*) as COUNT
FROM DSN81110.EMP WHERE SALARY > 20000
GROUP BY GROUPING SETS (WORKDEPT, EDLEVEL, SEX)

The result set is logically equivalent to the union all of three subselects with the group by clause in each subselect corresponding to one column each from the three columns on the grouping sets specification (while the other two column values are shown as NULLs).
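
To make that equivalence concrete, here is a sketch (not from the original article) of the UNION ALL form of the query above, assuming the sample table's column types (WORKDEPT CHAR(3), EDLEVEL SMALLINT, SEX CHAR(1)):

SELECT WORKDEPT, CAST(NULL AS SMALLINT) AS EDLEVEL,
       CAST(NULL AS CHAR(1)) AS SEX,
       SUM(SALARY) AS SUM_SALARY, AVG(SALARY) AS AVG_SALARY,
       COUNT(*) AS COUNT
FROM DSN81110.EMP WHERE SALARY > 20000
GROUP BY WORKDEPT
UNION ALL
SELECT CAST(NULL AS CHAR(3)), EDLEVEL, CAST(NULL AS CHAR(1)),
       SUM(SALARY), AVG(SALARY), COUNT(*)
FROM DSN81110.EMP WHERE SALARY > 20000
GROUP BY EDLEVEL
UNION ALL
SELECT CAST(NULL AS CHAR(3)), CAST(NULL AS SMALLINT), SEX,
       SUM(SALARY), AVG(SALARY), COUNT(*)
FROM DSN81110.EMP WHERE SALARY > 20000
GROUP BY SEX;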

What is EIB Storage Area in CICS


The Execute Interface Block (EIB) is a CICS area that contains information related to the current task. This post explains how to use some of its fields. For a complete explanation of these fields and their possible values, refer to the IBM manual CICS Application Programming Reference.

EIBDATE and EIBTIME contain the date and time your task was started. EIBDATE indicates what number day in the year it is and includes an identifier for the century. So December 31, 1999 is stored as 0099365 (the 365th day of 1999), and January 12, 2001 is stored as 0101012. EIBTIME reflects a 24-hour clock (where 2:00 p.m. is hour 14, for example). So midnight is stored as 0000000; one second before midnight is 0235959.

Although the date format is useful for date comparisons, it’s inappropriate for display purposes. And two time values can only be compared if you’re confident that both represent the same day. As a result, you’ll often use the CICS FORMATTIME command to convert times and dates to and from various formats. You’ll learn how to use the FORMATTIME command later in this chapter.
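
As a small preview, here is a minimal FORMATTIME sketch (the working-storage names are illustrative, not from the original post):

01  WS-ABSTIME          PIC S9(15) COMP-3.
01  WS-DATE             PIC X(10).
01  WS-TIME             PIC X(8).

*    Get the current time as an absolute-time value
     EXEC CICS ASKTIME ABSTIME(WS-ABSTIME) END-EXEC.
*    Convert it to MM/DD/YYYY and HH:MM:SS display formats
     EXEC CICS FORMATTIME ABSTIME(WS-ABSTIME)
               MMDDYYYY(WS-DATE) DATESEP('/')
               TIME(WS-TIME) TIMESEP(':')
     END-EXEC.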

Several of the Execute Interface Block fields are particularly useful when debugging a CICS program. In fact, you saw four of them used in the SYSERR program: EIBRESP, EIBRESP2, EIBRSRCE, and EIBTRNID. You might also use EIBRCODE to get the CICS response code, or EIBFN to determine the last CICS command that was executed.

EIBTRNID is often used for purposes other than debugging, too. It contains the trans-id that started the current task, so one of its common uses is to determine how a program was started. For example, you might check this field to insure that a program is invoked only from a menu, not by entering the program’s trans-id at a terminal. In that case, this field should contain the trans-id of the menu program.

How to access CICS storage areas other than working storage?

The Execute Interface Block (EIB) is a CICS area that contains information related to the current task, such as the date and time the task was started and the transaction-id that was used to start it.

The definition of the EIB is automatically inserted into the Linkage Section of the program when the program is prepared for execution. You don’t have to code it yourself.

When the user presses an attention identifer (AID) key, CICS passes a one-byte value to the program through the EIBAID field in the Execute Interface Block. You can use the value of this field to determine the processing the user has requested.

The EIBCALEN field contains the length of the data passed to the program through its communication area (DFHCOMMAREA). A length of zero indicates that no data was passed to the program. In a pseudo-conversational program, that means that it's the first execution of the program.
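
Here is a minimal sketch of how a pseudo-conversational program typically tests these two fields (the paragraph names are hypothetical; DFHAID is the IBM-supplied copybook that defines the AID-key values):

     COPY DFHAID.
     ...
     IF EIBCALEN = ZERO
*        No communication area, so this is the first execution
         PERFORM 1000-FIRST-TIME
     ELSE
         EVALUATE EIBAID
             WHEN DFHENTER PERFORM 2000-PROCESS-ENTRY
             WHEN DFHPF3   PERFORM 3000-END-SESSION
             WHEN OTHER    PERFORM 4000-INVALID-KEY
         END-EVALUATE
     END-IF.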

Other CICS System areas?

The Common System Area (CSA) is a major CICS control block that contains important system information, including pointers to most of the other CICS control blocks. Access to the information in the CSA is provided through the ASSIGN command.

The Common Work Area (CWA) is a storage area that can be accessed by any task in a CICS system. Its format is installation-dependent. It often contains information such as the company name, the current date already converted to the form MM/DD/YYYY, or other application-specific information.

The Terminal Control Table User Area (TCTUA) is a user-defined storage area that’s unique to the terminal where the current task is attached, and it’s maintained even when no task is attached to the terminal. So you may want to keep terminal-related information there.

The Transaction Work Area (TWA) is a storage area that’s unique to the current task, so you may want to use it to store information about the execution of a transaction. It’s deleted when the task ends.

To access the CWA, TCTUA, or TWA in the Linkage Section, you use the ADDRESS command to establish addressability.
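
For example, here is a minimal sketch for the CWA (the field layout shown is hypothetical, since the CWA format is installation-dependent):

 LINKAGE SECTION.
 01  COMMON-WORK-AREA.
     05  CWA-COMPANY-NAME    PIC X(30).
     05  CWA-CURRENT-DATE    PIC X(10).

 PROCEDURE DIVISION.
*    Establish addressability to the CWA
     EXEC CICS ADDRESS CWA(ADDRESS OF COMMON-WORK-AREA) END-EXEC.
*    CWA-COMPANY-NAME and CWA-CURRENT-DATE can now be referenced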

Code to handle unrecoverable errors in CICS

WORKING-STORAGE SECTION.
*
COPY ERRPARM.
C 01 ERROR-PARAMETERS.
C *   Fields filled from EIBRESP, EIBRESP2, EIBTRNID, and EIBRSRCE
C     05 ERR-RESP  PIC S9(8) COMP.
C     05 ERR-RESP2 PIC S9(8) COMP.
C     05 ERR-TRNID PIC X(4).
C     05 ERR-RSRCE PIC X(8).

 

Writing Transaction File to Master File in COBOL - 8 Tips


Questions about master files and transaction files come up in many COBOL interviews. Here are some easy ideas to help you succeed in yours.

Master files contain the entire data of a particular application. For example, a master file may contain the entire data about the employees, payroll, or other applications of a company. Although the data stored in master files is permanent to some extent, there may be some random changes in this data. These changes are grouped together and stored in a file called the transaction file. The transaction file contains information about all the transactions that have to be applied on the data stored in the master file.

For example, the Employee file containing information about all the employees of a company is a master file. When new employees join the company or some employees leave the company, the information in the Employee file needs to be updated. Instead of applying these changes on the master file, they can be grouped together in a transaction file and applied on the master file together as a batch.

A transaction file is also a sequential file, which can be ordered or unordered. This file can contain three types of operations that can be applied to the master file:

  1. Inserting records
  2. Modifying records
  3. Deleting records
  • If both the master and transaction files are unordered, inserting records at the end of the file is the only operation that can be performed on the master file. This is done by opening the master file in EXTEND mode, reading the records from the transaction file, and writing the records to the master file (see the sketch after this list).
  • If both the master and transaction files are ordered, the operations can either be applied in place in the master file, or a new master file can be created.
  • If all the operations in the transaction file are related to modifying existing data in the records in the master file, there is no need to create a new master file. In this case, the master file is opened in I/O mode, the record to be modified is read, the operations are performed, and the record is rewritten in place.
  • It is not possible to add or delete records in place, because records cannot be inserted into or removed from the middle of an ordered sequential file. To solve this problem, the concept of a new master file is used. In this case, any operation from the transaction file requires three files: the old master file, the transaction file, and the new master file. The old master file and the transaction file are opened in input mode, and the new master file is opened in output mode. After all the operations are performed, the new master file is generated.
  • If the master file is ordered and the transaction file is unordered, the transaction file also has to be sorted in the order of the key field of the master file. Then, the sorted transaction file can be read and the operations can be performed on the master file.
  • If the master file is unordered and the transaction file is ordered, the master file can be sorted on the key field, which is unique for each record. Then, the operations can be performed and the updated file can be obtained.
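
Here is a minimal COBOL sketch of the unordered case from the first bullet above (file and data names are illustrative; the SELECT and FD entries are omitted):

 WORKING-STORAGE SECTION.
 01  WS-TRANS-RECORD        PIC X(80).
 01  WS-EOF-SW              PIC X VALUE 'N'.
     88  TRANS-EOF          VALUE 'Y'.

 PROCEDURE DIVISION.
     OPEN EXTEND MASTER-FILE
     OPEN INPUT  TRANS-FILE
     PERFORM UNTIL TRANS-EOF
         READ TRANS-FILE INTO WS-TRANS-RECORD
             AT END
                 SET TRANS-EOF TO TRUE
             NOT AT END
*                Append each transaction record to the master file
                 WRITE MASTER-RECORD FROM WS-TRANS-RECORD
         END-READ
     END-PERFORM
     CLOSE MASTER-FILE TRANS-FILE
     STOP RUN.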

ALTER TABLE table_name DROP COLUMN column_name


In DB2, the ALTER TABLE statement can be used in an SQL query to drop a column:

ALTER TABLE table_name
DROP COLUMN column_name

However, there are several restrictions on which columns can be dropped. The DROP COLUMN clause fails if any of the following conditions are true:

  • The containing table space is not a universal table space.
  • The table is a created global temporary table.
  • The table is a system-period temporal table.
  • The table is a history table.
  • The table is an archive-enabled table.
  • The table is an archive table.
  • The table has an edit procedure or a validation exit procedure.
  • The table contains check constraints.
  • The table is a materialized query table.
  • The table is referenced in a materialized query table definition.
  • The column is defined as a security label column.
  • The column is an XML column.
  • The column is a DOCID column.
  • The column is a hidden ROWID column.
  • The column is defined as ROWID GENERATED BY DEFAULT, and the table contains a hidden ROWID column.
  • The column is a ROWID column on which there is a dependent LOB column.
  • The column is part of the table partitioning key.
  • The column is part of the hash key.
  • All of the remaining columns in the table are hidden.
  • A view that is dependent on the table has INSTEAD OF triggers.
  • A trigger is defined on the table.
  • Any of the following objects are dependent on the table:
  • Extended indexes
  • Row permissions
  • Column masks
  • Inline SQL table functions

Suppose the ALTER statement runs successfully, but the drop does not take effect immediately. Here is what happens:

ALTER TABLE DROP COLUMN is considered a pending definition change. At the time the ALTER statement is executed, semantic validation and authorization checking are performed as usual. However, the drop is not applied to the current definition or data at the time of the ALTER (that is, the catalog and data are untouched). An entry is recorded in the SYSIBM.SYSPENDINGDDL catalog table for the pending drop column, and the table space is placed in an advisory REORG-pending (AREOR) state.
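
To see the pending change, you can query the catalog. A sketch (the database and table space names MYDB and MYTS are hypothetical; verify the column names against your release's catalog documentation):

-- List pending definition changes for the affected table space
SELECT *
FROM SYSIBM.SYSPENDINGDDL
WHERE DBNAME = 'MYDB'
  AND TSNAME = 'MYTS';

Running the REORG TABLESPACE utility against the table space then materializes the pending drop and clears the AREOR state.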

1 More Addition Of Global Variables in DB2 V11


Traditionally within a relational database system, most interactions between an application and the DBMS are in the form of SQL statements within a connection.

To share information between SQL statements within the same application context, the application that issued the SQL statements has to do this work by copying the values from the output arguments, such as host variables, of one statement to the input host variables of another. Similarly, when applications issue host-language calls to another application, host variables need to be passed among applications as input or output parameters for the applications to share common variables. Furthermore, SQL statements that are defined and contained within the DBMS, such as the SQL statements in trigger bodies, cannot access this shared information.


These restrictions limit the flexibility of relational database systems and, thus, the ability of users of such systems to implement complex, interactive models within the database itself. Users of such systems are forced to put supporting logic inside their applications to access and transfer user application information and internal database information within a relational database system. Ensuring the security of the information that is transferred and accessed is also left to the user to enforce in their application logic.

How to create global variables?

CREATE VARIABLE BATCH_START_TS TIMESTAMP
DEFAULT CURRENT TIMESTAMP;


The new SYSIBM.SYSVARIABLES table includes one row for each global variable that is created.

The new SYSIBM.SYSVARIABLEAUTH table includes one row for each privilege of each authorization ID that has privileges on a global variable.

The SYSIBM.SYSVARIABLES_TEXT table is an auxiliary table for the DEFAULTTEXT column of the SYSIBM.SYSVARIABLES table.
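
As a sketch, the privileges recorded in SYSIBM.SYSVARIABLEAUTH are granted with GRANT statements like these (the authorization ID USERA is hypothetical):

-- Allow USERA to read and to set the global variable
GRANT READ ON VARIABLE BATCH_START_TS TO USERA;
GRANT WRITE ON VARIABLE BATCH_START_TS TO USERA;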

How can a global variable be accessed in different contexts? The following SPUFI example shows that BATCH_START_TS keeps its value across executions in the same session, while CURRENT TIMESTAMP advances:

-- Initial execution of the SQL
SELECT BATCH_START_TS, CURRENT TIMESTAMP
FROM SYSIBM.SYSDUMMY1
;
-- Result set from the initial execution
BATCH_START_TS              CURRENT TIMESTAMP
2013-08-02-14.59.46.423414  2013-08-02-14.59.46.423414

-- Second execution of the same SQL statement in the same SPUFI session
SELECT BATCH_START_TS, CURRENT TIMESTAMP
FROM SYSIBM.SYSDUMMY1
;
-- Result set from the second execution
BATCH_START_TS              CURRENT TIMESTAMP
2013-08-02-14.59.46.423414  2013-08-02-14.59.46.424678

-- Third execution of the same SQL statement in the same SPUFI session
SELECT BATCH_START_TS, CURRENT TIMESTAMP
FROM SYSIBM.SYSDUMMY1
;
-- Result set from the third execution
BATCH_START_TS              CURRENT TIMESTAMP

 

Cashback In Credit Or Debit Cards


A Cashback reward program is an incentive program operated by credit card companies where a percentage of the amount spent is paid back to the card holder. Many credit card issuers, particularly those in the United Kingdom and United States, run programs to encourage use of the card where the card holder is given points, air miles or a monetary amount. This last benefit, a monetary amount, is usually known as cashback or cash back reward.


Where a card issuer operates such a scheme, card holders typically receive between 0.5% and 2% of their net expenditure (purchases minus refunds) as an annual rebate, which is either credited to the credit card account or paid to the card holder separately.[1]

When accepting payment by credit card, merchants typically pay a percentage of the transaction amount in commission to their bank or merchant services provider. Merchants are not allowed to charge a higher price when a credit card is used as opposed to other methods of payment, so there is no penalty for a card holder to use their credit card. The credit card issuer is sharing some of this commission with the card holder to incentivise them to use the credit card when making a payment.

Rewards-based credit card products like cash back are most beneficial to consumers who pay their credit card statement off every month. Rewards-based products generally have a higher annual percentage rate (APR); if the balance is not paid in full every month, the extra interest will eclipse any rewards earned. Most consumers do not know that rewards-based credit cards charge higher fees to the vendors who accept them, often without the vendors receiving any notification.

What is the benefit for the card issuer?

When merchants accept payment via credit card, they are required to pay a percentage of the transaction amount as a fee to the credit card company. If the card holder has a participating cash back rewards program, the credit card issuer is simply sharing some of the merchant fees with the consumer. The goal is to incentivize people to use their credit cards when making payments rather than cash, which earns them no rewards. The more a consumer uses a credit card, the more merchant fees the credit card company can earn.

Start Tuning Your DB2 SQL Query (Part-1)


  1. Check the predicates. Whether it is one query or multiple queries in a program, check every predicate in every query to ensure that they are indexable, stage 1, and as simple and straightforward as possible.
  2. If there is a Distinct or Group By in the query, make sure it is needed and then look at the Explain to see if it is causing a sort to take place. If the Distinct or Group By is needed, maybe there is another way to rewrite the query to handle a duplicates issue that will not cause a sort.
  3. Execute an Explain. In the Explain output, check the following: Are any tablespace scans occurring? Are any sorts occurring? If the query has a Union, Distinct, Group By, or Order By in it, does it need to be there? (A sketch of running an Explain appears after this list.)
  4. If there is a join involved, what is the order of tables being processed? DB2 should be selecting the table that will be filtered the most as the starting table. If it is not selecting the table being filtered the most, then check the columns of the predicates and make sure there are enough statistics on these columns to help the optimizer. To determine which table is going to be filtered the most, you must know the values coming in at runtime. You can execute Select count(*) statements to figure this out.
  5. All correlated subqueries should use an index, and if possible, they should process with indexonly = yes. A correlated subquery is a subquery that contains a join to a column from the outer table.
  6. Any nested loop join operations should have their tables processed using an index with matching columns. If the starting (composite) table is showing a tablespace scan, then this may not be much of an issue due to the fact that it will be scanned only one time. But for a joined table, any tablespace scan will cause that table to be scanned numerous times.
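
A minimal sketch of the Explain step referenced in this list (the query number and join are illustrative, and a PLAN_TABLE must already exist under your authorization ID):

EXPLAIN PLAN SET QUERYNO = 100 FOR
SELECT E.EMPNO, E.LASTNAME, D.DEPTNAME
FROM DSN81110.EMP E
     INNER JOIN DSN81110.DEPT D
             ON D.DEPTNO = E.WORKDEPT
WHERE E.SALARY > 20000;

-- ACCESSTYPE 'R' flags a tablespace scan; SORTN_JOIN/SORTC_JOIN flag join sorts
SELECT QUERYNO, QBLOCKNO, PLANNO, METHOD, TNAME,
       ACCESSTYPE, MATCHCOLS, INDEXONLY, SORTN_JOIN, SORTC_JOIN
FROM PLAN_TABLE
WHERE QUERYNO = 100
ORDER BY QBLOCKNO, PLANNO;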

1 More Addition to Improve DB2 Insert Performance


1 More Addition in DB2 V11 to improve insert performance:

Having to index every data row affects performance and the size of the index. When creating an index, it is useful to exclude one or more values from being indexed, such as values that will never be used in a query, for example NULL, blank, and 0.

DB2 11 new-function mode (NFM) can improve insert performance for NULL entries by providing an option to exclude NULL keys from indexes.

The CREATE INDEX statement is extended with the EXCLUDE NULL KEYS clause, and the RUNSTATS utility then collects statistics only on non-NULL values.
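
For example, a minimal sketch (the index name is hypothetical, and the indexed column must be nullable):

CREATE INDEX XEMP_BONUS
    ON DSN81110.EMP (BONUS)
    EXCLUDE NULL KEYS;

Rows whose BONUS value is NULL are then not represented in the index at all.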

All table statistics derived from an index are adjusted by the number of excluded NULL values. Therefore the table statistics will be the same whether they were derived from a table scan, an EXCLUDE NULL KEYS index, or a non-EXCLUDE NULL KEYS index (or INCLUDE NULL KEYS index).

After converting existing indexes to EXCLUDE NULL indexes, monitor application performance. Insert performance should improve and query performance difference should be minimal.