The Outsourced Mainframe and the Human Factor

As a Generation Y mainframer born in the 1980s, I sometimes find it hard to explain to my generational peers what I do.

Why Mainframe?

Typically, people my age either a) have never heard of the mainframe or b) think of the punch cards and green-screen terminals they saw in a computer museum. They tend to be surprised when I tell them that mission-critical workloads like ledgers, payroll, inventory control, banking and financial transactions are, in the year 2018, actually running on mainframes. In fact, some 68% of the world's IT production workloads run on mainframes (cio.com) and 71% of global Fortune 500 companies have a mainframe (IBM). There are valid reasons for that, such as reliability, availability, serviceability, scalability and security.
While the business case for the mainframe is solid, the unfamiliarity of Gen Y and Z with mainframes is starting to become a problem. The experts are aging, and companies are finding it hard to replace them. One effect we see is the centralization of a limited workforce pool, in other words: the outsourcing of the mainframe.

The Human Factor

As an ISV in the mainframe ecosystem, we engage with customers who chose the outsourcing option, and we have sometimes witnessed surprising error scenarios: jobs are executed twice, not at all, or are misconfigured. Human errors seem to happen more often because vital tacit knowledge can get lost in the transition process. Furthermore, since the data on the mainframe is typically mission-critical, the consequences can be very serious, such as wrong account balances, duplicate transactions or payments made to the wrong recipients. In one case, 20,000 recipients were selected instead of 20.
While the mainframe is arguably the most reliable style of computing, is there an underestimation of the human factor?
If yes, what can we do about it?

Improve Data Quality with Automatic Data Quality Assurance on the Mainframe

Implementing our software greatly reduces the number of erroneous transactions: it integrates deeply with SAP processes and automatically checks whether data is correct. In order to reconcile jobs in SAP and distributed server landscapes, our quality management software _beta check|z closely integrates with the existing IBM workload scheduler environment. This makes it possible to transfer information from SAP to the host for comparison against relevant target values. This open design is a key strength of our software, as it allows for checking data quality on the mainframe (and in distributed systems, of course).
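To make the idea of comparison against target values concrete, here is a minimal sketch of that kind of reconciliation check. None of these names come from _beta check|z or the IBM workload scheduler; the function, keys and values are purely illustrative assumptions.

```python
# Hypothetical sketch of a target-value reconciliation check.
# All names and values here are illustrative, not product APIs.

def reconcile(job_results: dict, targets: dict, tolerance: float = 0.0) -> list:
    """Compare actual job results against expected target values.

    Returns a list of (key, actual, expected) tuples for every value
    that is missing or deviates from its target by more than the
    allowed tolerance.
    """
    deviations = []
    for key, expected in targets.items():
        actual = job_results.get(key)
        if actual is None or abs(actual - expected) > tolerance:
            deviations.append((key, actual, expected))
    return deviations

# Example: a run that selected 20,000 recipients when the target was 20.
results = {"recipients_selected": 20000, "total_amount": 48500.00}
targets = {"recipients_selected": 20, "total_amount": 48500.00}
print(reconcile(results, targets))  # flags the recipient count
```

A check like this would run automatically after each job, so a deviation such as the 20,000-instead-of-20 case is caught before downstream payments go out.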

To sum it up: _beta check|z offers a standardized method that enables customers to automatically verify all data and logs, regardless of whether they are being processed centrally under z/OS or on distributed platforms. The product thus delivers a single point of control.
Have a nice day,
Lars Danielsson
