With RPA, It’s Not Just “Point A to Point B”

Imagine you decide one day to go shopping for a new ride. You do a quick Web search for what’s available, what’s in your price range, who’s selling what in your area, and so forth. However, something strange occurs. No matter where you go and no matter how you word your search, you keep coming up with a silver-colored, base-level, four-door sedan — mind you, it’s the same basic silver car, over and over. Every dealer’s website emphasizes that you don’t need different vehicle choices because it doesn’t matter what you get: all you want to do is get from Point A to Point B.

You say to yourself, “This can’t be right. What if I need more room to carry my family and friends around? What if I want something with more horsepower? What if I want a different color? What if I want some special features?”

And you’ve got it: This can’t be right. Yet that’s precisely how some vendors want you to think about robotic process automation (RPA). They want you to think it’s pretty much a commodity now, that one product is about the same as another, and that it doesn’t matter all that much which one you get. “Additional features? Bah! You don’t need those. Ability to handle complex tasks? Pfft! That’s not what RPA does! Server-based connectivity with mainframes? Ridiculous! It doesn’t matter.”

Well . . . yes, maybe you do; yes, it does; and yes, it does.

As much as some of the vendors out there may want to pretend that there’s no difference among the offerings, it just isn’t so. We’ll set that straight in a moment.

 

RPA: More Than Just a Silver Sedan

First, let’s start at the beginning. What exactly is RPA?

In its simplest form, RPA means automating human-executed processes with no changes to the existing infrastructure or applications. Most RPA vendors interpret this to mean a software robot running on a desktop and interacting with the same user interfaces humans use. Moreover, the general perception seems to be that there are only two types of automation: RPA and everything else.

In this mindset, RPA is billed as the easy entry-level product that quickly automates mundane tasks to free up users for more important and interesting work; so, naturally, many people consider all RPA products equal when it comes to automating processes. Choosing a product becomes a feature-checklist exercise: anything that counts as RPA — by this interpretation, something that runs on a desktop and interacts with the user interface — gets considered. It’s silver sedans all the way down.

(OK, sorry; wrong metaphor.)

There is one big problem with this: not everybody wants or needs a silver sedan! There are many different types of processes that can be automated. Here are two big mistakes organizations make:

  • They lump all RPA products into the same bucket and look at only the feature list.
  • They consider only mundane, simple tasks/processes for RPA automation.

Companies, Evaluate Your Processes

When considering automation opportunities (as we’ve discussed in earlier blog posts here), companies should first look at all processes, from the complex to the simple (“mundane”), and then find the right tool — and they shouldn’t look at just the tools that fit the simplistic definition of RPA.

There is a continuum of process complexity that ranges from simple through moderate to complex. Traditional RPA tools can handle simple, and some moderate, processes, while they struggle with some moderate and most complex processes. For these reasons, your automation strategy should include several tools, one for each complexity level. Many companies already choose at least a couple of products for their standard RPA solution. But these choices typically are based on either the best two feature sets (the classic “look-for-the-most-bullets-in-the-feature-list” approach) or one product for unattended automation and one for attended (also called assisted) automation.

Some families need multiple vehicles: a nice-sized crossover to carry everybody around, an econo-car for workday commuting, and maybe even a sporty car for one or more teenagers. One silver sedan just won’t cut it. And, when it comes to RPA, one product isn’t enough. What you really should have is one RPA tool for attended automation and two or three others for unattended automation. The reason to have only one attended solution is that the users who interact with it can be confused by having to learn multiple products. Find the best-of-breed product and evaluate it thoroughly with end users to make sure it meets their needs. For attended automation, the users are key to your success, so choose a product that is easy to use, yet powerful enough to handle their needs.

The reason you need more than one unattended solution is that, again, the same silver sedan can’t fit everyone’s needs. One RPA product might not be able to automate some of the more complex processes successfully, and another might not be cost-effective for simple processes. Look at the problems you’re trying to solve and choose the appropriate solution for each one.

Simple to Moderate Processes

Simple to moderate processes are those processes that have simple logic and few decision points. For those, look at standard desktop-based RPA tools. You can find a list of the usual suspects by searching on the Web; but don’t choose one just because it has the highest rating from Gartner or Forrester or another analyst you may prefer. Instead, look at the strengths and weaknesses of each tool and compare that data to your actual requirements.

Desktop-based RPA tools should be used to automate mundane, simple tasks. These products are designed to sit on a desktop and automate processes exactly as a human would. They are perfect for simple processes such as data entry, but may not be appropriate for complex processes that require advanced business logic or have many decision points. Many of these products also provide a server that lets you monitor the desktops and robots and scale dynamically when needed. Still, don’t be fooled by the presence of a server here: in most cases, the server contains no business logic and doesn’t allow robots to share information. This is not server-based RPA.

Moderate to Complex Processes

For automating moderate to complex processes, you should look for solutions that specifically state that they can automate complex processes. These solutions are usually server-based, as opposed to desktop-based, and usually have some proprietary technology that allows them to access applications more efficiently and accurately, and with high scalability and availability.

These systems are designed to handle complex logic and usually provide more than just desktop access. For example, access to Web-based applications might be accomplished through a server-based Web browser allowing higher scalability, efficiency and accuracy. Another example is mainframe “green-screen” access. A desktop-based RPA product interacts with mainframes through a desktop-based terminal emulator via either EHLLAPI or screen-scraping, both of which present major challenges unless access to the mainframe is a very small part of the overall picture.

What to Look For in RPA

Here are some guidelines that can help you choose the right solution:

  • How long does it take an end user to learn the process?
    – Hours to a few days = simple process.
    – Days to a week = moderate process.
    – More than a week = complex process.
  • How large is the daily inventory of work and how many humans does it take to complete it?
    – Requires 10 or fewer workers = low-volume process.
    – Requires 50 or fewer workers = medium-volume process.
    – Requires more than 50 workers = high-volume process.
  • What is the impact to the business if the process isn’t executed properly?
    – Minimal impact = low.
    – Fines that could hurt the bottom line = moderate.
    – Fines and interest that add up to millions = high.
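The guidelines above can be turned into a small triage helper. This is a minimal sketch in Python; the thresholds simply mirror the bullet lists, and the function and category names are illustrative, not part of any product.

```python
# Triage sketch based on the three guideline questions above.
# Thresholds are assumptions drawn directly from the bullet lists.

def complexity(training_days: float) -> str:
    """Classify by how long an end user needs to learn the process."""
    if training_days <= 3:       # hours to a few days
        return "simple"
    if training_days <= 7:       # days to a week
        return "moderate"
    return "complex"             # more than a week

def volume(workers_needed: int) -> str:
    """Classify by the headcount required to clear the daily inventory."""
    if workers_needed <= 10:
        return "low"
    if workers_needed <= 50:
        return "medium"
    return "high"

def impact(annual_fine_exposure: float) -> str:
    """Classify business impact by the cost of executing the process badly."""
    if annual_fine_exposure == 0:
        return "low"
    if annual_fine_exposure < 1_000_000:
        return "moderate"
    return "high"                # fines and interest in the millions

def triage(training_days, workers_needed, fine_exposure):
    return {
        "complexity": complexity(training_days),
        "volume": volume(workers_needed),
        "impact": impact(fine_exposure),
    }
```

A high-complexity, high-volume, high-impact result is a hint that a desktop-based tool may not be the right fit.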

The Bottom Line: Where the Rubber Meets the Road

Remember, basic silver sedans won’t serve everyone when it comes to RPA. Even though the desktop RPA vendors have managed to make a name for themselves, there are better RPA solutions available for high-volume, high-value, complex processes. You don’t need a complex, BPM-like solution to automate these processes. RPA technology will work — so long as the architecture of the solution is designed to handle it.

Mainframe Modernization: Start With the User Interface

What is mainframe modernization? It depends on whom you ask.

The mainframe is an interesting beast. In fact, just as the term mainframe modernization means different things to different people, so does the term mainframe by itself. The core of the mainframe includes the operating system, applications, and access interfaces.

Let’s take a standard mainframe environment. The mainframe is running CICS or IMS. Users access the applications using a terminal emulator, also referred to as a “green screen.” Ask a system architect, and mainframe modernization means converting the back-end applications to something more modern. Ask an end user, and it means an easy-to-use graphical interface. Those are two completely different visions.

Most large organizations’ IT departments take the architect’s view — modernizing the back-end mainframe. This could consist of converting COBOL code to Java, or running COBOL applications under a Linux environment, or converting the database from IMS to DB2 to make it easier to integrate with external applications. All of this takes time, possibly years, and lots of money. So, you have to ask the question: “What’s the goal?”

Hide in Plain Sight

If the goal is to reduce costs, you should look at the back end to find ways to modernize and reduce MIPS costs. However, if your goal is to improve productivity, usability, and overall cost of operations, you should consider updating the front end (user interface) first. To do that, you need a layer of abstraction. That’s a software layer that understands how to communicate with two different layers while hiding the layers from each other. As an example, you could have a modern Web-based user interface that communicates with legacy “green-screen” applications. The UI doesn’t know it’s communicating with “green screens,” and the “green screens” don’t know they’re being converted to a Web interface.
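The abstraction idea can be sketched in a few lines of Python. The web layer codes against a small service interface and never learns whether the implementation behind it drives legacy green screens or a modernized back end. All class names, screen names, and field values here are hypothetical, chosen only to illustrate the pattern.

```python
# Sketch of a layer of abstraction: the UI sees only AccountService.
from abc import ABC, abstractmethod

class AccountService(ABC):
    """What the new UI codes against. No hint of mainframes anywhere."""
    @abstractmethod
    def get_balance(self, account_id: str) -> float: ...

class GreenScreenAccountService(AccountService):
    """Implementation that drives the legacy terminal screens."""
    def get_balance(self, account_id: str) -> float:
        # Navigate to the inquiry screen, key the account number,
        # read the balance field from the returned screen buffer.
        screen = self._navigate("ACCT-INQ", account_id)
        return float(screen["balance"])

    def _navigate(self, screen_name: str, account_id: str) -> dict:
        # Stub standing in for real terminal interaction.
        return {"balance": "125.50"}

class ModernAccountService(AccountService):
    """Drop-in replacement once the back end is modernized."""
    def get_balance(self, account_id: str) -> float:
        return 125.50  # would query the new database or API instead
```

Swapping `GreenScreenAccountService` for `ModernAccountService` later requires no change to the UI — which is the whole point of the layer.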

Here’s how you might implement this solution. You want to first design your user interface without regard for the mainframe’s communications protocol or architecture. Simply design your user interface using standard UX practices. Now, you need a layer of abstraction — a layer that knows how to talk to the mainframe, using the standard native protocol, and communicate with the new user interface. If you decide, as most companies do, to create a Web-based user interface, you would use the standard Web architecture used by all your internal Web applications. Maybe you use J2EE or Microsoft, or maybe your standard is Struts or Ruby on Rails. Whatever it is, just design your Web application the same way you would if it were using a standard relational database.

Can We Talk, Mainframe?

Now you need a way to communicate with the mainframe “green screens.” There are several options available. OpenConnect has one called ConnectiQ. It’s a server-based platform that communicates directly with the mainframe, using the TN3270 protocol. It consumes the raw 3270 packets and interprets them in memory. It then converts that information into standard Web services that can be called and consumed by your Web application. The user interface doesn’t know it’s talking to an old legacy mainframe application, and the mainframe application doesn’t know it’s communicating with a modern Web-based user interface. In other words, you have a layer of abstraction; get it? In this case, the layer of abstraction is ConnectiQ. The Web application simply communicates with ConnectiQ, using standard Web services.

Why is the layer of abstraction so important? Because, once you have the new UI in place, you can start working on modernizing the back-end applications, databases, and network interfaces. Again, that could take years, but you already have a nice, intuitive, easy-to-use, easy-to-learn user interface which should be saving money; so maybe the urgency isn’t quite as great.

The key is: once the back end is modernized, you simply change your layer of abstraction so it now communicates with the new back end. Users don’t see any changes — other than, perhaps, better performance. In the example above, you would remove ConnectiQ and use different Web service calls in your Web applications. This approach also allows you to modernize a little at a time. You can move an application, or function, at a time and have a mixture of Web service calls to ConnectiQ and the new mainframe interface. Again, the users don’t know the difference. They’re just using a nice, new, modern interface.
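The “move a function at a time” idea can be illustrated with a routing table that decides, per operation, whether a call goes through the legacy layer or the modernized back end. This is an illustrative pattern only — not the actual ConnectiQ mechanism — and every name in it is made up.

```python
# Per-operation routing between a legacy layer and a modernized back end.
LEGACY, MODERN = "legacy", "modern"

# Migration state: flip an entry when that function has been modernized.
ROUTES = {
    "get_balance": MODERN,    # already migrated
    "post_payment": LEGACY,   # still on green screens
}

class LegacyLayer:
    """Stub for calls that go through the green-screen abstraction."""
    def get_balance(self, acct): return "legacy:" + acct
    def post_payment(self, acct): return "legacy-posted:" + acct

class ModernLayer:
    """Stub for calls that hit the new back end directly."""
    def get_balance(self, acct): return "modern:" + acct
    def post_payment(self, acct): return "modern-posted:" + acct

def dispatch(operation, legacy_impl, modern_impl, *args):
    """Route one call; the UI never knows which side handled it."""
    impl = modern_impl if ROUTES.get(operation) == MODERN else legacy_impl
    return getattr(impl, operation)(*args)
```

As functions are modernized, only the routing table changes; the user interface code stays exactly the same.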

The benefits of this approach are numerous — including improved user productivity, simpler and faster training to bring people onboard more quickly, better and more timely customer support, and access from any Web browser so work-from-home users don’t need any special software.

Macros, Schmacros

Let’s also talk about a very serious problem related to mainframe terminal emulators: macros!

Almost all terminal emulators provide a macro interface that allows a user to press a “Record” key and record a set of keystrokes. She can then play back the recording any time. For example, a common problem is copying data from one screen onto another one. The macro would be recorded to navigate to the first screen, copy specific fields, navigate back to the original screen, and paste the data into that screen.

This option saves a lot of time, and users can create their own macros. However, that’s actually the problem. You end up with hundreds or even thousands of user macros running. Most aren’t written correctly. Macros are not “intelligent,” and have no way to know if they’re working correctly. If a user hits the “Play” button while on the wrong screen, the macro is still going to try to run. In some situations, this can actually cause harm.
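The risk is easy to see in a sketch: a classic emulator macro replays keystrokes blindly, while a centralized automation can refuse to run unless the expected screen is actually showing. The session object, keys, and screen names below are hypothetical stand-ins for a real emulator API.

```python
# Hypothetical session object standing in for a terminal-emulator API.
class FakeSession:
    def __init__(self, screen):
        self.screen = screen
        self.sent = []

    def current_screen(self):
        return self.screen

    def send(self, key):
        self.sent.append(key)

def play_macro_unsafe(session, keystrokes):
    """What a classic recorded macro does: replay blindly, wherever we are."""
    for key in keystrokes:
        session.send(key)  # no check of which screen is actually showing

def play_macro_safe(session, keystrokes, expected_screen):
    """What a centralized, validated automation can do instead."""
    if session.current_screen() != expected_screen:
        raise RuntimeError(
            f"refusing to play: expected screen {expected_screen!r}, "
            f"got {session.current_screen()!r}"
        )
    for key in keystrokes:
        session.send(key)
```

The unsafe version happily types into whatever screen is open; the safe version fails fast, which is exactly the kind of control a centralized UI gives you.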

So, why am I talking about macros in a mainframe modernization blog post? Simple: if you modernize your user interface, you can provide the same benefits that macros provide without the risk of users creating their own, possibly harmful macros. You now have control.

Users will ask for changes that help them be more productive. That’s great, and now you have a centralized way to provide those changes. The best part is that all users benefit, not just the one person who knew how to create a macro.

Front First, Back Last

Here’s the bottom line. Please your users first to improve quality, reduce costs, and improve employee satisfaction. Then look for ways to modernize your back end. But remember, you don’t necessarily have to modernize the back end at all once the front end is done. If users are more productive and operating costs are down, you may already have achieved the results you sought. You also already know how to maintain the existing back end, so why introduce new technology, processes, and procedures if you don’t have to?

Mainframes and the Desktop: Bridging the Gap

There’s a perception among the general public, not to mention a surprising number of tech journalists, that mainframes belong in a long-ago yesterday.

In fact, the truth is quite different. Mainframes are still highly relevant, and will stay that way well into the foreseeable future.

Mainframes Stay Mainstream

The funny thing is that today’s computing reality actually needs mainframes at least as much as yesterday’s did — probably more! For example, smartphone apps typically consult distant mainframes for whatever information you’re seeking. Moreover, numerous enterprises not only have decades’ worth of data on their mainframe, but also are adding to that storage every minute.

So, if mainframes will stick around, how can they most readily integrate with everything else? What happens when ordinary PCs in today’s Web-enabled workplace must “talk” to the office mainframe and work with its data?

One answer is terminal emulation software. As the name implies, it acts like a mainframe’s classic terminal interface. This lets you access the mainframe’s applications and data via your PC, just as if you were sitting at a mainframe terminal.

It Matters Where It Lives

So far, so good; you can view, and use, mainframe apps and data from the comfort of your everyday PC. However, depending on the specific emulation product, deployment can be a big, hairy deal for your IT department. It also can be a big, hairy, expensive deal for your CFO.

Here are some of the potential problems:

  • PC-by-PC — You’ve probably heard the expression, “It’s like being nibbled to death by a duck.” That pretty much sums up how your IT department feels when it has to install and support a given application on a desktop-by-desktop, PC-by-PC basis. It’s even more problematic in today’s telecommuting-driven reality, when many people work entirely from home. So, if the chosen terminal emulation product is installed and updated one-by-one, that’s plenty of added work for IT.
  • Managing macros — A similar drawback lies in the common need for macros — programmed keystrokes that perform recorded sequences of commands. Macros are particularly useful with terminal emulation. But if every user has their own unique macros, that’s still another thing IT must try to manage. (Imagine if a user forgets how a macro works or loses the cheat-sheet for it!)
  • Licensing costs — Enterprise software is usually licensed by the number of users. Some terminal emulation products require you to buy enough licenses to cover every desktop that might use the product, no matter how many actually will. That wastes a lot of money over time. And, just as with PC-by-PC software, it’s more difficult for IT to set up and maintain.

A Better Way to Get There

With all these potential flies in the ointment, then, what’s a better way to access all the goodies on the mainframe? We feel your best bet is to use an emulation software product that:

  • Is server-based — Rather than the “duck-nibbling” desktop-by-desktop approach, your emulation software choice should live on a server that the terminal users can access. That provides centralized management which greatly simplifies things for IT, both when it’s time to install the software and when it needs updates.
  • Is browser-based — With the terminal emulation process occurring on a server, your company can then give each user access to the mainframe through a Web browser. (Of course, it must work well with the browser of choice. We’ll address that shortly.) This simplifies interaction with the mainframe. It also can facilitate access from anywhere — usually through a virtual private network (VPN) — and via a wide choice of devices, including compact tablets.
  • Has centralized macros — The server-based approach has another benefit: it means the macros also live in one place, accessible by all users. It’s much easier for IT when each macro means the same thing regardless of who’s using it.
  • Has concurrent user licensing — With concurrent user licensing, you pay for only the number of simultaneous users you’ll actually have. Moreover, the licensing “cares” about only how many users, not which users, are accessing the mainframe. If you’ve paid for 25 licenses, it doesn’t matter who those 25 simultaneous users are. This gives you much more flexibility regarding access to the mainframe.
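The concurrent model is easy to illustrate: the server tracks only how many sessions are open at once, not which machines opened them. A toy sketch, with a made-up `LicensePool` class:

```python
# Toy model of concurrent user licensing: seats limit simultaneous
# sessions, and any user can take a freed seat.
class LicensePool:
    def __init__(self, seats: int):
        self.seats = seats
        self.active = set()   # users currently holding a seat

    def connect(self, user: str):
        if user not in self.active and len(self.active) >= self.seats:
            raise RuntimeError("all concurrent licenses in use")
        self.active.add(user)

    def disconnect(self, user: str):
        self.active.discard(user)  # frees the seat for anyone else
```

Contrast this with per-desktop licensing, where every machine that *might* connect needs its own seat whether or not it’s ever used.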

Some Additional Recommendations

We also highly recommend that you select terminal emulation software with these additional advantages:

  • SSO — Why have to enter a user ID and password for each separate mainframe application you access? Single sign-on (SSO) lets you authenticate just once per session to get into all applicable mainframe applications.
  • End-to-end security — The mountains of data on your mainframe require the highest security. Your terminal emulation must maintain that security, end-to-end, in four levels: application, session, transport, and host.
  • Various client options — Enterprises’ IT departments often must limit their users’ choices of Web browsers and versions thereof. For that reason, your browser-based terminal emulation software should come in a variety of clients. If so, your emulation software will be compatible with what you use. It won’t matter whether you’re on a Java®-supporting legacy platform or the latest-and-greatest, HTML5-savvy browser and version.
  • Portal integration compatibility — Your IT team may want to integrate the emulation software with a custom mainframe portal. In that case, the software should be compatible with industry standards such as EHLLAPI and, for environments using Java, JHLLAPI.

Want more information about these and other benefits of this approach to mainframe terminal emulation? Visit our website to learn about our WebConnect product. WebConnect is among many reasons why mainframes will remain very much alive and relevant for quite some time to come.

Java is a registered trademark of Oracle Corporation.