A tale of application rationalization (not)

Don’t you just love buzzwords and silver bullets? My first post on InfoWorld was about the cloud as a silver bullet. And Scott Adams of Dilbert captures the painful realities as well: see this strip. Now, cloud is a real phenomenon (even if it isn’t a silver bullet), just another step in the constantly growing layers of complexity in our landscapes. And the platform concept that Scott makes fun of is real too, and actually rather useful.

Given the constant confusion about the meaning (or use) of our terms, we should look at a definition of platform first, because the word ‘platform’ casts a very wide net.

Apple’s iPhone/iPod/iPad/iTunes/App Store/etc., for instance, is a platform in an economic sense, and it includes more than iPhone/iPod/iPad (the hardware platform) or iOS (the software platform). In the most general sense, a platform supports something. In Dutch, for instance, a drill rig is called a drill platform. In politics, a party platform helps politicians get their message across and increase their influence. A theatre stage is a platform that supports many different acts and shows.

[Image: Eaton’s Auditorium, 1945]

Today, I’d like to talk about the technical idea of a platform. Here, Wikipedia’s definition of a computing platform may be useful:

A computing platform is, in the most general sense, whatever pre-existing environment a piece of computer software or code object is designed to run within, obeying its constraints, and making use of its facilities. The term computing platform can refer to different abstraction levels, including a certain hardware architecture, an operating system (OS), and runtime libraries. In total, it can be described as the stage on which computer programs can run.

In short: in computing, a platform is something in which programs can run. Good. Now, before I return to it, I’d like to tell a small story about application rationalization.

There once was a company that had an accounting system. In that accounting system, they counted. They counted the money in their many bank accounts. They counted the stock they held in their warehouses. And when a transaction took place (e.g., something was bought or sold), the transaction system would signal the accounting system.

In theory, the accounting system held a perfect record of the company’s assets. But as we have learned recently, it is silly to blindly trust banks, right? Sometimes things go wrong, and we need to be certain that the bank actually agrees with how much money or how many assets we have. So, as is standard policy in many industries, the company had set up a number of reconciliations, comparing reports from other parties’ administrations with its own records. For instance, every day the banks would send electronic records of the actual transactions and the balances of all those bank accounts. And the warehouses would send daily overviews of what was in them. These reports were checked against what the accounting system said they should be. And if there were differences (called “breaks” in reconciliation jargon), these had to be resolved by the employees one way or another. Who is right? Who is wrong?

Now, if you have thousands of bank accounts (some companies do) or many tens of thousands of assets (many companies do), you don’t do that by hand. So, the company had a whole set of computer programs that took data from the accounting system and data from the other sources and created reconciliations from those.
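The core of such a reconciliation is simple to sketch. Here is a minimal illustration in Python; the account names, balances and tolerance are invented for the example, and the real programs of course handled transaction-level matching, file formats and volumes far beyond this:

```python
# Minimal sketch of a balance reconciliation: compare the accounting
# system's view of each bank account with the balances the bank reports,
# and flag any difference as a "break". All figures are illustrative.

def reconcile(ledger: dict[str, float], statement: dict[str, float],
              tolerance: float = 0.005) -> dict[str, tuple[float, float]]:
    """Return the breaks: account -> (ledger balance, statement balance)."""
    breaks = {}
    # Look at every account either side knows about.
    for account in ledger.keys() | statement.keys():
        ours = ledger.get(account, 0.0)
        theirs = statement.get(account, 0.0)
        if abs(ours - theirs) > tolerance:
            breaks[account] = (ours, theirs)
    return breaks

ledger = {"NL01BANK0123": 10_000.00, "NL02BANK0456": 250.50}
statement = {"NL01BANK0123": 10_000.00, "NL02BANK0456": 240.50}

print(reconcile(ledger, statement))
# {'NL02BANK0456': (250.5, 240.5)}
```

Each of the company’s five programs was, in essence, a variation on this theme, differing in data sources, matching rules and what counts as a break.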

Some of these programs were written in a programming language and ran on Windows computers. Let’s say there were five programs, one for each reconciliation. The programs shared some code in a common library, but they were five different programs. Surrounding this was a scheduling setup to get the data to the programs and a way to present the results to the recon team.

This landscape was a thorn in the CIO’s side and ripe for much-needed simplification. Instead of these five programs, he wanted to buy a single off-the-shelf reconciliation program. Such a program was available: “SuperRecon.” No more costly, risky and slow software development and maintenance, just use something that can be bought, following the architecture principle “Re-use before buy before build.” What the CIO wanted to execute was an application rationalization: buy one new program (SuperRecon) and drop five. Fewer applications, less development and maintenance, less complexity (the holy grail of many architects).

So, the SuperRecon program was bought and deployed, and the employees started to configure it to handle these five reconciliations. For this, they needed to configure SuperRecon’s input mechanism to recognize the input files and translate these into SuperRecon’s internal representation. And they needed to configure the actual reconciliations between the two sets of data that had been loaded into the program. For the first task, SuperRecon came with a nifty language to define inputs and transformations. For the second task, there was a GUI in which flows and comparison rules (what constitutes a “break”) could be created.

Some employees had to learn this ‘nifty language’, which consisted of writing declarations and matching rules. And they had to write the rules for flows and breaks. For all of this, they had to receive training from the vendor. And it all turned out to be a rather complicated, costly and slow transition.
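Since SuperRecon is a stand-in for any such product, any concrete syntax here is invented; but a small sketch shows why this kind of configuration is programming in all but name. An input declaration maps raw columns onto internal fields, and a matching rule decides when two records agree:

```python
# Hypothetical sketch (SuperRecon is fictional, so this shape is invented):
# an "input declaration" maps raw file columns onto internal field names,
# and a "matching rule" says when two records count as the same item.
# Writing either one is programming, whatever the vendor calls it.

input_declaration = {            # column index -> internal field name
    0: "account", 1: "amount", 2: "booking_date",
}

def load(rows: list[list[str]]) -> list[dict]:
    """Apply the declaration: turn raw CSV-like rows into records."""
    return [{field: row[col] for col, field in input_declaration.items()}
            for row in rows]

def matches(a: dict, b: dict) -> bool:
    """A matching rule: same account and amount on the same date."""
    return all(a[f] == b[f] for f in ("account", "amount", "booking_date"))

bank = load([["NL01", "100.00", "2014-05-01"]])
ledger = load([["NL01", "100.00", "2014-05-01"]])
print(matches(bank[0], ledger[0]))  # True
```

A vendor’s GUI or declaration language hides the loops and conditionals, but the declarations and rules the employees wrote are exactly this kind of logic.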

Sounds familiar? Yes, it does, doesn’t it? Both activities are in fact a form of programming, which the employees had to learn. So, what had happened was that the company had not bought an application at all. Install the SuperRecon application, start it up, and it does nothing for your business. It just sits there. That is because what they had bought was a platform. And within that platform they had written five applications that actually did something useful for the business. They had also effectively added two programming languages to the list of languages they needed to know in their company. And they still had five applications. The idea that a rationalization took place is, in this case, an illusion.

It is important that we take the platform concept seriously. While operating systems are platforms, not all platforms are operating systems. Software engineers are all familiar with the old saying that “every application tends to evolve into an operating system”; these days we should say that “every application tends to evolve into a platform”. The reason is that we also automate applications, and this automation is a form of programming.

All of this raises a question: is application rationalization real? Well, in the case of standardizing on one of four different applications that are used to calculate travel costs, it is. But when you are talking about applications for specific business processes, it often isn’t. The same IT functionality, from a business perspective, is simply re-created on another platform. The rationalization projects turn out to be cumbersome and expensive, mostly because you are redeveloping application functionality in support of (or automatically executing) enterprise-specific processes. This is why your vision for the future should probably show a ‘potential increase in the number of development environments to support’ (ironically, partly driven by this very rationalization), and a main driver for that is the platformization of our application landscape.

That doesn’t mean migrating to SuperRecon in our story is useless. Not at all: SuperRecon comes with all kinds of specialized support for reconciliation. It comes with access control, logging, audit trails and so forth. All of this you would have to program yourself from scratch if you just wrote .NET programs for Windows or Java programs for Linux. And the SuperRecon platform comes with a couple of applications too, such as the ‘development environment’ for creating new reconciliations, a monitoring system, a dashboard, or maybe workflow support. SuperRecon is a ‘rich’ environment for setting up reconciliations. But it is useful to remember that there are still five applications, which now require knowledge of two more specialist development environments, nifty languages and so forth.

Most large systems we buy these days are, from an architectural perspective, in fact a mix of a platform and some specialized applications. If you model these in the enterprise architecture language ArchiMate, for instance, you can model the platform separately from the application aspects, which makes it clearer what you are dealing with. An EA modelling language can be a real improvement over ad-hoc drawing of boxes, lines and arrows, and lifting the veil of confusion from application rationalization is one example (it does depend on you actually understanding that modelling language, though; a language is also not a silver bullet, especially if you do not speak it well). Besides, though I like many aspects of ArchiMate, it needs to catch up here a bit (which I expect it will do in its upcoming new version).

By the way, platform is also a good concept to use for other parts of your IT landscape. For instance, while many EA pictures may show a box titled ‘Excel’, this often stands for a large collection of applications written for the Excel platform. The (often complex) spreadsheets are the applications; Excel itself is a combination of a platform to run them in and a development application to create them with. Those unaccounted-for Excel spreadsheets are often a pain in the neck from the architectural perspective. Modelling them as applications where appropriate helps to make the real problem clear and helps to separate the key Excel applications from the rest (forms, documents, unimportant end-user computing). That way, a good modelling practice supports a good enterprise architecture practice.

This post was first published on InfoWorld.com.