Issues and Challenges Facing Legacy Systems
Researched and Modified by: Jonathan Arvin Adolfo
Maintaining and upgrading legacy systems is one of the most difficult challenges CIOs face today. Constant technological change often weakens the business value of legacy systems, which have been developed over the years through huge investments. CIOs struggle with the problem of modernizing these systems while keeping their functionality intact. Despite their obsolescence, legacy systems continue to provide a competitive advantage by supporting unique business processes and containing invaluable knowledge and historical data.
Despite the availability of more cost-effective technology, about 80% of IT systems are running on legacy platforms. International Data Corp. estimates that 200 billion lines of legacy code are still in use today on more than 10,000 large mainframe sites. The difficulty in accessing legacy applications is reflected in a December 2001 study by the Hurwitz Group that found only 10% of enterprises have fully integrated their most mission-critical business processes.
Driving the need for change is the cost versus the business value of legacy systems, which according to some industry polls consume as much as 85-90% of an IT budget for operation and maintenance. Monolithic legacy architectures are the antithesis of modern distributed and layered architectures. Legacy systems execute business policies and decisions that are hardwired by rigid, predefined process flows, making integration with customer relationship management (CRM) software and Internet-based business applications torturous and sometimes impossible. In addition, IT departments find it increasingly difficult to hire developers qualified to work on applications written in languages no longer used in modern development.
Several options exist for modernizing legacy systems, defined as any monolithic information system that's too difficult and expensive to modify to meet new and constantly changing business requirements. Techniques range from quick fixes such as screen scraping and legacy wrapping to permanent, but more complex, solutions such as automated migration or replacing the system with a packaged product.
A Brief History
Debate on legacy modernization can be traced back more than a decade, to when reengineering experts argued over whether it was best to migrate a large, mission-critical information system piecemeal or all at once.
Rewriting a legacy system from scratch can create a functionally equivalent information system based on modern software techniques and hardware. But the high failure rate of large software projects makes success far from assured. Researchers from the pioneering 1991 DARWIN project at the University of California, Berkeley, listed several factors working against the so-called "Cold Turkey" approach:
- Management rarely approves a major expenditure if the only result is lower maintenance costs, instead of additional business functionality.
- Development of such massive systems takes years, so business processes not anticipated at the outset will have to be added to keep pace with the changing business climate, increasing the risk of failure.
- Documentation for the old system is frequently inadequate.
- Like most large projects, the development process will take longer than planned, testing management's patience.
- And finally, there's a tendency for large projects to end up costing much more than anticipated.
DARWIN advocated the incremental approach, popularly referred to as "Chicken Little," because it split a large project into manageable pieces. An organization could focus on reaching specific milestones throughout the long-term project, and management could see progress as each piece was deployed on the target system. Industry experts challenged this model several years later, arguing that the need for legacy and target systems to interoperate via data gateways during the migration added complexity to an already complex process, and that the gateways themselves were a significant technical challenge to build.
Many migration projects failed because of the lack of mature automated migration tools to ease the complexity and technical challenges. That started to change in the mid-1990s with the availability of tools from companies such as Anubex, ArtinSoft, FreeSoft, and Relativity Technologies. These tools not only convert legacy code into modern languages but, in doing so, also provide access to an array of commercially available components that provide sophisticated functionality and reduce development costs. They help break up a legacy system's business knowledge into components accessible through modern industry-standard protocols. A component, in this context, is a collection of objects that performs specific business services and has clearly defined application programming interfaces (APIs).
Choosing A Modernization Approach
The Internet is often the driving force behind legacy modernization today. The Web can save an organization time and money by delivering to customers and partners business processes and information locked within a legacy system. The approach used in accessing back-office functionality will depend on how much of the system needs to be Internet-enabled.
Screen scrapers, often called "frontware," are an option when the intent is to deliver Web access on the current legacy platform. These non-intrusive tools add a graphical user interface to character-based mainframe and minicomputer applications. Screen scrapers run on a personal computer, which acts as a terminal to the mainframe or mini via 3270 or 5250 emulation. Popular screen scrapers include Star, Flashpoint, Mozart, and ESL. This technique provides Internet access to legacy applications without making any changes to the underlying platform. Because they're non-intrusive, screen scrapers can be deployed in days and sometimes hours. However, scalability can be an issue because most legacy systems cannot handle nearly as many users as modern Internet-based platforms.
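To make the technique concrete, the sketch below treats a 24-by-80 character screen as the raw material and lifts named values out of fixed positions. The screen layout, field positions, and values are invented, and a real frontware product would obtain the buffer through 3270/5250 emulation rather than a hard-coded array.

```java
import java.util.Arrays;

// A minimal screen-scraping sketch. The screen layout, field positions,
// and values are invented; a real frontware product would obtain this
// buffer through 3270/5250 emulation rather than a hard-coded array.
public class ScreenScraper {

    // A 3270-style screen is a fixed 24x80 character grid; fields sit at
    // known row/column offsets defined by the legacy application's maps.
    static String field(String[] screen, int row, int col, int len) {
        return screen[row].substring(col, col + len).trim();
    }

    static String pad(String s) {
        return (s + " ".repeat(80)).substring(0, 80);
    }

    public static void main(String[] args) {
        String[] screen = new String[24];
        Arrays.fill(screen, " ".repeat(80));
        screen[2] = pad("  CUSTOMER:  ACME FREIGHT CO            BALANCE:  001234.56");
        screen[3] = pad("  ACCOUNT:   10-4477-2                  STATUS:   ACTIVE");

        // "Scrape" the character grid into named values that a Web tier
        // could present behind a graphical interface.
        System.out.println("customer = " + field(screen, 2, 13, 25));
        System.out.println("account  = " + field(screen, 3, 13, 12));
        System.out.println("balance  = " + field(screen, 2, 49, 10));
    }
}
```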
Legacy wrapping is a second non-intrusive approach. The technique builds callable APIs around legacy transactions, providing an integration point with other systems. Wrapping does not provide a way to fundamentally change the hardwired structure of the legacy system, but it is often used as an integration method with Enterprise Application Integration (EAI) frameworks provided by companies such as SeeBeyond Technology, Tibco, Vitria, and WebMethods.
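The sketch below illustrates the idea under assumed names: the transaction code, the fixed-width record layout, and the stubbed host call are all hypothetical, but the shape is what wrapping produces, a callable method in front of a legacy transaction.

```java
import java.nio.charset.StandardCharsets;

// A minimal wrapping sketch. The transaction code "BAL1", the fixed-width
// record layout, and the stubbed invoke() are hypothetical; a real wrapper
// would reach the host through a vendor-supplied connector instead.
public class LegacyAccountService {

    /** The callable API that other systems integrate against. */
    public double getBalance(String accountId) {
        // Legacy transactions typically exchange fixed-width records.
        String request = String.format("%-4s%-12s", "BAL1", accountId);
        String reply = invoke(request.getBytes(StandardCharsets.US_ASCII));
        return Double.parseDouble(reply.trim());
    }

    // Stub standing in for the real host call; returns a canned reply so
    // the sketch runs without a mainframe.
    private String invoke(byte[] commarea) {
        return "0001234.56";
    }

    public static void main(String[] args) {
        System.out.println(new LegacyAccountService().getBalance("10-4477-2"));
    }
}
```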
EAI moves away from rigid application-to-application connectivity toward more loosely coupled message- or event-based approaches. The middleware also includes data translation and transformation, rules- and content-based routing, and connectors (often called adapters) to packaged applications. Vendors generally offer one of three system-wide integration architectures: hub-and-spoke, publish and subscribe, or business process automation. XML-based EAI tools are considered the state of the art in loosely coupled modern architectures.
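A toy hub, sketched below with invented topic names and messages, shows the loose coupling at the heart of these products: publishers and subscribers know the hub, not each other. Real EAI middleware layers transformation, routing rules, and packaged-application adapters on top of this basic pattern.

```java
import java.util.*;
import java.util.function.Consumer;

// A toy publish-and-subscribe hub. Topic names and messages are invented;
// real EAI middleware layers transformation, routing rules, and packaged
// adapters on top of this basic pattern.
public class MiniBroker {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    public void subscribe(String topic, Consumer<String> handler) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    // Publishers never address applications directly; the hub fans each
    // event out to whoever registered an interest in the topic.
    public void publish(String topic, String message) {
        subscribers.getOrDefault(topic, List.of()).forEach(h -> h.accept(message));
    }

    public static void main(String[] args) {
        MiniBroker hub = new MiniBroker();
        hub.subscribe("order.created", m -> System.out.println("CRM adapter saw: " + m));
        hub.subscribe("order.created", m -> System.out.println("legacy wrapper saw: " + m));
        hub.publish("order.created", "<order id=\"42\" total=\"99.50\"/>");
    }
}
```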
EAI vendors advocate wrapping as a way to tap legacy data while avoiding the misery of trying to modify the underlying platform. This approach also enables integration vendors to focus on the communications and connectivity aspects of their solutions, while avoiding the complexity of legacy systems. Like screen scraping, wrapping techniques are applicable in situations where there's no need to change business functionality in the existing platform. However, none of the above approaches address the high cost associated with maintaining a legacy system or finding IT professionals willing to work on obsolete technology.
Another option is replacing an older information system with modern, packaged software and hardware from any one of a variety of ERP vendors, including Lawson Software, Manugistics, PeopleSoft, Oracle, and SAP. This approach makes sense when the code quality of the original system is so poor that it can't be reused. However, deploying a modern ERP system is not a panacea. An organization either has to customize the software or conform to its business processes. The first option is necessary if the original system was custom-made and provided a critical business advantage. Over the last couple of years, major ERP vendors have added tools to help adapt their applications to a customer's specific needs. However, customization still carries enormous risks that the system won't be able to duplicate a unique set of business processes.
In addition, a packaged system requires retraining of end users whose productivity will slow as they adjust to a new way of doing their jobs. IT staff also will need training on the new system. Finally, ERP applications carry hefty licensing fees that remain throughout the life of the software.
When Legacy Migration Makes Sense
Legacy migration is best suited for companies looking to implement a new business model, such as an Internet-based procurement or other B2B system on either of the two major platforms, J2EE from Sun Microsystems and partners or Microsoft's .NET. Both emerging development/deployment environments support XML and SOAP, standards used in exporting and consuming Web services across heterogeneous platforms. Another justification for embarking on a complex migration project would be the increasing expense and difficulty of maintaining and modifying the old system.
The first step in the migration process is the analysis and assessment of the legacy system. Typically, this includes taking stock of all application artifacts, such as source code, copybooks, and Job Control Language. A complete database analysis is also necessary, covering tables, views, indexes, procedures, and triggers, as well as data profiling.
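As a minimal illustration of that inventory step, the sketch below tallies artifacts by type and counts their lines, giving a first-cut measure of the system's size. The file extensions and the default "src" directory are assumptions; a real assessment would also parse the code itself.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.*;
import java.util.stream.Stream;

// A first-cut inventory sketch: tally legacy artifacts by extension and
// count their lines. The extensions and the default "src" directory are
// assumptions; a real assessment would also parse the code itself.
public class ArtifactInventory {
    public static void main(String[] args) throws IOException {
        Path root = Path.of(args.length > 0 ? args[0] : "src");
        Map<String, String> kinds = Map.of(
                ".cbl", "COBOL program", ".cpy", "copybook", ".jcl", "JCL job");
        Map<String, int[]> totals = new TreeMap<>(); // kind -> {files, lines}

        List<Path> files = new ArrayList<>();
        try (Stream<Path> walk = Files.walk(root)) {
            walk.filter(Files::isRegularFile).forEach(files::add);
        }
        for (Path p : files) {
            String name = p.getFileName().toString().toLowerCase();
            for (Map.Entry<String, String> kind : kinds.entrySet()) {
                if (name.endsWith(kind.getKey())) {
                    int[] t = totals.computeIfAbsent(kind.getValue(), k -> new int[2]);
                    t[0]++;
                    t[1] += Files.readAllLines(p).size();
                }
            }
        }
        totals.forEach((k, t) ->
                System.out.printf("%-14s %5d files %8d lines%n", k, t[0], t[1]));
    }
}
```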
Database vendors, such as Oracle and IBM, provide tools that help automate the database migration, which is separate from the application migration. All source database schema and elements must be mapped to the target database. Depending on the complexity of the system, from 80% to 90% of the migration process can be automated. However, there will always be issues with stored procedures and triggers that are indecipherable by an automated parser, requiring manual tweaking.
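The mapping step can be sketched with standard JDBC metadata calls, as below. The connection URL and schema name are placeholders and the type map covers only a few illustrative cases, so this is a sketch of the approach rather than a working migrator; the stored procedures and triggers mentioned above still need manual attention.

```java
import java.sql.*;
import java.util.Map;

// A sketch of metadata-driven schema mapping using standard JDBC calls.
// The connection URL and the "APP" schema are placeholders, and the type
// map covers only a few illustrative cases.
public class SchemaMapper {
    static final Map<String, String> TYPE_MAP = Map.of(
            "NUMBER", "NUMERIC", "VARCHAR2", "VARCHAR", "DATE", "TIMESTAMP");

    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection("jdbc:source-db-url")) {
            DatabaseMetaData md = con.getMetaData();
            try (ResultSet tables = md.getTables(null, "APP", "%", new String[]{"TABLE"})) {
                while (tables.next()) {
                    String table = tables.getString("TABLE_NAME");
                    StringBuilder ddl = new StringBuilder("CREATE TABLE " + table + " (");
                    try (ResultSet cols = md.getColumns(null, "APP", table, "%")) {
                        String sep = "";
                        while (cols.next()) {
                            String srcType = cols.getString("TYPE_NAME");
                            ddl.append(sep).append(cols.getString("COLUMN_NAME")).append(' ')
                               .append(TYPE_MAP.getOrDefault(srcType, srcType));
                            sep = ", ";
                        }
                    }
                    System.out.println(ddl.append(");"));
                }
            }
        }
    }
}
```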
Database migration can add considerable time to completing a project. For example, Mercy Ships, a Christian charity organization headquartered near Tyler, Texas, migrated its 4GL Informix application on a SCO Unix server to Informix's Java-based Cloudscape database running on Linux. The project was necessary to reduce maintenance costs of the system used to track contributors and donations. In addition, the new system gave Mercy Ships a modern development platform for modifying and adding services.
Using an automated migration tool, Mercy Ships ported its 80,000-line application, called PartnerShip, in less than a month. But the total project, including setting up seven locations in Europe and the U.S. with databases and writing Java servlets for maintenance and replication, took seven months. If everything had stayed on the same database, then the project would have been finished in about a month.
Legacy Application Migration
In the early stages of the migration process, core business logic must be identified and mapped out to show the interrelationships of the code performing the application's business function. Program-affinity analysis can be performed to produce call maps and process flow diagrams, which contain program-to-program call/link relationships. These maps and diagrams make it possible to visually identify connected clusters of programs, which are good indicators of related business activity. Companies providing tools to help in the analysis and assessment process include MigraTEC, Netron, Semantic Designs, and McCabe and Associates.
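The following sketch, using invented program names and call pairs, shows the underlying idea: treat program-to-program call/link relationships as the edges of a graph and collect the connected clusters.

```java
import java.util.*;

// A program-affinity sketch: treat program-to-program call/link pairs as
// edges of a graph and collect the connected clusters. The program names
// and call pairs are invented.
public class AffinityClusters {
    public static void main(String[] args) {
        String[][] calls = {
                {"ORD100", "ORD110"}, {"ORD110", "ORD120"}, // order-entry programs
                {"PAY200", "PAY210"}, {"PAY210", "PAY220"}, // payroll programs
        };
        Map<String, Set<String>> graph = new HashMap<>();
        for (String[] c : calls) {
            graph.computeIfAbsent(c[0], k -> new TreeSet<>()).add(c[1]);
            graph.computeIfAbsent(c[1], k -> new TreeSet<>()).add(c[0]); // affinity is undirected
        }
        Set<String> seen = new HashSet<>();
        for (String start : new TreeSet<>(graph.keySet())) {
            if (!seen.add(start)) continue; // already part of an earlier cluster
            // Breadth-first walk gathers one connected cluster of programs.
            List<String> cluster = new ArrayList<>();
            Deque<String> queue = new ArrayDeque<>(List.of(start));
            while (!queue.isEmpty()) {
                String p = queue.poll();
                cluster.add(p);
                for (String neighbor : graph.getOrDefault(p, Set.of()))
                    if (seen.add(neighbor)) queue.add(neighbor);
            }
            System.out.println("cluster: " + cluster);
        }
    }
}
```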
Once core business logic is identified and mapped, it can be broken up into standalone components deployable in client/server and Internet-based environments. This process creates collections of programs that perform a specific business function. In addition, the components have clearly defined APIs and can be accessed through modern, industry-standard protocols. Components can remain on the mainframe as COBOL, PL/I, or Natural programs, or be redeployed into modern, distributed environments such as Java 2 or .NET.
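A minimal sketch of such a component, with invented names and a stand-in business rule, shows the essential property: callers depend only on the interface, so the implementation behind it can live on the mainframe or in a modern tier.

```java
// A minimal business component sketch; the names and the stand-in credit
// rule are invented. The interface is the clearly defined API: callers
// cannot tell whether the implementation behind it still runs as COBOL on
// the mainframe or natively in a modern tier.
interface CreditCheck {
    boolean approve(String customerId, double amount);
}

// One possible implementation: the migrated, natively redeployed service.
class JavaCreditCheck implements CreditCheck {
    public boolean approve(String customerId, double amount) {
        return amount < 5_000.00; // stand-in for the extracted business rule
    }
}

public class ComponentDemo {
    public static void main(String[] args) {
        CreditCheck check = new JavaCreditCheck();
        System.out.println(check.approve("C-1001", 1_200.00)); // true
    }
}
```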
As part of the transformation process, a special class of components that exist in every system needs to be identified. These components perform common system utility functions such as error reporting, transaction logging, and date-calculation routines, and usually work at a lower level of abstraction than business components. To avoid processing redundancy and to ensure consistency in system behavior, these components need to be standardized into a system-wide reusable utility library.
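A sketch of such a library might look like the following; the method names and formats are illustrative, but the point stands: every component calls the same routines rather than carrying its own copies.

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

// A sketch of a system-wide utility library consolidating the common
// services named above; the method names and formats are illustrative.
public final class SystemUtil {
    private SystemUtil() {} // one shared library, never instantiated

    /** Standard error reporting, so every component logs faults the same way. */
    public static void reportError(String component, String message) {
        System.err.printf("[ERROR] %s: %s%n", component, message);
    }

    /** Standard transaction logging, replacing per-program variants. */
    public static void logTransaction(String txnId, String detail) {
        System.out.printf("[TXN %s] %s%n", txnId, detail);
    }

    /** One date-calculation routine instead of scattered private copies. */
    public static long daysBetween(LocalDate from, LocalDate to) {
        return ChronoUnit.DAYS.between(from, to);
    }

    public static void main(String[] args) {
        logTransaction("BAL1", "balance inquiry for account 10-4477-2");
        System.out.println(daysBetween(LocalDate.of(2024, 1, 1),
                                       LocalDate.of(2024, 3, 1))); // 60
    }
}
```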
When selecting a migration tool, organizations need to consider the quality of the generated code. Tools that map every construct in the legacy language to an equivalent in the target language can be major time savers. Developers who are experts in the legacy language will find it easier to understand the generated code if it preserves the naming and structure of the original.
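The fragment below illustrates what such one-to-one mapping can look like. The COBOL source in the comments is invented, but because the Java deliberately keeps the original paragraph and data names, a developer who knows the legacy code can follow it.

```java
// An invented illustration of one-to-one mapping: the COBOL fragment in
// the comments is hypothetical, and the Java below it deliberately keeps
// the original paragraph and data names so a COBOL expert can follow it.
public class OrderTotals {
    //  COBOL:  05 WS-ORDER-TOTAL   PIC 9(7)V99.
    double wsOrderTotal;

    //  COBOL:  COMPUTE-ORDER-TOTAL.
    //              MULTIPLY IN-QTY BY IN-PRICE GIVING WS-ORDER-TOTAL.
    void computeOrderTotal(int inQty, double inPrice) {
        wsOrderTotal = inQty * inPrice;
    }

    public static void main(String[] args) {
        OrderTotals totals = new OrderTotals();
        totals.computeOrderTotal(3, 19.50);
        System.out.println(totals.wsOrderTotal); // 58.5
    }
}
```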
In addition, organizations may find it more convenient to break up the conversion process into two steps. The first is the translation of existing code, data migration, and associated testing; the second is the addition of new functionality. Before making any changes to program logic and structure, organizations should first test the end-result of the migration process for functional equivalence with the original legacy application.
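A functional-equivalence check can be as simple as the sketch below, which replays the same inputs through a legacy oracle and the migrated routine and compares the results. Both routines here are invented stand-ins: one for captured legacy behavior, the other for the generated code under test.

```java
import java.util.List;

// A sketch of the functional-equivalence check: replay the same inputs
// through a legacy oracle and the migrated routine and compare results.
// Both routines are invented stand-ins.
public class EquivalenceTest {
    // Captured behavior of the legacy routine (e.g., recorded host output).
    static double legacyDiscount(double total) {
        return total >= 100 ? total * 0.10 : 0;
    }

    // The migrated routine under test.
    static double migratedDiscount(double total) {
        return total >= 100 ? total * 0.10 : 0;
    }

    public static void main(String[] args) {
        for (double total : List.of(0.0, 99.99, 100.0, 250.0)) {
            double want = legacyDiscount(total);
            double got = migratedDiscount(total);
            if (Math.abs(want - got) > 1e-9)
                throw new AssertionError("mismatch at " + total + ": " + want + " vs " + got);
        }
        System.out.println("functionally equivalent on the sample inputs");
    }
}
```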
Legacy systems are considered to be potentially problematic by many software engineers (for example, see Bisbal et al., 1999) for several reasons. Legacy systems often run on obsolete (and usually slow) hardware, and sometimes spare parts for such computers become increasingly hard to obtain. These systems are often hard to maintain, improve, and expand because there is a general lack of understanding of the system; the designers of the system have left the organization, so there is no one left to explain how it works. Such a lack of understanding can be exacerbated by inadequate documentation, or manuals getting lost over the years. Integration with newer systems may also be difficult because new software may use completely different technologies.
Despite these problems, organizations can have compelling reasons for keeping a legacy system, such as:
- The costs of redesigning the system are prohibitive because it is large, monolithic, and/or complex.
- The system requires close to 100% availability, so it cannot be taken out of service, and the cost of designing a new system with a similar availability level is high.
- The way the system works is not well understood. Such a situation can occur when the designers of the system have left the organization and the system has either not been fully documented or the documentation has been lost over the years.
- The user expects that the system can easily be replaced when this becomes necessary.
- The system works satisfactorily, and the owner sees no reason to change it.
If legacy software runs only on antiquated hardware, the cost of maintaining the system may eventually outweigh the cost of replacing both the software and hardware, unless some form of emulation or backward compatibility allows the software to run on new hardware. However, many of these systems still meet the basic needs of the organization; the systems that handle customers' accounts in banks are one example. Such organizations cannot afford to stop these systems, and yet some cannot afford to update them.
A demand for extremely high availability is common in computer reservation systems, air traffic control, energy distribution (power grids), nuclear power plants, military defence installations, and other systems critical to safety, security, traffic throughput, and/or economic profits. See, for example, the TOPS database system.
The change being undertaken in some organizations is to switch to Automated Business Process (ABP) software, which generates complete systems. These systems can then interface to the organizations' legacy systems and use them as data repositories. This approach can provide a number of significant benefits: the users are insulated from the inefficiencies of their legacy systems, and the changes can be incorporated quickly and easily in the ABP software (at least, that's the intention).
There's an alternative point of view that legacy systems are simply computer systems that are both installed and working. In other words, the term is not at all pejorative -- quite the opposite. Perhaps the term is only an effort by computer industry salesmen to generate artificial churn in order to encourage purchase of unneeded technology.
Legacy Systems
A common problem for many companies is that they may have an old computer system that is reasonably functional and cost efficient but is thought to be in need of replacement. Although these systems are usually very strong on transaction volumes, they typically are not intuitive and are difficult to use. Companies must decide the right timing to move off these old systems and into new systems.
As we do systems analysis for companies all over North America, we have gained considerable experience with this difficult decision. While not every company needs to make a change, the following are common complaints that cause companies to consider one:
- We cannot get at information in the system.
- We have to rely on the IT staff to configure new reports for us.
- We have to print everything as opposed to getting information on the screen.
- Our system is cumbersome compared to other desktop applications.
- We cannot easily get information into a spreadsheet.
- It takes monumental effort and specialized programmers to add new functionality.
- We are behind what our competitors are able to do.
There are definite advantages to staying with a legacy system. These advantages include: