The Bottom Line
Eight Considerations for Managing .NET Applications to Reduce TCO
By Victor Mushkatin
The bottom line. Return on investment. TCO. Whatever you call it, organizations today are developing and maintaining applications at an unsurpassed rate in order to drive employee productivity, company revenue, and customer satisfaction. Their ultimate goal: to achieve operational efficiencies and reduce costs. However, this effort poses its own risks, including how to maintain and manage applications as both end user and business requirements change. These changes can quickly and decisively render applications no longer viable.
The arduous task of maintaining and managing in-production applications accounts for up to two-thirds of an application's total cost of ownership (TCO). In other words, for every dollar spent developing an application, an organization will spend two dollars maintaining it in production. As legacy applications built on technologies like COM and COBOL approach the end of their working lives, organizations are considering future manageability and TCO in their development platform choice. Similarly, as organizations embark on developing new applications to improve processes and impact revenue, they face the same challenge. Increasingly, these organizations are turning to Microsoft's .NET Framework.
The .NET Framework provides a compelling platform for building robust distributed applications. The framework offers many advantages, including the use of safe managed code, automatic garbage collection, language independence, and wide functionality through the comprehensive class library, all making development easier and quicker. This, combined with a development and maintenance cost estimated at 20-25 percent lower than J2EE-built applications, has many organizations viewing .NET as a powerful means of reducing their TCO.
Still, it is simply not enough to choose one platform over another. Organizations must consider several strategies that can reduce costs throughout the application lifecycle, from planning and development, to testing, deployment, and support. The common thread: enabling the ability to monitor and measure application behavior and health throughout the application lifecycle.
Extract and Verify Business Rules
The first consideration when developing or migrating an application to a new framework is to determine the business rules that are being automated. These rules could simply be specifications for extracting and displaying data, but more likely have added complexity for reading, querying, manipulating, and storing data; interacting with users and other systems and services; and providing management and monitoring functionality. Therefore, it is imperative to ensure business rules are not only implemented correctly, but also meet current business and end-user requirements.
Extracting and verifying these business rules will provide the basis for the architecture of the applications, allowing organizations to correctly match the architecture with the hardware and programming techniques required to successfully implement and deploy.
Build a Health Model
In addition to examining business rules, modern principles and best practices dictate that architects create a health model, or blueprint, for application behavior. The health model contains definitions of the state transitions that occur when an individual service or component changes its state from "working normally" to "performance degraded" or "failed". The health model typically uses simple indicators, such as:
- Green indicator for working normally
- Yellow indicator for performance degraded
- Red indicator for failed
These indicators provide health state notifications for each application, component, service, and group of servers.
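These indicators can be sketched in code. The following C# fragment is a minimal illustration, assuming a hypothetical policy that maps a measured response time and an availability flag to a state; the threshold and names are invented for this example, not drawn from any particular monitoring product:

```csharp
using System;

// Hypothetical health-model states, mirroring the green/yellow/red
// indicators described above.
public enum HealthState { Green, Yellow, Red }

public static class HealthModel
{
    // Map a component's measured behavior to a health state.
    // The 2000 ms threshold is illustrative; in practice it would
    // come from the application's service-level agreement.
    public static HealthState Evaluate(double responseTimeMs, bool serviceAvailable)
    {
        if (!serviceAvailable) return HealthState.Red;         // failed
        if (responseTimeMs > 2000) return HealthState.Yellow;  // performance degraded
        return HealthState.Green;                              // working normally
    }
}
```

A monitoring agent would call `HealthModel.Evaluate` for each component and raise a state-change notification whenever the result differs from the previous evaluation.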
This health model helps minimize the TCO of applications by better enabling designers to understand the relationships and interactions between application components and the impact of individual component failures on the health of the entire system. A health model also allows developers to write the appropriate instrumentation (or appropriately configure a monitoring solution), and operations staff to better deploy and manage the application.
Further, the model helps determine what information needs to be collected to correct the degradation or failure. As this information is specific to each individual component and service, it is essential to ensuring optimal performance. Increased uptime and improved data collection when applications behave unexpectedly equate to lower TCO.
Service-oriented Architecture Considerations
As the health model for an application evolves, so too do design practices. Considering the distributed nature of business, modern design principles favor a scenario that allows for a distinct separation of functions into discrete and re-usable components that can interact remotely with each other, and with remote and disparate systems and services.
Web services are a platform-, operating system-, and language-independent technology that allow application components to communicate over any network, including the Internet, using standard ports and protocols such as HTTP and HTTPS. The service-oriented architecture (SOA) approach to application design leverages the advantages of Web services to enable communication across all tiers of an application.
By enabling architects and developers to disconnect the tiers of an application, greater value is realized in terms of increased application flexibility and interoperability, along with easier integration with remote services and external business partners. Despite this impact on cost of ownership, SOA does introduce some challenges that must be considered. These challenges include the location, segregation, and orchestration of services, along with the implementation of a suitable system for monitoring the performance of these components in order to adhere to service-level agreements (SLAs) and operational requirements.
Plan Your Application Monitoring Approach
Up to this point, we've addressed business rules, health models, and Web service design principles. All of these considerations have the potential to lower the TCO of applications. However, their successful implementation requires reliable application monitoring to ensure business rules and application components are yielding the desired results.
Traditionally, monitoring and capturing diagnostic information about an application's behavior has been a development exercise of writing information to a log file or publishing it to the system event log. The development team, in this case, is responsible for deciding what information to collect. In this scenario, organizations rely on end users or QA staff for problem detection and notification, and log files provide the diagnostics. Lowering the TCO for an application requires moving beyond this method of monitoring to a proactive application management approach. To do this, it is vital to understand how collected information is interpreted throughout the problem resolution process and to standardize the presentation of that diagnostic information across all applications.
As was previously mentioned, the health model provides a template for how an application is expected to behave. This model is an integral piece of the monitoring approach, as it should also define the capture of information that is meaningful to the problem resolution process - and provide corresponding corrective actions.
Imagine an application that requires access to a file on the fileserver. What happens if the IT department changes a security policy and causes an "access denied" error? The health model rules will note an application state of "failed" and automatically notify the appropriate team within the IT department. Additionally, supporting diagnostic information is collected to indicate the type of problem, specific information about the particular instance of the problem, and the steps required to resolve the error. The diagnostic information collected would likely include the specific file being accessed, the security error, and the precise permissions required to restore normal application behavior.
This example illustrates how the health model enables management of well-known potential application problems. However, this approach can be costly from both a design and development perspective because it does not necessarily accommodate unanticipated problems. By marrying a health model with an always-on application monitoring solution - one that provides 24/7 detection and diagnosis of both expected and unexpected application problems - a more cost-effective and proactive approach is possible.
From the manageability perspective, the final architecture should allow developers to expose instrumentation, as defined by the health model, that captures errors and generates information that can guide operations staff to the source of the error. Where an application uses services that are not locally controlled or installed, such as a Web service exposed by a supplier or delivery transport provider, local routines that access these services should include facilities for configuration and monitoring that will provide information to help identify availability and performance of these remote services.
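As an illustration of the last point, a local routine that wraps calls to a remote service could time each call so that the latency and availability of the dependency become observable. This C# sketch is a minimal assumption-laden example; the service name and the console output stand in for a real monitoring publisher:

```csharp
using System;
using System.Diagnostics;

// Hypothetical wrapper that times calls to a remote service (for example,
// a supplier's Web service) that is not locally controlled, so operations
// staff can observe its latency and availability.
public static class RemoteCallMonitor
{
    public static (T Result, long ElapsedMs) Measure<T>(string serviceName, Func<T> call)
    {
        var sw = Stopwatch.StartNew();
        T result = call();   // in practice, also catch and record failures here
        sw.Stop();

        // A real solution would publish this measurement to the monitoring
        // system; writing to the console stands in for that step.
        Console.WriteLine($"{serviceName} call took {sw.ElapsedMilliseconds} ms");
        return (result, sw.ElapsedMilliseconds);
    }
}
```

Usage is a one-line wrap around the existing call, for example `RemoteCallMonitor.Measure("SupplierService", () => client.GetQuote(sku))`, so the instrumentation stays out of the business logic.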
A health model and carefully planned architecture allow organizations to considerably reduce the amount of code (instrumentation) developers must write while still providing full state change and performance monitoring capabilities.
Expand Your Monitoring Approach to the Whole System
Now that an application monitoring approach has been considered, greater reduction in total application costs can be realized by leveraging that approach to implement system-wide monitoring.
IT operators typically require a comprehensive view of the performance and issues within an entire system and infrastructure in order to accurately diagnose and resolve problems. The scope of a monitoring approach ideally encompasses the entire system, not just the application. For example, a business application is likely to depend on at least four separate areas of functionality: the data tier, application tier, interface tier, and utility services such as Active Directory, DNS, and networking.
Suppose our health model indicates a state of "failed" for a business application. Also, suppose this failure is actually because a database server has become corrupt or has run out of memory. Or, consider an application component that relies on a Web service exposed by another organization for its source data. If that Web service itself fails, the application state will show "failed". However, if the IT operator cannot see the performance data and error messages relating to the database server, or analyze the time taken for calls to the Web service, it will be difficult to diagnose the problem accurately. Thus, the organization will spend more resources than needed to both determine the source of the problem and resolve it.
To improve the problem resolution process and ensure optimal application availability, while also lowering total cost of application ownership, a monitoring environment should provide roll-up capabilities that combine application events and state transitions to deliver an overall view of both application performance and the health state of individual servers, services, and applications. This environment should also allow IT operators to drill down into data at an application, component, service, or server level in order to determine where a failure or problem occurred.
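The roll-up behavior described here can be illustrated with a short C# sketch, assuming the common convention that the overall state is the worst state reported by any component; the enum and component names are re-declared and invented so the fragment stands alone:

```csharp
using System.Collections.Generic;
using System.Linq;

// Severity increases with the numeric value, so the worst state
// can be found with a simple maximum.
public enum HealthState { Green = 0, Yellow = 1, Red = 2 }

public static class HealthRollup
{
    // Roll up per-component states into one overall indicator.
    // An operator sees this single value, then drills down into
    // the component dictionary to find the failing piece.
    public static HealthState Overall(IDictionary<string, HealthState> components)
    {
        if (components.Count == 0) return HealthState.Green;
        return components.Values.Max(); // Red outranks Yellow outranks Green
    }
}
```

With components such as a data tier, an application tier, and an external Web service reporting their own states, a single Red dependency surfaces as a Red overall state rather than being hidden behind a healthy application tier.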
Implement and Test
Once the design of the application architecture, its components, and the infrastructure upon which it will operate and be monitored is complete, developers can begin to build the application. Using the health model, the required instrumentation is clearly defined, as is the data that this information must expose to the monitoring system.
As code is developed, the information in the health model assists in locating faults and accelerates development, thereby reducing schedule risk and lowering development costs. In addition, when the code fails, developers can use the resulting state changes to quickly locate the fault and determine the root cause. The health model contains all the state changes, so developers are better assured that their instrumentation will detect any errors. While it is dangerous to assume that lack of run-time failures during development means that the code is fault-free, a comprehensive health model helps ensure that all scenarios are considered in terms of detecting run-time faults.
Testing for the absence of application failures, and for compliance with service-level or performance requirements, is only one piece of the testing process. IT operations staff should also test the monitoring systems the application will use. For example, the monitoring rules that were created may be incorrect, or may contain flaws such as matching the wrong value for a specific event category ID. Or, an incorrect discovery rule or system configuration may mean that one or more servers fall outside the expected system monitoring solution. Another potential issue is lack of connectivity, bandwidth, or connection protocol support between the monitored system and the monitoring solution.
Best practices dictate that all functions of the monitoring infrastructure be tested before and after application deployment. This testing should include monitoring management capabilities, resolution process, alerting methods, and the precise information exposed for different types of problems. Complete and accurate testing ensures rapid error capture and diagnosis, and promotes fast error resolution.
Integration into Problem Management Workflow
New applications and monitoring solutions should integrate into existing incident and problem management workflows. Most monitoring solutions can provide application information in a format that is compatible with existing formal processes for alerting, resolving, testing, and releasing product updates.
For individual teams, such as developers or operations staff, communication channels may be formalized so bug reports and feature requests go through a strictly controlled process. A monitoring solution should be flexible enough to fit into this scenario. Seamlessly integrating new applications into existing management systems, structures, and methods reduces application TCO, minimizes application downtime, and circumvents communication problems that plague companies and leave application problems unresolved.
Adhering to the best practices and principles for designing and developing applications depends on establishing an accurate and comprehensive monitoring solution. This solution should provide coverage for the entire application, including its dependencies on other services and components, and it should contain the knowledge required to diagnose, resolve, and test the resolution of application errors.
Like the application, the monitoring solution should also evolve as business and end-user needs change. When adding new features or code to an application, the health model should reflect those changes. The monitoring environment should also evolve continually. Evolution of the monitoring solution can include new server discovery rules, changing roll-up rules and settings, or even adding completely new rules and alerts. By evolving both the application and the monitoring solution, organizations realize greater return on their application development and management investments.
Victor Mushkatin is Chief Technology Officer at AVIcode (http://www.avicode.com). An expert software architect, and adept business manager and leader, Victor has developed a variety of software components, communications libraries, and XML schemas for Internet business portals, including integration with a variety of financial systems, government systems, and communication networks. As business manager for AVIcode's predecessor company, he was directly responsible for P&L and overall leadership of the company. It was under his management that the foundations of the current company and Intercept Studio itself were built.