IBM NETWORKING WORKING PAPER

Tracking Organizational Restructuring and Change:

Metrics for an Assessment System

(Draft -- for I-Team discussion only)

Brigitte Jordan

Data Design

Last Changes: 95.03.05gj

1.0 Introduction

In this discussion paper, we propose a way of thinking about measuring organizational restructuring and cultural transformation in the Division and other parts of IBM that the I-Team works with and affects. We mean for this to serve as a catalyst for conversations and discussions within the I-Team, the ideal outcome of which would be a shared view of what constitutes desirable change in the Division and IBM as a whole, how we can track and understand that change, and how we can use our emerging insights to further the I-Team's work.

Specifically, our objective is to generate an Assessment System that allows us to measure and make visible the ways in which the reengineering and restructuring activities of the I-Team impact IBM business outcomes, principally revenue growth. That is to say, we want to identify enablers and barriers to increased productivity and revenue and then measure progress in enhancing enablers and removing barriers.

A survey of the literature and of efforts in other companies reveals that no such measurement instruments exist. Below, we propose an approach for constructing appropriate metrics, instruments and procedures that would allow us to custom-design such an assessment system and put it in place.

2.0 Overview

Ideally, if we had all the time in the world, we would carry out a detailed ethnographic system analysis that identifies where the barriers and enablers to productivity and revenue growth are. In the interest of early (if partial) results, however, we propose to draw on current experience and prevailing wisdom within IBM to begin this identification. We see the expertise to identify the variables that need to be measured residing in three groups closely connected to the I-Team: the I-Team itself, the Revenue Team, and the Field Organizations, especially LAN Reengineering.

Figure 1: Identifying Variables for Constructing Survey Questions

We propose to use these groups as data sources for constructing a "Survey" around variables and issues that we find to be specifically relevant to the Division and IBM as a whole. In the longer run, we expect to develop a (more or less) formal assessment system for tracking the impact of the team's transformational and reengineering efforts on the Division and other parts of IBM. For the groups that function as data sources, an important side benefit of the conversations engendered by our data collection activities would be a shared view of the relevant issues, support of team consciousness, and reflection on collaborative practices.

These conversations would get teams to begin to think about fundamental issues such as:

•how can an emergent "community of practice" (such as the I-Team or the Revenue Team) understand and reflect on its own changing values, resources and problem definitions?

•how can we visualize the emergent new culture?

•how can we appreciate and support the work practices of the new culture?

•how can we use the assessment process itself as a synergistic tool in the change process?

One set of interviews has already been carried out by Phyllis Finnemore with individuals from LA hq and the countries; another will shortly be carried out with the I-Team.

3.0 Background: Measurement of Change Presents a Tough Challenge

We are certainly not the only ones to realize that change in complex, knowledge-based organizations is difficult to measure. The professional literature contains no satisfactory recipes for carrying out such measurements. There are multiple reasons for that:

•a changing organization is a moving target; it doesn't stand still long enough to be assessed;

•the factors that cause change are likely to be complex in the first place. Furthermore, interventions by the I-Team may interact with a variety of extraneous factors - internal/external; perceived/unconscious; purposeful/unintended; local/global;

•most importantly, our interest lies not so much in simple-minded individual outcome measures as in understanding changes in the complex relationships that make up knowledge workers' communities of practice, their attitudes, values, ways of looking at "the world"; in brief, we need to better understand the cultural changes that can affect otherwise deeply entrenched world views.

Before getting into the work of designing metrics that can do justice to these issues, we need to be clear about what exactly we want to accomplish in constructing an assessment system.

4.0 Some Fundamental Questions We Should Ask about Measurement

4.1 WHAT are we going to measure?

Any measurement is based on assumptions, explicit or implicit, about what is important to keep track of. Many kinds of measurement are possible, not all of which are of interest to us. In particular, we need to distinguish between relevant and irrelevant aspects of change. What is our measurement system intended to assess? What do we care about? In the long run, we want to be able to make an argument that the restructuring and reengineering activities of the I-Team have a positive impact on business outcomes. (We take it that business outcomes will be measured by others numerically and that those numbers will be available to us.)

In the long run, we need to worry about linking I-Team activities to improved business results. This requires understanding which activities the I-Team believes have such an effect, and formulating a concise and convincing argument to that effect.

4.2 WHY measure it?

The next question we need to ask is WHY measure these variables. In other words, what are we going to DO with the outcome measures? In what ways do we imagine they will be useful for the work of the I-Team? What measures will be diagnostic? What measures will be persuasive? There is no sense in compiling measures that sit on a shelf.

Reasons why we want to establish an Assessment System might include:

•make visible ("talk-aboutable") overt as well as subtle, evolutionary changes

•make visible (and reap rewards for) various types of successes

•assess positive and negative effects of particular interventions

•make midcourse adjustments; adjust programs on a continuous basis

•support team (organizational) learning about what works and what doesn't

•generate crisp and "objective" reports to senior management, with numbers, models, and graphs

•and more ....

4.3 FOR WHOM are we going to measure it?

We need to be clear about the potential users of our measurement instruments and results. Our Assessment System should take account of their needs. It might be designed with some of the following purposes in mind (as well as others not now evident):

•for use by individual team members (personal self-assessment)

-for self-diagnosis

-for asking questions that identify strengths and opportunities

-for assessing one's most powerful and effective role within the team

•for internal use by the groups that serve as data sources

-they may want to reflect on changes the team is undergoing. Insiders often can't see the forest for the trees; patterns are hard to discern, and incremental changes remain invisible. Such reflection supports:

*team conversations (formal and informal; workshops)

*awareness of what works and what doesn't

*shared problem solving

*shift to the Learning Organization

•for persuasive use by I-Team members (by Steve?)

-in signing up new clients for reengineering efforts

-for communicating "upstairs"

-strategic planning function

*to identify systemic problems

*for resource allocation

•for strategists' use to develop a theory (rather than importing a theory) of how change happens in complex, knowledge-based global organizations

•for I-Team change agents to develop a methodology for organizational restructuring

-a tracking and team development system (for I-Team)

-a survey-based instrument for Division

-a generic guide to effective reengineering practice

5.0 Two Potential Approaches to Measurement

(We propose combining the two below.)

5.1 The outside observer

The first approach relies on an outside "objective" observer who, with the aid of validated instruments, looks at the organization and measures what goes on between time one and time two. To our knowledge, there are no validated instruments that would serve our purposes. If we wanted to follow this path we would need to:

-have a clear understanding of the variables we want to measure. At this point, this would most likely be based on anybody's guess, augmented possibly by information from the literature about what has been valuable for other organizations (Lisanne has not found much)

-develop a questionnaire based on those variables

-validate the questionnaire

-administer the Survey to all members of the organization

-analyze results

Note that the results we would get would let us know how the I-Team or Division is doing along lines identified by the variables we decided up front are important. In other words, this approach lets us know better and in more detail what we, in some sense, already know.

For example, we may have good reason to believe that effective teamwork is an important enabler in high performance work groups. We could develop a set of questions around that variable that measure people's perceptions of changes in teamwork. In the end, we will know better, that is to say, in more detail, how people perceive that issue.

5.2 The engaged participant

On the other hand, what is important to measure here may not altogether fall into any preconceived change schema or be addressed by any well-known questions. The second approach we will utilize relies on building the pool of variables (and questions to be asked around those variables) from interviews and conversations within the I-Team and other groups.

If we follow this approach, we start a conversation with our data sources about which aspects of the ongoing change they currently experience as important. We also ask them about their thoughts and projections regarding the future course. Specifically, these conversations might revolve around:

-changes in individuals' own thinking

-perceived changes in team members' thinking

-changes in the organizational environment

-current perception of productive courses

-current perception of barriers and risks

-changes in work practice

-changes in technologies (especially technologies for distributed collaboration)

-changes in social networks and collaboration patterns

-changes in attitudes and values

-changes in the reward system and how they are perceived

* financial

* social/interactional

-etc.

Out of this process would come a shared understanding within each team of what aspects of change are perceived as important and what mechanisms for generating change team members see as effective or problematic. This would identify a set of variables insiders believe are relevant and powerful. Tracking those would give us a dynamic and adaptive evaluation system that would at the same time serve as a catalyst in the ongoing discourse about strategies for effective change and transformation. It would also serve as a focal point in the team's evolving awareness of the changes it itself is undergoing, how to take control of those changes, and what can be learned from this process to influence change in the Division and elsewhere.

5.3 Our proposal: A hybrid approach

Given our desire to make quick initial progress towards a survey instrument, we intend to capitalize on "low-hanging fruit," that is, identify some variables that are obvious candidates for barriers and enablers (such as effective teamwork, open communication channels, technology that supports collaboration, etc.).

At the same time, we believe it is of fundamental importance to come to a better understanding of the systemic productivity enablers that need to be identified and supported in order to effect the necessary culture change. We intend to get at those variables by working intensively with the data sources identified above, and with other relevant groups and individuals. (See Figure 2, attached, for a graphic representation of 1995 and 1996 activities.)

6.0 Here Is Our Strategy

We propose to begin by tracking the I-Team, starting with intensive ethnographic interviews, possibly to be followed by work with the Revenue and LAN RE teams. We will use our emerging understanding of the changes the I-Team is undergoing, together with the I-Team's sensitivities to the operational realities in IBM, to build up an empirically (rather than theoretically) grounded inventory of issues, variables, constructs and questions. That inventory will later serve as the basis for more formal survey instruments for understanding change and transformation in the Division and other organizations where size precludes intensive interviewing and consensus building.

This should lead, by the end of 1995, to an experimental survey instrument that, in turn, will form the foundation for developing a comprehensive tracking and assessment instrument in 1996. Figure 2 represents the steps to be taken and the relationship between the activities projected for 1995 and 1996.

It should be noted that while some of this work can be done by specialists delegated to carry it out (like Gitti Jordan and Lisanne Shupe), its success will depend crucially on active collaboration and engagement by all I-Team members. The end result should be a custom-designed instrument that the I-Team owns and uses, in its entirety or in parts, for evaluating all of its initiatives.

We also intend to explore the possibility of combining interviews in the service of constructing the Survey with work involving Myers-Briggs assessments that Debra York plans to carry out.

Filename: Exitus: ms a’mnt syst 95.03.05