Principal Investigator/Program Director (Last, First, Middle): Jolesz, Ferenc A.

C. Specific Aims

The primary goal of this project is to use biomedical engineering principles to develop general-purpose software methods that can be integrated into complete therapy delivery systems. Such systems will support more effective delivery of many image-guided procedures: biopsy, minimally invasive surgery, and radiation therapy, among others. To understand the extensive role of imaging in the therapeutic process, and to appreciate the current usage of images before, during, and after treatment, we will focus our analysis on four main components of image-guided therapy (IGT): localization, targeting, monitoring, and control. We will use a succession of increasingly challenging testbeds to drive the development of new science and technology. The main focus of this partnership is the development and adaptation of robust algorithms. Clinical projects will be used to provide challenges, data sets, and, ultimately, validation of the overall utility of the concepts developed. Specifically, we will:

  1. Develop robust algorithms for
  • segmentation – automated methods that create patient-specific models of relevant anatomy from multi-modal imagery;
  • registration – automated methods that align multiple data sets with each other and with the patient;
  2. Integrate these technologies into complete and coherent image-guided therapy delivery systems;
  3. Validate these integrated systems using performance measures established in particular application areas.

To achieve these aims, we will create novel methods that build on our extensive ongoing research in different areas of image-guided procedures such as neurosurgery and prostate brachytherapy. Our goal is to develop basic technology and to extend IGT capabilities into other applications, such as abdominal, orthopedic and pelvic interventions. We anticipate encountering significant challenges in transferring our experience with the brain to other parts of the body. As we develop solutions for each new anatomical area, we expect that the performance specifications will change, requiring new techniques. For example, we expect that we will have to extend rigid body registration to solutions that deal with deformable soft tissue organs, for which more elaborate algorithms are needed. Hence, we will develop more general algorithms for segmentation and registration. To support our program, we will develop modular software frameworks in which it is easy to compare alternative methods within the context of an end-to-end system, and in which it is easy to interconnect suites of modules to build complete systems. Because we see such integrated systems as central to the development of IGT systems, we will also explore concepts for managing software development and assuring quality, especially within a multidisciplinary, multi-location collaboration.

Initially, we will use image-guided prostate brachytherapy as our testbed. We will then expand into minimally invasive liver procedures and, finally, toward the end of the grant period, we will work on image-guided breast surgery. This progression will present increasing difficulty in compensating for nonrigid and semi-rigid motion. The prostate and liver efforts are both ongoing clinical projects with a constant stream of clinical cases; the breast project is in the early stages of clinical application.

Our ultimate goal is to create the computational infrastructure and associated suite of methods to support a broad range of image guided therapy procedures, and to evaluate the impact of such IGT interventions in the delivery of surgical care.

D. Background and Significance

D.1. Significance

Images are rich in information that can be used for diagnosis and for subsequent therapeutic interventions. Our goal is to develop tools that leverage the application of image-based information to the tightly coupled processes of diagnosis and therapy. To appreciate the potential impact of IGT systems, we consider the numerous ways in which images influence diagnosis and therapy. IGT systems use preoperatively acquired images to create anatomical models, which provide localization, targeting and visualization of the three-dimensional (3D) anatomy. These models support preoperative planning to define and optimize access strategies as well as the simulation of planned intervention. When registered to the patient, these models connect image coordinates with the actual position defined by an instrument’s location in the surgical field, thereby enabling a surgeon to navigate and execute procedures with full knowledge of the surrounding anatomy.

Used in these ways, image-based models can support a variety of medical applications. For diagnosis, the primary objective is the detection, localization and identification of a potential abnormality. For therapy, the primary objective is localization, which includes not only the exact anatomic position and spatial extent of an already known target, but also the delineation of surrounding anatomy and the comprehension of important anatomic relationships. The depiction of anatomic structures not adjacent to a target may be important if they are located within the operational volume or reside along one of the potential access routes to the tumor target. Image guidance supports both objectives by providing 3D representations of the target and operational volume, including tumor margins, tissue identification, and nearby structural content.

Localization or target definition should incorporate all the essential morphologic, anatomic and physiologic properties of the target. These are required for planning and executing interventional and surgical procedures, especially for optimizing targeting and achieving complete removal or ablation. Target definition can improve not only the detection of tumors but also the effectiveness of surgical therapies. Image tools are essential at this stage.

The localization of a tumor and its delineation in 3D defines the target of therapy. The next step is the selection of potential access routes or trajectories, wherein the operator must choose preferential options from a multitude of alternative paths to the lesion. In the case of biopsy, a single trajectory should be chosen. During surgical procedures, or more complex percutaneous interventions, the decision usually involves multiple conceivable trajectories. Again, detailed image information is critical in guiding decisions at this stage.

Given analyzed preoperative image data, the spatial information it represents should be tightly linked to the patient. This registration process determines the transformation of image-based coordinates into the patient’s frame of reference, thus allowing targeting and execution of the actual procedure to occur with optimal information available to the surgeon. Major progress has been made by moving from frame based stereotactic systems to the current CT or MRI-based computerized frameless stereotactic systems. These new systems use fiducial markers or anatomic landmarks to establish correspondence between the image-space and the patient's anatomy. Both fiducial markers and surgical instruments can be tracked using a variety of sensors. These tracking methods are used not only to relate the positions of markers or instruments between two corresponding frames of reference but also for interactive display. These navigational systems present images with orientation and location defined by the position of the tracked device, which can guide the surgeons to the target lesions with relatively good accuracy unless the position of the targeted structure and/or the surrounding anatomy has changed significantly during the procedure.
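In the rigid case, the fiducial-based registration step described above reduces to a classical point-alignment problem. As an illustrative sketch (not our clinical implementation; all function names are ours), the least-squares rigid transform between corresponding fiducial sets can be computed with the standard SVD-based (Kabsch/Horn) method:

```python
import numpy as np

def rigid_register(image_pts, patient_pts):
    """Least-squares rigid transform (rotation R, translation t) mapping
    image-space fiducial coordinates onto patient-space coordinates,
    computed with the SVD-based Kabsch/Horn method."""
    image_pts = np.asarray(image_pts, float)
    patient_pts = np.asarray(patient_pts, float)
    ci = image_pts.mean(axis=0)                   # fiducial centroids
    cp = patient_pts.mean(axis=0)
    H = (image_pts - ci).T @ (patient_pts - cp)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cp - R @ ci
    return R, t

def fre(R, t, image_pts, patient_pts):
    """Fiducial registration error: RMS residual after alignment."""
    resid = (np.asarray(image_pts) @ R.T + t) - np.asarray(patient_pts)
    return float(np.sqrt((resid ** 2).sum(axis=1).mean()))
```

The residual reported by `fre` is the usual summary statistic for fiducial-based alignment; a small FRE does not by itself guarantee small error at the surgical target.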

In some cases, shifts and deformations of soft tissues that occur during surgery, due to mechanical factors, physiologic motion, swelling or hemorrhage, may displace organs or their tissue components to such a degree that preoperatively acquired image-based 3D models can no longer be registered or fused with the patient's actual anatomy. In this situation either partial correction of the 3D model or full volumetric image update is necessary. Even limited revision of the original images requires some clues about the changes taking place during surgery. Some useful information can be obtained from prior knowledge and experience-based models of deformations. However, only intraoperative imaging can provide real information. This updated positional data can then be used to modify the original volumetric images using elastic warping algorithms or other more elaborate computer-based methods. Thus, images are a key component in maintaining surgical integrity.

The ultimate solution for accurate image-guided targeting is real-time intraoperative imaging or at least frequent updating of the volumetric images during procedures. This results in targeting methods that can continuously detect changes in the position of various tissue components and locate the targets and their environs in order to define trajectories to the lesion. Intraoperative imaging may use multiple projections or image planes to get 3D data, or cross-sectional imaging methods like CT or MRI can be used to encompass the entire 3D volume without the need for real-time interactive imaging.

Our main goal is to facilitate full utilization of the anatomic and functional information accessible by current medical imaging methods in image-guided therapy. By providing the surgeon with easy access to this multi-modal information, registered to the anatomy of the patient, we will improve the safety and efficiency of surgical procedures. To accomplish this goal, we must develop methods that automatically convert medical images into patient-specific models. We will apply our segmentation techniques to a range of multi-modal acquisitions, which we then register to a common coordinate frame. This augmented patient model can then be used for planning and simulation, or can be further registered to the actual patient, to support surgical visualization and navigation.

It is our belief that this registered information may help to dramatically change surgical procedures by enabling the surgeon to precisely identify and avoid critical structures, which are either hidden below the currently exposed surface or indistinguishable by the human eye from surrounding structures. This multi-modal information will provide the means to accurately locate pathological tissue and facilitate trajectory optimization. It will support minimally invasive procedures, which take less time, involve less removal of tissue, and have fewer risks of side effects.

To achieve our goal of IGT, we must do more than just develop robust algorithms for segmenting, registering and visualizing anatomical and functional reconstructions of patient anatomy. We must also integrate these computational tools into complete systems that are subjected to extensive use in real surgical settings. Furthermore, we must evaluate the efficacy of the individual components and the complete systems in improving therapeutic delivery. The feedback obtained from surgical utilization of IGT systems is essential to developing practical biomedical toolkits for leveraging information in medical imagery. The Surgical Planning Laboratory is well placed for achieving this dual goal of novel algorithmic development and clinical application and evaluation. We currently provide pre- and intra-operative visualization services to our neurosurgical colleagues on a regular basis, and we collaborate closely with users of the interventional MRI unit at Brigham and Women's Hospital. Both surgical settings provide immediate and valuable feedback to the designers of the IGT components and systems.

D.2. Background

As outlined in our Specific Aims, we consider relevant prior work in each of our proposed work's constituent parts.

D.2.1 Algorithms

Segmentation converts medical images into anatomically or functionally distinctive structures, which are of more utility to the surgeon than the individual image slices. The process also identifies surface boundaries of connected tissue types, enabling visualization of major anatomical structures. By creating automated tools for segmentation, we enable the transformation of raw image data into structures that directly relate to the patient's anatomy, and thus to the surgical procedure. Common approaches to segmentation include manual segmentation, pattern-recognition (statistical classification) based segmentation, alignment-based segmentation, and curvature-flow-based segmentation. The earliest approaches to image segmentation were achieved through intensive user interaction. Although considerable progress has been made in the development of automated segmentation algorithms, the lack of robust techniques still remains the most significant obstacle to the widespread use of medical image analysis for diagnosis and treatment planning. An extensive review of MRI segmentation approaches can be found in [Collins92], and from our group, in [Kikinis97] and [Warfield98d].

a) Segmentation
(1) Adaptive Filtering

Because image intensities associated with different tissues may be difficult to discriminate in the presence of sensor noise, several authors have applied signal-processing methods to enhance the image intensities (feature enhancement) [Jain89]. Several methods of adaptive filtering have been proposed and applied for image enhancement [Knutson83], [Perona90], [Restrepo88]. Initial experiments with diffusion-based algorithms for image analysis were motivated by the aperture problem [Koenderink84], [Witkin83], [Lindeberg90]. More advanced methods included nonlinear diffusion and geometry-driven diffusion [Nordström90]. Level set methods are increasingly finding application in medical image processing [Osher88], [Sethian89], [Sethian92]. They provide powerful and general-purpose means of evolving surface models in three dimensions. When coupled to image data, they can be used to segment or identify structures having certain properties. These methods were used by [Zeng98] to find the sulco-gyral structure of the human cortex from MRI. Snakes, or active contours, are a common computer vision tool, and have been used for edge and curve detection, segmentation, shape modeling, and visual tracking [Blake93]. In general, partial differential equation methods that couple to image features localized by curvature-driven evolution are under active investigation [Morel95].
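To make nonlinear diffusion concrete, the following sketch implements a Perona-Malik-style scheme on a 2D slice, in the spirit of [Perona90]; the function name, parameter values, and conduction function are illustrative, not taken from any particular implementation:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Nonlinear (edge-preserving) diffusion: smoothing is suppressed
    where the local gradient is large, so sensor noise is reduced while
    tissue boundaries are preserved."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # one-sided differences to the four neighbours (periodic borders)
        dN = np.roll(u, -1, 0) - u
        dS = np.roll(u,  1, 0) - u
        dE = np.roll(u, -1, 1) - u
        dW = np.roll(u,  1, 1) - u
        # conduction coefficient g(d) = exp(-(d/kappa)^2):
        # near 1 for small gradients, near 0 across strong edges
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return u
```

The conduction threshold `kappa` separates gradients treated as noise (smoothed) from gradients treated as edges (preserved); choosing it per acquisition protocol is the practical difficulty.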

(2) Intensity-Based Classification

The use of multi-channel statistical intensity classifiers in the medical domain was pioneered by Vannier et al. [Vannier85] and later used and extended by many others. This class of methods models each tissue type as a distribution of intensity values in some feature space, and uses variants of a nearest-neighbor rule to classify voxels based on recorded intensity. The tissue model distributions may be acquired from labeled training data, or provided by a priori models. Variations of this method include the use of "homomorphic" methods [Axel87], [Lim89] to correct for slowly varying intensity artifacts, as well as several non-homomorphic approaches [Dawant93], [Tincher93]. Several authors have reported methods based on the use of phantoms for intensity calibration [Axel87], [Gohagan87]. Several authors, including [Kapouleas94] and [Kohn91], have examined the use of statistical neighborhood models for segmentation. The use of Markov Random Field neighborhood models to capture local variations in voxel classification has also received attention [Held96], [Geman84]. Additional anatomical knowledge can be factored into these methods by modifying the classification of a pixel, based on registration of the image volume to an atlas of empirically determined tissue class priors [Kamber95], [Zijdenbos98].
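To make the nearest-neighbor intensity rule concrete, a minimal multi-channel k-NN classifier might look like the following sketch (brute-force distances and illustrative names; a production system would use a spatial index and the bias-field corrections discussed above):

```python
import numpy as np

def knn_classify(train_feats, train_labels, voxels, k=5):
    """Classify voxels by a k-nearest-neighbour majority vote in intensity
    feature space (one feature per MR channel), from labelled training
    voxels. Brute-force; a k-d tree would be used at clinical scale."""
    train_feats = np.asarray(train_feats, float)   # (n_train, n_channels)
    voxels = np.asarray(voxels, float)             # (n_voxels, n_channels)
    labels = np.asarray(train_labels)
    # squared Euclidean distance from every voxel to every training sample
    d2 = ((voxels[:, None, :] - train_feats[None, :, :]) ** 2).sum(-1)
    nearest = np.argsort(d2, axis=1)[:, :k]        # k nearest training voxels
    out = np.empty(len(voxels), dtype=labels.dtype)
    for i, idx in enumerate(nearest):
        vals, counts = np.unique(labels[idx], return_counts=True)
        out[i] = vals[np.argmax(counts)]           # majority vote
    return out
```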

(3) Segmentation by Alignment

More recent approaches to segmenting anatomical structures consider taking a template data set and comparing it with patient data by finding a transformation that aligns the reference data set with the patient data. In this way, one can either apply transformed labels from the template to define tissue boundaries, or one can use the transformed template to set expectations for the statistical classification of the patient data. This approach applies anatomical context to aid in the segmentation process, though it clearly depends on the existence of an accurate, detailed atlas, and the ability to warp that atlas to the new image data. This registration of an atlas has been accomplished through manual correspondence [Evans91], [Bookstein92], semi-automated correspondence detection [Collins92], [Meyer96], and automated correspondence [Dengler88], [Bajcsy89], [MacDonald94], [Moshfeghi94], [Collins94], [Thompson96], [Szeliski93], [Christensen94], [Bro-Nielsen96], [Miller93]. Several different schemes have been successfully applied for representing the high-order nonlinear transform necessary to align a reference template with a patient, allowing interesting characterizations of anatomical variability [Thompson96], [Thompson97], [Haller97], [Gee92].
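The label-propagation variant described above can be sketched as follows, assuming the aligning transform has already been estimated. For brevity the sketch uses a simple affine map and nearest-neighbour resampling (which keeps label values discrete); the function name and conventions are ours:

```python
import numpy as np

def warp_labels(atlas_labels, A, t, out_shape):
    """Propagate atlas label values onto a patient grid, given an affine
    alignment x_patient = A @ x_atlas + t. Applied inversely: each patient
    voxel looks up its corresponding atlas voxel. Nearest-neighbour
    sampling keeps labels discrete; voxels mapping outside the atlas
    receive the background label 0."""
    Ainv = np.linalg.inv(A)
    grid = np.indices(out_shape).reshape(len(out_shape), -1).T  # patient voxels
    src = (grid - t) @ Ainv.T                                   # back to atlas space
    src = np.rint(src).astype(int)                              # nearest neighbour
    out = np.zeros(np.prod(out_shape), dtype=atlas_labels.dtype)
    inside = np.all((src >= 0) & (src < np.array(atlas_labels.shape)), axis=1)
    out[inside] = atlas_labels[tuple(src[inside].T)]
    return out.reshape(out_shape)
```

The nonlinear warps cited above replace the single affine map with a dense deformation field, but the inverse-lookup structure of the resampling step is the same.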

(4) PDEs and Active Contours

Grenander introduced deformable image models in his classic work on pattern theory [Grenander76], in which pattern understanding is formulated in terms of Bayesian statistics. (Utilizing concepts from information theory, in particular the notion of “minimum description length”, an equivalent formulation can be derived.) The fundamental idea is to incorporate prior knowledge of the space of possible images in the form of a template on some domain, and to assume that the class of all possible true images is formed by composing that template with continuous mappings of the domain into itself. This idea has had a major effect on computer vision and image processing algorithms, in particular those designed for the problem of segmentation.

Segmentation by active contours (or snakes) is an established method in the field of model-based segmentation. In two-dimensional images, a parametrically defined curve with constraints on how it can deform is driven by forces derived from the images; such a force can, for example, be derived from the local edge strength. The parametric curve, or so-called snake, is then attracted to the edges of the images, forming an outline of the object at hand. Modern approaches to active contours are based on a more rigorous mathematical framework. Segmentation based on mean curvature evolution schemes, implemented with level set methods, has recently become an important approach in computer vision [Evans91], [Chen91], [Caselles97], [Kichenassamy95], [Kichenassamy96], [Tannenbaum96], [Sapiro96], [Lauziere98].

The philosophy underpinning the well-known Mumford-Shah functional, which gives a rigorous variational approach to segmentation [Mumford89], is strongly based on Grenander’s work. Here one wants to find the best match to the deformed image; “best” is defined in terms of the given functional. This in turn has motivated much of the research in the PDE-based approaches to segmentation [Morel95]. Finally, deformable templates can powerfully model the variability of observed imagery, and have been to a large extent rigorously justified; see [Amit91], [Grenander98] and the references therein. In our proposed work on segmentation, this constellation of concepts based on pattern theory will certainly play an important role when we consider the organization of our methodologies for segmentation and feature extraction.
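For reference, in its standard form (our notation; the placement of the weighting constants varies between presentations of [Mumford89]), the Mumford-Shah functional seeks a piecewise-smooth approximation f of the observed image g together with an edge set C minimizing

```latex
E(f, C) \;=\; \mu \int_{\Omega} (f - g)^2 \, dx
\;+\; \int_{\Omega \setminus C} \lVert \nabla f \rVert^2 \, dx
\;+\; \nu \, \mathrm{length}(C)
```

where the first term enforces fidelity to the data, the second enforces smoothness of f away from the edge set, and the third penalizes the total length of the edges, preventing the trivial solution in which every pixel boundary is declared an edge.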