
A. SPECIFIC AIMS

Taste solution acceptance is a complex behavior that can be easily measured. We propose to refine existing methods so they can be used to screen large numbers of mice for aberrations in taste solution acceptance. To do this will require three specific aims:

Specific Aim 1. Fine-tuning the long-term, two-bottle choice test. The most commonly used test to examine the acceptability of taste solutions in mice is the long-term, two-bottle choice test. We propose to fine-tune the already well-established methods for this test. Specifically, we will compare the response of groups of C57BL/6J (B6) and 129/SvJ (129) mice tested during systematic manipulations of drinking bottle spout position, the number of drinking bottles, the test duration, the maintenance diet, and the subjects' age. We will also determine which taste solutions produce carry-over effects (that is, influence solution intake in subsequent tests), and explore procedures to eliminate them. These experiments will allow us to optimize test conditions so as to maximize the likelihood of finding mice with aberrant taste solution acceptance.

Specific Aim 2. Optimizing test conditions to conduct brief-exposure tests using a lickometer. A complementary method of assessing taste phenotypes is to conduct brief-exposure tests using a lickometer. In this aim, we propose to automate the equipment required for such tests. We will also establish appropriate solution concentrations and test conditions to optimize the likelihood of discovering mice with aberrant taste phenotypes.

Specific Aim 3. Establishing reference data for subsequent identification of mice with aberrant taste phenotypes. The previous aims will establish the best methods for taste phenotyping large numbers of animals. In this aim, we propose to use these methods to test large numbers of B6 and 129 mice, 24 other "reference" strains, 7 strains with known taste deficits, and 8 groups of mice with surgical or dietary manipulations. This will establish reference data for subsequent mass screening of mice and demonstrate the feasibility of detecting genetic differences in taste phenotypes.

A section of the proposal is devoted to administrative issues, including the procedures we will use to disseminate test methods and results. This includes developing a detailed training manual, publishing a database of results, and exploring other ways of providing detailed methods and reference data to interested parties.

B. BACKGROUND AND SIGNIFICANCE

The RFA for this proposal provides ample justification for phenotyping large numbers of mice, and we will not repeat the arguments here. Instead, we will discuss why it is important to conduct research on taste solution acceptance in the mouse, and summarize potential approaches to do this.

Why study taste solution acceptance?

There are at least two reasons to study taste solution acceptance. First, studying what an animal drinks tells us about mechanisms of taste perception. Loss of taste reduces intake of palatable solutions and increases intake of unpalatable ones. Understanding the mechanisms of taste perception has implications for health and wealth (e.g., formulation of foods and drinks). Second, taste solution acceptance is a complex behavior that is influenced by physiological state. Disturbances in physiology are frequently expressed as changes in ingestive behavior. This can be quite specific. For example, disturbances of sodium balance increase intake of NaCl solutions(e.g.,30,39,89,91), disturbances that produce hypocalcemia lead to increased intake of calcium solutions(e.g.,110,113), protein deficiency leads to increased protein intake(e.g.,40), and metabolic disturbances such as diabetes alter intake of sweet compounds(e.g.,109). Thus, abnormalities in taste solution acceptance provide a non-invasive indication of dysfunction of many physiological mechanisms involved in homeostasis.

Taste and genetics in the mouse

Primarily because of historical antecedents and its larger size, the rat has been the favored rodent for taste perception studies. However, there have been exceptions arising from "classical" genetics, most notably the development and characterization of mouse strains with differing sensitivities to bitter compounds18,19,42,126.

Recent interest stimulated by the revolution in genetic methodologies has led several investigators with experience in chemosensory research to join the hunt for genes involved in taste perception. One breakthrough has been the production of mice with a knockout of the gene encoding the G protein α-subunit α-gustducin, which have diminished sensitivity to sweetness and bitterness45,61,62,64,65,106,127. Another has been the localization of a pair of quantitative trait loci on chromosome 4, one of which is the Sac locus60, which together account for more than 50% of the genetic variance in the intake of sweet solutions by C57BL/6 and 129/J mice and their hybrids3. The behavioral data here are complemented by electrophysiological recordings showing that differences in transduction (or a peripheral sensory process) can account for the disparate preference for sweetness shown by the two parent strains3,4,69. Very recently, two G protein-coupled receptors expressed in the apical (taste-sensing) end of taste receptor cells have been characterized (TR1 and TR2)26, and one appears to be localized in the same portion of chromosome 4 as the Sac locus26. It is therefore possible that genetic methods have already exposed a receptor of major importance for the perception of sweetness.

These successes have provided a new impetus to the study of taste perception, and it seems likely that several genes underlying taste receptor structure and function will be identified in the next few years. Nevertheless, many puzzles remain. For example, the TR1 and TR2 receptors are not co-expressed with α-gustducin. Thus, gustducin-coupled taste receptors remain to be discovered. Moreover, besides peripheral taste reception, there are complex mechanisms responsible for taste coding and for the integration of sensory input into behavioral output. Many genes must be involved in these higher levels of taste function and ingestive behavior, but they are unknown.

Genes that make a large contribution to taste perception will be easy to phenotype, and thus these genes should be relatively easy to identify. However, as attention turns to genes with smaller or less consistent effects on taste perception, it will become progressively harder to discern their contribution. Whereas it has been possible to isolate QTLs with major effects on the acceptance of sweetness and alcohol with groups of several hundred F2 hybrid mice(e.g.,3,16,75,93), it is likely that this strategy will require several thousand animals for genes with lesser effects or epistatic contributions. Similarly, NIH is planning a multicenter initiative to screen many thousands of mice with mutations. Before such an investment in time and money begins, it is prudent to establish methodologies for testing taste perception and acceptance that meet several criteria, listed below.

1. Each test must be rapid, or at least require little of the investigators' time, because of the large number of subjects involved. A corollary is that the tests must involve simple procedures that relatively unskilled laboratory personnel can perform routinely.

2. Each test must be sensitive. Obviously, insensitive tests may fail to detect animals with subtle taste deficits. The more sensitive the test, the more likely it will discriminate animals with unusual phenotypes. Generally, sensitivity can be increased by repeated or prolonged testing, but this is not a viable option for rapid screening of mutagenized mice (see Criterion 1, above). More important for studies of taste perception is the choice of compounds and concentrations of taste solutions to test. Variation in response is seriously curtailed if the taste solutions are highly palatable because all animals respond maximally, leading to ceiling effects. Similarly, highly unpalatable solutions are ingested in such small amounts that floor effects restrict variation.

3. Each test must be highly reliable. If a test is unreliable, it is useless for genetic studies. False positive results precipitate an unfruitful investment involving genotyping and/or breeding mice with normal phenotypes. False negative results hide rare mutations. Thus, errors in testing must be kept to an absolute minimum. Like sensitivity, reliability often comes at the price of repeated or prolonged testing, but this is not an option for rapid screening.

4. Each test must be independent of other tests. Because it is most efficient for the same animal to be tested more than once, it is essential that tests with one taste compound do not influence the response to other compounds. More generally, the lack of invasive treatments or permanent effects of each test on behavior is particularly important for studies of mutagenized mice because they may be used to investigate a number of phenotypes in addition to taste perception.

5. Each test must be transferable to other laboratories. A test must be sufficiently robust that it produces the same results with minor variations in conditions, such as different cage sizes, diet, lighting or temperature. If not, it is critical that these conditions are specified and controlled.

Methods currently available to assess taste perception in rodents

How do the methods that are currently available to assess taste perception in rodents stack up against these criteria? Below is a description of current methods, together with their advantages and disadvantages from the perspective of screening huge numbers of animals. All the methods have problems. Most of them have so many disadvantages they cannot be adapted to screen large numbers of mice, and we do not intend to pursue them further here. However, we believe the first two have advantages that outweigh the disadvantages, and they are thus worthy of pursuit.

Long-term, two-bottle choice test. The two-bottle choice test (a.k.a. two-bottle preference test or two-tube choice test) has been the standard, workhorse method of assessing taste solution acceptance in mammals since the work of Richter in the 1930s(e.g.,6,22,88,90,99,114,130). In its simplest form, the animal is presented with two drinking tubes, one containing water and the other a taste solution. It is common to conduct sequential 48-h tests involving a range of ascending concentrations of the taste solution being examined. Because rodents can have pronounced side preferences, the position of the bottles is switched every 24 h. Variations on the basic theme involve tests of shorter or longer duration, tests in which both choices are taste solutions, and tests with three or more choices (often called "cafeteria" experiments). Among the advantages of this method are that the procedure is very simple and low-tech, and that many mice can be tested simultaneously (we have tested >250 at once). There is also a large body of existing evidence, including a growing literature with mice as subjects, to draw from. The measure of preference (intake of solution/total intake) is, within limits, independent of performance and body size (see also Table 1). There are three main disadvantages of this method. First, although daily measurements can be made very quickly, testing a series of compounds takes substantial time (weeks to months) because each test requires a minimum of 48 h. Second, there is no attempt to confine the taste solution to the oral cavity, so intakes and preferences reflect postingestive events as well as chemosensory ones. Third, most likely because of postingestive factors, long-term two-bottle tests are not always independent: there are strong carry-over effects to contend with, although these are usually ignored (see Section C.2 and Experiment 1f, below).
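To make the preference measure explicit, the following sketch (Python; an illustration only, with an assumed data layout and hypothetical intakes rather than actual results) computes the preference score, intake of solution/total intake, across a 48-h test in which bottle positions are switched after 24 h.

# Minimal sketch of scoring a 48-h two-bottle test (assumed data layout, not the
# proposal's actual records): each entry holds the 24-h intakes (g) of the taste
# solution and of water, with bottle positions switched between days to control
# for side preferences.

def preference_ratio(daily_intakes):
    """Return preference score = solution intake / total intake over the test.

    daily_intakes: list of (solution_g, water_g) tuples, one per 24-h period.
    A score of 0.5 indicates indifference; >0.5 preference; <0.5 avoidance.
    """
    solution = sum(s for s, w in daily_intakes)
    water = sum(w for s, w in daily_intakes)
    total = solution + water
    if total == 0:
        raise ValueError("no fluid intake recorded")
    return solution / total

# Hypothetical example: a mouse drinking 3.1 g and 2.7 g of a sweet solution on
# days 1 and 2 (bottle positions switched), versus 0.9 g and 1.1 g of water.
if __name__ == "__main__":
    score = preference_ratio([(3.1, 0.9), (2.7, 1.1)])
    print(f"preference = {score:.2f}")  # ~0.74, i.e., the solution is preferred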

A few studies, particularly in the field of alcohol research, have used long-term tests in which the animal consumes all its fluid from a single bottle. There is no water available. However, this has all the disadvantages of the long-term, two-bottle test with none of the advantages. The only justification for conducting this form of test is to force the animals to drink a non-preferred solution. In this case, the animal's dislike for the solution is pitted against its thirst. Because thirst is determined by both the amount and osmotic load of the ingested solution, interpretation is generally impossible. Although the long-term one-bottle test can be a useful treatment to induce alcohol dependence or hypertension (with NaCl as the drinking solution), we do not consider it a viable measure of taste solution acceptance.

Brief-exposure test using a lickometer. The major problem with the long-term preference test is that it does not provide a "pure" measure of taste; the oral and postingestive effects of the taste solution are confounded. One method that has been used successfully to characterize oral effects without postingestive ones is to conduct a short test. This requires the assumption that postingestive effects are not manifest immediately. Early investigators used 15- or 30-min tests, but it is clear that this allows plenty of time for the expression of some postingestive events (e.g., osmotic inhibition of intake). Based on studies primarily in rats, a general consensus developed that tests must be 2-3 min or less in order to minimize postingestive factors (e.g.,29,101,124,125). As the tests became shorter, several problems emerged. One is that it is difficult to make animals drink during short tests without first depriving them of water. Thirsty animals tend to "guzzle" the first solution they come across rather than select among choices. Almost universally, the solution to this problem has been to conduct one-bottle tests (see67 for an exception). Short-term tests are not long enough for thirst to develop, so access to water (the second choice) is unnecessary. The second problem is that volume intakes during short-term tests are very small, particularly in small species like the mouse. It becomes impractical to measure volumes of solution ingested under ~1 ml because spillage can contribute more than this. The solution has been to adopt devices that record individual licks: lickometers.
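As an illustration of how lick counts substitute for volume intakes in such tests, the sketch below (Python; the timestamps are hypothetical, and the 2-min window is an assumption based on the 2-3 min consensus noted above) tallies the licks recorded during a brief, one-bottle exposure.

# Illustrative sketch (not the proposal's software): summarize a brief-exposure,
# one-bottle test by counting licks recorded during the test window, using lick
# counts as a proxy for intake because volumes are too small to measure reliably
# in mice.

def licks_in_window(lick_times_s, window_s=120.0):
    """Count licks whose timestamps (seconds, relative to the session) fall
    within the test window, measured from the first spout contact."""
    if not lick_times_s:
        return 0
    start = lick_times_s[0]
    return sum(1 for t in lick_times_s if t - start <= window_s)

# Example with hypothetical timestamps (seconds):
if __name__ == "__main__":
    times = [0.0, 0.11, 0.22, 0.34, 5.0, 5.1, 119.9, 130.0]
    print(licks_in_window(times))  # 7 licks fall within the 2-min window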

There are several types of lickometer, differing in the method they use to detect licks. The earliest models embedded a phonograph needle in the drinking spout; when a rat drank from the spout, the vibrations were picked up by the needle, amplified, and fed into a chart recorder. By far the most common method now in use involves passing a current too small for the animal to detect (<1 µA) through the drinking spout. When the animal drinks, it completes a circuit from the spout to the grounded cage floor, and this change in conductivity is amplified, shaped, and recorded by a computer. Another method involves incorporating a miniature strain gauge in the drinking spout so that the pressure of each lick can be recorded. There are also methods in which the movement of the tongue is detected when it crosses a light beam mounted in front of the drinking spout. Weijnen has published several excellent reviews that outline the advantages and disadvantages of each type of lickometer122-125.
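For completeness, the sketch below (Python) illustrates one way raw contact events from an electrical lickometer might be screened before licks are counted. The data format and the 50-ms debounce threshold are assumptions chosen for illustration, not values taken from the reviews cited above.

# Minimal sketch of cleaning raw contact-lickometer events (assumed format: a
# time-ordered list of contact times in seconds). Contact lickometers can
# register spurious double contacts; a common remedy is to reject events that
# follow the previous accepted lick by less than a minimum interlick interval.
# The threshold below is an illustrative assumption (mice lick at roughly
# 8-10 Hz, so genuine interlick intervals are on the order of 100 ms or more).

MIN_INTERLICK_INTERVAL_S = 0.050  # assumed debounce threshold

def clean_licks(contact_times_s, min_ili=MIN_INTERLICK_INTERVAL_S):
    """Return the contact times accepted as genuine licks."""
    accepted = []
    for t in contact_times_s:
        if not accepted or (t - accepted[-1]) >= min_ili:
            accepted.append(t)
    return accepted

# Example: the 0.101-s event is a likely double contact and is discarded.
if __name__ == "__main__":
    raw = [0.000, 0.100, 0.101, 0.205, 0.310]
    print(clean_licks(raw))  # [0.0, 0.1, 0.205, 0.31]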