Science of Learning: History

A workshop held at NSF, 4-5 October 2012

Submitted by the Steering Committee: David Lightfoot (PI), Ralph Etienne-Cummings, Morton Gernsbacher, Eric Hamilton, Barbara Landau, Elissa Newport and David Poeppel


1. Introduction

For a long time NSF has supported work on learning, through regular programs in SBE, CISE and EHR and also through large special initiatives like the Learning and Intelligent Systems component of the two-year Foundation-wide program in Knowledge and Distributed Intelligence (KDI) in the late 1990’s. In its support, NSF has responded to ground-breaking shifts in our understanding of learning as thinking moved beyond B. F. Skinner’s long-dominant behaviorist paradigm, from a single associationist approach toward an appreciation of the complexity – and potential multiplicity – of learning mechanisms. As one example, the 1973 Nobel Prize in Physiology or Medicine (to Konrad Lorenz, Niko Tinbergen and Karl von Frisch) marked the discovery and description of what were originally called “innate releasing mechanisms” in ethology: an external triggering stimulus releases a developmental program that allows the organism to learn highly specific actions or representations; such learning requires a well-articulated, genetically specified scaffold that is triggered by input. Appendix 1 provides a brief bibliography of some of the influential work that brought about these shifts in our understanding of learning processes.

NSF has construed learning broadly, dealing with the cognitive and neural basis of human learning, learning in other animals and computer models of learning. In 2003 it established the Science of Learning Centers (SLC) program. The goal was to stimulate and integrate research in the science of learning, dealing with the cognitive and neural bases of learning (as distinct from the more education-driven “learning sciences”); to connect the research to scientific, technological, educational and workforce challenges; and to enable research communities to capitalize on new opportunities and discoveries. The thinking was that the complexity of these goals required expertise from various disciplines and integrative research agendas that were beyond the capabilities of individual investigators or small groups. The longer durations of funding and the stable environments of centers would provide incentives for committed, long-term interactions among researchers to reconceptualize their thinking beyond the paradigms of traditional disciplines. The first solicitation is at SLC Solicitation, and the six centers are listed in Appendix 2.

The SLC Program has represented a major investment in the human sciences broadly and in the multidisciplinary science of learning, involving several of the NSF directorates. As the centers begin to phase down toward the expiration of NSF support after ten years, the time has come to think about the future of the science of learning. To this end, two two-day workshops are being held under the title The Science of Learning: History and Prospects. This report covers the first workshop, held at NSF on 4-5 October 2012, which dealt with what has been achieved over recent decades in the science of learning, particularly in the last ten years. The second workshop, to be held at NSF on 28 February and 1 March 2013, will consider the opportunities and threats of the next ten years and will serve as a forum to brainstorm how work on the science of learning might be supported and funded over the coming decade, outlining strategies and objectives.

2. Organization of the workshops

A Steering Committee of leading figures in work on learning is guiding the organization of both workshops and is writing the reports: Ralph Etienne-Cummings (Johns Hopkins), Eric Hamilton (Pepperdine), Elissa Newport (Rochester and now Georgetown), David Poeppel (NYU and recent member of the SBE Advisory Committee), and current members of the SBE AC, Morton Gernsbacher (Wisconsin) and Barbara Landau (Johns Hopkins).

For the first workshop, six speakers were invited to address topics in the science of learning: Michael Stryker from UC San Francisco on neural plasticity, Nitin Gogtay from NIMH on abnormal and normal brain development, Ranu Jung from Florida International on motor control learning linked to rehabilitation, Sharon Goldwater from Edinburgh University on computational modeling and large-scale data-mining, Linda Smith from Indiana University on cognitive development, and David Andrews from Johns Hopkins on learning and education. Soo-Siang Lim from NSF was invited to discuss infrastructure developed by the SLC Program through the six centers, and six representatives from the centers were asked to speak about achievements and challenges in the focal area of their center: Nora Newcombe from SILC on spatial learning, Pat Kuhl from LIFE on social foundations of learning, Ken Koedinger from PSLC on computational models and robust learning, Barbara Shinn-Cunningham from CELEST on brain-inspired technologies, Gary Cottrell from TDLC on timing elements in learning, and Laura-Ann Petitto from VL2 on visual learning and signed languages. All speakers were invited to identify two signal achievements and two challenges in the areas they were addressing.

Presenters all sent in one-pagers in advance of the meeting, listing their main points and providing links to publications (Appendix 5). There was extensive discussion: ten minutes after each presentation, half an hour at the end of the first day, and then structured discussion for the whole of the morning on the second day. The list of participants is in Appendix 3 and the program in Appendix 4 (with links to the one-pagers).

3. The science of learning

The presentations from invited speakers and from representatives of the existing Science of Learning Centers covered broad territory, raising the question of what we mean by “learning” and what has been discovered about its processes and mechanisms. One relatively broad idea, provided by Michael Stryker's contribution, is that learning consists of some ‘reasonably specific set of changes in neural connections corresponding to the thing learned.’ It is notable that this idea does not constrain learning to changes that depend on experience per se. For example, formation of structure in the developing visual system occurs as a consequence of both spontaneous neural activity and exposure to structured patterns of information available to the organism from the environment. A slightly narrower idea is that learning encompasses experience-dependent change. Even here, the range of changes that are consequent upon experience, the kinds of experience that create change, and the timetable on which these changes can occur constitute vast territory. As a consequence, the mechanisms underlying learning are likely to be quite varied. Consider, for example, the infant who learns to reach and grasp objects; the toddler who learns to talk and understand; the child who learns to count or to read; the adolescent who learns to drive; the adult who learns to re-use his or her limbs after stroke. Moving into the realm of machine learning, consider the machine that learns to translate an unknown language, learns to diagnose a tumor type on the basis of brain images, or learns to play Jeopardy and compete with human experts.

The vast territory that comprises human learning can be organized to some degree by considering evolutionary foundations, the specific domains of learning, and likely mechanisms underlying learning in a given domain. Evolutionary foundations suggest that some aspects of human learning are likely to be continuous with other species (e.g. development of visual-motor coordination, tool use, number, navigation), while others are likely to be distinct from those of other species (e.g. human language, formal use of symbol systems). Still others will likely be hybrids, in which some foundational aspects of the system are shared by many species while other accomplishments require formal tutoring available only to humans. Number constitutes a good example: while fundamental aspects of numerical sensitivity are shared by other species, only humans master algebra (Dehaene 1997).

Domain-specific structures vary considerably, suggesting that some domains may engage distinct learning mechanisms. For example, navigation in all species requires that the organism keep track of its current location as it moves through space; for many species, this depends at least partly on dead reckoning, the continuous updating of position from the animal's own movement (Gallistel 1990), which in turn supports the ability to form a map of the environment. The distinctly different case of language acquisition has been subject to intense controversy, with solid evidence now showing that aspects of the learning problem may depend on quite general statistical learning mechanisms (e.g. parsing the speech stream, Saffran, Aslin & Newport 1996) but that other aspects of learning in syntax and semantics are still unexplained by such general mechanisms. It has been discovered (i) that language acquirers can entertain multiple representations of a syntactic string and (ii) that the representations entertained sometimes go against the statistics of the input: that is, learners entertain highly constrained options that are only in part driven by properties of the input. In addition, learning mechanisms may vary depending on the knowledge domain and, therefore, on the computational problem to be solved. Learning mechanisms have also been categorized by scientists at a more macro level, into those that appear to require explicit (conscious) learning (as in learning a list of new word pairs by reading them out loud) and those that involve implicit (unconscious) learning (as in learning the properties of "outdoor vs. indoor scenes" by passively observing many exemplars and constructing summaries of their statistical structure).
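
To make the notion of a domain-general statistical learning mechanism concrete, the sketch below segments a continuous stream of syllables by positing word boundaries wherever the transitional probability between adjacent syllables drops, in the spirit of the segmentation findings of Saffran, Aslin & Newport (1996). It is a minimal illustrative sketch in Python, not code from any cited study; the three-word toy vocabulary and the 0.75 boundary threshold are assumptions chosen for the example.

    # Minimal illustrative sketch (not from any cited study): segment a continuous
    # syllable stream by transitional probabilities (TPs). The toy vocabulary and
    # the 0.75 boundary threshold are assumptions chosen for this example.
    from collections import Counter

    def transitional_probabilities(syllables):
        """Estimate P(next syllable | current syllable) from bigram counts."""
        pair_counts = Counter(zip(syllables, syllables[1:]))
        first_counts = Counter(syllables[:-1])
        return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

    def segment(syllables, threshold=0.75):
        """Posit a word boundary wherever the TP between adjacent syllables falls below threshold."""
        tp = transitional_probabilities(syllables)
        words, current = [], [syllables[0]]
        for a, b in zip(syllables, syllables[1:]):
            if tp[(a, b)] < threshold:  # low TP suggests a word boundary
                words.append("".join(current))
                current = []
            current.append(b)
        words.append("".join(current))
        return words

    # A toy "language" of three words (bidaku, padoti, golabu) concatenated with no
    # pauses: within-word TPs are 1.0, TPs across word boundaries are 0.5.
    stream = ("bi da ku pa do ti go la bu pa do ti bi da ku go la bu "
              "bi da ku go la bu pa do ti bi da ku pa do ti go la bu bi da ku").split()
    print(segment(stream))  # ['bidaku', 'padoti', 'golabu', 'padoti', 'bidaku', ...]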

These basic organizational cuts are surely inadequate to capture the full richness of learning. Moreover, they may leave aside many kinds of change that – although they might not be part of the natural kind "learning" – will likely shed light on the breadth of changes that any science of learning will want to capture. These include such cases as the changes to the visual system underlying the development of binocular vision; changes to the developing brain that occur as information is recruited, manipulated and stored; changes to memory during the life-span and in the diseased brain; and changes that occur during rehabilitation after brain injury. The vast territory of learning requires not just a single science of learning, but, more likely, multiple sciences of learning.

4. Some history

The last heyday of learning theory was during the 1940’s and 1950’s, when the study of learning was dominated by associationist theories that proposed a few general principles intended to explain all types of learning, across domains and species. That optimistic view waned with two striking findings. First, the seminal work of John Garcia showed that, even in rodents and birds, simple principles of conditioning were overridden by species-specific biases and innate constraints on what could be readily learned. Second, Noam Chomsky profoundly altered our understanding of cognition, suggesting that there are abstract universal principles of human language and arguing that language learning (and other types of learning) is made possible by constraints on the types of patterns that can be learned and processed. Together these lines of work, and others that followed in fields from psychology to computer science, have suggested that learning systems operate successfully by being quick to acquire certain types of information – and correspondingly slow to acquire, or entirely unable to acquire, other types.

Surprisingly, for a few decades after these claims appeared, the study of learning continued within linguistics and computer science but, without a search for general principles, languished within psychology. Departments of psychology that had always offered courses on ‘learning’ and had programs of graduate study focused on animal learning ceased to offer these specialties. But in more recent years, several important findings have revitalized interest in the study of learning, which is now one of the cutting-edge fields within cognitive science. First, while the Chomskyan analysis of specialized learning modules has become richer and deeper, challenges have come from the study of neural networks, and the controversies surrounding this work, from both supporters and critics, have helped to put the study of learning back at the center of the cognitive and neurosciences. Second, discoveries within neuroscience of some of the cellular-molecular and systems-level underpinnings of learning – from LTP and NMDA receptors to studies of the hippocampus and other memory systems – have begun to shed light on the mechanisms by which experience alters the brain. Third, the field of infancy has provided remarkable findings of very early human cognitive capacities and also very early capacities for learning, including even prenatal learning. Fourth, the field of machine learning has undergone revitalization, providing a wealth of computational models for how human (and non-human) learning might in principle work.

Among the many important discoveries of recent years are the following:

  • Developmental and adult plasticity: We have learned not only that the brain is particularly plastic and susceptible to environmental influences early in life, but also that it remains, to some degree, plastic even in adulthood. Adult plasticity is reduced compared with early development, but some of the same mechanisms for plastic change are still present in the adult brain – and new findings even show how to re-open critical periods for plastic change in mature organisms.
  • Cross-species comparisons of learning and the evolution of learning mechanisms: We have also learned that many mechanisms of learning are shared across species, and we have begun to understand the arenas in which learning differs or has evolved differently across species and domains. An excellent summary of work on non-human animals is Gallistel et al.’s 1991 landmark review. Evidence now abounds that, for many important learning problems, most species begin life equipped with structures and mechanisms that guide learning. Examples include the barn owl, equipped with a specialized learning mechanism that calibrates its sound localization circuitry as it grows; migratory songbirds that are capable of representing the spatial arrangement of the stars in order to direct their initial flights; and ducks that compute the relative distribution of foods so that they can select the optimal location for foraging.
  • Mechanisms of learning, integrating from cells to behavior: In several important systems in animals – particularly in the sensory and motor systems – there has been remarkable progress in understanding both the effects of experience and the cellular-molecular changes that mediate them in shaping neural circuits.
  • From early seminal work in vision, we know two important principles. The early findings of Hubel & Wiesel (1962) show that early visual input to the two eyes in cats can permanently alter the size of the neural regions devoted to each eye, and also their relative dominance in binocular vision (a critical period effect of input on neural circuits). We also know that the broader organization of visual cortex, as well as other sensory and motor cortices, is fundamentally “topographic,” with a consistent mapping of the receptor surface (e.g. from left to right in the visual system, from low to high pitch in the auditory system) onto the corresponding layout of the primary cortical areas.
  • From the work of Knudsen (1999, 2004), Carr & Konishi (1988) and others on barn owls, we have learned how early auditory experience can alter sound localization; the mechanisms by which sound localization is mediated, through cleverly evolved simple neural circuits (delay lines); and the rich ways in which these mechanisms can and cannot be altered throughout life, by experience with flight and localization of prey.
  • We have learned, from the work of Merzenich et al. (1983), about reorganization of somatosensory cortex in primates that can occur with experience using the hands, even in adulthood.
  • While these matters are much more difficult to investigate in humans – and links between cellular circuitry and behavior are at present out of reach – the study of language is one prominent arena in human cognitive science in which critical periods and plasticity early versus late in life have been the subject of important and sophisticated investigation.
  • Types of learning and memory: As interest in fundamental principles of learning has been revived in basic behavioral research, an increasingly diverse set of types of learning has been explored. Some cognitive scientists distinguish procedural and declarative learning, the learning of procedures (such as how to ride a bicycle or compute a square root) versus the learning of information (such as the capital of Brazil or the color fuchsia). Other scientists distinguish between short-term learning (including the maintenance of knowledge in so-called working memory) and longer-term learning. Engle et al. (1999) demonstrated a strong correlation between the ability to quickly store and accurately retrieve recently learned information and standardized assessments of fluid intelligence. Still other scientists distinguish between implicit learning, obtained without conscious awareness or notable effort, and explicit learning, which requires effortful encoding and rehearsal.

There are many different types of learning. Domains of knowledge such as language, space, number and, likely, social interaction are well-structured but quite different from each other, and learning in each domain depends on having some initial structural biases. The biases for each domain are qualitatively different, and there is no necessary reason for the mechanisms underlying our ability to produce and understand complex sentences to be identical to the mechanisms underlying our ability to navigate through space or to decide whether a conspecific should be trusted. Of course, the question of whether domain-general mechanisms also play an important role in learning in every domain, and how these mechanisms interface with domain-specific mechanisms, is still very much under debate.