University of Arizona

LING 495A/595A – Linguistics Department Colloquium

Spring 2016

Chair: Massimo Piattelli-Palmarini ()

TA: Zechy Wong ()
Schedule: Fridays, 3:00pm – 4:30pm in Communication 311

Program

Friday, January 15

Reconsidering canonical forms and the benefits of “clear” speech

Meghan Sumner (Department of Linguistics, Stanford University)

Labels are powerful. A label for an object or an idea induces its representation in our minds to become more categorical and less idiosyncratic than it might be in reality (Clark, 1998). Labels are all around us. They are the names we give to objects, the stereotypes we use to describe people and behaviors, and using them reinforces the abstract concept, overriding the specifics of our experience (Lupyan, 2007). Some labels we use quite frequently in phonetics, speech perception, and spoken word recognition are “clear speech”, “reduced speech”, “noisy speech” and “canonical forms”. In this talk, I give a brief overview of these labels, and how they have influenced our thought and our aims to understand speech processing, as a field.

This overview leads to the central question of some current research that I will discuss: Given that the bulk of the research in the field shows that clear speech is understood more quickly and accurately than casual speech, and that “mismatches” between signal and canonical forms wreak havoc on the perceptual system, how is it that we understand one another so well when nearly all of what we hear and say falls under the “reduced speech” umbrella?

To address this question, I present a variety of studies investigating the recognition of clear and reduced words, the processing of clear and reduced sentences, and memory for clear and reduced words. The data paint a story that is intuitive, given what we say and hear, but counterintuitive given our strong sense of the benefits of “talking clearly”. The project as a whole suggests that we move away from the notion of canonical forms, and replace terms like “reduced” and “noisy” with “informative” and “typical”.

Friday, January 22

Undergraduate Meet-and-Greet

Prof. Diane Ohala

Friday, January 29

A Theory of Quantifier Context Dependence

Kristen Greer (Department of Linguistics, UCLA)

Quantifiers appearing in the DP are widely acknowledged to have meanings that are at least partially determined by the context in which they are uttered. Three phenomena in particular have been cited and treated as cases of quantifier context dependence: (a) the domain restriction, (b) the ambiguity of certain quantifiers between proportional, reverse, focus-affected, and cardinal interpretations, and (c) expectation readings. These are roughly illustrated in (1)-(3), respectively.

(1) Every student failed the assignment.

a. ≠ Every student in the universe failed the assignment.

b. = Every student in the class failed the assignment.

(2) Many boys came to the party.

a. = Many of the boys came to the party. (Proportional)

b. = Many of the people who came to the party were boys. (Reverse)

(3) a. Many lawyers attended the meeting this year.

b. Many doctors attended the meeting this year.

(Where (3a) ≠ (3b) even if [[doctors]] = [[lawyers]], presumably because an expectation is exceeded in one case but not the other)

Existing analyses account for these phenomena using a variety of formal tools, alternately appealing to multiple linguistic forms (ambiguity) or to variable positions in the semantic structure that are resolved to contents within a single model (extensional vagueness) or within multiple models (intensional vagueness).

I develop a unified approach to quantifier context dependence, showing that the phenomena illustrated in (1)-(3) can all be handled as cases of extensional vagueness, wherein context (construed as a set of sets in a single model) provides content for variables in semantic structure. I argue that there are two such contextual variables, one representing the domain of quantification and one the restriction on this domain, and I develop a theory of the syntactico-semantic structure of quantifiers that incorporates these variables. I then show how this structure predicts their context-dependent behaviors in a fully general, extensional way. The analysis has striking implications for our understanding of natural language quantification, suggesting that there are two fundamental operators forming generalized quantifiers (GQs) in natural language DPs: one that forms supersets over its restrictor, and one that forms subsets over its restrictor.
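
As a rough illustration of the kind of extensional mechanism at issue, the sketch below relativizes a standard determiner denotation to a single contextually supplied set. This is the familiar domain-variable treatment, given here only for orientation; it is not Greer's actual two-variable syntactico-semantic structure.

```latex
% Illustrative sketch only (not Greer's proposal): a determiner with a
% free contextual variable C, resolved to a set within a single model.
\[
  \textit{every}_C \;=\; \lambda P\,\lambda Q.\; (P \cap C) \subseteq Q
\]
% With C resolved to the set of students in the class, (1) is true iff
% every student in the class (not every student in the universe) failed.
```
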

Friday, February 5

An Act Apart: Processing NAI Content

Lyn Frazier (Professor, Linguistics, University of Massachusetts Amherst)

Potts (2005) investigated parentheticals, appositives, expressives and honorifics and argued that they form a semantic natural class in that these "Not at Issue" (NAI) expressions do not contribute to the truth-conditions of the embedding utterance, they convey speaker commitments, and their semantic interpretation does not interact with "At Issue" (AI) content. To explain these properties, he offered a multi-dimensional semantics. See Schlenker (2010) for an alternative semantic account without multi-dimensionality, and for pragmatic accounts, see Harris and Potts (2009) and Amaral et al. (2007).

Based on research conducted with Brian Dillon and Chuck Clifton, I will argue that NAI expressions are complete but dependent speech acts, and presumably as a result, they may be represented in a separate memory store from AI content. Evidence derives from differential effects of lengthening NAI vs AI content (Dillon et al.,
2014). Additional evidence comes from distinct interpretations for comparable material when it is expressed as AI vs NAI content, and evidence that expressives exhibit a similar interpretation (the same range of interpretations and sensitivity to the same interpretive principles) when the expressive stands as an utterance by itself, where it must be analyzed as a speech act, and when it appears as an attributive adjective (Frazier et al., 2014). Online evidence shows an interaction of a wh-dependency in the embedding sentence with the status (AI vs. NAI) of an embedded structure containing a wh-dependency (Dillon et al., submitted), as expected if content in the same memory store may interact, but content in different stores may not.

The resulting view is one where prosody, syntax, semantics, pragmatics and general cognitive principles all play a role in explaining the properties and processing of NAI content.

References

Amaral, Patricia, Craige Roberts & E. Allyn Smith. (2007). Review of The Logic of Conventional Implicatures by Chris Potts. Linguistics and Philosophy 30(6): 707–749.

Dillon, B., Clifton, Jr., C., & Frazier, L. (2014). Pushed aside: Parentheticals, Memory & Processing. Language, Cognition and Neuroscience 29(4), 483-498.

Dillon, B., Clifton, Jr., C., Sloggett, S., and Frazier, L. (Submitted) Not all relative clauses interfere with filler-gap processing equally: Appositive relative clauses and the organization of linguistic working memory.

Frazier, L., Dillon, B., & Clifton, Jr., C. (2014). A note on interpreting damn expressives: transferring the blame. Language and Cognition 29, 1-14.

Harris, J., & Potts, C. (2009). Perspective-shifting with appositives and expressives. Linguistics and Philosophy 32, 523-552.

Potts, C. (2005). The logic of conventional implicatures. Oxford: Oxford University Press.

Schlenker, P. (2010). Supplements within a unidimensional semantics II: Epistemic status and projection. Proceedings of Northeastern Linguistic Society 2009, GLSA.

Friday, February 12

Minimalism: A view after 20 years

Norbert Hornstein (Professor, Linguistics, University of Maryland / College Park)

This talk will try to assess how far we have come in realizing the goals of the Minimalist Program. Of course, doing this requires outlining what these goals were, and here I will provide a somewhat idiosyncratic (though, of course, deeply faithful) rendition of what the program was. In my opinion, the project had two parts:

  1. A reductive/unificatory aspect in which the GB-like modularity of FL was shown to be merely apparent, i.e. that the seven or so distinct grammatical modules with their own distinctive operations and locality domains were just surface aspects of the same operations and principles.
  2. An analytical aspect which reduces all the unified dependencies to a very small number (1?) of linguistically novel cognitive operations.

I argue that we have come a pretty long way in seeing what (1) entails and that the evidence that it is correct is non-negligible. Thus, I think that there is reason to believe that phrase building, movement, binding, control, agreement, case, etc. are all just aspects of the same basic combinatoric machinery. There are problems here, but the outlines of a plausible theory are visible.

I then investigate whether Merge is the secret sauce that lies behind this unification. I suggest (as I did in the 2009 book) that we should decompose Merge into two more general operations, one of which corresponds roughly to labeling and the other to set union, and that this is what lies behind the kinds of chains we in fact see. I show that, were this right, the kinds of restrictions on Merge that we find would follow. In particular, were Union the basic combination operation, we would expect phrase markers to be sets subject to Extension, Inclusiveness and to contain copies.
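
Schematically, and purely as an illustration of the decomposition just described (not Hornstein's own definitions), the idea can be rendered as follows:

```latex
% Illustrative rendering only (not Hornstein's exact formulation):
% Merge factored into set union plus a labeling step.
\[
  \mathrm{Merge}(\alpha,\beta) \;=\; \mathrm{Label}(\alpha \cup \beta)
\]
% If the basic combinatoric step is set union, phrase markers are sets,
% so Extension and Inclusiveness follow, and re-merging \alpha yields a
% structure containing \alpha twice, i.e., copies.
```
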

If this is right, my conclusion is that Minimalism is alive and well and making progress. I think that this might come as a surprising conclusion to many.

Friday, February 19

Beyond Decomposition: How experiments might help shape morphological theory

Alec Marantz (Silver Professor of Linguistics and Psychology, Departments of Linguistics and Psychology, NYU)

Over the last decade, neuro- and psycholinguistic evidence has accumulated supporting the hypothesis that words are decomposed down to their roots in comprehension, during both visual and auditory presentation, lending credence to linguistic theories such as Distributed Morphology, which insist on such decomposition in the analysis of word structure. However, this work has also dissolved the putative distinction central to Pinker’s Words and Rules framework between the “memorized” and the “constructed” – usage frequencies are relevant to the processing of all words and phrases, no matter how transparent or regular. So, for example, the transition probability between a stem and a suffix of a morphologically complex word modulates (obligatory) decomposition independent of regularity. The lack of correlation between memorization and regularity allows us to recast Pinker’s Words and Rules approach to integrating linguistics with cognitive neuroscience as an “Atoms and Rules” approach, emphasizing the distinction between the ontology of linguistic pieces (morphemes) and the generalizations about their order and arrangement. I will discuss how some recent findings from NYU’s Neuroscience of Language Lab might feed back into the development of morphological theory, given the Atoms and Rules approach and the observation that there is no escape from frequencies even for the most regular of rules.
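
As a toy illustration of the frequency measure mentioned above: one common way to estimate a stem-to-suffix transition probability is whole-form frequency divided by stem frequency. The counts and items below are invented for illustration; this is not the lab's actual data or pipeline.

```python
# Illustrative sketch only: estimating stem-to-suffix transition probability
# as whole-form frequency over stem frequency, with invented corpus counts.

# hypothetical token counts
stem_freq = {"teach": 5000, "sing": 8000}            # all forms sharing the stem
form_freq = {("teach", "er"): 1500, ("sing", "er"): 400}

def transition_probability(stem: str, suffix: str) -> float:
    """P(suffix | stem): how strongly the stem predicts this suffix."""
    return form_freq[(stem, suffix)] / stem_freq[stem]

for stem, suffix in form_freq:
    print(f"{stem}+{suffix}: {transition_probability(stem, suffix):.2f}")
# On the view described above, higher values would modulate how strongly the
# parser commits to decomposition, independent of the form's regularity.
```
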

Friday, February 26

Inferential Organization in Grammar Systems: Word Structure, Paradigm Organization and Learnability

Farrell Ackerman (Professor, Linguistics; Director, Human Development Program, UC San Diego); work in collaboration with Rob Malouf (SDSU)

Speakers of languages with complex morphology and multiple inflection classes confront a large learning task whose solution raises fundamental questions about the structure of words and the organization of morphological systems. This task receives a general formulation as the Paradigm Cell Filling Problem (PCFP) in Ackerman et al. (2009):

PARADIGM CELL FILLING PROBLEM: Given exposure to an inflected word form of a novel lexeme, what licenses reliable inferences about the other word forms in its inflectional family?

The essential challenge, as formulated in the PCFP, is not new, and proposed answers to it have a similar profile (Paul 1891, Hockett 1967, Paunonen 1976, Bybee 1985, Anttila 1989, Wurzel 1989, see also Fertig 2013): analogical inferences from (incomplete sets of) forms belonging to known inflectional patterns permit reasonable guesses concerning likely candidates for unknown forms. Descriptive observations about the implicational organization of morphological systems have been reconceptualized and quantified in renascent word-based approaches to morphological analysis (see the detailed overview and description in Blevins 2016), where two interdependent explanatory dimensions of part/whole relations are developed: the internal structure of words, interpreted in terms of discriminability among related words, and the external relations among words as reflected in paradigm organization, where words are parts of (complex) paradigms which are interpretable as (adaptive discriminative) systems of patterns. These two dimensions are evident in the Nilo-Saharan language Fur (Waag 2010), where the combinatorics of affixes and stem variation reflected in segment length, tonal melodies and metathesis distinguish related words and provide available inferences for predicting the forms of unencountered words. This is exemplified in the simple patterns for the 1st person singular completive versus the 3rd person singular completive for the verb ‘to speak’ in (1):

(1) a. ʔ-ɪ́rsɪ́ŋɔ
       1sg-spoke
       ‘I spoke’
    b. rɪ̀sɪ̀ŋɔ̀
       spoke
       ‘s/he spoke’

The tonal melodies in (1a) and (1b) exhibit opposite values for their person contrasts: all highs on 1st singular versus all lows on 3rd singular. Additionally, 1st singular is associated with a prefix, i.e., ʔ, while its stem represents a metathetic variant of the 3rd singular stem form, i.e., ɪ́rX versus rɪ̀X. To know the form for 1sg completive is also to know the form of the 3sg completive, and vice versa. The mutual inferential relations in (1) are trivial, but they become more complex when the whole system of Fur morphosyntactic properties and encodings is considered.

The calculation of informativity concerning combinations of patterned ingredients and the meanings associated with them, as attested in individual languages such as Fur, has been the main object of Information-Theoretic measures in these new word-based formal models. The L(ow) C(onditional) E(ntropy) C(onjecture) (Ackerman and Malouf 2013) represents a cross-linguistic hypothesis concerning complex morphological systems: morphological systems seem to display organization in terms of low conditional entropies, reflecting high predictability between known words and their unknown variants. In effect, the LCEC is a way of solving the PCFP, providing learners with cues to facilitate good guesses about previously unencountered words. Given the huge variability in the cross-linguistic shapes of words and their patterns of relatedness, the LCEC, by hypothesis, reflects a strategy by which language change is guided by learnability considerations.
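
As a toy illustration of the kind of measure the LCEC invokes, the sketch below computes the conditional entropy between two paradigm cells over a handful of invented inflection classes. It is illustrative only; the classes, exponents and type frequencies are made up and this is not Ackerman and Malouf's data or code.

```python
# Toy sketch of a paradigm-cell conditional entropy, H(B | A):
# how uncertain cell B's exponent is once cell A's exponent is known.
from collections import Counter
from math import log2

# (cell A exponent, cell B exponent, type frequency of the class) -- invented
classes = [
    ("-a", "-i", 40),
    ("-a", "-i", 10),   # another class with the same implicative pattern
    ("-u", "-i", 30),
    ("-u", "-e", 20),
]

def conditional_entropy(pairs):
    """H(B | A) in bits, weighting each class by its type frequency."""
    total = sum(n for _, _, n in pairs)
    joint, marg_a = Counter(), Counter()
    for a, b, n in pairs:
        joint[(a, b)] += n
        marg_a[a] += n
    h = 0.0
    for (a, b), n in joint.items():
        p_ab = n / total            # P(A = a, B = b)
        p_b_given_a = n / marg_a[a] # P(B = b | A = a)
        h -= p_ab * log2(p_b_given_a)
    return h

print(f"H(B|A) = {conditional_entropy(classes):.3f} bits")
# A low value means knowing a lexeme's form in cell A leaves little uncertainty
# about its form in cell B -- the predictability the LCEC claims systems tend to show.
```
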

The learnability problem identified by descriptivists is put in new perspective by recent research revealing that the inflectional stimuli that learners experience are highly skewed and incomplete: following Zipfian distributions, small numbers of inflected words are heard frequently, providing partial paradigm information, while increasing the corpus size does not provide exposure to the “missing” words from the complete paradigm, but merely reinforces the distributions found in smaller samples (Bonami and Beniamine 2015, Ramscar and Blevins 2015). Yang (2010) utilizes these results to argue against the class of word and paradigm models. Following Bonami and Beniamine 2015, Ramscar and Blevins 2015 and Blevins 2016, we argue, on the contrary, that these results suggest the necessity for complex morphological systems to be organized in line with word-based models that are centrally concerned with quantifying conditional entropy in morphological organization: the learning paradox raised by Zipfian distributions of stimuli points to the necessity of something like the LCEC and, correlatively, of the type of word-based morphological model within which it operates.

Given this, we explore empirical data that confirm and/or challenge the LCEC, suggesting that both help to refine the possible scope of the conjecture and the nature of its extensions to data beyond those analyzed in Ackerman and Malouf 2006, Ackerman et al. 2009, Bonami and Luis 2013, Ackerman and Malouf 2013, Bonami 2014, Bonami and Beniamine 2015, Sims 2015, Stump and Finkel 2015, Blevins 2016, among others.

Friday, March 25

Bound subjects, phases and postponement of transfer

Howard Lasnik (Distinguished University Professor in the Department of Linguistics at the University of Maryland / College Park)

Family of Questions (sometimes called pair-list) readings (here abbreviated FoQ) with WHs and universal quantifiers most typically arise when the WH originates in the same clause as the universal:

(1) Who did everyone see? FoQ ✓

(2) Who do you think everyone saw? FoQ ✓

May (1985) presented an important analysis of some such cases, but as pointed out by Sloan (1991), it incorrectly predicts the possibility of such readings even when the universal and the WH-trace are not clause-mates:

(3) Who does everyone expect [Mary to see t]? FoQ *

(4) Who does everyone think [Mary saw t]? FoQ *

May (1977) had already noted the absence of family of questions readings in examples like (4). However, Sloan reported that May gave her examples with structures very similar to those of (3) and (4) that do allow the reading:

(5) Who does everyone_i expect [PRO_i to see t]? FoQ ✓

(6) Who does everyone_i think [he_i saw t]? FoQ ✓

It has long been known that clauses without overt subjects (especially infinitival clauses) do not act like full clauses with respect to a wide variety of phenomena. Postal (1974) introduced the notion ‘quasi-clause’ for some such cases and later Rizzi (1982) developed a comprehensive theory of ‘restructuring’. There is now a vast literature on this topic. Some version might cover (5). Only sporadically mentioned are situations where a bound pronominal subject makes a complement clause similarly permeable. I will explore some further constructions behaving in a similar way, especially WH-island exemption observed by Ross (1967), as illustrated in (7), and consider a possible explanation in terms of phases, based on joint work with Tom Grano.

(7) a. He told me about a book which I can't figure out [where PRO to obtain t]

b. Which books did he tell you [why he / *Mary wanted to read t]

Bibliography

Grano, Thomas and Howard Lasnik. 2015. How to neutralize a finite clause boundary: Phase theory and the grammar of bound pronouns. Ms. Indiana University and University of Maryland.

May, Robert. 1977. The grammar of quantification. Doctoral dissertation, MIT, Cambridge, Mass.

May, Robert. 1985. Logical Form: Its structure and derivation. Cambridge, Mass.: MIT Press.

Postal, Paul M. 1974. On raising: One rule of English grammar and its theoretical implications. Cambridge, Mass.: MIT Press.

Rizzi, Luigi. 1982. Issues in Italian syntax. Dordrecht: Foris.

Ross, John Robert. 1967. Constraints on variables in syntax. Doctoral dissertation, MIT, Cambridge, Mass. Published as Infinite syntax! Norwood, N.J.: Ablex (1986).

Sloan, Kelly. 1991. Quantifier-wh interaction. In MIT Working Papers in Linguistics 15, 219-237.

Friday, April 1

Grammaticalization and Repair as a Resolution to Labeling Algorithm Failures

Robert LaBarge (Department of Linguistics and Applied Linguistics, Arizona State University)

This talk is an extension of work originally presented at ALC 8 and 9. In the first of these talks, I argued that grammaticalization occurs as the result of labeling difficulties experienced by the child acquirer. In the second, I argued that derivations attempt to label and repair difficult structures in real-time. Here, I will combine the two arguments, showing that repair and grammaticalization are essentially the same strategy, but only the latter is available to the child acquirer. But why? This talk will focus on three phenomena: the first is a possible head-head Merged structure in Chinese which shows evidence of historic change from verb to modal. The second is a phrase-phrase Merged structure in Macedonian which shows evidence of historic change from demonstrative to definite article. In both cases, the question as to why new structures do not replace the old ones entirely will be addressed. The third case involves Paul Postal-type raising-to-object ECM constructions and asks why such constructions require a null subordinate C. In all cases, I argue for an exoskeletal-style scaffolding (a la Borer) that guides both grammaticalization and repair, but in different ways.

Friday, April 15

Undergraduate Research Forum

Prof. Diane Ohala

Friday, April 22

Change in aspect and argument structure

Elly van Gelderen (Regents’ Professor, English Department, Arizona State University)

By sketching some of the changes that affect argument structure throughout the history of English, I shed light on the universality of the aspectual division into manner and result, the major theta-roles that depend on this, and the special status of the Theme. For instance, I show that unaccusatives are reanalyzed as causatives or copulas, due to the persistence of the Theme, but not as unergatives, nor are unergatives reanalyzed as unaccusatives. Object experiencers are reanalyzed as subject experiencers but not the other way round. The reason for this is that verbs hang on to their basic aspectual classification and their Themes, and that the appearance of certain theta-roles is constrained by others.

Friday, April 29

Prosodic phonology of Nez Perce double reduplication

Kathryn Pruitt (English Department, Arizona State University)

This talk presents joint work with Amy Rose Deal (UC Berkeley) on the prosodic phonology of fully-reduplicated and doubly-reduplicated adjectives in Nez Perce. Full reduplication is associated with adjectives in Nez Perce, which may be derived from other categories as in (1) or from bound roots as in (2) (Aoki 1994). (Other morphological means of marking or deriving adjectives are available in Nez Perce but will not be discussed here.)