Defaults without Faults: Why Linguistic Theory Needs Default Inheritance
This presentation explores the role of default inheritance in linguistic theory: how it handles exceptions and irregularities, its history and working mechanism in Word Grammar, its application to word order and constructions, and its connections to default logic in AI, human reasoning, and prototype effects in cognitive psychology.
Presentation Transcript
Defaults without faults. Dick Hudson, Edinburgh CLRG, May 2019.
Plan
1. Why a linguistic theory needs default inheritance (DI).
2. A brief history of DI in WG.
3. How DI works.
4. The special problem of word order.
5. In defence of cognitive linguistics.
6. DI and constructions: WG and CxG.
7. DI and tokens: dependency and phrase structure.
1. Why a linguistic theory needs DI
Any generalisation may have exceptions: irregular verbs (e.g. went), exceptional word order (e.g. Have you finished?), semantics (e.g. fake diamond), etc. DI is widely accepted as the mechanism for handling exceptions in CxG: "... construction grammars share key features. Goldberg (1995, 2006), Croft and Cruse (2004), and Croft (2001) all exploit default inheritance ..." (Gisborne 2011, "Constructions, Word Grammar, and grammaticalization").
DI in logic and AI (Wikipedia)
"Default logic is a non-monotonic logic [invented in 1980] to formalize reasoning with default assumptions. Default logic can express facts like 'by default, something is true'; by contrast, standard logic can only express that something is true or that something is false. This is a problem because reasoning often involves facts that are true in the majority of cases but not always. A classical example is: 'birds typically fly'. This rule can be expressed in standard logic either by 'all birds fly', which is inconsistent with the fact that penguins do not fly, or by 'all birds that are not penguins and not ostriches and ... fly', which requires all exceptions to the rule to be specified. Default logic aims at formalizing inference rules like this one without explicitly mentioning all their exceptions."
The two standard-logic renderings, in symbols:
∀x (bird(x) → fly(x))
∀x (bird(x) ∧ ¬penguin(x) ∧ ... → fly(x))
[Diagram: a bird node labelled "flies", with a penguin node below it labelled "doesn't fly".]
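The same point can be made executable. Below is a minimal sketch in Python, assuming a toy taxonomy held in plain dictionaries (the names IS_A and FLIES are mine, not from any default-logic library): lookup starts at the most specific category and climbs the isa links, so the exception stored at penguin blocks the default stored at bird.

```python
# A minimal sketch of default reasoning about flight, using a toy
# taxonomy encoded as dicts (names and encoding are illustrative).

IS_A = {"penguin": "bird", "sparrow": "bird"}   # isa links
FLIES = {"bird": True, "penguin": False}        # default at 'bird', exception at 'penguin'

def flies(kind):
    """Climb the taxonomy; the most specific stored fact wins."""
    while kind is not None:
        if kind in FLIES:
            return FLIES[kind]
        kind = IS_A.get(kind)
    return None  # no fact found at any level

print(flies("sparrow"))  # True  -- inherits the bird default
print(flies("penguin"))  # False -- the exception blocks the default
```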
DI in life
We need to be able to generalise in order to enrich the present (e.g. Does it have wings?) and predict the future (e.g. What will it do if I approach?). But there are no generalisations in nature; they are all created by our minds, so we accommodate penguins. So humans use default reasoning (DI). And if we want to model the human mind, we need DI in the model.
[Diagram: a chain of default "yes" answers with one exceptional "no".]
DI and prototypes in cognitive psychology
Wikipedia again: "Prototype theory is a mode of graded categorization in cognitive science, where some members of a category are more central than others. For example, when asked to give an example of the concept furniture, chair is more frequently cited than, say, stool. Prototype theory has also been applied in linguistics, as part of the mapping from phonological structure to semantics."
George Lakoff 1977, "Linguistic gestalts" (and in a talk at UCL). Maybe prototypes result from default reasoning?
2. DI in WG: the early days
1976, Arguments for a Non-transformational Grammar: psychological reality is important in choosing between theories.
1980, Sociolinguistics: prototypes are better than boxes for explaining how we learn categories from limited experience, and the flexible use of categories.
DI in WG: 1984, Word Grammar
The Selective Inheritance Principle: "If I [instance] is an instance of M [model], then any proposition which applies to M must apply to I as well (with I substituted in it for M), provided that this proposition does not contradict any proposition which is already known to apply to I." At this stage there was no mention of "default inheritance" or "isa". The principle applies to all languages and to all cognition.
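In symbols, a compact paraphrase of the principle (my notation, not Hudson's own formulation): for any proposition P,

isa(I, M) ∧ P(M) ∧ ¬(P[M:=I] contradicts anything already known of I) ⟹ P[M:=I]

where P[M:=I] is P with I substituted for M.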
DI in WG: 1990, English Word Grammar
Isa hierarchies introduced; prototypes are created by DI. BUT: DI is non-monotonic, so any inference may be overridden by a later inference. E.g. what is the past tense of COME?
1. past of COME = <came>
2. past of verb = base of verb + ed
Why can't we apply 2 after 1? I.e. why must 1 win?
[Diagram: COME isa verb; past(verb) = base(verb) + ed, but past(COME) = <came>.]
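The problem is easy to demonstrate in code. A hedged sketch (the encoding is mine, not Hudson's): each rule refuses to overwrite an existing value, so whichever rule happens to fire first wins, and nothing in the logic itself forces the exception to apply before the default.

```python
# A toy illustration of the ordering problem.
# Each rule only fills 'past' if nothing has filled it yet.

def rule1(entry):                      # exception: past of COME = <came>
    entry.setdefault("past", "came")

def rule2(entry):                      # default: past of verb = base + ed
    entry.setdefault("past", entry["base"] + "ed")

come = {"base": "come"}
rule1(come); rule2(come)
print(come["past"])   # 'came'   -- the exception fired first, blocking the default

come = {"base": "come"}
rule2(come); rule1(come)
print(come["past"])   # 'comeed' -- wrong: nothing forced rule 1 to apply first
```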
How to make DI monotonic: stipulation
Stipulated overriding entered the theory and persisted through 1997 (ELL). E.g. the past tense of COME:
structure of past COME = <came>
structure of past COME = stem of COME + ed: NOT!
So: <came> but *<come-ed>.
[Diagram: verb: past = base + {ed}; COME: past:COME = {came}, with {come}+{ed} marked "NOT!".]
How to make DI monotonic: procedural
2007, Language Networks: I abandoned stipulated overriding and replaced it by node-building. Exceptions isa default cases. Every word token has a node, created at the foot of the taxonomy. DI only applies in creating new nodes, so it always applies to the lowest node.
[Diagram: token isa past:COME ({came}) isa COME isa verb (past = base + {ed}).]
3. How DI works
2010, An Introduction to Word Grammar (but implicit in earlier work). Bottom-up and recursive (see the sketch below):
1. Start with token T, which isa U.
2. For each relation R of U:
   1. ignore it if R of T already has a value,
   2. otherwise copy its value to T.
3. If U isa V, climb up to V and repeat 2 (still copying to T).
[Diagram: T isa U, with relation R of U copied down to R of T.]
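A minimal runnable sketch of this procedure, assuming nodes are records with an isa link and a dictionary of relation values (the encoding is mine; WG networks are much richer than a dict per node):

```python
# A sketch of the slide's bottom-up recursive DI algorithm.

class Node:
    def __init__(self, name, isa=None, **relations):
        self.name = name
        self.isa = isa          # the node's model, if any
        self.rel = dict(relations)

def inherit(token):
    """Copy each model's relation values onto the token unless a more
    specific value is already in place (steps 1-3 of the slide)."""
    model = token.isa                       # step 1: T isa U
    while model is not None:
        for r, value in model.rel.items():  # step 2: each relation R of U
            if r not in token.rel:          # 2.1: skip if already valued
                token.rel[r] = value        # 2.2: otherwise copy to T
        model = model.isa                   # step 3: climb to V and repeat

verb = Node("verb", suffix="ed", word_class="verb")
come = Node("COME", isa=verb, base="come", past="came")
token = Node("token-of-come", isa=come)

inherit(token)
print(token.rel["past"])    # 'came' -- the exception is met before the default
print(token.rel["suffix"])  # 'ed'   -- unexceptional defaults still inherited
```

Because the token sits at the foot of the taxonomy and values are never overwritten, the most specific value always wins, with no stipulation needed.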
4. But what about word order?
2007: word order is defined by the "landmark" taxonomy. A word's landmark is the word from which it takes its position. "Before" and "after" isa "landmark", so before and after are distinct relations: e.g. dependent after, but subject before. Problem: since they are distinct, neither will block the other. A bad solution: a special "or" relation.
[Diagram: before and after isa landmark; dependent: after; subject: before, linked by "or".]
A better solution: rethink positions
Separate a word's landmark L, its position P, and the relation between L and P. E.g. dependent after but subject before. Now p overrides P as the value for "position", and the "or" relation isn't needed.
[Diagram: word w with landmark l and position p; dependent: position = after; subject: position = before.]
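One way to see how this removes the need for an "or" relation: if "position" is a single relation whose default value can be overridden, ordinary DI does the work. A sketch under that assumption (relation names and encoding are mine):

```python
# 'subject' and 'object' isa 'dependent'; each relation type may store
# a default value for the word's position relative to its landmark.

RELATION_ISA = {"subject": "dependent", "object": "dependent"}
DEFAULT_POSITION = {"dependent": "after", "subject": "before"}

def position(relation):
    """The most specific 'position' value along the relation taxonomy
    wins, so 'before' overrides 'after' with no 'or' relation needed."""
    while relation is not None:
        if relation in DEFAULT_POSITION:
            return DEFAULT_POSITION[relation]
        relation = RELATION_ISA.get(relation)
    return None

print(position("subject"))    # 'before' -- overrides the dependent default
print(position("object"))     # 'after'  -- inherited from 'dependent'
print(position("dependent"))  # 'after'
```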
5. Cognitive linguistics
Basic assumption: linguistic cognition isa general cognition. So language has access to all general cognitive machinery, and language needs no other cognitive machinery. Linguistics needs cognitive science for: DI, networks, isa taxonomies, activation, attention. Cognitive science needs linguistics for: node creation, taxonomies of relations, symbolic networks.
6. DI and constructions: WG and CxG
DI allows the general to coexist with the specific, so it's perfect for constructions. DI is used in both CxG and WG. But CxG and WG are different. CxG: constructions are schematic phrases. WG: constructions are sub-networks of words. E.g. He gave her a good kissing. (Trousdale 2008)
Semantics: Accomplish Xaction (Agent, Patient, Xaction (durative/iterative))
Syntax: give (NPi (Subj), NPj (iO), [a [gerund]] NPk (dO))
Giving a kissing, WG style
CxG schemas translate easily into WG networks, for syntax and for semantics. But GIVE isn't always used like this.
[Diagram: a WG network in which GIVE takes a Subj, an iO and a dO (A(N) plus a gerund noun); its sense isa "accomplish" (durative/iterative), with Agent and Patient linked to the referents.]
WG and sub-lexemes
Lexemes are in a taxonomy below word-classes. But they're not the foot of the hierarchy, because distinct uses give distinct sub-lexemes (Gisborne 2010):
OPENintrans, e.g. The door opened.
OPENtrans, e.g. He opened the door.
OPENeyes, e.g. It opened my eyes to ...
[Diagram: word > verb > the OPEN sub-lexemes.]
Sub-lexemes and constructions
The lexeme GIVE has a sub-lexeme GIVEGact, e.g. He gave her a good kissing/kiss.
[Diagram: word > verb > GIVE > GIVEGact, with subj, indObj and dirObj links (er, ee) and an "action" referent.]
7. DI: types and tokens
A word token isa some word type, so even sub-lexemes aren't the bottom of the taxonomy. E.g. Drink this. Tokens can have sub-tokens, e.g. Drink this. Drink this? (copied). So they can also have sub-tokens based on dependency.
[Diagram: DRINK (imperative) and THIS (singular); the token Drink this. with a dirObj link; copied sub-tokens Drink2 this2?]
DI: dependency and phrase structure
Now what's the difference between dependency and phrase structure? It's all in the relations. PS: A and B are both related to the phrase by part-whole. DS: A = isa, B = dependent (sister). DS = Bare Phrase Structure!
[Diagram: a VP with parts A (Drink) and B (this), versus a node Drinkthis which A (Drink) isa and on which B (this) depends as dirObj.]
Conclusions
DI is a safe logic provided it applies in node-creation. DI is everywhere, but it's in language that we see it most clearly. The isa relation forms taxonomies that include performance as well as competence. This is why DI accommodates constructions, as "frozen performance".
Thanks for listening
You can download this slideshow from www.dickhudson.com/talks