# 2008 HowIsMeaningGroundedInDictionaryDefinitions

- (Massé et al., 2008) ⇒ A. Blondin Massé, Guillaume Chicoisne, Yassine Gargouri, Stevan Harnad, Olivier Picard, and Odile Marcotte. (2008). “How Is Meaning Grounded in Dictionary Definitions?" In: Proceedings of the TextGraphs-3 Workshop (TextGraphs-3).

**Subject Headings:** Dictionary, Sensorimotor Category, Symbol Grounding Problem, Referent

## Notes

## Cited By

- ~43 http://scholar.google.com/scholar?q=%22How+Is+Meaning+Grounded+in+Dictionary+Definitions%3F%22+2008

## Quotes

### Abstract

Meaning cannot be based on dictionary definitions all the way down: at some point the circularity of definitions must be broken in some way, by grounding the meanings of certain words in sensorimotor categories learned from experience or shaped by evolution. This is the “symbol grounding problem”. We introduce the concept of a reachable set — a larger vocabulary whose meanings can be learned from a smaller vocabulary through definition alone, as long as the meanings of the smaller vocabulary are themselves already grounded. We provide simple algorithms to compute reachable sets for any given dictionary.

### 1 Introduction

We know from the 19th century philosopher-mathematician Frege that the referent and the meaning (or “sense”) of a word (or phrase) are not the same thing: two different words or phrases can refer to the very same object without having the same meaning (Frege, 1948): “George W. Bush” and “the current president of the United States of America” have the same referent but a different meaning. So do “human females” and “daughters”. And “things that are bigger than a breadbox” and “things that are not the size of a breadbox or smaller”.

A word’s “extension” is the set of things to which it refers, and its “intension” is the rule for defining what things fall within its extension. A word’s meaning is hence something closer to a rule for picking out its referent. Is the dictionary definition of a word, then, its meaning?

Clearly, if we do not know the meaning of a word, we look up its definition in a dictionary. But what if we do not know the meaning of any of the words in its dictionary definition? And what if we don’t know the meanings of the words in the definitions of the words defining those words, and so on? This is a problem of infinite regress, called the “symbol grounding problem” (Harnad, 1990; Harnad, 2003): the meanings of words in dictionary definitions are, in and of themselves, ungrounded. The meanings of some of the words, at least, have to be grounded by some means other than dictionary definition look-up.

How are word meanings grounded? Almost certainly in the sensorimotor capacity to pick out their referents (Harnad, 2005). Knowing what to do with what is not a matter of definition but of adaptive sensorimotor interaction between autonomous, behaving systems and categories of “objects” (including individuals, kinds, events, actions, traits and states). Our embodied sensorimotor systems can also be described as applying information processing rules to inputs in order to generate the right outputs, just as a thermostat defending a temperature of 20 degrees can be. But this dynamic process is in no useful way analogous to looking up a definition in a dictionary.

### 2 Definitions and Notations

In this section, we give mathematical definitions for the dictionary-related terminology, relate them to natural language dictionaries and supply the pertinent graph theoretical definitions. Additional details are given to ensure mutual comprehensibility to specialists in the three disciplines involved (mathematics, linguistics and psychology). Complete introductions to graph theory and discrete mathematics are provided in (Bondy & Murty, 1978; Rosen, 2007).

### 2.1 Relations and Functions

Let [math]\displaystyle{ A }[/math] be any set. A *binary relation on [math]\displaystyle{ A }[/math]* is any subset [math]\displaystyle{ R }[/math] of [math]\displaystyle{ A \times A }[/math]. We write [math]\displaystyle{ xRy }[/math] if [math]\displaystyle{ (x, y) \in R }[/math]. The relation [math]\displaystyle{ R }[/math] is said to be (1) *reflexive* if for all [math]\displaystyle{ x \in A }[/math], we have [math]\displaystyle{ xRx }[/math], (2) *symmetric* if for all [math]\displaystyle{ x, y \in A }[/math] such that [math]\displaystyle{ xRy }[/math], we have [math]\displaystyle{ yRx }[/math], and (3) *transitive* if for all [math]\displaystyle{ x, y, z \in A }[/math] such that [math]\displaystyle{ xRy }[/math] and [math]\displaystyle{ yRz }[/math], we have [math]\displaystyle{ xRz }[/math]. The relation [math]\displaystyle{ R }[/math] is an *equivalence relation* if it is reflexive, symmetric and transitive. For any [math]\displaystyle{ x \in A }[/math], the *equivalence class* of [math]\displaystyle{ x }[/math], designated by [math]\displaystyle{ [x] }[/math], is given by [math]\displaystyle{ [x] = \{ y \in A \mid xRy \} }[/math]. It is easy to show that [math]\displaystyle{ [x] = [y] }[/math] if and only if [math]\displaystyle{ xRy }[/math], and that the set of all equivalence classes forms a partition of [math]\displaystyle{ A }[/math].
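The partition into equivalence classes can be sketched in a few lines of Python (a toy illustration of the definitions above, not code from the paper):

```python
# Computing the equivalence classes of an equivalence relation R on a
# finite set A, where R is given as a set of pairs and [x] = {y in A | xRy}.

def equivalence_classes(A, R):
    """Return the partition of A induced by the equivalence relation R."""
    classes = []
    seen = set()
    for x in A:
        if x in seen:
            continue
        cls = frozenset(y for y in A if (x, y) in R)  # the class [x]
        seen |= cls
        classes.append(cls)
    return set(classes)

# Example: the "same parity" relation on {0, 1, 2, 3}.
A = {0, 1, 2, 3}
R = {(x, y) for x in A for y in A if x % 2 == y % 2}
print(equivalence_classes(A, R))  # two classes: {0, 2} and {1, 3}
```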

Let [math]\displaystyle{ A }[/math] be any set, [math]\displaystyle{ f : A \rightarrow A }[/math] a function and [math]\displaystyle{ k }[/math] a positive integer. We designate by [math]\displaystyle{ f^k }[/math] the function [math]\displaystyle{ f \circ f \circ \cdots \circ f }[/math] ([math]\displaystyle{ k }[/math] times), where [math]\displaystyle{ \circ }[/math] denotes the composition of functions.

### 2.2 Dictionaries

At its most basic level, a dictionary is a set of associated pairs: a *word* and its *definition*, along with some disambiguating parameters. The word to be defined, [math]\displaystyle{ w }[/math], is called the *definiendum* (plural: *definienda*), while the finite nonempty set of words that defines [math]\displaystyle{ w }[/math], denoted [math]\displaystyle{ d_w }[/math], is called the set of *definientes* of [math]\displaystyle{ w }[/math] (singular: *definiens*). ^{[1]}

Each dictionary entry accordingly consists of a definiendum [math]\displaystyle{ w }[/math] followed by its set of definientes [math]\displaystyle{ d_w }[/math]. A *dictionary* [math]\displaystyle{ D }[/math] then consists of a finite set of pairs [math]\displaystyle{ (w, d_w) }[/math] where [math]\displaystyle{ w }[/math] is a word and [math]\displaystyle{ d_w = (w_1, w_2, \ldots, w_n) }[/math], where [math]\displaystyle{ n \geq 1 }[/math], is its definition, satisfying the property that for all [math]\displaystyle{ (w, d_w) \in D }[/math] and for all [math]\displaystyle{ d \in d_w }[/math], there exists [math]\displaystyle{ (w', d_{w'}) \in D }[/math] such that [math]\displaystyle{ d = w' }[/math]. A pair [math]\displaystyle{ (w, d_w) }[/math] is called an *entry* of [math]\displaystyle{ D }[/math]. In other words, a dictionary is a finite set of words, each of which is defined, and each of its defining words is likewise defined somewhere in the dictionary.
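The closure property just stated can be checked mechanically. Below is a minimal sketch (ours, not the paper's): a dictionary is represented as a mapping from each definiendum to its set of definientes, and we verify that every defining word is itself defined.

```python
# Check the defining property of a dictionary: every definiens must also
# occur as the definiendum of some entry.

def is_dictionary(entries):
    """Return True iff every defining word has its own entry."""
    defined = set(entries)
    return all(d in defined for d_w in entries.values() for d in d_w)

toy = {
    "good": {"not", "bad"},
    "bad":  {"not", "good"},
    "not":  {"not"},          # circularity is unavoidable at the bottom
}
print(is_dictionary(toy))                   # True
print(is_dictionary({"apple": {"fruit"}}))  # False: "fruit" is undefined
```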

### 3.1 Reachable and Grounding Sets

Given a dictionary [math]\displaystyle{ D }[/math] of [math]\displaystyle{ n }[/math] words and a person [math]\displaystyle{ x }[/math] who knows [math]\displaystyle{ m }[/math] out of these [math]\displaystyle{ n }[/math] words, assume that the only way [math]\displaystyle{ x }[/math] can learn new words is by consulting the dictionary definitions. Can all [math]\displaystyle{ n }[/math] words be learned by [math]\displaystyle{ x }[/math] through dictionary look-up alone? If not, then exactly what subset of words can be learned by [math]\displaystyle{ x }[/math] through dictionary look-up alone?
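The learning process described above can be sketched as an iterative closure (a toy illustration under our own naming, not the paper's Algorithm 1): starting from the known vocabulary, repeatedly learn any word whose entire definition is already known, until nothing more can be learned.

```python
# Compute the set of words reachable from a known vocabulary by
# dictionary look-up alone: learn w once every word of d_w is known.

def reachable_set(dictionary, known):
    """Words learnable from `known` through definitions alone."""
    learned = set(known)
    changed = True
    while changed:
        changed = False
        for w, d_w in dictionary.items():
            if w not in learned and d_w <= learned:
                learned.add(w)
                changed = True
    return learned

toy = {
    "thing":   {"thing"},
    "not":     {"not"},
    "good":    {"not", "bad"},
    "bad":     {"not", "good"},
    "color":   {"thing"},
    "eatable": {"good", "thing"},
    "red":     {"color"},
    "fruit":   {"thing", "eatable"},
    "apple":   {"fruit", "red"},
}
# Knowing only "not", nothing else is learnable (good/bad form a cycle):
print(reachable_set(toy, {"not"}))
# Grounding {thing, good, bad, not} makes the whole dictionary reachable:
print(reachable_set(toy, {"thing", "good", "bad", "not"}) == set(toy))
```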

…

- Example 12.

Continuing Example 10 and from what we have seen so far, it follows that the grounding kernel of [math]\displaystyle{ G }[/math] is given by [math]\displaystyle{ K_G }[/math] = {bad, dark, good, light, not, or, thing}. Level 1 words are “color” and “eatable”, level 2 words are “fruit”, “red” and “yellow”, and level 3 words are “apple”, “banana” and “tomato”.
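The level structure can be computed by recording the iteration at which each word first becomes learnable from the kernel. The sketch below is ours; since Example 10's definitions are not quoted here, the toy definitions are a hypothetical reconstruction chosen so that the kernel and the resulting levels match the text.

```python
# Assign each word the iteration at which it becomes learnable from the
# grounding kernel (kernel words are level 0).

def levels(dictionary, kernel):
    """Map each word to the step at which it first becomes learnable."""
    level = {w: 0 for w in kernel}
    k = 0
    while True:
        k += 1
        # Snapshot-based pass: only words already levelled may be used.
        new = [w for w, d_w in dictionary.items()
               if w not in level and all(d in level for d in d_w)]
        if not new:
            return level
        for w in new:
            level[w] = k

kernel = {"bad", "dark", "good", "light", "not", "or", "thing"}
toy = {  # hypothetical definitions, not Example 10's actual ones
    "color":   {"light", "or", "dark"},
    "eatable": {"good", "thing"},
    "fruit":   {"eatable", "thing"},
    "red":     {"color", "not", "light"},
    "yellow":  {"color", "light"},
    "apple":   {"red", "or", "yellow", "fruit"},
    "banana":  {"yellow", "fruit"},
    "tomato":  {"red", "fruit"},
}
lv = levels(toy, kernel)
print(lv["color"], lv["fruit"], lv["apple"])  # 1 2 3
```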

### 4. Grounding Sets and the Mental Lexicon

In Section 3, we introduced all the necessary terminology to study the symbol grounding problem using graph theory and digital dictionaries. In this section, we explain how this model can be useful and on what assumptions it is based.

A dictionary is a formal symbol system. The preceding section showed how formal methods can be applied to this system in order to extract formal features. In cognitive science, this is the basis of computationalism (or cognitivism or “disembodied cognition” (Pylyshyn, 1984)), according to which cognition, too, is a formal symbol system – one that can be studied and explained independently of the hardware (or, insofar as it concerns humans, the wetware) on which it is implemented. However, pure computationalism is vulnerable to the problem of the grounding of symbols too (Harnad, 1990). Some of this can be remedied by the competing paradigm of embodied cognition (Barsalou, 2008; Glenberg & Robertson, 2002; Steels, 2007), which draws on dynamical (noncomputational) systems theory to ground cognition in sensorimotor experience. Although computationalism and symbol grounding provide the background context for our investigations and findings, the present paper does not favor any particular theory of mental representation of meaning.

A dictionary is a symbol system that relates words to words in such a way that the meanings of the definienda are conveyed via the definientes. The user is intended to arrive at an understanding of an unknown word through an understanding of its definition. What was formally demonstrated in Section 3 agrees with common sense: although one can learn new word meanings from a dictionary, the entire dictionary cannot be learned in this way because of circular references in the definitions (cycles, in graph theoretic terminology). Information – nonverbal information – must come from outside the system to ground at least some of its symbols by some means other than just formal definition (Cangelosi & Harnad, 2001). For humans, the two options are learned sensorimotor grounding and innate grounding. (Although the latter is no doubt important, our current focus is more on the former.)

The need for information from outside the dictionary is formalized in Section 3. Apart from confirming the need for such external grounding, we take a symmetric stance: In natural language, some word meanings — especially highly abstract ones, such as those of mathematical or philosophical terms — are not or cannot be acquired through direct sensorimotor grounding. They are acquired through the composition of previously known words. The meaning of some of those words, or of the words in their respective definitions, must in turn have been grounded through direct sensorimotor experience.

To state this in another way: Meaning is not just formal definitions all the way down; nor is it just sensorimotor experience all the way up. The two extreme poles of that continuum are sensorimotor induction at one pole (trial and error experience with corrective feedback; observation, pointing, gestures, imitation, etc.), and symbolic instruction (definitions, descriptions, explanation, verbal examples etc.) at the other pole. Being able to identify from their lexicological structure which words were acquired one way or the other would provide us with important clues about the cognitive processes underlying language and the mental representation of meaning.

To compare the word meanings acquired via sensorimotor induction with word meanings acquired via symbolic instruction (definitions), we first need access to the encoding of that knowledge. In this component of our research, our hypothesis is that the representational structure of word meanings in dictionaries shares some commonalities with the representational structure of word meanings in the human brain (Hauk et al., 2008). We are thus trying to extract from dictionaries the grounding kernel (and eventually a minimum grounding set, which in general is a proper subset of this kernel), from which the rest of the dictionary can be reached through definitions alone. We hypothesize that this kernel, identified through formal structural analysis, will exhibit properties that are also reflected in the mental lexicon. In parallel ongoing studies, we are finding that the words in the grounding kernel are indeed (1) more frequent in oral and written usage, (2) more concrete, (3) more readily imageable, and (4) learned earlier or at a younger age. We also expect they will be (5) more universal (across dictionaries, languages and cultures) (Chicoisne et al., 2008).

### 5 Grounding Kernels in Natural Language Dictionaries

In earlier research (Clark, 2003), we have been analyzing two special dictionaries: the Longman’s Dictionary of Contemporary English (LDOCE) (Procter, 1978) and the Cambridge International Dictionary of English (CIDE) (Procter, 1995). Both are officially described as being based upon a defining vocabulary: a set of 2000 words which are purportedly the only words used in all the definitions of the dictionary, including the definitions of the defining vocabulary itself. A closer analysis of this defining vocabulary, however, has revealed that it is not always faithful to these constraints: A significant number of words used in the definitions turn out not to be in the defining vocabulary. Hence it became evident that we would ourselves have to generate a grounding kernel (roughly equivalent to the defining vocabulary) from these dictionaries. The method presented in this paper makes it possible, given the graph structure of a dictionary, to extract a grounding kernel therefrom. Extracting this structure in turn confronts us with two further problems: morphology and polysemy. Neither of these problems has a definite algorithmic solution. Morphology can be treated through stemming and associated look-up lists for the simplest cases (i.e., was [math]\displaystyle{ \rightarrow }[/math] to be, and children [math]\displaystyle{ \rightarrow }[/math] child), but more elaborate or complicated cases would require syntactic analysis or, ultimately, human evaluation. Polysemy is usually treated through statistical analysis of the word context (as in Latent Semantic Analysis) (Kintsch, 2007) or human evaluation. Indeed, a good deal of background knowledge is necessary to analyse an entry such as: “dominant: the fifth note of a musical scale of eight notes” (the LDOCE notes 16 different meanings of scale and 4 for dominant, and in our example, none of these words are used with their most frequent meaning).

Correct disambiguation of a dictionary is time-consuming work, as the most effective way to do it for now is through consensus among human evaluators. Fortunately, a fully disambiguated version of the WordNet database (Fellbaum, 1998; Fellbaum, 2005) has just become available. We expect the grounding kernel of WordNet to be of greater interest than the defining vocabulary of either CIDE or LDOCE (or what we extract from them and disambiguate automatically, and imperfectly) for our analysis.

### 6 Future Work

The main purpose of this paper was to introduce a formal approach to the symbol grounding problem based on the computational analysis of digital dictionaries. Ongoing and future work includes the following:

*The minimum grounding set problem.* We have seen that the problem of finding a minimum grounding set is NP-complete for general graphs. However, graphs associated with dictionaries have a very specific structure. We intend to describe a class of graphs including those specific graphs and to try to design a polynomial-time algorithm to solve the problem. Another approach is to design approximation algorithms, yielding a solution close to the optimal solution, with some known guarantee.
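To make the difficulty concrete, here is a brute-force sketch (ours, not the paper's): searching all vertex subsets in increasing size finds a minimum grounding set, but runs in exponential time, which is exactly why polynomial-time or approximation algorithms are worth seeking for dictionary-shaped graphs.

```python
# Exponential-time search for a minimum grounding set of a toy dictionary.
from itertools import combinations

def reaches_all(dictionary, known):
    """True iff every word is learnable from `known` by look-up alone."""
    learned = set(known)
    changed = True
    while changed:
        changed = False
        for w, d_w in dictionary.items():
            if w not in learned and d_w <= learned:
                learned.add(w)
                changed = True
    return learned >= set(dictionary)

def minimum_grounding_set(dictionary):
    """Try all subsets by increasing size; only feasible for tiny inputs."""
    words = sorted(dictionary)
    for size in range(len(words) + 1):
        for subset in combinations(words, size):
            if reaches_all(dictionary, set(subset)):
                return set(subset)

toy = {"good": {"not", "bad"}, "bad": {"not", "good"}, "not": {"not"}}
print(minimum_grounding_set(toy))  # {'bad', 'not'}: breaks both cycles
```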

*Grounding sets satisfying particular constraints.* Let [math]\displaystyle{ D }[/math] be a dictionary, [math]\displaystyle{ G = (V, E) }[/math] its associated graph, and [math]\displaystyle{ U \subseteq V }[/math] any subset of vertices satisfying a given property [math]\displaystyle{ P }[/math]. We can use Algorithm 1 to test whether or not [math]\displaystyle{ U }[/math] is a grounding set. In particular, it would be interesting to test different sets [math]\displaystyle{ U }[/math] satisfying different cognitive constraints.

*Relaxing the grounding conditions.* In this paper we imposed strong conditions on the learning of new words: one must know all the words of a definition in order to learn a new word from it. This is not realistic, because one can often understand a definition without knowing every single word in it. Hence one way to relax these conditions would be to modify the learning rule so that one need only understand at least [math]\displaystyle{ r }[/math]% of the definition, where [math]\displaystyle{ r }[/math] is some number between 0 and 100. Another variation would be to assign weights to words to take into account their morphosyntactic and semantic properties (rather than just treating them as an unordered list, as in the present analysis). Finally, we could consider “quasi-grounding sets”, whose associated reachable set consists of [math]\displaystyle{ r }[/math]% of the whole dictionary.
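The relaxed [math]\displaystyle{ r }[/math]% rule can be sketched by changing the learning condition in the closure (a toy illustration under our own naming, not an algorithm from the paper):

```python
# Relaxed reachability: learn w once at least r% of its definientes
# are already known, instead of requiring all of them.

def reachable_relaxed(dictionary, known, r):
    """Closure under the rule: learn w when >= r% of d_w is known."""
    learned = set(known)
    changed = True
    while changed:
        changed = False
        for w, d_w in dictionary.items():
            if w in learned:
                continue
            share = 100 * len(d_w & learned) / len(d_w)
            if share >= r:
                learned.add(w)
                changed = True
    return learned

toy = {
    "good": {"not", "bad"},
    "bad":  {"not", "good"},
    "not":  {"not"},
}
# With r = 100 nothing new is learnable from {"not"}; with r = 50 the
# good/bad cycle is broken, since half of each definition is known.
print(reachable_relaxed(toy, {"not"}, 100))  # {'not'}
print(reachable_relaxed(toy, {"not"}, 50))   # all three words
```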

*Disambiguation of definitional relations.* Analyzing real dictionaries raises, in its full generality, the problem of word and text disambiguation in free text; this is a very difficult problem. For example, if the word “make” appears in a definition, we do not know which of its many senses is intended — nor even what its grammatical category is. To our knowledge, the only available dictionary that endeavors to provide fully disambiguated definitions is the just-released version of WordNet. On the other hand, dictionary definitions have a very specific grammatical structure, presumably simpler and more limited than the general case of free text. It might hence be feasible to develop automatic disambiguation algorithms specifically dedicated to the special case of dictionary definitions.

*Concluding Remark*: Definition can reach the sense (sometimes), but only the senses can reach the referent.

## References

- Barsalou, L. (2008) Grounded Cognition. Annual Review of Psychology (in press).
- Bondy, J.A. & U.S.R. Murty. (1978) Graph theory with applications. Macmillan, New York.
- Cangelosi, A. & Harnad, S. (2001) The Adaptive Advantage of Symbolic Theft Over Sensorimotor Toil: Grounding Language in Perceptual Categories. Evol. of Communication 4(1) 117-142.
- Changizi, M.A. (2008) Economically organized hierarchies in WordNet and the Oxford English Dictionary. Cognitive Systems Research (in press).
- Chicoisne G., A. Blondin-Massé, O. Picard, S. Harnad (2008) Grounding Abstract Word Definitions In Prior Concrete Experience. 6th International Conference on the Mental Lexicon, Banff, Alberta.
- Clark G. (2003). Recursion Through Dictionary Definition Space: Concrete Versus Abstract Words. (U. Southampton Tech Report).
- Christiane Fellbaum (1998) WordNet: An electronic lexical database. Cambridge: MIT Press.
- Christiane Fellbaum (2005) Theories of human semantic representation of the mental lexicon. In: Cruse, D. A. (Ed.), Handbook of Linguistics and Communication Science, Berlin, Germany: Walter de Gruyter, 1749-1758.
- Frege, G. (1892) Über Sinn und Bedeutung (“On Sense and Reference”). Zeitschrift für Philosophie und philosophische Kritik, C: 25-50.
- Garey, M.R. & D.S. Johnson (1979) Computers and Intractability: A Guide to the Theory of NP-completeness. W.H. Freeman, New York.
- Glenberg A.M. & D.A. Robertson (2002) Symbol Grounding and Meaning: A Comparison of High- Dimensional and Embodied Theories of Meaning. Journal of Memory and Language 43 (3) 379-401.
- Harnad, S. (1990) The Symbol Grounding Problem. Physica D 42:335-346.
- Harnad, S. (2003). Symbol-Grounding Problem. Encylopedia of Cognitive Science. Nature Publishing Group. Macmillan.
- Harnad, S. (2005). To Cognize is to Categorize: Cognition is Categorization. In Lefebvre, C. and Cohen, H. (Eds.), Handbook of Categorization. Elsevier.
- Hauk, O., M.H. Davis, F. Kherif, F. Pulvermüller. (2008) Imagery or meaning? Evidence for a semantic origin of category-specific brain activity in metabolic imaging. European Journal of Neuroscience 27 (7) 1856-1866.
- Karp, R.M. (1972) Reducibility among combinatorial problems. In: R.E. Miller, J.W. Thatcher (Eds.), Complexity of Computer Computations, Plenum Press, New York, 1972, pp. 85-103.
- Kintsch, W. (2007) Meaning in Context. In T.K. Landauer, D.S. McNamara, S. Dennis & W. Kintsch (Eds.), Handbook of Latent Semantic Analysis. Erlbaum.
- Procter, P. (1978) Longman Dictionary of Contemporary English. Longman Group Ltd., Essex, UK.
- Procter, P. (1995) Cambridge International Dictionary of English (CIDE). Cambridge University Press.
- Pylyshyn, Z. W. (1984) Computation and Cognition: Towards a Foundation for Cognitive Science. Cambridge: MIT Press.
- Ravasz, E. & Barabasi, A. L. (2003). Hierarchical organization in complex networks. Physical Review E 67, 026112.
- Rosen, K.H. (2007) Discrete mathematics and its applications, 6th ed. McGraw-Hill.
- Steels, L. (2007) The symbol grounding problem is solved, so what’s next? In De Vega, M. and G. Glenberg and A. Graesser (Eds.), Symbols, embodiment and meaning. Academic Press, North Haven.
- Steyvers, M. & Tenenbaum J.B. (2005). The large-scale structure of semantic networks: statistical analyses and a model of semantic growth. Cognitive Science, 29(1) 41-78.
- Tarjan, R. (1972) Depth-first search and linear graph algorithms. SIAM Journal on Computing. 1 (2) 146-160.


- ↑ In the context of this mathematical analysis, we will use “word” to mean a finite string of uninterrupted letters having some associated meaning.