The study of how complex linguistic structures emerge from basic building blocks is a challenging scientific endeavor, especially with regard to language origin, use, and development. Vogt claims that an inductive model constrained by transmission bottlenecks, which function as pressure mechanisms, yields a faster emergence of compositionality in language than a predefined, deductive semantics that entails no such pressures. Starting from an a priori learning mechanism for the emergence of compositionality, he seeks to support this hypothesis by having a number of simulated robotic agents inductively learn compositional structures in a constrained language.
Compositionality refers to the mapping of elementary semantic values onto complex representations (for example, in the sentence "Give me the book," "give" maps onto an action, "me" onto a person, and "book" onto an object). Pressure mechanisms such as transmission bottlenecks are constraints placed on the simulation (for example, a single existing language speaker and a single learner, versus a larger population of users and learners).
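To make these two notions concrete, consider the following minimal Python sketch. It is not Vogt's simulation; the lexicon, function names, and bottleneck parameter are illustrative assumptions. It shows a compositional mapping of words onto elementary semantic values, and a transmission bottleneck as a cap on how many utterances a learner observes.

```python
import random

# Illustrative lexicon: each word maps onto one elementary semantic
# value, so the meaning of a sentence is composed from its parts.
LEXICON = {
    "give": ("ACTION", "give"),
    "take": ("ACTION", "take"),
    "me":   ("PERSON", "speaker"),
    "you":  ("PERSON", "hearer"),
    "book": ("OBJECT", "book"),
    "cup":  ("OBJECT", "cup"),
}

def compose(sentence: str):
    """Compositional semantics: map each known word onto its
    (role, value) pair; unknown words like 'the' are ignored."""
    return [LEXICON[w] for w in sentence.split() if w in LEXICON]

def transmit(utterances, bottleneck: int, seed=None):
    """Transmission bottleneck: the learner observes only a small
    random sample of the speaker's utterances, so it must generalize
    rather than memorize the whole language."""
    rng = random.Random(seed)
    return rng.sample(utterances, min(bottleneck, len(utterances)))

if __name__ == "__main__":
    print(compose("give me the book"))
    # -> [('ACTION', 'give'), ('PERSON', 'speaker'), ('OBJECT', 'book')]
    corpus = [f"{v} {p} the {o}" for v in ("give", "take")
              for p in ("me", "you") for o in ("book", "cup")]
    print(transmit(corpus, bottleneck=3, seed=0))
```

Under a tight bottleneck, a learner that merely memorizes holistic strings cannot reproduce the whole corpus, whereas one that induces word-to-role mappings can generalize to unseen combinations; this is the kind of pressure mechanism the paper examines.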
Readers should be familiar with language game modeling, iterated learning modeling, and language evolution to appreciate the research. The caveat with any such study is, as the author acknowledges, that a model necessarily simplifies human language behavior. As a case in point, the paper investigates how objects are described and learned in an abstract environment.
The significance of the research is twofold: it provides a framework for studying the emergence of compositionality under the influence of both real-world regularities and randomly generated linguistic regularities, and it scientifically models the factors that influence the emergence of compositional properties.
Linguists used to joke that, following Chomsky’s claim of innateness, language would be impossible to learn, and, following the “possible-world” semanticist Montague, language would be impossible to understand. Now language behavior can at least be simulated in constrained form using a computer.