

We investigate the production, comprehension, acquisition and historical development of language (especially its phonetics, phonology and morphology) by combining evidence from language observations and laboratory experiments with computational simulations using artificial neural networks. We are currently inspired by recent methodological developments in artificial intelligence.

Key objectives

  • Creating a computational model of phonetic, phonological and morphological production, comprehension, acquisition and historical development.
  • The model should entertain multiple levels of representation, probably closer to 20 than to 5.
  • The model should be bidirectional, i.e. the same connections should handle production as well as comprehension.
  • In the model, phonological and morphological entities should be emergent rather than innately given.
  • Once the model works for phonology and morphology, it can generalize to syntax and semantics, thereby providing a new perspective on language as a whole.
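The bidirectionality objective above (the same connections serving both production and comprehension) can be illustrated with a toy sketch. This is a hypothetical illustration, not the group's actual model: it stacks a few levels of representation and reuses each weight matrix in both directions, upward (transposed) for comprehension and downward for production. All sizes and names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stack of levels, e.g. sound -> ... -> meaning (sizes invented).
# The SAME matrix links two adjacent levels in both directions:
# production propagates downward with W, comprehension upward with W.T.
sizes = [8, 12, 10, 6]
weights = [rng.normal(0.0, 0.5, (sizes[i], sizes[i + 1]))
           for i in range(len(sizes) - 1)]

def comprehend(sound):
    """Map a sound-level pattern up through all levels."""
    a = sound
    for W in weights:
        a = np.tanh(a @ W)        # upward pass uses W
    return a                      # top-level (meaning-like) pattern

def produce(meaning):
    """Map a meaning-level pattern down through the SAME connections."""
    a = meaning
    for W in reversed(weights):
        a = np.tanh(a @ W.T)      # downward pass reuses W, transposed
    return a                      # bottom-level (sound-like) pattern

sound = rng.normal(size=sizes[0])
meaning = comprehend(sound)
reconstructed = produce(meaning)
print(meaning.shape, reconstructed.shape)   # (6,) (8,)
```

Because one set of connections carries both directions, learning in one task (say, comprehension) automatically shapes the other, which is one way to read the "same connections" requirement.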


Meetings and seminars
  • The DeepFon people can join the BiPhon meeting, every other Friday at 14:00 or 15:00. This meeting is open to other phoneticians and phonologists, as well as to other ACLC and ILLC members interested in computational language emergence.
  • ACLC seminars (and the like) in 2021 and 2022: on the day before Klaas Seinhorst’s PhD defence, we organized a meeting at the ACLC, the Workshop on Learnability and Typology (February 18, 2021), featuring the following four talks:
    • Paul Boersma (ACLC), in collaboration with Kateřina Chládková (Charles U. Prague) and Titia Benders (Macquarie U., Sydney): Phonological features emerge substance-freely from the phonetics and the morphology.
    • Janet Pierrehumbert (University of Oxford): On iterated learning and lexical contrastiveness.
    • Jakub Szymanik (ILLC): Why are natural language quantifiers monotone?
    • Steven Moran (University of Neuchâtel): Evolution of speech sounds.


People
Paul Boersma (coordinator); Titia Benders; Dirk Jan Vet (engineer); Marianne de Heer Kloots; Charlotte Pouw; and various thesis and tutorial students, both BA and MA.