There were some big developments this year for lab denizens.

Departures:

  • James Michaelov defended his thesis and left to start a postdoc in Roger Levy’s lab at MIT, working, among other things, on open science.
  • Cameron Jones defended his thesis and left to start a postdoc … right back here in our lab! The main thrust of his work is persuasion and deception in interactions between humans and large language models.

Arrival:

  • Pam Riviere, or at least the half of her that isn’t still in the Voytek lab, joined us to continue her work as a Chancellor’s Postdoctoral Fellow.

And then there was the usual slew of work we want to brag about.

  • X is apparently all a-twitter about our results showing the first machine to pass (for some definition of “pass”) a Turing Test (for some definition of “Turing Test”). It’s GPT-4, with a cheeky prompt
  • Some brand-new work comparing human and LLM Theory of Mind abilities, like recursive mind-reading, and this battery of tests
  • A comparison of visual grounding in humans and vision-language models
  • An Outstanding Paper award winner on the curse of multilinguality, focusing on how to deliver LLMs for low-resource languages, plus further analysis of why LLMs underperform on low-resource languages
  • The discovery that LLMs show cross-lingual structural priming, which might mean something very important about their abstract grammatical representations

It’s been quite a year.