New News

There were some big developments this year for lab denizens.

Departures:

  • James Michaelov defended his thesis and left to start a postdoc in Roger Levy’s lab at MIT, working, among other things, on open science.
  • Cameron Jones defended his thesis and left to start a postdoc … right back here in our lab! The main thrust of the work is on persuasion and deception in large language model-human interactions.

Arrival:

  • Pam Riviere, or at least the half of her that isn’t still in the Voytek lab, joined us to continue her work as a Chancellor’s Postdoctoral Fellow.

And then there was the usual slew of work we want to brag about.

  • X is apparently all a-twitter about our results showing the first machine to pass (for some definition of “pass”) a Turing Test (for some definition of “Turing Test”). It’s GPT-4, with a cheeky prompt.
  • Some brand-new work comparing human and LLM Theory of Mind abilities, including recursive mind-reading, and this battery of tests
  • A comparison of visual grounding in humans and vision-language models
  • An Outstanding Paper award winner on the curse of multilinguality, focusing on how to deliver LLMs for low-resource languages, plus further analysis of why LLMs underperform for low-resource languages
  • The discovery that LLMs show cross-lingual structural priming, which might mean something very important about their abstract grammatical representations

It’s been quite a year.

Annual update

In shocking news, everyone in the lab spontaneously and simultaneously lost the ability to type for slightly more than a year. Just as inexplicably, this week we all seem to have recovered, so we’re back with an update.

We’re excited to tell you about a few publications from the last year.

  • A survey of hundreds of papers that summarizes what we currently know about large language models
  • One study of how large language models compare to humans on a Theory of Mind task, one on how they deal with object affordances (can you dry your feet with a magnifying glass?), and one on how they handle quantifiers like ‘few’.
  • Several papers using large language models to better understand the N400 event-related potential, here and here.
  • An analysis of the internal representations of multilingual language models
  • A theoretical, behavioral, and computational magnum opus (if we do say so ourselves) demonstrating that human word meanings are both categorical and continuous

We’re also happy to have two new Ph.D. students join the lab this year, Sam Taylor and Yoonwon Jung. Welcome! As a result, the lab is as big as it has ever been. Quantity has a quality all its own.

See you in less than a year (p<0.05).

The lab website is live!

This is the designated home for updates on lab activities, publications, presentations, and awards. Will we maintain it regularly or ignore it and let it fall into disrepair for a decade like the last lab website? Check back often to find out!