In shocking news, everyone in the lab spontaneously and simultaneously lost the ability to type for slightly more than a year. Just as inexplicably, this week we all seem to have recovered, so we’re back with an update.
We’re excited to tell you about a few publications from the last year:
- A survey of hundreds of papers that summarizes what we currently know about large language models
- Three studies comparing large language models to humans: one on a Theory of Mind task, one on object affordances (can you dry your feet with a magnifying glass?), and one on quantifiers like ‘few’
- Two papers using large language models to better understand the N400 event-related potential, here and here
- An analysis of the internal representations of multilingual language models
- A theoretical, behavioral, and computational magnum opus (if we do say so ourselves) demonstrating that human word meanings are both categorical and continuous
We’re also happy to have two new Ph.D. students join the lab this year, Sam Taylor and Yoonwon Jung. Welcome! As a result, the lab is as big as it has ever been. Quantity has a quality all its own.
See you in less than a year (p < 0.05).