Do large language models understand what they say? The computer scientist Ellie Pavlick aims to find out by translating philosophical concepts such as “meaning” into concrete, testable ideas.
John Pavlus in Quanta: Start talking to Ellie Pavlick about her work — looking for evidence of understanding within large language models (LLMs) — and she might sound as if she’s poking fun at it. The phrase “hand-wavy” is a favorite, and if she mentions “meaning” or “reasoning,” it’ll often come with conspicuous air quotes. This is just Pavlick’s way of keeping herself honest. As a computer scientist studying language models at Brown University and Google DeepMind, she knows that embracing natural language’s inherent mushiness is the only way to take it seriously. “This is a scientific discipline — and it’s a little squishy,” she said.
Precision and nuance have coexisted in Pavlick’s world since adolescence, when she enjoyed math and science “but always identified as more of a creative type.” As an undergraduate, she earned degrees in economics and saxophone performance before pursuing a doctorate in computer science, a field where she still feels like an outsider. “There are a lot of people who [think] intelligent systems will look a lot like computer code: neat and conveniently like a lot of systems [we’re] good at understanding,” she said. “I just believe the answers are complicated. If I have a solution that’s simple, I’m pretty sure it’s wrong. And I don’t want to be wrong.”
A chance encounter with a computer scientist who happened to work in natural language processing led Pavlick to embark on her doctoral work studying how computers could encode semantics, or meaning, in language. “I think it scratched a certain itch,” she said. “It dips into philosophy, and that fits with a lot of the things I’m currently working on.” Now, one of Pavlick’s primary areas of research focuses on “grounding” — the question of whether the meaning of words depends on things that exist independently of language itself, such as sensory perceptions, social interactions, or even other thoughts. Because language models are trained entirely on text, they provide a fruitful platform for exploring how grounding matters to meaning. But the question itself has preoccupied linguists and other thinkers for decades.
“These are not only ‘technical’ problems,” Pavlick said. “Language is so huge that, to me, it feels like it encompasses everything.”
More here.