
At the AGM in October, the YAE launched a new named webinar series, ‘The Inquiry Lectures’. This year’s theme is Artificial Intelligence.
The next webinar will be on the 12th of February, from 13.00 to 14.30 CET, when we will have lectures from Rachel Sterken and Emanuele Rodola. The webinar is open to all – not only YAE members. The titles and abstracts for the talks are as follows:
LLMs are Candidate Generators
Rachel Sterken (work with Alex Radulescu)
When an LLM tells you ‘Paris is the capital of France,’ is it actually saying something, or just producing text that looks like it is? This paper argues that LLMs are best understood as candidate generators: sophisticated systems that produce well-formed text optimized to be useful, but without the beliefs, intentions, or commitments that characterize genuine communication. Just as a chair affords sitting but does not itself sit, an LLM output affords asserting, but the LLM does not assert. The real linguistic work happens when you take up that output and make it your own. This talk will chart a middle course between the two extreme positions currently popular. Pessimists dismiss LLMs as “stochastic parrots” mindlessly regurgitating training data, but this undersells their remarkable capabilities. Optimists attribute genuine understanding and beliefs to these systems, but this mistakes impressive engineering for human-like communicative agency and mindedness. Our candidate generation framework acknowledges that LLMs produce extraordinarily useful and impressive outputs while maintaining that meaning, truth, and responsibility enter the picture only through human adoption. This isn’t merely an academic distinction: it has real implications for who is accountable when AI-generated content goes wrong, how we should evaluate these systems, and what we are actually doing when we interact with them.
Science at Scale Without Scaling Up
Emanuele Rodola
Scientific discovery is moving at a pace that is increasingly hard to track. While we have looked to AI to manage this information explosion, current “artificial scientist” models are running into serious technical and ethical barriers. The dominant trend of scaling up has become too expensive and environmentally costly, creating a divide that favors only the most resource-rich institutions. This presentation proposes a different path: interoperable machine learning. Instead of building bigger black boxes, we will look at how universal representations allow us to repurpose and stitch together existing models. This approach has the potential to make AI more sustainable, democratize research, and ensure that human scientists stay central to the process of creating and verifying knowledge.
The details to join the Zoom meeting are as follows:
https://uofglasgow.zoom.us/j/86955981448?pwd=JI6P1nArr4JbAFEVBa4s11Q83kvQXz.1
Meeting ID: 869 5598 1448
Passcode: Inquiry
