Karl John Friston FRS FMedSci FRSB is an authority in neuroscience and a theoretician at University College London.

Karl Friston is one of the most influential neuroscientists of our time and a central figure in the history of human brain mapping. Many listeners will know him for the free energy principle, active inference, dynamic causal modeling, voxel-based morphometry, and many other theoretical contributions.

In this episode, we take a different route and go back to the early history of Statistical Parametric Mapping, or SPM: the software and statistical framework that helped turn functional neuroimaging from a local craft into a shared scientific language.

We discuss how Karl moved from psychiatry into brain imaging, what the first PET activation experiments felt like, how SPM emerged at the MRC Cyclotron Unit and later the Functional Imaging Laboratory, how the software spread through the neuroimaging community, and how key collaborators helped shape modern PET, fMRI, VBM, DCM, EEG, and MEG analysis.

We also talk about mentorship, the culture of the FIL, open software, MNI space, spatial normalization, and what it means for a scientific tool to become infrastructure for an entire field.

00:00 ...were Excel spreadsheets. So the very first implementation of SPM was actually writing down the statistical equations in an Excel spreadsheet, where each cell in the Excel spreadsheet corresponded to a voxel. Oh, really? Simulating the data in order to perform the analysis. So that was important. It's important from time to time just to forget everything you thought you knew and start again. And just question everything. So at another level, it was a really important experience. At the time, it was just everybody helping everybody else. And in one particular instance, it was basically me being sent with my code on a quarter-inch magnetic tape to Leslie Ungerleider and Jim Haxby and Barry Horwitz at NIH. And then copying it all back. 01:09 Welcome to Stimulating Brains. Hello and welcome to Stimulating Brains. Today, I'm deeply honored to welcome Karl Friston, one of the most influential neuroscientists of our time and a central figure in human brain mapping. It is certainly not Karl's style to boast about numbers, but I would still briefly add that his published work has been cited over 400,000 times and he has an h-index of almost 300, both numbers that are typically unheard of. Many listeners will know Karl for the free energy principle, 02:01 active inference, dynamic causal modeling, voxel-based morphometry, statistical parametric mapping, and many other theoretical contributions. But in this episode, I would like to take a slightly different route than featured in most episodes on other podcasts with Karl. Rather than beginning with the more modern concepts such as active inference, we will go back to the early history of SPM, or statistical parametric mapping: the software and statistical framework that helped turn functional imaging from a local craft into a shared scientific language. SPM began in the PET era, while Karl was at the MRC Cyclotron Unit at Hammersmith Hospital. 
The first SPM software, now often called SPM Classic, was shared with the emerging functional imaging community in 1991. The first major rewrite, SPM94, followed in 1994, and later versions carried the field through PET, fMRI, structural MRI, EEG, MEG, 03:00 DCM, VBM, Bayesian modeling, and much else. So today I invited Karl to tell this story from the inside: how a psychiatrist came into brain imaging, what the first activation experiments felt like, how SPM became shared infrastructure, who the key collaborators were, how the MNI space and spatial normalization shaped the field, and how the Functional Imaging Laboratory became a kind of training ground for modern human neuroimaging. We will even hear that the very first version of SPM was a set of Excel spreadsheets. That surprised me. As always, thank you so much for tuning in to Stimulating Brains. I hope you enjoy the conversation as much as I did. So thank you so much, Karl, Professor Friston, for joining this interview, the podcast. I know how busy you are. You even just told me that this is the third interview today. 04:01 So thank you even more for taking the time to talk to us. As you may have seen or know, I always start with one icebreaker question, which is about hobbies. What do you do when not working or when not engaged in neuroscience? I watch television. Fantastic. Any particular shows? Yes, I have a little routine. As it comes to the end of the workday, my wife prepares my meal and we watch the news together. And then we watch some light entertainment or dark entertainment, depending upon the offerings on TV. And then I try and watch either a gardening program or a DIY program and then cap it off before going to bed, usually at about 1 o'clock, usually at about 1.30, with a political roundup of the day, 05:01 what the papers say and the like. So that's what I actually do. 
The proper answer to your question is: once a year, I take August off, and I commit it to landscape gardening. So I just try to improve my little garden, building temples and pagodas and pathways and the like. And then my wife populates it with pretty plants for the rest of the year. Well, I'd love to see that. Are there any pictures of this that you could maybe share for the episode? Yes, I'll send you a picture that my wife Anne took of me having built a Gothic arch at the end of the garden. I will send you a picture of that just to prove I do do things apart from work from time to time. Fantastic. Sounds great. Going a bit more into your career, but starting off very early: what kind of childhood or early intellectual environment shaped you? 06:00 And then, you know, maybe later, much later, who were key mentors and turning points in your career? This could be a long answer. You know, obviously... Summarize your life. Summarize my life. You've got all the important people in it. Well, clearly you have to start with your parents. And I don't mean that in a trivial sense. The thing is, they were extremely formative in determining the kind of scientist or person I was going to be and the particular direction of travel in terms of scientific inquiry. So my mother was a nurse. But in those days, you had to give up being a nurse once you got married; you weren't allowed to work as a married nurse. So she devoted herself to bringing us up, her children. But she remained passionately interested in the way people worked and the way they behaved, in particular, popular psychology. So I was surrounded by books and conversations about popular psychology as it was in my formative years. 07:04 My father was a bridge engineer and had a passion for physics and the like and used to make me read another kind of book. And the one that I remember most acutely is Sir Arthur Eddington's Space, Time and Gravitation. I think that was an eye-opening and wonderful book. 
So I had this mixture of physics and maths and psychology. So the obvious thing I wanted to be was a mathematical psychologist when I grew up. So I went to my careers advisor and said, I want to be a mathematical psychologist. And the careers advisor said, Oh dear, well, you need to be a doctor first. Because he thought I wanted to be a psychiatrist. And I didn't know the difference. And he didn't know the difference. So I spent the first few years of my education studying to become a doctor and then specialising in psychiatry. But at the earliest opportunity, I tried to move into research. 08:00 And that's where the list of mentors arises. And the first mentor was a gentleman called Phil Cowen, who comes from a stable of very influential clinical academics in the UK that effectively established neuropharmacology as a discipline. And some of them are still the good and great now; people like Dave Nutt in the UK were part of that group. That was my first exposure, which was fortuitous, because understanding the importance of neurotransmitters and pharmacology in shaping our functional anatomy and our computational architectures has been an enduring theme. Especially in relation to the next set of mentors, which would include people like Peter Liddle. So Peter Liddle was at that time on his way to becoming the good and great. 09:00 But at that time, a bright young thing in schizophrenia research. And the only person to give me a job in research. And that job was to be working with the newly formed MRC Cyclotron Unit at the Hammersmith Hospital. They had just established, and when I say they, people like Terry Jones and Richard Frackowiak had established, the first European sort of positron emission tomography infrastructure that enabled us for the first time to look at the brain in action. The very first, in Europe at least, or in the UK at least, the first time to have that window open. To have that window on functional anatomy. To have that window on the brain at work. 
Because I was working for Peter and his special interest was schizophrenia, my job was to collect data from a cohort of people with chronic schizophrenia. 10:05 Which was an interesting endeavour, in the sense that it is now possibly unimaginable that you would ask somebody with delusions and chronic schizophrenia to expose themselves to a radioactivity machine and have their brain scanned at the same time. This was in the days when we used to administer the radioactivity, the radiotracers required to form these tomographic reconstructions of the brain. We had to inject radiotracers. And not into the veins, into the arteries. Really? Yeah. Well, actually that's not true. In order to monitor the blood levels of the radiotracers, you needed arterial stabs. So this was a sort of heroic kind of early brain imaging. 11:03 I'm sure that sounds like it, yeah. It was. We didn't actually do arterial stabs on the patients. One funny little story I remember from those days: a certain proportion of these patients, whose clinical care I was also partly responsible for, thought that the brain scanning actually cured them to a certain extent, which was very pleasing. Very nice. But I do remember doing radial stabs on another great mentor, Chris Frith, who was at that time recruited to the MRC. And he was a rising star. Probably a risen star in schizophrenia research. Having worked with sort of structural images and discovered large ventricles and the like with Tim Crow at Northwick Park. But he now moved over and we worked together and tried to make sense of these positron emission tomography data. 12:04 That was the genesis, under the guidance of Richard Frackowiak and colleagues at the MRC, and working together with some of those colleagues, of the statistical parametric mapping software. So, written out of need. Written out of need to make sense of what was at that time the biggest data available. 
So this was before the Human Genome Project. Yeah, yeah, yeah. Fantastic. We'll get to that, right? That's the main focus, to hear a bit about the origins of SPM and that history. I did hear you say once, probably jokingly, that you wasted six years of your life with the psychiatry residency. Was that a joke? Is that true? Do you think it was truly a waste given what you did later? Or was it actually really important to make you know more about diseases? How would you see that? 13:01 You clearly want me to answer the second and you'll be absolutely right. No, it was a joke. Yeah. It was incredibly formative as an experience. And as you are intimating, it set the undertone for all subsequent developments and aspirations in subsequent research, even to this day now. You know, the software developments and all the theoretical developments that inherited from that, and in terms of just understanding how the brain works, were all in the service of trying to get a mechanistic handle on the kind of psychiatric disorders to which we were exposed as young clinicians, and in particular schizophrenia. So, a very formative time. And also formative from another perspective, which is probably less anticipated. When you move into this kind of field, 14:03 and the field I'm referring to here is working with people with severe mental disorders in a therapeutic community, you undergo a process they call de-skilling, which basically means you've got to forget about everything you thought you knew and then re-skill in this new context. In this instance, you know, a community of about 30 people with chronic schizophrenia and their carers and their doctors and psychiatric nurses and social workers. So that was important. It's important from time to time just to forget everything you thought you knew and start again. And just question everything. So at another level, it was a really important experience. De-skilling, that's great. I have to remember that. 
Did you at the time already think about more mechanistic language for maybe psychiatry, such as, you know, inference, prediction, brain dynamics, or something else, given you wanted to become a mathematical psychologist? 15:01 No, absolutely. I mean, there was always that. You know, I realized I was on a journey. I knew I was on a journey. I thought I was going to take that kind of approach, and I thought, oh, this is clever; I'll go and see if it's an established thing. And I remember being both horrified and delighted to find that Donald Hebb had actually written down exactly this some decades earlier. And I was just trying to work out if I'd been born in time to actually meet him. And I hadn't. But the horror was that I'd wasted about a year working out something that was already known. 16:03 And so this speaks to another, I think, sort of missive or principle, which I keep referring to, and more so in later life, which is one of the two things written on Feynman's blackboard, supposedly or purportedly, at the time of his death. And the first one, which I'm sure we will return to later in the context of generative models and the free energy principle, is: what I cannot create, I do not understand. Yes. The other one: know how to solve every problem that has been solved. So that puts a lot of pressure on making sure you know what's been solved before you waste years rediscovering the same solutions. But both of those bits of advice were very pertinent then and are very pertinent now and certainly applied in those days. 17:01 So, yes, absolutely. Whilst caring for patients and learning one's clinical skills, there was always the deeper question in the back of one's mind, you know: what's going on here? How can I get a mechanistic understanding of what's going on? How can I create a model of this? 
And clearly, subsequent experience in brain imaging realised that, in a sort of pragmatic sense, in creating forward observational statistical models of the brain in terms of brain imaging. Makes sense. When did brain imaging begin to look like a bridge between clinical psychiatry and formal models of brain function for you? You did mention the Hammersmith cyclotron unit there. Maybe you can even expand a bit on how you arrived there, you know, what that looked like at the time, software-wise, imaging-wise, how you maybe made that 18:02 connection between, you know, the clinical work and brain imaging. Yes. I mean, you know, I did not make that connection; it was made for me. So this was one of those lucky accidents in one's career, that you were just in the right place at the right time with outstanding questions, outstanding in many senses of the word. And so I was recruited by Peter Liddle to work on an MRC-funded project to look at patterns of brain activity in people with schizophrenia. This had been done before using glucose PET in America, but it was the first time in the UK that we'd used sort of a faster kind of water. So the key issue here was that one was able to take a sequence of PET scans, anywhere between six and twelve PET scans. Now that sounds, you know, exceedingly limited from your point of view, when we're talking about a variety of time series with thousands of observations. 19:03 But in those days, this was the first time you actually had a time series, a very, very limited time series, but it was a time series, which meant you were now in the game of being able to compare patterns of brain activation in real time under different sorts of mental or cognitive brain states. So it was a really exciting time, and my brief was to use this technology to try and see if there were any characteristic differences between the patterns of brain activation in people with and without schizophrenia. But to do so, as your question 
invites, there were lots of things that needed to be solved. Data analysis was clearly one of them. One of the key packages that was available at that time to make sense of these data was something called Analyze, spelled the American way, by Rich Robb, who I subsequently met in Davos years later. 20:05 And so that was from Johns Hopkins University. But it was just a preparatory move to actually analysing the dynamics that you would require to make any inferences about sort of functional anatomy, in terms of, you know, which parts of the brain responded to these changes in perceptual set or cognitive processing. Does that, sorry for the interruption, have something to do with the Analyze image format, the header and image files? Yes, yes, that was exactly where it came from. Interesting. Yeah, that was, you know, effectively designed for region-of-interest analysis. So if you could summarize your data using a region of interest, then that was the package of choice. But beyond that, there was nothing. 21:28 [So the very first implementation of SPM was] Excel spreadsheets, where each cell in the Excel spreadsheet corresponded to a voxel, in order to simulate the data, in order to perform a voxel- or pixel-based analysis of an authentic statistical kind. So this is what we spent our first, well, my first year doing. So that was my job, in a team of people who were all addressing really challenging problems, from the radiochemistry of keeping these machines alive and working, 22:03 you know, right through to the physics with Terry Jones, to the clinical input of patients and the relationship with the Institute of Neurology by Richard Frackowiak. David Brooks was a key player. Chris Frith, of course, had come in to provide the psychology. And lots of other people, everybody from psychologists to chemists to mathematicians to engineers. 
They were all trying to solve all of these problems together, just to get some sort of sensible image out that summarised and enabled one to test hypotheses about how the brain worked. And it was a really important endeavour. I don't want to pre-empt any of your further questions. But, you know, this was the first opportunity to test some sort of really fundamental hypotheses about structure-function relationships in the brain. 23:02 In the sense that, up until that time, all we had was effectively neuropsychology. All we had was inferences that the brain could be segregated into functionally specialised areas simply because when I damage that area, I lose this function. So this was a neuropsychological model. So the idea that you had functional segregation as a principle of brain organisation was purely hypothetical. But for the first time, you could now actually look at the brain responses, the specialisation in response to simple paradigms of the kind, you know, engineered by another of my mentors, Semir Zeki. What's the difference between looking at sort of black and white images versus colour images? And you are now able to measure the responses in every part of the brain and show that, yes, the functionally specialised colour centre in the brain, visual area four, was responsive to colour. 24:08 But crucially, only that area, and no other. So then you had a definitive test, an empirical test, of this hypothesis of functional segregation: functional specialisations, anatomically segregated. Of course, nowadays, people just take it for granted that functional specialisation is a thing. But it was just an idea in the late 80s and early 90s. And if we can... Just based on lesional inference. Absolutely. OK. Yeah, absolutely. So for the younger listeners that, you know, grew up in the fMRI era like myself, what did such a PET experiment look like? Maybe you can even 
describe this one that you just opened up with, with V1, or one of yours. Like you did mention, you had multiple scans, five to six scans, if I remember correctly. 25:03 But how did it work? Which tasks did people perform? Well, the actual stimulus design and experimental design was almost isomorphic with the principles that you'd use in an fMRI design, in particular a block fMRI design. So let's take two examples. So the study of functional specialisation in the visual system, namely identifying things like the so-called colour centres, for example, would simply involve asking people, for about, you know, eight to 16 seconds, to look at a particular stimulus, a black and white image, during which data were acquired and then cached. And then you'd repeat the process, but looking at a coloured image, the equivalent coloured image, with all the right sort of normal controls 26:02 to make sure the only difference was in the colour of, in this instance, the Mondrians used to induce the responses. And then you'd rinse, wash and repeat three times. So you'd have black and white, colour, black and white, colour, black and white, colour. So you'd have six scans. And then you would organise them and then test the hypothesis that there was a difference, or there was a sort of lawful trajectory of the six scans: up, down, up, down, up, down. And it may have been increasing with time. So you've got an interaction with the order or the time, and all the normal things that one would associate with a block design fMRI experiment. So you can look at sort of early PET scans as basically lumping together sequences of blocks in a block design fMRI experiment. Just giving you, you know, sometimes six, but later on twelve, bites of the apple. So conceptually not dissimilar from fMRI, but fMRI was two or three years away. 
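For readers who want the logic on one page: the six-scan alternation Karl describes is, in modern terms, a tiny general linear model with a subtraction contrast and a drift term. The sketch below uses simulated data and invented effect sizes; it is illustrative only, not the original SPM code, which, as we hear, started life in Excel spreadsheets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Six PET scans at a single voxel: black-and-white, colour, repeated three times.
condition = np.array([0, 1, 0, 1, 0, 1], dtype=float)  # 0 = B/W, 1 = colour
order = np.arange(6, dtype=float)                      # scan order, to model drift

# Simulated regional activity: a colour effect of 2 units, slow drift, noise.
y = 10.0 + 2.0 * condition + 0.3 * order + rng.normal(0.0, 0.5, size=6)

# General linear model with columns [colour effect, scan order, constant].
X = np.column_stack([condition, order, np.ones(6)])
beta, rss, rank, _ = np.linalg.lstsq(X, y, rcond=None)

# t-statistic for the subtraction contrast c = [1, 0, 0] (colour minus B/W).
c = np.array([1.0, 0.0, 0.0])
dof = len(y) - rank                       # 6 scans minus 3 model columns = 3
sigma2 = rss[0] / dof                     # residual variance estimate
t = (c @ beta) / np.sqrt(sigma2 * (c @ np.linalg.pinv(X.T @ X) @ c))
print(beta[0], t)                         # colour effect estimate and its t-value
```

With only three degrees of freedom left over, the estimates are noisy, which is exactly why averaging over subjects, and hence spatial normalisation, mattered so much, as the conversation turns to next.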
27:04 So a lot of the problems that would subsequently need to be resolved for fMRI were actually addressed in the context of PET, such as spatial normalisation. If you want to leverage the signal-to-noise reduction inherent in averaging, or, put another way, if you wanted to demonstrate that this particular aspect of functional anatomy was conserved over subjects, then you had to put them into the same anatomical space. So this induced the notion of spatial normalisation. Not only that, but you also had to register the PET scans to each other, in case people moved. And of course, if you've got chronic schizophrenia, you're likely not to be able to stay still. So things like spatial registration and spatial normalisation were all open problems 28:01 that required solutions. Solutions that in spirit still survive today in the software and in the sort of state-of-the-art techniques that people bring to the table, even if in implementation detail they have obviously improved. But these things were solved. What was not solved when fMRI came along followed from the very fact that you can now acquire an image in a matter of seconds, with a TR of two seconds or even less nowadays. That introduced something that wasn't in the PET data. In the PET data, the positron emission tomography data, you were taking data samples minutes apart, because you had to let the radioactivity wear off before you then started the next test. So you started to inhale the radioactive oxygen that was then playing the role of a radiotracer that could be picked up by the scanner. But when you're acquiring data generated 29:03 through exactly the same hemodynamic and neurovascular mechanisms that fMRI, or BOLD fMRI at least, rests upon, you're now acquiring data faster than the natural correlations in the data. 
So then there was a whole other period of development, not only having to deal with the spatial correlations induced by working with extended images, which is another story that underwrites the development of statistical parametric mapping, but also the temporal correlation structures, and the introduction of convolution models and deconvolution, implicitly the inversion of convolution models apt for fMRI data. So... And with that, maybe for the listeners, you mean convolving a stick model with something like the HRF, right? Yeah, absolutely. So the hemodynamic response function, 30:00 how was that discovered? I mean, we're jumping to fMRI pretty quickly now, but when did that canonical HRF function come about? Is that early days? Well, I mean, yeah, it's a great question, because it inherited exactly from the problem of how to separate the signal in fMRI time series from noisy, smooth fluctuations. And this is not a trivial problem. And I remember people like Ed Bullmore spending many years of their life with colleagues at King's looking at this, as did we. What makes this separation particularly difficult is that your random effects, or your random fluctuations, are not independent between the observations. If they were independent, you could just use standard least squares estimators and standard parametric statistics. But when you've got serially correlated data, that becomes much more problematic 31:02 in terms of the effective degrees of freedom in time. You know, how many independent observations do I actually have in a sequence of fMRI data that have been sampled so quickly that the noise, or the random fluctuations at least, are contaminating each successive acquisition slice, for example? And that really rests upon understanding the dynamics of the signal. And understanding the dynamics of the signal can go many different ways. But mathematically, you have to do it. That which I cannot create, I do not understand. 
So you have to have a mathematical model of the fluctuations that are induced by experimental design, the effects of interest, relative to the random fluctuations that are not. And therefore, we need to understand neurovascular coupling. So now you need a model, a generative model, of the way in which neural activity causes 32:01 what you observe, the BOLD signal, which is through a hemodynamic response. That's a dynamical thing. There were, at that time, hemodynamic models that would link blood flow to BOLD, known as balloon or Windkessel models, pioneered by people like Richard Buxton and Mandeville and other colleagues. But there wasn't a full hemodynamic model that ran from neural activity, induced experimentally, through to the hemodynamic response. So we had to build these models. And when building these state-space models, or differential equations, if you pinged these things, then they have the functional form of a hemodynamic response function. You can take a first order approximation to these usually nonlinear responses, 33:00 because these hemodynamic models have an inherent nonlinearity, say, due to the elasticity of the vascular architecture and the like. But to first order, you could then treat this as a response function. And it would be the hemodynamic response function. And it became the canonical hemodynamic response function, simply because its functional form was so conserved over brain areas and over people in humans. And that in turn, well, how do you put that into a model? And you can go one of two ways there. You can sort of remain in the world of differential equations, state-space modeling, and that ultimately became known as dynamic causal modeling. Yeah. Or you could say, OK, let's just take a first order approximation and put it into a general linear model. How do you do that? You do exactly what you just said a few minutes ago. The stick function, yeah. Exactly. 
So your stick function now stands in for the sort of neuronal perturbation, the experimental effect 34:02 you've induced by experimental design. And when you convolve it with a canonical hemodynamic response function, you are now basically replicating what you would have done if you'd integrated all the dynamics and the neurovascular coupling into a full state-space model or dynamic causal model or that same kind of thing. And that then became known as the classical GLM, but it's actually a convolution model. Because by convolving your stick functions, you're actually creating a good model of the hemodynamic response to a neuronal perturbation or input. But the story doesn't end there, of course. Just because you've now got a good model of your signal, you still have to contend with the correlations amongst the noise, the random fluctuations, which now led to the notion of the effective degrees of freedom. And that required a reapplication of the theory 35:03 of stochastic processes that have these correlated structures in them, the theory that was used to do the spatial correction, effectively looking at spatial degrees of freedom when assessing differences in statistical parametric maps. And that's what we applied in the context of the temporal behavior of fMRI time series. So there was an interesting sort of repurposing of the maths that we developed for dealing with spatial correlations, spatial smoothness, to handle temporal or serial smoothness in fMRI time series. Interesting. It got complex pretty quickly. And I guess deconvolving from BOLD signal back to neural space is probably even more complex. 36:05 Yeah, you're normally assuming serially uncorrelated errors. However, it's not really a problem in the general spirit of things, because when you invert a general linear convolution model, you are doing the deconvolution. So you don't need to actually estimate the neuronal response. 
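To make the "stick function convolved with a canonical HRF" idea concrete, here is a minimal sketch. The double-gamma shape below (an early positive gamma density minus a smaller, later one for the undershoot) resembles SPM's canonical HRF, but the parameter values and function names are illustrative assumptions, not SPM's actual code.

```python
import numpy as np
from math import gamma as gamma_function

def gamma_pdf(t, shape, scale=1.0):
    """Gamma probability density, defined as zero for t <= 0."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    pos = t > 0
    out[pos] = (t[pos] ** (shape - 1.0) * np.exp(-t[pos] / scale)
                / (gamma_function(shape) * scale ** shape))
    return out

def canonical_hrf(t):
    """Illustrative double-gamma HRF: early peak minus a late undershoot."""
    return gamma_pdf(t, 6.0) - gamma_pdf(t, 16.0) / 6.0

dt = 0.1                                   # time resolution in seconds
t = np.arange(0.0, 32.0, dt)
hrf = canonical_hrf(t)

# Stick (delta) functions marking neuronal events at 2 s and 20 s.
sticks = np.zeros_like(t)
sticks[int(2.0 / dt)] = 1.0
sticks[int(20.0 / dt)] = 1.0

# Convolving the sticks with the HRF gives the predicted BOLD regressor
# that would enter one column of the general linear model.
regressor = np.convolve(sticks, hrf)[: len(t)] * dt

# The response to the event at 2 s should peak roughly 5 s later.
peak_time = t[np.argmax(regressor[: int(12.0 / dt)])]
print(round(float(peak_time), 1))
```

Comparing such a regressor with the measured time series at every voxel, by least squares, is the "classical GLM, but actually a convolution model" that Karl describes.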
All you need to do, again referring to this notion of the importance of a generative model underneath the data, in the spirit of "what I cannot create, I do not understand": you actually have to build something, a model of it, before you understand it. You build it. Yeah, makes sense. I knew deconvolution from PPIs as well. So I know that in general task-based fMRI analysis, in the GLM sense, 37:03 you don't need to deconvolve. You construct, based on the neural, behavioral, or perturbational task data, essentially a fake BOLD signal, an estimated BOLD signal, and then you compare that with the real, true BOLD signal at every voxel in the brain and get a beta estimate. That's my basic understanding of the idea, which I still find very brilliant. And you did mention, when we talked about the PET data before, the control condition. I think the general idea of doing something like this, even STEM, and I think that's something that's really interesting, is that you have two different conditions that are almost the same, but there's one thing that's different, right? And then you subtract them from each other. So I guess that is probably one of the big principles that, 38:00 I don't know if you had to invent it, or come up with it, to even make sense of PET and fMRI data. Yeah. Again, that's a really insightful question. Before I answer that by telling you little stories about how we and others addressed it, I just want to emphasize your important observation that having a forward model, a generative model, a model that maps from causes to observable consequences: if that model is a convolution model, then inverting or fitting that model is a deconvolution operator. So that's what I meant by saying we're always deconvolving, even in our perceptual senses. 
EEG source reconstruction or fitting fMRI general linear convolution models, these are all deconvolutions, but they're deconvolutions simply because we are inverting or fitting a convolution model. So that's why you don't need to do the deconvolution explicitly. 39:01You just need to invert the right kind of generative model, and in this instance it was a convolution model. That's such a general principle, which has stuck with me and, I think, all of my colleagues through the decades, that I think it's worthwhile saying it out loud using your words. But the other fascinating... Yes, you're absolutely right. And of course, we were not alone in this. So we're talking about the nascent field of human brain mapping as we know and love it, pioneered by work by people like Marcus Raichle and Peter Fox and Steve Petersen and other colleagues at WashU in St. Louis, and lots of other people around the world, Alan Evans and his colleagues in Montreal among them. Well, I won't list them all, because people will be upset if I don't list them. But just to say there was a community that led to the Organization for Human Brain Mapping, because before that, the only place that we could talk to each other 40:02was the Society for Cerebral Blood Flow and Metabolism, whose members were interested in stroke and in the neurovascular architecture and lesions to it. So a lot of these people initially got together to talk about these problems at the Society for Cerebral Blood Flow and Metabolism. And it was only after a few years that the Organization for Human Brain Mapping 41:02was formed, as quite a small society, just to allow people to talk about these issues. And the key issues, of course, at that time were experimental design.
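The point that inverting a convolution model just is deconvolution can be made concrete with a toy forward model: if the observed signal is y = C u, with C a convolution matrix built from an HRF-like kernel, then fitting u recovers the "neuronal" input. The exponential kernel and ridge penalty below are assumptions for illustration, not anything from SPM:

```python
import numpy as np

n, klen = 80, 20
h = np.exp(-np.arange(klen) / 4.0)  # toy exponential "HRF" kernel (assumption)

# Build the convolution (Toeplitz) matrix C so that y = C @ u
C = np.zeros((n, n))
for j in range(n):
    tail = h[: n - j]               # kernel, truncated at the end of the scan
    C[j : j + len(tail), j] = tail

u = np.zeros(n)
u[[10, 30, 55]] = 1.0  # "neuronal" stick events at known times
y = C @ u              # forward model: convolution of sticks with the kernel

# Fitting/inverting the model IS the deconvolution (small ridge for stability)
lam = 1e-3
u_hat = np.linalg.solve(C.T @ C + lam * np.eye(n), C.T @ y)
```

The estimate `u_hat` puts its largest values back at samples 10, 30 and 55: no explicit deconvolution step was needed beyond inverting the convolution model.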
And you're absolutely right that the very early subtraction studies did appeal to Donders' notion of cognitive subtraction, which you could think of as underlying the very first brain mapping experiments, in the sense that he was looking at the temperature of the skull overlying the functionally specialised responses with and without ringing bells, for example. So immediately... And then, ultimately, that started to lead to a more principled approach to experimental design, and an ontology of designs that survives today as the basis of good experimental design. And I should add, it is sometimes violated by sort of new wave approaches as each generation comes along and learns the skills of the trade, 42:04but eventually they all return to this kind of ontology of experimental design. And the first important move is to go beyond simple subtraction designs, where you've got one activation condition and one baseline. There, you're basically doing a careful task analysis in order to identify the one task component or processing component that differentiates between the two conditions, and then you can attribute the difference in activation to that particular task component. What becomes more interesting is when you turn to factorial designs, when you've got two factors. Because then you can test the assumption of what's called pure insertion, which is where most of the original debates and arguments and advances played out, largely due to Cathy Price, who then became famous in language 43:00and stroke recovery research. If you now look at factorial designs, then you immediately have to develop a sort of mental image of differences of differences, which would be the interactions, and of the importance of making sure that your experimental design is balanced, in order to efficiently estimate the interactions that test the assumption of pure insertion.
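The factorial logic above, with the interaction as a "difference of differences" testing pure insertion, can be sketched as a toy design matrix. All the numbers below (cell sizes, effect weights) are made up for illustration:

```python
import numpy as np

# Two factors A and B, 10 observations per cell of a 2x2 factorial design
cells = [(0, 0), (1, 0), (0, 1), (1, 1)]
A = np.repeat([c[0] for c in cells], 10).astype(float)
B = np.repeat([c[1] for c in cells], 10).astype(float)

# Design matrix: main effects, interaction ("difference of differences"), mean
X = np.column_stack([A, B, A * B, np.ones_like(A)])

# Simulate data where insertion is NOT pure: A's effect grows when B is present
rng = np.random.default_rng(1)
y = 2.0 * A + 1.0 * B + 1.5 * A * B + 0.2 * rng.standard_normal(A.size)

beta = np.linalg.lstsq(X, y, rcond=None)[0]
interaction = beta[2]  # contrast [0, 0, 1, 0] tests pure insertion
```

A non-zero interaction estimate is exactly the failure of pure insertion: the added component changed the context of the task at hand. Making one factor continuous instead of binary is the move to a parametric design within the same machinery.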
By pure insertion, what I mean is: if I add this cognitive process, this task component, to this compound task, is the extra activity that I see purely explained by the addition of this component? Or have I changed the context in some way, so that there's now an interaction between the added component and the task at hand? And the only way to really address that is to have a factorial design. 44:02So that was the rise of factorial designs. And then you ask yourself, well, what happens if one of the factors is not easily binned into categorical things, like black-and-white versus colour, or rewarded versus unrewarded? What happens if it's more subtle, like the amount of reward... Yes. ...or the brightness of an image? Then you move into parametric designs. So you've got these two moves you can make. You've got your standard subtractive design, which can be embellished with multiple factors into a multifactorial design, and any one of those factors can now be made parametric. And all of this had to go into the general linear convolution model. So, you know, even if you're just dealing with PET, you immediately move from analysis-of-variance kinds of models, with indicator variables saying this is condition one, this is condition two, this is condition three, to analysis of covariance, 45:00where you now allow for parametric regression. So I guess Excel was not good enough anymore at that point. What was next there? Yeah, so at this point we had moved to MATLAB. Yeah, absolutely. Why MATLAB? It's interesting now with, you know, all the Python buzz. I'm a dinosaur already; I still work with MATLAB quite a lot. But, you know, I remember loosely 46:02that there used to be an academic solution back in the day, what was called a matrix laboratory, which was then commercialised by MathWorks, I think. Was that a deliberate choice? Why MATLAB? Was it the only tool powerful enough to handle this? Or, yeah, how did you choose it?
Yeah, I guess that is, I think, a very pressing question in relation to the current conversation in the community. You can now do SPM in Python, which seems to please a lot of millennials like you. I can't quite get into the joy of that, because I still work with MATLAB. But you're absolutely right: at that time, MATLAB was the academic software. And actually, I might argue it remains the academic software for true academics, not in machine learning and not in getting your chatbot to write code for you; you'd have to go to Python for that. But when it comes to sort of high-end academic inquiry, 47:03MATLAB was seen that way, and still is to a certain extent. Among software engineers now, I think something like 56% use Python and only about 3% to 4% use MATLAB anymore. But in academia, I think you'd probably find many more people use MATLAB. And certainly at that time, it would have been 100%. Mm-hmm. So why MATLAB? Well, it was a third-generation, higher-order kind of language, so it meant you didn't have to learn C or C++. You could actually get straight in there. And crucially, you could write down the expressions that you would find in a statistical text on how to analyse your data. So you could literally write down the expressions you would find in a text; it didn't exist in those days, but your standard text on the general linear model, for example. 48:02So it was this very high-order language that allowed you to express code in a way that was readable by somebody else, in reference to their understanding of what they were creating as their generative model. So it was pedagogically a really important move. We could have written it in C, which is much, much faster and more efficient, but it would have had absolutely zero educational value. It would not have been useful in socialising the ideas and explaining what you are actually doing under the hood.
And it's interesting, I think, that you bring up the pedigree of MATLAB. So before it was commercial, you're absolutely right, it was written by people like you and me, in the early days of X-ray crystallography, I think in New York, though I may be wrong. 49:01People with our academic aspirations and funding were suddenly confronted with this massive X-ray crystallography data: how on earth do you make sense of this? And what they needed, of course, was some way of doing really fast Fourier transforms, not dissimilar to tomographic image reconstruction in positron emission tomography, or indeed fMRI image reconstruction. So they actually built this software that was super fast and really accessible for fast Fourier transforms and for handling large matrices, which was, of course, perfect for brain imaging, which had very large matrices. So it was a natural choice, and it was in the spirit... You know, we were using MATLAB, and I don't know that we even had to pay for it initially. It was academic software. I can't remember; this is probably a made memory or a false memory, 50:00but I think the notion of paying for it, or paying for a licence, only came in a few years later, when people realised that suddenly there was an uptake in this community. And indeed, nowadays, literally this month, MathWorks have started to fund, or to look at funding, SPM development, because it is still used and developed.
covers the toolbox, everybody knows that. But who else played a role, and how did it all come about? I assume it was probably small, humble beginnings, mainly written for yourself or for internal use? No, that's absolutely right. It was 51:03just small internal use, literally small, with small Excel spreadsheets and small Macintosh computers, for one or two other people: long-term friends, people like David Brooks, and colleagues in Belgium doing radioligand receptor binding studies looking at Parkinson's disease. And then, when handling larger data of the sort required to analyse the schizophrenia data that we were then asked to analyse, we moved to MATLAB and a more expressive environment. And you're absolutely right, it was just a handful of people who were rushing to ensure that there was a pipeline that could make optimal sense of these data, by sticking to first-principles accounts and, you know, good practice in terms of a generative model, and Bayes-optimal or 52:03maximum-likelihood solutions, or inversions, of those generative models. So Andrew Holmes was a really key player in those days. He subsequently got seduced into pharma and industry and is probably very, very rich somewhere in the north of England. But originally he was inspirational in terms of co-writing the code and making sure that the implementations of the general linear models under the hood in those early code bases were right. 53:10And then John Ashburner came on board, about a year or so after this sort of Excel-level implementation, to work on spatial normalization. So he was another key player.
So all of these characters, with the exception of Andrew Holmes, are still on the scene. Some of them are pursuing a sort of open science, educational, pedagogical role. Some are still deeply committed to generative modelling and computational anatomy, like John Ashburner. And me, still sort of carrying the torch. And John... So John Heather was another name I read. Yes. Yes, I forgot to mention John Heather. Yes, absolutely. Where did you get that from? I'm not even sure. I think from the SPM book, I have that in my cupboard actually, 54:00the old red one, or from the website. I'm not sure. But I have it in my notes here. Yes, I forgot about John. So John was, yeah, what would be called a data scientist nowadays, a sort of software engineer and systems administrator. No, more than that: he was a sort of paternal figure to all these young twenty-somethings who were actually writing software at that time. I think he ended up in Thailand, having a nice time. Interesting. I haven't heard from him in decades. John Ashburner, in my naive view looking back, I always associate mainly with the spatial things, right? Normalization, co-registration. Is that Dartel in the end, and Shoot, and before that Segment, the unified segmentation? I mean, is that accurate? Yes, that's what I meant by the computational neuroanatomy. Yes, he was interested in the spatial transformations. 55:03And, as you say, that list of subsequent developments are all refinements of the way that you generate an image of a particular person's brain.
So again, under the hood of all of those developments that John has been pursuing over the decades is a commitment to normalizing, registering, classifying and segmenting the brain that has come from this person by generating that particular person's brain: by starting with a generative model that is a canonical brain and then warping it in the right kind of way, in a physically plausible way, until it matches the observed brain. And then, again, it's this notion that you can do deconvolution by inverting a convolution model; here, you can unwarp a brain by inverting a warping model. So if you start from a standard atlas, for example as supplied by the work of people like Alan Evans, 56:01the MNI standard atlas, and then warp it in the right kind of way, using things like Dartel and Shoot and further, more biologically plausible ways of shaping and warping brains, to make it match this person's brain, then you've solved the problem, provided you can do the inversion. Yeah, interesting. Makes sense. And we use that every day, of course. In the PET era, you probably did linear registrations, right? There's not enough contrast. Well, that's certainly true for most applications. However, the early application of this sort of generative modelling approach, you know, taking a template and warping it, did actually use nonlinear basis functions, but they were very low order.
The reason that I may falsely remember that 57:01is that when we came to spatially normalise the schizophrenia data, the data from the people with schizophrenia, there was an enormous variability of a nonlinear sort across patients, some having particularly large ventricles or cortical thinning. Sure. So it was a very heterogeneous cohort, and we deliberately wanted that: we wanted that within-cohort variability, to look at parametric effects and correlates within the syndrome of schizophrenia and schizophreniform disorders. And so we actually did need to use nonlinear warps, but the degree of nonlinearity is easily tailored by the number of spatial basis functions that you use in order to do the warping. So the warping fields can be very, very smooth: in those days, a linear mixture of low-order spatial basis functions, things like discrete cosine sets, and you can truncate the set to control the degree of smoothness. 58:02So, yeah, a lot of the very earliest spatial normalization was literally an affine normalization, a linear 12-parameter transform based upon landmarks identified on the ACPC line, thanks to people like Peter Fox and the St. Louis group. But spatial normalization proper, in the sense we're talking about, started off gently nonlinear and has become increasingly nonlinear, accurate and precise as the years have rolled on. And the ACPC registration has, of course, a deep history in stereotactic surgery, right? Which, you know, this podcast is often about: brain stimulation and the like. It was the Talairach atlas originally, I think, created for clinical purposes, but that then became the first reference space, right? Before the... Absolutely. Yeah. So that nods to the heritage, you know, the clinical heritage of cerebral blood flow and metabolism and its society and all the clinicians 59:02who were involved. So you had physicians, but you also had surgeons.
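The truncated discrete-cosine basis sets mentioned above can be illustrated in one dimension: a smooth warp expressed as a low-order mixture of DCT basis functions, where the truncation order controls the smoothness. The order and coefficients below are arbitrary; real SPM normalization is 3-D and far more elaborate:

```python
import numpy as np

n_vox, order = 64, 4  # truncation order controls the smoothness of the warp
x = np.arange(n_vox)

# DCT basis: column k is cos(pi * (2x + 1) * k / (2 * n_vox)), k = 0..order-1
B = np.cos(np.pi * np.outer(2 * x + 1, np.arange(order)) / (2 * n_vox))

coeffs = np.array([0.0, 2.0, -1.0, 0.5])  # warp parameters (made up)
displacement = B @ coeffs                  # smooth displacement field, in voxels
warped = x + displacement                  # where each voxel maps to
```

Because only the first few basis functions are kept, the displacement field cannot wiggle faster than its highest retained frequency; estimating `coeffs` so that the warped template matches an individual brain is the normalization problem, and `order` is the knob that was kept very low in the early days.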
And so, you know, that was, and remains, I imagine, the reference brain for human brain mapping. And I mentioned the MNI space, which was basically Montreal's version of a Talairach frame of reference. That, as you say, is, and was, predicated on the AC-PC line, from the anterior commissure to the posterior commissure. And I imagine most functional neurosurgery nowadays uses exactly the same reference plane, which becomes particularly acute when it comes to all the subcortical machinations that you love. And that's, yeah, I mean, it is interesting to me, you know, because in my comparably young history, I came from the human brain mapping field and then traversed into the more clinical realm and 01:00:01became interested in the ACPC space. So I learned about the MNI space first, and then the functional coordinates, which are simpler, which are really just three landmark points in the brain and then, relative to those, a normal Euclidean coordinate system. And I always thought the nonlinear form of an MNI transform makes more sense: it would make things more comparable, and maybe more precise. And I think, as time goes on, there are some targets, for example Helen Mayberg's target for depression in the subgenual cingulate, that are just too far away from the ACPC line, or too variable with regard to a linear ACPC transform, so that people use the MNI coordinates more and more. But much of the subcortical, subthalamic brain stimulation work still reports standard ACPC coordinates, where, you know, a coordinate 15 millimetres out could sometimes land lateral and sometimes medial to the nucleus, 01:01:01because there's no correction for it. And I think the use of these nonlinear transforms, where a coordinate is transferable across brains, is coming more and more to the clinical field as well, right? Where... yes.
So, but up till this day, it's not always being used, and surgeons sometimes mistrust nonlinear transforms. Oh, I didn't know that. Right. I mean, from my perspective, and I imagine from the perspective of people like Alan Evans and his colleagues, the attempt was basically to get a group of spatially registered, neurotypical structural images into the Talairach space. The whole exercise was to provide an image-based, pixel- or voxel-based atlas that would go hand in hand with the Talairach atlas 01:02:01per se, which was based upon a single individual, but conceptually committing to exactly the same frame of reference and the same origin. And, you know, from our point of view, in terms of spatial normalization using nonlinear deformation fields (I think Mike Miller was also working along these lines, and Fred Bookstein; it's a long time ago now, and these are names it would be courteous to mention, though I may be misattributing them), the agenda was basically to get every brain into this Talairach space as nuanced through the MNI averaging. So there was a linear affine transformation from the MNI space to the Talairach space as defined by the Talairach atlas, which some people worried about and some people didn't. But the whole point was to get this person's brain, by inverting the nonlinear warp, 01:03:00into this linear space. And that's exactly the unwarping that was solved by starting with the Talairach space, as embodied in the MNI brains, as the generative model for the spatial normalization.
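The linear step in this discussion, a 12-parameter affine between spaces, is just a 4x4 matrix applied to homogeneous coordinates, and "unwarping" in the linear case is applying its inverse. The matrix below is a made-up example (translation plus anisotropic scaling), NOT the published MNI-to-Talairach transform:

```python
import numpy as np

# A toy 4x4 affine: scales each axis and shifts y and z (made-up numbers)
A = np.array([
    [0.99, 0.00, 0.00,  0.0],
    [0.00, 0.97, 0.00, -1.0],
    [0.00, 0.00, 0.92,  2.0],
    [0.00, 0.00, 0.00,  1.0],
])

def apply_affine(A, xyz):
    """Map a 3-D coordinate through a 4x4 affine in homogeneous form."""
    return (A @ np.append(xyz, 1.0))[:3]

mni = np.array([10.0, -20.0, 30.0])       # a coordinate in the source space
tal = apply_affine(A, mni)                 # mapped into the target space

# Invertibility is what makes going back trivial in the linear case
back = apply_affine(np.linalg.inv(A), tal)
```

A full 12-parameter affine adds rotations and shears to the scalings and translations shown here; nonlinear normalization then refines this linear map with a deformation field.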
So, you know, it's a shame that people are suspicious of the nonlinearities, because the whole point of spatial normalization is to remove the nonlinearities, to get you back into a nice ACPC frame of reference. And I guess it's a matter of training, right? I mean, surgeons might be worried for good reasons, because of course these transforms can be misleading and wrong sometimes, you know, and it's about precision and millimetres. But my perception is that younger neurosurgeons who have trained with these techniques and done them themselves are less worried. As always, it's a tool, right? You can use it correctly and wrongly. 01:04:00But, you know, back to the history here. So when did the MNI space come in? You mentioned Alan Evans. The MNI305 space was the first one, is that correct? Yes, yes. It was a linear average, if I remember. Yes. Okay. Yeah. And then the more famous MNI152, which had nonlinear warps as well; I think there's a linear version and a nonlinear version. How did that come about? It's also interesting in terms of timing: that must have been a time when SPM was already a thing, probably used worldwide, when you started collaborations with Canada. Who were the players? How did that come about? Well, yeah, the commitment to those particular spatial coordinates was multilateral. There are lots of players. We mentioned the St. Louis group; there was the Los Angeles group, John Mazziotta's team, Art Toga and the like, 01:05:01and all the work being done by their colleagues. And Roger Woods, I think, was a player; he had something called AIR. I can't remember what the acronym stands for.
And then you have the NIH people: Jim Haxby as a young man, and Leslie Ungerleider, and subsequently people like Peter Bandettini joining them. So these were the players, along with many, many others, though not that many: you could fit them all into a large room. I think there was a joint commitment to this particular space. And we all used the hard work done by the Montreal group in creating these disseminable atlases, averages, that served as templates, if you like, for this generative modelling or spatial normalization. And I should say that the uptake was not just in terms of data analysis, 01:06:00but also in establishing the very first examples of open science databases. You could argue that SPM was the first example of open-science data analysis software, but in terms of databases per se, of the kind that were emerging at that time under the auspices of the Human Genome Project, Peter Fox's initiative to collate the accumulating evidence for different kinds of functional specialization in an atlas format required a commitment to a particular standard reference frame. And that was the MNI version of the Talairach space. So he played an enormous role. I remember yearly meetings in San Antonio where we'd all gather and talk about how to standardize things, because these questions also came up 01:07:00at the cerebral blood flow and metabolism meetings and the very, very early Organization for Human Brain Mapping meetings. So it was a general consensus among a not particularly massive number of people. This was a small field, you know: not everyone could afford to do PET, and no one really had easy access to fMRI at this stage. And if you did, you'd have to pay a lot of money for it.
So, you know, it wasn't difficult to reach consensus. It was all about trying to present ourselves as a coherent, collegiate community with valid internal standards. That was, interestingly, quite important at that stage. I think people like Peter Fox recognised this; I only really recognised it post hoc, because there was a lot of competition from other fields. And one interesting perspective and story 01:08:00I remember from that era: when those very early PET studies came out, it would look, from a retrospective position, as if you were just picking the low-hanging fruit from what was, at the time, pioneering work, the first demonstrations of functional specialization in the brain. From our naive, prospective point of view, this was cutting-edge science. And because of that, one had the audacity to submit everything to Nature. And of course Nature accepted everything. So it was quite normal, well, I'm not sure I ever had a weekly Nature paper, but it was quite normal for the community to have their weekly Nature or Science paper. That was the way the science worked. We were doing cutting-edge science. But of course, if you'd spent decades in monkey electrophysiology 01:09:00trying to address similar questions, training your students to keep monkeys alive and do invasive electrophysiology and all of that, you were horrified, because your experiments would take two to three years: to train the monkey, to do the recordings, to analyse the data. You'd have two or three monkeys at most. Two or three years later, you'd write your Nature or Science paper. But we were doing it every week, because it was so easy to acquire the data.
So there was a lot of under-the-hood angst from the non-brain-imaging, existing neuroscience community about the potential hype of brain mapping at that time. So there was a lot of pressure on us to establish our rigour and, I repeat, to show that we were using standardized techniques and well-validated good practice. Part of that involved a common commitment to standard spaces, interoperability and communication, 01:10:02and part of that was all committing to the same canonical space within which to report our results. Of course, the other part was the really detailed work done with people like Keith Worsley and Andrew Holmes on the correction for multiple comparisons, because that was a very big thing. You know, if we didn't get that right, we could be attacked by all the detractors of brain imaging at the time, who were numerous, ranging from people quietly muttering in neurophysiology labs to people like Fodor, big people in the philosophy of science, possibly philosophy of mind, critiquing brain imaging, with lots of arguments about false positives and the lack of statistical validity. So all of these things had to be countered. And that meant that, unlike nowadays, I have to say, there was a lot of pressure on being statistically extremely valid and rigorous in the way that you reported your results. 01:11:04You couldn't say, oh, this looks like the default mode, and I've done a bit of representational similarity analysis, and this looks a bit similar to that. You had to be very, very precise in terms of protecting yourself against false positives, simply because any excuse that a peer reviewer from outside the field had to reject you, they were going to use, because they saw you sucking up all their resources.
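The multiple-comparisons problem Worsley and Holmes worked on can be illustrated with the max-statistic idea, shown here in a nonparametric, sign-flipping form (a simplification; the corrections of that era were based on random field theory, not permutation). All sizes and effect magnitudes below are invented:

```python
import numpy as np

rng = np.random.default_rng(42)
n_sub, n_vox = 16, 500
data = rng.standard_normal((n_sub, n_vox))  # null data everywhere...
data[:, 0] += 2.5                            # ...except one truly active voxel

def tmap(d):
    # One-sample t-statistic at every voxel
    return d.mean(0) / (d.std(0, ddof=1) / np.sqrt(d.shape[0]))

t = tmap(data)

# Null distribution of the MAXIMUM t over all voxels, via sign-flipping
max_null = []
for _ in range(500):
    signs = rng.choice([-1.0, 1.0], size=(n_sub, 1))
    max_null.append(tmap(data * signs).max())

thresh = np.quantile(max_null, 0.95)  # 5% family-wise error threshold
sig = np.where(t > thresh)[0]          # voxels surviving correction
```

Controlling the distribution of the maximum statistic is what keeps the family-wise false-positive rate at 5% across all voxels; random field theory achieves the same goal analytically, by exploiting the spatial smoothness of the statistic image.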
All that space, all those PhD students: you were basically a competitor. So even though there were weekly papers, it was not easy, right? It was still hard work, I'm sure. My doctoral father, Felix Blankenburg, who trained with you back in the day, I'm sure you remember him. He once said, and this was in my early days, so I hope I have it right, that you sometimes dictated papers. Is that true? Well, by the time Felix was intimate enough with me to know, that was absolutely true. 01:12:02Yes. That's very impressive, to have the mental capability of dictating a paper. Oh well, if you don't do that, you should certainly try it. I find that you're much more coherent and natural when you're actually speaking to something or somebody. So if you pretend you're speaking to somebody, try to explain it to your mother or to your student, the actual narrative that comes out, and the structure of the text, is much more convincing. I find, if I want a really important paragraph to look good, I could type it on the keyboard in a few seconds, but I don't. I deliberately get up and walk around with a dictaphone. It means I have to think about what I'm saying. Yeah, so it makes sense. I heard this before my first paper; back then it wouldn't have been an option for me to try and dictate, but now, you know, you're right, I should maybe try it. 01:13:01Good point. So one other question: was there ever an SPM1, or is the early sequence better understood as SPM classic and then 91, 94, 95, 96, 99 and so on? I think so, yes. Again, you'd probably best ask people like John Ashburner, who are slightly younger than I am. I'm getting to an age where most of my memories are probably made memories. But I think that's absolutely right. Yeah.
You know, SPM classic was just the code that we wrote in MATLAB and gave to friends. And, you know, you probably can't remember, because you were probably still at school at that stage, but this was the era before email. It was certainly before social media, 01:14:00but even before email. So this was the era when, if you wrote a paper, you couldn't email it to the editors in PDF format. You couldn't use PDFs and computer graphics to make your figures. You literally had to cut out photographs and paste them on; I've still got some early examples from the very first functional specialization papers. And then the papers would have to be hand-typed, assembled, and sent by snail mail to the editors. The same with software: you couldn't communicate software electronically. That meant physically carrying it, in those days on cartridge magnetic tapes. And you know, it sounds like a grand move nowadays, and in retrospect people will tell a story that this was the first commitment to open science, that it subverted the commercialization of data analysis software, and that it was the right way to socialize 01:15:01and democratize data analysis. At the time, it was just everybody helping everybody else. In one particular instance, it was basically me being sent with my code on a quarter-inch magnetic tape to Leslie Ungerleider and Jim Haxby and Barry Horowitz at NIH, and then they'd copy it onto their computer and see if they could get it to work too. So this was in the days before people like Bob Cox and AFNI. It was really: we need some software to analyse these data, because we don't know what to do with them.
Karl, you know: "Richard, can you send Karl across to see what's what?" Okay. So at that time, you were in Richard Frackowiak's lab, when you really built SPM? Yes. Richard Frackowiak should, I think, be acknowledged as the group leader who was in charge when all of these innovations and developments were first conceived 01:16:03of and prosecuted. Absolutely. Yeah. And then, you mentioned the first version was just given to friends. Was the first version that was perhaps meant as a real toolbox SPM91? Was that the first time people outside your immediate circle of collaborators could find it somewhere openly, or when did it become accessible? Probably only with the internet, right? I can't remember, but I certainly do remember, one year, living in the basement of Jim Haxby's house for two weeks, teaching them how to do SPM analysis, and the next year answering questions on the SPM help line. But I don't know; you'd have to ask John Ashburner. I think he'd remember better, because at that stage he loved the computer science side, and he'll know who made that happen. Yeah. There are tons more questions I would have, but in the interest of time (I typically stop with rapid-fire questions to round up the conversation), there's one last question that is more personal, which is about legacy; maybe you can reflect on that a bit. When I started in the field in Germany, my mentors at the level one generation above me were almost all trained at the FIL.
So everybody with a name was there, or had been there. I mentioned Felix Blankenburg, but there are so many: Cornelius Weiller was my other doctoral father in Freiburg, 01:18:00 and then Stefan Klöppel in Freiburg, Christian Büchel — but there were so many more; I'm just listing a few random names here. So at the time, really — and I mean, it's still one of the most important centers — but at the time it was really the cradle of fMRI analysis, with of course a few other centers, like maybe the Martinos Center, but I think there were only a few, and you mentioned some. How did that feel, then, growing older and just seeing everybody thrive, and these spin-offs, and having created or co-created such a field? Can you reflect a bit on that? Well, in retrospect it is now immense pride, but at the time it was just the natural way of things. I think you're absolutely right — it emerged, perhaps, in context: much of what we were talking about started at the MRC Cyclotron Unit in Hammersmith. And then a few years later — 01:19:01 during which time I took a sabbatical with Gerald Edelman at the Neurosciences Institute in America — I returned, and within months we moved into the FIL, the Functional Imaging Laboratory, under the auspices of the Institute of Neurology, which was subsequently taken over by University College London. So the FIL that you're talking about — the longer period under, again, the auspices of Richard Frackowiak, when it became established as a world centre in brain imaging and in image analysis and modelling of that particular kind — that was absolutely its heyday. And it started off, exactly as you say, in terms of just training up the next generation of people to use the software that was being developed at the time.
And after a few years, that just became a habit. I do remember saying that our job, basically, is a finishing school. 01:20:00 So we used to take the brightest young people, usually from Germany or France — and I remember all the Germans; there were so many excellent Germans there. We just took the brightest young people — a surprising number of them actually clinically trained — young theoreticians or psychologists. They weren't necessarily mathematicians or statisticians; they were from the human sciences and clinical sciences. And we were a finishing school. So we expected people to come in and do three years, probably at most. Sometimes people hung around for five years, and very occasionally one of them stayed and became a senior, but it was quite a small unit with small groups. So we weren't really trying to build a little empire; we really saw ourselves as a finishing school. And we had great fun doing it, because at the time — all these people, Felix and Christian and Colleen, were coming through — all of the issues 01:21:00 confronting these young scientists had to be solved. And so we were, every day, solving one problem or another, instantiating that in the software — hence, you know, SPM 91, 94, whatever — and at the same time skilling these people, but also learning from them through their problem solving, using the questions and problems they posed as a focus for the direction of travel, for what should be developed. So you're always responding to needs. It's always a game: how do you best enable? And you best enable by teaching, disseminating, and responding to questions and needs.
And that just became a sort of didactic, ideological exercise. And that was the FIL for decades. We were a paper mill — we've been called a paper mill, and lots of other things, "Neuro Disney," the sort of place people go to for a bit of fun for a few years. And then we all grew a little bit older. But it did go on for a long time, and it was a great experience. And I think that's the spirit of it: just training generation after generation, as you say. It's now delightful to look around and see that what you thought were your young people are now basically in charge of the world. Yeah, absolutely. Really fantastic. Did you ever regret writing SPM, or essentially making an open toolbox? Because there's sometimes misuse, right — people just pressing buttons, not understanding the more complicated things. Was there ever a doubt of "I put a tool in the hands of fools," or not really? No. No. Great. I do wonder about that sometimes with the Lead-DBS 01:23:00 toolbox, but I also often come to the conclusion that it's at least a net positive. It is a good thing. But yeah, I have seen DCM papers that were maybe not the best ones. Right. And probably if you read them, you think, "Well, I don't believe that." But yes, I see what you mean. Certainly there is a certain sense of responsibility when people apply these procedures without foundational training in how best to leverage them to answer their questions. But I tend to be quite forgiving. The majority of papers I review, I treat as training or PhD papers. So you're quite forgiving, and you just try to provide review comments.
So the next time they do it, they'll be slightly closer to good practice — which is often set out, and, 01:24:00 you know, some people write papers about it: "ten simple rules" for this and that, or good-practice guidelines — stepping up very energetically to try to get people to use DCM in the right kind of way. But everything is a learning experience, and I think you just have to be forgiving sometimes when people publish their learning experiences in the open literature. Just a few rapid-fire questions, if I may. What was a true eureka moment in your career that you remember? Yeah — they're pretty frequent. There are several that come to mind, but I'm going to pick one which speaks to my later career in theoretical neurobiology. When I read the early accounts of predictive coding by people like Dana Ballard and Rajesh Rao, I suddenly realized it was the same objective function that was underwriting inference and learning. 01:25:01 And that, to me, was very neat mechanistic, mathematical psychology. So for me, that was one of many insights which actually came from reading somebody else's paper. There are lots of others. I'm sure. Let's leave it at that. I once heard you say that active inference 01:26:02 secretly stands for AI. Is that true, or is it post hoc reasoning? I don't think it's post hoc, to be quite honest. There are a number of ways you could have formulated, or sold, or pitched the application of the free energy principle to active vision or active sensing. You could have also called it the free energy principle for sentient behavior. The trick was — again, very much like that example of the same objective function underwriting inference and learning — active inference just turns upon the insight
that it's exactly the same objective function that underwrites action and perception. And it's an objective function 01:27:00 that lends itself to interpretation in terms of inference and Bayesian belief updating. So we needed to have action and some kind of perceptual inference in the title, and active inference was the obvious one. But I do remember thinking, "That's good — that's good, because it sounds like AI." Okay, love it. But what has actually happened now is that people don't use that; they write "AIF," for the active inference framework, to disambiguate it from active inference. I never came up with that, and I didn't quite know what it meant when I first read it — "AIF." But now you see it in peer-reviewed material. To distinguish it from AI, you mean? Yeah, absolutely. Okay. I once heard a story that somebody gave you a physical Markov blanket. Is that true? It is. It was in fact my son, and it was a wonderful birthday present. 01:28:02 He had it printed with a portrait of Markov in the centre of it, with "Keeping your states warm since 1670," or whatever it was. So it now features in pride of place on the settee, the couch, in my office at UCL. If you remind me, I can send you a picture of my son's birthday present. Along with the Gothic arch from the landscape garden? Yes, let's do that. Last question: is there any advice you would give to young researchers entering neuroscience, or academia in general, today? Yeah, I'm often asked this question, and my universal response is: keep your options open. And post hoc, I've actually realised that's exactly consistent with the free energy principle, 01:29:00 in the spirit of Occam's principle and Jaynes's maximum entropy principle. But pragmatically, I think that's still the best advice.
Take every opportunity you can to have as broad a foundational training as possible, because at some point it's going to be useful in increasing your latitude of choice about where you end up later in life, when you drill down on the particular things you're interested in. So take every chance you can — whether it's a master's, a PhD, or just evening classes in this and that — to keep your foundational training as broad as possible. The actual advice I followed myself, though, was just to do as you're told. That seems to work as well. Okay, that's a good point. Yeah, I actually do say that to PhD students at the very beginning, too. Sometimes people come in thinking they already have to have their own agenda, their own ideas and so on, but it's often the wiser choice to just follow the PI for a few years and then develop those, right? Potentially, yeah. I mean, life is just a journey of sating curiosity, in the end. 01:30:05 And the more you can be curious without having a predetermined agenda to be this or that, the better your life will be — the better the lived life will be. Any questions I should have asked but did not? No, no — you've asked some wonderful questions. I'm now thinking about all the eureka moments I should have told you about, but that's not your problem; that's my problem. Let's do one more, then — if you have time, I have time. Can you do one more eureka moment to finish off? Or two? Another eureka moment was very early on, realizing the identity 01:31:00 between one's treatment of stochastic processes and the differential geometry that underwrites random field theory. And that was a eureka moment that Keith Worsley held my hand in exposing.
So this won't make sense to anybody other than those people who have both a formal and an intuitive understanding of Gaussian random fields and differential topology. But yeah — it was the dots suddenly linking. When you say "eureka moment," I'm immediately drawn to moments in my life when I realized, "Oh, it's as simple as that. This is the same as that. They were the same thing all along — I just didn't see it before." Those are the kinds of moments that come to mind. And in one sense, they motivate this advice to keep your options open, 01:32:00 because if you don't spread your knowledge and your inquiries broadly, there will never be an opportunity to join the dots — because you just haven't discovered the dots that can be joined. Yeah, makes sense. Love it. Thank you so much, Karl. This was really a big honor for me, and I know how busy you are — you mentioned it's the third interview today, so thanks for the marathon. You were as bright and sharp as always, despite the two podcasts before this. Thank you. It's been great — I've enjoyed myself greatly. And I will send you those photographs, just to celebrate our conversation. Fantastic. Thank you. Thank you.
Citation: Horn, Andreas (2026). #79: Karl Friston — The Origins of SPM and the Making of Modern Human Brain Mapping. figshare. Media. https://doi.org/10.6084/m9.figshare.32253432.v1

Additional material supplied by Karl Friston

Prof. Friston also shared photographs connected to the stories in this episode: an image with early mentors, a Markov Blanket gift, and several of his gardening projects.

  • Karl Friston with early mentors

    Photo of Prof. Friston’s early mentors, from left to right: Chris Bench, Chris Frith, Karl Friston, Ray Dolan, Peter Liddle.

    Episode 79 - Karl Friston with early mentors
  • Markov Blanket

    Markov Blanket gifted to Prof. Friston by his son, with an inscription: ‘Keeping your states warm since 1670’.

    Episode 79 - Markov Blanket
  • One of Prof. Friston’s gardening projects, discussed in the episode.

    Episode 79 - Karl Friston's gardening project
  • One of Prof. Friston’s gardening projects, discussed in the episode.

    Episode 79 - Karl Friston's gardening project
  • One of Prof. Friston’s gardening projects, discussed in the episode.

    Episode 79 - Karl Friston's gardening project
  • One of Prof. Friston’s gardening projects, discussed in the episode.

    Episode 79 - Karl Friston's gardening project