Episode Transcript
[00:00:00] Speaker A: It's super interesting right now: you have a formula, then you build it, then you measure it again, and then you're like, oh, it's behaving differently than I thought. And this whole loop is trial and error right now in the industry, and it costs money, it costs time, and then imagine you need to scale it up. So what I'm doing is working with some colleagues at MIT and an Air Force lab who found a different way to measure the qubit, to make better qubits and know up front how the qubit behaves.
[00:00:30] Speaker B: Welcome to the Institutional Edge, a weekly podcast in partnership with Pensions & Investments. I'm your host, Angelo Calvello. In each 30-minute episode I interview asset owners, the investment professionals deploying capital, who share insights on carefully curated topics. Occasionally we feature brilliant minds from outside our industry who are driving the conversation forward. No fluff, no vendor pitches, no disguised marketing. Our goal is to challenge conventional thinking, elevate the conversation, and help you make smarter investment decisions, but always with a little edginess along the way.
Welcome back to another episode of the Institutional Edge. We're going to jump back into part two of my conversation with Bettina Kichler, an MIT Sloan fellow and an all-around AI nerd.
Last time we discussed how 2026 is a year of reckoning for AI, and that AI, especially LLMs and GenAI, is going to need to show an ROI. In this episode, we're diving into AGI and quantum computing. Are we actually getting closer, or is this still science fiction? Because on one side you've got people like Demis Hassabis committed to developing AGI, and on the other hand you've got people like Yann LeCun remaining very skeptical. Are we getting closer, or is it science fiction? And I know, Bettina, you're going to answer it any way you want. So go ahead.
It's the existential question. You know, my PhD is in existential philosophy. You've got to give me some credit for an existential question.
[00:02:10] Speaker A: In five years I'll look back and say, oh, I knew better. Predictions in AI are just very difficult. But you know what's interesting? What we see right now, especially from experts like the CEO of Google DeepMind, but also the former head of AI research at Meta. He says it's total nonsense to talk about artificial general intelligence, because right now the technology is about reading text and copying the patterns of a text.
So I was also like: it's artificial general intelligence. You can't learn general intelligence from just text.
But then on the other side, it gets better the more compute power you put in. There's a metric you can look at: they calculate the length of task an LLM can do right now, to track when, or if, AGI is happening. And right now it's doubling every seven months, I think. So there is progress. So it's very difficult to really say yes or no. But from a purely technical perspective, it's just math; right now, how should that happen? I don't know. And a lot of experts say it's nonsense, but on specific tasks, I totally believe we're getting to a space where we'd think it's AGI. But I mean, what is intelligence itself? There is no common ground on what intelligence is and how to measure it. Even for humans we don't have a definition of it. So how can we talk about AGI? That's what I'm always curious about.
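The doubling claim is easy to turn into a back-of-the-envelope projection. A minimal sketch, assuming a fixed doubling cadence (the seven-month figure is the guest's recollection, and the function name and one-hour baseline are illustrative assumptions of ours):

```python
def projected_task_length(baseline_minutes: float, months_elapsed: float,
                          doubling_period_months: float = 7.0) -> float:
    """Project the length of task a model can complete, assuming the
    task-length horizon doubles every `doubling_period_months` months."""
    return baseline_minutes * 2 ** (months_elapsed / doubling_period_months)

# If models today handle roughly 1-hour tasks, a steady 7-month doubling
# implies roughly 8-hour tasks about 21 months (three doublings) out:
print(projected_task_length(60, 21))  # 60 * 2**3 = 480.0 minutes
```

Whether the trend actually holds that long is exactly the open question the guest flags.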
[00:03:49] Speaker B: I'm going to go back. You mentioned something a second ago where we're finding that certain types of AI get very good at certain tasks.
We saw this in 2017 with ImageNet, right? When the deep neural nets were able to identify cat or not cat better than humans.
I would consider that to be a superhuman, you know, result. They achieved a superhuman result. But it's not generalizable.
Yeah, you know, AGI needs to be something that is generalized. It could walk into, I don't know, a museum and talk to you about a painting, and then it'll talk to you about a restaurant you could go to. Those are trivial examples, but the idea is that it has to go beyond a specific task. For me, that's how I would measure it.
[00:04:43] Speaker B: Can it generalize?
[00:04:46] Speaker A: Exactly. What I really like is that they compare it with the coffee test, you know?
[00:04:50] Speaker B: You know the coffee test?
[00:04:52] Speaker A: The coffee test. I love coffee, so that's why I like this one.
[00:04:55] Speaker B: I haven't had coffee in 60 years, so I don't even know what you're talking about.
Go ahead.
[00:05:01] Speaker A: Oh, my God. So the idea with AGI is: okay, when something can make me coffee, then we've reached AGI. I don't know who said it, but it's a famous example. Because you need to put coffee in the machine, and what if something breaks, or some technical issue happens with the coffee machine? Can the artificial general intelligence solve that? Can it make me coffee in the morning, whatever happens? So this is one way to think about it. You need to know how to make coffee, you need to see, okay, something went wrong, what went wrong? And this whole thing takes us into robotics, which is another topic. So that's the coffee test. But then why are we talking so much about AGI when, as you said, there are models that can see patterns? Look at healthcare: there's so much potential, so much going on right now. It's amazing.
So let's focus on that, invest in that, because there we can see clear value and outcomes. I would rather focus on that than on the theoretical question of what AGI is and how to define it.
[00:06:11] Speaker B: And isn't that example, if you're going to focus on one difficult problem and try to achieve results using some type of AI that are equal to or better than what humans can achieve, isn't that exactly AlphaFold, what DeepMind did with proteins? They were able to solve the problem better than humans had been able to, right?
[00:06:41] Speaker A: Yes.
[00:06:42] Speaker B: Okay.
[00:06:43] Speaker A: And for that you need to know the problem, right? There are some amazing scientists who said, oh, we can do this and that. And then they came in touch with this problem and said, let's solve it. And they did.
So how do we bring these kinds of industries and people together so we actually work on specific problems where we can see that, if we solve them, there is so much value?
And that's where, coming back to the ROI discussion from before, what I've observed is that a lot of companies are now focusing on smaller models for specific problems, which cost less, consume less energy, and deliver better outcomes and success rates. So it's really about defining the problem. I think Google just launched a translation large language model, which, you might ask, why? But that's one simple example. It's the same in healthcare, or in other areas like finance: people are training models on specific finance data for better language models, or other AI. It really depends on the problem. But I think that's the way to...
[00:07:59] Speaker B: Go forward, to focus on specific problems that will generate the ROI, which could manifest either in terms of profitability or societal impact. Like if you could do something with AlphaFold that helps scientists and researchers discover cures, and I think this has already been an outcome from it. But my point about AlphaFold is that for DeepMind, it's a step on the way to AGI. Solving that problem is a step, just like AlphaZero was a step on the way.
Because of the whole issue with reinforcement learning, and the paper that Silver and others wrote about it, what is it, "Reward Is Enough"? So I think you're saying this, and you'll tell me I'm wrong like always: let's apply the optimal type of AI system or systems to a specific problem so we can generate the ROI, and it will have some societal benefits, ideally, like in medicine.
And maybe we'll stumble our way to AGI eventually. "Stumbling" being my verb; you could change that. But you know what I mean.
[00:09:13] Speaker A: I mean, if you look at deep tech or technology advancement, that's usually how it works. You work on specific problems, you work through them, you get better and better, and you move toward your ideas or vision.
[00:09:26] Speaker B: But what does deep tech mean to you? Don't just give me that word because I don't know that one. Are you taking me to my second point which is quantum? Is that where you're going with this?
Are you going to quantum? Otherwise, define deep tech for me.
[00:09:39] Speaker A: Now I have to, right?
Just one point before that: I really like working on real problems, and often that step-by-step work is what brings ROI and value, especially to society. But I also understand that as a company or an entrepreneur you have to have visions, you have to have a North Star, not only for your company but also for investors, right? So I think there are two different worlds right now, especially in the media when we talk about AGI; it jumps back and forward. And I think that's something we need to keep in mind: what AI can do right now and where it brings ROI, versus what's for investors and raising money. What's your business model? Do you need investors? Do you have another portfolio? Google, for example, has a lot of other income. Who is talking about AGI and who is not, and in which sense, and in which kind of language?
[00:10:45] Speaker B: Okay, now tell me about Quantum. I know nothing about it, but my good friend I will say in Milan, Dr. Elisabetta Basilico has been studying quantum. So I know she's going to listen to this and give us some comments. But what the heck is quantum computing? Because in investing I'm always looking for an edge. And if I, if quantum computing can give me an edge to allow me to make better, faster decisions, I'm all in. But I don't even know what it is. And that's never stopped me before. But so go ahead.
[00:11:16] Speaker A: What's quantum? That's a big question. I've talked with many quantum physicists and quantum investors, and what I've observed, my learning, is that it's still in the research phase. It's totally in the research phase. We think about applications, but we are not there yet, because it's very difficult to understand; even quantum experts say it's difficult to explain. One thing I always tell people who want to know more about quantum: in our everyday life, if you have a ball and you throw it at a specific speed, you know where it lands, right? You can predict it, you can calculate it. Tiny particles have different rules of nature. It's a totally different thing; they're not predictable. So the thinking is that you can manufacture a new kind of computer, a quantum computer, using these ideas, which aren't just ideas anymore, they're proven, and use it to calculate things you can't do with a classical computer.
And quantum will never replace classical computers. It's only for specific tasks and specific problems that mirror or match this kind of particle behavior.
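The thrown-ball analogy can be made concrete: classical motion is fully determined by its inputs, so the same throw always lands in the same spot. A minimal sketch (the function name is ours; it assumes flat ground, no air resistance, and g = 9.81 m/s²):

```python
import math

def landing_distance(speed: float, angle_deg: float, g: float = 9.81) -> float:
    """Range of a projectile on flat ground with no air resistance.
    Same inputs always give the same landing point: the classical
    predictability the guest contrasts with quantum behavior."""
    angle = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * angle) / g

# A 10 m/s throw at 45 degrees lands about 10.19 m away, every single time:
print(round(landing_distance(10.0, 45.0), 2))
```

Quantum measurement, by contrast, gives only outcome probabilities, which is why predicting and simulating qubit behavior is so much harder.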
[00:12:34] Speaker B: So tell me about the specific tasks or problems that quantum is, or will be, well suited for. And you gave the prefatory remarks: we're not there yet, we're in the research stage. But there's a lot of money flowing into this on the VC side, there's no question about that. So...
[00:12:50] Speaker A: Especially now.
[00:12:51] Speaker B: Yeah, right. So what are the, what are the tasks or problems that you envision it being well suited for?
[00:12:58] Speaker A: I mean, what I've heard from the industries, especially in healthcare: molecules, especially for drug discovery, behave differently than we know from bigger particles. Which means classical computers are limited; they can't handle it.
So if those molecules behave differently, and many particles in quantum physics follow similar rules of nature, then we can, and we need to, use quantum computing for new products and new discoveries in drugs. That's the mirroring. In material science you have the same kind of, I don't want to say problem, the same kind of situation, where objects behave differently than the big objects we see in everyday life, like a ball.
Whenever I talk to scientists, they want to explain it to me, and I come away with so many more questions.
So what I'm looking at right now is: where is the industry? Right now it's a research phase, but there are a lot of breakthroughs, and there are different bets, particularly on qubits: what's the standard for qubits right now? That's part of the hardware. There are no standards; there are different bets. Microsoft is doing something different than IBM, than Google. But what you can already see in the industry is companies buying each other, startups buying each other. They're trying to evolve into a new phase where you need to bring in some use cases and really prove the ideas from the research papers, bringing them into application, into the real-world phase, and then scale them up at the end. Right now you have different kinds of hardware which you can't scale up, so you do little experiments; it's at an experiment level right now. But when and wherever you scale it up, more problems occur.
[00:15:00] Speaker B: I mean, I'm trying to get to an understanding of what type of problems, because I went down this road since you're doing this simulation commercialization.
[00:15:10] Speaker A: Yes.
[00:15:11] Speaker B: And I assume that takes quantum into some kind of commercial ecosystem. Right.
[00:15:17] Speaker A: So what I'm looking at is a simulation tool for the hardware.
So it's about how to produce, to manufacture, better hardware. It's at a very low level of the stack; we're not talking about applications. Applications, like healthcare and material science, are the areas up there. I'm down here, at the research on how to make better qubits, on predicting how the qubit behaves. Because we don't know; it's super interesting. Right now, you have a formula, then you build it, then you measure it again, and then you're like, oh, it's behaving differently than I thought. And this whole loop is trial and error right now in the industry. It costs money, it costs time, and then imagine you need to scale it up.
So what I'm doing is working with some colleagues at MIT and an Air Force lab who found a different way to measure the qubit, to make better qubits and know up front how the qubit behaves. And that's a huge thing in terms of simulating a qubit. But then the question is: can you simulate other things, not just qubits?
Can you take this kind of theory, which comes from nuclear physics, and apply it to finance or logistics, where data is very messy? Real life is very messy; we don't know how to feed it into those AI models we talked about.
So can we use this theory from nuclear physics, where they're used to very unpredictable inputs and behavior, not only for manufacturing qubits, but for things we need right now, like logistics, or anywhere you have messy data?
So that's what I'm looking into right now.
[00:17:09] Speaker B: Messy data, and it sounds like instability.
[00:17:12] Speaker A: Yes.
[00:17:12] Speaker B: You know, it's a very uncertain environment.
To use a term from reinforcement learning.
What the hell's a qubit? I mean, you're throwing it around like I know what I'm talking about. I have no idea.
[00:17:24] Speaker A: In simple terms, the storage information like a chip, which is not. But just like to make it very simple.
And you need thousands to make quantum computing work.
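To make that slightly less hand-wavy: in the standard textbook picture, a qubit is a pair of complex amplitudes, and measuring it yields 0 or 1 with probabilities given by the squared magnitudes of those amplitudes. A minimal state-vector sketch (illustrative code of ours, not any vendor's hardware or API):

```python
import math
import random

# A qubit state is a pair of amplitudes (a, b) with |a|^2 + |b|^2 = 1.
# |a|^2 is the probability of measuring 0, |b|^2 of measuring 1.

def hadamard(state):
    """Apply the H gate, which puts a basis state into equal superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def measure(state, rng=random.random):
    """Collapse to 0 or 1 with the Born-rule probabilities."""
    a, _ = state
    return 0 if rng() < abs(a) ** 2 else 1

zero = (1.0, 0.0)         # the |0> state: always measures 0
plus = hadamard(zero)     # equal superposition: 50/50 measurement outcomes
print(abs(plus[0]) ** 2)  # probability of reading 0, ~0.5 up to float error
```

Real hardware adds noise and decoherence on top of this ideal math, which is exactly why the build-measure-adjust loop the guest describes is so expensive.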
[00:17:34] Speaker B: Okay.
[00:17:34] Speaker A: And it's just one hardware piece. IBM, for example, is a manufacturer, and there are some labs making them too. But as I said, there are different qubits. Microsoft is doing it differently; there's fluxonium, there's transmon, different kinds. It's engineering combined with physics, and it's a whole thing. I'm really not an expert on that; I'm not going into the science. I'm trying to figure out the case: how can we use it? Who needs it? Who has that problem?
What does it mean for investors? What does it mean for the whole ecosystem in quantum computing? And with all the technology we're looking at right now, I think Europe and the US are both looking hard at it.
How do you use it in different kinds of industries and applications? Because I think there's a lot of potential; we just need to think more creatively about how to implement it for different kinds of problems. At the end of the day, it's a simulation tool: it simulates something that is very noisy and behaves very unpredictably.
So where else do we have this problem now?
[00:18:45] Speaker B: I understand it.
[00:18:46] Speaker A: You do.
[00:18:49] Speaker B: About this.
About this much.
You know, my friends will tell you my whole career was like that. It's like the Rio Grande, as we say: a mile wide and an inch deep.
[00:19:02] Speaker A: So you just need a great team. Everyone is an expert on something.
I'm on the business side, so I'm trying to figure things out and connect people across different kinds of problems and industries, and trying to see where the value is.
[00:19:18] Speaker B: So we're going to close here, because I've taken way too much time, especially because of the stumble I had at the beginning. But in closing: again, we're talking about institutional investors, whether it's a sovereign wealth fund, a pension fund, et cetera. What's your outlook going into 2026?
How can these folks think about AI? Not in terms of investing in AI stocks or anything like that, but integrating AI so they can make better decisions. That would be the ROI. What advice would you give them looking forward, beyond the hype? Because we don't want to talk about the hype; put that aside.
[00:20:00] Speaker A: I would go to the operational level: get people together cross-functionally and work through with them where they see the potential and where the problems are. Then work through a new project process and workflow. Make a sandbox, a protected framework, to test things and see what the learnings are, then make it better from there and do it again and again. Then you can see: okay, does it make sense, and does it actually bring a better prediction for the business model, for investing decisions? So it's really going back to the operational level and working through it strategically. Cross-disciplinary is key. Yes.
[00:20:50] Speaker B: You can't just talk to it and you can't just talk to the portfolio manager. It's got to be across or otherwise it'll be siloed and there'll be data problems and everything else. Right.
[00:21:00] Speaker A: Otherwise it doesn't make sense. That's the whole thing about GenAI large language models: now they understand different disciplines, and you can put data together, structured and unstructured data. That's the whole point of it.
[00:21:16] Speaker B: That's where you get the power. When you could start using both structured and unstructured data.
[00:21:21] Speaker A: Yes. Because that's new.
[00:21:22] Speaker B: Within a secure environment.
[00:21:24] Speaker A: Within a secure environment, yes.
Especially in this industry, don't test something on the fly.
[00:21:31] Speaker B: Well I usually ask my guest one final question but I don't think it works for you and that's what's the worst investment pitch you ever heard? I don't think that suits you. Does it?
I could give you another opportunity and you could ask me any question you want.
[00:21:48] Speaker A: Okay, I will ask you a question.
[00:21:49] Speaker B: Oh, boy.
Okay.
[00:21:53] Speaker A: If you think about AGI, which you mentioned before, if you think AGI happens in one year, what would you do differently now?
[00:22:05] Speaker B: Wow. First, I don't think it'll happen in one year. But if it happens in the near term, I would probably go back to my roots in the academy and work on understanding whether it's ethical or not, whether it's responsible or not.
And the topic we didn't talk about is embodiment. I'd want to see if the AGI was embodied.
I think you need to have embodied AI to hit AGI.
Are you okay with that or not? Are you going to push back like always?
[00:22:40] Speaker A: Why do you think so?
[00:22:41] Speaker B: Why you need embodiment?
[00:22:43] Speaker A: Yes.
[00:22:43] Speaker B: Because intelligence is embodied in humans. And if we're going to see. I know it's an alien intelligence and it could be disembodied, but that's my framework. I told you in my notes. I did my dissertation on embodiment, so I'm kind of stuck there. So I would look for this AGI to be embodied. And you're seeing a lot of work in robotics. And to your point, if it's going to make you coffee in a seamless fashion, I assume it has to have some kind of embodiment.
[00:23:10] Speaker A: So.
[00:23:11] Speaker B: So it knows how to put that little coffee thing that you all drink in Europe, that Keurig, in the little container and close it.
[00:23:17] Speaker A: So, I mean, it's an interesting question because you disagree with me.
[00:23:22] Speaker B: As soon as you say that, I can see you don't like the answer.
Don't tell me it's interesting. Just tell me.
[00:23:28] Speaker A: I'm just observing, like, those two things. Like, one is, like, humanized AI, AGI, but also AI. You don't have to program chatbots to be like humans. Right. You could have done it differently. And the other thing is, like, how would AGI look like if you don't put it, like, in a robot who looks like a human?
Why? And I have my opinion on that. In Asia, I think they're more positive about AI. They feel like there is a soul in every object, right?
Whereas here in Western Europe, people are more like, oh, I don't know, I don't want a robot taking care of me when I'm older.
We still have this connection with machines; you give names to your car. There is that kind of sense. But I think it's worth talking about what's ethical, where the line is, and how we think about AGI, or AI, let's say just AI. It's already a question we really need to think through and make decisions about.
There's a lot of fear, and if you have a lot of fear, it shapes regulations, it shapes politics, it shapes investments. I think we need to be very careful about the language we use around it, and also about how AI looks, whether we humanize it or not. I don't have a clear answer to that; I just think it's worth thinking through.
[00:24:57] Speaker B: I'm glad you don't have a clear answer, because to become dogmatic at this point in the development of AI would be a terminal mistake.
You have to stay open, because there are so many surprises. And I don't mean OpenAI's ChatGPT; I mean AlphaFold, and even the stuff the folks at DeepSeek did.
If what I read is true, and I've done a lot of reading and I've written on it, it seems like they made some, how would I say, some strides in terms of efficiency.
So to be dogmatic now is the worst thing. And that dogmatism could be either optimistic or pessimistic. So you have to be very careful.
[00:25:42] Speaker A: Yeah.
[00:25:43] Speaker B: So, Bettina, we should probably write something one day, you and me, but I'm not going to write in German. German was one of my languages at university, by the way.
[00:25:52] Speaker A: Wow, I didn't know that.
[00:25:53] Speaker B: Nobody does. Trust me.
I mean, with this accent, trying to speak German. I mean, I have a hard time saying sausage. So.
[00:26:03] Speaker A: Okay, so next time we're writing in German. I think now we can do it.
[00:26:08] Speaker B: No, not going to happen. I can use Google Translate.
[00:26:11] Speaker A: That's the thing. I will never know. Or maybe I will. We can say, let's try. Let's test.
[00:26:16] Speaker B: All right, I'm going to stop it here and say, Bettina, thank you very much for playing along. I know it wasn't easy. We took a circuitous route, kind of like your career and my career. It's been all over. But thank you.
[00:26:27] Speaker A: Thank you so much for having me. It's always a pleasure to talk to you and I love your questions.
[00:26:31] Speaker B: Well, thank you.
[00:26:32] Speaker A: Really make me think a lot.
[00:26:34] Speaker B: Well, that's why we should write something.
Thanks for listening. Be sure to visit P&I's website for outstanding content and to hear previous episodes of the show. You can also find us on P&I's YouTube channel; links are in the show notes. If you have any questions or comments on the episode, or suggestions for future topics and guests, we'd love to hear from you. My contact information is also in the show notes. And if you haven't already done so, we'd really appreciate an honest review on iTunes; these reviews help us make sure we're delivering the content you need to be successful. To hear more insightful interviews with allocators, be sure to subscribe to the show on the podcast app of your choice. Finally, a special thanks to the Northrup family for providing us with music from the Super Trio.
We'll see you next time.
[00:27:29] Speaker A: Namaste.
The information presented in this podcast is for educational and informational purposes only. The host, guests, and their affiliated organizations are not providing investment, legal, tax, or financial advice. All opinions expressed by the host and guests are solely their own and should not be construed as investment recommendations or advice. Investment strategies discussed may not be suitable for all investors, as individual circumstances vary.