September 30, 2023

Innovation & Tech Today



The Power of the Sun, Pt 4: How A.I. is Applied to Fusion

A.I. technologies and machine learning have immense potential to improve research and development across a variety of industries. Because of their ability to learn continuously from both successes and failures, A.I. systems can work at a pace much faster than humans can. In the fourth part of our “The Power of the Sun” series focusing on fusion, TAE Technologies CEO Michl Binderbauer breaks down the company’s collaboration with Google to incorporate A.I. into fusion energy research and development.

I&T Today: You briefly mentioned how you’re collaborating with Google on A.I. capabilities. How is A.I. integrated into your processes?

Michl Binderbauer: It’s very fascinating, actually. It’s really neat, and it’s incredible what’s possible today. As you develop technology, generically speaking, you want to innovate as fast as possible. And this is of course even more urgent in the private sector, where time is money; investments typically have very short time cycles. This is particularly acute in the United States. So we are driven to deliver as fast as we can, and we’re always looking for ways to improve.

I had a really brilliant meeting once with Eric Schmidt while he was still executive chairman of Alphabet, and we were talking about innovation and comparing notes on experience in the software sector versus what we do. They can rewrite code overnight. And if you look at the history of, say, Google’s search engine, the value creation there is measured in months.

But what we do, of course, generationally speaking, is build an entirely new, large machine every five years. They’re very expensive, they take a long time to design, and then you want to tweak them. You’ve got hundreds of knobs and you want to evolve the machine fast. So in the end, the problem is similar. We want to be as fast as possible. We want to innovate. You innovate through learning from failure. So you don’t want to be afraid of failing. In fact, you want to fail fast so you can learn fast. You want to prototype things; you want to break them. With software code that’s obvious and easy to do, but it can be done here, too. And this is one area where A.I. has helped us incredibly.

So to give you a flavor: in 2010-2011, when I started thinking about this, we used to have a machine that had, say, a hundred knobs you could adjust. When you introduce a new piece of hardware, it’s kind of like a new machine at that point. You don’t know which settings of those hundred knobs will give you the sweet spot. And you don’t want to do science at a suboptimal point. You want to find at least a local maximum or minimum, depending on what you’re looking for, and figure out how to get there. That used to require, with human minds, an awful lot of time. You keep 99 knobs the same, you change one knob, and you do experiment after experiment and diligently map out what that knob does. Then you go to another dimension: you keep that knob steady and vary another one. You get the idea.

So we could map out the space with about a thousand experiments, which would take us about a month and a half to two months. That’s not only expensive; from a time perspective, it’s a lot of time invested for small progress. That’s kind of how it used to be. So I started thinking: are there ways we can improve that? I got some of our really bright people thinking and working on this and developing some software. We looked at using statistics and some machine learning to improve on it. So, for instance, you can change 20 knobs at the same time and still unravel the effects that led to what you find. In other words, cause and effect are still nicely related, so you can learn from it. That’s something a human mind can’t do: with more than one knob at a time, we can’t decipher it.
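The idea of changing many knobs at once and still unraveling their individual effects can be illustrated with a toy model. This is a minimal sketch, not TAE’s actual method: the knob count, the linear response model, and the noise level are all invented for illustration. Each experiment perturbs every knob randomly at the same time, and a single least-squares fit disentangles the per-knob effects afterward, using far fewer experiments than a one-knob-at-a-time sweep would need.

```python
import numpy as np

rng = np.random.default_rng(0)

n_knobs = 20          # knobs varied simultaneously in each experiment
n_experiments = 60    # far fewer than a one-knob-at-a-time sweep requires

# Hidden "true" effect of each knob on performance (unknown to the experimenter)
true_effects = rng.normal(size=n_knobs)

# Randomized settings: every experiment perturbs all 20 knobs at once
settings = rng.uniform(-1.0, 1.0, size=(n_experiments, n_knobs))
noise = rng.normal(scale=0.05, size=n_experiments)
performance = settings @ true_effects + noise

# One least-squares fit recovers each knob's effect from the joint data,
# "unraveling" cause and effect even though no knob was varied in isolation
estimated, *_ = np.linalg.lstsq(settings, performance, rcond=None)

print(np.max(np.abs(estimated - true_effects)))  # small recovery error
```

The point of the sketch is the experiment count: 60 randomized multi-knob shots recover all 20 effects, where a one-at-a-time sweep over the same knobs at several settings each would take hundreds.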

So we began doing this, and lo and behold, by about 2014 we had reduced the time it would take us to dial something in from about a month and a half to about two weeks. Massive progress. This is when I said, “Wow, if we could get people involved who do this for a living and have much more depth and expertise in this, perhaps we could supercharge this even further.”

I knew some people at Google X (the company’s R&D subsidiary that often works in A.I.), and they kindly arranged a seminar up there. They brought in people from Google’s main research unit, one thing led to another, and we began developing a joint effort to harness Google’s A.I. expertise and technologies, which mostly came out of driving ad revenue; it’s in Google’s core competence area, which is what we needed. And so today we can dial in an optimal state in 20 experiments, which equates to about half an afternoon of experimentation.

We can prototype things now almost overnight. I can break something today, learn from it, put a new version on it and test it tomorrow, and either fail or succeed with it, and I can charge forward. That is incredible. We’re perhaps on the frontier of this, using it more than anybody I know in the field, and we can compress the learning cycles dramatically. So this is really, really exciting.

Another area that is equally impressive is control. Think of this problem: you’ve got this super-hot ball of stuff suspended in midair. Think of it sort of like oozy Jell-O suspended by rubber bands; it will run out between the rubber bands, right? How do you keep it there? So we add gelatin to the mix, if you will. Instead of oozy Jell-O, think now of a more solid object, like a football, suspended there by rubber bands. Much more doable, right? How do we achieve that? Well, we inject these accelerated particle beams, and that creates the stiffening, but there’s a lot of control necessary for that. You’ve got to keep the ball in the right place; you’ve got to keep the beams aiming at the right spot. If you drive the tool too hard, you can blow it apart. If you drive it too little, it doesn’t work right. So there’s all this fine interplay.

At the center of all this is now A.I. software that learns from past mistakes, from failures and successes alike. In fact, it learns more from failures than it does from successes, but it wants both. It looks for patterns. It says, “Okay, here’s the pattern that I know historically led to defeat, so I don’t want that pattern, and based on historical learning I know that if I do this, it overloads the plasma and the whole system will respond like that, so I can launch countermeasures to fix it.” This is very similar to how people learn to walk, how bipedalism develops in children. The brain learns intuitively, recognizing success and failure patterns and reinforcing the right ones.
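The pattern-learning idea can be sketched in miniature: train a simple classifier on historical shots labeled success or failure, then flag any incoming state that matches a learned failure pattern in time to launch a countermeasure. This is an illustrative toy, not TAE’s control system; the sensor features, the hypothetical failure rule, and the logistic model are all invented assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in: learn from past "shots" which sensor patterns preceded
# failure. Feature meanings and the failure rule below are invented.
n_shots, n_sensors = 500, 4
X = rng.normal(size=(n_shots, n_sensors))
# Hypothetical rule: failure when sensor 0 runs high while sensor 2 runs low
failed = (X[:, 0] - X[:, 2] > 1.0).astype(float)

# Simple logistic model trained by gradient descent on the historical shots;
# it never learns *why* these shots failed, only which patterns precede failure
w = np.zeros(n_sensors)
b = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - failed)) / n_shots
    b -= 0.5 * np.mean(p - failed)

def countermeasure_needed(state, threshold=0.5):
    """Fire a countermeasure when the learned failure pattern appears."""
    p = 1.0 / (1.0 + np.exp(-(state @ w + b)))
    return p > threshold

print(countermeasure_needed(np.array([2.0, 0.0, -2.0, 0.0])))  # risky pattern
print(countermeasure_needed(np.array([0.0, 0.0, 0.0, 0.0])))   # benign pattern
```

Note what the sketch shares with the interview’s point: the controller recognizes patterns that historically led to failure and reacts, without any physical model of why those patterns are bad.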

In fact, in fusion, the world has always done it the other way. We’ve always tried to simulate these systems deterministically, calculating forward using mathematics and physics intuition, and that works to some degree. But even with the fastest computers, by the time you’ve understood what your problem is, where it comes from, and how to fix it, the plasma’s already dead; the thing is over. A.I. does it differently. The systems we now use, for instance, don’t know why something is going wrong. They don’t need to. They only need to know that it isn’t the ideal state, and they need to know, through experience, how to bring it back. And sometimes, therefore, we don’t need to know why. And that’s a beautiful thing.

And so the nice thing is, we can solve problems without always having to know exactly how the problem comes about. We just know we don’t want it, and we quickly fix it, and the system can do this literally on the fly, on time scales commensurate with the dynamical evolution of these plasma blobs, so that nothing bad happens. And it does this every day now. So A.I. is transformational. It really is. Many of these things we now routinely use we couldn’t have used 10 years ago, but now we have them. And again, it’s not just harvesting what we’ve learned over 50 years in the niches of the field, but parallel developments in other areas that now come together beautifully to buttress each other and open up these great opportunities.

The Power of the Sun, Pt 1: What is Fusion Energy?

The Power of the Sun, Pt 2: Fusion’s Clean Energy Future

The Power of the Sun, Pt 3: Fusion Energy’s Progress

By Alex Moersen


Alex Moersen is an Associate Editor for Innovation & Tech Today, covering pop culture, science and tech, sustainability, and more. Twitter: @yaboii_shanoo
