COMPLEXITY: Physics of Life

Sabine Hauert on Swarming Across Scales

Episode Notes

If complex systems science had a mascot, it might be the murmuration. These enormous flocks of starlings darken skies across the northern hemisphere, performing intricate airborne maneuvers with no central leadership or plan. Each bird behaves according to a simple set of rules about how closely it tracks neighbors, resulting in one of the world’s most awesome natural spectacles.

This notion of self-organizing flocks of relatively simple agents has inspired a new paradigm of engineering, building simple, flexible, adaptive swarms that stand to revolutionize the way we practice medicine, map ecosystems, and extend our public infrastructure. We’re living at the dawn of the age of the robot swarm – and these metal murmurations help us create communications networks, fight cancer, and evolve to solve new problems for an age that challenges the isolated strategies of individuals.

This week’s guest is Sabine Hauert, Assistant Professor in Robotics at the University of Bristol and President/Co-founder of RoboHub, a non-profit dedicated to connecting the robotics community to the world. In this episode, we talk about how swarms have changed the way we think about intelligence, and how we build technologies for everything from drug delivery to home construction.

Visit our website for more information or to support our science and communication efforts.

Join our Facebook discussion group to meet like minds and talk about each episode.

Hauert Lab Website.

RoboHub Website.

NanoDoc Website.

Sabine at Nature on the ethics of artificial intelligence.

Sabine's 2019 SFI Community Lecture.

Follow us on social media:

Episode Transcription

Michael: Well Sabine, thanks for joining us.

Sabine: Yeah, sure. Pleasure to be here. Thanks so much.

Michael: So you work on swarm robotics, and let's just get this right out of the way. You know a lot of people, their introduction to swarm robotics was The Matrix: Revolutions. They're not thinking about it typically in terms ... Or, like, Terminator: Genisys. They're not thinking about it in terms of medical applications, construction applications. So I'm really curious to hear from you what are the most exciting deployments of swarm intelligence in robotics and elsewhere that people are working on right now. And then we can work backwards from that.

Sabine: I'm excited that swarms, I think, are ready to get out of the lab. So for the past 10, 20 years we've been building up towards swarm robotics. So we've been looking at nature and what algorithms nature uses to self organize systems, trying to implement them on robots. And typically we've been doing that in small numbers.

And now the push is to understanding how we can make these things work in larger numbers. And we're starting to have these capabilities because the hardware is there, and our ability to discover new swarm algorithms is there as well. And so a little bit like the area of machine learning has taken off because we have this conjunction of better algorithms and better hardware. I think we're going to see the same thing in robotics. So I'm excited about the prospects because the reality is we don't yet see many of these swarm systems in the real world.

So for me it's sort of the cusp of bringing together these huge numbers of hardware platforms that we can now make ... the ability to design better swarm algorithms and also the new applications. So thinking of swarming more broadly than just swarm robotics, but can we start engineering swarms in biomedical applications? Should we be looking at how we engineer the collective behaviors of things like nanoparticles for cancer treatment? Should we be thinking of cancer cells actually as swarm systems that we can understand, but also maybe engineer so that they do things that are less harmful? Once you start thinking of systems as swarms, then you see swarms everywhere. And that's what gets me excited.

Michael: So in your community lecture last night, you talked about ... That you're consulting with a company that's looking at construction swarms and that's a really interesting example. And then on the other end of the scale you talked about nano-medical swarms. So I'd love to just set the stage by hearing a little bit about how thinking about swarms differs at the macro and the micro and the nano scales and how the research from those different scales informs the research going on at other scales.

Sabine: I love thinking of swarms across scales. I guess you noticed that yesterday. And there's really different ways of approaching it. I think when you have small numbers of more capable robots, then you give them more intelligence on the individual level. So in the case of the swarm construction, this is a startup called Assembler and they're trying to make robots that can navigate bricks and deposit bricks as they go along. And so these individual robots will need to have more capabilities than some of the nano particle work that I'm doing.

And so how do you design the algorithm so that they can understand the environment on an individual level, but also coordinate with other robots so that the system as a whole, in this case, can go and build a brick wall. On the biomedical side, or the nano side, what we're seeing is systems that work in tremendous numbers, so in the 10 to the power of 13. Because of their size, these systems are inherently limited in what the individuals can do.

And so then we needed an entirely different type of algorithm from the ones we were using when designing robotic systems that were more capable individually and maybe worked at the 50-to-100 scale. And when we're looking at the nano biomedical scale, it's really all about reaction-diffusion. So things that move in very simple ways just react to their local environment, maybe by emitting a signal that others can react to. And those basic building blocks are actually quite similar to the building blocks that our cellular systems use to develop these fully functional organisms, which are ourselves.

So I think it's really fascinating how you can still get these beautiful complex emergent behaviors with systems that are very minimal at the individual level, but work in huge numbers. I'm also finding that as we think across these scales, actually designing some of the algorithms to make nanoparticles work together for cancer treatments made us realize that maybe we should design robot swarms a little bit better, so that they could also work in huge numbers. But that will require the individual robots to have limited capabilities, a little bit like those nanoparticles, simply because we won't build huge swarms of robots unless the individual agents are cheap, and if they're cheap, they're noisy and they have limited capabilities. So we're learning lessons across these different scales that I think can be applied.

Michael: You know, when I was talking to David Krakauer for the first episode in this series, we were talking about this spectrum of scientific approaches. On one end of the spectrum you have like fluid dynamics, and on the other end of the spectrum, you're modeling in depth every agent in a system. This movement from the extreme granularity to evaluating things in aggregate. And it seems as though those two different kinds of science are the two poles of this scale of approaches that you're talking about.

Sabine: So here's an example. We have a new European project called EvoNano, and it's about using AI to design nanoparticles for cancer treatment. And there we fundamentally have different scales. For example, we can simulate the growth of a tumor and that's an agent based model. And then we need to model those 10 to the power 13 nanoparticles and where they distribute throughout that tumor to make sure that they're impacting all the cancer cells.

And there we're looking at just a tiny slice of this agent based model tumor where we then run a stochastic model because we can't play that number game with the agent based model when looking at the nanoparticles. So it's really fun because we're having to bridge together all these different types of simulation to answer these concrete questions of how you engineer the collective behavior, in this case of nanoparticles. That being said, I think there is a toolbox that does generalize across these scales.

So when we engineer swarms, there's really two different things that we do. One is we use bio inspiration, and you could imagine using that across the scales, from the nano scale to the more macro scale when we deploy our robots. The other is using tools like machine learning to automatically discover the rules for your agents that give you a desired collective behavior. So in the case of the nanoparticles, it's automatically generating the nanoparticle design that gives you the right distribution in a tumor. In the case of the more capable macro-scale robots, it's automatically discovering, using in our case artificial evolution, the behavior of individual robots that are doing, for example, a foraging task, maybe in the 20s rather than the 10 to the power 13. But that toolbox is the same.

Michael: So I was really impressed by your discussion about artificial evolution last night and in this case, the swarm of robots, each running its own evolutionary algorithm and then swapping information with each other. And so it seems like there's two kinds of learning going on in that system, right? Because the swarm, Jess Flack has talked about this a little bit in terms of collective computation, where each agent is gathering and then aggregating information in two different steps.

So am I understanding this right that in this system, where you're training robots to collaborate on relocating a Frisbee from one end of a little arena to the other, there's really two different kinds of learning going on there, or learning at two different scales? So I'm curious about that.

And then also with this notion of crowdsourcing design for nanoparticles, the particles are too small to contain their own intelligence but reflect a distributed intelligence of agents collaborating on their design. There's just different ways of thinking about intelligence in these systems and the way that those types of intelligence are distributed. So I'm curious to take a step back with you and look at this more at the meta level and ask: what do you think about individual and collective intelligence as a swarm robotics engineer? How has that work shaped your thoughts on this, and how have your own evolving thoughts shaped the way that you kick the little Frisbee around?

Sabine: I tend to think of swarms as a system, and the intelligence is endowed on the system as a whole, not on particular individuals. And so if you think of a level of intelligence that you want for your swarm, well, if you have a huge number of agents individually, they don't need to be that intelligent to get the system level swarm intelligence that you're looking for.

If on the other end of the spectrum, you have those 20 more capable robots. Well, fewer numbers, maybe individually more capable, but yet the system as a whole is what I ultimately care about. And so that means that very often when we're doing the automatic optimization, for example, with the smaller swarm, what we're setting is a swarm level fitness. We want the swarm as a whole to be able to do something.

For example, pushing a Frisbee. This work is really new – with Alan Winfield, Matthew Studley, and Simon Jones – in that we're evolving the behaviors directly on board the robot hardware. So these robots have GPUs, which give them enough processing power to run the artificial evolution algorithms directly on board. And so that is actually challenging, because what we used to do is we would have a computer external to the swarm run the evolutionary algorithms. During my PhD, this took weeks to actually do, and then we put the best behavior on the swarm.

This suffers from a reality gap because very often you put it on the swarm and it doesn't do what you thought it would do based on the simulations, because the real world is complicated. And so you need to design a different type of evolutionary algorithm if you're going to run these algorithms directly onboard individual agents in the swarm. First of all, because there's no godly view of that system.

So that swarm level intelligence that I'm trying to optimize, the individuals actually don't see, because they're only looking from their local perspective. And so they need to give a score to the rules that they're evolving based on what they see locally, as a proxy for the swarm-level, system-level intelligence. So that's one challenge.

And then the things that they evolve, they need to figure out how to share amongst their peers so that good solutions are propagated. It's called an island model, the way these things are evolved, and it's just really interesting to see it work. So in just 15 minutes, as opposed to the two weeks before, we get a swarm going from not knowing how to push this Frisbee to something that can operate as a collective in a swarm sense. But the way I think of these across scales is really looking at the system level and then trying to figure out how you design these rules so that every individual can contribute to the swarm level fitness.
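The island-model setup Hauert describes can be sketched in a few lines: each robot evolves its own small population of candidate controllers against a locally measured proxy fitness, and periodically migrates its best genome to a peer. Everything below is an illustrative toy, not the lab's actual code; the quadratic `local_fitness` simply stands in for a robot's local estimate of swarm performance, and all names and parameters are invented.

```python
import random

random.seed(0)

GENOME_LEN = 8   # toy controller: a vector of weights
POP_SIZE = 10    # small per-robot population (onboard memory is limited)

def random_genome():
    return [random.uniform(-1, 1) for _ in range(GENOME_LEN)]

def mutate(genome, sigma=0.1):
    return [g + random.gauss(0, sigma) for g in genome]

def local_fitness(genome):
    # Stand-in for a robot's *local* proxy of swarm performance,
    # e.g. "how far did I push the Frisbee while it was in view".
    target = [0.5] * GENOME_LEN
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

class Robot:
    def __init__(self):
        self.population = [random_genome() for _ in range(POP_SIZE)]

    def evolve_step(self):
        # Truncation selection on the locally observed proxy fitness.
        ranked = sorted(self.population, key=local_fitness, reverse=True)
        survivors = ranked[: POP_SIZE // 2]
        children = [mutate(random.choice(survivors))
                    for _ in range(POP_SIZE - len(survivors))]
        self.population = survivors + children

    def best(self):
        return max(self.population, key=local_fitness)

    def receive(self, genome):
        # Island-model migration: a peer's best genome replaces our worst.
        worst = min(self.population, key=local_fitness)
        self.population.remove(worst)
        self.population.append(list(genome))

swarm = [Robot() for _ in range(5)]
for generation in range(50):
    for robot in swarm:
        robot.evolve_step()
    if generation % 5 == 0:
        # Occasionally share best genomes with a random peer.
        for robot in swarm:
            random.choice(swarm).receive(robot.best())

print(max(local_fitness(r.best()) for r in swarm))
```

The key design choice mirrored from the interview is that no robot ever sees a global score: selection happens against a local proxy, and good solutions spread only through migration.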

Michael: So how does this differ from the way that people are talking about training fleets of autonomous vehicles, for example? Is it similar? Or perhaps, how do you see this differing from the way that we as individuals are constantly evolving our models of the world, sharing those models with each other, and refining those models? Because one of the robot swarms that you talked about last night actually included a peer sensing opinion algorithm where they were checking in with each other. I'd love to hear your thoughts on all of that.

Sabine: Well, if they're going to do onboard evolution, they need to have a good model of their world so that they come up with solutions that are actually going to work in their world. So part of evolving these behaviors directly on the robot itself is so that you could imagine putting them in the wild, and then they build up their own model of the world that makes sense, because they're there and they can measure it and they can do something in it and see what the effect is.

And so in theory, they could improve their own model of the world, use that to evolve a better behavior and then deploy that directly on the go as they do that. So it is important for these robots, sometimes at least the more capable ones, to have a good model of their worlds. I think there's also ways in which we can develop good models of the world by sharing information.

So the decision making algorithm that you're referring to allows robots, in that case with more limited capabilities, but because there's many of them sampling the environment, for example good quality sites or bad quality sites, good quality decisions or bad quality decisions, they as a collective can come to a model. It's not really a model of the world, but they can come to a decision about the world that's just based on all these different information points that they're able to sample. And in that case, these are very simple rule sets. So it's not about creating a complex model of the world. And yet the properties that emerge make you believe that they have a good model of the world, but it's just aggregation of useful information in space and time.
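One way to make that concrete is a quality-modulated voter model, a generic toy from the collective decision-making literature rather than the lab's specific algorithm: each robot holds an opinion about which of two sites is better, noisily samples the quality of its current favorite, and either advertises that opinion to a peer or gives up and copies a peer. The site qualities and parameters are made up for illustration.

```python
import random

SITE_QUALITY = {0: 0.4, 1: 0.7}   # site 1 is genuinely better
N_ROBOTS = 100

def run_trial(steps=5000):
    # Every robot starts with a random opinion about which site is best.
    opinions = [random.randint(0, 1) for _ in range(N_ROBOTS)]
    for _ in range(steps):
        i = random.randrange(N_ROBOTS)
        site = opinions[i]
        if random.random() < SITE_QUALITY[site]:
            # Good local sample: advertise our site to a random peer.
            opinions[random.randrange(N_ROBOTS)] = site
        else:
            # Poor sample: give up and copy a random peer's opinion.
            opinions[i] = opinions[random.randrange(N_ROBOTS)]
    return sum(opinions) / N_ROBOTS

random.seed(1)
avg = sum(run_trial() for _ in range(10)) / 10
print(avg)  # fraction of the swarm favoring the better site, averaged over trials
```

No robot ever compares the two sites directly or builds a map; the better option wins only because good sites get advertised slightly more often, which is the "aggregation of useful information" point.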

Michael: Did you come into this research with these ideas and this way of seeing the world? Again, you talk about when you study swarms, you see them everywhere. Did you get into this because you were fascinated by that system level intelligence or what brought you into engineering robot swarms in the first place?

Sabine: I was lucky to take a course by Dario Floreano. He taught bio inspired artificial intelligence when I was a master's student. And actually I now teach bio inspired artificial intelligence to my students at the University of Bristol. And then that led me to go to Carnegie Mellon University for a year, for my final master's year, and I joined Manuela Veloso's RoboCup team, so playing robot football. And at that time there were leagues with robot dogs, the Sony AIBOs.

And it just got me so excited about this idea of making robots work together. As part of our team, I went to the US championship, which the team won. And you would literally jump up and cry when these robot dogs would score a goal. And so I went back to Switzerland to Dario's lab with the goal of making robots work together.

And so that's really what got me into this area of swarm robotics. And at that time I was doing swarms of flying robots to create communication networks in things like disaster scenarios, and trying to find the algorithms to make these robots coordinate even though they weren't meant to have GPS. And so they needed to be quite creative about their solutions. And then I thought, "That's 10 robots, and we keep going on and on about these larger swarms that you see in nature."

And we just weren't developing that on the robotics side. So I looked for another robot platform and I spent three years in a lab that made nanoparticles for cancer treatments, with Sangeeta Bhatia over at MIT. And that sort of was the bridge between the scales and starting to see things a little bit differently. And I was also lucky to go to Radhika Nagpal's lab every week over at the Wyss Institute at Harvard, just for a robot fix, because I was immersing myself in the nano and biomedical world and I needed to get back to the things that I knew. Sometimes that's a healthy thing to do.

And she was developing this kilobot swarm, which was this 1024 robot platform. It's called a kilobot because instead of a kilobit it's a kilobot. And it just made so much sense: we have these nanoparticles in huge numbers and limited capabilities, and Radhika was developing the hardware to be able to do some of these experiments with large numbers of robots. Taking the legacy of what I'd done with smaller numbers of more capable robots, we could see how to bridge these different worlds and build the tool suite that could help us address some of these questions of, more generally, how do you engineer a swarm across scales. And so that's really how I went about it. People wonder if I've jumped between different fields, but actually I feel like this swarm engineering is the thing that ties it all together. And ultimately these are different agents with different applications, but the same mission of swarm level intelligence.

Michael: So Albert Kao, one of the postdocs here who studies collective behavior, a lot of what you're saying here reminds me of a paper that he collaborated on recently that showed that there are instances where you actually don't want the agents in a network to be that smart. That the collective intelligence starts to break down if the memory of an agent is too long, if it's not adaptable. And of course, you listen to research like this and at least two things come up for me.

One is what does this tell us about human collective intelligence and human behavior and human society and the way that we structure things. And then another is, in what ways does it not tell us anything? In what ways does behavior at a given scale really only have import to other phenomena at that scale. And so I'm curious how far you are comfortable drawing inferences from this kind of research into other domains, where you see this work applying in ways that it wasn't originally intended, and where you think that people are trying to stretch the metaphor.

Sabine: It's hard not to anthropomorphize these swarms. For example, with the decision making, you see them turn red and blue and you're rooting for them all to go to the right side of the forest. And my students are always claiming that they're deciding football matches or doing who knows what. So you do see these self-organized systems and you see something natural about it. Or, for example, our shape formation.

As you grow these limb-like structures, you just can't help but see an organism as it develops. So it's true that because we see that, we tend to say, "Oh, maybe this could help us understand something about how humans make decisions about elections or whatnot." I think they can be good proxies to at least play with ideas, and because we can easily program them with simple sets of rules and visualize the dynamics on a hardware platform, sometimes that helps open the mind and you see things you might not necessarily see in simulation or in these less controllable systems.

But nature, humans, and such are way more complex than the things that we're putting in our robots. So I do think we need to be careful in assuming these are good models of those different systems. It's also true that while I keep talking about swarming across scales, most of the algorithms that we run on small numbers of robots would actually break down if we ran them with huge numbers of robots. Which is actually contradictory to what we usually say in swarming, which is that swarms, because they're decentralized, are scalable to huge numbers and robust to individual failure. But actually many of the algorithms that we've been designing for those small numbers do rely on those small numbers working reasonably well. So you can assume that if you put out a hundred robots and a portion of them misbehave or do something poorly because they're just not working, that will skew the swarm behavior as a whole.

So they might be coming to a decision that wasn't necessarily the one that they were meant to come to. And so we need to think more carefully about how you engineer these swarms in a way that makes them robust and reliable. Swarm engineering across scales and getting features for free only works if you've really tested across those different scales and learned something about that system. I actually think that the things that are inspired by the nano and micro world are more robust in huge numbers, simply because we're making so few assumptions about their capabilities that there are fewer breakdown points.

Michael: Yeah. You know, listening to that brings up a lot about the study of the evolution of human civilization. It gets to that issue of, how do we manage ourselves at scale? We need a different algorithm for relating to people at that scale, just to throw random things in here.

That's the whole thing with the blockchain, right? It's an attempt to scale trust in trustless environments. So I'm curious, given all of that, about the advent of multicellularity in these robot swarms, and tissue differentiation, because it seems like the theme across all of these different systems and substrates is that you reach a certain swarm size, and then the swarm actually benefits from differentiating within itself. Do you see work being done on that?

Sabine: There is a lot of work on homogeneous swarms that have the same program, but even though they have the same program, the environment they're in is going to drive their behavior. So you could get different behaviors of the individual robots based on their location and what they're sensing. With the morphogenesis work that we've done with James Sharpe over at the CRG, you essentially generate chemical fields based on two virtual morphogens that give you Turing patterns like spots or stripes, depending on how you set those parameters.

And then that allows you to grow these limb structures, so it can be seen a little bit like differentiation, even though the code for all those robots is exactly the same. Just like the code for all our cells is exactly the same. So it's quite interesting to see how, from a homogeneous swarm, you get something that is quite specialized on a local scale and is able to do its function.
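The two-morphogen patterning she's describing is in the spirit of Turing's reaction-diffusion systems. As a generic illustration, not the actual morphogenesis controller, here is a one-dimensional Gray–Scott model on a ring of cells: two virtual chemicals diffuse at different rates and react, and structure emerges from a nearly uniform start. The grid size, parameters, and threshold are arbitrary choices in the model's spot-forming regime.

```python
import random

random.seed(2)

N = 200                 # cells on a 1-D ring -- think of them as tiny agents
Du, Dv = 0.16, 0.08     # the two virtual morphogens diffuse at different rates
F, K = 0.0367, 0.0649   # feed/kill rates in a spot-forming regime

u = [1.0] * N
v = [0.0] * N
# Seed a small noisy patch of the second morphogen in the middle.
for i in range(N // 2 - 5, N // 2 + 5):
    u[i] = 0.5
    v[i] = 0.25 + 0.05 * random.random()

def lap(a, i):
    # Discrete Laplacian on the ring: how a cell differs from its neighbors.
    return a[(i - 1) % N] + a[(i + 1) % N] - 2 * a[i]

for _ in range(5000):
    un, vn = u[:], v[:]
    for i in range(N):
        uvv = u[i] * v[i] * v[i]
        un[i] = u[i] + Du * lap(u, i) - uvv + F * (1 - u[i])
        vn[i] = v[i] + Dv * lap(v, i) + uvv - (F + K) * v[i]
    u, v = un, vn

# Cells where the second morphogen is high mark the "spots" of the pattern.
pattern = "".join("#" if vi > 0.2 else "." for vi in v)
print(pattern)
```

The point of the sketch is the one she makes: every cell runs exactly the same rule, and the differentiation into "spot" and "non-spot" regions comes entirely from the chemistry and the local neighborhood.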

Also interesting is when you go to huge numbers of robots, if you want to make them robust, there's really two different strategies. Either they're so simple that it's just reaction-diffusion, and if individuals fail, that doesn't really matter because the other ones are just going to continue moving randomly and reacting to their neighbors. So you somehow make that system robust by making the individuals so dumb that there's not much to attack.

The other side is you have more sophisticated behaviors, and then you need to introduce some sort of artificial immune system, which actually makes sense. If you look at the evolution of multicellular systems, at some point we had to come up with our own immune system to be able to weed out bad actors so that we could continue maintaining these complex systems. And so in that case, you need to actively have a system that allows you to see what your peers or other robots are doing, so that you can detect if there are any actors that are breaking down or not working, because those could fundamentally alter the intended behavior of your swarm as a whole.

So I think it's a really good time right now to start studying these questions of how you make these systems robust, and what you need to introduce maybe in terms of artificial immune system or checking so that you can have that work. But likewise, it could be that we designed the swarm rules in such a way that they're safe by design and individually you're limiting the ways in which that system could fail.

Michael: Yeah. To take a turn into my own dark mind on this stuff. I was thinking about this last night when one of the audience members brought up military applications and was interested. I think a lot of people, at least in the general public, are interested in that, and not necessarily on one side of that conversation or the other. But just interested. And it occurred to me that, were I DARPA, there's really two different programs here, in a way. Like we were just talking about, two different approaches depending on how finely grained your methods are.

One of them is to actually build your own swarm of intelligent actors, and another is to figure out how to co-opt another swarm. And so I'm curious about this in terms of thinking about it in the language of infection or invasion. It seems as though if the swarm is dumb enough, you can't really hack it, right? You can't actually co-opt the nano medical swarm because we're talking about very dumb random behavior. But I'm curious, how do you imagine something like a cognitive regime change happening in a sufficiently intelligent swarm?

Sabine: What do you mean by "cognitive regime change?"

Michael: There's so much research being done here by the people that pass through this campus on, for example, can we halt a seizure or can we change pattern of brain activity? Are there points where we have extraordinary influence on this network? What kind of research is being done on influencing the swarm that has already been designed? That's already influencing these emergent behaviors?

Sabine: Yes, so first of all, on the military side, I think we need to be mindful of what we create and there's many good civil applications of these technologies, whether it's the biomedical applications, the search and rescue, the ability to use this to sense the environment or pollutants or whatnot. But we also need to be wary of applications which aren't the ones that we're intending to design.

And so I think that's something that we need to be honest about and make sure that we're mindful of as we develop these technologies. In terms of making sure the system is robust, or being able to control a system, a swarm system: we're looking at research right now that aims to send in a couple of robots into a swarm, and that swarm could be a biological swarm, say mosquitoes, or a flock of birds that's not going in a good direction and might hit a wind farm.

And so the idea is that these artificial agents that we've programmed could go in, sample the reaction of the agents to its presence, and then extract the swarm rules from those interactions. So rather than having a laboratory setting where you have a godly view of what your swarm is doing, because you can monitor whatever the agent is doing, you would send in one or two or three agents, have them sample, and then extract the swarm model from that. And the reason it's interesting to do that is once you've extracted that swarm model from the swarm system that you haven't designed, then you would be able to potentially control it. So one example that we've been working on with Martin Homer is flocking. So that's a simple case where you could send in an agent, have them sample the behavior of other flockers, extract for example, the repulsion radius, and then using that information just by driving a couple of robots that know where to go, you're essentially pulling the flock as a whole.
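The "pulling the flock" half of that idea can be sketched with a zonal flocking model, hedged heavily since it is not the group's actual model: a few informed agents obey the same repulsion/attraction/alignment rules as everyone else, plus a blended pull toward a goal. Because the flock's rules are known here (the very thing the sampling research aims to extract), a handful of informed agents is enough to drag the group. All radii, weights, and counts are invented for the sketch.

```python
import math
import random

random.seed(3)

N, INFORMED = 30, 5           # 30 flockers; the first 5 are agents we control
GOAL = (100.0, 0.0)           # where we want to pull the flock
REPULSE, ATTRACT = 1.0, 10.0  # zone radii: the kind of rule one would try to extract
SPEED, OMEGA = 0.5, 0.8       # common speed; how strongly our agents blend in the goal

pos = [[random.uniform(-4, 4), random.uniform(-4, 4)] for _ in range(N)]
heading = [random.uniform(0, 2 * math.pi) for _ in range(N)]

def norm(x, y):
    d = math.hypot(x, y)
    return (x / d, y / d) if d > 1e-9 else (0.0, 0.0)

def step():
    new_headings = []
    for i in range(N):
        rx = ry = sx = sy = 0.0
        for j in range(N):
            if i == j:
                continue
            ex, ey = pos[j][0] - pos[i][0], pos[j][1] - pos[i][1]
            d = math.hypot(ex, ey) or 1e-9
            if d < REPULSE:        # too close: steer away
                rx -= ex / d
                ry -= ey / d
            elif d < ATTRACT:      # in range: attract toward and align with neighbor
                sx += ex / d + math.cos(heading[j])
                sy += ey / d + math.sin(heading[j])
        if rx or ry:
            dx, dy = norm(rx, ry)  # collision avoidance takes priority
        else:
            dx, dy = norm(sx, sy)
            if dx == 0.0 and dy == 0.0:
                dx, dy = math.cos(heading[i]), math.sin(heading[i])
        if i < INFORMED:           # our agents also blend in the goal direction
            gx, gy = norm(GOAL[0] - pos[i][0], GOAL[1] - pos[i][1])
            dx, dy = dx + OMEGA * gx, dy + OMEGA * gy
        new_headings.append(math.atan2(dy, dx))
    for i in range(N):
        heading[i] = new_headings[i]
        pos[i][0] += SPEED * math.cos(heading[i])
        pos[i][1] += SPEED * math.sin(heading[i])

start = sum(p[0] for p in pos[INFORMED:]) / (N - INFORMED)
for _ in range(600):
    step()
drift = sum(p[0] for p in pos[INFORMED:]) / (N - INFORMED) - start
print(round(drift, 1))  # how far the uninformed flockers drifted toward the goal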

So there's definitely ways in which you can skew the behavior of the swarm if you know how that swarm operates, and that might be useful if the swarm is misbehaving and you want to just be able to send a couple agents in to be able to push that swarm elsewhere. It could be useful in the case of understanding and monitoring animal populations, and being able to push them in the right direction. That works for a limited subset and we're just starting to explore this, where you have an idea of what the rules are. Not necessarily for things that are more complex than that.

Michael: Yeah. Listening to this, it makes me just freewheel into questions like could you inject spy robots into the construction crew of someone else's house and alter the building plan. Are there new forms of graffiti and subterfuge that can…? Of course there are!

Sabine: I have a student who, just on the more creative side ... I think we need to be ... People go straight to the "what if we hack it in this form to do something awful," because that is ultimately what science fiction tells us. And I was part of the Royal Society’s working group on machine learning that did a public survey of the perception of machine learning. And very few people knew of the term machine learning, something like 10%. And then what was interesting is they knew the applications of it. So they knew about the fact that you could talk to your phone and it would answer back, and natural language processing. They knew about autonomous cars, and they learned about it mostly from mainstream media and science fiction. So I can name so many shows that have swarms gone wrong. Whether it's, you know, Black Mirror, or Love, Death & Robots recently…

Michael: Although, Black Mirror was so annoying because they had this, they called it the autonomous insects, but they were all controlled from that central Jurassic Park computer center.

Sabine: I know! We wrote a whole takedown of that episode. But you know, I love science fiction. I really do. And it's interesting. But I think we need to think really creatively about these swarms and the really good things they could do, simply because instead of having one robot that's limited, we have many robots that could do things that we simply can't do, like cover a large area, typically.

So on the artistic side, since you brought up graffiti, I have a student with Paul O'Dowd who is looking at how we can use these robot swarms as a material that we could sculpt. And so can you develop a cool human-swarm interaction where you're moving your arms in interesting ways, and that guides these materials, which are built up of robots, to form different shapes?

I think it's really fun to think of these as new materials, new substrates, that we can start to sculpt and model based on something that's maybe more artistic and fun. Maybe a robot that goes in and does graffiti. It's not a bad idea.

Michael: Well you know I've definitely read science fiction like Charles Stross's Accelerando where swarms are deployed for evolutionary housing, where the building itself changes its architecture in response to the needs of the moment where you want to sit down, and so the room exudes a chair. Is this the kind of heady, futuristic space that you fantasize into?

Sabine: There have been projects on that, for example robot furniture that could self-assemble into usable, functional shapes. Our work on morphogenesis with James Sharpe looks at growing these shapes in a fully self-organized way. Right now they're very organic-looking, and so they don't seem that functional. But actually the follow-on project that Danielle's doing is looking to make them more functional.

A little bit like slime molds, which, from an amorphous blob, can explore an environment, connect the nearest points, and do all these interesting computations. I do like the idea of having swarms that adapt to the needs of an environment. And there's a lot of work in modular robotics as well that has had that as a principle: could you have a robot that self-assembles into a walking robot if it needs to climb over something, and a rolling robot if it needs to do something different?

These ideas have been there. What's interesting with the large numbers is that all of a sudden you have the critical mass to actually start building shapes that are potentially a bit more functional. James came to us because he had a smaller swarm, and so you couldn't see the full beauty of this morphogenesis in action. You fundamentally need those large numbers when you're using these algorithms. And so I love the idea of things like self-forming shapes depending on needs.

Michael: Okay. So you've been a podcaster yourself for 10 years, you said, at RoboHub, which I highly recommend to people who want to see science communication done well. You've interviewed, I don't even know, countless engineers and researchers. This is a two-part question. Back to the question I asked you at the beginning: what are some of the most inspiring projects, in terms of their potential implications, that you've seen come out of all of that? And obviously you're going to leave a ton of amazing stuff out, but whatever comes to mind. And then, what aren't you seeing? What are the areas of robotics research, or research more generally into swarm behavior and swarm intelligence, where you feel like there are blank spots on the map?

Sabine: So the RoboHub team is a very big team, and actually Audrow Nash and his team are doing a lot of the most recent interviews, but it is mind-opening to speak to so many people in the field. And I love everything. It's very hard to choose. But what I'm excited about, really, is robots that are getting into the real world. Because I think one of the challenges that we face in public perception is that we keep talking about robots, but in all the discussions I have with the public, when I ask them, "Who has a robot at work? Who has a robot at home?", no one does, or very few might have a Roomba or something like that. And so we need to change that. So actually the discussions I like most are with the startups that are making real robots that are getting out into the real world.

And I think we need to be seeing more of that. So when you say what's missing, it's actually seeing more of that. It's seeing these robots that are being deployed and used by people.

The other thing that we're missing is this: after 10 years of doing science communication where we're saying that the experts need to tell their story and explain more about the work they're doing, I realize we need to do the opposite and actually listen way more to the public. So right now all my new PhD students start their projects by doing use case studies, which I'm learning how to do as well, because I'm not a social scientist. Currently we're running use case studies with firefighters and warehouse workers, as well as with bridge inspection experts. And we're just asking them, "What's your job? What do you care about? Where would you need help? What do you think of robots? What do you think of robot swarms?"

And I've found that these use case studies are just so mind-opening, first of all because when you're concrete, it's no longer the realm of science fiction. You're not talking about Terminator, you're talking about, "I walk over this bridge and I'm trying to detect a crack, and this is how I do it, and this is where a robot would be useful."

They're actually super open-minded about technology. For example, with firefighters, often their media person will know how to use a drone, so they've already had that kind of interface with robotics. And they genuinely see the areas where their expertise is important, which is what they value themselves for, and they see the areas they couldn't care less about, like stacking boxes in the back of their charity shop, for example.

And so I think we need to be doing that way more, so that we're developing technology that doesn't provoke a backlash, something that we can deploy in the real world and that is useful for people. It's funny, because after 10 years of science communication you'd think I'd have known that, but actually it's this work with the Royal Society that made me realize how it's done and how it should be done. So that's what I want to see more of.

Michael: You know, in a way that seems to be itself an instance of a biomimetic approach to design, where you have more than one direction of information flow, where you're drawing on the collective intelligence of communities. Let's tie a bow on this here, but it seems like all this looking to the natural world for inspiration in the design of technology is bringing us to a point where it's getting harder to actually draw the line between the living and the non-living. What are your thoughts on the robotic or the technological in its relationship to the biological?

Sabine: Our wet lab on the nano-biomedical side is in the synthetic biology lab, and it's really interesting to see the interface between the engineering approach and living systems. And I think, just like we're doing swarming across scales, we can think of controllability of agents across scales as well, and design these systems as a whole, whether they interface with natural systems or artificial systems.

Again, it's this idea of swarm-level intelligence and how you implement that at the individual level. And those individuals could be a mix of real cells, they could be cells mixed with microparticles, they could include an external observer that's trying to interface with that swarm. So we need to think of the system as a whole, and the building blocks of that system, I think, could really be broad.

Michael: Well, awesome. Sabine, it's been a pleasure to be a dumb node in a smart network with you for the last 40 minutes.

Sabine: Oh, that was a very smart conversation. So thank you.

Michael: Thanks a lot. Where will we send people to follow up on learning about your work?

Sabine: They can check out the Hauert Lab website, and RoboHub if they're interested in robotics more broadly.

Michael: Excellent. Thank you.