In my last piece, I wrote about the rise of organoid intelligence—tiny brain-like structures grown from stem cells that can learn, adapt, and interact with virtual environments. They’re not just mimicking intelligence. They’re developing it in a way that looks eerily similar to our own.
But while the science is fascinating, there’s a darker side to all this. The more these organoids start to behave like real brains, the harder it gets to ignore the ethics. What happens when a computer isn’t just code, but something alive?
Organoid intelligence isn’t some future tech—it’s already here. And unlike regular AI, you can’t just reboot it. These systems learn through experience. They form memories. And once that happens, there’s no clean reset button.
Some are already being used in experiments and robotic systems. Others are even available for rent as biological processors.
But what happens when they stop working properly? Or when they’ve learned too much? With silicon chips, you just shut them down. With organoids, “shutting down” might mean ending a living system.
That raises some uncomfortable questions.
Calling it “killing” might feel dramatic—maybe even inaccurate. These aren’t conscious beings by any standard definition of consciousness. But when you’re dealing with something that grows, learns, and remembers, ending it isn’t just a technical decision. It’s a philosophical one. And it’s not a decision we can make lightly.
Some researchers are starting to raise these questions. Neuroethicist Dr. Nita Farahany has warned that if brain organoids ever become sentient—or even come close—we’ll need a serious ethical framework to guide how we treat them.
Others, like Dr. Julian Savulescu, have floated ideas like engineered forgetting, a kind of biological reset—but even he admits it’s risky and could do more harm than good.
None of this is hypothetical anymore. We’re already working with these systems. And if there’s any chance they can feel something—even something we don’t fully understand—don’t we have a responsibility to find out before we go further?
What if an organoid becomes distressed but can’t express it?
What if it resists being shut down in subtle ways we miss?
What if it gains just enough awareness to suffer, but not enough to communicate why?
We might be building systems that learn like children and disposing of them like outdated machines.
So what exactly are we creating here—tools, minds, or something in between?
Organoid intelligence forces us to rethink what it means to build something that learns. If there’s even a small chance these systems can suffer, then ignoring it isn’t just careless—it’s a moral failure.
We’ve already created them.
Now we have to decide what kind of creators we want to be.