Their study addresses what Oxford philosopher Nick Bostrom calls the control problem: how do we ensure superintelligent machines act in our interests? The researchers conceived of a theoretical containment algorithm that would resolve this problem by simulating the AI’s behavior and halting the program if its actions became harmful.

The study found that no single algorithm could calculate whether an AI would harm the world, due to the fundamental limits of computing.

This type of AI remains confined to the realm of fantasy, at least for now. But the researchers note that the tech is making strides towards the type of superintelligent systems envisioned by science fiction writers.

“There are already machines that perform certain important tasks independently without programmers fully understanding how they learned it,” said study co-author Manuel Cebrian of the Max Planck Institute for Human Development. “The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity.”

You can read the study paper in the Journal of Artificial Intelligence Research.
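For readers curious why the limit is fundamental, here is a minimal Python sketch (not from the paper itself) of the halting-problem-style self-reference such impossibility results rest on. The names `would_act_harmfully` and `cause_harm` are hypothetical, used purely for illustration; the point is the contradiction that follows from assuming a perfect checker exists.

```python
def would_act_harmfully(program_source: str) -> bool:
    """Hypothetical oracle: returns True if and only if running
    `program_source` would eventually produce a harmful action.
    Assumed to exist for the sake of argument -- no such total,
    always-correct checker can actually be built."""
    raise NotImplementedError("No such checker can exist.")

# An adversarial program constructed from the checker itself:
PARADOX = """
if would_act_harmfully(PARADOX):
    pass            # checker predicts "harmful" -> stay harmless
else:
    cause_harm()    # checker predicts "safe" -> act harmfully
"""

# Whatever the checker answers about PARADOX, the program does the
# opposite, so the checker must be wrong about at least one program.
# This is the same diagonalization that makes the halting problem
# undecidable, which is why no containment algorithm can judge every
# possible AI.
```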