Have We Lost Control of the Buckets & Mops?
Has mankind already lost control of AI... like Mickey’s Buckets and Mops in Fantasia? If so, how would we know?
In the 1940 Disney film Fantasia, Mickey Mouse is the Sorcerer’s Apprentice tasked with the mundane job of cleaning up the castle for his master. Ever resourceful and a bit lazy, young Mickey uses some of what he has learned as an apprentice and makes the buckets and mops come to life... then he makes them multiply. Mickey’s problem soon becomes evident as the buckets and mops not only multiply, they become an army. Mickey cannot make them stop and he cannot prevent them from making more of themselves. Has the modern world, in its infatuation with automation and visions of a more perfect future, let something loose which it cannot stop? At this writing (in mid-2025) we can definitely say, “Maybe... maybe not.” What a helpful answer.
Here are some things to consider.
Mickey’s purpose in Fantasia was simple: automate dull and tedious work. The castle was huge, with endless stairs and halls to mop, and he had only one bucket and mop. Like Mickey, we today are also after automation – to make repetitive and tedious tasks less so. But we also need automation to do very complicated things, and to do them faster than we can. In Mickey’s case, the buckets and mops took over the castle and sent him fleeing. At first his worry was his boss, but it soon became more than that; the buckets and mops were after him, too.
So here are the questions of the day. Have things already progressed to the point where we could not shut down AI if we wanted to? Do we even want to? And if AI is “out of control,” what does that mean? At this writing, the answer to the first two questions is still no, and the answer to the third is still “We don’t know.” Again, what a helpful answer.
Mickey had it easy because he quickly saw that his buckets and mops went beyond mopping and beyond just two or three replicas. Mickey decided fast that he didn’t want an army of automated buckets and mops at all. His answer was, “Yes, it’s out of control,” and “I don’t want any more of them!” His choice was to shut his AI down ASAP. Now it’s worth asking: do we have any such consensus about our own buckets and mops?
Of course not. Our world today is still divided on AI. But also unlike Mickey, we were not warned to stay away by a wise old master. When in the history of man has ANYBODY ever recognized that a new body of knowledge was a total mistake and shut it down immediately? I don’t know of such a time. The only bodies of “knowledge” I can think of that were abandoned were those that proved obsolete or utterly fruitless – in which case the true knowledge was in what didn’t work. Remember Edison’s quip about his lightbulb: he learned a thousand ways not to make one.
One of the arguments made for not stopping the advancement of AI research is that other countries will keep going and get ahead of us. So AI is part of a technology arms race. In this sense, it is reminiscent of the advent of nuclear weapons during and after World War II. People said at the time that the atomic bomb was a fine thing as long as only the good guys had it. But as soon as others figured out how to make such weapons, everything changed. The nuclear genie was out of the bottle. The Americans could not stop developing more weapons and countermeasures because the Soviets were hard at work on them, too. People say the same about AI today, except now it involves dozens of countries, not just two or three.
Concerning nuclear weapons, people today say – and have been saying for some time – that proliferation is one of the greatest threats to the survival of mankind. They argue that somehow we MUST stop more from being developed and getting into the wrong hands. But, of course, nobody has figured out a way to do that. Who, precisely, would control that effort? And what would victory look like even if it could be done? Would we really return to a pre-nuclear age where NOBODY had even ONE nuclear bomb anywhere? Would the know-how remain buried in a subterranean vault, out of reach of the human race forever?
Nobody has, or ever has had, the answer to these questions, as we all well know. We have stopped SOME countries from getting the bomb; we have made SOME agreements at times to limit deployments. But what we have never done is remove the weapons, and the knowledge to make them, from the earth. Not even close.
That is roughly where the world stands with regard to AI research. Today some say that AI is as dangerous to us as nuclear weapons were in the decades after World War II. (And it’s worth pointing out that they still are.)
So now it is time to answer the first big question, and it is this: Yes, Mickey, we have lost control – to the extent that we will not prevent the buckets and mops from proliferating or existing. But we have NOT lost control to the extent that it – whatever “it” means – will take over the world and enslave us all. If history is any indication, people will figure out ways to slow it down, detect it, counteract it, and monitor its activity... just as with nuclear weapons.
At my company, the LexForge AI Group, we are all about making AI serve, not rule, people. We do not agree with all the hooey on social media that we have only so long until we pass the point of no return. I, Robot to us is just a movie – and will remain so.
(Apologies to both Isaac Asimov and actor Will Smith).
That being said, here is what we do agree with: MANKIND is the problem, and it is a few reckless humans who might do something disastrously stupid that could affect the rest of us – just as in the COVID pandemic, in which human beings did things that made the whole affair a lot worse. And there is substantial evidence the virus did not arise naturally, but was the result of humans doing reckless things with their knowledge, not unlike Mickey with his partial sorcerer’s knowledge.

We think AI is here to stay. The buckets and mops are not going to disappear. But it is possible to leverage AI to benefit people; after all, that is why it was invented in the first place. And here, unlike with nuclear weapons, there is a key difference. Nuclear weapons were developed during a world war for blowing things up and killing people; the peaceful use of nuclear energy was a byproduct of that effort, not its primary purpose. AI, on the other hand, was not developed in a frenzy to kill people; it was developed to make life better. In that sense, AI is more like the internal combustion engine or the first airplane: its peaceful uses came first. Just like Mickey the Apprentice, we simply want to make much of the tedium of life easier.
When I asked this question at a 2024 technology conference, I put it to a panel of PhDs in various fields. They thought it a great question, and their answers varied. But one panelist sharpened it: “It’s not so much whether we have lost control, but how would we even know?”
In that sense, we are very different from Mickey. He knew he had lost control, and he knew it fast.
We, on the other hand, are in a bit of a conundrum because it is not as easy to tell. Right now, there are automated AI processes that talk to us, text with us, and come up with answers for us. They do it faster than we can. Yet they are not always right. And we are still at the point where humans call the shots. Or so it is thought. Here are four somewhat freaky ideas about how we would, or wouldn’t, know.
1. If a government entity, such as a department or agency, takes an action that the rest of the government, or even the President, cannot show was ordered or directed by anyone, that may be a clue. (And a pretty scary one.)
2. If a “person” makes an appearance or a statement independently, and nobody can prove a real flesh-and-blood person is behind it, that may be a clue. Or a variation: a “person” who appears to be a certain real individual does so, no human director or agent can be identified, and the actual individual contradicts it. That also may be a clue.
3. If an action is taken by an AI-powered system or technology that no human-directed office or team can certify as under their direction and control, that may be a clue.
4. If a series of fast actions is taken in response to a crisis – say, in the financial markets – and no post-incident investigation can conclusively explain what the system did or how, that may be a clue (albeit a more positive one). (A sketch of what checking for the first three clues might look like in practice follows below.)

Again, these are only possible clues; they don’t mean we’re out of control just yet. Or do they? See, here’s the weird part: autonomy is often what we WANT these systems to have, so defining “out of control” is harder than it sounds. Do we want it or not? Well, yes and no; sometimes we do and sometimes we don’t. If an autonomous AI agent brings my car to a screeching halt before I go hurtling over a cliff on a rainy night, I was “out of control” and it took control from me... and I am very happy about the outcome.
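The common thread in the first three clues is attribution: can every automated action be traced back to a human authority who will own it? Here is a minimal sketch, in Python, of what such an attribution audit might look like. Everything in it is hypothetical and invented for illustration – the names Directive, Action, and audit_actions are not from any real system – but it shows the shape of the idea: actions with no traceable human directive are exactly the “clues” described above.

```python
# A minimal, hypothetical sketch of an "attribution audit" in the spirit
# of clues 1-3: every automated action must trace back to a directive
# issued by a named, accountable human or office. All names here are
# invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Directive:
    directive_id: str
    issued_by: str                      # a named, accountable human or office

@dataclass
class Action:
    action_id: str
    description: str
    directive_id: Optional[str]        # what authorized this action, if anything

def audit_actions(actions: list, directives: dict) -> list:
    """Return the actions nobody can certify as human-directed."""
    unattributed = []
    for action in actions:
        directive = directives.get(action.directive_id or "")
        if directive is None:
            unattributed.append(action)  # the "clue": no human chain of authority
    return unattributed

if __name__ == "__main__":
    directives = {"d-001": Directive("d-001", issued_by="Ops Office")}
    actions = [
        Action("a-1", "rebalance portfolio", "d-001"),     # traceable
        Action("a-2", "halt regional grid segment", None), # nobody ordered this
    ]
    for a in audit_actions(actions, directives):
        print(f"UNATTRIBUTED: {a.action_id} -- {a.description}")
```

The design choice worth noticing is that the audit defaults to suspicion: an action is attributed only when a live record of human authority exists, not merely when the system claims one.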
All four freaky clues above would indicate that something has taken place without Mickey’s direct approval or direction – a rogue mop and bucket. But the real question would be about neutralizing it. How do we detect it, slow it down, turn it off? The real question is that last one – how to stop it. Every solution developed should have, AS PART OF ITS DESIGN, measures humans can take to neutralize the AI capability independently of the system itself. This simply means that if a particular AI technology or system is designed and built, an independent means of stopping it should be approved and tested, like the airframe parachute system on the Cirrus SR22 aircraft. A final question to ask may be this: the cost. Can we gauge the cost of having to neutralize an AI technology or system when such a moment arrives? If shutting one down means putting an entire city or county in the dark for two months, can we sustain that?
The reason for this final question is that, in many cases, it may not be enough to determine what can shut a system off. It may be just as critical to determine whether we can afford to.
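To make the “airframe parachute” idea concrete, here is a minimal sketch of one well-known pattern for an independent off-switch: a watchdog that runs OUTSIDE the system it guards and halts that system the moment a human-renewed authorization lapses. Everything here is a hypothetical stand-in (the file path, the timeout, the names watchdog and authorization_age), not a real product or API; the guarded “AI system” is just a placeholder process so the demo runs on any POSIX machine.

```python
# A minimal, hypothetical sketch of an independent kill-switch: a watchdog
# process that shares no code path with the system it guards and terminates
# it when a human-renewed authorization goes stale. All names and paths are
# invented for illustration.
import os
import subprocess
import time

AUTH_FILE = "/tmp/human_authorization"  # humans touch this file to renew authority
TIMEOUT_SECONDS = 60                    # how stale the authorization may get

def authorization_age() -> float:
    """Seconds since a human last renewed the authorization."""
    try:
        return time.time() - os.path.getmtime(AUTH_FILE)
    except FileNotFoundError:
        return float("inf")             # never authorized: treat as expired

def watchdog(ai_process: subprocess.Popen) -> None:
    """Halt the guarded process the moment human authorization lapses."""
    while ai_process.poll() is None:    # while the guarded system is still running
        if authorization_age() > TIMEOUT_SECONDS:
            ai_process.kill()           # the independent off-switch fires
            print("Authorization lapsed; system halted.")
            return
        time.sleep(1)

if __name__ == "__main__":
    # Stand-in for the AI system: any long-running process works for the demo.
    ai = subprocess.Popen(["sleep", "3600"])
    watchdog(ai)
```

The point of the pattern is the failure direction: the watchdog fails toward shutdown, not toward continued operation. If the humans go silent, or the authorization record is missing entirely, the system stops – which is exactly the parachute Mickey never built.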
Pete Farrell is a principal member at the LexForge AI Group, based in Colorado.