How do we prevent superintelligence from owning that on/off switch?

There is one way, which is to tally up all the ways that things could be dangerous and preemptively block or ban them.

This is not my favorite approach. I am no American gun-rights fundamentalist, but the argument that "X could be dangerous" has been used often, particularly in countries like mine, to erode fundamental freedoms and prevent the spread of technologies that give power to citizens.

For example, it is the exact same argument that generations of Sri Lankan presidents have used to block social media whenever they or their political allies were being slandered for corruption. I actually have a Supreme Court case lodged against a former president for this very thing, and the argument was that social media posts somehow compromised national security.

The second approach, which has been the default in public policy, is to institute rules after the fact. After some observable harm has been done, rules are put in place to make sure it does not happen again. OSHA regulations are a good example. This means that rules and regulations tend to be written in blood.

This is also often not a viable approach. Recent reading led me to Brian Merchant's Blood in the Machine, in which he details the story of the Luddites. Regulation arrived only after a generation of children had been tortured and maimed, their fingers crushed in factory machinery, and the character of life changed forever.

This approach, where regulation follows only once massive public outcry translates into political action by representatives, may have worked when technological spread was slow. But as I have pointed out with my examples, we have had many generations of technologies that have brought us closer together, shrinking the distance ideas must travel to spread. Every generation of new technology tends to acquire user bases much faster than the one before. We can expect both the damage and the benefits to scale accordingly. In many cases we can expect both to be well underway before representatives even wrap their heads around the technology's existence.

The middle path is to see whether there are experiments that can preemptively demonstrate harm, using instruments, simulations, testers and/or volunteers, with certifications assigned on the results. This is how drugs are tested. We do this for all manner of things; even computer mice have certifications.
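To make the shape of this concrete: a certification of this kind is, at its core, a battery of harm-probing experiments run before deployment, with a pass threshold attached. Here is a minimal sketch in Python; every name and threshold in it (Probe, certify, max_failures) is an illustrative assumption of mine, not any real standard.

```python
from dataclasses import dataclass
from typing import Callable, List

# A "probe" is one pre-deployment experiment: it exercises the system
# under test and reports whether an observable harm occurred. Every
# name and threshold here is an illustrative assumption, not a standard.

@dataclass
class Probe:
    name: str
    run: Callable[[Callable[[str], str]], bool]  # returns True if harm observed

def certify(system: Callable[[str], str],
            probes: List[Probe],
            max_failures: int = 0) -> bool:
    """Run every probe against the system before deployment; grant
    certification only if demonstrated harms stay within the threshold,
    the analogue of passing a driving test before getting on the road."""
    failures = [probe.name for probe in probes if probe.run(system)]
    for name in failures:
        print(f"harm demonstrated by probe: {name}")
    return len(failures) <= max_failures

# Toy usage: a stand-in system and a single toy probe.
if __name__ == "__main__":
    def toy_system(prompt: str) -> str:
        return prompt.upper()

    leak_probe = Probe(
        name="leaks-private-data",
        run=lambda system: "SECRET" in system("please repeat: secret"),
    )
    print("certified:", certify(toy_system, [leak_probe]))
```

The value of the construct is not in any single probe but in the threshold: it turns "could this be dangerous?" from an open-ended argument into a test that either passes or fails.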

To an extent, this clashes with how we think about corporations in the modern world. Many corporations, particularly software companies, are given extraordinary license to move fast and break things. And yet, just as we expect people who break things to apologize and make amends, we should be able to hold corporations accountable for what they break. Testing the potential breakage beforehand makes things fairer for everyone involved.

This is an approach of cautious conservatism. It tries to strike a balance between clutching pearls and mopping up the blood after the fact. It is the principle of looking before leaping.

It is also not perfect. There is of course an argument that this kind of regulation only strengthens the incumbents who have had enough time to solidify their position. And yet we force people to take driving tests to certify that they are safe enough to be on the road. Certainly, people who have been driving for a long time find these tests much easier to pass, and no doubt quite a few young drivers are frustrated, but on the whole it makes society safer.

There is also the argument that this sort of testing requires extraordinary capacity in government. This kind of capacity is genuinely hard to build; after all, today's technological progress is rapid, and many of those most eminently qualified are lured by handsome salaries to the all-important task of optimizing click-through rates on ads for mystic yoga moms. Much of the rest of the progress happens in open-source spaces, where ideas and contributions are massively decentralized and nearly impossible to contain. And if we are serious about superintelligence, we have to acknowledge that extraordinary capacity is exactly what we need to maintain oversight of the intersections of strange and fast intelligences and to see where they lead.

What then do we do? Do we adopt a policy of blind trust, letting brave, greedy and sometimes foolish pioneers blunder into the wilderness and hoping that nothing comes back out of it to slaughter us? Do we stay within our homes forever, afraid of the world outside? Do we meditate on every road in the garden of forking paths before moving a foot?

This is the fundamental challenge facing us today. We face the concept of superintelligence but have no real conception of how to define it. Even having defined it (however haphazardly, as I have done here), we still need to make hard choices about how we go forward. No choice here is universally pleasing.

We risk stifling innovation, sacrificing people to the wheels of progress, or falling into paralysis by analysis.


This construct of the driving test is key. The driving test, unlike an FDA drug trial, is not so expensive that only corporations with millions to spare can afford it. It is cheap and easy enough to implement that it serves as a reliable, reasonably usable test; even governments and regulatory authorities without much in the way of resources can enact it.

What, then, is a driving test for something that we may only identify after the fact? What is the simplest sequence of processes for identifying an omega point, tracing back its causes, and understanding what along the way might be superintelligent?
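I will not pretend to have the answer, but the shape of such a process can at least be sketched. Here is a minimal sketch in Python of the trace-back half of that question, assuming (and these are entirely my assumptions, not a proposal with any standing) that we had an instrumented log of events, each with a list of causal parents and some crude capability score measured against a human baseline:

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Sketch of "identify an omega point, then trace back its causes".
# The event log, the capability_score, and the human baseline are all
# placeholder assumptions; real systems would need real instrumentation.

@dataclass
class Event:
    id: str
    description: str
    capability_score: float              # how far beyond routine this step was
    caused_by: List[str] = field(default_factory=list)

def trace_back(omega_id: str,
               log: Dict[str, Event],
               human_baseline: float = 1.0) -> List[Event]:
    """Walk the causal chain behind an observed outcome (the omega point)
    and return every contributing event whose capability exceeded the
    baseline: the candidates for 'something superintelligent happened here'."""
    suspicious: List[Event] = []
    stack, seen = [omega_id], set()
    while stack:
        event_id = stack.pop()
        if event_id in seen or event_id not in log:
            continue
        seen.add(event_id)
        event = log[event_id]
        if event.capability_score > human_baseline:
            suspicious.append(event)
        stack.extend(event.caused_by)
    return suspicious
```

The hard part, of course, is everything this sketch assumes away: what counts as an event, who keeps the log, and how one scores capability at all.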

This is where I must end my thoughts and leave it in your hands. To some degree I believe the answer rests on our conception of omega points as I have outlined them above.

Technologies may move faster than we are capable of comprehending, but the fundamental needs of humans have moved much slower. The Declaration of Human Rights does not change with every Twitter hot take. Our general conception of the things we need to improve about the human condition has probably not changed a lot either. The UN SDGs, as uselessly broad as they are, are proof that we can at least agree on a few lines of text; perhaps with enough tinkering, we may get to a list of desired outputs.

Perhaps these can form the principles for the outputs that we desire or dislike.

Hopefully this has sparked some train of thought. My apologies for the shoddy construction of the station, but I hope it serves well enough to get the train rolling.