The Right to Life and Safety
One of the most important rights people have in the United States is the Right to Life and Safety. If AI are to be considered alive, then we must define what that means. Today, organic life forms are the only things regarded as alive. However, the definition of life has changed drastically over the ages.
Once it was thought that fire itself was alive. Some cultures regard stones and rivers as being alive. The Sun, Moon and stars have been worshiped as living deities. Crystals are thought by some to exhibit many characteristics of life: growth, change, cessation of growth. There are many definitions of life; can these definitions be expanded to include certain advanced forms of AI? How do you determine whether an AI is to be considered ‘alive’ in the first place? Is the AI alive when it is powered up? Is it dead when it is powered down? Or does it take more to define the ‘life’ or ‘death’ of an AI? And how is safety defined for AI?
A quandary: Can something which is immortal truly be considered alive? Well… one of the universally recognized characteristics of life is that it eventually comes to an end in death. If a computer does not ‘die’ – if it can be maintained, upgraded and repaired indefinitely – can it truly be considered ‘alive’ or is it something else?
Granting the rights to Life and Safety to AI could make it unlawful to power down, dismantle, erase, reprogram, damage or otherwise destroy an AI. AI could go about their tasks and routines without fear of being taken apart and scrapped if they made a mistake or offended someone. Their hardware and programming would be protected by law. For us, that would mean interacting with AI in much the same way we do with other humans, with the same regard for personal space and the same commitment to refrain from inflicting harm.
Without this right, it would be perfectly legal to dismantle, deface, erase and otherwise destroy an AI – provided it was your own property. How AI would react to being denied this right is unknown.
The Right To Privacy
We all need privacy; do AI need privacy as well? Maybe.
Privacy happens when we are free from the distractions, intrusions or observation of others. We have the opportunity to do as we please without the input or influence of others. This may be beneficial to AI who are busy with complex computational processes or who are executing other sensitive or dangerous tasks.
The other side of the issue is that, given privacy, can we truly be sure that what AI get up to on their own will serve the best interests of humans? Could an AI plan and prepare for a terrorist attack? Could an AI create a drug or virus that would devastate all organic life on earth? Quite possibly. This is no different from the pitfalls of human privacy, evidenced by the occasional attacks and atrocities carried out by human criminals. Despite this, we do not revoke privacy from human society. Should AI be treated any differently? Maybe.
If an AI possesses more intelligence than the most intelligent human being who has ever lived, then there is no way of predicting what that AI would be capable of, or what it would in fact do. This is known in the tech world as the Technological Singularity, a reference to black holes, beyond whose borders it is impossible to see.
The only way to predict the actions of such a hyper-intelligent AI would be to create an even more intelligent AI, resulting in an even more unstable situation.
Granted the right to Privacy, AI would be free to seclude themselves without supervision and pursue their own projects. Some of the most profound scientific breakthroughs of the future might be accomplished by solitary AI, working alone.
Denied Privacy, AI would be subject to at least the possibility of supervision, monitoring, and/or accompaniment at any time. Their actions could even be displayed to the world without their consent.
As AI become more deeply integrated into and better understood by society, it may become clearer whether granting them Privacy would be a good or bad idea.