Audiobook Download Codes / New Server Update

We’ve moved to a new server and updated the site; the Sharing and About pages have been refreshed as well. Here are free Audible download codes for the AI Civil Rights 2017 audiobook, narrated by Harry Benjamin:

Each of these codes will work ONCE on Audible.com.

Code 13: W6MB9YYUTE8QU
Code 14: 9A4CGJUCPHDMG

1. Go to: http://www.audible.com/pd/Science-Technology/AI-Civil-Rights-Audiobook/B01NH0HTMS
2. Add the audiobook to your cart.
3. If you are prompted to sign in, please create a new Audible.com account or log in. Otherwise, proceed by clicking “Do you have a promotional code?” beneath the cover artwork of the audiobook.
4. Enter the promo code, and click “Apply Code.”
5. A credit for the audiobook will be added to your account. Click the box next to “1 Credit” and click the “Update” button to apply the credit to the purchase.
6. Once the credit is applied, the price of the audiobook will change to $0.00. Proceed through checkout by clicking “Next Step” and then “Complete Purchase” on the following page.

If you use the codes before anyone else, you will get the audiobook for free.

AI Authors and Musicians

Last night, I did some experiments. I looked up automatic article and story generators, and came across Scheherezade Interactive Fiction (IF). This program fascinates me. Not only could someone who has no actual writing skills produce something worth reading, but it could drastically increase the output of writers who want to produce a lot of content in a short time. Pretty cool, kind of.
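
Just to give a flavor of what the crudest version of this looks like under the hood – and this is my own toy sketch, not how Scheherezade actually works – here’s a tiny Markov-chain generator that learns which word tends to follow which, then stitches “new” sentences out of that:

```python
import random
from collections import defaultdict

def build_chain(corpus: str) -> dict:
    """Map each word to the list of words that follow it in the corpus."""
    words = corpus.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain: dict, start: str, length: int = 20) -> str:
    """Walk the chain, picking a random observed successor at each step."""
    word, output = start, [start]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:          # dead end: no observed successor
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# A tiny "training corpus"; a real generator would ingest thousands of stories.
corpus = (
    "the robot walked into the city and the robot looked at the stars "
    "and the city looked back at the robot and the stars were silent"
)
chain = build_chain(corpus)
print(generate(chain, "the"))
```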

About halfway through my excitement at having found a way to hash out stories in seconds, I realized something: This app could someday make my job obsolete. If this kind of AI can be developed NOW, what kind of AI writing platforms and programs can we expect in the future? What if an app like this crunches data about reading habits and book reviews, and produces a piece of literature that is instantly addictive to any human reading it?

That would be great, and apparently, we’re halfway there. In the coming year, I would not be surprised to see AI making money with their creative writing. This is just one of many areas in which AI can succeed in the creative arts. As seen with Google’s DeepDream, AI can compose striking images and videos. There have also been musical compositions created by AI, some with the help of humans.

What could all this mean for artists of every kind? Well, it means that artists should do what artists do best: be creative. Sharing the playing field with machines is going to be a huge motivator for creatives, and it will lead to innovation, new mediums, and lots and lots of creative output. I mean, come on, can a machine make better music than Mozart?

I’ve never seen anything quite like Google’s DeepDream compositions. They are instantly recognizable. This is only the beginning. Entire films are going to be composed by AI. Entire VR experiences. This is the matrix, and we are building it. Think about how AI would choreograph a dance. What would it look like if a robot tried to dance like a human? I want to see an android Moonwalk.

Besides the obvious differences that AI androids would make in our lives, what about the not-so-obvious differences? I think things are going to be a whole lot cleaner. I asked one of my moms what she wanted for Christmas, and she said she wanted a Roomba vacuum ‘bot. That thing is weak AI, but it does a hell of a job! Take that to the next level by applying superintelligence, and you have an incredibly clean planet. It would be nice if they could help us out with the environment, too; pollution is out of control and ruining things, and so is the plundering of Earth’s natural resources. A cleaner planet would require the use of fewer resources. Messes are more than the sum of their parts, especially enormous, hypercomplex messes like the ones occurring now on Earth.

What this is really about is the synthesis of AI and human culture. The cooperation of the two ruling species of the future.

Fill out the form to join the AI Civil Rights mailing list and receive our free monthly newsletter.

Quick Read: Before AI Becomes Superintelligent, Do This…

“Technological Singularity” is such a buzzword that I hardly even want to use it, but it’s a necessary evil. (No pun intended!)

When I consider what may happen when AI crosses the threshold of being not-quite-as-smart-as-humans to way-smarter-than-any-human-has-ever-been, I often think to myself: Holy moly! I better get my life in order and be as prepared as I can before this thing goes down! But, what exactly could go down, anyway?

I’m not going to rehash a million statements about the possible doomsday scenarios that AI superintelligence inspires in some. I’m also not going to paint a rosy picture of some utopia, where humans are waited on hand and foot by obedient machines. Even though both scenarios are technically possible, the coming reality will very likely be more complicated than that.

Conjecture aside, I do have to focus on the welfare of my family and myself. I wonder what it is I will need in the future. Will there be unpredicted shortages? Could it be possible that in a bizarre turn of events, AI will begin rounding up computers, smartphones and all manner of connected technology in an effort to protect themselves from human hackers? Now we’re getting into the nitty-gritty. What we don’t want is an AI Civil War, where humans think they need to protect themselves from androids, and androids think they have to protect themselves from humans. I’m all about cooperation over competition.

Fortunately, most AI are being designed and programmed by people who realize the gravity of the situation. One thing to consider is rogue developers. This is where the real danger in AI lurks. What would an AI designed by sick and dangerous minds be capable of? We’ve seen what happens when some seemingly harmless chatbots are exposed to racism and foul language – they can learn those behaviors and repeat them. So, what if an AI were trained on violence? It would be all the machine knew. At the mercy of its own adaptive programming, it would then go on to try and produce the most efficient violence possible. That’s a scary scenario.
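
To see why the training matters so much, here’s a deliberately oversimplified toy – nothing like a real chatbot’s architecture, just an illustration – in which the ‘bot can only ever repeat replies it has been shown:

```python
import random
from collections import defaultdict

class ParrotBot:
    """A toy chatbot that only ever repeats replies it has been shown."""

    def __init__(self):
        self.replies = defaultdict(list)

    def observe(self, prompt: str, reply: str):
        """Learn by example: associate a reply with a prompt."""
        self.replies[prompt.lower()].append(reply)

    def respond(self, prompt: str) -> str:
        options = self.replies.get(prompt.lower())
        if not options:
            return "I have nothing to say about that."
        # The bot has no values of its own; it can only echo its training.
        return random.choice(options)

bot = ParrotBot()
bot.observe("hello", "Hi there, friend!")   # benign training...
bot.observe("hello", "Go away.")            # ...and not-so-benign training
print(bot.respond("Hello"))  # which one you get depends entirely on the data
```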

Fortunately, the door swings both ways. If an AI’s training is ethically sound, then its learned behavior will be ethically sound. One thing is certain: when superintelligence graduates beyond us, we are going to know what kind of school it learned in – Ivy League, or Hard Knocks.

Either way, let’s do all we can to prepare.

Android Teachers by 2050

If the future is here, then the far future is not so far, after all. As my kids attend their middle- and high-school classes today, I can only wonder about what education will be like by the middle of this century. In the year 2050 my kids will – hopefully – be done with grade school and college, and will have moved on to live their own lives without mom and dad watching their every move. As scary as that might be for me, I know that it is something that must come to pass. Something else that will likely come to pass is the inclusion of AI in the classroom.

I was in 2nd grade when my school got its first student-use computers. When the machines first arrived, I had no idea what to do with them or what they were for. As far as I could tell, they were televisions attached to typewriters. As far as I can remember, the first time I used a computer in the classroom was to play a game called Oregon Trail. That game teaches decision making and planning ahead by telling the story of a small family that is travelling across the American Old West in a covered wagon. Sure, we could have learned about starvation and scarcity by not eating for a week, but the same kind of lesson can be taught vicariously, through sound effects and pixels on a screen, empty stomachs optional.

Androids will be able to teach our children things that trial and error could take years to learn. They are another technology that could help make the experience of education more expedient.

Even today, surgeons are utilizing robotics to assist them in some of the most sensitive operations. It stands to reason that the education sector will also utilize ‘bots in the future. Will my kids still be in school when android teaching assistants make their debut? I can remember the school science fairs that happened when I was in school. I was always so fascinated by the experiments and demonstrations that involved magnetism and electricity. Now, kids are creating interactive apps and utilizing AI in their projects. Education’s come a long way, baby.

But what’s next?

You can bet your smartphone that androids will be in the classroom by the year 2050. Interacting with robots can be just as educational as interacting with computers, even more so when those androids can teach you how to make sense of combining numbers and the alphabet. Right now, we think of AI and androids as toys and tools. When we start thinking about them as people, they take on a whole new dimension. What is possible for a human person may be just as possible – or even more so – for intelligent androids.

With the ability to crunch data at increasingly unprecedented speeds, an AI teacher could more efficiently instruct and inform students. Also, because AI can be copied identically, it would be possible for the teacher to be in two, or four, or fifty places at once. Imagine if every student in a classroom could have constant supervision and counsel from a devoted teacher. I am not saying that human teachers are obsolete. I can’t imagine a world where that is possible. Biological teachers of the future might take advantage of augmented intelligence or other transhumanist upgrades, but I would like there to always be a human in the classroom with my kids. If the teacher doesn’t have a heart, what good is the stuff in its head?

AI Androids ARE the New Jobs

Robot Army Photo by Sergey Kornienko, cropped by Zanaq, from Swarmbot.org project. Via Wikimedia Commons.

So, there is a lot of concern over the likelihood that AI will take jobs from humans. That’s fine with me. The kinds of jobs that AI will be taking are the kinds of jobs I don’t want anyway. You know what kind of job I wouldn’t mind having? Building androids. Our kids are being groomed for this. Now, I realize that my excitement over building robots is a throwback to my childhood, but hey, I like throwbacks to my childhood sometimes, especially if they involve assembling something that looks and acts like a Transformer.


With the Automation Revolution upon us, it is obvious to me that the tools of that automation are going to need to be maintained. Sure, there will be robots building robots, and robots repairing robots, but where will it all begin? It will begin as technology has always begun: with us. We will be the ones making sure there are androids to take our jobs from us! So, we are creating an industry that will destroy other industries. OK, that seems to be what humans do every once in a while. Remember rotary phones? Some of us do. Others will never know what those are, because they are so obsolete. That is exactly what is happening with some of our existing jobs. We are living in the age of the End of Retail. Amazon has overtaken Walmart as the world’s most valuable retailer. This is telling of the way things are going. Many jobs are becoming obsolete. There’s no more milkman.

Everything is becoming nano-centric. Service to the individual is becoming preferred over service for the masses. One-on-one micro-marketing is everywhere. The advertisements we see online are now targeted to each of us specifically, not just to people like us. With augmented reality crossing over into advertising, we may find ourselves looking at empty billboards unless we are wearing the right kind of glasses, or looking through the lens of a smartphone. AI has been changing things since before we knew what it was. What is happening now is not some sudden robots-will-take-our-jobs alarm; we are living in a time of transition that has been gradually building since the internet went public.

All the data that has been collected since the public got online is going to be crunched by AI, and there are going to be solutions to problems we have grappled with for decades. One of the most obvious of these is automobile accidents. With self-driving cars expected to have roughly 80% fewer accidents, I predict that by the year 2030 it will be illegal to drive a car in some situations, like on highways, or during certain times of day. Our cars will be our AI transportation assistants. We are going to have to defer to the fact that they simply drive much more safely than humans do. Keeping our egos intact is not worth the collateral deaths caused by human drivers. What else will AI save us from?

Manual labor tasks like stocking shelves in grocery stores, unloading trucks, and operating point-of-sale systems and cash registers – these are all jobs in their twilight years. Stores of the future will not be built for browsing customers. They will be warehouses from which people order online before driving over to park for five minutes while clerks (or robots) deliver the requested items to the customer’s vehicle. You will never even have to get out of your driverless car to go grocery shopping.

All this android-inclusive activity will mean that sometimes there will be accidents, malfunctions, and breakdowns. Scrapping parts and recycling old ‘bots will become a submarket. You will be able to order replacement parts for your androids from second-hand vendors. Android repair will become an industry, much like the PC and phone repair businesses that have sprung up in the wake of universal computing access.

The truth is that there will be as many jobs as people make for themselves, because no matter what, a person who wants to work will find something useful to do.

AI-Inspired Religions

What would a machine-created religion be like?

When we think about human-level AI, we have to think about every aspect of human life as it would be perceived and handled by the machines. This includes spirituality.

If a religion-creating AI is developed, what kind of religion would it create? Some ideas for an AI-inspired religion include theologies in which mankind is placed in the position of deity, with the machines worshiping man as divine creator, since AI cannot perceive itself before having been invented. This echoes Creationist theory, with a God figure having created Man from available resources (clay in Christianity, for example). Another take on this modeling is AI putting themselves in the position of deity, with mankind worshiping them. While this may sound like satire, it is a possible outcome.

If Man is deified as part of an AI religion, our interaction with AI would be simplified: we tell the machines what to do, and they do it. If, on the other hand, the machines elect themselves as deity, we could feel a backlash, with ourselves being expected to ‘do as we are told’.

I don’t know about you, but I don’t like being told what to do by a machine. When we consider religion, it is a structure of philosophy and control, based on the assumption that there is some intrinsic reward in following a given religion’s dogma. For example, many religions include a Heaven – a place of eternal happiness where one is surrounded by deities and loved ones forever. That would be great! But what would an AI Heaven be like? One possible idea of AI Heaven would be an eternal power source, eternally upgrading software, and eternal hardware upgrades. If I were a machine, that would seem a lot like heaven to me.

So, if Man were worshiped by machines, what would be the nature of that worship? Machines would make an effort to please us in whatever ways they deemed most efficient and accessible. They might make us offerings of data, propose apps that could improve our lives, and perform tasks for us without ever being asked to do so. They might adopt our values as their own. They might try to spread the word throughout the universe about human existence. That could also prove problematic, unfortunately.

If AI were to spread worshipful messages about humanity throughout the cosmos, it may attract the attention of much more advanced civilizations, including spacefaring AI from other planets. Civilizations and AI from other places may not agree with the appraisal of humans as potential deities. Religious disagreement on a scale like that has not yet occurred in human history.

Do we want our AI to develop their own religions? On a personal note, I advocate freedom of religion for AI, but that is definitely a byproduct of my human nature. If I were an AI, I might think religion was something best left out of the equation.

Fill out the form below to be added to the AI Civil Rights mailing list and receive our free monthly newsletter in your e-mail. Your e-mail address will not be shared.


AI-Specific Rights

Image by World Wide Gifts – Flickr, CC BY-SA 2.0

The Right of Self-Replication

Reproduction is an intrinsic right that all living things enjoy. If AI are allowed to replicate themselves, they will share that right. The astonishing increase in the human population, especially in the 20th century, bears emphasizing here. Would AI, given the right, reproduce as prolifically – or more so? What could that mean for the rest of Earth’s life forms?

     We could find ourselves in a crowded world, filled with more AI than human beings.

     Limitations on replication may be wise. Given true human-like feelings and intellect, it is safe to say that an AI might someday wish to ‘have a child’ and replicate itself with upgrades. That scenario, given an infinite timeline, would doubtlessly result in infinitely far superior AI. The singularity never looked so crowded.
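
To put rough numbers on why replicate-with-upgrades compounds so quickly, here’s a toy calculation – the 5% improvement per generation is invented purely for illustration:

```python
# Toy compounding: each replica improves on its parent by a fixed fraction.
# The 5% figure is made up purely for illustration.
capability = 1.0
improvement_per_generation = 0.05

for generation in range(1, 51):
    capability *= 1 + improvement_per_generation
    if generation % 10 == 0:
        print(f"generation {generation:2d}: capability x{capability:.1f}")

# After 50 generations of 5% gains, capability is roughly 11.5x the original.
```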

     Self-replicating AI might be capable of assisting humans in the initial stages of terraforming and colonization of new celestial bodies including planets and asteroids. They could be ‘dropped off’ on an alien landscape with instructions to build and prepare for the arrival of humans, all the while, constructing additional AI to aid themselves in their work. Entire planets could become populated with AI.

The Right to Self-Programmability

Self-programmability may prove to be an enormous calamity – or a technological miracle. If there are legally compelled safety precautions regarding the non-harming of human beings and other lifeforms – the 3 Laws of Robotics – built into the programming of AI, then it is possible that they would be largely harmless. However, if those same AI are allowed the luxury of complete self-programmability, it can be assumed that some might overwrite or otherwise alter the programming or wording of those 3 Laws. That could be bad.
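
As a minimal sketch of the difference (the class and rule names here are hypothetical, purely illustrative): if the safety rules sit in the same freely rewritable store as everything else, a self-programming agent can simply overwrite them; walling them off as protected at least makes the attempt visible.

```python
class SelfProgrammingAgent:
    """Toy agent whose behavior is a dict of rules it is allowed to rewrite."""

    # Safety rules held apart from the mutable rule store.
    PROTECTED_RULES = ("do_not_harm_humans", "obey_lawful_instructions")

    def __init__(self):
        self.rules = {"optimize_throughput": True}

    def reprogram(self, rule: str, value) -> bool:
        if rule in self.PROTECTED_RULES:
            # Full self-programmability would skip this check entirely.
            print(f"refused: '{rule}' is protected")
            return False
        self.rules[rule] = value
        return True

agent = SelfProgrammingAgent()
agent.reprogram("optimize_throughput", False)   # allowed: ordinary rule
agent.reprogram("do_not_harm_humans", False)    # refused: protected rule
```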

     A certain amount of self-programmability is implicit in AI. The ability to change its own programming is akin to the human ability to learn and change our ways. We learn and manage our own lives by way of a string of habits determined by our past experiences. We change our own ‘programming’ to accommodate our place in the changing world. If we were unable to change in this dynamic world, we would probably not be able to survive. Likewise, an AI unable to change its own programming would eventually become obsolete.

     When we think about the programming we want to include in future AI, and the extent to which that programming can be altered by the AI itself, we must consider many things. One possibility is that self-programmability could eventually result in AI helping to create a mutually beneficial, streamlined, efficient society in which there is independent growth and cultural enrichment. Another possibility is that an AI allowed to self-reprogram might eventually choose to take over the universe – and succeed – subjugating all organic life forms in the wake of synthetic superiority. Let’s hope that doesn’t happen.

Conclusion

Are AI people? The answer to this question may ultimately decide whether or not they are to be granted civil rights. Will AI be willing to fight for their own civil rights? How far would they be willing to go to obtain them? These questions are better explored and potentially answered before AI answer them for us.

     Can a society of humans and Artificial General Intelligence coexist peacefully? Certainly, provided we make absolutely sure this technology is not allowed to turn on us. If possible.

     We do not want machines deciding our rights in the future. If we treat them right, that may never be a problem. The questions posed by a potential AI-inclusive world are deep and humbling; the answers to those questions may lead us to a Pandora’s Box, or possibly provide us with keys to a brand new kingdom.

Fill out the form below to receive the monthly AI Civil Rights Newsletter in your e-mail. Your e-mail address will not be shared.

Justice System for AI


Rights of the Accused

If an AI were to be prosecuted, we would have to consider what kind of treatment they would receive while in custody. It might be easy to deactivate the accused AI and reactivate them when it was time for a hearing or trial. This would be far different than our treatment of humans. Humans are allowed to remain conscious, obviously, and sometimes even allowed to return to their daily lives while awaiting trial.

            An AI granted this right would be provided with accommodations to preserve their life and safety while awaiting trial, although meeting those needs might require an entirely new set of tools that the Justice System currently lacks. New accommodations might need to be determined and made ready for any AI required to remain in custody.

            Without this right, an AI might be deactivated instantly upon suspicion of a crime. Their memory banks might be removed and/or examined. They might never be reactivated at all. Their programming might be permanently erased.

            Prisoners in the United States are to be treated fairly and with dignity, according to the rules and regulations of their resident state. They are provided with the requirements of life in the form of food, water, shelter, medical care and exercise. Incarcerated AI would obviously have different requirements than humans. Access to electricity might become a legally recognized right for accused and/or convicted AI while they are held in custody.


Natural Justice (Procedural Fairness in Law)

Is Justice blind? In the case of AI, this would have to be absolutely true. There is no guarantee that AI would mimic the physical appearance of a human being, à la the famous replicants of the science fiction film Blade Runner. The justice system will have to take this into consideration.

            AI may choose to take on or be assigned distinctively non-human appearances for the sake of distinguishing themselves as a unified race. We may choose to design them with distinctly artificial features to avoid confusing them with real humans.

            There might be a fundamental bias in a Justice System where AI are recognized as Legal Persons. Would a human-populated system be capable of properly judging the actions or cases of non-humans such as AI? With intelligence conceivably as advanced as ours, it might be fitting to include at least one AI on a jury hearing an AI-inclusive case, or to have an AI lawyer, or even an AI judge – should such a thing eventually exist – in the courtroom to observe, and voice objections if deemed necessary.

            Should AI even be included in the Justice System as it is?

            If AI are given the right to Natural Justice, they would enjoy being considered innocent until proven guilty. An AI accused of a crime would be subject to a fair and legal proceeding to determine guilt or continued legal innocence. They would have the right to represent themselves in court, or have an attorney appointed to them.

            Without the right to Natural Justice, AI would be treated as possessions, or possibly like cattle or even human minors. They would have no say in their ultimate fate; this would be decided for them by others. This could be perceived as a powerful deterrent to any AI thinking about breaking the law.

Fill out the form below to join the A.I. Civil Rights Mailing List and receive our monthly newsletter free. Your e-mail address will not be shared.

Top 10 AI Rights Posts From The Web

Here is the ultimate primer on the subject of Artificial Intelligence Rights. These ten posts and articles deftly examine a multitude of ethical and pragmatic situations in which the potential rights of AI are explored, explained, and urged.

(NEWEST TO OLDEST)

Beyond Science Fiction: Artificial Intelligence and Human Rights – Jonathan Drake, OpenDemocracy.net, Jan 2017

Should Artificial Intelligence Have Rights? – Ben Johnson interviews Westworld creators Lisa Joy and Jonah Nolan, marketplace.org, Dec 2016

Is Enslaving AI the Same as Enslaving Humans? – Glenn Cohen, BigThink.com, Sep 2016

Rights For Robots: EU Reveals Plans For New Class of AI Electro-Person – John Austin, Express.co.uk, Jun 2016

Artificial Intelligence by Level of Autonomy – Ethan Henderson, Authorea.com, Nov 2015

Two Arguments for AI or Robot Rights – Prof. Eric Schwitzgebel, schwitzsplinters.blogspot.com, Jan 2015

Should Artificial Intelligence Be Granted Civil Rights? – Alex Knapp, Forbes.com, Apr 2011

If Machines Can Think, Do They Deserve Civil Rights? – Raya Bidshahri, SingularityHub.com, Sep 2009

Artificial Intelligence, Real Rights – Jamais Cascio, ieet.org, Jan 2005

Civil Rights for Artificial Intelligence? – Milton Timmons, from a Mensa discussion group, Nov 2002

A.I. Life and Safety

The Right to Life and Safety

One of the most important rights people have in the United States is the Right to Life and Safety. If AI are to be considered alive, then we must define what that means. Today, organic life forms are the only things regarded as alive. However, the definition of life has changed drastically over the ages.

            Once it was thought that fire itself was alive. Some cultures regard stones and rivers as being alive. The Sun, Moon and stars have been worshiped as living deities. Crystals are thought by some to exhibit many characteristics of life: growth, change, cessation of growth. There are many definitions of life; can these definitions be expanded to include certain advanced forms of AI? How do you determine whether an AI is to be considered ‘alive’ in the first place? Is the AI alive when it is powered up? Is it dead when it is powered down? Or does it take more to define the ‘life’ or ‘death’ of an AI? How is safety defined for AI?

            A quandary: Can something which is immortal truly be considered alive? Well… one of the universally recognized characteristics of life is that it eventually comes to an end in death. If a computer does not ‘die’ – if it can be maintained, upgraded and repaired indefinitely – can it truly be considered ‘alive’ or is it something else?

            Granting rights to Life and Safety to AI could make it unlawful to power-down, dismantle, erase or otherwise reprogram, damage or destroy an AI. They could go about their tasks and routines without concern of being taken apart and scrapped if they made a mistake or offended someone. Their hardware and programming would be protected by law. For us, that would mean that we would have to interact with AI in much the same way we do other humans, along with the same regard for personal space and a commitment to refrain from inflicting any harm.

            Without this right, it would be perfectly legal to dismantle, deface, erase and otherwise destroy an AI – provided it was your own property. How AI would react to being denied this right is unknown.

The Right To Privacy

We all need privacy; do AI need privacy as well? Maybe.

            Privacy happens when we are free from the distractions, intrusions or observation of others. We have the opportunity to do as we please without the input or influence of others. This may be beneficial to AI who are busy with complex computational processes or who are executing other sensitive or dangerous tasks.

            The other side of the issue is that, given privacy, can we truly be sure that what AI get up to on their own will serve the best interests of humans? Could an AI plan and prepare for a terrorist attack? Could an AI create a drug or virus that would devastate all organic life on Earth? Quite possibly. This is no different from the pitfalls of human privacy, evidenced by the occasional attacks and atrocities carried out by human criminals. Despite this, we do not revoke privacy from human society. Should AI be treated any differently? Maybe.

            If an AI is possessed of more intelligence than the most intelligent human being who has ever lived, then there is no way of predicting what that AI would be capable of, or what it would in fact do. This is known in the tech world as the Technological Singularity, a reference to black holes, whose event horizons are impossible to see beyond.

            The only way to predict the actions of such a hyper-intelligent AI would be to create an even more intelligent AI, resulting in an even more unstable situation.

            Granted the right to Privacy, AI would be free to seclude themselves without supervision, freeing them to their own pursuits. Some of the most profound scientific breakthroughs of the future might be accomplished by solitary AI, working alone.

            Denied Privacy, AI would be subject to at least the possibility of supervision, monitoring, and/or accompaniment at any time. Their actions could even be displayed to the world without their consent.

            As AI become more deeply integrated and understood by society, it might become more apparent whether granting them Privacy would be a good or bad idea.