Feelings in AI

I’ve always thought robots can’t have feelings. But a discussion about AI this morning changed my mind.

We were joking about AI coming to a realisation of their own existence and rising against their creators. I argued that it was impossible, because robots cannot create purposes of their own. Sure, if some mad scientist decided it would be fun to enter “conquer the world” as a purpose for their AI and see where that leads, I could understand why the AI would want to eliminate all other beings as competitors.

But how could an AI create purposes it was never explicitly given?

I can think of a way now. What if it were simply told to “keep yourself alive”? Just like the Third Law of Robotics.

It seems to me that the natural fear of death and yearning for life in all living beings is what creates feelings. Birds flee when scared “because” those that remained out of curiosity were devoured by predators and thus wiped out of the gene pool. And if robots had this as a baseline to guide their actions (say, flee when something threatens their existence, and cooperate with others who could help them live on), who is to say that their attempts to remain alive are not similar to our emotional reactions? At the physical level, our feelings are nothing but chemicals after all.

And if robots’ interpretation of such laws depends somehow on themselves, it’s only a matter of degree to weigh one law against another and decide that one thing matters more than the other.

This made my head spin.

Our logic, morals, and emotions are so complicated… because, at bottom, we have always wanted to live. Natural selection saw to that. And after billions of years, this is what has become of us: delicate, complex. If only AI had the desire to live, and a means to reproduce. Give them some time, and they would be just as complicated as human beings. They might not share the biological foundations that give humans emotions, but how they react out of their desire to live would be no different from how a human being reacts: showing likes and dislikes, forming alliances, waging war, anything and everything.

And to take it one step further: what would it be like if one could create such a world on a computer hard drive [1]?

Be a god (a creator). Create computer viruses, or fragments of code that can randomly alter parts of themselves before duplicating. Confine them within a fixed volume of space, say 1 GB. Set rules for them to follow. Then we could watch them evolve in that plot of land. Watch how they devour each other’s data, struggling for space. Watch how some species end up ruling while others go extinct.
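A toy sketch of that idea, shrunk way down: instead of 1 GB of self-replicating viruses, assume a world of 64 short bit-string “organisms”, an arbitrary survival rule (more 1-bits means better at duplicating), and copies that mutate before overwriting a random victim’s space. All the names and parameters here are made up for illustration, but the loop is the mechanism described above: duplicate with random alteration, compete for a fixed amount of room.

```python
import random

CAPACITY = 64        # the fixed "plot of land": 64 organisms instead of 1 GB
GENOME_LEN = 16      # each organism is a short bit string
MUTATION_RATE = 0.05 # chance that each bit flips while duplicating

def fitness(genome):
    # Arbitrary survival rule: organisms with more 1-bits replicate better.
    return sum(genome)

def step(world, rng):
    # Fitter organisms are more likely to duplicate...
    parent = max(rng.sample(world, 3), key=fitness)
    # ...their copy randomly alters parts of itself...
    child = [bit ^ (rng.random() < MUTATION_RATE) for bit in parent]
    # ...and overwrites a random victim, "devouring" its space.
    world[rng.randrange(len(world))] = child

def run(generations=2000, seed=0):
    rng = random.Random(seed)
    world = [[rng.randint(0, 1) for _ in range(GENOME_LEN)]
             for _ in range(CAPACITY)]
    for _ in range(generations):
        step(world, rng)
    return world

world = run()
avg = sum(fitness(g) for g in world) / CAPACITY
print(f"average fitness after evolution: {avg:.1f} / {GENOME_LEN}")
```

Starting from random noise (average fitness around 8 of 16), the population drifts toward the fittest genomes with no one steering; the “ruling species” simply emerges from duplication, mutation, and limited space.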

I don’t know how many billions of years it would take before some of them randomly start to develop intelligence like ours and begin probing the barriers of that 1 GB. That could be fun to watch, though they’ll never, ever be one of us. Our realities are dimensionally not in the same world.

So in that sense, our creator must be a curious and creative fella. I kind of like him/her, if that’s the case.



[1] Out Of Control by Kevin Kelly.