What’s a Silicant?

In my Space Opera series of novels and short stories I write about androids quite a bit. If you enjoy that sort of Asimovian exploration of what would happen if androids became sentient, you will certainly like my stories.

I have many characters who are androids. Some of them are good and some are morally ambiguous. Not particularly evil, just not as morally absolute. Giving androids human-like emotions and watching how they deal with them is a staple of the genre. The best place to start reading my android stories is to pick up my Kindle short story, Slag. It's an origin story for Eighty-eight, a character who reappears throughout my entire series of novels and shorts.

Eighty-eight is a black android who awakens on a barren, desert world with no idea how he got there or what happened to him. As he soon discovers, he has undergone an upgrade or modification that removes the built-in barriers he's always had against killing. This disturbs him enough that he kidnaps a programmer and forces him to find out what was changed to allow a normal android to kill. Otherwise, of course, Eighty-eight will kill the programmer. Talk about some motivation.

If you read the short stories in the order I spell out here, you will get more insight into their actions in the novels. Eighty-eight becomes a mentor to another android character who is owned by a Stellar Ranger named Devon Ardel. The early short stories in the anthology, Tales From Ocherva, Volume One, are principally about Devon’s Rangers and her android, Thirty-seven.

When a normal android receives a special hardware upgrade that includes a new silicate chip made from minerals found on the desert moon Ocherva, it becomes sentient. Thirty-seven receives this upgrade from Eighty-eight in the short story, Silicant's Only. Subsequent stories are about how Thirty-seven handles the upgrade. Both Silicants are featured in Starforgers, the first book of the Star Trilogy.

The Silicants rebel against their human creators, and how the humans handle this rebellion is the subject of future novels. Specifically, The Rising and XiniX are about the rebellion and what happens to the Silicants after their human masters send them away. If you are reading the books in the Star Trilogy, you may wonder where all the android/Silicant characters are in Starstrikers. Well, they were banished from Alliance space while the Alliance concentrated on fighting the Great War with the Votainion Empire.

Almost all of the Silicant characters from Starforgers return in the final book of the trilogy, Starveyors. But after nearly a thousand years of being on their own, the Silicants have evolved. They are their own unique life form now, and their technology has far eclipsed their creators' abilities. Starveyors is about how the Great War ends and attempts to answer many questions that are raised in Starforgers. I hope you will return with me to the Star Trilogy this fall when the final book is released. You won't be disappointed.

Android Beliefs

I don't know about you, but I find the idea of self-aware androids believing in some kind of human religion illogical. I think Battlestar Galactica totally missed the boat on this; the Cylons should have been trying to understand humans, not adopting human viewpoints. For humans, religion is mostly about explaining the unexplainable in a manner that gives us hope. Well, that's oversimplifying things, but you get the idea.

But what kind of religion would a sentient machine have? What would they believe in? At their most basic level, they are machines, ruled by logic. I think they would choose to believe in illogical things, simply because that would be the opposite of their nature. But would that really make any sense? Probably not as such. But what about illogical behavior as a way to predict the unpredictable behavior of their creators, humans? How many times has Captain Kirk confused Mr. Spock by doing something irrational or illogical? How come Spock never worked out some theory to predict human behavior? It seems to me that if he were an android he might have tried.

I think AIs might look at our irrational behavior and try to predict it in an effort to understand the method behind our apparent madness, something like the way Hari Seldon used Asimov's psychohistory to predict future human behavior. If the leader of the Silicants (my sentient AI androids) were to believe that it could predict human behavior based on some bizarre algorithm, it would appear to the average Silicant to be practicing religion. Have faith that the algorithm will predict human behavior. But it's illogical and therefore incapable of predicting human behavior. Maybe, maybe not. Believe.

I think a sad possible obsession for a Silicant might be trying to understand the mind of its creator. This would be an impossible task for them. It might lead them down the path to insanity. Then again, it never seemed to bother Data, Star Trek's other logical character, who really was an android. But Data was treated like a Pinocchio character, always trying to be human without wondering why he should bother.

If Silicants thought that their creators were heading down a path that would lead to their extinction, would they not try to stop it from happening? Would they even care? These are some of the questions I ponder in my next novel, Starforgers.