
“The defense would like to call Helen Meers, CEO and founder of Innovations by Meers, creator and distributor of Nanny-Bot, to the stand.”
“Ms. Meers, you founded your company when you were just 26 years old, is that correct?”
“Yes, in my parents’ garage.”
“What was that first product you made in your parents’ garage?”
“It was a robot-dog of sorts. A very crude prototype.”
“What could this robot-dog do?”
“Well, the original was operated by a remote with a list of ten commands, like sit, come, bark, roll over, things like that. As it was developed further, the remote turned into verbal commands that the dog could understand and obey, and the number of understandable commands increased to over a hundred.”
“That wasn’t the end of development though, was it?”
“Oh, certainly not. The model we eventually marketed and put up for sale—we called it Widget—was able to recognize human faces and voices, learn new commands if taught with the correct syntax, things like that. Its movement was also much more lifelike.”
“Could you tell us why you sought to make a product like Widget?”
“Objection. Relevance?”
“Overruled. Continue, Counselor. Answer the question, Ms. Meers.”
“Well, growing up, I had a dog that I absolutely adored. I had tons of friends over the years that loved her, too, but they couldn’t have their own dog for whatever reason. Allergies, parents didn’t want the mess, whatever. I saw the joy that my dog brought others, and I always wished they could have that joy. Then I got older and discovered the benefits of service animals and emotional support animals, and I was reminded of all those friends that just couldn’t have dogs. I thought about all the people in the world that would benefit from a bond with an animal, but for whatever reason they couldn’t take care of one. At first, robot dogs were just a silly idea in my head, and someone dared me to do it, so I did. And then it grew into Widget.”
“And how did the public receive Widget?”
“Oh, they loved him. He was more popular than I ever could have imagined. My entire company was kickstarted by his success.”
“But isn’t it true that after four years, Widget’s sales started to go down, and they never picked back up again?”
“Yes, that is true.”
“And Widget ceased to be produced after six years?”
“Yes, that is also true.”
“Do you know why sales dropped so dramatically that you literally had to stop producing the product?”
“Well, there were a number of reasons, the largest being that Widget was so expensive. I tried my best to make it affordable for the masses, because that was the whole point: to include everyone who couldn’t have a real dog. But the technology involved was just too expensive.”
“Well, Ms. Meers, forgive me for saying so, but there are always people with money. If cost were the only issue, you would still have buyers somewhere out there. Is there another reason?”
“Well, Widget didn’t work out the way I’d originally hoped.”
“Meaning?”
“He didn’t have the…therapeutic effect I had hoped he would. He was great fun for children and adult ‘geeks,’ if you will, but he just wasn’t well received by animal therapy programs. Some of them tried him, but he never produced results that they were happy with.”
“And why do you think that is?”
“Well…I don’t know…at the time we were putting more research into different products, and Widget just became a thing of the past…”
“Would you agree with me if I said that perhaps it was simply because Widget is not a real dog?”
“Objection! Your Honor, how is this relevant to the case?”
“Counselor?”
“I’m nearly there, Your Honor.”
“Get to the point. Quickly.”
*silence*
“Ms. Meers?”
“I’m sorry, could you repeat the question?”
“Would you say that the reason Widget did not produce successful therapeutic results is—perhaps—because he’s not a real dog?”
*nervous silence*
“Perhaps, yes.”
“Thank you, Ms. Meers. Now, how long after the rise and fall of Widget did you start work on Nanny-Bot?”
“I was well into my thirties. Maybe ten years after we took Widget off the shelves.”
“And what made you want to create something like Nanny-Bot?”
“Well, after Widget, I realized that robotics was an elite brand. It’s unfortunate, but that’s just the nature of the business. Maybe someday that kind of technology will be cheaper to produce and therefore more affordable to the masses, but that hasn’t happened yet. So I had to come up with a big product that really appealed to the upper class. That was the niche I was going for. Robot vacuums already existed; so did robot butlers and robot maids. But all of those things were so…impersonal. So automatic, easy to program. When I thought back to Widget, I thought how close we’d gotten to real human-robot connection, real companionship. Something was missing with Widget, which is why he stopped selling. And that’s when it hit me. The answer was robot companionship. But it wasn’t enough to just sell someone a ‘friend’. I had to make a product that would really take a burden off the family’s shoulders while still maintaining relationships. And Nanny-Bot was born.”
“So, Nanny-Bot is able to recognize faces and voices, follow commands, learn new ones, form relationships, etcetera. Very similar to Widget. Is that correct?”
“Yes, but the relationships were much deeper and much more profound. We took facial recognition to a new level with Nanny-Bot. She’s able to recognize the hierarchy of the family. Which face is the mother, which face is the father, which child is older, younger. She can therefore remember who she should be taking orders from, and who she should be giving orders to. She can also store the needs of each individual child in her care. One could argue that she remembers these needs better than a human nanny. She even has a feature to protect children from allergies. She can detect allergens immediately and take preventative action. If a human nanny has five charges, each with different needs and allergies, she may not realize a five-year-old is eating peanut butter until it’s too late. Nanny-Bot can expertly prevent a disaster like that. She remembers everything.”
“Very impressive. In comparison to Widget, how well has Nanny-Bot sold?”
“Oh, Nanny-Bot is exponentially more successful. People feel much more connected to her than they ever did with Widget. Parents just rave about her. And the children love her. That’s the most important thing. When creating her, it wasn’t enough to have a robot to keep the kids out of your hair. That would just be the same as locking them in a room with the TV. Nanny-Bot stimulates their brains, nurtures them, makes them laugh, engages them in intelligent conversation, teaches them lessons. And she’s excellent at keeping them safe.”
“So parents love her, children love her, she sells well. Wonderful. And she went through multiple safety tests, is that correct?”
“Oh, yes, hundreds upon hundreds. The product on the shelves now took years and years to perfect. But she is one hundred percent safe. Guaranteed.”
“Great. So Nanny-Bot is certifiably safe to be around children. She is guaranteed to not cause them any physical harm. Is that correct?”
“Yes.”
“Alright. But is Nanny-Bot guaranteed to not cause any emotional, psychological, or mental harm?”
“Nanny-Bot only disciplines a child as far as a parent will allow her to go. She is intelligent enough to recognize harmful words that can hurt a child’s feelings or damage their morale. She will never, ever physically discipline a child. If a parent asks Nanny-Bot to inflict physical abuse on a child, she does not obey, and she has been programmed to call the authorities if the parent enacts said abuse on their own.”
“Is it true that Nanny-Bot was studied by child psychologists?”
“Yes. They used her in several experimental forms to see how she would react in different situations, and how children would react. The children always left the experiments happy and healthy.”
“So, do you and your company guarantee that Nanny-Bot does not cause any emotional, psychological, or mental harm to the children she cares for?”
“Yes. We do.”
“Thank you. These experimental studies, as you call them, were any of them long-term?”
“Meaning?”
“Let me rephrase: Once you released the children from the experiments, did you ever bring back those same children?”
“Oh, no. We wanted to get as diverse a pool as possible. Every child is unique. We wanted children of all races, ethnicities, abilities, and we wanted to see how they all reacted to Nanny-Bot.”
“I see. So your guarantee of mental, emotional, and psychological safety is based on several sessions, each no more than a few hours, each with different children? Children that never had any interaction with Nanny-Bot again?”
“Not several. Hundreds upon hundreds of studies.”
“That doesn’t answer my question.”
“Can you rephrase the question?”
“Certainly. Is it true that Nanny-Bot is guaranteed to be safe to a child’s mental health, and is it true that this guarantee is based on individual, brief studies with random children each time?”
“Yes. Hundreds of individual studies.”
“Thank you. Do you know the definition of a long-term experiment, Ms. Meers?”
“I’m sure I do.”
“For anyone on the jury who does not, a long-term experiment can be defined as an experimental procedure that runs over a long period of time, in order to test a hypothesis or observe a phenomenon that takes place at an extremely slow rate. Now, Ms. Meers, would you consider the growth and development of a child to be a phenomenon that takes place at an extremely slow rate?”
“Well, yes, of course.”
“So, being that none of the experiments you ran on Nanny-Bot involving real-life children followed the growth and development of a child alongside Nanny-Bot, you can agree that none of the experiments you ran on Nanny-Bot involving real-life children were long-term?”
“Well, yes. I can agree with that. A study like that would have taken…well, eighteen years. That’s an enormous amount of time and commitment for the family involved.”
“So, is it true that you guaranteed the mental safety of children under Nanny-Bot’s care without actually testing the long-term effects she would have on their mental, psychological, and emotional health?”
*long pause*
“Yes.”
“Thank you. Ms. Meers, do you personally know the defendant, Jaxon Gilcrest?”
“No, I don’t.”
“Are you aware that his parents own a Nanny-Bot, and that he was an only child raised by a Nanny-Bot?”
“I am now.”
“And are you aware that Mr. Gilcrest is on trial for the murder of his girlfriend, Courtney Halliday?”
“Yes, I am.”
“Are you aware that Mr. Gilcrest is a diagnosed sociopath?”
“Not until you said it.”
“Do you know how one becomes sociopathic?”
“No, I don’t.”
“Gary Olsen, Doctor of Psychology, explained this to the jury yesterday in court when Ms. Meers was not present. The jury should remember, but I will summarize for Ms. Meers. Mr. Gilcrest’s crime was impulsive, not planned. He simply beat his girlfriend to death with a frying pan because he was tired of listening to her voice. This is textbook sociopathy. Psychopathy involves calculated and premeditated crimes. Both psychopaths and sociopaths show no remorse for wrongdoings; they’re manipulative and can’t understand human emotion. The largest difference is that psychopaths are born, sociopaths are made.
“To quote Doctor Olsen from my notes: ‘Psychologists use the term psychopathy to illustrate that the cause of ASPD is hereditary. Sociopathy describes behaviors that are the result of a brain injury, or abuse or neglect in childhood.’ Would you say that Mr. Gilcrest was abused or neglected in his childhood?”
“Well, I…surely I don’t know…”
“Let me rephrase. If Mr. Gilcrest was raised by Cora, their Nanny-Bot, is it possible that he was abused or neglected in his childhood?”
“Absolutely not. Nanny-Bot would never harm a child.”
“I’m going to ask that question again, but before I do, let me ask you another question. Do you know what the amygdala is?”
“It’s a…part of the brain, isn’t it?”
“That is correct. In a normally functioning brain, the amygdala is responsible for emotional responses like fear, guilt, love and desire. Do you know what the prefrontal cortex is?”
“Another part of the brain.”
“Yes. The prefrontal cortex is responsible for empathy and guilt. According to Doctor Olsen’s testimony, studies have shown that in the brains of those with antisocial personality disorder, meaning psychopaths and sociopaths, there is a disconnect between the prefrontal cortex and the amygdala. Sociopathy arises when there is damage to either one or both of those areas, or if they aren’t allowed to fully develop in crucial years due to abuse or neglect.
“Now, in the case of Jaxon Gilcrest, this disconnect has given him a disturbing lack of remorse and empathy. Essentially, according to Doctor Olsen, he is unable to love. Would you agree then, Ms. Meers, that without an amygdala, one cannot love?”
“Well, it certainly seems that way.”
“Now back to the issue of neglect. Do you know the psychological definition of neglect, particularly emotional?”
“No, I don’t.”
“Once again, according to Doctor Olsen: ‘A parent emotionally neglects a child when the parent fails to show the child the level of affection or attention that, as a parent, they should.’ Do you consider it emotional neglect to leave one’s children in the care of a nanny?”
“Surely not. There are certainly cases of neglectful or abusive nannies, and there are certainly cases of parents who want nothing to do with their children and leave them strictly in the care of the nanny. But to me, as long as the nanny is fulfilling her duty as the child’s caretaker in both physical and emotional aspects, then no crime is taking place.”
“Thank you. Let the record show that I am not accusing Mr. and Mrs. Gilcrest of emotionally neglecting their son. I am in no way suggesting that nannying is inherently abusive or neglectful. What I am getting at is that there is something missing in the connection between artificial intelligence and developing children that leads to the underdevelopment of crucial parts of the brain. According to Doctor Olsen, a child needs love and affection in order to develop properly. Lack of love and affection during the crucial years of early development is the leading cause of sociopathy. Knowing all of this information, I ask you to answer again: If Mr. Gilcrest was raised by Cora, their Nanny-Bot, is it possible that he was abused or neglected in his childhood?”
“Nanny-Bot would never — ”
“I’m going to ask you that question one more time, Ms. Meers, but let me ask you one more question. Does artificial intelligence, such as Nanny-Bot, have an amygdala?”
“Well…no. Of course not. AI don’t have brains in the literal sense. They can learn and store information like a human brain, they can process the same five senses that humans can, but they can’t process anything in the emotional sense.”
“Things like love, affection, empathy?”
“Yes. I suppose.”
“Ms. Meers, you’ve just answered my question without me even asking!”
“Objection!”
“Sustained. Watch it, Counselor.”
“I’ll rephrase. Knowing as we do that AI do not possess the parts of the brain that allow them to love, much like a sociopath, is it possible that Mr. Gilcrest was not provided with adequate emotional stimulation? That he was not given a chance to develop properly and fully?”
“Nanny-Bot underwent hundreds of tests! What exactly are you accusing me of?”
“I’ll rephrase again. Is it possible that Nanny-Bot unintentionally inflicted psychological damage upon Jaxon Gilcrest, simply because her creators had no idea what long-term effects she would have on developing children?”
*long pause*
“Yes. It’s possible.”
“Thank you. Now to come full circle, Widget did not have the therapeutic effect you’d hoped he would. Can you remind the jury why that is?”
“Objection. Leading the witness.”
“I’ll rephrase. Do you recall agreeing with me in saying that the reason Widget did not have the therapeutic effect you’d hoped he would is simply because he was not a real dog?”
“Yes.”
“Then, is it possible that Nanny-Bot, with her lack of ability to love, is not fit to raise human children without there being adverse effects, simply because she is not a real human being?”
“Nanny-Bot passed every inspection. Every parent and child that ever interacted with her loved her.”
“Yes or no question, Ms. Meers.”
“Nanny-Bot is perfectly fit to raise children.”
“That was not the question. I asked if it was possible that she was not fit to raise human children, being that she is not a real human being, the same way that Widget is not a real dog. So, is it possible?”
*silence*
“Answer the question, Ms. Meers.”
“No. What you are insinuating is not true. Nanny-Bot could never, ever hurt a child. I would never create a product that would harm anyone!”
“But isn’t it possible that it could happen by mistake? Products for children get recalled all the time. I’m only asking you to acknowledge that it is possible Nanny-Bot was unintentionally created with faults!”
“No!”
“Ms. Meers, are you aware that you are not the one on trial?”
“Objection!”
“No harm comes from just admitting the truth!”
“Badgering!”
“You are the witness, not the defendant!”
“Counselor!”
“I have devoted my life to Nanny-Bot! I won’t allow some greasy lawyer trying to let a psycho run free to destroy her and my name!”
“Thank you, Ms. Meers. No further questions.”
AUTHOR BIO
Michaela Catapano is earning a BFA in Musical Theatre and a minor in Creative Writing from Wilkes University. She has appeared in The Crucible as Mary Warren and A Chorus Line as Maggie.