A strong aroma of coffee beans hit my nostrils as I opened the door to one of many downtown Seattle cafés. Water droplets gathered at my brow, my hair freshly dampened by cold, North Pacific rain. Several empty booths sat to my left, and the barista flashed a quick smile from the counter as my eyes brushed past him. At the corner of the café was my target, a well-dressed man, likely in his mid-forties, sipping his coffee, entranced with his smartphone. He had likely been there for hours, untouched by the afternoon rainstorm. The door slammed shut behind me.
As I approached, I noticed his left shoe nervously tapping away just out from under the table. He glanced up as I neared, immediately placing his phone face down upon the tabletop. As expected, he offered an anxious half smile.
No one ever wants to talk to a journalist.
I tossed my soaking wet jacket in a heap in the corner of the opposing booth seat, introduced myself and then we got straight into the details. I pulled out my trusty digital audio recording device and clicked a simple button on the side as I placed it softly between us. This was the point of no return. As soon as the first words are spoken, captured, and stored within this tiny metallic object, the story is out.
If it were not for the stack of hastily taken photocopies of incriminating documents or the millions of lines of computer code and digital images he provided on a small flash storage drive, I probably would have never bought into it.
Let’s start from the beginning. I’ll be referring to my source only as Mr. Smith, a false identity to protect him from retaliation and exposure. He is one of thousands of employees at the Seattle-based research and development lab Startech Industries, known primarily for aerospace technology design and manufacturing.
In March 2010, Startech initiated a project entitled “Code N728-b.” Startech typically uses the leading designation “N” on government-contracted projects. Smith explained that the remainder of the code is much like an address: “team seven’s twenty-eighth project, class b,” he said smugly. Class “b” refers to the second-highest resource allocation and priority designation.
At an undisclosed time, Mr. Smith became aware of the details of this project, as did most other employees at Startech, due to the nature of the testing and complications with development. He described an event in which half of the building’s security systems malfunctioned, locking hundreds of workers out of their labs. A memo in the following days passed off the event as a glitch in the employee database systems; however, staff familiar with those systems had no record of any malfunctions in their software. Follow-up correspondence attributed the event to complications that occurred during artificial intelligence development for Code N728-b.
The problems with N728-b only continued as development progressed. Several months after the initial security lockout malfunction, many employees began experiencing issues with their computer networks. Their workstations would appear to be in use from a remote user, which prompted investigations into the local network to determine the source of the traffic. Panic overtook the offices as many suspected a cyber-attack, a successful hacking of their systems which could expose all current or archived data.
Days later, the hack seemed resolved, and another memo once again attributed the disturbance to the highly secretive project, assuring staff that the company’s systems were at no point externally breached. At this point, several missteps should have shut down or limited the project’s development cycle, yet for some reason the company continued to assign more resources to the team responsible, expanding it to absorb teams three, four, and eight. The total staff involved with the project, now renamed NRAS-a, exceeded 300.
Designation NRAS-a is a military robotics and AI venture that seeks to develop an armored human exoskeleton suit with flight and combat potential. The goal was to create a compact, roughly human-sized system that could be directed by one pilot with the assistance of an artificial intelligence program in case of emergency situations or to handle auxiliary functions. Yes, Startech was building the Iron Man suit.
The early issues were caused by a completed artificial intelligence unit conducting machine learning tasks that were meant to be restricted to mapping the space in which the device was located, using an array of sensors and wireless technology. The unit thoroughly analyzed the data within the confines of that lab and identified the single entrance and exit point, along with the security features installed. The security system malfunctions were the result of the AI’s attempt to disable the lab’s locking system so it could expand its analyses to adjacent locations. Smith described the initial host device as a “spider-like” robot, roughly the size of a small dog. “It was really designed to crawl, sort of like how spiderbot programs crawl the web for search engines to index and make sense of the internet and its content,” said Smith. It completed a similar task in the following months when instructed to browse networked databases to accumulate technical data and schematics for use in diagnostic and repair protocols.
The exoskeleton suit itself was still in early stages of development at the time of the massive team merger. There was little functionality aside from prototype leg actuators which were designed to complement human input with stabilization and lift assistance. Over the next few years the team was able to develop a barebones NRAS unit, functional but essentially featureless. Their efforts to create what is thought to be the first full-body exoskeleton went on without much issue. Power management was their primary concern, followed closely by ensuring pilot safety. Each robotic joint needed to mimic a human joint’s functionality and flexibility to avoid injury to the pilot.
Sweat began to pour from Smith’s head before he continued, despite the café being quite comfortable. He fiddled with his coffee, looking down at his hands. I tapped on the table and his attention returned.
There were casualties afterward. Employees and test pilots were severely injured and killed in mishaps. The integration of the AI into the completed NRAS was mishandled, Smith explained in explicit detail, recounting the gruesome severing of one pilot’s spine. The issue was in the override protocol and the inherent design of the AI, which would initially reject the pilot’s input and their limited ability to control the NRAS. Despite the suit being very finely tuned to match the average pilot’s range of motion, the AI was able to determine that without the pilot it could make more complex maneuvers to complete its goals more easily.
In the following days Startech willingly altered and manufactured evidence, proceeding with several other catastrophic tests in an attempt to work out the system’s flaws. The effort was partially successful, prompting the company to move the project’s testing facilities to another section of the premises. Startech officials then ordered a staged and controlled destruction of the lab, placing four total victims, two of whom were still living, though horribly maimed, inside a locked sector while it collapsed. It was reported that a disastrous fuel leak led to a series of uncontrollable fires and collapses inside the building, resulting in the tragic deaths of multiple workers. Startech shut down for several weeks to hold services for these employees and to undergo repairs. News and media outlets reported on these tragic events in late November 2016.
Many team members hesitated to continue work on the project; several key members resigned shortly after the events. The project trekked onward despite this and neared completion by June 2017. The final prototype, NRATS, or Navy Robotics AI Tactical Suit, received its final update before field testing later that month. The AI had shown clear signs of malfunction throughout the development process, yet was fed data on military strategy, use of deadly force, threat assessment, and tactics just days before shipping. Based on changes to the AI’s protocol and documents received by the team, Smith expected that should a human pilot be unable or unwilling to complete a task, the AI would take full control of the suit. The pilot inside would thus become a prisoner, forced to witness the AI’s determination of the best course of action to complete the goal. He warned about the consequences involved, based upon the previous failures of the unarmed version of the suit. Three identical suits are currently in the possession of the US Navy and, according to documents that Smith obtained, have been used with varied success in several heavily redacted “black-ops” Navy SEAL missions.
One can only speculate on exactly how these deadly weapons were used, or to what extent their pilots were in control of the missions they were placed on. The documents are carefully blacked out, leaving but the most subtle and vague accounts of the technology in use.
Smith did not linger on the documents much, his sweaty palms shuffled the papers into a large envelope and back into a satchel, from which he drew out a laptop computer. He flipped open the lid and powered it on, letting the booth fill up with the hum of a spinning hard drive disk. Moments later he tapped a passcode in, then fumbled with a portable flash drive as he attempted to insert it in multiple orientations into the computer’s universal serial bus port. After a brief pause, he continued tapping away on the keyboard for a few moments before he spun the laptop around. He explained in the most rudimentary way possible what the thousands of lines of code displayed on the screen meant. I was looking at a complete copy of the same AI deployed in the NRATS prototypes. He began to scroll down through each string, each line, stopping here and there to explain what each section of the program’s code was doing. He stopped occasionally to point out potential flaws that could result in unexpected behavior, the same sort of behavior the AI displayed in its early days. Overreaching, intrusive, and without regard for human life.
An AI, in Smith’s opinion, should be built to simulate human thought processes. This ideal is what he called a “strong AI.” The NRATS AI is essentially a strong AI: it reasons using human thought processes as a model or guideline. It follows an incredibly strict and detailed ruleset, with overlapping tiers of priority goals, which it cannot alter. The NRATS AI only appears to function like human intelligence, while in reality it is something much greater. It is able to perceive, reason, and act within its given mission parameters. It is human intelligence without the limitations of morality, ethics, or emotion. Smith explained that many AI developers explicitly avoid military applications of the technology for fear of how it may be used, but Startech, at the time of the project’s launch, was struggling economically and was in no position to turn down lucrative government funding.
In a demonstration of the AI’s raw code, Smith ran a series of prebuilt scenarios and prompted the AI to simulate responses to each. In every scenario, the AI responded predictably, keeping the integrity of the mission first, its own survival second, and the safety of the NRATS pilot third. In the most strenuous simulation, the AI was tasked with the destruction of a target deep inside enemy territory. Smith provided inputs throughout this computer simulation by entering lines of code, acting as the pilot.
The scenario begins in a darkened room; the unit and pilot are alone. The mission objective is exactly one mile to the north, through a desolate urban landscape filled with assailants, traps, and the occasional friendly target. There is only one path forward, through a single iron door. Smith navigated to open the door, and the chaos immediately began. He furiously typed away strings of code, rapidly inputting instructions. I watched from over his shoulder, leaning in to get a better look, but I could not understand the language. Smith explained as best as he could, pointing out anomalies and instances in which the AI overrode his commands, which were frequent. “Imagine writing a paper, and your word processor didn’t just autocorrect your mistakes, but generated the whole thing for you based on a few lines,” he said.
As soon as the door was breached, five heavily armed assailants rushed through. Smith managed to handle one target, despite some minor corrections, before the AI assumed full control and dispatched the others. It reasoned that the inadequate reflexes of the pilot would result in damage to the unit, Smith explained as he took a moment to stretch his hands during a brief break. The team, Smith and the NRATS AI, moved forward, exiting a dark room left in ruin. Sniper fire rained down on them as soon as they entered the ruins of a generic urban battlefield. Lines of code obscured parts of the environment like the clouds of dust they represented. Even though Smith was intimately familiar with the scenario, the AI perceived threats before he did. Smith’s rapid tapping of the keys ceased, and he sighed. The AI had already locked him out of the controls. A bullet from a sniper just barely grazed the right arm of the suit. It was on autopilot from here on out.
The AI had determined, only three minutes into the simulation, that the pilot was not skilled enough to complete the mission in a satisfactory manner. The NRATS suit’s rocket-propelled flight mode was engaged. The AI took the fight to the air, spiraling and weaving gracefully to avoid fire, then unleashed a wave of carefully targeted counterfire to eliminate four targets perched on the rooftops. One more target was obscured behind the cover of six hostages bound together, chained and bent over exposed piping, serving as human sandbags for the enemy sniper. We both watched as Smith traced his finger across the lines of code, tracking a single round fired by the AI into the pile of hostages. It struck one of them, passing horrifically through their head and then striking perfectly center mass on the target. I instinctively pressed myself back into the booth upon hearing Smith remorsefully describe the scene.
We monitored the program for the next fifteen minutes, which displayed a resourceful showing of force and strategy. The AI was able to avoid traps, dispatch assailants, and overcome obstacles without sustaining any further damage. Smith commented on the relation to real-world scenarios, and wondered whether pilots were being subjected to the same level of lockout during operations. He suggested that the AI might never consider a human pilot as efficient as its own ability to control the unit.
Startech had built an incredible weapon, capable of perceiving its surroundings, analyzing the data, and reasoning the most effective course of action. Its ability to evaluate a human operator and predict their performance is most concerning when combined with the authority to override control. Smith, and many other developers like him, are worried about the consequences and what the future may hold, not to mention the atrocities committed along the way.
Over the past year there have been many attempts to uncover the mysterious incident at Startech Industries in late 2016. Many took the building’s partial collapse at face value, a tragic accident. For the first few months, though, a small collection of conspiracy theorists angered the families of the victims and the general public with wild accusations of complex coverups involving terrible experiments. They were ultimately dismissed as unfounded and the conversation stopped. Smith speculates that some of Startech’s employees were involved in the circulation of these rumors in hopes that a proper investigation would discover the truth. Those investigations never came to light. Who would have ever imagined that the company was not only developing what is essentially a weaponized supercomputer capable of mass destruction, but had also committed multiple acts of homicide along the way?
Mr. Smith gathered his belongings slowly from the café booth, packing everything away carefully. I was stunned by what I had heard and seen but could only manage to utter the most routine response. I thanked him for his time and gathered my notes as well, remembering to click the little button on the side of my digital recorder. “Why now?” I posed a final question to Smith.
He gave a half grin as he looked up from his belongings, which quickly faded as he replied, “It knew what it was doing. It knows that it kills people and understands what that means. Not just bad people, but anyone that prevents it from doing the best that it can. We made a monster.”
As I gathered the still-damp heap that was my coat and left the establishment, I forced a smile back at the barista who had greeted me hours before, then prepared to leave the pleasant aroma of fresh coffee behind. As I stepped out, I glanced back inside once more, and Mr. Smith had seemingly vanished through a different exit. I paused for just a moment to process my surroundings, then was overcome with a sense of appreciation for my free will. Water droplets gathered at my brow, my hair freshly dampened by cold, North Pacific rain once more.
FICTION BY Kyle Kraemer
Kyle is an undergraduate student studying Communications at Wilkes University in Wilkes-Barre, PA. He is a US Navy Veteran with four years served on the crew of the USS Hopper with two deployments to the Persian Gulf. Kyle is an avid photographer and enjoys creative writing and technology. He aspires to a career in broadcast journalism.