The Uber self-driving car fatality and the Facebook data-scraping scandal are the most recent indications of a significant risk in the technology industry: the lack of mature safety processes. The icons of the “hot industry” of the 21st century have yet to learn how to build safety into their work. They still don’t grasp that they, like all leaders of human endeavors, are responsible for the safety of the work they do.
“Safety and ethics are still elective, rather than foundational, to software design”.
I have found Quartz News Outlet to be a source of thought-provoking articles on current themes such as technology. This morning I read an editorial piece in Quartz’s Weekend Edition commenting on the lack of oversight in software design. The timing is apt: because of the events mentioned above, I have been contemplating the march of technology into the realm of autonomous equipment in general society. The recent fatality in the Phoenix area involving an Uber self-driving vehicle and a pedestrian (NY Times: Self-Driving Uber Car Kills Pedestrian in Arizona, 3/17/2018) has returned my thoughts to the issue of safety in tech innovations and why it is so lacking.
I have extensive professional experience ensuring the safety of automated industrial systems, which gives me unique insight into the challenges such efforts create. I have watched with interest as software and hardware designers have begun to introduce autonomous designs into the public space – a space with much less certainty than the industrial environment. Yet the safety processes around technological innovation are far more rigorous in the industrial environment. Even with those controls, accidents are not uncommon. So the relative lack of safety processes in the public space is sure to create increased risk and accidents.
The Quartz editorial (provided below) sums up well the fact that software design is still an immature field without adequate controls for avoiding harm to society. Other fields have had to go through similar maturation processes, usually including an “A-Bomb Moment” (see below) that crystallizes the resolve to establish controls. I think the tech sector has yet to have this moment, but we (society) can see it coming. Let’s hope it is not an extinction-level event like the real A-Bomb…
“The stakes couldn’t be higher. Technology mediates almost every aspect of our lives”.
I propose several areas that the tech industry should put at the forefront of efforts to improve. These are:
- Management of change procedures
- Risk Assessment/Risk Reduction
- Machine Safety: Control Reliability
- Human performance improvement/Human error prevention
- Operational safety procedures
- Governmental safety regulations
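To make the second item on the list concrete: industrial risk assessment typically scores each identified hazard by severity and likelihood, then maps the combined score to a required level of risk reduction. The sketch below is a minimal, hypothetical illustration of that idea in Python; the category names and thresholds are invented for illustration and are not drawn from any particular standard.

```python
# Minimal, hypothetical risk-matrix sketch: score each hazard by
# severity and likelihood, then map the product to a coarse risk level.
# Categories and thresholds are illustrative, not from any standard.

SEVERITY = {"minor": 1, "serious": 2, "fatal": 3}
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}

def risk_level(severity: str, likelihood: str) -> str:
    """Return a coarse risk level for one identified hazard."""
    score = SEVERITY[severity] * LIKELIHOOD[likelihood]
    if score >= 6:
        return "high"    # demands engineered safeguards before operation
    if score >= 3:
        return "medium"  # demands mitigation and documented procedures
    return "low"         # acceptable with routine precautions

# Example: a pedestrian struck by an autonomous vehicle is a
# fatal-severity hazard; even a "rare" likelihood scores medium.
print(risk_level("fatal", "rare"))      # -> medium
print(risk_level("fatal", "possible"))  # -> high
```

The point of such an exercise is not the arithmetic but the discipline: every hazard gets identified, scored, and assigned a documented mitigation before the system enters service.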
I don’t want to make this a longer post by expanding on these topics, but I have included some highly relevant links above. I would like to point out a particularly important industry group with a great history regarding safety in technology: the IEEE. Every tech company should be involved with this group and informed of its work. I also direct tech leaders to the Robotic Industries Association’s robot safety standard (ANSI/RIA R15.06-2012) as a rare example of strong safety processes in technology.
I hope that leaders in the Tech Industry will take heed of the suggestions above and initiate efforts to improve the processes that create and ensure safety in their operations and products.
From Quartz News Outlet:
“As Robert Oppenheimer watched a mushroom cloud from the first nuclear detonation bloom over a New Mexico test site, he repeated a line from the Hindu epic Bhagavad-Gita: “Now I am become Death, the destroyer of worlds.” The scientist who helped build the world’s most lethal weapon saw how physicists would forever confront the consequences of their discoveries.
Today, computer scientists are contemplating their own “A-bomb moment.” Facebook’s carelessness with user data, and the attacks the company has enabled against western democracies, are on software engineers’ consciences.
“Computer science is a field which hasn’t yet encountered consequences,” writes Yonatan Zunger, a former security and privacy engineer at Google, who has compared the power in the hands of software engineers to “kids in a toy shop full of loaded AK-47’s.” Safety and ethics are still elective, rather than foundational, to software design.
Other fields have already had to reckon with such ethics. Chemistry’s discovery of dynamite and chemical weapons, and biology’s rationale for eugenics, prompted the creation of institutional review boards, mid-career certification, and professional codes of conduct. But software engineering is different. Coders are neither a profession nor a society in the traditional sense. Many are self-taught, and many have a healthy skepticism of any effort to corral the profession toward consensus.
The stakes couldn’t be higher. Technology mediates almost every aspect of our lives. Machines recognize speech and written text. Algorithms can recognize your face, as well as infer from data (with increasing accuracy) your gender, income, creditworthiness, mental health, and personality.
Tech companies already obsess over reliability—gaming out the “what-ifs” to prevent computer systems from crashing. Zunger says they need to apply the same planning to human consequences. “If you can do it without wanting to hide under a table, you’re not thinking hard enough,” he writes. “There are worse failure modes, and they’re coming for you.”—Michael J. Coren”