
Will my computer kill me?

Science fiction writers have written for years about the nightmares of technology that runs amok. Whether you favor the apocalyptic scenarios of devices becoming sentient and overtaking humans, or the destruction of our world through widespread use of technologies that foster environmental disasters, I imagine we all harbor a bit of fear of the rise of the machines.

I’ve written and spoken about security and end-user technology for quite a while. In current practice, security is a continuous process: we must constantly expend resources to tighten controls, install patches and review the delicate balance between security and usability. Patching is one of those annoyances that will not go away. Recently, however, while reviewing notes from a very large patching effort, I made an observation that frightened me.

Despite the constant attention given to patching – patch your Windows computer, patch your smartphone, patch your networking devices – user compliance isn’t wonderful. In fact, if it weren’t for automated patching, most people would not bother.

What prompts a patch?

Security patches arrive from a company when an observed and verified issue presents the likelihood of compromise to a user of the product or service. Often the security issues are discovered by someone other than the manufacturer. Similarly, non-security patches are issued to address a wide variety of deficiencies. Often these types of patches arise from large-scale use of a tool. Software designers run their products through a series of tests, all designed to examine the proper functionality of the tool. However, despite solid attempts at quality control, reproducing the myriad interactions of millions of users isn’t practical. Therefore, as we use software, we are also participants in a quality review. You’ve likely noticed alerts from some software companies seeking your permission to collect error reports and other diagnostic data. These efforts are designed to identify software problems.
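The automated patching mentioned above generally begins with a client-side check of the installed version against whatever the vendor currently offers. A minimal sketch of that comparison follows; the version strings and function names here are hypothetical illustrations, not any particular vendor's update protocol.

```python
# Minimal sketch of the client side of an automated patch check.
# Names and version strings are hypothetical, for illustration only.

def parse_version(v: str) -> tuple:
    """Turn a dotted version like '1.4.2' into (1, 4, 2) so it compares numerically."""
    return tuple(int(part) for part in v.split("."))

def patch_available(installed: str, latest: str) -> bool:
    """True when the vendor's latest release is newer than what we run."""
    return parse_version(latest) > parse_version(installed)

if __name__ == "__main__":
    # In a real client, 'latest' would come from a signed vendor manifest,
    # not a hard-coded string.
    print(patch_available("2.3.1", "2.3.10"))  # numeric compare: 10 > 1
```

Comparing tuples of integers rather than raw strings matters here: a naive string comparison would rank "2.3.9" above "2.3.10", which is exactly the kind of quiet defect that later requires a patch of its own.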

Some companies are more responsive to design flaws than others.

In fact, I’d offer that many companies are horrible and inattentive to bugs or issues with their products. I reviewed some of the technology in my home and noticed that the issue date on some hardware platforms was several years ago. After reviewing my own complaints to the companies, I visited discussion boards and quickly saw thousands of similar complaints. Yet, no patches or enhancements have been produced.

Why not?

Resources for product support are costly, and support for millions of end users doesn’t receive proper attention from many technology vendors. Some operate from a position of benign neglect and planned obsolescence: they simply work on the next version, address issues in the next offering, and offer an enticing price for product replacement.

Just buy another one and everything will be fine for a while: disposable technology.

But as these devices and software solutions become less of an add-on and more of a necessity, how do we address the failures of those who do not develop, test and support their offerings properly?

The U.S. continues to lack comprehensive data protection legislation. Alabama passed its own data breach law only last year, and each state now has differing data protection laws. Some federal statutes address discrete areas of data security: healthcare and banking are notable examples.

Let’s travel slightly into the future.

Imagine that you’re riding along an interstate, headed to the sunny coast. Your semi-automated vehicle “knows” where to go; it is equipped with an array of internet-connected technologies, and you, too, are connected to the same cloud. Course corrections occur as traffic patterns change. Your vehicle interrupts you as it brakes hard because of a sudden stop by the car ahead. You were distracted: your smartphone and smartwatch were alerting you to an incoming call. As your vehicle idles on the interstate, you exhale in relief. The car saved you. You accept the incoming call, and the sound system comes to life. It’s a family member.

“Have you heard the news?”

“Cars are driving erratically and smashing into one another.”

Your vehicle suddenly surges forward, splitting the bumper of the car ahead of you, and the airbags deploy.

As you slowly awake to unfamiliar surroundings, you learn that you have been transported to a hospital. The immediate area is chaotic: healthcare workers are running about in a frenzy, and the injured are strewn about the facility.

A nearby television shows miles of damaged cars. The reporter explains that a security issue with several car manufacturers’ products, combined with a widespread failure of GPS systems, produced lethal effects.

Now, if my vehicle experiences a mechanical problem, I take it to a mechanic. But those failures aren’t shared simultaneously among thousands of vehicles; the mechanic addresses my issue and I leave. If a disruptive software issue is observed, are we equipped to handle problems on a massive scale with our current technology consumption model?

In a recent conversation, security practitioner Mike Foster offered the following:

“The future car problems are particularly concerning to me because with ‘traditional’ vehicles we have this basic knowledge of what to expect when a car fails and some preparation for how to handle it. There are very few catastrophic failures that would cause a vehicle to suddenly stop or accelerate without warning, and they’re not common. In addition, there are usually warning signs that something may be wrong, providing us with some kind of instinctive, almost subconscious recognition that raises our awareness while operating a vehicle. With a software glitch or hack this opportunity is completely bypassed, increasing the likelihood of such an event completely catching a driver by surprise. And once all systems are fully a part of this interaction, an effective hack may make attempts to work around a problem in a crisis completely ineffective. For example, if my primary brakes fail, I still have the ability to engage my emergency brake. But with a fully automated system this could be impossible, and at that point I’d become a helpless observer, utterly unable to change the outcome regardless of how much time I might have to respond.”

As this technology leaves traditional settings, we all need to consider how “patches” will affect us. We do not want to be spectators, sitting idly by as we are governed by the devices we created and subjected to a poorly designed patching process. Should vendors be held to a higher quality standard when their devices are integral to our lives?

We worry rightfully about information security and privacy. Industry and legislative leaders can’t get those problems resolved properly. Instead, we create new security models, new management models, we patch, we buy new products and breaches continue to happen.

What will it take for a serious approach to the concerns of technology security?

I, for one, hope my computer doesn’t kill me before someone takes notice of the larger concerns.