Artificial Intelligence, Machine Learning, Big Data, Data Analytics, Autonomous Cars, Smart Homes, the Internet of Things... Technological advancement marches ahead, and as the power of computing machines continues to grow, we are undergoing a sea change in how the world works. This is no exaggeration. Disruptive technology has been with us for decades, and the history of advancement inevitably includes the decline of old ways of getting things done and of old industries that are resistant to change. To move forward we must all learn to adapt; change is the only constant. Lifelong learning may be the single most important skill to master in order to maintain a career in an environment of constant change and continuous disruptive innovation.

Disruption is difficult, however, and there's a backlash in the air: a worry that all this change, despite its benefits, is not only eliminating jobs but also threatening the security of our lives and our future. Dystopian visions of machines taking over the human world are a staple of Hollywood movie production, and fear of algorithms and bad actors has filled many hearts with dread about a techno-terrifying future. Self-driving vehicles threaten the livelihoods of millions of people who earn their bread by driving, and other professions may be wiped out entirely when technology performs their work with greater skill and accuracy than is humanly possible.
Despite the backlash, however, there is plenty of reason for optimism. The tools of A.I. and Machine Learning can help solve intractable problems and provide solutions that were never previously possible. Automation can free us from menial tasks and enable us to pursue more meaningful and interesting work. Smart homes that manage energy usage can save us money and prevent waste of precious energy resources. Yes, disruption is painful. Jobs will be transformed, and people will struggle. The answer, however, isn’t to stem the tide of progress, but rather to rise to the occasion by being adaptable, by becoming a lifelong learner, by challenging ourselves to move out of the comfort zone of the status quo and embracing change as an opportunity, rather than as a problem.
Whence the Fear?
Why are we afraid of so-called "intelligent machines"? Perhaps much of our anxiety about artificial intelligence stems from what is known as the "black box problem." While books and movies stoke the fires of terror with worst-case scenarios about malevolent machines subjugating the human race, the core of our concern seems rooted in the fact that most of us simply have no idea what is going on behind the scenes. We think of technology as "magic" because it can produce amazing results in ways we don't understand. Throw a problem at a machine learning engine, and suddenly you not only have information that no human could have deduced on their own, but also the ability to predict similar results in the future.
A key to calming the fears we have about our machines is learning how they operate. A good first step is seeking to understand how these technologies work at a high level: What are they designed to do? How are they designed to work? Once you have acquired a basic knowledge of concepts like neural networks and deep learning, you will understand that to call our machines "intelligent" is really just a metaphor. We humans design these things, and we humans are responsible for the things they do. Machine learning is, in a nutshell, only about having machines identify patterns that exist in enormous data sets. The machines are not smart; we are. The machines only do our bidding: they find patterns we can't see, but we tell the machines how to look for them.
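The idea of "pattern finding as instruction following" can be made concrete with a toy example. The sketch below fits a straight line to a handful of points using ordinary least squares. The data values are made up for illustration; the point is that the machine does not "understand" anything, it just applies the pattern-finding rule we wrote down.

```python
# Machine learning as pattern finding: fit a line y = a*x + b
# to data points by ordinary least squares. The "learning" is
# nothing more than the formula we told the machine to apply.

def fit_line(points):
    """Return slope a and intercept b of the least-squares line."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Illustrative data: y grows roughly twice as fast as x.
data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.0)]
a, b = fit_line(data)

def predict(x):
    """Apply the learned pattern to a new input."""
    return a * x + b
```

Real machine learning models have millions of parameters instead of two, but the principle is the same: humans choose the rule, the machine grinds through the arithmetic.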
Technology magic is like the magic of the Wizard of Oz: there is a mind behind the curtain; we only need to find out who or what it is. Enter Explainable Artificial Intelligence, or X.A.I. The core idea behind X.A.I. is that the machines we've trained to learn can also be trained to teach. While machine learning neural networks can become too complex for a human to understand, the machines themselves can be programmed to reveal their inner workings to us, demystifying the processing that happens inside the black box. Then we will be able to make informed decisions about how to utilize the results of that processing.
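A minimal sketch of the X.A.I. idea: a model that reports *why* it produced a result, not just the result. The model and its weights below are entirely hypothetical, chosen only for illustration; real explainability techniques (feature attribution, saliency maps, and the like) are far richer, but the spirit is the same.

```python
# A toy "self-explaining" model: a linear scorer that returns
# both its score and the contribution of each input feature,
# so a human can audit the decision instead of trusting a black box.
# The feature names and weights here are hypothetical.

WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}

def score_with_explanation(applicant):
    """Return (total score, per-feature contributions)."""
    contributions = {
        feature: WEIGHTS[feature] * applicant[feature]
        for feature in WEIGHTS
    }
    return sum(contributions.values()), contributions

total, why = score_with_explanation(
    {"income": 4.0, "debt": 1.5, "years_employed": 2.0}
)
# 'why' shows how much each input pushed the score up or down.
```

Even this trivial breakdown changes the conversation: instead of "the machine said no," a person can see that, say, debt dragged the score down and ask whether that weighting is fair.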
To quote a recent article about X.A.I., “Even if a machine made perfect decisions, a human would still have to take responsibility for them — and if the machine’s rationale was beyond reckoning, that could never happen.” The key to breaking open the black box is understanding, first by learning how artificial intelligence and machine learning work at a high level; next, by having the machines we’ve created explain themselves to us as a matter of course. While knowing the inner workings of the black box will remove the mystery of the deus ex machina, there will still, inevitably, be risks involved with potential misuse or even abuse of the power of our machines.
Can we save ourselves from ourselves?
We can't always prevent bad things from happening; people make bad choices, and the people developing intelligent machines must work towards a set of standards that puts ethical responsibility squarely upon human shoulders. Our fear of intelligent machines, however, is primarily a fear of the unknown. Machines don't make choices; people make choices. Machines memorize patterns, then utilize those "learned" patterns to make quantitative decisions and predictions. To quote the article above once again, "It's an ontological question: Is the deep neural network really seeing a world that corresponds to our own?" No! It is only following our instructions down a path of ever-increasing detail and particularization. People make the decisions about what and how machines will learn, and people make the decisions about what the machines will do with that information.
There is no panacea or simple answer to the question of whether to trust our machines, but the best way to protect ourselves is to grow our knowledge and understanding of what our machines are doing and how that activity affects us. A great place to start learning is the TensorFlow Playground. There you can learn about machine learning and neural networks at a high level, and even tinker with a machine learning model. Once you begin to grasp the basics, you will be able to expand your comprehension in order to influence the decision makers behind the machines.
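If you want a taste of what the Playground demonstrates without leaving your own machine, the sketch below trains a single artificial neuron, the building block of every neural network, to recognize the logical AND pattern by nudging its weights after each mistake. This is the classic perceptron learning rule, shown here as a toy, not as how TensorFlow itself works.

```python
# A single neuron learning logical AND via the perceptron rule:
# after each wrong answer, nudge the weights toward the target.

def train_neuron(samples, epochs=20, lr=0.1):
    """Train a two-input perceptron; return (weights, bias)."""
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            err = target - out          # 0 when correct
            w[0] += lr * err * x1       # nudge each weight
            w[1] += lr * err * x2
            bias += lr * err
    return w, bias

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_neuron(AND)

def predict(x1, x2):
    """Apply the trained neuron to new inputs."""
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

Stack thousands of these neurons in layers and you have a deep network; the "learning" is still just weight-nudging arithmetic that humans designed and can inspect.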
...and how do I influence the decision makers?
With your wallet: buy from companies that make efforts to increase visibility and transparency about what is happening in the "black box" of their products, or use open-source products that are transparent by virtue of making their code publicly available.
With your knowledge: find work in the field and influence it from within, or contribute to open-source projects like TensorFlow and help spread the idea of self-reporting code that teaches us as it learns.
By teaching others: talk to people about "smart" technologies and what they mean for our lives and our future. Help the people you know who struggle to understand basic ideas; try to paint a mental picture to help them grasp unfamiliar concepts. Write blog posts, share articles like this one on social media, and contribute to the conversation.
The more we do, collectively, to help our machines help us, the brighter the future will be, and the more positive benefits our amazing inventions might deliver to humanity. We must take responsibility for what we build, and ensure that it works for us, rather than against us.
Fortune 500 companies are already taking advantage of Big Data and Machine Learning, but you can too. With tools like Google Cloud Platform (GCP) and partners like Cloudbakers who specialize in these types of projects, mid-market companies are disrupting their own industries and getting the same benefits that at one time only the top enterprises could achieve. Interested in discussing this further with an expert? Contact us to learn more!