Artificial intelligence is the stuff of science fiction, and it's becoming more and more a part of our daily lives. Beyond the initial "coolness" factor, however, there are some very real questions to ask and challenges to overcome for today's marketer and beyond. Brace yourself: it's about to get nerdy.
Feeding Your AI
Access to massive amounts of information is what puts the "intelligence" in AI: parsing and crunching data from disparate sources and inputs to find relationships, connect dots and make predictions in ways that are not humanly possible.
Market research and advisory firm Ovum estimates that the big data market will grow to $9.4 billion by 2020, comprising 10 percent of the overall market for information management tooling.
Research by The Economist Intelligence Unit last April found that 37 percent of global marketing executives named big data and AI among the technologies they expect to have the biggest impact on marketing companies by 2020.
Ethics And Safety
Just because AI can do something doesn't necessarily mean it should. Machines optimize for the goals they are given, and they may become dangerous or make mistakes in pursuit of a predetermined, programmable end. A self-driving car, for example, might decide to strike a pedestrian to spare its passenger a more serious collision; and while factory AI makes production more efficient, employment rates could plummet as fewer humans are required. Ethical dilemmas like these make experts wonder whether even implementing Isaac Asimov's Three Laws of Robotics would be enough.
During a panel about AI ethics and education in San Francisco hosted by the Future of Life Institute, Illah Nourbakhsh, a robotics professor at Carnegie Mellon University, said that educators need to teach computer science and robotics students a basic understanding of ethics.
This is because the technologies they are creating are so powerful that they “are actually changing society.” Citing the examples of drones used in warfare and AI technologies used in advertising, Nourbakhsh said that cutting-edge technology on a global scale is changing consumer behavior.
Although humans program AI-powered robots to accomplish a particular goal, these robots will typically make decisions on their own to reach the goal, explained Benjamin Kuipers, a computer science professor and AI researcher at the University of Michigan.
Having a basic understanding of ethics can help technologists better understand the potential ramifications of the AI-powered software and robotics they are creating, he explained.
Respecting Privacy
Let's face it: AI assistants are pretty dang cool. Unfortunately for consumers, however, an assistant's queries can't be end-to-end encrypted, because the provider's servers need to read them, and that tears a big, gaping hole in privacy. Whenever you ask Siri, Google or Alexa where to eat or what the capital of Uruguay is, your query gets sent to a cloud server, where it is analyzed before returning at ludicrous speed with an answer. To make better choices and recommendations, these assistants amass a huge amount of personal data: interests, habits, places visited and preferences.
While these assistants are designed to begin listening at the sound of a “wake word”—”okay, Google,” for example—consumers put their privacy at risk by even owning a device armed with microphones that can be accessed by a third party.
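To make that wake-word gate a little more concrete, here's a rough Python sketch of the flow: audio is screened on the device by a keyword-spotting model, and only the audio captured after the wake word fires leaves the device for the cloud. The function names and the endpoint URL are made up for illustration; this isn't any vendor's actual code.

```python
import random  # stand-in for a real audio/ML stack

# Hypothetical sketch of a wake-word-gated assistant: audio stays on the
# device until a local model hears the wake word, and only then is the
# query audio sent to a cloud endpoint for analysis.

ASSISTANT_API = "https://assistant.example.com/query"  # hypothetical endpoint


def detect_wake_word(audio_chunk: bytes) -> bool:
    """Placeholder for an on-device keyword-spotting model."""
    return random.random() < 0.01  # pretend the model fires occasionally


def stream_microphone():
    """Placeholder generator for raw microphone audio chunks."""
    while True:
        yield b"\x00" * 3200  # 100 ms of dummy audio


def run_assistant():
    listening = False
    for chunk in stream_microphone():
        if not listening:
            # Audio is checked locally and discarded until the wake word fires.
            listening = detect_wake_word(chunk)
        else:
            # Once awake, the query audio leaves the device for the cloud,
            # which is the step that raises the privacy questions above.
            print(f"POST {len(chunk)} bytes to {ASSISTANT_API}")
            break


run_assistant()
```

The privacy concern, of course, is everything that sits outside this happy path: the microphones are always powered, and the gate is only as trustworthy as the software (and whoever can access it).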
“Now is the time for setting privacy expectations,” said Michelle Dennedy, chief privacy officer for Cisco and founding member of Voice Industry Privacy Group, a new organization designed to set voice privacy agendas for developers. “We don’t want to kill the innovation cycle, but I care about whether my TV is listening to me,” added Joyce Brocaglia of Alta Associates, an executive cybersecurity search firm that helped launch the group.
Until computer scientists can invent "searchable encryption," Google offers state-of-the-art encryption within its Allo messaging app, but if you turn it on, your fancy AI assistant can't function.
Starting with iOS 10, Apple is using Differential Privacy technology to help discover the usage patterns of a large number of users without compromising individual privacy, according to a statement given to Wired.
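For the curious, here's a minimal Python sketch of the idea behind that kind of local differential privacy, using the textbook "randomized response" trick: each device adds noise to its own answer before reporting, so the server can estimate an aggregate rate without ever learning any individual's true answer. This is a generic illustration of the concept, not Apple's actual mechanism.

```python
import random

# Randomized response: a simple form of local differential privacy.
# Goal (assumed for illustration): estimate what share of users enable a
# feature without collecting any user's true answer.


def randomize(true_answer: bool) -> bool:
    """Each device flips a coin: heads, report the truth; tails, report a
    second coin flip. The server never sees raw, per-user data."""
    if random.random() < 0.5:
        return true_answer
    return random.random() < 0.5


def estimate_rate(reports):
    """Observed rate = 0.5 * true rate + 0.25, so true rate = 2 * observed - 0.5."""
    observed = sum(reports) / len(reports)
    return 2 * observed - 0.5


# Simulate 100,000 users, 30% of whom truly enable the feature.
users = [random.random() < 0.3 for _ in range(100_000)]
reports = [randomize(u) for u in users]
print(round(estimate_rate(reports), 3))  # prints a value close to 0.30
```

The pattern generalizes: noise is injected on the device, individual reports are deniable, and only the aggregate signal survives, which is the trade-off such techniques are meant to strike.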
Craig Federighi, senior vice president of software engineering at Apple, stated in a press release that machine-learning algorithms able to understand personal data such as photos are being used only within the confines of a person’s iPhone, not on Apple’s cloud servers. “We believe you should have great features and great privacy,” he said.
There are other issues to consider as well, like whether data gathered for marketing could be subpoenaed by authorities for a criminal investigation or purchased by a third party.
AI is proving highly useful to human beings—especially marketers—and the industry is expected to grow significantly over the coming years.
Experts are hard at work finding solutions for these challenges as the technology becomes more integrated into life as we know it.
So far we’ve managed to avoid a robot uprising. It’s probably best to avoid any AI calling itself “Skynet” for now, though.