Learning with machines

Principles to keep in mind as you build a community around Artificial Intelligence (AI)

January 24, 2019

A blog post by Scott Pobiner, specialist leader, Deloitte Consulting LLP.

Sociologist Everett Rogers described uncertainty as the principal challenge to innovation adoption.1 The greater the level of uncertainty about an innovation, he pointed out, the less likely it is that the innovation will find broad adoption within a community. This is a problem for any innovation that requires community engagement to succeed, but it is particularly troubling for smart systems that, beyond simple utilization, require engagement to thrive—to learn. Without communities, smart systems are only as smart as their creators made them: infants in an uncaring and utilitarian world, but with disproportionate strength.

In From smart products to smart systems, we discuss the importance of leveraging participatory design methods to create better, more adaptable, and more valuable smart systems. We draw parallel distinctions between "smart products" and "smart systems," and between "human-centered design" and "participatory design," that speak to why designing a product for a user is different from designing an autonomous system for a community of users. I'd like to elaborate on the important distinction between smart products and smart systems and highlight three principles for engaging communities of users to help create better outcomes for all.

Smart products to smart systems

We often think of a "smart product" as a device: a thing that handles a single task or a series of integrated tasks for us. Most of these "smart things" are incremental iterations of existing "dumb things"; think smart coffee maker, smart refrigerator, smart doorbell, and so on. As a result, "smart products" have taken on the implicit virtue of being things that users can walk away from, disengage from, or simply ignore, even the incessant and demanding vibrations of a smart watch.

A smart system, on the other hand, is nearly always present and should support many more tasks, for more people, than its solitary counterparts. It is an inherently more complex thing that must handle more use cases, with a wider array of tasks, for more people, in a way that feels connected and consistent. Prioritizing those tasks to ensure that the most important issue is addressed requires an explicit definition of which users are best served by a particular course of action. Were it not for the fickle nature of human behavior, this wouldn't be a problem, but people are often unreliable and much more innovative than machines. We tend to adapt to conditions rather than improve them; we cobble together practices that don't always work in collaboration with others, and sometimes work in conflict with smart systems. Consider the challenge of managing first responder resources and you might realize how uniquely powerful a smart system might someday become.

Due to their inherent complexity, smart systems are rare, and, once again with a nod to Rogers, this is a problem because it diminishes how their attributes are perceived. With fewer opportunities to experience and therefore to understand smart systems, uncertainty could prevail, and we could lose out on the chance to create the systems that will help change our world. In short, learning to live in an automated world means participating in its development.

Here are three principles to keep in mind as you build a community around artificial intelligence (AI).

Cultivate trust. Bring communities of users into the design process and provide them with a direct view into how these technologies work, how the insights are honed from raw data, and what the limits of technology are.

Go deeper. Meet with community stakeholders regularly to surface latent concerns as well as aspirations for what artificial intelligence might enable.

Align for the long term. Machine learning is a long play. Data takes time to wrangle, and training sets take time to refine. Seek ways to bring value to communities along the journey, and work together to define the short-term wins and long-term objectives. Remember that problem understanding and problem solving should happen concurrently. Finding ways to bring more people into that process will increase your chances of success and make the process more palatable for all.

Highlights from the State of AI in the enterprise report

  • The impact of AI on the workforce:
    • 56 percent of leaders indicated that AI will transform their business within three years—down from 76 percent a year earlier.
    • 72 percent of respondents see AI-driven automation as substantially altering tasks and roles over the next year.
    • A majority of respondents (72 percent) believe that when AI truly improves decision-making, it also leads to greater job satisfaction.
  • AI ethical concerns:
    • 32 percent of respondents ranked ethical issues as one of the top three risks of AI, but most don’t yet have specific approaches in place.
  • Regarding demand for a human element to create and implement AI:
    • 30 percent of respondents ranked AI researchers tasked with inventing new kinds of algorithms and systems as a top-two need.
    • 28 percent noted software developers to build AI systems are needed.

End notes

1 Rogers, Everett M. Diffusion of Innovations. Simon and Schuster, 2010.
