It’s an exciting time, with technologies emerging and converging in compelling ways to deliver potentially amazing human experiences. For those who work in digital, it can be enticing to start eagerly incorporating these new and exponential technologies.
For instance, how might we use social robots in aged care and health to help address social isolation and loneliness, and to support in-home healthcare? Or in education, with interactive delivery models for children with additional needs? Urban planners can use extended reality to imagine and experiment with designs for the cities of the future so that they best serve citizens. Next-generation chatbots fuelled by artificial intelligence could answer enquiries efficiently at a time that is most convenient to customers.
It’s an exciting time to be working in experience design and digital services. However, there is also a groundswell of awareness that the current combinations and applications of newly emerging technologies can have unintended consequences such as filter bubbles, fake news, algorithmic bias and discrimination, device ‘addiction’ and negative effects on mental health and wellbeing. Are these truly “unintended consequences”, or merely unconsidered impacts? Before we are swept up in the excitement of it all and keep building and using some of these exponential technologies, we need to pause and ask some simple but important questions:
These are questions that tap into purpose, intent and outcomes – essentially embedding the human element into the technical considerations. Because, ultimately, isn’t this who we are designing for?
Educational and market responses are already underway in recognition of the general awakening to the power that emerging technologies possess, and the responsibilities we have when designing with them. Universities are incorporating social science into computer science degrees to reacquaint computer science with social theory, and setting up research institutes to investigate broader social impacts. Large technology companies such as Microsoft and Google have published their design principles for artificial intelligence. Apple is positioning itself in the market as a leader in data privacy, and Salesforce has appointed a Chief Ethical and Humane Use Officer to develop strategies around ethical technology use. Apple and Google have both launched a range of “digital wellbeing” tools on their devices to help users manage screen time and notifications, and to prioritise the use of apps that foster a healthy relationship with technology.
We also see the market offering ways for people to disconnect from technology such as digital detox camps, quiet carriages on public transport, and products such as the Light Phone, which is a minimalistic, low feature mobile phone.
The future of emerging, exponential, and even our current-day technology lies in designing transparent, less distracting and more ‘healthy’ interactions and experiences. Experiences that sit discreetly, politely and purposefully within a human being’s context. And that’s worth giving our attention to as a first step in using these exciting technologies.
As companies wrestle with how to bring these considerations into products and services, these six questions provide a starting point for structuring ethical considerations into the design process.
Brittany is a Senior Consultant within the Deloitte Digital Consulting practice. She has over six years’ marketing experience, with a specialism in marketing optimisation and campaign strategy. She has worked across a number of industries including retail, not-for-profit, hospitality and tourism. She has particularly strong experience in developing and executing marketing strategies, campaigns and events that drive tangible outcomes.