We discuss below some of the key elements that organizations should consider integrating when developing and implementing social AI solutions.32 These elements can address some of the challenges and can help create better work for humans and better humans for work.
1. Training the social AI model to generate impartial recommendations for building workforce trust
The training dataset for the social AI algorithm should be chosen to ensure fair representation of the population and to mitigate biases introduced by human inputs. It’s also important to ensure that recommendations (for improvements in communication, workflows, etc.) are not influenced by career level, for example, by assuming that junior professionals need more training on inclusive communication than senior leaders do. Further, the application should not only offer corrective guidance on communications and interactions but also convey appreciation and recognition when employees adapt their behavior to the recommendations and improve the quality of their interactions.
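To illustrate, here is a minimal sketch in Python of the kind of audit that could surface career-level skew in recommendation data before a model is trained on it. The record fields and the 0.2 disparity threshold are hypothetical choices for illustration, not a standard method.

```python
from collections import Counter

# Hypothetical audit: do corrective recommendations skew toward
# particular career levels in the data the model will learn from?
def recommendation_rates(records):
    """records: iterable of (career_level, was_corrective) pairs."""
    total = Counter()
    corrective = Counter()
    for level, was_corrective in records:
        total[level] += 1
        if was_corrective:
            corrective[level] += 1
    return {level: corrective[level] / total[level] for level in total}

sample = [
    ("junior", True), ("junior", True), ("junior", True),
    ("senior", False), ("senior", False), ("senior", False),
]
rates = recommendation_rates(sample)
print(rates)  # {'junior': 1.0, 'senior': 0.0}

# Flag levels whose corrective rate deviates sharply from the mean;
# the 0.2 threshold is an illustrative choice, not a standard.
mean_rate = sum(rates.values()) / len(rates)
flagged = [lvl for lvl, rate in rates.items() if abs(rate - mean_rate) > 0.2]
print("Review for career-level bias:", flagged)  # ['junior', 'senior']
```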
2. Defining responsibility and accountability for the social AI solution and the workforce
In recent years, there has been much discussion about whether AI should be held to machine or human standards, both ethically and legally.33 And who should be held responsible and accountable for a decision: the AI, or the person who created or deployed it?
It’s important to establish responsibility and accountability in conversations and interactions between AI systems and human users. When doing so, social AI cannot be considered in isolation: It’s part of an organization’s overall ethics policy, and human users remain integral throughout the AI loop. Consider an example where a social AI application recommends a language choice to an employee or a suitable team composition to a project manager. In this case, the responsibility to generate the most appropriate recommendation lies with the developer teams; however, the accountability for acting on that recommendation rests with the user, i.e., the human workforce. It’s imperative that this understanding of responsibility and accountability is documented and communicated to developers as well as end users.
3. Defining the purpose of social AI clearly to drive data privacy
During one of our research interviews, an AI specialist who focuses on AI/machine learning product management said, “The topmost challenge is privacy … users freak out when they learn that their data is being collected … they feel, ‘I am being monitored, and my behavior will be distributed where I don’t have control.’”34
One way to alleviate privacy concerns is to ensure that user data isn’t used for evaluative purposes; in other words, don’t use AI to rate your workforce’s emotional intelligence for performance reviews. The application should also seek permission to use workforce data for each distinct purpose (analyzing team conversations, sales pitches, customer support calls, etc.) rather than relying on blanket consent from the workforce for the deployment of multiple social technologies.
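A minimal sketch of what per-purpose consent could look like in code, assuming hypothetical purpose names:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentLedger:
    """Per-purpose consent: no single blanket grant covers everything."""
    granted: set = field(default_factory=set)

    def grant(self, purpose):
        self.granted.add(purpose)

    def revoke(self, purpose):
        self.granted.discard(purpose)

    def allows(self, purpose):
        # Each purpose must be granted explicitly by the user.
        return purpose in self.granted

ledger = ConsentLedger()
ledger.grant("team_conversation_analysis")
print(ledger.allows("team_conversation_analysis"))  # True
print(ledger.allows("sales_pitch_analysis"))        # False: must ask separately
```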
Depending on the AI’s purpose, there may or may not be a need to store the data. In a simple example of turn-taking and analyzing airtime in a multiperson conversation, the data is useful in the moment to allow everyone to contribute to the discussion, and it can be deleted after the conversation. In other applications, such as improving contact center conversations, data may need to be stored for future training and improvement purposes.
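As a sketch of the ephemeral case, assuming conversation turns arrive as (speaker, seconds) pairs, the airtime shares can be computed in the moment and the raw records discarded immediately afterward:

```python
from collections import defaultdict

# Minimal sketch: compute each speaker's share of airtime from
# (speaker, seconds) turn records, then discard the raw data.
def airtime_shares(turns):
    totals = defaultdict(float)
    for speaker, seconds in turns:
        totals[speaker] += seconds
    grand_total = sum(totals.values())
    return {speaker: t / grand_total for speaker, t in totals.items()}

turns = [("ana", 120.0), ("ben", 45.0), ("ana", 30.0), ("chloe", 105.0)]
print(airtime_shares(turns))  # {'ana': 0.5, 'ben': 0.15, 'chloe': 0.35}

# The raw turn records have served their in-the-moment purpose and can go.
del turns
```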
Conversational AI should replicate the trust and discretion that are integral to human-to-human conversations. When we share information with other individuals, there is an unspoken understanding that the listener will exercise discretion in passing that information along. Likewise, as social AI systems interact with other human users (say, peers or customers) on behalf of the workforce, they must share only what the user is comfortable sharing with other parties. For instance, suppose an AI database holds a user’s full date of birth. When another human user or AI bot requests this information, the system uses discretion and shares only the day and month, not the year, moving the conversation forward while keeping the data safe.
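A minimal sketch of this kind of field-level discretion, with a hypothetical disclosure policy controlling which parts of a date of birth are shared:

```python
from datetime import date

# Hypothetical disclosure policy: which parts of a field the user
# is comfortable sharing with other parties.
DISCLOSURE_POLICY = {
    "date_of_birth": {"day", "month"},  # year withheld by default
}

def share_date_of_birth(dob, policy=DISCLOSURE_POLICY):
    allowed = policy["date_of_birth"]
    response = {}
    if "day" in allowed:
        response["day"] = dob.day
    if "month" in allowed:
        response["month"] = dob.month
    if "year" in allowed:
        response["year"] = dob.year
    return response

print(share_date_of_birth(date(1990, 4, 17)))  # {'day': 17, 'month': 4}
```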
4. Securing social AI models by design
The European Union Agency for Cybersecurity (ENISA), the Federal Trade Commission (FTC) in the United States, and other organizations globally have outlined cybersecurity frameworks to assess an AI model’s exposure to cyberthreats.35 Organizations should periodically test their social AI models against these security frameworks to check for vulnerabilities to existing and emerging threats and deploy appropriate security controls.
When developers are building the social AI training data, they can deliberately seed the training dataset with examples of harmful content, for example, malicious requests that try to access and edit a user’s data or the complete dataset. Training on such examples helps the algorithms learn to identify abnormal behaviors relative to normal user patterns and to restrict further user activity, even up to denial of service as required.36
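As a simplified sketch (not a production security control), a classifier could be trained on normal activity alongside deliberately seeded malicious examples; the feature set here is an assumption for illustration:

```python
from sklearn.linear_model import LogisticRegression

# Illustrative features: [requests_per_minute, records_touched,
# edit_attempts_on_others_data]. Label 0 = normal, 1 = seeded attack.
X = [
    [2, 1, 0], [3, 2, 0], [1, 1, 0], [4, 3, 0],  # normal user patterns
    [40, 500, 12], [55, 900, 30], [35, 700, 8],  # seeded malicious examples
]
y = [0, 0, 0, 0, 1, 1, 1]

clf = LogisticRegression().fit(X, y)

def handle_request(features, user):
    if clf.predict([features])[0] == 1:
        # Restrict further activity, up to denial of service if required.
        return f"blocked: anomalous activity for {user}"
    return "allowed"

print(handle_request([45, 800, 20], "user_42"))  # blocked
print(handle_request([2, 1, 0], "user_7"))       # allowed
```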
5. Deploying transparent social AI models with explainable decision-making
The workforce should be able to see how their data feeds into the social AI algorithm, how the algorithm makes decisions, and how it would benefit them. The algorithm should be open to inspection and corrections as required. For example, if AI recommends that someone modify their tone, it should also provide a decision tree explaining why something is appropriate or not based on organizational guidance on language nuances.
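For instance, a minimal sketch of an inspectable, rule-based tone check might return its full decision path alongside the verdict; the discouraged phrases below are hypothetical stand-ins for an organization’s actual language guidance:

```python
# Hypothetical guideline phrases mapped to the reason they are flagged.
DISCOURAGED_PHRASES = {
    "obviously": "can read as dismissive",
    "as i already said": "can read as impatient",
}

def review_tone(message):
    decision_path = []
    hits = []
    for phrase, reason in DISCOURAGED_PHRASES.items():
        found = phrase in message.lower()
        decision_path.append(
            f"checked '{phrase}': {'found' if found else 'not found'}")
        if found:
            hits.append((phrase, reason))
    verdict = "revise" if hits else "ok"
    return verdict, hits, decision_path

verdict, hits, path = review_tone("Obviously, the report is late again.")
print(verdict)  # 'revise'
print(hits)     # [('obviously', 'can read as dismissive')]
print(path)     # every rule evaluated, so the user can inspect and contest it
```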
IBM provides factsheets for each AI model that contain information about the model’s creation and deployment throughout its life cycle. End users can review the data captured and how it moves through the AI life cycle to understand the model’s decision-making process.37 Consumers trust food nutrition labels because they enable them to decide whether to purchase and consume an item; social AI factsheets may drive transparency and trust with the workforce the way food labels do.
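As an illustration only, loosely inspired by the factsheet idea rather than IBM’s actual schema, a model factsheet could be represented as a simple structured record; every field and value below is a hypothetical example:

```python
from dataclasses import dataclass, asdict

@dataclass
class ModelFactsheet:
    model_name: str
    purpose: str
    training_data_sources: list
    evaluation_metrics: dict
    known_limitations: list
    last_reviewed: str

sheet = ModelFactsheet(
    model_name="tone-coach-v2",
    purpose="Suggest inclusive phrasing in internal chat",
    training_data_sources=["anonymized internal chat corpus (opt-in)"],
    evaluation_metrics={"accuracy": 0.91, "false_positive_rate": 0.04},
    known_limitations=["English only", "tested on office-domain text"],
    last_reviewed="2022-10-01",
)
print(asdict(sheet))  # a publishable 'nutrition label' for the model
```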
6. Maintaining social AI models’ robustness and reliability over time
When social AI systems can learn from users and from each other, they can produce reliable results and, over time, build trust with users. Human intervention may be required to ensure the model is, and stays, robust. Teams need to identify the right people to provide that human input: Have they received training on company guidelines and policies, and are they equipped to take on this responsibility? It’s also important to provide periodic refresher training on bias mitigation and ethics for those involved to keep the solution robust over time.
Getting started
In Deloitte’s survey of business leaders conducted in 2022, 76% said they plan to increase or significantly increase their organizational spending on AI in the next year.38 In addition to the established uses of AI in the workplace for making internal processes more efficient and generating data insights, leaders have the untapped opportunity to leverage AI to enhance the social side of work. Here are some actions to consider to get started.
Define social AI use cases and establish value metrics. Define what constitutes a social AI use or interaction so you know how to set metrics and measure against them. Identify the value to capture for each social AI application (increase in contact center resolution rates, higher employee engagement, improved acceptance of new processes, etc.). Measure value in terms of both breadth and depth. Breadth can be assessed by looking at how far-reaching the social AI solution’s impact is: Is it confined to select functions, or does it span the organization? Does the impact stay within the organization, or does it extend to external stakeholders such as customers and potential recruits? Depth can be assessed by looking at whether the social AI application simply improves existing processes or establishes new trustworthy processes, thereby reinventing work practices.
Make the workforce comfortable with social AI. It is a huge shift for the workforce to trust a machine socially; people have to get comfortable holding a mirror up to their development areas. Leaders and managers have the responsibility to win the workforce over to the idea that the use of their data is mutually beneficial for them and the organization. That often starts with letting the workforce know how their data will be used, giving them a “trial period” to evaluate the application, and offering the ability to opt in or out at any point in time. Also, professionals tend to prefer taking “recommendations” from AI, not instructions. As such, it’s important to make clear in the social AI user interface that the application is playing the role of a coach or buddy, not that of a gatekeeper or enforcer.
Identify how the workforce would like to engage with social AI, considering cross-cultural differences. Begin by identifying workforce needs for teaming, relationship-building, networking, etc., and assess where AI solutions can address current problems or uncover value-creation opportunities. There may be cross-cultural differences in social AI deployments for a globally dispersed workforce. For instance, in a survey of 1,015 respondents from 48 countries, respondents from East Asia were more likely to have a trusting attitude toward emotion AI than respondents from Western countries. This could require leaders to develop location-specific strategies for their global teams.39
Build a custom solution suited to your organization’s social nuances. When implementing a solution, it’s important to work closely (as a partner) with the AI solution provider. Since every organization is different in terms of its processes, communication styles, work dynamics, etc., deploy a solution that is customized to the needs of the organization and to the unique needs of different functions within it (sales, customer support, human resources, learning and development, etc.). It’s also important to use the right training data; some of it should come from the organization’s actual data to keep the model close to reality and to ensure that the model keeps adapting to incoming data.
Pilot the social AI solution for internal conversations, incorporate feedback, then scale to external applications. Pilot the solution with conversations and interactions within the organization (among the workforce) and build feedback loops from the workforce before scaling the solution to external interactions (with potential recruits, customers, etc.). While scaling, a transfer-learning approach may help. For example, a team developing a microaggression-detector algorithm would otherwise have to train the model on hours of audio inputs, which would be time-consuming and costly. Instead, the development team can use pretrained models (used elsewhere in the organization) or external open-source models and adapt them to their needs. When using an external open-source dataset, check that it is diverse enough to train your model well.
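As a sketch of this transfer-learning approach using the open-source Hugging Face transformers library (the base model, label count, and example inputs are our assumptions, and the classification head still needs fine-tuning on in-house data):

```python
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer)

# Start from a pretrained language model instead of training from scratch.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)  # e.g., neutral vs. microaggression

# Fine-tune on a (small) labeled in-house dataset rather than on hours
# of raw audio transcripts; here we just run a forward pass to show shape.
batch = tokenizer(["We all agreed in the meeting.",
                   "You're surprisingly articulate."],
                  padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch)
print(outputs.logits.shape)  # torch.Size([2, 2]): untrained head, needs fine-tuning
```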
Time is short—seize the opportunity. There is a confluence of cost and performance improvements in enabling technologies (such as cloud, network speeds, computer vision, and language recognition) that could make it opportune for organizations to implement social AI now.40 AI is a powerful tool in leaders’ arsenals. With it, they can drive efficiency by creating leaner and simpler organizations and enhance unique human capabilities for long-term organizational success. By driving greater trust and transparency in hybrid operations, AI can improve the quality of work, increase employee engagement, and reduce attrition. As such, organizations adopting a wait-and-watch approach may run the risk of losing competitive advantage in the current race for talent.