
AI for work relationships may be a great untapped opportunity

AI can do more than make work better for humans—it can help make better humans for work

You’re frustrated. Two functional leaders are pulling you into a nasty turf war just when you need them to collaborate. You’re drafting a heated reply when a friend stops you. They suggest more diplomatic wording and recommend that you ask the two leaders to schedule a meeting to discuss their conflicting priorities and agree on a solution. You take the recommendation and cool off. You’d like to reach out and thank your friend and confidante, but you can’t, because they’re an AI. With the help of current artificial intelligence (AI) technologies, this—and many other social capabilities—may already be possible with the tools that many organizations have access to.

While 91% of business leaders surveyed in 2022 said they have an enterprisewide AI strategy, they are typically using AI in the workplace to generate insights, optimize processes, lower costs, and improve collaboration across businesses.1 Within the context of these applications, the potential for human-machine collaboration is well established.2 However, the potential of AI to improve human-to-human relationships among the workforce or with customers and potential recruits—what we call the social side of work—is often overlooked.

By analyzing interactions and communications and generating personalized, data-driven recommendations, AI can do much more than just promote email diplomacy. It could be a powerful tool for the workforce to nurture uniquely human capabilities: AI can help us prepare for key presentations, expand our professional networks, understand the personalities and feelings of customers, promote diversity and inclusion in everyday work, and even drive innovation and culture change across an organization. Of course, such capabilities come with adoption challenges, and skepticism about this kind of AI can run deep. But a careful, user-centric, opt-in/opt-out approach can help overcome resistance and gradually introduce employees to AI.

What social aspects of work can AI improve?

Beyond the tactical knowledge, expertise, and skills needed to do one’s job, there are enduring human capabilities that are universally applicable and harder to develop, such as emotional intelligence, teaming, and empathy.3 These capabilities enable workers to build meaningful relationships with customers, leaders, peers, and potential recruits. The value of these human-to-human relationships can be foundational and critical to organizational success.4

We surveyed 2,620 business leaders as part of Deloitte’s State of AI 2022 study. More than two-thirds of leaders said their organizations had either deployed or were developing AI applications for natural language processing (including sentiment detection and text summarization), computer vision, text chatbots, and voice agents; the rest (less than a third) were planning or exploring these technologies.5 Organizations are typically using these technologies to generate insights, optimize processes, lower costs, and improve collaboration across businesses. Beyond these applications, AI technologies can analyze human interactions during and after an event to generate personalized, confidential recommendations at the individual and organizational level, helping improve human interactions at work.

There are multiple AI applications for the social side of work (figure 1).

Simulations: Affective computing, also known as emotion AI, is a fast-evolving set of technologies that recognizes human emotions in response to a situation and makes recommendations accordingly. Its real-world applications span several areas of communication.6

For example, before a meeting or presentation, leaders can practice interactions with AI avatars representing team members.7 Based on the narrative, AI would generate possible arguments, assess persuasiveness, and give feedback to make communication more effective.8

AI simulations can also help when leaders are looking for input on early-stage thinking. For complex topics, leaders may first seek input from AI and then, to save time, review with peers and leaders at later stages when the thinking is more developed.

Upskilling at scale: Traditionally, coaching has largely been made available to select professionals in an organization—either high performers or individuals experiencing performance issues that require direct interaction and intervention. This leaves out much of the workforce. AI can enable learning experiences tailored to an individual’s emotional intelligence needs and drive those learning experiences at scale.9 In one example, a coaching network uses AI and machine-learning algorithms to match employees with coaches focused on different skill categories, such as inclusive leadership and persuasive communication.10

Networking: AI-enabled applications can connect professionals with other people who have similar interests and help them grow their professional networks within and outside of their organization. Users provide information about their professional background, industry or sector specialization, and areas of interest to a model that periodically generates matches, sends introductory emails, and sets up meetings. These interactions can drive virtual watercooler or coffee-bar conversations in the hybrid work environment. Along similar lines, AI-enabled platforms can also facilitate experience- and expertise-based networking outside the organization.11

Enriching customer interactions: Contact centers have been early adopters of automated voice systems to address higher call volumes amid labor shortages and tighter IT budgets.12 However, endless loops of automated responses can alienate customers, making this a much less popular and often-derided use of automation. AI can not only drive automation but also make each customer touchpoint meaningful, while reducing the need for 24/7 human involvement. By analyzing data from past conversations, AI can give contact center representatives the insights to prepare a baseline customer profile before an interaction, help them perform well during the interaction, and update the customer profile afterward to generate recommendations for future use.

First, getting to know customers before meeting them. Past customer interactions are a gold mine for deriving customer insights. AI tools can ingest basic customer data and previous conversations to create a profile based on the customer’s communication style, personal priorities, responses in previous conversations, and so on. Contact center representatives can review this profile before engaging with a customer and be better prepared to have a seamless conversation.13 AI can also identify the most appropriate service representative for a customer based on similarities in personality and communication style.14

In one example, Vodafone Italy combined customer profile data with a customized language-generation algorithm to develop personalized promotional messages for each customer segment (plan upgrades, the 5G launch, etc.). The effort increased customer subscriptions by 40% in 2020.15
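To make the profile-building step concrete, here is a minimal sketch that aggregates past transcripts into a baseline profile. It is illustrative only: the keyword lists, field names, and scoring are stand-ins for the trained models a real tool would use.

```python
from collections import Counter
from dataclasses import dataclass, field

# Toy signal lexicons -- stand-ins for a trained sentiment/style model.
POSITIVE = {"great", "thanks", "perfect", "happy"}
NEGATIVE = {"frustrated", "cancel", "complaint", "wrong"}

@dataclass
class CustomerProfile:
    customer_id: str
    avg_words_per_conversation: float = 0.0  # rough communication-style proxy
    sentiment_balance: float = 0.0           # >0 leans positive, <0 negative
    top_topics: list = field(default_factory=list)

def build_profile(customer_id: str, transcripts: list[str]) -> CustomerProfile:
    """Derive simple style and sentiment signals from past conversations."""
    words = [w.strip(".,!?").lower() for t in transcripts for w in t.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    long_words = [w for w in words if len(w) > 6]  # crude topic proxy
    return CustomerProfile(
        customer_id=customer_id,
        avg_words_per_conversation=len(words) / max(len(transcripts), 1),
        sentiment_balance=(pos - neg) / max(pos + neg, 1),
        top_topics=[t for t, _ in Counter(long_words).most_common(3)],
    )

print(build_profile("C-1042", [
    "Thanks, the plan upgrade was perfect.",
    "I'm frustrated that the invoice is still wrong.",
]))
```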

Second, engaging effectively during a customer interaction. While engaging with customers, virtual agents or chatbots can run a real-time sentiment analysis of the conversation and adjust their responses based on the results. For instance, if an interaction has a positive sentiment, the bot can make a cross-sell or upsell pitch; if it has a negative sentiment, the bot can quickly transfer the call to a contact center representative, with notes about the interaction that enable the representative to take it forward.16
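The routing logic described above can be sketched in a few lines. The scoring function below is a toy stand-in for a real sentiment model, and the threshold is an illustrative assumption:

```python
def score_sentiment(utterance: str) -> float:
    """Toy stand-in for a sentiment model; returns a score in [-1, 1]."""
    positive = {"great", "thanks", "love", "perfect"}
    negative = {"angry", "frustrated", "cancel", "unacceptable"}
    words = [w.strip(".,!?").lower() for w in utterance.split()]
    return (sum(w in positive for w in words)
            - sum(w in negative for w in words)) / max(len(words), 1)

def next_action(utterance: str, threshold: float = 0.0) -> dict:
    s = score_sentiment(utterance)
    if s > threshold:
        # Positive interaction: the bot can make a cross-sell or upsell pitch.
        return {"action": "offer_upgrade", "sentiment": s}
    # Negative or neutral: hand off to a human, with notes about the interaction.
    return {"action": "transfer_to_agent", "notes": utterance, "sentiment": s}

print(next_action("Thanks, that was great!"))
print(next_action("I'm frustrated and want to cancel."))
```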

Even when representatives are interacting with customers, AI programs can monitor the interaction in real time and provide suggestions to the representatives (through text prompts) on how to respond.17 Humana Pharmacy uses voice analytics in its call centers.18 Voice signals are analyzed to determine customer engagement and provide real-time feedback to contact center employees during calls, allowing them to adjust their approach accordingly.19

The conversational AI solution should be sophisticated enough to combine language semantics with voice tonality to correctly read the customer’s emotion. For instance, suppose a user says, in a stable, flat voice, “I’m really surprised that you still haven’t managed to provide a resolution.” The tone doesn’t show anger or frustration, but phrases such as “really surprised” or “haven’t managed,” spoken with longer-than-normal pauses, could indicate a negative emotion. The application should be able to pick up these nuances to generate effective advice for customer service representatives.
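A hedged sketch of this fusion idea follows: a text-only signal is combined with delivery cues (pause length, pitch variance). The phrase list, features, weights, and thresholds are all illustrative assumptions, not a production method:

```python
# Phrases that a text-only model might under-weight; illustrative assumptions.
LOADED_PHRASES = ("really surprised", "haven't managed", "still waiting")

def emotion_estimate(text: str, mean_pause_sec: float, pitch_variance: float) -> str:
    """Fuse a text signal with delivery cues (pauses, pitch flatness)."""
    text_signal = sum(phrase in text.lower() for phrase in LOADED_PHRASES)
    # Long pauses plus a flat pitch can mark suppressed frustration even when
    # the words alone would not be flagged as negative.
    delivery_signal = 1 if (mean_pause_sec > 1.0 and pitch_variance < 0.2) else 0
    score = 0.6 * text_signal + 0.4 * delivery_signal  # assumed weights
    return "likely negative" if score >= 0.6 else "neutral/positive"

print(emotion_estimate(
    "I'm really surprised that you still haven't managed to provide a resolution.",
    mean_pause_sec=1.4,   # longer-than-normal pauses
    pitch_variance=0.1,   # stable, flat voice
))
```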

As conversational AI continues to learn and improve over time, benefits can be significant. One study involving 445 businesses across industries using AI solutions for contact center service reported 2.2 times higher first-call resolution rates and 4.5 times greater service-level agreement (SLA) attainment rates, compared to non-AI users.20

Finally, deriving insights after a customer interaction for future use. AI applications can analyze interactions with customers to update customer profiles, help service professionals improve their pitches, and reassess the pairing between customer service representatives and customers based on similarities in personality and communication style for future interactions.

As the customer service use case shows, AI can automate processes that have traditionally been done by humans while preserving a “human touch,” freeing up time for humans to take on higher-quality work. The use case also illustrates that worker data can be used not only to draw meaningful insights but also to create a better work experience, making its use mutually beneficial for the organization and the workforce.

Advancing diversity: AI and data-based algorithms can provide visibility into whether the organization is truly diverse. By analyzing the profile of the workforce, AI can assess diversity (race, gender, ethnicity, etc.) and monitor it in real time across functions, career levels, and other criteria.
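As a minimal sketch of what such monitoring could compute, the snippet below derives representation shares by function and career level from workforce records. It assumes pandas; the column names and sample rows are invented for illustration:

```python
import pandas as pd  # pip install pandas

# Illustrative workforce records; column names and rows are assumptions.
workforce = pd.DataFrame({
    "function": ["Sales", "Sales", "Engineering", "Engineering", "Engineering"],
    "level":    ["Senior", "Junior", "Senior", "Junior", "Junior"],
    "gender":   ["F", "M", "M", "F", "M"],
})

# Share of each gender within every function/level slice; rerun on live data
# to monitor representation continuously.
representation = (
    workforce.groupby(["function", "level"])["gender"]
    .value_counts(normalize=True)
    .rename("share")
    .reset_index()
)
print(representation)
```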

AI can also help attract diverse new talent in many ways, including:

  • Blind hiring. One of the earliest demonstrations of blind hiring comes from orchestras: In the 1970s, women typically made up less than 5% of performers in US symphony orchestras. Gradually, orchestras tweaked their audition process by introducing “blind auditions”—adding partitions to shield the identity of those auditioning. The percentage of female musicians then increased from 5% in the 1970s to 25–40% by the early 2000s.21

In the workplace, AI can enable blind hiring by stripping away identifiable attributes that are typically unrelated to candidates’ skills, expertise, or experience. By removing attributes such as name, age, headshot, gender, race, or ethnicity from resumes before they reach hiring managers, AI can reduce human biases and help drive a more equitable recruitment process (a minimal sketch of this redaction step follows this list).

  • Soft-skill assessments. Some companies use evaluative AI screeners (with neuroscience-based games embedded) to better understand candidates’ hard-to-assess competencies, such as risk-taking, perseverance, and emotional intelligence, along with traditional traits such as logical reasoning and quantitative and verbal abilities.22

  • Interview panel design. After the initial screening, AI can also help hiring managers build diverse interview panels to minimize biases.

  • Diverse team building from internal talent. Through what is often called the “internal talent marketplace,” AI can match people’s skills against project needs to build effective teams while being intentional about bringing in diverse professionals from outside the core project team.23 In one example, IBM deployed its Opportunity Team Builder AI solution to identify the best candidates to join a sales team based on their social skills and to predict the impact each member would have on the team’s overall performance. As members are added, the tool continuously recalculates the remaining skill gaps until an optimal team is formed.24
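Here is the redaction sketch referenced in the blind-hiring item above. Real systems typically rely on trained named-entity recognition; the regex patterns and the field list here are simplifying assumptions:

```python
import re

# Regex stand-ins for trained named-entity recognition; illustrative only.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "DOB": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact(resume_text: str, candidate_name: str) -> str:
    """Strip attributes unrelated to skills before a hiring manager sees them."""
    text = resume_text.replace(candidate_name, "[CANDIDATE]")
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact(
    "Jane Doe, jane.doe@example.com, +1 555 123 4567, DOB 02/03/1990. "
    "Led a team of 12 data engineers.",
    candidate_name="Jane Doe",
))
```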

In some cases, project teams may be more amenable to work assigned by AI than to work assigned by their managers. Team members are likely to be more trusting of AI when they are looking for quick and unbiased information, logic-driven solutions, or confidential responses without fear of scrutiny or retaliation.25 By integrating AI into day-to-day workflows and allocations, managers can build trust with their team members.

Diversity without inclusion is insufficient. AI can enable the workforce to drive respectful conversations and inclusive workflows that are critical—especially in hybrid and remote work environments.

AI can drive inclusion and accessibility in meetings, including:

  • Microaggression coaching. AI can detect microaggressions by analyzing written or verbal communications, suggest alternatives, and provide feedback confidentially to the communicator to improve their sensitivity over time. When somebody’s tone becomes disrespectful, a sophisticated AI application wouldn’t scold or criticize them (a user might otherwise dismiss the coaching). Instead, the application would subtly suggest that their tone may have shifted toward the negative and nudge them to adjust it.26

  • Encouraging turn-taking. Based on simple voice detection, AI can identify individuals and groups that take over a conversation, leaving no space for others to contribute to the discussion.27 Such in-the-moment analysis (sketched in code after this list) is especially helpful in hybrid and virtual settings for ensuring that everyone can speak and contribute. Receiving such recommendations could be uncomfortable for many people, and they could simply reject them. It’s therefore imperative that organizations build trust in AI systems and help the workforce appreciate the role of AI in enhancing their emotional intelligence through fair and impartial feedback.

  • Improving accessibility. AI can remove language barriers and improve accessibility in meetings and discussions. Meeting notes can be transcribed and translated into multiple languages on the fly to improve participation from global teams. Content accessibility can be improved by providing lip-reading recognition for people with hearing impairments, facial or image recognition for people with visual impairments, and text summarization for professionals who aren’t comfortable digesting large bodies of text in one sitting.
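And here is the turn-taking sketch referenced in the list above: given diarized speech segments (speaker, start time, end time), it computes each participant’s share of airtime and flags likely dominators. The 40% threshold is an illustrative assumption:

```python
from collections import defaultdict

def airtime_shares(segments: list[tuple[str, float, float]]) -> dict[str, float]:
    """segments: diarized (speaker, start_sec, end_sec) tuples."""
    totals: dict[str, float] = defaultdict(float)
    for speaker, start, end in segments:
        totals[speaker] += end - start
    grand_total = sum(totals.values()) or 1.0
    return {speaker: t / grand_total for speaker, t in totals.items()}

def dominating(segments, threshold: float = 0.40) -> list[str]:
    """Flag speakers above an (assumed) 40% share of total airtime."""
    return [s for s, share in airtime_shares(segments).items() if share > threshold]

meeting = [("Ana", 0, 120), ("Ben", 120, 150), ("Ana", 150, 300), ("Chi", 300, 330)]
print(airtime_shares(meeting))  # Ana holds ~82% of the airtime
print(dominating(meeting))      # ['Ana'] -> candidate for a turn-taking nudge
```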

Culture change and innovation: Leaders typically use the formal hierarchy and top-down communication to disseminate culture and values within the organization, yet they often struggle to gain acceptance through these channels. Occupying a box higher up the organizational chart doesn’t automatically make someone more influential than those below it: Influencers can sit anywhere on the chart, but we tend to prioritize hierarchy over influence. In reality, workforce behaviors and culture change happen in the organizational network (figure 2).28

By using technologies such as text mining and natural language processing, organizations can systematically analyze who is connected to whom and the nature of their interactions and relationships, and identify informal influencers within the organization. Data-driven analysis of responses to surveys, focus group discussions, interviews, and the like can highlight the reasons for workforce hesitancy toward proposed changes and the degree of resistance. It can determine who is “on the fence” versus opposed to change. When leaders understand the reasons for and degree of hesitance, they’re better equipped to formulate actions to address it and to drive acceptance and change management with the help of influencers.29
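As a rough sketch of how such an analysis might work, the snippet below builds an interaction graph and surfaces potential informal influencers via betweenness centrality, using the open-source networkx library. The names and interaction counts are invented for illustration:

```python
import networkx as nx  # pip install networkx

# Invented interaction data: (employee_a, employee_b, weekly_message_count).
interactions = [
    ("Priya", "Dan", 25), ("Priya", "Mei", 18), ("Priya", "Omar", 22),
    ("Dan", "Mei", 4), ("Omar", "Lena", 15), ("Lena", "Sam", 12),
]

G = nx.Graph()
G.add_weighted_edges_from(interactions)

# Betweenness centrality highlights brokers who bridge otherwise separate
# groups -- often the informal influencers culture change depends on.
# (A production analysis would convert interaction frequency into a distance
# before using edge weights; here the graph is treated as unweighted.)
influence = nx.betweenness_centrality(G)
for person, score in sorted(influence.items(), key=lambda kv: -kv[1]):
    print(f"{person}: {score:.2f}")
```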

Informal influencers can also help drive innovation within the organization, as they can mobilize individuals and groups to facilitate the flow of ideas and information. By analyzing the connections between employees, General Motors identified “influencers” from different teams and functions who could champion innovative ideas for product design and customer service.30 The company then created an environment to develop those ideas, onboarding additional people interested in building the solutions and driving wider adoption across the organization.

Challenges confronting the social side of AI and potential solutions

Applications of social AI will likely face many of the same challenges as other AI applications—concerns about the lack of explainability in AI decisions and risks associated with data privacy, trust, and reliability.31

We discuss below some of the key elements that organizations should consider integrating when developing and implementing social AI solutions.32 These elements can address some of the challenges and can help create better work for humans and better humans for work.

The training dataset for a social AI algorithm should be chosen to fairly represent the population and to mitigate biases resulting from human inputs. It’s also important to ensure that recommendations (for improvements in communication, workflows, etc.) are not influenced or biased by career level (for example, an assumption that junior professionals need more training on inclusive communication). Further, the application should not only offer corrective advice on communications and interactions but also convey appreciation when employees adapt their behavior to the recommendations and improve the quality of their interactions.

In recent years, there has been much discussion about whether AI should be held to machine or human standards—both ethically and legally.33 And, who should be held responsible and accountable for a decision: AI or the person who created or deployed it?

It’s important to establish responsibility and accountability in conversations and interactions between AI and human users. In doing so, social AI can’t be considered in isolation: It is part of an organization’s overall ethics policy, and human users remain integral throughout the AI loop. Consider an example in which a social AI application recommends a language choice to an employee, or a team composition to a project manager. Here, the responsibility for generating the most appropriate recommendation lies with the developer teams, while the accountability for acting on that recommendation rests with the user, that is, the human workforce. It’s imperative that this understanding of responsibility and accountability be documented and communicated to developers as well as end users.

During one of our research interviews, an AI specialist who focuses on AI/machine learning product management said, “The topmost challenge is privacy … users freak out when they learn that their data is being collected … they feel, ‘I am being monitored, and my behavior will be distributed where I don’t have control.’”34

One way to alleviate privacy concerns is to ensure that user data isn’t used for evaluative purposes. In other words, don’t use AI to rate your workforce’s emotional intelligence for performance reviews. The application should also seek permission to use workforce data for each specific purpose (analyzing team conversations, sales pitches, customer support calls, etc.); there shouldn’t be blanket consent from the workforce covering the deployment of multiple social technologies.
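A minimal sketch of purpose-scoped consent follows. The purpose names and storage shape are assumptions; the point is that every processing step checks an explicit, revocable opt-in for that specific purpose:

```python
from enum import Enum

class Purpose(Enum):
    TEAM_CONVERSATIONS = "analyze_team_conversations"
    SALES_PITCHES = "analyze_sales_pitches"
    SUPPORT_CALLS = "analyze_customer_support_calls"

consents: dict[str, set[Purpose]] = {}  # user_id -> purposes opted into

def grant(user_id: str, purpose: Purpose) -> None:
    consents.setdefault(user_id, set()).add(purpose)

def revoke(user_id: str, purpose: Purpose) -> None:
    consents.get(user_id, set()).discard(purpose)  # opt-out at any time

def may_process(user_id: str, purpose: Purpose) -> bool:
    """Every processing step checks the specific purpose; no blanket consent."""
    return purpose in consents.get(user_id, set())

grant("u42", Purpose.TEAM_CONVERSATIONS)
print(may_process("u42", Purpose.TEAM_CONVERSATIONS))  # True
print(may_process("u42", Purpose.SALES_PITCHES))       # False
```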

Depending on the AI’s purpose, there may or may not be a need to store the data. In a simple example of turn-taking and analyzing airtime in a multiperson conversation, the data is useful in the moment to allow everyone to contribute to the discussion, and it can be deleted after the conversation. In other applications, such as improving contact center conversations, data may need to be stored for future training and improvement purposes.

Conversational AI should replicate the trust and discretion that are integral to human-to-human conversations. When we share information with other individuals, there is an unspoken understanding that the listener will exercise discretion in passing that information along. Likewise, as social AI systems interact with other human users (say, peers or customers) on behalf of the workforce, they must share only what the user is comfortable sharing with other parties. For instance, an AI database may hold a user’s full date of birth—but when another human user or AI bot requests this information, the system exercises discretion and shares only the day and month, not the year, thus moving the conversation forward while keeping the data safe.
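The date-of-birth example above can be expressed as a small disclosure-policy check. The policy format is an illustrative assumption:

```python
from datetime import date

PROFILE = {"user": "u42", "dob": date(1990, 3, 2)}
DISCLOSURE_POLICY = {"dob": {"day", "month"}}  # year withheld by user choice

def share_dob(profile: dict, policy: dict) -> dict:
    """Disclose only the parts of the date of birth the user marked shareable."""
    allowed = policy.get("dob", set())
    dob = profile["dob"]
    parts = {"day": dob.day, "month": dob.month, "year": dob.year}
    return {k: v for k, v in parts.items() if k in allowed}

print(share_dob(PROFILE, DISCLOSURE_POLICY))  # {'day': 2, 'month': 3}
```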

The European Union Agency for Cybersecurity (ENISA), the Federal Trade Commission (FTC) in the United States, and other organizations globally, have outlined cybersecurity frameworks to assess the exposure level of an AI model to cyberthreats.35 Organizations should test their social AI models against these security frameworks periodically to check for vulnerabilities to existing and emerging threats and deploy appropriate security controls.

When building the social AI training data, developers can deliberately ingest examples of harmful content into the training dataset, for example, malicious content that tries to access and edit a user’s data or the complete dataset. Doing so helps train the algorithms to recognize behaviors that deviate from normal user patterns and to restrict further user activity, up to and including denial of service as required.36

The workforce should be able to see how their data feeds into the social AI algorithm, how the algorithm makes decisions, and how it would benefit them. The algorithm should be open to inspection and corrections as required. For example, if AI recommends that someone modify their tone, it should also provide a decision tree explaining why something is appropriate or not based on organizational guidance on language nuances.
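One way to picture such an inspectable recommendation is sketched below: each tone suggestion carries the rule path that produced it, so users can review and contest the decision. The rules are placeholders standing in for an organization’s actual language guidance:

```python
# Each rule: (human-readable name, test, suggested fix). Placeholders for
# an organization's actual language guidance.
RULES = [
    ("contains imperative 'must'",
     lambda t: "must" in t.lower(),
     "soften to 'could you'"),
    ("all-caps emphasis",
     lambda t: any(w.isupper() and len(w) > 2 for w in t.split()),
     "use normal case"),
]

def recommend(text: str) -> dict:
    fired = [(name, fix) for name, test, fix in RULES if test(text)]
    return {
        "input": text,
        "suggestions": [fix for _, fix in fired] or ["no change"],
        "decision_path": [name for name, _ in fired],  # open to inspection
    }

print(recommend("You MUST send this TODAY."))
```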

IBM provides factsheets for each AI model containing information about the model’s creation and deployment throughout its life cycle. End users can review the data captured and how it moves through the AI life cycle to understand the model’s decision-making process.37 Consumers trust food nutrition labels because they enable them to decide whether to purchase and consume an item; social AI factsheets may drive transparency and trust with the workforce in the same way.

When social AI systems can learn from users and from each other, they can produce reliable results and, over time, build trust with users. Human intervention may still be required to ensure the model is and stays robust. Teams need to identify the right people to provide that human input: Have they been trained on company guidelines and policies, and are they equipped to take on this responsibility? It’s also important to provide periodic refresher training on bias mitigation and ethics for those involved, to keep the solution robust over time.

Getting started

In Deloitte’s survey of business leaders conducted in 2022, 76% said they plan to increase or significantly increase their organizational spending on AI in the next year.38 In addition to the established uses of AI in the workplace for making internal processes more efficient and generating data insights, leaders have the untapped opportunity to leverage AI to enhance the social side of work. Here are some actions to consider to get started.

Define social AI use cases and establish value metrics. Define what constitutes a social AI use or interaction so that you know what metrics to set and how to measure them. Identify the value to capture for each social AI application (an increase in contact center resolution rates, higher employee engagement, improved acceptance of new processes, etc.). Measure value in terms of both breadth and depth. Breadth can be assessed by looking at how far-reaching the impact of the social AI solution is: Is it confined to select functions, or does it span the organization? Does the impact stay within the organization, or does it extend to external stakeholders such as customers and potential recruits? Depth can be assessed by looking at whether the social AI application simply improves existing processes or establishes new trustworthy processes, thereby reinventing work practices.

Make the workforce comfortable with social AI. It is a huge shift for the workforce to trust a machine socially—people have to get comfortable holding a mirror up to their development areas. Leaders and managers have the responsibility to bring the workforce on board with the idea that the use of their data is mutually beneficial for them and the organization. That often starts with letting the workforce know how their data will be used, giving them a “trial period” to evaluate the application, and offering the ability to opt in or out at any point. Also, professionals tend to prefer taking “recommendations” from AI—not instructions. As such, the social AI user interface should make clear that the application plays the role of a coach or buddy, not that of a gatekeeper or enforcer.

Identify how the workforce would like to engage with social AI, considering cross-cultural differences. Begin by identifying workforce needs for teaming, relationship-building, networking, and the like, and assess where AI solutions can address current problems or uncover value-creation opportunities. There may be cross-cultural differences in social AI adoption across a globally dispersed workforce: In one survey of 1,015 respondents from 48 countries, respondents from East Asia were more likely to have a trusting attitude toward emotion AI than respondents from Western countries. Such differences could require leaders to develop location-specific strategies for their global teams.39

Build a custom solution suited to your organization’s social nuances. When implementing a solution, work closely, as a partner, with the AI solution provider. Since every organization differs in its processes, communication styles, and work dynamics, deploy a solution customized to the needs of the organization and of its different functions (sales, customer support, human resources, learning and development, etc.). It’s also important to have the right training dataset: Some of the training data should come from the organization’s actual data to keep the model close to reality and to ensure the model keeps adapting to incoming data.

Pilot the social AI solution for internal conversations, incorporate feedback, then scale to external applications. Pilot the solution on conversations and interactions within the organization (among the workforce) and build feedback loops from the workforce before scaling it to external interactions (with potential recruits, customers, etc.). While scaling the solution, a transfer-learning approach may be helpful. For example, a team developing a microaggression-detection algorithm would otherwise have to train the model on hours of audio inputs, which is time- and cost-intensive. Instead, the development team can take pretrained models (used elsewhere in the organization) or external open-source models and adapt them to its needs. When using an external open-source dataset, check that it is diverse enough to train the model well.
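As a hedged sketch of that transfer-learning approach, the snippet below adapts a public pretrained checkpoint to a handful of in-house examples using the Hugging Face transformers library, rather than training from scratch. The checkpoint choice, labels, and example texts are placeholders:

```python
# pip install torch transformers
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "distilbert-base-uncased"  # public pretrained backbone
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# A few labeled in-house examples (placeholders) to adapt the model.
texts = ["That comment came across as dismissive.", "Great point, thank you."]
labels = torch.tensor([1, 0])  # 1 = flag for coaching, 0 = fine

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
loss = model(**batch, labels=labels).loss
loss.backward()  # one adaptation step; a real run would loop with an optimizer
print(float(loss))
```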

Time is short—seize the opportunity. There is a confluence of cost and performance improvements in enabling technologies (such as cloud, network speeds, computer vision, and language recognition) that could make it opportune for organizations to implement social AI now.40 AI is a powerful tool in leaders’ arsenals. With it, they can drive efficiency by creating leaner and simpler organizations and enhance unique human capabilities for long-term organizational success. By driving greater trust and transparency in hybrid operations, AI can improve the quality of work, increase employee engagement, and reduce attrition. As such, organizations adopting a wait-and-watch approach may run the risk of losing competitive advantage in the current race for talent.


  1. Deloitte, The State of AI 5th edition, October 2022.
  2. Jim Guszcza and Jeff Schwartz, Superminds, not substitutes, Deloitte Insights, July 31, 2020; Jeff Schwartz et al., Superteams: Putting AI in the group, Deloitte Insights, May 15, 2020.
  3. John Hagel III, John Seely Brown, and Maggie Wooll, Skills change, but capabilities endure, Deloitte Insights, August 30, 2019.
  4. Tina Hovsepian, “Business and people: Why relationships are essential for a successful business,” Forbes, July 20, 2018; Indeed, “What are human relations in the workplace? (With steps for practicing them),” August 16, 2022.
  5. Deloitte, The State of AI 5th edition, October 2022.
  6. Tamara Cibenko, Amelia Dunlop, and Nelson Kunkel, Human experience platforms, Deloitte Insights, January 15, 2020; Meredith Somers, “Emotion AI, explained,” MIT Sloan, March 8, 2019.
  7. Alelo, “Home,” accessed September 23, 2022.
  8. Ibid.
  9. Jun Wu, “How AI can help companies thrive in post-pandemic uncertainty,” Forbes, March 1, 2020; Sarah Fister Gale, “AI brings coaching to the masses,” Reworked, September 8, 2021.
  10. Business Wire, “BetterUp unveils new features to its leadership development platform to drive improved business outcomes and employee transformation,” October 1, 2019.
  11. Shephali Bhat, “Professional networking in times of Covid-19,” Economic Times, September 9, 2020.
  12. Deloitte AI Institute, “The AI Dossier,” accessed September 23, 2022.
  13. Daniel Limon and Bryan Plaster, “Can AI teach us how to become more emotionally intelligent?,” Harvard Business Review, January 25, 2022.
  14. Limon and Plaster, “Can AI teach us how to become more emotionally intelligent?”; Gong.io, “Home,” accessed September 23, 2022.
  15. Bloomberg, “Vodafone Italy drives new accounts with personalized, AI-powered messaging,” June 29, 2020; Jason Heller and Vipul Vyas, “How AI is helping companies make deeper human connections,” Harvard Business Review, November 11, 2020.
  16. Aberdeen, The ROI of intelligent virtual assistants in customer experience programs, accessed September 23, 2022.
  17. Cresta, “Home,” accessed September 23, 2022; Limon and Plaster, “Can AI teach us how to become more emotionally intelligent?”
  18. Cogito, “How Humana Pharmacy leverages AI to enhance member experience,” accessed September 23, 2022.
  19. Ibid.
  20. Michael Parker, “Why contact center AI is a big deal for customer-focused brands,” Cresta, August 11, 2021; Aberdeen, The ROI of intelligent virtual assistants in customer experience programs.
  21. Claudia Goldin and Cecilia Rouse, “Orchestrating impartiality: The impact of ‘blind’ auditions on female musicians,” NBER, January 1997.
  22. Monika Mahto et al., A rising tide lifts all boats, Deloitte Insights, January 18, 2022; Bernard Marr, “Artificial intelligence in the workplace: How AI is transforming your employee experience,” Forbes, May 29, 2019.
  23. Ina Gantcheva et al., Activating the internal talent marketplace, Deloitte Insights, September 18, 2020.
  24. Oznur Alkan, Elizabeth Daly, and Inge Vejsbjerg, “Opportunity team builder for sales teams,” Proceedings of the 2018 International Conference on Intelligent User Interfaces, ACM, March 2018.
  25. Michael Schneider, “64 percent of employees trust AI over managers because robots give unbiased information,” Inc., accessed September 23, 2022; Oracle, “New study: 64% of people trust a robot more than their manager,” press release, October 15, 2019.
  26. Based on an interview with an AI specialist focusing on human-machine interaction, conducted on April 15, 2022.
  27. Ibid.
  28. Siri Anderson, Making the invisible visible, Deloitte Insights, February 27, 2019.
  29. Elizabeth J. Altman, David Kiron, Robin Jones, and Jeff Schwartz, “Orchestrating workforce ecosystems: Strategically managing work across and beyond organizational boundaries,” MIT Sloan Management Review, May 17, 2022.
  30. Michael Arena et al., “How to catalyze innovation in your organization,” MIT Sloan Management Review (2017); David Green, “The role of organisational network analysis in people analytics,” LinkedIn, May 23, 2018.
  31. Deloitte Insights, Becoming an AI-fueled organization—State of AI in the enterprise, 4th edition, accessed September 23, 2022.
  32. Deloitte, “Deloitte introduces trustworthy AI framework to guide organizations in ethical application of technology in the Age of With,” press release, accessed September 23, 2022.
  33. Jeremy Elman and Abel Castilla, “Artificial intelligence and the law,” TechCrunch, January 28, 2017; Council of Europe, Responsibility and AI, accessed September 23, 2022.
  34. Based on an interview conducted on April 15, 2022.
  35. ENISA, Securing machine learning algorithms, December 14, 2021; Alex Engler, “The EU and US are starting to align on AI regulation,” Brookings, February 1, 2022.
  36. Andrew Marshall et al., “Securing the future of artificial intelligence and machine learning at Microsoft,” Microsoft Ignite, accessed September 23, 2022.
  37. Pearl Chen, “IBM’s open source strategy champions AI trust and transparency,” IBM, September 10, 2020.
  38. Deloitte, The State of AI 5th edition, October 2022.
  39. Peter Mantello, “Bosses without a heart: Socio-demographic and cross-cultural determinants of attitude toward Emotional AI in the workplace,” AI & Society (2021).
  40. Deloitte Insights, The Internet of Things: A technical primer, February 8, 2018.