Tech companies are working feverishly to build generative AI features into everything from productivity suites to help desk solutions to software development tools.1 In a series of discussions this summer and a pulse survey this fall, leaders of large and midsize tech companies explained how they’re phasing generative AI into their workstreams and starting to reap the benefits.2 Their experiences and insights may prove instructive to any organization starting its generative AI journey.
Many pointed out that AI is not a new technology; it has been in use for years and is still an important investment area (see figure). Generative AI’s headline-grabbing arrival, however, caught the attention of everyone from C-level leadership down and is driving a new wave of experimentation and investment.
In our survey, we wanted to understand whether businesses draw a distinction between generative AI and AI more broadly. Respondents could choose up to three technologies they expect to enable the most growth in the coming year: more than half selected AI, 27% selected generative AI, and 16% selected both.
It is worth noting that respondents from larger companies (those with 10,000 or more employees or US$10 billion or more in revenue) selected generative AI at a higher rate than smaller companies did. This could be because of the level of investment or effort required for generative AI initiatives, or a greater capacity to scale efficiencies. Whether larger companies invest in and benefit from generative AI more than smaller ones is a factor to watch.
“[Generative AI] really changed the game and it was, hands-down, an edict by our CEO. He's driving it,” said one executive. “Our clients are pushing us, our internal teams are pushing us, and our CEO is pushing us,” said another.
Some respondents emphasized the need for centralized leadership and modernized data management systems. “The goal is to have a dedicated chief analytics officer or chief digital officer overseeing all teams,” one remarked. “This governance team’s [purpose] is to make sure that they're getting the use cases, understanding what problems could be solved by machine learning and AI, and then the most important thing is the data.”
Data cleanliness can help reduce hallucinations and inaccuracies, driving reliable results that improve as models learn. Privacy safeguards are also critical to implement at the earliest stages to help ensure that sensitive data is not exposed to unauthorized entities. In fact, some industries are using AI models to recognize personally identifiable information and add tags to large data sets, effectively cleaning the information and improving outputs.3
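The source does not describe how these PII-tagging models work internally, but the basic idea can be illustrated with a minimal sketch in which simple regular-expression matchers stand in for a trained recognition model (the patterns and tag names are illustrative assumptions, not any particular product's behavior):

```python
import re

# Illustrative regex patterns standing in for a trained PII-recognition model.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def tag_pii(text: str) -> str:
    """Replace recognized PII spans with category tags."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

record = "Contact Jane at jane.doe@example.com or 555-867-5309; SSN 123-45-6789."
print(tag_pii(record))
# → Contact Jane at [EMAIL] or [PHONE]; SSN [SSN].
```

A production system would use a trained named-entity model rather than regexes, but the pipeline shape is the same: scan records, tag sensitive spans, and pass only the tagged or redacted text downstream.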
“The biggest challenge—and I don’t think there’s a solution to this yet—is the privacy and security,” one leader stated. Loading client data into generative AI implementations may put the information at risk, and using public datasets could introduce biases into the results. “There’s an additional challenge around ethical issues and algorithmic biases because if your training data is skewed one way or another, you’ll get biases,” they added.
Avoid legal and ethical pitfalls
Generative AI can open companies up to questions around data protection, content usage rights, and ethical practices.
One executive who works with health information explained that data-sharing at their company must go through several layers of approvals, including legal, privacy, and IT. They remove identifying information from the data and build secure, HIPAA-compliant data pipelines whenever data is transferred to outside partners.
Another significant challenge raised by several interviewees is ensuring reliable and accurate answers from generative AI. One approach to addressing this is triangulation, where multiple, similar requests are made to the AI to evaluate differences and similarities among responses, resulting in a reliability index. This challenge is far from solved, but one respondent indicated that their organization uses reliability, bias, and ethics indices to score AI responses and adjust models and prompts to improve results.
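The respondents did not share implementation details, but one plausible way to compute such a reliability index is to send paraphrased versions of the same request and average the pairwise similarity of the responses. The sketch below uses token-overlap (Jaccard) similarity as the comparison metric; `ask_model` is a hypothetical wrapper around whatever LLM API is in use, and the choice of metric is an assumption:

```python
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two responses."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def reliability_index(responses: list[str]) -> float:
    """Average pairwise similarity across responses to similar prompts.

    Values near 1.0 suggest the model answers consistently; low values
    flag requests whose answers vary and deserve human review.
    """
    pairs = list(combinations(responses, 2))
    if not pairs:
        return 1.0
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Hypothetical usage: responses = [ask_model(p) for p in paraphrased_prompts]
responses = [
    "The invoice total is 420 dollars",
    "The invoice total is 420 dollars",
    "Total due: 97 dollars",
]
score = reliability_index(responses)  # a low score flags the inconsistent answer
```

Semantic-embedding similarity would catch paraphrased-but-equivalent answers that token overlap misses; the scoring framework is the same either way.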
Many executives surveyed pointed to lawsuits alleging that generative AI large language models have been trained using copyright-protected content. In response, some software companies have pledged to assume liability in case their generative AI tools expose customers to IP infringement allegations. This indemnity may be limited, however, and it doesn’t necessarily cover the continued use of assets found to be in violation of copyright.4
International regulations governing privacy, content use, and ethical practices are also high on the list of concerns regarding AI adoption. The executives we spoke to pointed out that the EU AI Act may go into effect as soon as the end of 2023,5 and that US regulators are working with technologists on the industry side to understand and set parameters for AI use, spurred by the Biden administration’s recent executive order on “Safe, Secure, and Trustworthy Artificial Intelligence.”6
The build, buy, or partner conundrum
This last point is a major consideration for companies that are debating whether to develop their own proprietary AI tools, utilize existing “off the shelf” solutions, or assemble a mix of cloud-based, on-premises, commercial, and proprietary components.
Companies may not be considering the long-term costs of generative AI as they experiment with proofs of concept and enjoy entry-level pricing, but those expenses will likely rise as implementations grow in complexity. One leader talked about the “tactical, functional” elements of compute and data infrastructure that would be needed. “Ultimately, how do we pay for the scale of data that's required for us to actually get really meaningful outcomes?” he wondered. “I think people underestimate, thinking, ‘Well, I saw this happen one time, therefore, I must go do it.’”
Another executive said their company aims to offer generative AI as a service, where their large language models or analytics frameworks are customized for particular use cases and hosted in an open-source cloud environment. Customers upload “clean” data, run their analyses and refine their models, and get their results without needing to invest in infrastructure and without putting sensitive information at risk.
Some leaders interviewed said their organizations are using “off the shelf” generative AI solutions for non-core tasks such as marketing and sales while focusing on proprietary generative AI implementations that will help drive competitive advantage.
One executive predicted that AI solutions will evolve into an ecosystem where large players provide foundational platforms and contextual models as commodities, while additional parties build capabilities and functions on top to cater to specific business needs. Customers, according to another leader, are more likely to trust a service when it’s built in partnership with a well-known and well-established tech giant’s AI platforms.
While it’s still early days for generative AI in the enterprise, the pace of investment and experimentation is accelerating. Committing to clean and accurate data, considering legal and ethical ramifications, and formulating a cost-effective deployment strategy can help companies avoid pitfalls and achieve their objectives.