Machine learning is a powerful and versatile technology, but a number of factors are restraining adoption in both the private sector and government. Five vectors of progress in machine learning can change that.
Artificial intelligence (AI), and machine learning in particular, holds tremendous potential for governments: it can help discover patterns and anomalies and make predictions. Five vectors of progress can make it easier, faster, and cheaper to deploy machine learning and bring the technology into the public sector mainstream. As the barriers continue to fall, chief data officers (CDOs) have growing opportunities to begin exploring applications of this transformative technology.
Machine learning is one of the most powerful and versatile information technologies available today.1 But most organizations, even in the private sector, have yet to tap its potential. One recent survey of 3,100 executives from small, medium, and large companies across 17 countries found that fewer than 10 percent of companies were investing in machine learning.2
A number of factors are restraining the adoption of machine learning in government and the private sector. Qualified practitioners are in short supply.3 Tools and frameworks for doing machine learning work are still evolving.4 It can be difficult, time-consuming, and costly to obtain large datasets that some machine learning model-development techniques require.5
Then there is the black box problem. Even when machine learning models can generate valuable information, many government executives seem reluctant to deploy them in production. Why? In part, possibly because the inner workings of machine learning models are inscrutable, and some people are uncomfortable with the idea of running their operations or making policy decisions based on logic they don’t understand and can’t clearly describe.6 Other government officials may be constrained by an inability to prove that decisions do not discriminate against protected classes of people.7 Wider use of AI in government will likely depend on satisfying these requirements, and that in turn requires making the black boxes more transparent.
There are five vectors of progress in machine learning that could help foster greater adoption of the technology in government (see figure 1). Three of these vectors (automation, data reduction, and training acceleration) make machine learning easier, cheaper, and faster. The other two (model interpretability and local machine learning) can open up applications in new areas.
Developing machine learning solutions requires skills primarily from the discipline of data science, an often-misunderstood field. Data science can be considered a mix of art and science—and digital grunt work. Almost 80 percent of the work that data scientists spend their time on can be fully or partially automated, giving them time to spend on higher-value issues.8 This includes data wrangling—preprocessing and normalizing data, filling in missing values, or determining whether to interpret the data in a column as a number or a date; exploratory data analysis—seeking to understand the broad characteristics of the data to help formulate hypotheses about it; feature engineering and selection—selecting the variables in the data that are most likely correlated with what the model is supposed to predict; and algorithm selection and evaluation—testing potentially thousands of algorithms to assess which ones produce the most accurate results.
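The last of these tasks, algorithm selection and evaluation, can be sketched in a few lines: cross-validate several candidate models and keep the most accurate one. This is a minimal illustration using scikit-learn and synthetic data; the candidate models are illustrative choices, and commercial automation platforms search far larger spaces.

```python
# A minimal sketch of automated algorithm selection and evaluation:
# cross-validate several candidate models and keep the most accurate one.
# (Synthetic data; the candidate list is an illustrative choice.)
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(random_state=0),
}

# Mean 5-fold cross-validated accuracy for each candidate
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(f"best model: {best} (accuracy {scores[best]:.3f})")
```

In practice, automation platforms apply this same compare-and-select loop across thousands of algorithm and hyperparameter combinations rather than a handful.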
Automating these tasks can make data scientists in government more productive and more effective. For instance, while building customer lifetime value models for guests and hosts, data scientists at Airbnb used an automation platform to test multiple algorithms and design approaches, which they would not likely have otherwise had the time to do. This enabled Airbnb to discover changes it could make to its algorithm that increased the algorithm’s accuracy by more than 5 percent, resulting in the ability to improve decision-making and interactions with the Airbnb community at very granular levels.9
A growing number of tools and techniques for data science automation, some offered by established companies and others by venture-backed startups, can help reduce the time required to execute a machine learning proof of concept from months to days.10 And automating data science can mean augmenting data scientists’ productivity, especially given frequent talent shortages. As the example above illustrates, agencies can use data science automation technologies to expand their machine learning activities.
Developing machine learning models typically requires millions of data elements, and acquiring and labeling that data can be time-consuming and costly, making it a major barrier. For example, a medical diagnosis project that needs MRI images labeled with a diagnosis requires a large number of labeled images to create predictive algorithms. At six images an hour, it can cost more than $30,000 to hire a radiologist to review and label 1,000 images, roughly 170 hours of expert time. Additionally, privacy and confidentiality concerns, particularly for protected data types, can make working with data more time-consuming or difficult.
A number of potentially promising techniques for reducing the amount of training data required for machine learning are emerging. One involves the use of synthetic data, generated algorithmically to mimic the characteristics of real data.11 This technique has shown promising results.
A Deloitte LLP team tested a tool that made it possible to build an accurate machine learning model with only 20 percent of the training data previously required by synthesizing the remaining 80 percent. The model’s task was to analyze job titles and job descriptions—which are often highly inconsistent in large organizations, especially those that have grown by acquisition—and then categorize them into a more consistent, standard set of job classifications. To learn how to do this, the model needed to be trained through exposure to a few thousand accurately classified examples. Instead of requiring analysts to laboriously classify (“label”) these thousands of examples by hand, the tool made it possible to take a set of labeled data just 20 percent as large and automatically generate a fuller training dataset. And the resulting dataset, composed of 80 percent synthetic data, trained the model just as effectively as a hand-labeled real dataset would have.
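The mechanics can be illustrated with a deliberately simple approach: fit per-class feature statistics on a small labeled seed set and sample new rows from them. This sketch is illustrative only; it is not the tool described above, and real synthetic-data generators model the joint distribution of the data far more faithfully.

```python
# Illustrative sketch only: synthesize extra labeled rows by fitting
# per-class feature statistics on a small real seed set and sampling
# from them. (Not the tool described in the text.)
import numpy as np

rng = np.random.default_rng(0)

def synthesize(X, y, n_per_class):
    """Generate synthetic rows that mimic per-class feature statistics."""
    X_parts, y_parts = [], []
    for label in np.unique(y):
        real = X[y == label]
        mu, sigma = real.mean(axis=0), real.std(axis=0) + 1e-9
        X_parts.append(rng.normal(mu, sigma, size=(n_per_class, X.shape[1])))
        y_parts.append(np.full(n_per_class, label))
    return np.vstack(X_parts), np.concatenate(y_parts)

# A small, hand-labeled seed set (20 rows per class, two features)
X_real = np.vstack([rng.normal(0.0, 1.0, size=(20, 2)),
                    rng.normal(3.0, 1.0, size=(20, 2))])
y_real = np.array([0] * 20 + [1] * 20)

# Expand it fourfold with synthetic rows (80 per class)
X_syn, y_syn = synthesize(X_real, y_real, n_per_class=80)
```

The labeled seed set here plays the role of the 20 percent of hand-labeled data in the example above, with the synthesized rows filling out the rest of the training set.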
Synthetic data can not only make it easier to get training data, but also make it easier for organizations to tap into outside data science talent. A number of organizations have successfully engaged third parties or used crowdsourcing to devise machine learning models, posting their datasets online for outside data scientists to work with.12 This can be difficult, however, if the datasets are proprietary. To address this challenge, researchers at MIT created a synthetic dataset that they then shared with an extensive data science community. Data scientists within the community built machine learning models using the synthetic data. In 11 out of 15 tests, the models developed from the synthetic data performed as well as those trained on real data.13
Another technique that could reduce the need for extensive training data is transfer learning. With this approach, a machine learning model is first trained on one dataset and then adapted to a new dataset in a similar domain, such as language translation or image recognition, so it need not learn the new task from scratch. Some vendors offering machine learning tools claim their use of transfer learning has the potential to cut the number of training examples that customers need to provide by several orders of magnitude.14
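The pattern can be sketched as follows: a feature extractor standing in for a model pretrained on a large source dataset is kept frozen, and only a small new classifier head is fit on the target task. Everything here is a hypothetical stand-in; in practice the frozen layers would come from a real pretrained network rather than random weights.

```python
# Hedged sketch of transfer learning: reuse a frozen "pretrained" feature
# extractor and fit only a small new classifier head on the target task.
# (W_pretrained is a random stand-in; in practice it would come from a
# model trained on a large source dataset.)
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

W_pretrained = rng.normal(size=(100, 16))  # maps raw inputs -> features

def extract_features(X):
    """Frozen feature extractor: never retrained on the target task."""
    return np.tanh(X @ W_pretrained)

# A small labeled target dataset: far fewer examples than training a
# full model from scratch would need
X_target = rng.normal(size=(60, 100))
y_target = (X_target[:, 0] > 0).astype(int)

head = LogisticRegression(max_iter=1000)
head.fit(extract_features(X_target), y_target)
acc = head.score(extract_features(X_target), y_target)
```

Because only the small head is trained, far fewer labeled target examples are needed than training the whole model from scratch would require, which is the source of the data savings vendors cite.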
Because of the large volumes of data and complex algorithms involved, the computational process of training a machine learning model can take a long time: hours, days, even weeks.15 Only then can the model be tested and refined. Now, some semiconductor and computer manufacturers—both established companies and startups—are developing specialized processors such as graphics processing units (GPUs), field-programmable gate arrays, and application-specific integrated circuits to slash the time required to train machine learning models by accelerating the calculations and by speeding up the transfer of data within the chip.
These dedicated processors can help organizations significantly speed up machine learning training and execution, which in turn could bring down the associated costs. For instance, a Microsoft research team, using GPUs, completed a system that could recognize conversational speech as capably as humans in just one year. Had the team used only CPUs, according to one of the researchers, the same task would have taken five years.16 Google has stated that its own AI chip, the Tensor Processing Unit (TPU), when incorporated into a computing system that also includes CPUs and GPUs, provided such a performance boost that it helped the company avoid the cost of building a dozen extra data centers.17 The possibility of reducing the cost and time involved in machine learning training could have big implications for government agencies, many of which have a limited number of data scientists.
Early adopters of these specialized AI chips include some major technology vendors and research institutions in data science and machine learning, but adoption also seems to be spreading to sectors such as retail, financial services, and telecom. With every major cloud provider—including IBM, Microsoft, Google, and Amazon Web Services—offering GPU cloud computing, accelerated training will likely soon become available to public sector data science teams, making it possible for them to be fast followers. This would increase these teams’ productivity and allow them to multiply the number of machine learning applications they undertake.18
Machine learning models often suffer from the black-box problem: It is impossible to explain with confidence how they make their decisions. This can make them unsuitable or unpalatable for many applications. Physicians and business leaders, for instance, may not accept a medical diagnosis or investment decision without a credible explanation for the decision. In some cases, regulations mandate such explanations.
Techniques are emerging that can help shine light inside the black boxes of certain machine learning models, making them more interpretable and accurate. MIT researchers, for instance, have demonstrated a method of training a neural network that delivers both accurate predictions and rationales for those predictions.19 Some of these techniques are already appearing in commercial data science products.20
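One widely used, model-agnostic way to peek inside a trained model is permutation importance: measure how much a model's accuracy drops when each input feature is shuffled. The sketch below (using scikit-learn and synthetic data) illustrates this general technique; it is not the MIT method cited above.

```python
# A sketch of one widely used, model-agnostic interpretability technique,
# permutation importance: measure how much a model's accuracy drops when
# each input feature is shuffled. (Synthetic data for illustration.)
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=300, n_features=5, n_informative=2,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for idx, score in sorted(enumerate(result.importances_mean),
                         key=lambda kv: kv[1], reverse=True):
    print(f"feature {idx}: importance {score:.3f}")
```

A ranking like this gives decision-makers a plain-language account of which inputs drove a model's predictions, one ingredient of the explanations that regulations and skeptical stakeholders demand.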
As it becomes possible to build interpretable machine learning models, government agencies could find attractive opportunities to use machine learning. Some of the potential application areas include child welfare, fraud detection, and disease diagnosis and treatment.21
The emergence of mobile devices as a machine learning platform is expanding the number of potential applications of the technology and inducing organizations to develop applications in areas such as smart homes and cities, autonomous vehicles, wearable technology, and the industrial Internet of Things.
The adoption of machine learning will grow along with the ability to deploy the technology where it can improve efficiency and outcomes. Advances in both software and hardware are making it increasingly viable to use the technology on mobile devices and smart sensors.22 On the software side, several technology vendors are creating compact machine learning models that often require relatively little memory but can still handle tasks such as image recognition and language translation on mobile devices.23 Microsoft Research Lab’s compression efforts resulted in models that were 10 to 100 times smaller than earlier models.24 On the hardware end, various semiconductor vendors have developed or are developing their own power-efficient AI chips to bring machine learning to mobile devices.25
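One common compression technique behind such size reductions, post-training quantization, can be sketched in a few lines: store 32-bit floating-point weights as 8-bit integers plus a scale factor, shrinking the model roughly fourfold at a small cost in precision. This is a simplified illustration; production toolchains use per-channel scales, calibration data, and other refinements.

```python
# Simplified sketch of post-training quantization: store 32-bit float
# weights as 8-bit integers plus one scale factor, shrinking the model
# roughly fourfold at a small cost in precision.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.1, size=10_000).astype(np.float32)

scale = np.abs(weights).max() / 127.0
quantized = np.round(weights / scale).astype(np.int8)  # 1 byte per weight
restored = quantized.astype(np.float32) * scale        # used at inference

compression = weights.nbytes / quantized.nbytes        # 4x smaller
max_error = float(np.abs(weights - restored).max())    # small vs. weights
```

Combined with techniques such as pruning and distillation, this kind of weight compression is part of how vendors fit image-recognition and translation models into the memory budgets of phones and sensors.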
Collectively, the five vectors of machine learning progress can help reduce the challenges government agencies may face in investing in machine learning. They can also help agencies already using machine learning to intensify their use of the technology. The advancements can enable new applications across governments and help overcome the constraints of limited resources, including talent, infrastructure, and data to train the models.
CDOs have the opportunity to automate some of the work of often oversubscribed data scientists and help them add even more value. A few key things agencies should consider are: