Posted: 11 Nov. 2017

Provenance and the search for blockchain’s killer app

Current certification schemes are seen as inadequate because they focus on regulating the firms in the chain of custody. The weakness is in the hand-offs. While you might certify the firms in the chain and audit their behaviour, it is still possible for a malicious firm to defraud the system by playing games at the margins.

Selling more certified lumber than the certified timber you bought enables you to sneak lower-quality wood into the supply chain. Stretching expensive certified milk with other chemicals lowers the cost of manufacturing baby formula. Or someone might try to pass off a conflict diamond as coming from a non-conflict zone.

The solution, of course, is to put all the information on the blockchain. Creating a clearing house – a database – enables us to track these hand-offs. Using a blockchain to host the clearing house means that it’s decentralised, immune to the potential ethical failings of a central actor. There is no one to bribe, and no one who might be biased, prone to selectively disclosing information or altering compliance processes. Finally, an impartial and trusted source of provenance information! Everledger was an early mover in this space, but now there’s a wealth of blockchain-powered provenance solutions emerging.

Creating a successful provenance solution is not so simple, though.

If a provenance solution is to be successful then the most difficult challenge is the connection to the real world. When presented with a piece of information – the source of a raw material, or a hand-off from processing to manufacturing or assembly – how do we know that the information is trustworthy? And how do we know that the information actually relates to the physical good sitting in front of us?

The solution to the first problem is good old public-key cryptography and key management. Someone had to create the piece of information and sign it digitally with a private key, so that we can verify it with the corresponding public key. Prior to that, the key-pair had to be created and associated with some real-world entity, a firm or an individual. This involves using some form of credential to identify the entity and verify they are who they claim to be, and then associating a newly created key-pair with this identity. We’re trusting both the key-issuing service and the source of the credentials (typically because they have been accredited by a government or some other governance body, or they are the government).
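
As a rough sketch of how this sign-then-verify flow works in code – here using Python’s cryptography library with Ed25519 keys, and a made-up record that stands in for the piece of provenance information:

```python
# A minimal sign-then-verify sketch using the 'cryptography' package.
# The record contents are illustrative assumptions, not a real standard.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The farmer's key-pair. In practice the key-issuing service would bind
# this key-pair to a verified real-world identity.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# The piece of provenance information being attested.
record = b"cow AU-COW-0001 bred in accordance with organic standards"

# Sign with the private key...
signature = private_key.sign(record)

# ...and anyone holding the public key can check the record wasn't forged
# or altered in transit.
try:
    public_key.verify(signature, record)
    print("record verified")
except InvalidSignature:
    print("record rejected")
```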

The solution to the second problem depends on the nature of the product, though all approaches boil down to assigning a unique identifier. If it is a countable object – like cars, bananas or diamonds – then we’ll engrave the id onto the object somehow. If it’s not countable – milk or wheat – then we engrave the id onto the container.

The problem of establishing the provenance of an organic steak, to pick a random example, depends on identifying the farmer who bred the cow and verifying that the cow was bred in accordance with organic standards. Practically this means obtaining a digital document that contains both the unique identifier for the cow (possibly based on a DNA sample) and the identifier for the farmer’s organic certification (should we choose to check it), and which is signed by the farmer (who is attesting that this cow was bred in accordance with these regulations).
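
A sketch of what such a document might look like – the field names and identifier formats are illustrative assumptions, not drawn from any real certification scheme:

```python
# A sketch of the signed provenance document for the cow. All field names
# and identifier formats are illustrative assumptions.
import json

cow_document = {
    "cow_id": "AU-COW-2017-000042",           # unique id, e.g. derived from a DNA sample
    "certification_id": "ORG-CERT-NSW-1234",  # the farmer's organic certification
    "attestation": "bred in accordance with organic standards",
    "farmer_id": "FARM-0815",
}

# Serialise to canonical bytes; the farmer signs these with their private
# key (as in the previous sketch) and ships the signature with the document.
payload = json.dumps(cow_document, sort_keys=True).encode("utf-8")
```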

It’s then possible to track the chain of custody from cow to steak via metadata. There’s even a handy (but underutilised) standard for this called Open Provenance.

Changes of ownership can be captured by having the farmer sign the cow over to the next entity in the chain of custody, a meat processing plant. When the cow is butchered and broken into parts, a new document is created for each part, containing a unique identifier for the part along with the identifier of the meat processing plant and a pointer back to the provenance document for the cow: this part was taken from that cow by this meat processor. The meat processor might sell a hind-quarter to a local butcher – signing over the document for the hind-quarter to the butcher in the process. The butcher can divide the hind-quarter into smaller cuts, suitable for the home cook, and sell them as coming from an organic cow. Or they can, in turn, create documents for each cut (containing an id, a pointer to the hind-quarter, and their own identifier and certification) and sell them on to restaurants.

Tracking the provenance of uncountable objects is harder. Milk from multiple farmers, for example, is commingled in a tank at the dairy. Consequently, any cheese the dairy makes may have used milk from a number of different farms. It’s quite possible to capture these transformations using the same metadata technique – this litre of milk was taken from a tank containing these farmers’ milk – though the resulting graph of information might be less useful to the end consumer than logical assertions based on the data in the graph, such as “all the milk used in the making of this cheese was organic”.
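
A minimal sketch of these chain-of-custody records and how to walk back along the pointers – the record shapes and ids are assumptions for illustration, and a real system might well use the Open Provenance standard instead:

```python
# A sketch of chain-of-custody records, each pointing back to its sources.
# Record shape and identifiers are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ProvenanceRecord:
    item_id: str                                  # cow, hind-quarter, cut, tank...
    actor_id: str                                 # farmer, processor, butcher, dairy
    parents: list = field(default_factory=list)   # ids of the source records

records: dict = {}

def register(item_id, actor_id, parents=()):
    records[item_id] = ProvenanceRecord(item_id, actor_id, list(parents))

# The cow, attested by the farmer; each transformation points back a step.
register("cow-42", "farmer-0815")
register("hindquarter-7", "processor-22", parents=["cow-42"])
register("cut-3", "butcher-9", parents=["hindquarter-7"])

# Commingled goods simply have several parents: one tank, many farms.
register("tank-1", "dairy-5", parents=["milk-farm-a", "milk-farm-b"])

def trace(item_id):
    """Walk the pointers back towards the original sources."""
    record = records.get(item_id)
    if record is None or not record.parents:
        return [item_id]
    return [item_id] + [a for p in record.parents for a in trace(p)]

print(trace("cut-3"))   # ['cut-3', 'hindquarter-7', 'cow-42']
print(trace("tank-1"))  # ['tank-1', 'milk-farm-a', 'milk-farm-b']
```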

The last problem we need to solve is ensuring that unwanted materials weren’t introduced at any stage of processing. A nefarious meat processor, for example, might buy two cows – one organic, one not – butcher both, and then claim that all the resulting parts came from a single (more expensive) organic cow (many consumers will not have the ability to test the DNA of the steak in front of them). Or a cheese maker might blend organic and non-organic milk without declaring it, to similar effect. This is hard to eliminate though.

If we compare the volume of organic milk bought by the processing plant with the weight of cheese sold, then we can do a quick sanity check based on assumptions of the volume of milk required to create each kilo of cheese (allowing for waste). This will never be 100% accurate though, as the dairy might be unwilling to disclose the precise volume of milk required as they consider it a trade secret, the volume required might vary naturally as milk is an agricultural product, or they might dilute the milk used in cheese making anyway and sell the reserved organic milk at the factory gate. Ultimately, we must trust the dairy – or whoever is manufacturing the product or processing the material – as we will never have complete information.
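
This sanity check is simple arithmetic. A sketch, assuming a nominal ten litres of milk per kilo of cheese and a generous tolerance – both figures are assumptions, and real yields vary by cheese style, season and process:

```python
# A sketch of the mass-balance sanity check. The ~10 litres of milk per
# kilo of cheese and the 20% tolerance are assumptions, not real figures.
def plausible_yield(milk_litres, cheese_kg, litres_per_kg=10.0, tolerance=0.2):
    """True if the cheese output is roughly consistent with the milk input."""
    expected_kg = milk_litres / litres_per_kg
    return abs(cheese_kg - expected_kg) <= tolerance * expected_kg

print(plausible_yield(10_000, 980))    # True: roughly the 1,000 kg we'd expect
print(plausible_yield(10_000, 2_500))  # False: more cheese than the milk explains
```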

If we do want this sort of information then we can force processors – as part of certification – to manufacture products in batches and to publish a ‘processing report’ for each batch. These reports would contain the provenance details of all input materials and output products, so that we can verify that the volume of organic milk consumed and the weight of cheese produced are roughly in line with expectations. We could also require that the provenance data for each product (cheese) includes a pointer back to this batch data (milk).
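
A sketch of what such a processing report and its verification might look like, with the same assumed yield figures as above and illustrative field names:

```python
# A sketch of a per-batch 'processing report': inputs, outputs, and a
# rough mass-balance check. Field names and yield figures are assumptions.
from dataclasses import dataclass

@dataclass
class BatchReport:
    batch_id: str
    input_milk: dict     # provenance id -> litres, e.g. {"tank-1": 10_000}
    output_cheese: dict  # product id -> kilograms

    def verify(self, litres_per_kg=10.0, tolerance=0.2):
        """True if total cheese output roughly matches total milk input."""
        milk_litres = sum(self.input_milk.values())
        cheese_kg = sum(self.output_cheese.values())
        expected_kg = milk_litres / litres_per_kg
        return abs(cheese_kg - expected_kg) <= tolerance * expected_kg

batch = BatchReport(
    batch_id="BATCH-2017-11-07",
    input_milk={"tank-1": 10_000},
    output_cheese={"cheese-0001": 450, "cheese-0002": 520},
)
print(batch.verify())  # True: 970 kg from 10,000 litres is plausible

# Each cheese's own provenance record would then carry a pointer back to
# "BATCH-2017-11-07", linking product to batch.
```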

This provenance metadata is easy to distribute without the overhead of a blockchain. Large products might have the metadata written into an RFID tag welded to them (cars) or their container (milk). Small objects (diamonds) might have an id number engraved on them, with the provenance data no more than a database lookup away. This database could be centralised, but it is not much trouble to set up a collection of databases and use something efficient like a flooding algorithm to ensure that all databases have all relevant records. These databases could be hosted by state governments (should we want a national solution), by the various retail chains (who might specialise in particular types of merchandise, or might just accept everything), or an industry body might pay two or three technology companies to each host a database.
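
A flooding scheme of this sort is only a few lines of logic: every database forwards new records to its peers and ignores duplicates. A sketch, with an assumed three-node topology and made-up record contents:

```python
# A minimal flooding sketch: each database forwards every new record to
# its peers, duplicates stop the flood, so every record eventually reaches
# every database. Topology and record shape are illustrative assumptions.
class ProvenanceDB:
    def __init__(self, name):
        self.name = name
        self.records = {}  # record id -> record
        self.peers = []

    def receive(self, record_id, record):
        if record_id in self.records:
            return  # already seen: stop forwarding here
        self.records[record_id] = record
        for peer in self.peers:
            peer.receive(record_id, record)

# Three independently hosted databases, fully connected.
a, b, c = ProvenanceDB("retailer"), ProvenanceDB("industry-body"), ProvenanceDB("state-gov")
a.peers, b.peers, c.peers = [b, c], [a, c], [a, b]

# A record inserted at any one database propagates to all of them.
a.receive("cut-3", {"item": "organic beef cut", "parent": "hindquarter-7"})
print(all("cut-3" in db.records for db in (a, b, c)))  # True
```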

Regardless, there is no need for the guarantee of global consistency provided by a blockchain, nor the cost and overhead it implies. We could use a blockchain, but why bother when there are other, more effective and efficient distributed approaches to try? Provenance is not the killer blockchain app we’re looking for.

 

Ross Hancock was a co-author of this post.

More about the authors

Peter Williams

Chief Edge Officer, Centre for the Edge

Peter is a recognised thought leader and practitioner in innovation. Peter started working with internet technologies in 1993 and in 1996 founded an eBusiness consulting group within Deloitte Australia. Since that time Peter has been CEO of the Eclipse Group, a Deloitte subsidiary, and then founded Deloitte Digital. He is also the Chairman of Deloitte’s Innovation Council and the Chief Edge Officer. He was recently named one of Australia’s top Digital Influencers and is an Adjunct Professor at RMIT.

Peter Evans-Greenwood

Fellow, The Centre for the Edge

Peter is currently a fellow at The Centre for the Edge, helping organisations embrace the digital revolution through understanding and applying what is happening on the edge of business and society. Peter has spent 20 years working at the intersection of business and technology. These days he works as a consultant and strategic advisor on both the business and technology sides of the fence.