Created: `$= dv.el('span', dv.current().file.ctime.toLocaleString(DateTime.DATETIME_SHORT))`
Last Modified: `$= dv.el('span', dv.current().file.mtime.toLocaleString(DateTime.DATETIME_SHORT))`
Jamey's interpretation of Yair Frankel's approach to moving product tokens under privacy.
Privacy here is defined as the inability of an outside observer to link a transaction involving a specific token from one account to another account. The transactions we are concerned with are changes in claims against that token (i.e., a transfer of custody, a transfer of ownership, creation of a lien or a right of seizure). Transactions that serve to sign metadata updates, which might be done as a standalone transaction posting a new URI for the metadata or as part of a transfer of a claim, are also of concern. For a deeper description of claim-states refer to [[1. Token flow - managing product tokens with a claim-states table]].
An outside observer may be able to determine which accounts are involved in a given transaction, but they should not be able to determine what the transaction was about. However, it is preferable if this information is also kept private, as the frequency of transactions between accounts should be confidential (and dusting the network is probably not efficient). Throughout this post we refer to privacy as what a user can "see" or what is "visible" - this is better defined as "what data a user can obtain from monitoring a network, and what information can be inferred from that data".
## Product Tokens
We define product tokens as a "digital representation of a real world object". In certain claims - specifically the claim of custody - the real world state is the absolute truth (if you hold an apple in your hand you have custody over it even if the network says that you do not). Other claims are more subjective, such as fiscal ownership, which can be difficult to determine in real life (IRL) but on a blockchain is definitive (the claim can only belong to one account at one time). These nuances make product tokens different from digitally native assets like NFTs (in the general sense) or other forms of "tokenized real world assets", which primarily exist to represent assets of value that don't change hands often but have a fluctuating valuation. Product tokens are minted, transferred, and burned - and they are designed to do that at massive scale (millions of movements per day). It's worth noting as well that product tokens are not the equivalent of digital twins - but that is a tangent discussion that you can read more about here: [[Digital Twin ≠ Tokenized Products]]. Currently, we do not assign a value to a product token, but we may wish to do that in the future (though I think that value will be captured in a corresponding counter transfer of cryptocurrency upon a change in the claim of ownership).
Our objective with product tokens is to have the transactions in the network mimic real life supply chain movements as closely as possible. This is our best defense against having the network movements become out-of-sync with real life movements. But this may cause us to make some design decisions that are not network efficient. We also need to carefully consider "enforceable rules" on the network (we are like kids in a candy store with smart contract logic) that might not be enforceable IRL. In general, I think it's better to think of the network as recording the real life events, not dictating them.
## Fractionalization (or batch transactions)
In manufacturing we rarely make just one of something - companies buy and sell in bulk, and the selling in bulk gets continuously broken down until we get to the end consumer, who buys and consumes the "lowest saleable unit". Manufacturers produce products in lots or batches and they often ship out entire batches in one container (aggregation); it is on the downstream wholesalers and resellers to break those pallets down (dis-aggregate) and create smaller shipments (re-aggregate) for the end consumer. This is the process even for items with a unique serial number - they are almost all produced in batches (the serial number is applied as one of the last steps in packaging). And while the vernacular may be different in different industries, this broad description covers pretty much everything from coffee beans to personalized medicines.
So product tokens should function in much the same way. If I have 10,000 units in a batch and can fit 5,000 units on a pallet, I will produce one 10,000-unit product token (call this A<sub>0</sub>), then split it into two 5,000-unit tokens (call these A<sub>1</sub> and A<sub>2</sub>) which can be transferred to someone else. This splitting function is called a fractionalization and it's important to have some rules around it, like A<sub>0</sub> = A<sub>1</sub> + A<sub>2</sub>. Fractionalization under privacy will carry with it some responsibilities. For example, if you are receiving A<sub>2</sub> you need to verify that it is equal to A<sub>0</sub> - A<sub>1</sub>, otherwise you are going to have a hard time transferring A<sub>2</sub> or some further derivative of it to someone else (there is a better explanation of why this is important below).
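The conservation rule above can be sketched in a few lines. This is a minimal illustration, not an implementation of our contract logic; `ProductToken` and `fractionalize` are hypothetical names:

```python
# Minimal sketch of the fractionalization rule A0 = A1 + A2.
# ProductToken and fractionalize are illustrative names, not a real API.
from dataclasses import dataclass

@dataclass(frozen=True)
class ProductToken:
    token_id: str
    units: int

def fractionalize(parent: ProductToken, units_out: int):
    """Split a parent token into a sent portion and a remainder.

    Enforces conservation: parent.units == sent.units + remainder.units.
    """
    if not 0 < units_out < parent.units:
        raise ValueError("split must leave both children a positive amount")
    sent = ProductToken(parent.token_id + ".1", units_out)
    remainder = ProductToken(parent.token_id + ".2", parent.units - units_out)
    assert parent.units == sent.units + remainder.units  # A0 = A1 + A2
    return sent, remainder

# Example: a 10,000-unit batch split into two 5,000-unit pallets.
a0 = ProductToken("A", 10_000)
a1, a2 = fractionalize(a0, 5_000)
```

The receiver of A<sub>2</sub> would run the same check (parent units minus sibling units) before accepting the token.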
## Having transparency under privacy
So here's where we want to have our cake and eat it too. We want to adhere to the principle of "anyone who holds a claim on a product token, or has ever held a claim on that product token in the past, has the ability to see who else holds or has held claims on that token" - but this must apply for *only* those accounts that hold, or have held, a claim. In other words an outside observer should not be able to determine what the token in a given transaction represents, nor who has had it in the past. Further we want to extend this to let anyone with a claim on a product token access the metadata about that token (you can explore that approach in [[3. Token Claim Derived Authority (TCDA)]]).
To make this possible in a decentralized network we actually don't transfer the same token throughout the supply chain. What we do is:
1) Create a small record of the claim-state table (that includes the current state and all previous states along with the proofs that each claim-state transition was valid)
2) Encrypt that record using special authorization keys
3) Nullify the current token and mint two new tokens (token X_B1 for the amount we are sending and token X_A1 for the remainder that we keep)
4) Each token includes an encrypted version of the claim-states record (this may be a pointer to where the encrypted version is stored)
5) The authorization key is transferred to the previous claim holders and the next claim recipient
6) The new claim holder verifies the transfer and checks the math (this is their acknowledgment of receipt, and what makes them accountable if they screw up the claim-state table)
7) Any previous claim holders can check the updated claim-states table when they receive their new authorization key.
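The seven steps above can be walked through as a toy sketch. Everything here is illustrative: the "encryption" is a stand-in key check, not real cryptography, and `Token`, `EncryptedRecord`, and `transfer` are hypothetical names:

```python
# Toy walk-through of the transfer steps. The "encryption" is a stand-in
# (a key comparison), and all names are illustrative.
import secrets
from dataclasses import dataclass

@dataclass
class EncryptedRecord:
    key_id: bytes       # stand-in for a real authorization key
    claim_states: list  # would be ciphertext in a real system

    def open(self, key: bytes) -> list:
        if key != self.key_id:
            raise PermissionError("wrong authorization key")
        return self.claim_states

@dataclass
class Token:
    name: str
    units: int
    record: EncryptedRecord

def transfer(current: Token, send_units: int, history: list):
    # 1-2) snapshot the claim-state history and "encrypt" it under a fresh key
    key = secrets.token_bytes(16)
    record = EncryptedRecord(key, history + [current.name])
    # 3-4) nullify the current token; mint the sent token and the remainder,
    #      each carrying the encrypted record
    sent = Token(current.name + "_sent", send_units, record)
    remainder = Token(current.name + "_rem", current.units - send_units, record)
    # 5) the key would then go to the recipient and all previous holders
    return key, sent, remainder

# 6) the recipient verifies the transfer and checks the math
key, x_b1, x_a1 = transfer(Token("X_A0", 10, EncryptedRecord(b"", [])), 4, [])
assert x_b1.units + x_a1.units == 10
# 7) any holder of the new key can read the updated history
assert x_b1.record.open(key) == ["X_A0"]
```

In a real system the record would be encrypted under a scheme that lets the key be re-derived for previous holders; here the key comparison just stands in for that access control.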
This starts to add up in a meaningful way when we have four separate hops in our supply chain; the diagram below specifies who can see what. ![[Lot Token Fractionalization Flow.png]]
What is not shown in the diagram above is the further branching of the remainders. For example if Alice transfers 8 of her remaining 10 units to Ellen it will not be visible to Bob, Charlie, or Dan.
For this to work we need to make some assumptions.
1. That Alice has a way to message Bob, Bob has a way to message Charlie, and Charlie has a way to message Dan
2. That Bob understands that his ability to give X to Charlie might be restricted if he cannot prove that Alice previously had X before she gave it to him, and
3. That in the transfer Alice didn't create more X out of thin air (i.e., X_B1 = X_A0 - X_A1).
While that second point sounds obvious, it's not trivial because it requires a participant to "do something" for a future potential event that they might not necessarily need to do to execute today's event.
The third point is how we prevent double spend when we've taken away the ability of everyone on the network to independently verify that tokens are conforming to the rules of a smart contract (which of course requires full visibility to everyone).
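One common way to prevent double spend without global visibility is a nullifier set: spending a token publishes an unlinkable value derived from it, and the contract rejects any reuse of that value. This is a generic sketch of that technique under stated assumptions (a simple hash stands in for the zero-knowledge machinery; `NullifierSet` is a hypothetical name):

```python
# Minimal nullifier-set sketch for double-spend prevention. A SHA-256 hash
# stands in for the real zero-knowledge construction; names are illustrative.
import hashlib

class NullifierSet:
    def __init__(self):
        self._spent: set = set()

    def nullifier(self, token_id: str, owner_secret: str) -> str:
        # Derived from a secret so outsiders cannot link it to the token.
        return hashlib.sha256(f"{token_id}|{owner_secret}".encode()).hexdigest()

    def spend(self, token_id: str, owner_secret: str) -> bool:
        n = self.nullifier(token_id, owner_secret)
        if n in self._spent:
            return False   # double spend detected, transaction rejected
        self._spent.add(n)
        return True

ns = NullifierSet()
assert ns.spend("X_A0", "alice-secret") is True    # first spend clears
assert ns.spend("X_A0", "alice-secret") is False   # second attempt rejected
```

The network only ever sees opaque nullifiers, yet it can still enforce "each token is consumed at most once".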
But the first point is sticky as well. If everyone has a way of directly messaging the recipients of X why do you need a blockchain? Well, technically you don't - what we are doing is publishing proofs about each transaction, and there are other non-blockchain ways to do this. But, we do need a couple of things that blockchains are particularly good at:
1. Ordering the transactions (in an A-to-B-to-C transfer, C cannot have the claim before B does).
2. Registering and managing accounts. See the post on [[4. Organizational Account Management for Product Token Flows]]; this will get complex, and we need a way to keep organizations accountable (plus, addressing all of the KYC/AML rules has been a historical issue with decentralized networks that is not likely to change)
3. Immutability & trust (at least for the large decentralized networks). By publishing the proofs on a public network we create some level of permanence, in that they become part of the blockchain's state. Granted, as networks like Ethereum evolve into pure settlement layers, data availability will still be needed for historical records.
In short, doing this without a blockchain would mean solving many of the issues blockchains have already solved, and growing the network to a point considered sufficiently secure (as opposed to buying block space on Ethereum and relying on its security).
It's worth pointing out here that the solution is not just a large messaging system or a set of business channels like in a Hyperledger Fabric approach - the blockchain is needed to settle state and to make sure the rules of the smart contracts are enforced, even if those rules are kept private.
### Just to recap...
So when Alice sends X to Bob she generates a proof that she has X_A0. She sends Bob an encrypted message indicating that she is sending him X, and includes an encryption key that lets him verify that Alice does in fact have X_A0. This key also lets Bob see all the previous holders of X. Bob then initiates a transaction to transfer X_A0 from Alice; when that transaction clears, the state is updated so that Bob now has X_B1, X_A0 is nullified, and Bob can generate his proof for Charlie. With some clever cryptography the new key that Bob generates will also work for Alice, so once X has been transferred to Charlie, Alice will be able to see where it is.
From the outside looking in, all one would be able to observe is that Alice, Bob, Charlie, and Dan all interact with the same contract. An astute observer might be able to detect that Alice is sending messages to Bob and that Bob is sending messages to Charlie, etc. But assuming we implement ways to obfuscate the frequency of actual transactions (i.e., send one message per day even if you are not sending anything) it will become very hard to tell the actual frequency.
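The "one message per day" idea is classic cover traffic: every party emits exactly one fixed-size message per interval, padding with a dummy when there is nothing real to send. A minimal sketch, with `next_message` as a hypothetical name:

```python
# Cover-traffic sketch: emit exactly one fixed-size message per interval,
# real or dummy, so an observer sees a constant rate. Names are illustrative.
from collections import deque

def next_message(outbox: deque) -> str:
    """Return the next outgoing message, padding with a dummy if idle."""
    payload = outbox.popleft() if outbox else "DUMMY"
    return payload.ljust(64, "\0")  # pad so message length leaks nothing

outbox = deque(["transfer X_B1 to Bob"])
first = next_message(outbox)   # the real message
second = next_message(outbox)  # a dummy of identical size
assert len(first) == len(second) == 64
```

Since real and dummy messages are the same size (and would be encrypted in practice), an observer cannot distinguish a busy day from an idle one.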
This approach is fairly easy to extend to a claim-states table (you are now just updating the state for multiple claims, which means there will be more updates to the participants).
Finally we could choose to leave some participants off of the track & trace authorization keys, but I think we should assume “anyone who currently has, or has had in the past, a claim on a token can see the current claims-state” and remove visibility as an exception not the rule.
### So where does this leave us?
Transferring product tokens under privacy changes the network from a distributed ledger to a proof publishing system. Cryptography is magical - it lets us propagate token transfers without leaking critical information - but it cannot solve everything; we still need real life controls that help keep the digital in sync with the physical. We need participants to do "a little bit more" in making sure they can verify transactions as they come in. And we need to develop ways of translating everything mathematical into something that supply chain workers can understand. But these things are doable; and once done, they will become commonplace, setting a new bar for inventory visibility under privacy.
### *One Last Thing*
The examples I used above are for batch manufactured products - where all the units within a batch are fungible. While this makes up a significant proportion of manufactured products, there is an increasing push for more granularity through product serialization, where the lowest saleable unit is uniquely identifiable. But as I stated earlier, these are still manufactured in batches in almost all cases, so we need a method to handle the fractionalization of serialized batches - and it's not as simple as making batches of 1.
One idea is to treat the list of serial numbers in a batch of product as a set which moves with the product token. As the set is split up we do the existing check of A<sub>0</sub> = A<sub>1</sub> + A<sub>2</sub> for quantity, but we also check that the A<sub>1</sub> serial numbers are disjoint from the A<sub>2</sub> serial numbers (meaning that the set of serial numbers associated with A<sub>1</sub> does not overlap with the set of serial numbers for A<sub>2</sub>). While this seems a bit clunky, in many controlled supply chains it is expected today that the bill of lading or shipping manifest contain a listing of the serial numbers (though these are not often mathematically provable).
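The serialized split check reduces to three set conditions: the children are disjoint, together they cover the parent exactly, and the quantities add up. A minimal sketch (`verify_serialized_split` is a hypothetical name):

```python
# Sketch of the serialized fractionalization checks: quantity conservation
# plus disjointness and completeness of the child serial sets.
# verify_serialized_split is an illustrative name, not a real API.

def verify_serialized_split(parent: set, child1: set, child2: set) -> bool:
    disjoint = child1.isdisjoint(child2)                  # no serial in both
    complete = (child1 | child2) == parent                # nothing created or lost
    conserved = len(child1) + len(child2) == len(parent)  # A0 = A1 + A2
    return disjoint and complete and conserved

a0 = {"SN001", "SN002", "SN003", "SN004"}
assert verify_serialized_split(a0, {"SN001", "SN002"}, {"SN003", "SN004"})
assert not verify_serialized_split(a0, {"SN001", "SN002"}, {"SN002", "SN003", "SN004"})
```

Note that completeness implies the quantity check when the sets are disjoint; keeping both makes the rule line up with the fungible-batch check A<sub>0</sub> = A<sub>1</sub> + A<sub>2</sub>.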
![[Serialized Token Fractionalization Flow.png]]