Created: `$= dv.el('span', dv.current().file.ctime.toLocaleString(DateTime.DATETIME_SHORT))`
Last Modified: `$= dv.el('span', dv.current().file.mtime.toLocaleString(DateTime.DATETIME_SHORT))`
#ProductToken #ClaimsState
A simple mental model for sharing claims state under privacy is to establish an off-chain transaction log that is, in effect, the claims-state table. If we treat each wallet address as a participant's public key, we can simply encrypt that claims-state transaction log for each address in the log after every transaction. Yes, this creates quite a bit of replication, but generally speaking there are only 5-7 wallets involved in an end-to-end lifecycle. The smart contract on-chain is actually a ZKP circuit that enforces the rules of claims transfer (i.e., I must have a claim of ownership to transfer ownership), as discussed in the Token Flow post [[1. Token flow - managing product tokens with a claim-states table]]. All that goes on-chain is the proof that the rules executed completely. The open question is how to notify an account that it now has a claims-state transaction log to digest; I believe this could be done through some sort of listener.
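To make the transfer rule concrete, here is a minimal sketch in plain TypeScript (not circuit code; the `ClaimRow` shape and the action strings are mine, mirroring the tables below) of the kind of check the circuit would have to prove each time a row is appended:

```typescript
type Claim = "ownership" | "custody";

interface ClaimRow {
  from: string | null;   // null for mint/assign rows
  to: string | null;     // null for burn/nullify rows
  action: string;        // e.g. "transfer ownership"
  uri: string | null;    // hash link to metadata, when present
}

// Replay the log to find who currently holds a given claim.
function currentHolder(log: ClaimRow[], claim: Claim): string | null {
  let holder: string | null = null;
  for (const row of log) {
    if (row.action === `assign ${claim}` || row.action === `transfer ${claim}`) {
      holder = row.to;
    } else if (row.action === `nullify ${claim}`) {
      holder = null;
    }
  }
  return holder;
}

// "I must hold a claim to transfer (or nullify) it."
function canTransfer(log: ClaimRow[], claim: Claim, from: string): boolean {
  return currentHolder(log, claim) === from;
}
```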
As all things do, this gets more complex the closer we get to real life. Below we start with a simple model where one token = one product and we are only managing the claims of custody and ownership. This is a realistic use case for high-value, low-volume assets like jet engines or personalized medicines, but it will have trouble scaling for batch products; keep in mind that we are proposing one claims-state TxLog per token. Without jumping to solutions, it occurs to me that the claims-state transaction log might well be managed under a decentralized storage solution like Filecoin, OpenDSU, or possibly Baseline (I know that's not DeStor, but it's good for synchronizing data).
#### Scenario 1: Serialized Product, One token = one product
Alice grows an orange, picks it (mints a token), and posts some data about it (signs a metadata transaction). She then gives it to Bob to sell on her behalf (transfers custody). Bob sells the orange to Charlie (Bob transfers custody to Charlie; Alice transfers ownership to Charlie). Charlie eats the orange (token burned, ownership nullified, custody nullified). This is effectively an ERC-721 token, but with multiple claims managed against it.
Here's how the claims-state TxLog would look:
*Serialized Unit X - Claims-State Transaction Log*
|#|From_Address|To_Address|Action|URI|
|--|--|--|--|--|
|01|N/A|Alice|mint token|*null*|
|02|N/A|Alice|assign ownership|*null*|
|03|N/A|Alice|assign custody|*null*|
|04|Alice|N/A|sign metadata|hash link to metadata|
|05|Alice|Bob|transfer custody|*null*|
|06|Bob|Charlie|transfer custody|*null*|
|07|Alice|Charlie|transfer ownership|*null*|
|08|Charlie|N/A|burn token|*null*|
|09|Charlie|N/A|nullify custody|*null*|
|10|Charlie|N/A|nullify ownership|*null*|
On each change (in this case, 10 times) the TxLog would be re-encrypted using the public key of everyone in the log (there is likely some optimization to be had there, e.g., rows 8, 9, and 10 could be handled in one encryption). If we *really* wanted to keep this simple, we could also just drop the metadata .json blob into the URI...
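For the encryption step itself, here is a minimal sketch of the "encrypt once, wrap the key per participant" optimization hinted at above, using Node's built-in crypto. The big assumption: each participant publishes an encryption key alongside their wallet address (wallet keys are secp256k1 *signing* keys, so in practice this would be ECIES or a similar scheme rather than the RSA used here for brevity).

```typescript
import { randomBytes, createCipheriv, publicEncrypt } from "crypto";

interface EncryptedLog {
  iv: string;
  ciphertext: string;
  authTag: string;
  wrappedKeys: Record<string, string>; // wallet address -> wrapped AES key
}

function encryptLogForParticipants(
  logJson: string,
  participants: Record<string, string> // wallet address -> PEM-encoded RSA public key (assumption)
): EncryptedLog {
  // Encrypt the whole log once with a fresh symmetric key...
  const key = randomBytes(32);
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(logJson, "utf8"), cipher.final()]);

  // ...then wrap that key for every address that appears in the log,
  // instead of encrypting the full log once per participant.
  const wrappedKeys: Record<string, string> = {};
  for (const [address, publicKeyPem] of Object.entries(participants)) {
    wrappedKeys[address] = publicEncrypt(publicKeyPem, key).toString("base64");
  }

  return {
    iv: iv.toString("base64"),
    ciphertext: ciphertext.toString("base64"),
    authTag: cipher.getAuthTag().toString("base64"),
    wrappedKeys,
  };
}
```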
#### Scenario 2: Batch Product, One token = many products
The batch product approach is the more likely scenario; supply chains become opaque when we have millions of similar units moving through the ecosystem. When a manufacturer issues a recall, it is at the batch level, and with the exception of truly unique items (artworks, personalized therapies, etc.), even serialized units are produced in batches. In the US in 2022 there were 67 billion prescription medicines filled; unit-level traceability would imply 67 billion tokens. Maintaining 67 billion claims-state TxLogs would be unreasonable, not to mention that up until the point of dispensing, all the preceding transfers happen in bulk (i.e., I ship a pallet of product at a time).
So now we have an issue of fractionalization. We assign a quantity to a token (ERC-1155) and rely on the ZK circuit that is our smart contract to enforce the rules: the quantity can never increase, and transfers of the various claims are only allowed when the sender actually holds them.
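As a sketch of those rules in plain TypeScript (again, not circuit code; `BatchRow` is just the `ClaimRow` from the earlier sketch plus a quantity column):

```typescript
interface BatchRow {
  from: string | null;
  to: string | null;
  quantity: number;
  action: string;        // "mint token", "transfer custody", "burn token", ...
  uri?: string | null;
}

// Replay the log to get an address's current balance for a given claim.
function balanceOf(log: BatchRow[], claim: "ownership" | "custody", address: string): number {
  let balance = 0;
  for (const row of log) {
    if (!row.action.includes(claim)) continue; // only assign/transfer/nullify rows for this claim
    if (row.to === address) balance += row.quantity;
    if (row.from === address) balance -= row.quantity;
  }
  return balance;
}

// A transfer can never move more units than the sender currently holds,
// so the total quantity can only stay flat or shrink (via burns/nullifies).
function canTransferQuantity(
  log: BatchRow[],
  claim: "ownership" | "custody",
  from: string,
  quantity: number
): boolean {
  return quantity > 0 && balanceOf(log, claim, from) >= quantity;
}
```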
Alice grows 5 oranges, picks them (mints a token with a quantity of 5), and posts some data about them (signs a metadata transaction). She then gives 3 to Bob to sell on her behalf (transfers custody of a quantity of 3). Bob sells one orange to Charlie (Bob transfers custody of 1 orange to Charlie; Alice transfers ownership of 1 orange to Charlie). Charlie eats the orange (token burned, ownership nullified, custody nullified).
*Batch Token X - Claims-State Transaction Log*
| # | From_Address | To_Address | Quantity | Action | URI_To_Metadata |
|--|--|--|--|--|--|
|01|N/A|Alice|5|mint token|*null*|
|02|N/A|Alice|5|assign ownership|*null*|
|03|N/A|Alice|5|assign custody|*null*|
|04|Alice|N/A|N/A|sign metadata|hash link to metadata|
|05|Alice|Bob|3|transfer custody|*null*|
|06|Bob|Charlie|1|transfer custody|*null*|
|07|Alice|Charlie|1|transfer ownership|*null*|
|08|Charlie|N/A|1|burn token|*null*|
|09|Charlie|N/A|1|nullify custody|*null*|
|10|Charlie|N/A|1|nullify ownership|*null*|
OK, so this doesn't look too bad, but it leaks some unnecessary information, such as Charlie now knowing that Alice had 5 oranges to begin with. Batch quantity is generally not public data, but we can probably figure out some ways around that.
Where it gets more challenging is when we add others into the ecosystem... If Bob sells the other 2 oranges to Dennis, under this model Charlie would see that too, which is not acceptable. Likewise, if Alice asks Ed to sell her remaining 2 oranges, Bob gets to see his competition's inventory and Ed gets to see that Charlie buys oranges... The problem with the batch-token claims-state transaction log is that it just creates a mini ledger with no privacy.
#### OK, so what options do we have?
Well, we could restrict access to the claims-state log; instead of encrypting the entire log for every account, we could encrypt subsets of the rows that only include the transactions in a given chain of transactions for a given account, so that when Charlie accesses the claims-state log he can only decrypt the rows that are relevant to him, yet he can still rely on the proofs that the other rows are valid. It's the "determining what's relevant" part that could be subjective... but I believe we can define rules for that.
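To make that concrete, here is one candidate relevance rule, sketched in TypeScript (reusing the `BatchRow` shape from the sketch above; the rule itself is just my guess at what "relevant" could mean):

```typescript
// One possible "relevant rows" rule: start from the rows an account appears in,
// walk backwards through the from/to chain to pick up the upstream transfers
// its units came through, and exclude everything else (e.g. Bob's sale to
// Dennis stays invisible to Charlie). Metadata rows are kept for everyone.
function relevantRows(log: BatchRow[], address: string): BatchRow[] {
  const upstream = new Set<string>([address]);
  // The log is chronological, so a single reverse pass reaches each
  // counterparty's own upstream counterparties.
  for (const row of [...log].reverse()) {
    if (row.to && upstream.has(row.to) && row.from) upstream.add(row.from);
  }
  return log.filter(
    (row) =>
      row.from === address ||
      row.action === "sign metadata" ||
      (row.to !== null && upstream.has(row.to))
  );
}
```

Note that this particular rule still shows Charlie the original mint row (and therefore the batch quantity), so on its own it doesn't fix the leakage called out above.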
Or we could think of each split (i.e., when Alice sends 3 of her 5 oranges to Bob) as a fractionalization. Alice would then burn her remaining two tokens and mint a new "sub-batch" with a quantity of 2, thus establishing a new claims-state TxLog. *This is Yair's approach*. This would still let participants see the total batch quantity, but it would be pretty easy to salt that with a random int that Alice could just burn.
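A rough sketch of what that split could look like (again using the `BatchRow` shape from above; `splitBatch` and the ID scheme are purely hypothetical helpers, not anything from an existing library):

```typescript
interface SubBatch {
  parentTokenId: string;
  tokenId: string;
  log: BatchRow[];
}

// The sender mints a fresh sub-batch token (with its own claims-state TxLog)
// for whatever she kept, padded with a salt quantity she burns immediately so
// downstream parties never learn the true remainder.
function splitBatch(
  parentTokenId: string,
  owner: string,
  remainingQty: number,
  saltQty: number // random padding, minted and burned in the same step
): SubBatch {
  const padded = remainingQty + saltQty;
  const log: BatchRow[] = [
    { from: null, to: owner, quantity: padded, action: "mint token", uri: null },
    { from: null, to: owner, quantity: padded, action: "assign ownership", uri: null },
    { from: null, to: owner, quantity: padded, action: "assign custody", uri: null },
    { from: owner, to: null, quantity: saltQty, action: "burn token", uri: null },
  ];
  // Illustrative ID scheme only; in practice the parent/child link itself
  // may need to be hidden or proven in zero knowledge.
  return { parentTokenId, tokenId: `${parentTokenId}.1`, log };
}
```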
Once we figure out how to do A or B above, we will need to consider how we treat serialized units within a batch. We will somehow need a way to pass a list of serial numbers along with a quantity transfer. In theory we could make this apply only to custody transfers, which might simplify the process. I *think* we could treat the serial numbers as a set that is just continually split upon transfer (a rough sketch below), but I don't really know what I'm talking about there :).
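For what it's worth, a minimal sketch of that set-splitting check (names are mine; this ignores how serials would be hidden from parties that shouldn't see them):

```typescript
// Serial numbers as a set that splits on each custody transfer. The check the
// circuit would need: the transferred serials are a subset of what the sender
// holds, and their count matches the transferred quantity.
function splitSerials(
  held: Set<string>,
  transferred: string[],
  quantity: number
): { sender: Set<string>; receiver: Set<string> } {
  if (transferred.length !== quantity) {
    throw new Error("serial count does not match transferred quantity");
  }
  for (const sn of transferred) {
    if (!held.has(sn)) throw new Error(`serial ${sn} is not held by the sender`);
  }
  const receiver = new Set(transferred);
  const sender = new Set([...held].filter((sn) => !receiver.has(sn)));
  return { sender, receiver };
}
```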