Feedback about TokenScript design from Illia, the founder of Near Protocol

Hi Weiwu,

I think it is worth replying here. I reached out to several quite smart people, and they all gave similar feedback.

Below are Illia's LinkedIn profile and the Near Protocol team page: https://www.linkedin.com/in/illia-polosukhin-77b6538 https://near.org/team/

Hey, read through the paper.

I agree the problem exists: frontends are not linked to contracts in any way, and there is no good metadata around the operations of the contract.

At the same time I'm a bit skeptical that there is a good way to define this without a Turing-complete language (e.g. XML seems very constraining).

E.g. I would solve this problem with an overlay of code & markup, where the code defines interfaces; this code is checked in on IPFS and the hash of this code is embedded into the contract.

E.g. think mini apps inside WeChat - where a mini app's UI & client-side business logic is on IPFS. Wallets / browsers can then show this without centralized services.

I also agree that there is composability missing in the current approach, where UIs are per "app" and not per "token". Going back to people wanting to build custom UXs - I think the mini app / widget analogy also works here: you can define a widget based on a specific token / properties, and that will allow UIs to pull on available widgets in the registry.

I'm probably missing some details as I didn't read it fully, but from my experience people have tried doing similar stuff before with the Semantic Web, and every time common libraries + free code won.

Thanks! Pardon me for using the question as an opportunity to explain TokenScript.

I agree that the Semantic Web is a tried and failed path - not only because it is information-oriented (instead of application-oriented), but also because it isn't evolutionary (the websites that use it do not gain an evolutionary advantage over the websites that don't). These lessons were taken to heart in the design of TokenScript.

TokenScript is presently an overlay of JavaScript code and XML mark-up, and the mark-up part is there for a few reasons.

One requirement of composability is security. It's vital that each token's code runs in its own VM† and interacts with other tokens or the underlying web application only through a layer of protection. The other requirement is privacy. E.g. a token providing a zero-knowledge proof that its owner has a balance of more than 1,000 must not reveal the actual balance (say, 1,000,000) to the website asking for such a proof; hence they must not share memory/runtime.
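As a minimal sketch of what should and should not cross that isolation boundary (all names here are invented for illustration; a real implementation would hand back an opaque zero-knowledge proof object, not a plain boolean):

```typescript
// Hypothetical sketch of the isolation boundary, not the actual
// TokenScript engine API. The point: the balance stays inside the
// token's VM; only the yes/no answer crosses to the host page.
class TokenVM {
  constructor(private balance: bigint) {}

  // The host web application can ask the question...
  proveBalanceAbove(threshold: bigint): { satisfied: boolean } {
    // ...but never sees `balance` itself.
    return { satisfied: this.balance > threshold };
  }
}

const vm = new TokenVM(1_000_000n);
const answer = vm.proveBalanceAbove(1000n); // { satisfied: true }
```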

This calls for an overarching "TokenScript Engine" that manages small token VMs (called "Cards" in TS terminology). Half of the XML part of TokenScript manages these VMs. The XML doesn't do the actual work of a token - the JavaScript in the Cards does.
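To make the division of labour concrete, here is an illustrative sketch of that VM-managing half. The element names below are invented for exposition and are not the actual TokenScript schema:

```xml
<!-- Illustrative only: element names are made up for this sketch,
     not taken from the real TokenScript schema. -->
<token name="ExampleTicket">
  <!-- A Card: the engine spins up a short-lived, isolated VM to run
       the JavaScript fetched from this address. -->
  <card name="transfer" src="ipfs://QmExampleHash/transfer.js">
    <!-- The engine, not the Card's own code, enforces the boundary:
         the Card reaches the host page only through what is declared. -->
    <exposes function="confirmTransfer"/>
  </card>
</token>
```

The engine reads this declaration and does the provisioning itself; the Card's JavaScript never gets to decide its own sandbox.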

The other half of the XML part manages the availability of data, e.g.:

  • How many new keys are needed for a token to work, and whether they are allowed to leave the enclave or should be backed up?

  • How many token attributes are there, and how are they updated? This 1) allows token data to be indexed and managed at a higher level, like a marketplace, akin to how web content not generated from JavaScript is available to a higher level like Google; 2) allows Cards to be ephemeral, instead of having each token's JavaScript (each in its own VM) running in a user's wallet or dapp just to update states.

  • If a token only accepts attestations of a certain signer/format, the JavaScript code handles them after the TokenScript engine has verified them. For an analogy: today's JavaScript code in a website does not validate the website's SSL certificate, since it only gets to run if the certificate is good. The JavaScript in TokenScript, which uses an attestation, likewise only gets to run if the attestation is good. The purpose of such a design is security, keeping Cards ephemeral, and making attestations' data available to marketplaces (since they are declared instead of interpreted).
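The attestation-gating flow in the last point can be sketched as follows. This is a hypothetical illustration, not the real engine's API; all names (Attestation, CardVM, TokenScriptEngine) are invented:

```typescript
// Hypothetical sketch: the engine verifies the attestation first,
// so the Card's code only ever runs against a good attestation --
// mirroring how a site's JS never checks its own SSL certificate.

interface Attestation {
  signer: string;   // who issued the attestation
  format: string;   // a format the token declares it accepts
  payload: string;
}

class CardVM {
  // The Card's JavaScript would run here, in its own isolated VM.
  run(att: Attestation): string {
    return `card executed with payload: ${att.payload}`;
  }
}

class TokenScriptEngine {
  constructor(
    private trustedSigners: Set<string>,
    private acceptedFormat: string,
  ) {}

  execute(att: Attestation): string {
    // Verification happens in the engine, outside the Card.
    if (!this.trustedSigners.has(att.signer)) {
      throw new Error("attestation signer not trusted by this token");
    }
    if (att.format !== this.acceptedFormat) {
      throw new Error("attestation format not accepted");
    }
    const vm = new CardVM();   // ephemeral: one VM per invocation
    const result = vm.run(att);
    // vm is discarded here, so no state leaks between invocations
    return result;
  }
}
```

Because the accepted signer and format are declared in the XML rather than buried in code, a marketplace can read them without executing anything.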

"What if we replace the mark-up part to make TokenScript purely runnable code?"

I got intuitive feedback like that from others too, so I'm merging my comment on it here.

The two current responsibilities of the mark-up (a. managing small token VMs; b. managing the availability of data), if written in a programming language, would be declarative and delegated to the engine anyway, since a token's code is not entrusted to do those two kinds of work - albeit with the additional requirement that every user of TokenScripts (e.g. a marketplace) has to implement the runtime of that programming language.

My comment on mark-up's diminishing power, seen in the light of W3C's failed attempts like XLink, is this: the evolutionary force that didn't let a JavaScript-only web take over HTML is still relevant in the budding decentralised web. Just replace "web content" with deal-offers and tokens, and "search engine" with "markets". That is, JavaScript-only websites aren't indexed; similarly, JavaScript-only tokens aren't on the market.

Final notes

  • By today's hybrid design, if it's in the XML, it's about how the engine should work for the token; if it's not in the XML, it's the JavaScript that the token needs to run for its functionality.

† A quick example is the case where a token has a key in the keystore that is only supposed to be used within that very token. Another example is DvP security - where the delivery side (like a CryptoKitty) works with the payment side (like a cryptocurrency) in a single transaction, and one side is potentially the adversary of the other.
