Thanks! Pardon me for using this question as an opportunity to explain TokenScript.
I agree that the Semantic Web is a tried and failed path, not only because it is information-oriented (instead of application-oriented), but also because it isn't evolutionary: the websites that use it don't gain an evolutionary advantage over the websites that don't. These lessons were taken to heart in the design of TokenScript.
TokenScript is presently an overlay of JavaScript code and XML mark-up, and the mark-up exists for a few reasons.
One requirement of composability is security. It's vital that each token's code runs in its own VM† and interacts with other tokens, or with the underlying web application, through a layer of protection. The other is privacy: e.g. a token providing a zero-knowledge proof that its owner holds a balance above 1,000 must not reveal the actual balance (say, 1,000,000) to the websites asking for such a proof, hence tokens must not share memory or runtime.
This calls for an overarching "TokenScript Engine" that manages small token VMs (called "Cards" in TS terminology). Half of the XML part of TokenScript manages these VMs. The XML doesn't do the actual work of a token; the JavaScript in the Cards does.
The other half of the XML part manages the availability of data, e.g.:
- How many new keys are needed for a token to work, and whether they are allowed to leave the enclave or should be backed up.
- How many token attributes there are and how they are updated. This 1) allows token data to be indexed and managed at a higher level, like a marketplace, akin to how web content not generated from JavaScript is available to a higher level like Google; 2) allows Cards to be ephemeral, instead of having each token's JavaScript (each in its own VM) running in a user's wallet or dapp just to update state.
- If a token only accepts attestations of a certain signer or format, the JavaScript code handles the attestation after the TokenScript engine has verified it. For an analogy, today's JavaScript in a website does not validate the website's SSL certificate, since it gets to run only if the certificate is good. The JavaScript in TokenScript, which uses an attestation, likewise gets to run only if the attestation is good. The purpose of such a design is security, keeping Cards ephemeral, and making attestations' data available to marketplaces (since attestations are defined rather than interpreted).
"What if we replaced the mark-up part to make TokenScript purely runnable code?"
I've received intuitive feedback like that, so I'm merging my comment on it here too.
The mark-up's two current responsibilities (𝑎. managing small token VMs; 𝑏. managing the availability of data), if expressed in a programming language, would be declarative and delegated to the engine anyway, since a token's code is not entrusted to do those two kinds of work. The only change would be the additional requirement that users of TokenScripts (e.g. marketplaces) implement the runtime of that programming language.
My comment on mark-up's diminishing power, seen in the light of W3C's failed attempts like XLink, is this: the evolutionary force that didn't let a JavaScript-only web take over HTML is still relevant in the budding decentralised web. Just replace "web content" with deal-offers and tokens, and "search engines" with "markets". That is, JavaScript-only websites aren't indexed; similarly, JavaScript-only tokens wouldn't be on the market.
Final notes
- Under today's hybrid design: if it's in XML, it's about how the engine should work for the token; if it's not, it's the JavaScript the token needs to run for its functionality.
† A quick example is a token holding a key in the keystore that is only supposed to be used within that very token. Another example is DvP (delivery-versus-payment) security, where the delivery side (like a CryptoKitty) works with the payment side (like a cryptocurrency) in a single transaction, and one side is potentially the adversary of the other.
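The DvP point can be illustrated abstractly (a toy sketch under assumed names — `atomicSwap` and the ledger shape are invented for illustration, not TokenScript or any blockchain's API): both legs of the deal apply inside one atomic step, so an adversarial counterparty can't take delivery without the payment leg also settling.

```javascript
// Toy sketch of atomic delivery-versus-payment: both legs apply or neither.
function atomicSwap(ledger, deliveryLeg, paymentLeg) {
  const snapshot = structuredClone(ledger); // taken so we can roll back
  try {
    deliveryLeg(ledger);
    paymentLeg(ledger);
    return ledger;
  } catch (err) {
    Object.assign(ledger, snapshot); // roll back: neither leg settles
    throw err;
  }
}

const ledger = { kittyOwner: 'alice', balances: { alice: 0, bob: 500 } };

atomicSwap(
  ledger,
  (l) => { l.kittyOwner = 'bob'; },   // delivery leg: the kitty
  (l) => {                            // payment leg: 100 units
    if (l.balances.bob < 100) throw new Error('insufficient funds');
    l.balances.bob -= 100;
    l.balances.alice += 100;
  }
);
console.log(ledger.kittyOwner, ledger.balances.alice); // 'bob' 100
```

If the payment leg throws (say, insufficient funds), the rollback restores the kitty to its original owner — which is exactly the guarantee each VM-isolated side relies on when the other side may be adversarial.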