Thanks! Please pardon me for using the question as an opportunity to explain TokenScript.
I agree that the Semantic Web is a tried and failed path, not only because it is information-oriented (instead of application-oriented), but also because it isn't evolutionary: the websites that use it do not gain an evolutionary advantage over the websites that don't. These lessons were taken to heart in the design of TokenScript.
One requirement of composability is security. It's vital that each token's code runs in its own VM† and interacts with other tokens or the underlying web application through a layer of protection. The other is privacy: e.g. a token providing a zero-knowledge proof that its owner has a balance above 1,000 must not reveal the actual balance (say, 1,000,000) to the websites asking for such a proof, hence the token must not share memory or runtime with them.
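The isolation principle can be illustrated with a minimal sketch. All names here are invented for illustration, and the "proof" is just interface restriction, not real zero-knowledge cryptography; the point is only that the single-bit answer is the only thing that crosses the protection layer:

```python
class TokenVM:
    """Hypothetical sandbox: each token's code and data live in a
    separate VM instance; nothing is shared with the host website."""

    def __init__(self, secret_balance: int):
        # Private state, invisible to the host and to other TokenVMs.
        self._balance = secret_balance

    def prove_balance_above(self, threshold: int) -> bool:
        # Only the one-bit answer crosses the protection layer.
        # (A real design would hand back a zero-knowledge proof object
        # rather than compute on the plaintext balance like this sketch.)
        return self._balance > threshold


# The website (host) asks for a proof; it never learns the balance.
vm = TokenVM(secret_balance=1_000_000)
print(vm.prove_balance_above(1000))  # True - and that is all the host sees
```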
The other half of the XML part manages the availability of data, e.g.:
How many new keys does a token need in order to work, and are those keys allowed to leave the enclave / should they be backed up?
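The kind of information declared there could be modelled roughly as follows. This is a hypothetical Python rendering of such declarations, not TokenScript's actual XML schema; the field names are made up for illustration:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class KeyRequirement:
    """Hypothetical, declarative description of a key a token needs.
    The engine, not the token's own code, acts on these declarations."""
    purpose: str            # what the key is for
    may_leave_enclave: bool # is the key allowed outside secure hardware?
    needs_backup: bool      # should the engine include it in backups?


# e.g. a token that needs one enclave-bound signing key:
ticket_key = KeyRequirement(
    purpose="sign ticket transfers",
    may_leave_enclave=False,
    needs_backup=False,
)
print(ticket_key.may_leave_enclave)  # False - the engine keeps it in the enclave
```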
"How if we replace the mark-up part to make TokenScript purely runnable code?"
I have received intuitive feedback like that before, so I thought of merging my reply to that comment here too.
The mark-up's two current responsibilities (a. managing small token VMs; b. managing the availability of data), if written in a programming language, would be declarative and delegated to the engine anyway, since tokens' code is not entrusted with those two kinds of work; the additional cost would be that every user of TokenScripts (e.g. a marketplace) has to implement the runtime of that programming language.
† A quick example is a token that has a key in the keystore which is only supposed to be used within that very token. Another example is DvP (Delivery versus Payment) security, where the delivery side (e.g. a CryptoKitty) and the payment side (e.g. a cryptocurrency) work together in a single transaction, while one side is potentially the adversary of the other.
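The DvP case boils down to atomicity between mutually distrusting sides: either both legs of the swap take effect, or neither does. A sketch of that all-or-nothing logic, with invented names and plain Python in place of actual chain code:

```python
class SwapFailed(Exception):
    pass


def atomic_dvp(deliver, pay):
    """Run the delivery side and the payment side as one unit.
    `deliver` and `pay` are callables; `deliver` returns an
    undo-callable so a failed payment leaves no partial state."""
    undo_deliver = deliver()      # e.g. transfer the CryptoKitty
    try:
        pay()                     # e.g. transfer the cryptocurrency
    except Exception as exc:
        undo_deliver()            # roll back: the adversary gets nothing
        raise SwapFailed("payment failed, delivery rolled back") from exc


# Sketch usage: the payment side turns out to be adversarial.
kitty_owner = {"kitty": "alice"}

def deliver():
    kitty_owner["kitty"] = "bob"
    return lambda: kitty_owner.update(kitty="alice")

def pay():
    raise RuntimeError("buyer never paid")

try:
    atomic_dvp(deliver, pay)
except SwapFailed:
    pass
print(kitty_owner["kitty"])  # alice - the delivery was rolled back
```

On a real chain, the rollback is not hand-written like this; the transaction's atomicity provides it. The sketch only shows why the two sides must be bound into one unit when each side may cheat.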