Support caching model #94

@kantai

Description

One of the big runtime costs associated with contract-call is that the VM needs to load the invoked contract on each invocation. This can be addressed by implementing a contract cache: for each transaction, the Clarity VM can keep an LRU cache of the last K contracts loaded. This would substantially speed up contract-calls (essentially reducing the cost of a cached call to the context switch), dramatically improve performance, and reduce the cost of using library contracts. That would encourage modularization, code re-use, etc., and reduce the need for workarounds like manually copying library code into a contract to avoid contract-calls.
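As a rough sketch of the idea, here is a minimal per-transaction LRU cache of the last K contracts. All names and types are illustrative, not the actual Clarity VM types; the contract body is a stand-in `String` where the real cache would hold a parsed contract.

```rust
use std::collections::HashMap;

/// Hypothetical sketch of an LRU cache keyed by contract identifier.
struct ContractCache {
    capacity: usize,
    /// Identifiers ordered least → most recently used.
    order: Vec<String>,
    /// Identifier → loaded contract (stand-in for a parsed contract).
    contracts: HashMap<String, String>,
}

impl ContractCache {
    fn new(capacity: usize) -> Self {
        ContractCache { capacity, order: Vec::new(), contracts: HashMap::new() }
    }

    /// Return the cached contract if present, marking it most recently used.
    fn get(&mut self, id: &str) -> Option<&String> {
        if self.contracts.contains_key(id) {
            self.order.retain(|k| k != id);
            self.order.push(id.to_string());
            self.contracts.get(id)
        } else {
            None
        }
    }

    /// Insert a contract, evicting the least recently used entry when full.
    fn insert(&mut self, id: String, contract: String) {
        if self.contracts.contains_key(&id) {
            self.order.retain(|k| k != &id);
        } else if self.contracts.len() >= self.capacity {
            let evicted = self.order.remove(0); // least recently used
            self.contracts.remove(&evicted);
        }
        self.order.push(id.clone());
        self.contracts.insert(id, contract);
    }
}

fn main() {
    let mut cache = ContractCache::new(2);
    cache.insert("lib-a".into(), "(define ...)".into());
    cache.insert("lib-b".into(), "(define ...)".into());
    let _ = cache.get("lib-a"); // lib-a becomes most recently used
    cache.insert("lib-c".into(), "(define ...)".into()); // evicts lib-b
    assert!(cache.get("lib-b").is_none());
    assert!(cache.get("lib-a").is_some());
}
```

A production version would key on fully-qualified contract identifiers and store the parsed AST plus analysis data, but the eviction behavior shown here is the part that would need to be specified.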

Importantly, the cost-tracker must be cache-aware, meaning the caching algorithm would be part of the Clarity spec (this also means that contract authors can write their contracts cognizant of the caching strategy). The cache size should probably be a protocol constant (like 5-10 contracts).
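To make the cache-awareness concrete, a cost-tracker along these lines could charge only a fixed context-switch cost on a cache hit and the full load cost on a miss. The constants and function name below are purely illustrative, not actual Clarity cost parameters:

```rust
// Illustrative cost constants (not real Clarity protocol values).
const CONTEXT_SWITCH_COST: u64 = 10;
const LOAD_COST_PER_BYTE: u64 = 5;

/// Hypothetical cache-aware charging: a hit pays only the context
/// switch; a miss also pays a load cost proportional to contract size.
fn charge_contract_call(cache_hit: bool, contract_size: u64, total_cost: &mut u64) {
    *total_cost += CONTEXT_SWITCH_COST;
    if !cache_hit {
        *total_cost += LOAD_COST_PER_BYTE * contract_size;
    }
}

fn main() {
    let mut cost = 0u64;
    charge_contract_call(false, 100, &mut cost); // first call: miss
    charge_contract_call(true, 100, &mut cost);  // repeat call: hit
    assert_eq!(cost, (10 + 5 * 100) + 10);
}
```

Because the hit/miss outcome changes the charged cost, the eviction policy and cache size must be deterministic and consensus-critical, which is why they would belong in the Clarity spec rather than being a node-local optimization.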

We could also explore caching for MARF data lookups, but I think that's something that can be done pretty readily at the application layer with let bindings.
