feiben edited this page Nov 7, 2015 · 5 revisions

We publish LunarBase as a document-oriented NoSQL database, not merely a key-value store or a cache solution. Because its big cache system is self-contained, LunarBase has low deployment dependencies and cost: you no longer have to deploy a separate key-value cache for your hot data.

It is:

  1. backed by persistent DB storage;

  2. equipped with a self-contained big cache system, enabling concurrency and high throughput;

  3. freely scalable and document-oriented, with no predefined schema, as all NoSQL databases are designed to be;

  4. extremely fast at retrieval, benefiting from the principles of Google Bigtable;

  5. multi-purpose: one DB serves both data storage and search, and can even act as a full-text engine with simple word-parsing plugins.

A single node supports:

A single DB stores up to 2 billion records, each limited to 32 KB; hence the total size of one DB is 64 TB.
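The capacity figure can be sanity-checked with a little arithmetic. The 64 TB total works out exactly if "2 billion" is read as 2^31 records and "32K bytes" as 32 KiB — these binary interpretations are our assumption, not stated in the page:

```python
# Sanity-check the single-DB capacity claim, assuming binary units:
# "2 billion" records taken as 2**31, record limit taken as 32 KiB.
MAX_RECORDS = 2**31            # ~2.1 billion records per DB
MAX_RECORD_BYTES = 32 * 1024   # 32 KiB per record

total_bytes = MAX_RECORDS * MAX_RECORD_BYTES
print(total_bytes == 64 * 2**40)  # True: exactly 64 TiB
```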

One server can run several LunarBase instances, each managing one DB; the number of instances depends on the hardware capacity.

Concurrency support depends on the number of CPU cores: there is a one-to-one mapping between internal threads and CPU cores.
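That sizing rule — one worker thread per core — can be sketched with Python's standard library. This is an illustrative sketch only; `handle_query` is a hypothetical placeholder, not LunarBase's API:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def handle_query(q: str) -> str:
    """Hypothetical per-query work; stands in for the database internals."""
    return q.upper()

# Size the pool one-to-one with CPU cores, mirroring the thread/core mapping.
cores = os.cpu_count() or 1
with ThreadPoolExecutor(max_workers=cores) as pool:
    results = list(pool.map(handle_query, ["get:a", "get:b"]))

print(results)  # ['GET:A', 'GET:B']
```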

Memory consumption: LunarBase has an internal MMU and does not produce memory fragmentation after running for a long time. This is a very important property for any server-side middleware.

Query speed: on a 4-core x86 server with a mechanical HDD and 8 GB of memory, LunarBase answers a query in about 10 ms. If the big cache is enabled and all hot data fits in it, queries on hot data are extremely fast.
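The quoted numbers also give a rough upper bound on uncached throughput, under our own assumption (not a published benchmark) that each core handles one query at a time:

```python
# Back-of-the-envelope throughput from the quoted figures:
# 10 ms per query, 4 cores, one query in flight per core (assumed).
cores = 4
latency_ms = 10  # quoted response time per query

queries_per_second = cores * 1000 / latency_ms
print(queries_per_second)  # 400.0
```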
