RocksDB: A Persistent Key-Value Store for Flash and RAM Storage
RocksDB is developed and maintained by the Facebook Database Engineering Team. It is built on earlier work on LevelDB by Sanjay Ghemawat (sanjay@google.com) and Jeff Dean (jeff@google.com).
This code is a library that forms the core building block for a fast key-value server, especially suited for storing data on flash drives. It has a log-structured merge (LSM) design with flexible tradeoffs between Write-Amplification-Factor (WAF), Read-Amplification-Factor (RAF) and Space-Amplification-Factor (SAF). It has multi-threaded compactions, making it especially suitable for storing multiple terabytes of data in a single database.
Start with example usage here: https://github.com/facebook/rocksdb/tree/master/examples
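As a quick orientation before reading the full examples, here is a minimal sketch of opening a database and doing a put/get with the public C++ API. The database path and key/value strings are placeholders chosen for this sketch, not anything prescribed by the library.

```cpp
#include <cassert>
#include <string>

#include "rocksdb/db.h"       // public headers live under include/rocksdb/
#include "rocksdb/options.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;  // create the database if it does not exist

  rocksdb::DB* db = nullptr;
  // "/tmp/rocksdb_example" is only a placeholder path for this sketch.
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/rocksdb_example", &db);
  assert(s.ok());

  // Basic key-value operations.
  s = db->Put(rocksdb::WriteOptions(), "key", "value");
  assert(s.ok());

  std::string value;
  s = db->Get(rocksdb::ReadOptions(), "key", &value);
  assert(s.ok() && value == "value");

  delete db;  // closes the database
  return 0;
}
```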
See the GitHub wiki for more explanation.
The public interface is in include/. Callers should not include or rely on the details of any other header files in this package; those internal APIs may be changed without warning.
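To illustrate the distinction, a sketch of which includes an application should and should not use; the internal header names below are only examples of files that live outside include/ and are not a stable API.

```cpp
// OK: public, supported headers shipped under include/rocksdb/.
#include "rocksdb/db.h"
#include "rocksdb/options.h"
#include "rocksdb/slice.h"

// Not OK: internal headers (for example db/version_set.h or util/coding.h)
// are implementation details and may change or disappear without warning.
```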
Design discussions are conducted in https://www.facebook.com/groups/rocksdb.dev/