mirror of https://github.com/facebook/rocksdb.git
fbd2dafc9f
Summary: Add the MultiGet API to allow prefetching. With a file size of 1.5G and a hash ratio of 0.9, the file holds 115M keys and ends up with 2 hash functions; lookup QPS is ~4.9M/s vs. ~3M/s for Get(). It is tricky to set the parameters right. Since file size is determined by a power-of-two factor, the number of keys per file is fixed. With a big file size (and thus fewer files), there is a greater chance of wasting a lot of space in the last file, lowering space utilization. Using a smaller file size improves that, but harms lookup speed.

Test Plan: db_bench

Reviewers: yhchiang, sdong, igor

Reviewed By: sdong

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D23673
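The summary describes batching point lookups so the reader can prefetch the hashed buckets for all keys before probing them, which is where the QPS gain over per-key Get() comes from. Below is a minimal, hypothetical sketch of driving that path through the public DB::MultiGet() API; the 0.9 hash ratio matches the value quoted above, but the NewCuckooTableFactory(double) overload (the era's signature; newer versions take a CuckooTableOptions struct), the /tmp path, and the toy key set are assumptions for illustration, and the commit's actual changes may live at a lower, internal reader level.

```cpp
// Hypothetical sketch: batched point lookups via DB::MultiGet() against a
// cuckoo-hash table format. Assumptions (not taken from the commit): the
// NewCuckooTableFactory(double hash_table_ratio) overload, the /tmp path,
// and the toy key set.
#include <string>
#include <vector>

#include "rocksdb/db.h"
#include "rocksdb/options.h"
#include "rocksdb/table.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;
  options.allow_mmap_reads = true;  // the cuckoo table reader mmaps its files
  // 0.9 hash table occupancy, as quoted in the summary above.
  options.table_factory.reset(rocksdb::NewCuckooTableFactory(0.9));

  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/multiget_demo", &db);
  if (!s.ok()) return 1;

  // Load a handful of keys and flush so they land in a cuckoo-format SST.
  for (int i = 0; i < 1000; ++i) {
    db->Put(rocksdb::WriteOptions(), "key" + std::to_string(i),
            "value" + std::to_string(i));
  }
  db->Flush(rocksdb::FlushOptions());

  // MultiGet hands the reader all keys at once, so it can prefetch the
  // hashed buckets before probing them, instead of issuing one Get() per key.
  std::vector<std::string> key_storage;
  for (int i = 0; i < 1000; i += 100) {
    key_storage.push_back("key" + std::to_string(i));
  }
  std::vector<rocksdb::Slice> keys(key_storage.begin(), key_storage.end());

  std::vector<std::string> values;
  std::vector<rocksdb::Status> statuses =
      db->MultiGet(rocksdb::ReadOptions(), keys, &values);

  delete db;
  return 0;
}
```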
backupable
compacted_db
document
geodb
merge_operators
redis
spatialdb
ttl
write_batch_with_index
merge_operators.h