Igor Canadi 1510339e52 Speed up FindObsoleteFiles
Summary:
Here's one solution we discussed for speeding up FindObsoleteFiles: keep a set of all files in DBImpl and update the set every time we create a file. I probably missed a few other spots where we create a file.
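
Roughly, the idea is something like this (a hypothetical sketch, not the actual DBImpl code):

    #include <cstdint>
    #include <set>
    #include <vector>

    class LiveFileTracker {
     public:
      // Call these wherever a file is created or deleted.
      void FileCreated(uint64_t number) { live_files_.insert(number); }
      void FileDeleted(uint64_t number) { live_files_.erase(number); }

      // Any file on disk whose number is not in the set is obsolete.
      std::vector<uint64_t> FindObsolete(const std::vector<uint64_t>& on_disk) const {
        std::vector<uint64_t> obsolete;
        for (uint64_t f : on_disk) {
          if (live_files_.count(f) == 0) obsolete.push_back(f);
        }
        return obsolete;
      }

     private:
      std::set<uint64_t> live_files_;
    };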

It might speed things up a bit, but it makes the code uglier. I don't really like it.

A much better approach would be to abstract all file handling into a separate class. Think of it as a layer between DBImpl and Env. Having a separate class deal with file naming and deletion would both keep the code cleaner (especially with the huge DBImpl) and speed things up. It will take a huge effort to do this, though.

Let's discuss offline today.

Test Plan: Ran ./db_stress, verified that files are getting deleted

Reviewers: dhruba, haobo, kailiu, emayanke

Reviewed By: dhruba

Differential Revision: https://reviews.facebook.net/D13827
2013-11-08 15:23:46 -08:00

rocksdb: A persistent key-value store for flash storage
Authors: * The Facebook Database Engineering Team
         * Built on earlier work on leveldb by Sanjay Ghemawat
           (sanjay@google.com) and Jeff Dean (jeff@google.com)

This code is a library that forms the core building block for a fast
key-value server, especially suited for storing data on flash drives.
It has a Log-Structured-Merge (LSM) design with flexible tradeoffs
between Write-Amplification-Factor (WAF), Read-Amplification-Factor (RAF)
and Space-Amplification-Factor (SAF). It has multi-threaded compactions,
making it especially suitable for storing multiple terabytes of data in a
single database.

The core of this code has been derived from open-source leveldb.

The code under this directory implements a system for maintaining a
persistent key/value store.

See doc/index.html for more explanation.
See doc/impl.html for a brief overview of the implementation.

The public interface is in include/*.  Callers should not include or
rely on the details of any other header files in this package.  Those
internal APIs may be changed without warning.

Guide to header files:

include/rocksdb/db.h
    Main interface to the DB: Start here
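
    For example, a minimal sketch of opening a database and doing a Put
    and a Get (the path and key names here are examples, not defaults):

        #include <cassert>
        #include <string>
        #include "rocksdb/db.h"

        int main() {
          rocksdb::DB* db;
          rocksdb::Options options;
          options.create_if_missing = true;
          rocksdb::Status status = rocksdb::DB::Open(options, "/tmp/testdb", &db);
          assert(status.ok());

          status = db->Put(rocksdb::WriteOptions(), "key", "value");
          std::string value;
          status = db->Get(rocksdb::ReadOptions(), "key", &value);

          delete db;   // closes the database
          return 0;
        }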

include/rocksdb/options.h
    Control over the behavior of an entire database, and also
    control over the behavior of individual reads and writes.
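
    A small sketch of the usual pattern (the tuning values below are
    illustrative only, not recommendations):

        rocksdb::Options options;
        options.create_if_missing = true;
        options.write_buffer_size = 64 << 20;       // e.g. a 64 MB memtable
        options.max_background_compactions = 4;     // use multi-threaded compactions

        rocksdb::WriteOptions write_options;
        write_options.sync = true;                  // make this write durable before returning

        rocksdb::ReadOptions read_options;
        read_options.verify_checksums = true;       // verify checksums for this read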

include/rocksdb/comparator.h
    Abstraction for user-specified comparison function.  If you want
    just bytewise comparison of keys, you can use the default comparator,
    but clients can write their own comparator implementations if they
    want custom ordering (e.g. to handle different character
    encodings, etc.)
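
    A minimal sketch of a custom ordering; the class and the
    decimal-string key encoding here are hypothetical, not part of
    rocksdb:

        #include <cstdlib>
        #include <string>
        #include "rocksdb/comparator.h"

        class NumericComparator : public rocksdb::Comparator {
         public:
          int Compare(const rocksdb::Slice& a, const rocksdb::Slice& b) const {
            long x = atol(a.ToString().c_str());
            long y = atol(b.ToString().c_str());
            if (x < y) return -1;
            if (x > y) return +1;
            return 0;
          }
          const char* Name() const { return "NumericComparator"; }
          // These can be no-ops; they only help shrink internal index keys.
          void FindShortestSeparator(std::string*, const rocksdb::Slice&) const {}
          void FindShortSuccessor(std::string*) const {}
        };

        // NumericComparator cmp;
        // options.comparator = &cmp;   // must outlive the DB and stay consistent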

include/rocksdb/iterator.h
    Interface for iterating over data. You can get an iterator
    from a DB object.
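
    A minimal sketch of a full scan:

        // Assumes an open rocksdb::DB* db, as in the db.h example above.
        rocksdb::Iterator* it = db->NewIterator(rocksdb::ReadOptions());
        for (it->SeekToFirst(); it->Valid(); it->Next()) {
          // it->key() and it->value() are Slices, valid until the iterator moves.
          std::string k = it->key().ToString();
          std::string v = it->value().ToString();
        }
        assert(it->status().ok());   // check for errors hit during the scan
        delete it;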

include/rocksdb/write_batch.h
    Interface for atomically applying multiple updates to a database.
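
    A minimal sketch of an atomic multi-update:

        #include "rocksdb/write_batch.h"

        // Assumes an open rocksdb::DB* db.
        rocksdb::WriteBatch batch;
        batch.Delete("key1");
        batch.Put("key2", "value2");
        rocksdb::Status s = db->Write(rocksdb::WriteOptions(), &batch);
        // Either both updates are applied or, on error, neither is.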

include/rocksdb/slice.h
    A simple module for maintaining a pointer and a length into some
    other byte array.
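
    A minimal sketch:

        #include <string>
        #include "rocksdb/slice.h"

        rocksdb::Slice s1 = "hello";            // wraps a C string
        std::string str("world");
        rocksdb::Slice s2 = str;                // wraps str's bytes, no copy
        rocksdb::Slice s3(str.data(), 3);       // first three bytes: "wor"
        // A Slice does not own its data: str must outlive s2 and s3.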

include/rocksdb/status.h
    Status is returned from many of the public interfaces and is used
    to report success and various kinds of errors.
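
    A typical pattern:

        #include <cstdio>

        // Assumes an open rocksdb::DB* db.
        std::string value;
        rocksdb::Status s = db->Get(rocksdb::ReadOptions(), "some-key", &value);
        if (s.IsNotFound()) {
          // the key simply does not exist; not a failure
        } else if (!s.ok()) {
          fprintf(stderr, "Get failed: %s\n", s.ToString().c_str());
        }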

include/rocksdb/env.h
    Abstraction of the OS environment.  A posix implementation of
    this interface is in util/env_posix.cc
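
    A small sketch using the default Env directly (the directory name is
    just an example):

        #include <string>
        #include <vector>
        #include "rocksdb/env.h"

        rocksdb::Env* env = rocksdb::Env::Default();   // the built-in posix Env
        std::vector<std::string> children;
        rocksdb::Status s = env->GetChildren("/tmp/testdb", &children);
        // A custom Env (for example the HDFS one under hdfs/) is plugged in
        // through Options::env before the database is opened.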

include/rocksdb/table_builder.h
    Lower-level modules that most clients probably won't use directly

include/rocksdb/cache.h
    An API for the block cache.
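
    A minimal sketch, assuming Options::block_cache accepts the cache
    returned by NewLRUCache:

        #include "rocksdb/cache.h"

        rocksdb::Options options;
        options.block_cache = rocksdb::NewLRUCache(100 * 1048576);   // ~100 MB of blocks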

include/rocksdb/compaction_filter.h
    An API for an application-supplied filter invoked on every compaction.
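
    A minimal sketch of a hypothetical filter, assuming the
    Filter()/Name() interface of this version (returning true from
    Filter() drops the entry):

        #include <string>
        #include "rocksdb/compaction_filter.h"
        #include "rocksdb/slice.h"

        // Hypothetical policy: drop entries whose value is empty.
        class DropEmptyValueFilter : public rocksdb::CompactionFilter {
         public:
          bool Filter(int level, const rocksdb::Slice& key,
                      const rocksdb::Slice& existing_value,
                      std::string* new_value, bool* value_changed) const {
            return existing_value.empty();   // true means: remove this entry
          }
          const char* Name() const { return "DropEmptyValueFilter"; }
        };

        // DropEmptyValueFilter filter;
        // options.compaction_filter = &filter;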

include/rocksdb/filter_policy.h
    An API for configuring a bloom filter.
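
    A minimal sketch, assuming Options::filter_policy takes (and the
    caller owns) the raw pointer returned by NewBloomFilterPolicy:

        #include "rocksdb/filter_policy.h"

        rocksdb::Options options;
        options.filter_policy = rocksdb::NewBloomFilterPolicy(10);   // ~10 bits per key
        // ... open and use the database ...
        // delete options.filter_policy;   // the caller owns the policy object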

include/rocksdb/memtablerep.h
    An API for implementing a memtable.

include/rocksdb/statistics.h
    An API to retrieve various database statistics.
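
    A minimal sketch, assuming the CreateDBStatistics() factory and the
    getTickerCount() accessor of this version:

        #include <cstdint>
        #include "rocksdb/statistics.h"

        rocksdb::Options options;
        options.statistics = rocksdb::CreateDBStatistics();
        // ... run the workload, then read a counter, e.g. block cache misses:
        uint64_t misses =
            options.statistics->getTickerCount(rocksdb::BLOCK_CACHE_MISS);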

include/rocksdb/transaction_log.h
    An API to retrieve transaction logs from a database.
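
    A minimal sketch, assuming DB::GetUpdatesSince and the
    TransactionLogIterator interface of this version:

        #include <memory>
        #include "rocksdb/transaction_log.h"

        // Assumes an open rocksdb::DB* db; start_sequence is a hypothetical
        // sequence number from which to replay updates.
        std::unique_ptr<rocksdb::TransactionLogIterator> iter;
        rocksdb::Status s = db->GetUpdatesSince(start_sequence, &iter);
        for (; s.ok() && iter->Valid(); iter->Next()) {
          rocksdb::BatchResult res = iter->GetBatch();
          // res.sequence is the first sequence number in res.writeBatchPtr
        }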