Commit graph

368 commits

Author SHA1 Message Date
Danny Al-Gaaf e55aea5512 document_db.cc: fix assert
Check lhs and rhs, instead of checking rhs twice.

Fix for:

[utilities/document/document_db.cc:36] ->
 [utilities/document/document_db.cc:36]: (style) Same expression on both
 sides of '&&'.

Signed-off-by: Danny Al-Gaaf <danny.al-gaaf@bisect.de>
2014-09-30 23:30:32 +02:00
Danny Al-Gaaf b140375565 ttl/ttl_test.cc: prefer prefix ++operator for non-primitive types
Signed-off-by: Danny Al-Gaaf <danny.al-gaaf@bisect.de>
2014-09-30 23:30:32 +02:00
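As a quick illustration of the preference above (an editor's sketch, not code from the commit; the container is hypothetical), prefix increment avoids constructing and discarding a temporary copy for iterator-like types:

    std::map<std::string, int> counts;
    // ++it modifies in place; it++ must create a temporary holding the old value.
    for (std::map<std::string, int>::iterator it = counts.begin();
         it != counts.end(); ++it) {
      // use it->first / it->second
    }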
Danny Al-Gaaf 43c789c8f2 spatialdb/spatial_db.cc: use !empty() instead of 'size() > 0'
Use empty() since it is to be preferred: the standard guarantees it has
constant time complexity regardless of the container type, which is not
guaranteed for size().

Signed-off-by: Danny Al-Gaaf <danny.al-gaaf@bisect.de>
2014-09-30 23:30:32 +02:00
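A minimal before/after sketch of the style change described above (editor's illustration, hypothetical container name):

    // Before: size() is not guaranteed to be O(1) for every container type.
    if (coords.size() > 0) { /* ... */ }
    // After: empty() is guaranteed constant time by the standard.
    if (!coords.empty()) { /* ... */ }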
Danny Al-Gaaf 0de452ee9f document_db.cc: pass const parameter by reference
Signed-off-by: Danny Al-Gaaf <danny.al-gaaf@bisect.de>
2014-09-30 23:30:32 +02:00
Danny Al-Gaaf 873f1356a1 db_ttl_impl.h: pass func parameter by reference
Fix for:

[utilities/ttl/db_ttl_impl.h:209]: (performance) Function parameter
 'merge_op' should be passed by reference.

Signed-off-by: Danny Al-Gaaf <danny.al-gaaf@bisect.de>
2014-09-30 23:30:32 +02:00
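A sketch of the kind of change cppcheck asks for in the two commits above (the function name is made up, not the actual db_ttl_impl.h code):

    // Passing by value copies the std::string argument on every call.
    void RegisterMergeOperator(std::string merge_op_name);
    // Passing by const reference avoids the copy without changing behavior.
    void RegisterMergeOperator(const std::string& merge_op_name);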
Lei Jin fd5d80d55e CompactedDB: log using the correct info_log
Summary:
info_log from the supplied Options can be nullptr. Use the one from
db_impl instead. Also call flush after that, since no more logging will
happen and LOG can otherwise contain partial output

Test Plan: verified with db_bench

Reviewers: igor, yhchiang, sdong

Reviewed By: sdong

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D24183
2014-09-29 12:45:04 -07:00
Lei Jin 2faf49d5f1 use GetContext to replace callback function pointer
Summary:
Instead of passing a callback function pointer and its arg on the Table::Get()
interface, pass a GetContext. This makes the interface cleaner and
possibly faster. Also add a fast path for SaveValue()

Test Plan: make all check

Reviewers: igor, yhchiang, sdong

Reviewed By: sdong

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D24057
2014-09-29 11:09:09 -07:00
Lei Jin fbd2dafc9f CompactedDBImpl::MultiGet() for better CuckooTable performance
Summary:
Add the MultiGet API to allow prefetching.
With a file size of 1.5G, I configured it to have a 0.9 hash ratio that can
be filled with 115M keys and results in 2 hash functions; the lookup QPS is
~4.9M/s vs. 3M/s for Get().
It is tricky to set the parameters right. Since file size is determined
by a power-of-two factor, the # of keys is fixed in each file. With a
big file size (thus a smaller # of files), we have a greater chance of
wasting a lot of space in the last file - lower space utilization as a
result. Using a smaller file size can improve the situation, but that
harms lookup speed.

Test Plan: db_bench

Reviewers: yhchiang, sdong, igor

Reviewed By: sdong

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D23673
2014-09-25 13:34:51 -07:00
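For context, a hedged sketch of calling the public MultiGet API referred to above, given an open rocksdb::DB* db (keys are placeholders; the prefetching behavior inside CompactedDBImpl is internal):

    std::vector<rocksdb::Slice> keys = {"key1", "key2", "key3"};
    std::vector<std::string> values;
    // One call looks up all keys, which lets the implementation prefetch.
    std::vector<rocksdb::Status> statuses =
        db->MultiGet(rocksdb::ReadOptions(), keys, &values);
    for (size_t i = 0; i < statuses.size(); ++i) {
      if (statuses[i].ok()) { /* values[i] holds the result for keys[i] */ }
    }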
Lei Jin 3c68006109 CompactedDBImpl
Summary:
Add a CompactedDBImpl that will be enabled when calling OpenForReadOnly()
and the DB only has one level (>0) of files. As a performance comparison,
CuckooTable performs 2.1M/s with CompactedDBImpl vs. 1.78M/s with
ReadOnlyDBImpl.

Test Plan: db_bench

Reviewers: yhchiang, igor, sdong

Reviewed By: sdong

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D23553
2014-09-25 11:14:01 -07:00
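A minimal sketch of the read-only open path this commit optimizes (the path is a placeholder; whether CompactedDBImpl is selected depends on the DB having a single non-zero level of files, as described above):

    rocksdb::DB* db = nullptr;
    rocksdb::Options options;
    rocksdb::Status s =
        rocksdb::DB::OpenForReadOnly(options, "/path/to/db", &db);
    if (s.ok()) {
      std::string value;
      s = db->Get(rocksdb::ReadOptions(), "some_key", &value);
    }
    delete db;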
sdong d0de413f4d WriteBatchWithIndex to allow different Comparators for different column families
Summary:
Previously, one single column family is given to WriteBatchWithIndex to index keys for all column families. An extra map from column family ID to comparator is maintained which can override the default comparator given in the constructor. A WriteBatchWithIndex::SetComparatorForCF() is added for users to add comparators per column family.

Also move more code into the anonymous namespace.

Test Plan: Add a unit test

Reviewers: ljin, igor

Reviewed By: igor

Subscribers: dhruba, leveldb, yhchiang

Differential Revision: https://reviews.facebook.net/D23355
2014-09-22 13:47:39 -07:00
Torrie Fischer ed9a2df8cc fix unity build 2014-09-19 16:10:33 -07:00
Yueh-Hsuan Chiang 035043559d Fixed a signed-unsigned comparison in spatial_db.cc -- issue #293
Summary:
Fixed a signed-unsigned comparison in spatial_db.cc

utilities/spatialdb/spatial_db.cc:542:38: error: comparison between signed and unsigned integer expressions [-Werror=sign-compare]
cc1plus: all warnings being treated as errors
make: *** [utilities/spatialdb/spatial_db.o] Error 1

Test Plan:
make spatial_db_test
./spatial_db_test

Reviewers: ljin, sdong, reddragon, igor

Reviewed By: reddragon

Subscribers: reddragon, leveldb

Differential Revision: https://reviews.facebook.net/D23565
2014-09-18 10:57:20 -07:00
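The usual shape of this class of warning and its fix, as an editor's illustration (not the exact spatial_db.cc line):

    std::vector<uint64_t> tile_ids;
    // warning: comparison between signed and unsigned integer expressions
    for (int i = 0; i < tile_ids.size(); ++i) { /* ... */ }
    // fix: use the unsigned type that size() actually returns
    for (size_t i = 0; i < tile_ids.size(); ++i) { /* ... */ }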
Igor Canadi a52cecb56c Fix Mac compile 2014-09-09 18:42:35 -07:00
Xiaozheng Tie 6cc12860f0 Added a few statistics for BackupableDB
Summary:
Added the following statistics to BackupableDB:

1. Number of successful and failed backups in class BackupStatistics
2. Time taken to do a backup
3. Number of files in a backup

1 is implemented in the BackupStatistics class
2 and 3 are added in the BackupMeta and BackupInfo class

Test Plan:
1 can be tested using BackupStatistics::ToString(),
2 and 3 can be tested in the BackupInfo class

Reviewers: sdong, igor2, ljin, igor

Reviewed By: igor

Differential Revision: https://reviews.facebook.net/D22785
2014-09-09 13:44:42 -07:00
Igor Canadi 55114e7f40 Some updates for SpatialDB
Summary:
1. add db statistics
2. write out batch every millionth write

Test Plan: unit tests

Reviewers: ljin, sdong, yinwang

Reviewed By: yinwang

Differential Revision: https://reviews.facebook.net/D22755
2014-09-08 17:00:43 -07:00
Nik Bougalis d40c1f742f Add missing break statement 2014-09-05 20:50:29 -07:00
liuhuahang bb6ae0f80c fix more compile warnings
N/A

Change-Id: I5b6f9c70aea7d3f3489328834fed323d41106d9f
Signed-off-by: liuhuahang <liuhuahang@zerus.co>
2014-09-05 14:14:37 +08:00
Nik Bougalis 0cd0ec4fe0 Plug memory leak during index creation 2014-09-04 20:52:00 -07:00
Igor Canadi a4816269f1 Relax backupable rate limiting test 2014-09-04 10:22:58 -07:00
liuhuahang ef5b384729 fix a few compile warnings
1. const qualifiers on return types make no sense and trigger a compile warning: warning: type qualifiers ignored on function return type [-Wignored-qualifiers]

2. class HistogramImpl has virtual functions and thus should have a virtual destructor

3. with some toolchains, the macro __STDC_FORMAT_MACROS is predefined and thus should be checked before being defined

Change-Id: I69747a03bfae88671bfbb2637c80d17600159c99
Signed-off-by: liuhuahang <liuhuahang@zerus.co>
2014-09-04 23:06:23 +08:00
Igor Canadi 2a8faf797c Compact SpatialDB as we go, not at the end 2014-08-28 11:32:49 -07:00
Igor Canadi 1913ce27b9 more concurrent flushes in SpatialDB 2014-08-27 08:48:31 -07:00
Igor Canadi 808e809366 Adjust SpatialDB column family options 2014-08-27 07:56:10 -07:00
Igor Canadi 0c39f54dfb Use Vector memtable when bulk loading SpatialDB 2014-08-26 19:23:09 -07:00
Igor Canadi ff6ec0eb17 Optimize SpatialDB
Summary:
Two things:
1. Use hash-based index for data column family
2. Use Get() instead of Iterator Seek() when DB is opened read-only

Test Plan: added read-only test in unit test

Reviewers: yinwang

Reviewed By: yinwang

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D22323
2014-08-25 17:50:18 -07:00
Lei Jin 384400128f move block based table related options BlockBasedTableOptions
Summary:
I will move compression related options in a separate diff since this
diff is already pretty lengthy.
I guess I will also need to change JNI accordingly :(

Test Plan: make all check

Reviewers: yhchiang, igor, sdong

Reviewed By: igor

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D21915
2014-08-25 14:22:05 -07:00
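After this change, block-based table knobs live on a separate struct and are plugged in through the table factory. A hedged sketch (field names are ones that existed around this time; values are arbitrary):

    rocksdb::BlockBasedTableOptions table_options;
    table_options.block_size = 16 * 1024;                       // moved out of Options
    table_options.block_cache = rocksdb::NewLRUCache(64 << 20); // moved out of Options
    rocksdb::Options options;
    options.table_factory.reset(
        rocksdb::NewBlockBasedTableFactory(table_options));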
Igor Canadi 01cbdd2aae Optimize storage parameters for spatialDB
Summary: We need to start compression at level 1, while OptimizeForLevelCompaction() only sets up rocksdb to start compressing at level 2. I also adjusted some other things.

Test Plan: compiles

Reviewers: yinwang

Reviewed By: yinwang

Differential Revision: https://reviews.facebook.net/D22203
2014-08-20 11:14:01 -07:00
sdong 28b5c76004 WriteBatchWithIndex: a wrapper of WriteBatch, with a searchable index
Summary:
Add WriteBatchWithIndex so that a user can query data out of a WriteBatch, to support MongoDB's read-its-own-write.

WriteBatchWithIndex uses a skiplist to store the binary index. The index stores the offset of the entry in the write batch. When searching for a key, the key for the entry is read by reading the entry from the write batch at that offset.

Define a new iterator class for querying data out of WriteBatchWithIndex. A user can create an iterator of the write batch for one column family, seek to a key and keep calling Next() to see next entries.

I will add more unit tests if people are OK with this API.

Test Plan:
make all check
Add unit tests.

Reviewers: yhchiang, igor, MarkCallaghan, ljin

Reviewed By: ljin

Subscribers: dhruba, leveldb, xjin

Differential Revision: https://reviews.facebook.net/D21381
2014-08-18 16:37:38 -07:00
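A rough usage sketch of the API described above (hedged: method names follow the summary and the header that later shipped as rocksdb/utilities/write_batch_with_index.h; details may have differed in this first version). db is an open rocksdb::DB*:

    rocksdb::WriteBatchWithIndex batch(rocksdb::BytewiseComparator());
    batch.Put("key1", "value1");          // indexed in the internal skiplist
    batch.Delete("key2");
    // Read your own writes before the batch is committed to the DB.
    rocksdb::WBWIIterator* iter = batch.NewIterator();
    for (iter->Seek("key1"); iter->Valid(); iter->Next()) {
      rocksdb::WriteEntry entry = iter->Entry();
      // entry.type, entry.key, entry.value describe the buffered write
    }
    delete iter;
    db->Write(rocksdb::WriteOptions(), batch.GetWriteBatch());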
miguelportilla 93e6b5e9d9 Changes to support unity build:
* Script for building the unity.cc file via Makefile
* Unity executable Makefile target for testing builds
* Source code changes to fix compilation of unity build
2014-08-11 13:22:47 -04:00
Igor Canadi 5e3d5c5f6e Simplify SpatialIndexCursor
Summary:
Since we have enough memory to hold all primary keys loaded from spatial index, it is better if we first load all of them (store them in unordered_set for deduplication) and then query on primary key column family one by one.

We need to dedup all IDs, so we'll end up storing all of them in memory even with the current approach.

Test Plan: ./spatial_db_test is happy

Reviewers: yinwang

Reviewed By: yinwang

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D20949
2014-08-01 16:50:40 -04:00
Igor Canadi 9c5a3f4746 FeatureSet DebugString
Summary: This will help debugging

Test Plan: ran, observed output

Reviewers: yinwang

Reviewed By: yinwang

Differential Revision: https://reviews.facebook.net/D20937
2014-08-01 08:55:37 -07:00
Igor Canadi 28b367db15 Initialize next_id 2014-07-25 10:29:50 -07:00
Igor Canadi c8e70e6bf8 Fix valgrind test 2014-07-24 17:31:23 -07:00
Igor Canadi f780f35b06 Fix compile warning 2014-07-24 16:45:26 -04:00
Igor Canadi 0754d4cb3b SpatialDB change API
Summary: I changed SpatialDB API so that we only specify list of indexes when we create the database. That way, whoever is querying the DB doesn't need to know the full list of indexes and their options.

Test Plan: spatial_db_test

Reviewers: yinwang

Reviewed By: yinwang

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D20571
2014-07-24 16:39:33 -04:00
Igor Canadi e5f6980d99 Fix release compile error 2014-07-23 16:50:31 -04:00
Igor Canadi 6296330417 SpatialDB
Summary:
This diff is adding spatial index support to RocksDB.

When creating the DB user specifies a list of spatial indexes. Spatial indexes can cover different areas and have different resolution (i.e. number of tiles). This is useful for supporting different zoom levels.

Each element inserted into SpatialDB has:
* a bounding box, which determines how the element will be indexed
* string blob, which will usually be WKB representation of the polygon (http://en.wikipedia.org/wiki/Well-known_text)
* feature set, which is a map of key-value pairs, where value can be int, double, bool, null or a string. FeatureSet will be a set of tags associated with geo elements (for example, 'road': 'highway' and similar)
* a list of indexes to insert the element in. For example, a small river element will be inserted in the index for a high zoom level, while a country border will be inserted in all indexes (including the index for the low zoom level).

Each query is executed on single spatial index. Query guarantees that it will return all elements intersecting the specified bounding box, but it might also return some extra non-intersecting elements.

Test Plan: Added bunch of unit tests in spatial_db_test

Reviewers: dhruba, yinwang

Reviewed By: yinwang

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D20361
2014-07-23 14:22:58 -04:00
Igor Canadi 0ff183a0d9 Move include/utilities/*.h to include/rocksdb/utilities/*.h
Summary:
All public headers need to be under `include/rocksdb` directory. Otherwise, clients include our header files like this:

    #include <rocksdb/db.h>
    #include <utilities/backupable_db.h> // still our public header!

Also, internally, we include:

    #include "utilities/backupable/backupable_db.h" // internal header
    #include "utilities/backupable_db.h" // public header

which is confusing.

This way, when we install rocksdb as a system library, we can just copy `include/rocksdb` directory to system's header files. We can't really copy `utilities` directory to system's header files.

Test Plan: compiles

Reviewers: dhruba, ljin, yhchiang, sdong

Reviewed By: sdong

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D20409
2014-07-23 10:21:38 -04:00
Chilledheart 54f4e2f188 Fix clang compiler warnings 2014-07-20 22:57:20 +08:00
Igor Canadi 536f4b31a6 Add missing DB functions to DocumentDB 2014-07-10 09:52:46 -07:00
Igor Canadi f0a8be253e JSON (Document) API sketch
Summary:
This is a rough sketch of our new document API. Would like to get some thoughts and comments about the high-level architecture and API.

I didn't optimize for performance at all. Leaving some low-hanging fruit so that we can be happy when we fix them! :)

Currently, a bunch of features are not supported at all. Indexes can only be specified when creating the database. There is no query planner whatsoever. This will all be added in due time.

Test Plan: Added a simple unit test

Reviewers: haobo, yhchiang, dhruba, sdong, ljin

Reviewed By: ljin

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D18747
2014-07-10 09:31:42 -07:00
Lei Jin 5ef1ba7ff5 generic rate limiter
Summary:
A generic rate limiter that can be shared by threads and rocksdb
instances. Will use this to smooth out write traffic generated by
compaction and flush. This will help us get better p99 behavior on flash
storage.

Test Plan:
unit test output
==== Test RateLimiterTest.Rate
request size [1 - 1023], limit 10 KB/sec, actual rate: 10.374969 KB/sec, elapsed 2002265
request size [1 - 2047], limit 20 KB/sec, actual rate: 20.771242 KB/sec, elapsed 2002139
request size [1 - 4095], limit 40 KB/sec, actual rate: 41.285299 KB/sec, elapsed 2202424
request size [1 - 8191], limit 80 KB/sec, actual rate: 81.371605 KB/sec, elapsed 2402558
request size [1 - 16383], limit 160 KB/sec, actual rate: 162.541268 KB/sec, elapsed 3303500

Reviewers: yhchiang, igor, sdong

Reviewed By: sdong

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D19359
2014-07-08 11:41:57 -07:00
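A hedged sketch of wiring the shared rate limiter into one or more DB instances (assuming the NewGenericRateLimiter() factory and the Options::rate_limiter hook this work exposes; exact header and option names may have differed at this revision):

    // 10 MB/s budget shared by flushes and compactions across both DBs.
    std::shared_ptr<rocksdb::RateLimiter> limiter(
        rocksdb::NewGenericRateLimiter(10 * 1024 * 1024));
    rocksdb::Options options1, options2;
    options1.rate_limiter = limiter;
    options2.rate_limiter = limiter;   // the same object can be shared by instances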
Igor Canadi 583feae8dd fix valgrind test 2014-06-20 15:22:11 +02:00
Igor Canadi 558050939f Fix release compile 2014-06-20 12:02:37 +02:00
Igor Canadi 00b26c3a83 JSONDocument
Summary:
After evaluating options for JSON storage, I decided to implement our own. The reason is that we'll be able to optimize it better and we get to reduce unnecessary dependencies (which is what we'd get with folly).

I also plan to write a serializer/deserializer for JSONDocument with our own binary format similar to BSON. That way we'll store binary JSON format in RocksDB instead of the plain-text JSON. This means less storage and faster deserialization.

There are still some inefficiencies left here. I plan to optimize them after we develop a functioning DocumentDB. That way we can move and iterate faster.

Test Plan: added a unit test

Reviewers: dhruba, haobo, sdong, ljin, yhchiang

Reviewed By: haobo

Subscribers: leveldb

Differential Revision: https://reviews.facebook.net/D18831
2014-06-20 11:14:14 +02:00
Igor Canadi 552c49f0f4 Remove upper bound for rate limiting unit test 2014-06-03 13:58:44 -07:00
Igor Canadi f725e4fe1f Make RateLimiting unit test less flaky 2014-05-20 17:09:38 -07:00
Igor Canadi b2cf95fe38 Call EnableFileDeletions with false as argument 2014-05-20 14:28:51 -07:00
Igor Canadi 7984b9bbb0 BackupableDBTest thread-safe
Summary: We need to lock accesses to some TestEnv variables. Otherwise we get failures like http://ci-builds.fb.com/job/rocksdb_asan_check/657/console

Test Plan: make check

Reviewers: dhruba, haobo, sdong

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D18489
2014-05-05 14:30:24 -07:00
Benjamin Renard 41e5cf2392 Add share_files_with_checksum option to BackupEngine
Summary: added a new option to BackupEngine: if share_files_with_checksum is set to true, sst files are stored in shared_checksum/ and are identified by the triple (file name, checksum, file size) instead of just the file name. This option is targeted at distributed databases that want to backup their primary replica.

Test Plan: unit tests and tested backup and restore on a distributed rocksdb

Reviewers: igor

Reviewed By: igor

Differential Revision: https://reviews.facebook.net/D18393
2014-05-02 17:08:55 -07:00
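A short sketch of turning the new option on (the backup directory path is a placeholder):

    rocksdb::BackupableDBOptions backup_options("/path/to/backups");
    // Identify shared ssts by (file name, checksum, file size) under
    // shared_checksum/ instead of by file name alone.
    backup_options.share_files_with_checksum = true;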
Igor Canadi 3b5fa6bd9b Improve ttl_test
Summary:
Our valgrind tests are failing because ttl_test is kind of flaky. This diff should fix the valgrind issue and make ttl_test less flaky and much faster.

Instead of relying on Env::Default() for getting current time, I expose `Env*` to all TTL functions that are interested in time. That way, I can insert a custom test Env which is then used to provide exactly the times we need. That way, we don't need to sleep anymore -- we control the time.

Test Plan: ttl_test in normal and valgrind run

Reviewers: dhruba, haobo, sdong, yhchiang

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D18399
2014-05-02 07:13:51 -07:00
Igor Canadi d6d67c0efe More s/us fixes 2014-04-30 07:04:36 -07:00
Igor Canadi 76c3dbf5b5 Make ttl_test a bit more robust 2014-04-29 12:19:54 -07:00
Igor Canadi 91ef2eae23 Use new DBWithTTL API in tests 2014-04-28 23:46:24 -04:00
Igor Canadi f868dcbbed Support for adding TTL-ed column family
Summary:
This enables user to add a TTL column family to normal DB.

Next step should be to expand StackableDB and create StackableColumnFamily, such that users can for example add geo-spatial column families to normal DB.

Test Plan: added a test

Reviewers: dhruba, haobo, ljin

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D18201
2014-04-28 20:34:20 -07:00
Igor Canadi fe331c8886 fix valgrind 2014-04-26 14:23:15 -07:00
Igor Canadi a618691a3b Read-only BackupEngine
Summary: Read-only BackupEngine can connect to the same backup directory that a regular BackupEngine is already using. That enables some interesting use-cases (i.e. restoring a replica from the primary's backup directory)

Test Plan: added a unit test

Reviewers: dhruba, haobo, ljin

Reviewed By: ljin

CC: leveldb

Differential Revision: https://reviews.facebook.net/D18297
2014-04-25 15:49:29 -04:00
Igor Canadi 3992aec8fa Support for column families in TTL DB
Summary:
This will enable people using TTL DB to do so with multiple column families. They can also specify different TTLs for each one.

TODO: Implement CreateColumnFamily() in TTL world.

Test Plan: Added a very simple sanity test.

Reviewers: dhruba, haobo, ljin, sdong, yhchiang

Reviewed By: haobo

CC: leveldb, alberts

Differential Revision: https://reviews.facebook.net/D17859
2014-04-22 11:27:33 -07:00
Igor Canadi 588bca2020 RocksDBLite
Summary:
Introducing RocksDBLite! Removes all the non-essential features and reduces the binary size. This effort should help our adoption on mobile.

Binary size when compiling for IOS (`TARGET_OS=IOS m static_lib`) is down to 9MB from 15MB (without stripping)

Test Plan: compiles :)

Reviewers: dhruba, haobo, ljin, sdong, yhchiang

Reviewed By: yhchiang

CC: leveldb

Differential Revision: https://reviews.facebook.net/D17835
2014-04-15 13:39:26 -07:00
Igor Canadi 4daea66343 Turn on -Wmissing-prototypes
Summary: Compiling for iOS has -Wmissing-prototypes turned on by default, which causes rocksdb to fail to compile. This diff turns on -Wmissing-prototypes in our compile options and cleans up all functions with missing prototypes.

Test Plan: compiles

Reviewers: dhruba, haobo, ljin, sdong

Reviewed By: ljin

CC: leveldb

Differential Revision: https://reviews.facebook.net/D17649
2014-04-09 21:17:14 -07:00
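The typical cleanup this warning forces, as an editor's illustration (the function name is made up):

    // Before (warning: no previous prototype for 'CountLiveFiles'):
    int CountLiveFiles() { return 0; }
    // After: file-local helpers get internal linkage, or a declaration in a
    // header if they are meant to be part of the interface.
    static int CountLiveFiles() { return 0; }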
Igor Canadi 2a0917b28e Merge branch 'master' into columnfamilies 2014-04-07 13:04:25 -07:00
Igor Canadi f6ed420e4d Fix default path in geodb_test 2014-04-07 13:04:16 -07:00
Igor Canadi 3d2fe844ab Merge branch 'master' into columnfamilies
Conflicts:
	db/db_impl.cc
	db/db_impl.h
	db/memtable_list.cc
	db/version_set.cc
2014-04-07 11:31:11 -07:00
Igor Canadi acdc6a1a0d relax backupable db rate limit tests 2014-04-04 16:27:47 -07:00
sdong ef7dc38919 Fix some other signed & unsigned comparisons
Summary: Fix some signed and unsigned comparisons to make some other build script happy.

Test Plan: Build and run those changed tests

Reviewers: ljin, igor, haobo

Reviewed By: igor

CC: yhchiang, dhruba, kailiu, leveldb

Differential Revision: https://reviews.facebook.net/D17463
2014-04-03 20:51:27 -07:00
sdong 4af1954fd6 Compaction Filter V1 to use old context struct to keep backward compatible
Summary: The previous change D15087 changed the existing compaction filter, which made the commonly used class not backward compatible. Restore the older interface and use a new interface for V2 instead.

Test Plan: make all check

Reviewers: haobo, yhchiang, igor

CC: danguo, dhruba, ljin, igor, leveldb

Differential Revision: https://reviews.facebook.net/D17223
2014-04-02 14:57:51 -07:00
Igor Canadi ddbd1ece88 Merge branch 'master' into columnfamilies
Conflicts:
	db/db_impl.cc
	db/db_test.cc
	db/internal_stats.cc
	db/internal_stats.h
	db/version_edit.cc
	db/version_edit.h
	db/version_set.cc
	include/rocksdb/options.h
	util/options.cc
2014-03-31 13:39:24 -07:00
Dhruba Borthakur 4031b98373 A GIS implementation for rocksdb.
Summary:
    This patch stores gps locations in rocksdb.

    Each object is uniquely identified by an id. Each object has
    a gps (latitude, longitude) associated with it. The geodb
    supports looking up an object either by its gps location
    or by its id. There is a method to retrieve all objects
    within a circular radius centered at a specified gps location.

Test Plan: Simple unit-test attached.

Reviewers: leveldb, haobo

Reviewed By: haobo

CC: leveldb, tecbot, haobo

Differential Revision: https://reviews.facebook.net/D15567
2014-03-28 15:26:44 -07:00
Igor Canadi 5826f9528f Make rate limiting unit test more robust 2014-03-27 11:53:05 -07:00
Igor Canadi e8168382c4 Merge branch 'master' into columnfamilies
Conflicts:
	db/db_impl.cc
	include/rocksdb/options.h
	util/options.cc
2014-03-25 11:09:40 -07:00
Danny Guo b47812fba6 [rocksdb] new CompactionFilterV2 API
Summary:
This diff adds a new CompactionFilterV2 API that rolls up the
decisions of kv pairs during compactions. These kv pairs must share the
same key prefix. They are buffered inside the db.

    typedef std::vector<Slice> SliceVector;
    virtual std::vector<bool> Filter(int level,
                                 const SliceVector& keys,
                                 const SliceVector& existing_values,
                                 std::vector<std::string>* new_values,
                                 std::vector<bool>* values_changed
                                 ) const = 0;

Application can override the Filter() function to operate
on the buffered kv pairs. More details in the inline documentation.

Test Plan:
make check. Added unit tests to make sure Keep, Delete,
Change all works.

Reviewers: haobo

CCs: leveldb

Differential Revision: https://reviews.facebook.net/D15087
2014-03-24 20:47:53 -07:00
Yueh-Hsuan Chiang cda4006e87 Enhance partial merge to support multiple arguments
Summary:
* PartialMerge api now takes a list of operands instead of two operands.
* Add min_partial_merge_operands to Options, indicating the minimum
  number of operands to trigger partial merge.
* This diff is based on Schalk's previous diff (D14601), but it also
  includes necessary changes such as updating the pure C api for
  partial merge.

Test Plan:
* make check all
* develop tests for cases where partial merge takes more than two
  operands.

TODOs (from Schalk):
* Add test with min_partial_merge_operands > 2.
* Perform benchmarks to measure the performance improvements (can probably
  use results of task #2837810.)
* Add description of problem to doc/index.html.
* Change wiki pages to reflect the interface changes.

Reviewers: haobo, igor, vamsi

Reviewed By: haobo

CC: leveldb, dhruba

Differential Revision: https://reviews.facebook.net/D16815
2014-03-24 17:57:13 -07:00
Igor Canadi e6d4b006b6 Relax backupable RateLimiter unit test for slow environments 2014-03-24 11:59:42 -07:00
Igor Canadi b253f24403 Rate limiter for BackupableDB
Summary: Might be useful if the client doesn't want to affect the running system too much during backup.

Test Plan: added a test case

Reviewers: dhruba, haobo, ljin

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D17091
2014-03-24 11:38:44 -07:00
Igor Canadi 64904b39a0 Merge branch 'master' into columnfamilies
Conflicts:
	utilities/backupable/backupable_db.cc
2014-03-17 17:57:14 -07:00
Igor Canadi 9caeff516e keep_log_files option in BackupableDB
Summary:
Added an option to BackupableDB implementation that allows users to persist in-memory databases. When the restore happens with keep_log_files = true, it will
*) Not delete existing log files in wal_dir
*) Move log files from archive directory to wal_dir, so that DB can replay them if necessary

Test Plan: Added an unit test

Reviewers: dhruba, ljin

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D16941
2014-03-17 15:39:23 -07:00
Igor Canadi ee4b9966bc DeleteLogFiles in FailOverwritingBackups 2014-03-12 13:47:07 -07:00
Igor Canadi 25c8a1a20f More bug fixed introduced by code cleanup 2014-03-12 12:28:23 -07:00
Igor Canadi 457c78eb89 [CF] db_stress for column families
Summary:
I had this diff for a while to test the column families implementation. Last night, I ran it successfully for 10 hours with the command:

   time ./db_stress --threads=30 --ops_per_thread=200000000 --max_key=5000 --column_families=20 --clear_column_family_one_in=3000000 --verify_before_write=1  --reopen=50 --max_background_compactions=10 --max_background_flushes=10 --db=/tmp/db_stress

It is ready to be committed :)

Test Plan: Ran it for 10 hours

Reviewers: dhruba, haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D16797
2014-03-11 12:06:12 -07:00
Igor Canadi 9634ba42ac Merge branch 'master' into columnfamilies
Conflicts:
	db/compaction_picker.cc
	db/db_impl.cc
	db/db_impl.h
	db/tailing_iter.cc
	db/version_set.h
	include/rocksdb/options.h
	util/options.cc
2014-03-10 17:26:09 -07:00
Igor Canadi 9db8c4c556 Fix share_table_files bug
Summary: constructor wasn't properly constructing BackupableDBOptions

Test Plan: no test

Reviewers: benj

Reviewed By: benj

CC: leveldb

Differential Revision: https://reviews.facebook.net/D16749
2014-03-10 14:42:03 -07:00
Igor Canadi b04c75d244 Dump options in backupable DB
Summary: We should dump options in the backupable DB log, just like we do with RocksDB. This will aid debugging.

Test Plan: checked the log

Reviewers: ljin

Reviewed By: ljin

CC: leveldb

Differential Revision: https://reviews.facebook.net/D16719
2014-03-10 12:09:54 -07:00
Igor Canadi 0738ae6dc9 Merge branch 'master' into columnfamilies
Conflicts:
	db/db_impl.cc
2014-03-05 12:25:05 -08:00
Igor Canadi 2b5155fb29 CloseDB in BackupableDBTest to make valgrind happy 2014-03-05 09:00:53 -08:00
Igor Canadi e3f396f1ea Some fixes to BackupableDB
Summary:
(1) Report corruption if the backup meta file has trailing data that was not read. This should fix: https://github.com/facebook/rocksdb/issues/81 (also, @sdong reported a similar issue)
(2) Don't use OS buffer when copying file to backup directory. We don't need the file in cache since we won't be reading it twice
(3) Don't delete newer backups when somebody tries to backup the diverged DB (restore from older backup, add new data, try to backup). Rather, just fail the new backup.

Test Plan: backupable_db_test

Reviewers: ljin, dhruba, sdong

Reviewed By: ljin

CC: leveldb, sdong

Differential Revision: https://reviews.facebook.net/D16287
2014-03-04 17:02:25 -08:00
Igor Canadi dc277f0ab7 [CF] Adaptation of GetLiveFiles for CF
Summary: Even if user flushes the memtables before getting live files, we still can't guarantee that new data didn't come in (to already-flushed memtables). If we want backups to provide consistent view of the database, we still need to get WAL files.

Test Plan: backupable_db_test

Reviewers: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D16299
2014-02-25 13:21:14 -08:00
Igor Canadi ccdb93e775 Merge branch 'master' into columnfamilies
Conflicts:
	db/db_impl.cc
	db/db_impl.h
	db/memtable_list.cc
	db/memtable_list.h
	db/version_set.cc
	db/version_set.h
2014-02-12 14:01:30 -08:00
Igor Canadi b06840aa7d [CF] Rethinking ColumnFamilyHandle and fix to dropping column families
Summary:
The change to the public behavior:
* When opening a DB or creating new column family client gets a ColumnFamilyHandle.
* As long as column family handle is alive, client can do whatever he wants with it, even drop it
* Dropped column family can still be read from (using the column family handle)
* Added a new call CloseColumnFamily(). Client has to close all column families that he has opened before deleting the DB
* As soon as column family is closed, any calls to DB using that column family handle will fail (also any outstanding calls)

Internally:
* Ref-counting ColumnFamilyData
* New thread-safety for ColumnFamilySet
* Dropped column families are now completely dropped and their memory cleaned-up

Test Plan: added some tests to column_family_test

Reviewers: dhruba, haobo, kailiu, sdong

CC: leveldb

Differential Revision: https://reviews.facebook.net/D16101
2014-02-12 13:47:09 -08:00
Lei Jin 994c327b86 IOError cleanup
Summary: Clean up IOErrors so that they only indicate errors talking to the device.

Test Plan: make all check

Reviewers: igor, haobo, dhruba, emayanke

Reviewed By: igor

CC: leveldb

Differential Revision: https://reviews.facebook.net/D15831
2014-02-12 11:42:54 -08:00
Igor Canadi 73f62255c1 [CF] Split SanitizeOptions into two
Summary:
There are three SanitizeOptions now: one for DBOptions, one for ColumnFamilyOptions and one for Options (which just calls the other two)

I have also reshuffled some options -- table_cache options and info_log should live in DBOptions, for example.

Test Plan: make check doesn't complain

Reviewers: dhruba, haobo, kailiu, sdong

CC: leveldb

Differential Revision: https://reviews.facebook.net/D15873
2014-02-04 17:26:51 -08:00
Igor Canadi 20b231d712 Merge branch 'master' into columnfamilies 2014-01-29 11:47:43 -08:00
Igor Canadi c1071ed95c Merge branch 'master' into columnfamilies 2014-01-28 16:04:00 -08:00
Lei Jin 9a126ba3b3 only corrupt private file checksum in backupable_db_test
Summary:
if it happens (randomly) to corrupt a shared file in the test, then the
    checksum will be inconsistent between meta files from different backups.
    BackupEngine will then detect this issue and fail. But in reality, this
    does not happen since the checksum is checked on every backup. So here,
    only corrupt the checksum of a private file, to let BackupEngine construct
    properly (but fail during restore).

Test Plan: run test with valgrind

Reviewers: igor

Reviewed By: igor

CC: leveldb

Differential Revision: https://reviews.facebook.net/D15531
2014-01-28 16:03:55 -08:00
Igor Canadi e5ec7384a0 Better interface to create BackupEngine
Summary: I think it looks nicer. In RocksDB we have both styles, but I think that static method is the more common version.

Test Plan: backupable_db_test

Reviewers: ljin, benj, swk

Reviewed By: ljin

CC: leveldb

Differential Revision: https://reviews.facebook.net/D15519
2014-01-28 16:01:53 -08:00
Igor Canadi ec2fa4a690 Export BackupEngine
Summary:
Lots of clients have problems with using StackableDB interface. It's nice to have BackupableDB as a layer on top of DB, but not necessary.

This diff exports BackupEngine, which can be used to create backups without forcing clients to use StackableDB interface.

Test Plan: backupable_db_test

Reviewers: dhruba, ljin, swk

Reviewed By: ljin

CC: leveldb, benj

Differential Revision: https://reviews.facebook.net/D15477
2014-01-28 11:26:07 -08:00
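The two entries above add a static creation method and export BackupEngine directly. A hedged usage sketch (the creation call and method names follow the descriptions here and the header that later shipped; they may not match this exact revision). db is an open rocksdb::DB*:

    rocksdb::BackupEngine* backup_engine = rocksdb::BackupEngine::NewBackupEngine(
        rocksdb::Env::Default(), rocksdb::BackupableDBOptions("/path/to/backups"));
    backup_engine->CreateNewBackup(db);
    backup_engine->PurgeOldBackups(5 /* keep the last five backups */);
    backup_engine->RestoreDBFromLatestBackup("/path/to/db", "/path/to/db");
    delete backup_engine;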
Lei Jin 9dc29414e3 add checksum for backup files
Summary: Keep the checksum of each backed-up file in the meta file. When restoring these files, compute their checksums on the fly and compare against what is in the meta file. Fail the restore process if the checksums mismatch.

Test Plan: unit test

Reviewers: haobo, igor, sdong, kailiu

Reviewed By: igor

CC: leveldb, dhruba

Differential Revision: https://reviews.facebook.net/D15381
2014-01-28 09:43:36 -08:00
Igor Canadi 5356b2a680 Merge branch 'master' into columnfamilies 2014-01-24 18:34:48 -08:00
Siying Dong 8477255da3 Moving Some includes from options.h to forward declaration
Summary: By removing some includes from options.h and relying on forward declarations, we can more easily reason about the dependencies.

Test Plan: make all check

Reviewers: kailiu, haobo, igor, dhruba

Reviewed By: kailiu

CC: leveldb

Differential Revision: https://reviews.facebook.net/D15411
2014-01-24 17:16:22 -08:00
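The pattern being applied, as a small editor's illustration (the surrounding types are hypothetical):

    // my_options.h -- a forward declaration is enough for pointers and
    // references, so this header no longer drags in the full definition.
    namespace rocksdb { class Comparator; }
    struct MyIndexOptions {
      const rocksdb::Comparator* comparator = nullptr;
    };

    // my_options.cc -- include the real header only where members are used.
    #include "rocksdb/comparator.h"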
Igor Canadi 151f9e144f Merge branch 'master' into columnfamilies 2014-01-13 09:09:51 -08:00
ono_matope f8642dacde Fix share_table_files condition in BackupEngine constructor.
That was making the BackupableDBTest.NoDoubleCopy test fail.
2014-01-11 05:12:07 +09:00
Igor Canadi cb37ddf229 Feature requests for BackupableDB
Summary:
This diff introduces some features that were requested by two internal customers:
* Ability for backups not to share table files, because we can't guarantee that equal filename means equal content across replicas
* Ability for two threads to call EnableFileDeletions() and DisableFileDeletions()
* Ability to stop backup from another thread and not slow down the DB close
* Copy the files to the temporary folder first and then atomically rename

Test Plan: Added some tests to backupable_db_test

Reviewers: dhruba, sanketh, muthu, sdong, haobo

Reviewed By: haobo

CC: leveldb, sanketh, muthu

Differential Revision: https://reviews.facebook.net/D14769
2014-01-09 12:24:28 -08:00
Igor Canadi 6de1b5b83e Merge branch 'master' into columnfamilies 2014-01-02 04:18:07 -08:00
Igor Canadi b60c14f6ee Support multi-threaded DisableFileDeletions() and EnableFileDeletions()
Summary:
We don't want two threads to clash if they concurrently call DisableFileDeletions() and EnableFileDeletions(). I'm adding a counter that will enable file deletions only after all DisableFileDeletions() calls have been negated with EnableFileDeletions().

However, we also don't want to break the old behavior, so I added a parameter force to EnableFileDeletions(). If force is true, we will still enable file deletions after every call to EnableFileDeletions(), which is what is happening now.

Test Plan: make check

Reviewers: dhruba, haobo, sanketh

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D14781
2014-01-02 03:33:42 -08:00
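A sketch of the resulting semantics, with calls shown on an open rocksdb::DB* (the force flag follows the description above):

    db->DisableFileDeletions();        // thread A
    db->DisableFileDeletions();        // thread B
    db->EnableFileDeletions(false);    // counter drops to 1, deletions stay disabled
    db->EnableFileDeletions(false);    // counter reaches 0, deletions enabled again
    db->EnableFileDeletions(true);     // force: enable immediately, old behavior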
Igor Canadi 269709a885 Merge branch 'master' into columnfamilies 2013-12-18 15:56:37 -08:00
Igor Canadi 9385a5247e [RocksDB] [Column Family] Interface proposal
Summary:
<This diff is for Column Family branch>

Sharing some of the work I've done so far. This diff compiles and passes the tests.

The biggest change is in options.h - I broke down Options into two parts - DBOptions and ColumnFamilyOptions. DBOptions is DB-specific (env, create_if_missing, block_cache, etc.) and ColumnFamilyOptions is column family-specific (all compaction options, compression options, etc.). Note that this does not break backwards compatibility at all.

Further, I created DBWithColumnFamily which inherits DB interface and adds new functions with column family support. Clients can transparently switch to DBWithColumnFamily and it will not break their backwards compatibility.
There are a few methods worth checking out: ListColumnFamilies(), MultiNewIterator(), MultiGet() and GetSnapshot(). [GetSnapshot() returns the snapshot across all column families for now - I think that's what we agreed on]

Finally, I made small changes to WriteBatch so we are able to atomically insert data across column families.

Please provide feedback.

Test Plan: make check works, the code is backward compatible

Reviewers: dhruba, haobo, sdong, kailiu, emayanke

CC: leveldb

Differential Revision: https://reviews.facebook.net/D14445
2013-12-18 13:08:22 -08:00
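A short sketch of the split described above, assuming the aggregate Options constructor as it eventually shipped, so existing code keeps compiling (member names are examples of DB-wide vs. per-column-family settings):

    rocksdb::DBOptions db_options;             // DB-wide: env, create_if_missing, ...
    db_options.create_if_missing = true;
    rocksdb::ColumnFamilyOptions cf_options;   // per-CF: compaction, compression, ...
    cf_options.write_buffer_size = 64 << 20;
    rocksdb::Options options(db_options, cf_options);  // Options still aggregates both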
Josef Šimánek 8c34189f0c Remove .DS_Store files. 2013-12-17 06:00:47 +01:00
Igor Canadi 417b453fa6 [backupable db] Delete db_dir children when restoring backup
Summary:
I realized that the manifest will get deleted by PurgeObsoleteFiles in DBImpl, but it is still cleaner to delete
files before we restore the backup

Test Plan: backupable_db_test

Reviewers: dhruba

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D14619
2013-12-12 14:57:18 -08:00
Igor Canadi 249e736bc5 portable %lu printing 2013-12-12 08:13:47 -08:00
Igor Canadi 5e4ab767cf BackupableDB delete backups with newer seq number
Summary: We now delete backups with newer sequence number, so the clients don't have to handle confusing situations when they restore from backup.

Test Plan: added a unit test

Reviewers: dhruba

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D14547
2013-12-10 20:49:28 -08:00
Siying Dong 4815468be4 Fix another sign and unsign comparison in test 2013-12-10 10:52:47 -08:00
Igor Canadi cbe7ffef9a fix comparison between signed and unsigned 2013-12-10 10:48:49 -08:00
Igor Canadi 7cf5728440 Cleaning up BackupableDB + fix valgrind errors
Summary: Valgrind complained about BackupableDB. This fixes valgrind errors. Also, I cleaned up some code.

Test Plan: valgrind does not complain anymore

Reviewers: dhruba

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D14529
2013-12-10 10:35:06 -08:00
Igor Canadi 784e62f98d Fix unused variable warning 2013-12-09 16:44:47 -08:00
Igor Canadi fb9fce4fc3 [RocksDB] BackupableDB
Summary:
In this diff I present you BackupableDB v1. You can easily use it to backup your DB and it will do incremental snapshots for you.
Let's first describe how you would use BackupableDB. It's inheriting StackableDB interface so you can easily construct it with your DB object -- it will add a method RollTheSnapshot() to the DB object. When you call RollTheSnapshot(), current snapshot of the DB will be stored in the backup dir. To restore, you can just call RestoreDBFromBackup() on a BackupableDB (which is a static method) and it will restore all files from the backup dir. In the next version, it will even support automatic backuping every X minutes.

There are multiple things you can configure:
1. backup_env and db_env can be different, which is awesome because then you can easily backup to HDFS or wherever you feel like.
2. sync - if true, it *guarantees* backup consistency on machine reboot
3. number of snapshots to keep - this will keep the last N snapshots around if you want, for some reason, to be able to restore from an earlier snapshot. All the backing up is done in an incremental fashion - if we already have 00010.sst, we will not copy it again. *IMPORTANT* -- This is based on the assumption that 00010.sst never changes - two files named 00010.sst from the same DB will always be exactly the same. Is this true? I always copy manifest, current and log files.
4. You can decide if you want to flush the memtables before you backup, or you're fine with backing up the log files -- either way, you get a complete and consistent view of the database at a time of backup.
5. More things you can find in BackupableDBOptions

Here is the directory structure I use:

   backup_dir/CURRENT_SNAPSHOT - just 4 bytes holding the latest snapshot
               0, 1, 2, ... - files containing serialized version of each snapshot - containing a list of files
               files/*.sst - sst files shared between snapshots - if one snapshot references 00010.sst and another one needs to backup it from the DB, it will just reference the same file
               files/ 0/, 1/, 2/, ... - snapshot directories containing private snapshot files - current, manifest and log files

All the files are ref counted and deleted immediately when they get out of scope.

Some other stuff in this diff:
1. Added GetEnv() method to the DB. Discussed with @haobo and we agreed that it seems right thing to do.
2. Fixed StackableDB interface. The way it was set up before, I was not able to implement BackupableDB.

Test Plan:
I have a unittest, but please don't look at this yet. I just hacked it up to help me with debugging. I will write a lot of good tests and update the diff.

Also, `make asan_check`

Reviewers: dhruba, haobo, emayanke

Reviewed By: dhruba

CC: leveldb, haobo

Differential Revision: https://reviews.facebook.net/D14295
2013-12-09 14:06:52 -08:00
Igor Canadi 0a5ec49895 Make DBWithTTL more like StackableDB
Summary: Now DBWithTTL takes DB* and can behave more like StackableDB. This saves us a lot of duplicate work by defining interfaces

Test Plan: ttl_test with ASAN - OK

Reviewers: emayanke

Reviewed By: emayanke

CC: leveldb

Differential Revision: https://reviews.facebook.net/D14481
2013-12-06 16:10:43 -08:00
Mayank Agarwal 92e8316118 Make GetDbIdentity pure virtual and also implement it for StackableDB, DBWithTTL
Summary: As title

Test Plan: make clean and make

Reviewers: igor

Reviewed By: igor

CC: leveldb

Differential Revision: https://reviews.facebook.net/D14469
2013-12-05 12:02:31 -08:00
Igor Canadi 3ce3658411 DB::GetOptions()
Summary: We need access to options for BackupableDB

Test Plan: make check

Reviewers: dhruba

Reviewed By: dhruba

CC: leveldb, reconnect.grayhat

Differential Revision: https://reviews.facebook.net/D14331
2013-11-25 15:51:50 -08:00
Igor Canadi 11c26bd4a4 [RocksDB] Interface changes required for BackupableDB
Summary: This is part of https://reviews.facebook.net/D14295 -- smaller diff that is easier to review

Test Plan: make asan_check

Reviewers: dhruba, haobo, emayanke

Reviewed By: emayanke

CC: leveldb, kailiu, reconnect.grayhat

Differential Revision: https://reviews.facebook.net/D14301
2013-11-25 12:39:23 -08:00
Igor Canadi ec0acfbca1 Fix stack overflow
Summary:
Sure, let me put 8 bytes in that int32_t.

Brought to you by ASAN!

Test Plan: ttl_test

Reviewers: dhruba, haobo, kailiu, emayanke

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D14193
2013-11-18 20:56:21 -08:00
Pascal Borreli 443e04e62d Fixed typos 2013-11-16 11:21:34 +00:00
kailiu 21587760b9 Fixing the warning messages captured under mac os
Summary: The work to make sure mac os compiles rocksdb is not completed yet. But at least we can start cleaning some warnings captured only by g++ from mac os..

Test Plan: ran make in mac os

Reviewers: dhruba

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D14049
2013-11-12 20:05:28 -08:00
Mayank Agarwal f837f5b1c9 Making the transaction log iterator more robust
Summary:
strict essentially means that we MUST find the startSequence. Thus we should return if startSequence is not found in the first file in case strict is set. This will take care of ending the iterator in case of permanent gaps due to corruptions in the log files.
Also created a NextImpl function that will have an internal variable to distinguish whether Next is being called from StartSequence or by the application.
Set the NotFound::gaps status to give an indication of gaps happening.
Polished the inline documentation at various places

Test Plan:
* db_repl_stress test
* db_test relating to transaction log iterator
* fbcode/wormhole/rocksdb/rocks_log_iterator
* sigma production machine sigmafio032.prn1

Reviewers: dhruba

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D13689
2013-11-04 20:49:03 -08:00
Haobo Xu 8cbe5bb56b [RocksDB] Add OnCompactionStart to CompactionFilter class
Summary: This is to give application compaction filter a chance to access context information of a specific compaction run. For example, depending on whether a compaction goes through all data files, the application could do things differently.

Test Plan: make check

Reviewers: dhruba, kailiu, sdong

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D13683
2013-10-31 13:36:43 -07:00
Mayank Agarwal 56305221c4 Unify DeleteFile and DeleteWalFiles
Summary:
This is to simplify rocksdb public APIs and improve the code quality.
Created an additional parameter to ParseFileName for log sub type and improved the code for deleting a wal file.
Wrote exhaustive unit-tests in delete_file_test
Unification of other redundant APIs can be taken up in a separate diff

Test Plan: Expanded delete_file test

Reviewers: dhruba, haobo, kailiu, sdong

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D13647
2013-10-25 08:32:14 -07:00
Dhruba Borthakur 9cd221094c Add appropriate LICENSE and Copyright message.
Summary:
Add appropriate LICENSE and Copyright message.

Test Plan:
make check

Reviewers:

CC:

Task ID: #

Blame Rev:
2013-10-16 17:48:41 -07:00
Dhruba Borthakur 4463b11cad Migrate names of properties from 'leveldb' prefix to 'rocksdb' prefix.
Summary: Migrate names of properties from 'leveldb' prefix to 'rocksdb' prefix.

Test Plan: make check

Reviewers: emayanke, haobo

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D13311
2013-10-06 00:14:26 -07:00
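The user-visible effect is the property prefix; a hedged example with two properties that exist under the new names, given an open rocksdb::DB* db:

    std::string stats;
    db->GetProperty("rocksdb.stats", &stats);                 // formerly "leveldb.stats"
    std::string num_files;
    db->GetProperty("rocksdb.num-files-at-level0", &num_files);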
Dhruba Borthakur a143ef9b38 Change namespace from leveldb to rocksdb
Summary:
Change namespace from leveldb to rocksdb. This allows a single
application to link in open-source leveldb code as well as
rocksdb code into the same process.

Test Plan: compile rocksdb

Reviewers: emayanke

Reviewed By: emayanke

CC: leveldb

Differential Revision: https://reviews.facebook.net/D13287
2013-10-04 11:59:26 -07:00
Mayank Agarwal 854d236361 Add backward compatible option in GetLiveFiles to choose whether to not Flush first
Summary:
As explained in comments in GetLiveFiles in db.h, this option will cause flush to be skipped in GetLiveFiles because some use-cases use GetSortedWalFiles after GetLiveFiles to generate more complete snapshots.
Using GetSortedWalFiles after GetLiveFiles allows us to not Flush in GetLiveFiles first because wals have everything.
Note: file deletions will be disabled before calling GLF or GSWF so live logs will not move to archive logs or get deleted.
Note: Manifest file is truncated to a proper value in GLF, so it will always replay from the proper wal files on a restart

Test Plan: make

Reviewers: dhruba, haobo

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D13257
2013-10-04 10:20:10 -07:00
Rajat Goel 11c65021fb Revert "Minor fixes found while trying to compile it using clang on Mac OS X"
This reverts commit 5f2c136c32.
2013-09-15 23:01:26 -07:00
Rajat Goel 5f2c136c32 Minor fixes found while trying to compile it using clang on Mac OS X 2013-09-15 22:06:14 -07:00
Mayank Agarwal e2a093a6c3 Fix delete in db_ttl.cc
Summary: should delete the proper variable

Test Plan: make all check

Reviewers: haobo, dhruba

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D12921
2013-09-13 11:16:27 -07:00
Mayank Agarwal 4b785aab05 Add logdata to ttl
Summary: Ttl-write makes a new writebatch and calls Write on the base db. It should recognize LogData also

Test Plan: make

Reviewers: dhruba, haobo

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D12747
2013-09-05 13:52:47 -07:00
Xing Jin 42c109cc2e New ldb command to convert compaction style
Summary:
Add new command "change_compaction_style" to ldb tool. For
universal->level, it shows "nothing to do". For level->universal, it
compacts all files into a single one and moves the file to level 0.

Also add check for number of files at level 1+ when opening db with
universal compaction style.

Test Plan:
'make all check'. New unit test for the internal conversion function. Also manually test various
commands like:

./ldb change_compaction_style --old_compaction_style=0
--new_compaction_style=1 --db=/tmp/leveldbtest-3088/db_test

Reviewers: haobo, dhruba

Reviewed By: haobo

CC: vamsi, emayanke

Differential Revision: https://reviews.facebook.net/D12603
2013-09-04 13:13:08 -07:00
Mayank Agarwal b1d09f1a51 Fix build failing because of ttl-keymayexist
Summary: PutValues calls Flush in ttl_test which clears memtables. KeyMayExist called after that will not be able to read those key-values

Test Plan: make all check OPT=-g

Reviewers: leveldb
2013-09-01 21:06:04 -07:00
Mayank Agarwal f121c4f504 KeyMayExist for ttl
Summary: value needed to be filtered of timestamp

Test Plan: ./ttl_test

Reviewers: dhruba, haobo, vamsi

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D12657
2013-09-01 00:28:18 -07:00
Dhruba Borthakur 1186192ed1 Replace include/leveldb with include/rocksdb.
Summary: Replace include/leveldb with include/rocksdb.

Test Plan:
make clean; make check
make clean; make release

Differential Revision: https://reviews.facebook.net/D12489
2013-08-23 10:51:00 -07:00
Deon Nicholas 6f4e3ee3e8 Added include guards to stringappend and redis-list
Summary: added "#pragma once" in the .h files

Test Plan: make and run: stringappend_test, redis_test

Reviewers: emayanke, haobo

Reviewed By: emayanke

CC: leveldb

Differential Revision: https://reviews.facebook.net/D12495
2013-08-23 10:28:16 -07:00
Deon Nicholas b87dcae1a3 Made merge_oprator a shared_ptr; and added TTL unit tests
Test Plan:
- make all check;
- make release;
- make stringappend_test; ./stringappend_test

Reviewers: haobo, emayanke

Reviewed By: haobo

CC: leveldb, kailiu

Differential Revision: https://reviews.facebook.net/D12381
2013-08-20 13:35:28 -07:00
Mayank Agarwal 8a3547d38e API for getting archived log files
Summary: Also expanded class LogFile to have startSequence and FileSize and exposed it publicly

Test Plan: make all check

Reviewers: dhruba, haobo

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D12087
2013-08-19 13:37:04 -07:00
Deon Nicholas e1346968d8 Merge operator fixes part 1.
Summary:
-Added null checks and revisions to DBIter::MergeValuesNewToOld()
-Added DBIter test to stringappend_test
-Major fix with Merge and TTL
More plans for fixes later.

Test Plan:
-make clean; make stringappend_test -j 32; ./stringappend_test
-make all check;

Reviewers: haobo, emayanke, vamsi, dhruba

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D12315
2013-08-19 11:42:47 -07:00
Deon Nicholas ad48c3c262 Benchmarking for Merge Operator
Summary:
Updated db_bench and utilities/merge_operators.h to allow for dynamic benchmarking
of merge operators in db_bench. Added a new test (--benchmarks=mergerandom), which performs
a bunch of random Merge() operations over random keys. Also added a "--merge_operator=" flag
so that the tester can easily benchmark different merge operators. Currently supports
the PutOperator and UInt64Add operator. Support for stringappend or list append may come later.

Test Plan:
	1. make db_bench
	2. Test the PutOperator (simulating Put) as follows:
./db_bench --benchmarks=fillrandom,readrandom,updaterandom,readrandom,mergerandom,readrandom --merge_operator=put
--threads=2

3. Test the UInt64AddOperator (simulating numeric addition) similarly:
./db_bench --value_size=8 --benchmarks=fillrandom,readrandom,updaterandom,readrandom,mergerandom,readrandom
--merge_operator=uint64add --threads=2

Reviewers: haobo, dhruba, zshao, MarkCallaghan

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D11535
2013-08-15 17:13:07 -07:00
Mayank Agarwal f1bf169484 Counter for merge failure
Summary:
With Merge returning bool, it can keep failing silently (e.g. while failing to fetch the timestamp in TTL). We need to detect this through a rocksdb counter which can get bumped whenever Merge returns false. This will also be super-useful for the mcrocksdb-counter service where Merge may fail.
Added a counter NUMBER_MERGE_FAILURES and appropriately updated db/merge_helper.cc

I felt that it would be better to directly add counter-bumping in Merge as a default function of the MergeOperator class, but the user should not be aware of this, so this approach seems better to me.

Test Plan: make all check

Reviewers: dnicholas, haobo, dhruba, vamsi

CC: leveldb

Differential Revision: https://reviews.facebook.net/D12129
2013-08-13 14:25:42 -07:00
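A hedged sketch of reading the new ticker from the statistics object (assuming the CreateDBStatistics() helper and the getTickerCount() accessor):

    rocksdb::Options options;
    options.statistics = rocksdb::CreateDBStatistics();
    // ... open the DB with `options` and run a workload that issues Merge() ...
    uint64_t merge_failures =
        options.statistics->getTickerCount(rocksdb::NUMBER_MERGE_FAILURES);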
sumeet 3b81df34bd Separate compaction filter for each compaction
Summary:
If we have the same compaction filter for each compaction, the
application cannot know about the different compaction processes.
Later on, we can put more details in the compaction filter for the
application to consume and use according to its needs. For example, in
universal compaction we have a compaction process involving all the
files, while others don't involve all the files. Applications may want to
collect some stats only during full compaction.

Test Plan: run existing unit tests

Reviewers: haobo, dhruba

Reviewed By: dhruba

CC: xinyaohu, leveldb

Differential Revision: https://reviews.facebook.net/D12057
2013-08-13 10:56:20 -07:00
Deon Nicholas e37eb21e92 minor change to fix build 2013-08-06 11:02:19 -07:00
Deon Nicholas c2d7826ced [RocksDB] [MergeOperator] The new Merge Interface! Uses merge sequences.
Summary:
Here are the major changes to the Merge Interface. It has been expanded
to handle cases where the MergeOperator is not associative. It does so by stacking
up merge operations while scanning through the key history (i.e.: during Get() or
Compaction), until a valid Put/Delete/end-of-history is encountered; it then
applies all of the merge operations in the correct sequence starting with the
base/sentinel value.

I have also introduced an "AssociativeMerge" function which allows the user to
take advantage of associative merge operations (such as in the case of counters).
The implementation will always attempt to merge the operations/operands themselves
together when they are encountered, and will resort to the "stacking" method if
and only if the "associative-merge" fails.

This implementation is conjectured to allow MergeOperator to handle the general
case, while still providing the user with the ability to take advantage of certain
efficiencies in their own merge-operator / data-structure.

NOTE: This is a preliminary diff. This must still go through a lot of review,
revision, and testing. Feedback welcome!

Test Plan:
  -This is a preliminary diff. I have only just begun testing/debugging it.
  -I will be testing this with the existing MergeOperator use-cases and unit-tests
(counters, string-append, and redis-lists)
  -I will be "desk-checking" and walking through the code with the help of gdb.
  -I will find a way of stress-testing the new interface / implementation using
db_bench, db_test, merge_test, and/or db_stress.
  -I will ensure that my tests cover all cases: Get-Memtable,
Get-Immutable-Memtable, Get-from-Disk, Iterator-Range-Scan, Flush-Memtable-to-L0,
Compaction-L0-L1, Compaction-Ln-L(n+1), Put/Delete found, Put/Delete not-found,
end-of-history, end-of-file, etc.
  -A lot of feedback from the reviewers.

Reviewers: haobo, dhruba, zshao, emayanke

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D11499
2013-08-05 20:14:32 -07:00
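To ground the associative path described above, a hypothetical counter operator (decimal-string encoding chosen for brevity; this is an editor's sketch, not the UInt64Add operator shipped with RocksDB):

    #include <string>
    #include "rocksdb/merge_operator.h"
    #include "rocksdb/slice.h"

    class StringUInt64AddOperator : public rocksdb::AssociativeMergeOperator {
     public:
      virtual bool Merge(const rocksdb::Slice& key,
                         const rocksdb::Slice* existing_value,
                         const rocksdb::Slice& value, std::string* new_value,
                         rocksdb::Logger* logger) const override {
        // Counters are stored as decimal strings; a missing existing value means 0.
        uint64_t existing =
            existing_value ? std::stoull(existing_value->ToString()) : 0;
        uint64_t operand = std::stoull(value.ToString());
        *new_value = std::to_string(existing + operand);
        return true;  // returning false reports a merge failure
      }
      virtual const char* Name() const override { return "StringUInt64AddOperator"; }
    };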
Mayank Agarwal 73f9518b21 Fix build
Summary: remove reference

Test Plan: make OPT=-g

Reviewers:

CC:

Task ID: #

Blame Rev:
2013-08-05 19:22:12 -07:00
Mayank Agarwal 1d7b4765c3 Expose base db object from ttl wrapper
Summary: rocksdb replication will need this when writing value+TS from master to slave 'as is'

Test Plan: make

Reviewers: dhruba, vamsi, haobo

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D11919
2013-08-05 18:44:14 -07:00
Mayank Agarwal cacd812fb2 Support user's compaction filter in TTL logic
Summary: TTL uses a compaction filter to purge key-values and previously required the user not to pass one. This diff makes it accommodate the user's compaction filter. Added a test to ttl_test

Test Plan: make; ./ttl_test

Reviewers: dhruba, haobo, vamsi

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D11973
2013-08-05 11:28:01 -07:00
Mayank Agarwal c42485f67c Merge operator for ttl
Summary: Implemented a TtlMergeOperator class which inherits from MergeOperator and is TTL aware. It strips the timestamp from existing_value and attaches a timestamp to new_value, calling the user-provided Merge in between.
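
A simplified sketch of the wrapping idea, assuming a 4-byte timestamp suffix on stored values and using the associative-merge signature for brevity (the real TtlMergeOperator lives in utilities/ttl and wraps the full interface):

#include <cstdint>
#include <ctime>
#include <memory>
#include <string>
#include <utility>
#include "rocksdb/merge_operator.h"
#include "rocksdb/slice.h"

// Strip the trailing timestamp from existing_value, call the user's merge,
// then attach a fresh timestamp to the merged result.
class TtlMergeWrapper : public rocksdb::AssociativeMergeOperator {
 public:
  explicit TtlMergeWrapper(
      std::shared_ptr<rocksdb::AssociativeMergeOperator> user_op)
      : user_op_(std::move(user_op)) {}

  bool Merge(const rocksdb::Slice& key, const rocksdb::Slice* existing_value,
             const rocksdb::Slice& value, std::string* new_value,
             rocksdb::Logger* logger) const override {
    static const size_t kTsLen = 4;  // assumed timestamp width
    rocksdb::Slice stripped;
    if (existing_value != nullptr) {
      if (existing_value->size() < kTsLen) return false;  // corrupt value
      stripped = rocksdb::Slice(existing_value->data(),
                                existing_value->size() - kTsLen);
    }
    if (!user_op_->Merge(key, existing_value ? &stripped : nullptr, value,
                         new_value, logger)) {
      return false;
    }
    int32_t now = static_cast<int32_t>(time(nullptr));
    new_value->append(reinterpret_cast<const char*>(&now), kTsLen);
    return true;
  }

  const char* Name() const override { return "TtlMergeWrapper"; }

 private:
  std::shared_ptr<rocksdb::AssociativeMergeOperator> user_op_;
};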

Test Plan: make all check

Reviewers: haobo, dhruba

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D11775
2013-08-01 09:27:17 -07:00
Mayank Agarwal 59d0b02f8b Expand KeyMayExist to return the proper value if it can be found in memory and also check block_cache
Summary: Removed KeyMayExistImpl because KeyMayExist now demands Get-like semantics. Removed no_io from memtable and imm because we need the proper value now and shouldn't just stop when we see a Merge in the memtable. Added checks to block_cache. Updated documentation and unit tests

Test Plan: make all check;db_stress for 1 hour

Reviewers: dhruba, haobo

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D11853
2013-08-01 09:07:46 -07:00
Haobo Xu 9ee68871dc [RocksDB] Enable manual compaction to move files back to an appropriate level.
Summary: As title. This diff adds an option reduce_level to CompactRange. When set to true, it will try to move the files back to the minimum level sufficient to hold the data set. Note that the default is set to true for now, just to exercise it in all existing tests. Will set the default to false before check-in, for backward compatibility.
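
For reference, in today's API the same behaviour is exposed through CompactRangeOptions::change_level; a minimal usage sketch (not the exact signature this diff adds):

#include "rocksdb/db.h"
#include "rocksdb/options.h"

// Compact the whole key range and let the files be moved back down to the
// lowest level that can hold the data (what this diff calls reduce_level).
void CompactAndReduceLevel(rocksdb::DB* db) {
  rocksdb::CompactRangeOptions opts;
  opts.change_level = true;  // allow moving output to a lower level
  opts.target_level = -1;    // -1: pick the minimum sufficient level
  db->CompactRange(opts, nullptr, nullptr);  // nullptr..nullptr = full range
}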

Test Plan: make check;

Reviewers: dhruba, emayanke

CC: leveldb

Differential Revision: https://reviews.facebook.net/D11553
2013-07-19 16:20:36 -07:00
Mayank Agarwal 2a986919d6 Make rocksdb-deletes faster using bloom filter
Summary:
Wrote a new function in db_impl.cc, CheckKeyMayExist, that calls Get with a new parameter turned on which makes Get return false only if the bloom filters can guarantee that the key is not in the database. Delete calls this function, and if the option deletes_use_filter is turned on and CheckKeyMayExist returns false, the delete is dropped (a user-level sketch follows below), saving:
1. A Put of delete type
2. Space in the db, and
3. Compaction time
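
A user-level sketch of the same check using the public KeyMayExist call; the diff itself wires this into Delete internally behind the deletes_use_filter option, so the helper below is only an illustration:

#include <string>
#include "rocksdb/db.h"

// Skip writing a tombstone when the filters can prove the key is absent.
// KeyMayExist returning false is a guarantee of absence; returning true only
// means the key may exist.
rocksdb::Status DeleteIfMayExist(rocksdb::DB* db, const rocksdb::Slice& key) {
  std::string value;  // may be filled opportunistically; unused here
  if (!db->KeyMayExist(rocksdb::ReadOptions(), key, &value)) {
    return rocksdb::Status::OK();  // definitely not present: drop the delete
  }
  return db->Delete(rocksdb::WriteOptions(), key);
}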

Test Plan:
make all check;
will run db_stress and db_bench and enhance unit-test once the basic design gets approved

Reviewers: dhruba, haobo, vamsi

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D11607
2013-07-11 12:11:11 -07:00
Deon Nicholas 34ef873290 Added stringappend_test back into the unit tests.
Summary:
With the Makefile now updated to correctly update all .o files, this
should fix the issues recompiling stringappend_test. This should also fix the
"segmentation-fault" that we were getting earlier. Now, stringappend_test should
be clean, and I have added it back to the unit-tests. Also made some minor updates
to the tests themselves.

Test Plan:
1. make clean; make stringappend_test -j 32	(will test it by itself)
2. make clean; make all check -j 32		(to run all unit tests)
3. make clean; make release			(test in release mode)
4. valgrind ./stringappend_test 		(valgrind tests)

Reviewers: haobo, jpaton, dhruba

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D11505
2013-06-26 11:41:13 -07:00
Mayank Agarwal b858da709a Simplify bucketing logic in ldb-ttl
Summary: [start_time, end_time) is what I'm following for the buckets and the whole time-range. Also cleaned up some code in db_ttl.*. Not correcting the spacing/indenting convention for util/ldb_cmd.cc in this diff.

Test Plan: python ldb_test.py, make ttl_test, Run mcrocksdb-backup tool, Run the ldb tool on 2 mcrocksdb production backups from sigmafio033.prn1

Reviewers: vamsi, haobo

Reviewed By: vamsi

Differential Revision: https://reviews.facebook.net/D11433
2013-06-21 09:49:24 -07:00
Mayank Agarwal 61f1baaedf Introducing timeranged scan, timeranged dump in ldb. Also the ability to count in time-batches during Dump
Summary:
Scan and Dump commands in ldb use an iterator. We need to also print the timestamp for ttl databases for debugging. For this I create a TtlIterator class pointer in these functions and assign it the value of the Iterator pointer, which actually points to a TtlIterator object, and access the new function ValueWithTS which can also return the TS. Buckets feature for the dump command: gives a count of the different key-values in the specified time-range, distributed across the time-range partitioned according to bucket-size. start_time and end_time are specified as unix timestamps and the bucket size in seconds on the user command line.
Have commented out 3 lines from ldb_test.py so that the test does not break right now. It breaks because the timestamp is also printed now and I have to look at wildcards in python to compare properly.

Test Plan: python tools/ldb_test.py

Reviewers: vamsi, dhruba, haobo, sheki

Reviewed By: vamsi

CC: leveldb

Differential Revision: https://reviews.facebook.net/D11403
2013-06-19 18:45:13 -07:00
Deon Nicholas 8926b72751 Minor tweaks to StringAppend MergeOperator.
Summary:
I'm concerned about a random seg-fault that sometimes occurs when
running stringappend_test. I will investigate further. First, I am removing
stringappend_test from the regular release tests, and making some clean-ups
to the code.

Test Plan:
1. make stringappend_test
2. ./stringappend_test

Reviewers: haobo, dhruba

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D11313
2013-06-14 16:44:39 -07:00
Deon Nicholas 5679107b07 Completed the implementation and test cases for Redis API.
Summary:
Completed the implementation for the Redis API for Lists.
The Redis API uses rocksdb as a backend to persistently
store maps from key->list. It supports basic operations
for appending, inserting, pushing, popping, and accessing
a list, given its key.
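
A hedged usage sketch against the RedisLists class under utilities/redis; the constructor and method names below (PushRight, PushLeft, Index, Length) reflect one reading of that header and should be treated as illustrative rather than exact:

#include <iostream>
#include <string>
#include "utilities/redis/redis_lists.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;
  // destructive=true clears any existing database at this path first.
  rocksdb::RedisLists lists("/tmp/redis_lists_db", options, true);

  lists.PushRight("mylist", "a");  // append to the tail
  lists.PushRight("mylist", "b");
  lists.PushLeft("mylist", "z");   // prepend to the head

  std::string head;
  if (lists.Index("mylist", 0, &head)) {
    std::cout << "head element: " << head << std::endl;  // expected "z"
  }
  std::cout << "length: " << lists.Length("mylist") << std::endl;  // expected 3
  return 0;
}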

Test Plan:
  - Compile with: make redis_test
  - Test with: ./redis_test
  - Run all unit tests (for all rocksdb) with: make all check
  - To use an interactive REDIS client use: ./redis_test -m
  - To clean the database before use:       ./redis_test -m -d

Reviewers: haobo, dhruba, zshao

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D10833
2013-06-11 11:19:49 -07:00
Mayank Agarwal 3c35eda9bd Make Write API work for TTL databases
Summary: Added logic to make another WriteBatch with timestamps during the Write function execution in the TTL class. Also expanded ttl_test to test it. Have done nothing for Merge for now.
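
A simplified sketch of that rewrite step, built on the public WriteBatch::Handler/Iterate API: copy every operation from the user's batch into a second batch whose values carry a trailing 4-byte timestamp (the suffix width and helper names are assumptions for illustration; the real code is in utilities/ttl):

#include <cstdint>
#include <ctime>
#include <string>
#include "rocksdb/slice.h"
#include "rocksdb/write_batch.h"

// Rewrites Put values with an appended timestamp; Deletes pass through.
class AppendTsHandler : public rocksdb::WriteBatch::Handler {
 public:
  explicit AppendTsHandler(rocksdb::WriteBatch* out) : out_(out) {}

  void Put(const rocksdb::Slice& key, const rocksdb::Slice& value) override {
    std::string with_ts = value.ToString();
    int32_t now = static_cast<int32_t>(time(nullptr));
    with_ts.append(reinterpret_cast<const char*>(&now), sizeof(now));
    out_->Put(key, with_ts);
  }

  void Delete(const rocksdb::Slice& key) override { out_->Delete(key); }

 private:
  rocksdb::WriteBatch* out_;
};

// Usage: build ts_batch from the user's batch, then write ts_batch instead.
//   rocksdb::WriteBatch ts_batch;
//   AppendTsHandler handler(&ts_batch);
//   user_batch.Iterate(&handler);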

Test Plan: make ttl_test;./ttl_test

Reviewers: haobo, vamsi, dhruba

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D10827
2013-06-10 15:23:44 -07:00
Deon Nicholas db1f0cddf3 Fixed valgrind errors 2013-06-05 17:25:16 -07:00
Deon Nicholas d8c7c45ea0 Very basic Multiget and simple test cases.
Summary:
Implemented the MultiGet operator which takes in a list of keys
and returns their associated values. Currently uses std::vector as its
container data structure. Otherwise, it works identically to "Get".
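
A minimal usage sketch against the MultiGet signature as it appears in the public header (keys and the surrounding function are made up for illustration):

#include <string>
#include <vector>
#include "rocksdb/db.h"

// Look up several keys in one call; statuses[i] and values[i] correspond to keys[i].
void LookupBatch(rocksdb::DB* db) {
  std::vector<rocksdb::Slice> keys = {"key1", "key2", "key3"};
  std::vector<std::string> values;
  std::vector<rocksdb::Status> statuses =
      db->MultiGet(rocksdb::ReadOptions(), keys, &values);
  for (size_t i = 0; i < keys.size(); ++i) {
    if (statuses[i].ok()) {
      // values[i] holds the value for keys[i]
    } else if (statuses[i].IsNotFound()) {
      // keys[i] is absent
    }
  }
}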

Test Plan:
 1. make db_test      ; compile it
 2. ./db_test         ; test it
 3. make all check    ; regress / run all tests
 4. make release      ; (optional) compile with release settings

Reviewers: haobo, MarkCallaghan, dhruba

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D10875
2013-06-05 11:22:38 -07:00
Haobo Xu 8a59ed9bc7 [RocksDB] fix build
Summary: assert => ASSERT_TRUE

Test Plan: make release; make check

Reviewers: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D10839
2013-05-17 16:15:44 -07:00
Deon Nicholas accd3debbb Implemented StringAppendOperator and unit tests.
Summary:
Implemented the StringAppendOperator class (subclass of MergeOperator).
Found in utilities/merge_operators/string_append/stringappend.{h,cc}

It is a rocksdb Merge Operator that supports string/list concatenation
 with a configurable delimiter.

The tests are found in .../stringappend_test.cc. It implements a
 map : key -> (list of strings), with core operations Append(list_key,val)
 and Get(list_key).
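
A small usage sketch wiring the operator into Options and issuing Merge calls (database path and keys are illustrative; header names follow the layout described above):

#include <string>
#include "rocksdb/db.h"
#include "rocksdb/options.h"
#include "utilities/merge_operators/string_append/stringappend.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;
  // Concatenate merged values with ',' between them.
  options.merge_operator.reset(new rocksdb::StringAppendOperator(','));

  rocksdb::DB* db = nullptr;
  if (!rocksdb::DB::Open(options, "/tmp/stringappend_db", &db).ok()) return 1;

  db->Merge(rocksdb::WriteOptions(), "fruits", "apple");
  db->Merge(rocksdb::WriteOptions(), "fruits", "banana");

  std::string result;
  db->Get(rocksdb::ReadOptions(), "fruits", &result);
  // result is expected to be "apple,banana"

  delete db;
  return 0;
}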

Test Plan:
1. Navigate to your rocksdb repository
2. Execute: make stringappend_test  (to compile)
3. Execute: ./stringappend_test (to run the tests)
4. Execute: make all check (to test the ENTIRE rocksdb codebase / regression)

Reviewers: haobo, dhruba, zshao

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D10737
2013-05-13 15:09:42 -07:00
Haobo Xu 4ca3c67bd3 [RocksDB] Cleanup compaction filter to use a class interface, instead of function pointer and additional context pointer.
Summary:
This diff replaces compaction_filter_args and CompactionFilter with a single compaction_filter parameter. It gives CompactionFilter better encapsulation and a similar look to Comparator and MergeOperator, which improves consistency of the overall interface.
The change is not backward compatible. Nevertheless, the two references in fbcode are not in production yet.
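
A minimal sketch of the class-based interface, using the Filter signature as it stands in current headers (this diff's exact virtuals may differ slightly):

#include <string>
#include "rocksdb/compaction_filter.h"
#include "rocksdb/options.h"

// Drops any entry whose value is empty; everything else is kept.
class DropEmptyValueFilter : public rocksdb::CompactionFilter {
 public:
  bool Filter(int /*level*/, const rocksdb::Slice& /*key*/,
              const rocksdb::Slice& existing_value,
              std::string* /*new_value*/,
              bool* /*value_changed*/) const override {
    return existing_value.empty();  // true => remove this key/value
  }
  const char* Name() const override { return "DropEmptyValueFilter"; }
};

// Replaces the old function-pointer plus context-pointer pair:
//   DropEmptyValueFilter filter;
//   options.compaction_filter = &filter;  // filter must outlive the DB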

Test Plan: make check

Reviewers: dhruba

Reviewed By: dhruba

CC: leveldb, zshao

Differential Revision: https://reviews.facebook.net/D10773
2013-05-13 14:06:10 -07:00
Mayank Agarwal 3102628873 Bring read_only into ttl
Summary: Added a read-only argument to OpenTtlDB and open the db in normal read-only mode if the argument is set to true

Test Plan: make ttl_test; ./ttl_test

Reviewers: dhruba, haobo, vamsi, sheki

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D10749
2013-05-10 16:13:26 -07:00
Mayank Agarwal c605e12479 Adding Sanity checks in Get and Iterator->value for ttl
Summary: The 2 checks added will increase reliability and help in debugging

Test Plan: make ttl_test;./ttl_test

Reviewers: vamsi, dhruba, sheki, haobo

Reviewed By: vamsi

Differential Revision: https://reviews.facebook.net/D10713
2013-05-09 18:13:27 -07:00
Haobo Xu 05e8854085 [Rocksdb] Support Merge operation in rocksdb
Summary:
This diff introduces a new Merge operation into rocksdb.
The purpose of this review is mostly getting feedback from the team (everyone please) on the design.

Please focus on the four files under include/leveldb/, as they spell the client visible interface change.
include/leveldb/db.h
include/leveldb/merge_operator.h
include/leveldb/options.h
include/leveldb/write_batch.h

Please go over local/my_test.cc carefully, as it is a concrete use case.

Please also review the implementation files to see if the straw man implementation makes sense.

Note that the diff does pass all of make check and truly supports a forward iterator over the db and a version
of Get that's based on the iterator.

Future work:
- Integration with compaction
- A raw Get implementation

I am working on a wiki that explains the design and implementation choices, but coding comes
just naturally and I think it might be a good idea to share the code earlier. The code is
heavily commented.
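
For the client-visible surface, a minimal sketch of the new write path, assuming a merge operator (for example a counter) has already been registered on Options (key and operand are illustrative):

#include <string>
#include "rocksdb/db.h"

// With options.merge_operator set, Merge() records an operand instead of
// overwriting the value; a later read resolves the final value through the
// operator.
void BumpCounter(rocksdb::DB* db, const rocksdb::Slice& key) {
  db->Merge(rocksdb::WriteOptions(), key, "1");  // add 1 to the counter

  std::string resolved;
  rocksdb::Status s = db->Get(rocksdb::ReadOptions(), key, &resolved);
  // On success, 'resolved' holds the merged value.
}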

Test Plan: run all local tests

Reviewers: dhruba, heyongqiang

Reviewed By: dhruba

CC: leveldb, zshao, sheki, emayanke, MarkCallaghan

Differential Revision: https://reviews.facebook.net/D9651
2013-05-03 16:59:02 -07:00
Mayank Agarwal 37e97b1297 Fix invalid-read to freed memory in ttl-iterator
Summary: the value function in the ttl-iterator was returning a string which would have been freed before its usage as a slice. Thanks, valgrind!
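
For illustration, a minimal (hypothetical) reproduction of the bug pattern: returning a Slice that points into a local std::string leaves the Slice dangling once the string is destroyed.

#include <string>
#include "rocksdb/slice.h"

// BROKEN: 'stripped' is destroyed when the function returns, so the Slice
// points at freed memory (exactly the kind of read valgrind flags).
rocksdb::Slice BrokenValue(const std::string& raw) {
  std::string stripped = raw.substr(0, raw.size() - 4);  // drop the timestamp
  return rocksdb::Slice(stripped);
}

// FIX: keep the stripped value alive for the iterator's lifetime (e.g. as a
// member of the ttl iterator), or return a std::string by value instead.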

Test Plan: valgrind ./ttl_test

Reviewers: dhruba, haobo, sheki, vamsi

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D10635
2013-05-03 15:09:16 -07:00
Mayank Agarwal d786b25e2d Timestamp and TTL Wrapper for rocksdb
Summary:
When opened with a DBTimestamp::Open call, timestamps are prepended to and stripped from the value during subsequent Put and Get calls respectively. The timestamp is used in Get and in a custom compaction filter to discard values which have exceeded their TTL, which is specified during Open.
Have made a temporary change to the Makefile to let us test with the temporary file TestTime.cc. Have also changed the private members of db_impl.h to protected so that they can be inherited by the new class DBTimestamp

Test Plan: make db_timestamp; TestTime.cc (will not check it in) shows how to use the APIs currently, but I will write unit-tests shortly

Reviewers: dhruba, vamsi, haobo, sheki, heyongqiang, vkrest

Reviewed By: vamsi

CC: zshao, xjin, vkrest, MarkCallaghan

Differential Revision: https://reviews.facebook.net/D10311
2013-05-02 16:34:42 -07:00