Commit Graph

1624 Commits

Lei Jin 0f2d768191 hints for narrowing down FindFile range and avoiding checking irrelevant L0 files
Summary:
The file tree structure in Version is prebuilt and the range of each file is known.
On the Get() code path, we do binary search in FindFile() by comparing
target key with each file's largest key and also check the range for each L0 file.
With some pre-calculated knowledge, each key comparison that has been done can serve
as a hint to narrow down further searches:
(1) If a key falls within a L0 file's range, we can safely skip the next
file if its range does not overlap with the current one.
(2) If a key falls within a file's range in level L0 - Ln-1, we should only
need to binary search in the next level for files that overlap with the current one.

(1) will be able to skip some files depending on the key distribution.
(2) can greatly reduce the range of binary search, especially for bottom
levels, given that one file most likely only overlaps with N files from
the level below (where N is max_bytes_for_level_multiplier). So on level
L, we will only look at ~N files instead of N^L files.

Some initial results: measured with a 500M-key DB, when the write rate is light (10k/s = 1.2M/s), this
improves QPS ~7% on top of blocked bloom. When the write rate is heavier (80k/s =
9.6M/s), it gives us ~13% improvement.
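
For illustration only, here is a minimal self-contained sketch of idea (2) (not the actual RocksDB code; names are hypothetical): once the key is known to fall inside a file on level L, the binary search on level L+1 is restricted to the precomputed slice of files that overlap that file's range.

    #include <string>
    #include <vector>

    // A file on a level, described only by its key range.
    struct FileRange {
      std::string smallest;
      std::string largest;
    };

    // Hypothetical helper: binary-search only the slice [hint_start, hint_end)
    // of the next level's files, instead of the whole level.
    size_t FindFileWithHint(const std::vector<FileRange>& level_files,
                            const std::string& key,
                            size_t hint_start, size_t hint_end) {
      size_t lo = hint_start, hi = hint_end;
      while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (level_files[mid].largest < key) {
          lo = mid + 1;  // file ends before the key; search to the right
        } else {
          hi = mid;      // file may contain the key; keep it in the range
        }
      }
      return lo;  // index of the first file whose largest key is >= key
    }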

Test Plan: make all check

Reviewers: haobo, igor, dhruba, sdong, yhchiang

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D17205
2014-04-21 09:10:12 -07:00
Igor Canadi ce353c2474 Nuke tools/shell
Summary: We don't use or build this code

Test Plan: builds

Reviewers: dhruba

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D17979
2014-04-17 14:43:42 -07:00
Igor Canadi 23c8f89b57 Revert "Don't compile ldb tool into static library"
This reverts commit e296577ef6.
2014-04-15 11:29:02 -07:00
Igor Canadi a347ffe92f Revert "Fix sst_dump and reduce_levels_test compile errors"
This reverts commit d8f00b4109.
2014-04-15 11:28:52 -07:00
Igor Canadi d8f00b4109 Fix sst_dump and reduce_levels_test compile errors 2014-04-15 11:13:12 -07:00
Igor Canadi e296577ef6 Don't compile ldb tool into static library
Summary:
This is first step of my effort to reduce size of librocksdb.a for use in mobile.

ldb object files are huge and are meant to be used as a command line tool. I moved them to the `tools/` directory and include them only when compiling `ldb`

This diff reduced librocksdb.a from 42MB to 39MB on my mac (not stripped).

Test Plan: ran ldb

Reviewers: dhruba, haobo, sdong, ljin, yhchiang

Reviewed By: yhchiang

CC: leveldb

Differential Revision: https://reviews.facebook.net/D17823
2014-04-15 10:52:39 -07:00
Igor Canadi 4daea66343 Turn on -Wmissing-prototypes
Summary: Compiling for iOS turns on -Wmissing-prototypes by default, which causes rocksdb to fail to compile. This diff turns on -Wmissing-prototypes in our compile options and cleans up all functions with missing prototypes.

Test Plan: compiles

Reviewers: dhruba, haobo, ljin, sdong

Reviewed By: ljin

CC: leveldb

Differential Revision: https://reviews.facebook.net/D17649
2014-04-09 21:17:14 -07:00
Igor Canadi b947fdc89d Column family support for DB::OpenForReadOnly()
Summary: When opening a DB in read-only mode, the client can choose to specify only a subset of column families (the "default" column family can't be omitted, though)
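
A hedged usage sketch (assuming the read-only open mirrors DB::Open's column-family signature; the "metadata" family name and the path are made up):

    #include <vector>
    #include "rocksdb/db.h"

    int main() {
      rocksdb::DBOptions db_options;
      // Only the column families we care about, plus the mandatory default one.
      std::vector<rocksdb::ColumnFamilyDescriptor> families = {
          {rocksdb::kDefaultColumnFamilyName, rocksdb::ColumnFamilyOptions()},
          {"metadata", rocksdb::ColumnFamilyOptions()}};
      std::vector<rocksdb::ColumnFamilyHandle*> handles;
      rocksdb::DB* db = nullptr;
      rocksdb::Status s = rocksdb::DB::OpenForReadOnly(
          db_options, "/tmp/testdb", families, &handles, &db);
      if (!s.ok()) return 1;
      // ... read-only Gets / iterators against handles[1] ...
      for (auto* h : handles) delete h;
      delete db;
      return 0;
    }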

Test Plan: added a unit test in column_family_test

Reviewers: haobo, sdong, ljin, dhruba

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D17565
2014-04-09 09:56:17 -07:00
Igor Canadi 3d2fe844ab Merge branch 'master' into columnfamilies
Conflicts:
	db/db_impl.cc
	db/db_impl.h
	db/memtable_list.cc
	db/version_set.cc
2014-04-07 11:31:11 -07:00
Yueh-Hsuan Chiang c0b9fa8b3e Add script auto_sanity_test.sh to perform auto sanity test
Summary:
Add script auto_sanity_test.sh to perform auto sanity test

usage: auto_sanity_test.sh [new_commit] [old_commit]

Running without commit parameters will do the sanity test with the latest
commit and the latest 10 commits.

Test Plan: ./auto_sanity_test.sh

Reviewers: haobo, igor

Reviewed By: igor

CC: leveldb

Differential Revision: https://reviews.facebook.net/D17397
2014-04-03 10:21:46 -07:00
Igor Canadi 8555ce2dec Merge branch 'master' into columnfamilies 2014-04-02 10:48:05 -07:00
Igor Canadi 05080dae3f fix db_sanity_test 2014-03-31 17:06:53 -07:00
Igor Canadi 76642b81f4 Increase done even if progress_reports is false 2014-03-20 16:52:59 -07:00
Igor Canadi ac328a86b9 Merge branch 'master' into columnfamilies
Conflicts:
	db/db_impl.cc
	db/db_test.cc
2014-03-20 14:41:37 -07:00
Yiting Li 7981a43274 Consistency Check Function
Summary: Added a function/command to check the consistency of live files' meta data

Test Plan:
Manual test (size mismatch, file not exist).
Command test script.

Reviewers: haobo

Reviewed By: haobo

CC: dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D16935
2014-03-20 13:42:45 -07:00
Igor Canadi 4cfb0eb40f Delete rocksdb dir after crashtest
Summary:
We should clean up after ourselves. Last crashtest failed because the disk on the cibuild machine was full.

Also, db_crashtest is supposed to run the test on the same database in every iteration. This isn't the case currently, because every run creates a new tmp directory. It's fixed with this diff.

Test Plan: ran it

Reviewers: ljin

Reviewed By: ljin

CC: leveldb

Differential Revision: https://reviews.facebook.net/D17073
2014-03-20 11:11:08 -07:00
Igor Canadi 159928dfa5 Added flag progress_reports in db_stress 2014-03-19 09:58:41 -07:00
Igor Canadi 64904b39a0 Merge branch 'master' into columnfamilies
Conflicts:
	utilities/backupable/backupable_db.cc
2014-03-17 17:57:14 -07:00
Igor Canadi 5601bc4619 Check starts_with(prefix) in MultiPrefixIterate
Summary: We switched to the prefix_seek method of seeking. This means that anytime we check Valid(), we also need to check starts_with(prefix)
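
A minimal sketch of the resulting iteration pattern (the helper function is hypothetical; Seek/Valid/key/starts_with are the real APIs):

    #include "rocksdb/iterator.h"
    #include "rocksdb/slice.h"

    // Scan every key sharing `prefix`, stopping when the iterator becomes
    // invalid OR leaves the prefix range -- Valid() alone is not enough.
    void ScanPrefix(rocksdb::Iterator* iter, const rocksdb::Slice& prefix) {
      for (iter->Seek(prefix);
           iter->Valid() && iter->key().starts_with(prefix);
           iter->Next()) {
        // consume iter->key() / iter->value() here
      }
    }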

Test Plan: ran db_stress

Reviewers: ljin

Reviewed By: ljin

CC: leveldb

Differential Revision: https://reviews.facebook.net/D16953
2014-03-17 17:02:34 -07:00
Igor Canadi e0c1211555 Merge branch 'master' into columnfamilies
Conflicts:
	db/version_set.cc
	tools/db_stress.cc
2014-03-17 12:21:05 -07:00
Igor Canadi 9b8a2b52d4 No prefix iterator in db_stress
Summary: We're trying to deprecate prefix iterators, so no need to test them in db_stress

Test Plan: ran it

Reviewers: ljin

Reviewed By: ljin

CC: leveldb

Differential Revision: https://reviews.facebook.net/D16917
2014-03-17 10:02:46 -07:00
Igor Canadi 928ee23567 Change WriteBatch interface 2014-03-14 13:40:06 -07:00
Igor Canadi e1f56e12cf Merge branch 'master' into columnfamilies
Conflicts:
	db/db_impl.cc
	db/db_test.cc
	tools/db_stress.cc
2014-03-13 13:21:20 -07:00
Igor Canadi 04a1035efe Revert "DB stress with normal skip list"
This reverts commit 86926d8c6a.
2014-03-12 22:21:13 -07:00
Lei Jin 02a2cb139b fix VerifyDb in StressTest
Summary:
this should fix the hash_skip_list issue, but I still see seqno
assertion failure in the last run. Will continue investigating and
address that in a different diff

Test Plan: make whitebox_crash_test

Reviewers: igor

Reviewed By: igor

CC: leveldb

Differential Revision: https://reviews.facebook.net/D16851
2014-03-12 22:20:22 -07:00
Igor Canadi 86926d8c6a DB stress with normal skip list
Summary:
Hash skip list has issues, causing db_stress to fail badly.

For now, switching back to skip_list by default before we figure out root cause.

Test Plan: db_stress is happy(ier)

Reviewers: ljin

Reviewed By: ljin

CC: leveldb

Differential Revision: https://reviews.facebook.net/D16845
2014-03-12 17:35:27 -07:00
Igor Canadi 7b7793e97a Don't sync in stress test
Summary: Syncing in stress test makes it run much much much slower. It also doesn't add much value IMO.

Test Plan: no

Reviewers: ljin

Reviewed By: ljin

CC: leveldb

Differential Revision: https://reviews.facebook.net/D16839
2014-03-12 15:12:09 -07:00
Igor Canadi dff9214165 Merge branch 'master' into columnfamilies
Conflicts:
	db/db_impl.cc
	tools/db_stress.cc
2014-03-12 10:17:41 -07:00
Igor Canadi 2b95dc1542 Revert "Fix bad merge of D16791 and D16767"
This reverts commit 839c8ecfcd.
2014-03-12 09:37:43 -07:00
Igor Canadi 5ba028c179 DBStress cleanup
Summary:
*) fixed the comment
*) the constant 1 was not cast to 64-bit, which (I think) might cause overflow if we shift it too far (see the sketch after this list)
*) default prefix size to be 7, like it was before
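
A self-contained illustration of the shift issue from the second bullet (not the db_stress code itself):

    #include <cstdint>
    #include <iostream>

    int main() {
      int shift = 40;
      // The literal 1 is a 32-bit int, so shifting it by >= 32 bits is
      // undefined behaviour and typically yields garbage.
      // uint64_t bad = 1 << shift;          // wrong: shift happens in 32 bits
      uint64_t good = uint64_t{1} << shift;  // ok: widen to 64 bits first
      std::cout << good << "\n";             // prints 1099511627776 (2^40)
      return 0;
    }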

Test Plan: compiled

Reviewers: ljin

Reviewed By: ljin

CC: leveldb

Differential Revision: https://reviews.facebook.net/D16827
2014-03-12 09:31:06 -07:00
sdong 839c8ecfcd Fix bad merge of D16791 and D16767
Summary: A bad auto-merge caused the log buffer to be flushed twice. Remove the unintended flush.

Test Plan: Should already be tested (the code looks the same as when I ran unit tests).

Reviewers: haobo, igor

Reviewed By: haobo

CC: ljin, yhchiang, leveldb

Differential Revision: https://reviews.facebook.net/D16821
2014-03-11 21:31:57 -07:00
Lei Jin 86ba3e24e3 make assert based on FLAGS_prefix_size
Summary: as title

Test Plan: running python tools/db_crashtest.py

Reviewers: igor

Reviewed By: igor

CC: leveldb

Differential Revision: https://reviews.facebook.net/D16803
2014-03-11 16:33:42 -07:00
Lei Jin 02dab3be11 fix db_stress test
Summary: Fix the db_stress test; let it run with HashSkipList for real

Test Plan:
python tools/db_crashtest.py
python tools/db_crashtest2.py

Reviewers: igor, haobo

Reviewed By: igor

CC: leveldb

Differential Revision: https://reviews.facebook.net/D16773
2014-03-11 13:44:33 -07:00
Igor Canadi 56ca8338e5 initialize static const outside of class 2014-03-11 13:08:48 -07:00
Igor Canadi 457c78eb89 [CF] db_stress for column families
Summary:
I had this diff for a while to test the column families implementation. Last night, I ran it successfully for 10 hours with the command:

   time ./db_stress --threads=30 --ops_per_thread=200000000 --max_key=5000 --column_families=20 --clear_column_family_one_in=3000000 --verify_before_write=1  --reopen=50 --max_background_compactions=10 --max_background_flushes=10 --db=/tmp/db_stress

It is ready to be committed :)

Test Plan: Ran it for 10 hours

Reviewers: dhruba, haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D16797
2014-03-11 12:06:12 -07:00
Lei Jin 8d007b4aaf Consolidate SliceTransform object ownership
Summary:
(1) Fix SanitizeOptions() to also check HashLinkList. The current
dynamic case just happens to work because the 2 classes have the same
layout.
(2) Do not delete the SliceTransform object in the HashSkipListFactory and
HashLinkListFactory destructors. Reason: SanitizeOptions() enforces
prefix_extractor and SliceTransform to be the same object when a
Hash**Factory is used. This makes the behavior strange: when a
Hash**Factory is used, prefix_extractor will be released by RocksDB. If
another memtable factory is used, prefix_extractor should be released by
the user.

Test Plan: db_bench && make asan_check

Reviewers: haobo, igor, sdong

Reviewed By: igor

CC: leveldb, dhruba

Differential Revision: https://reviews.facebook.net/D16587
2014-03-10 12:56:46 -07:00
Lei Jin cff908db09 fix ldb_test TtlPutGet test
Summary:
If the last byte of the timestamp is 0, the output value of the ttl db
can get cut off early. Change the test to compare the hex format instead.

Test Plan:
this does not happen consistently, but it did not fail after the last
100 runs
for i in {1..100}; do python ./tools/ldb_test.py; done

Reviewers: igor, haobo

Reviewed By: igor

CC: leveldb

Differential Revision: https://reviews.facebook.net/D16725
2014-03-10 12:11:46 -07:00
Igor Canadi 9c8ad62691 DB Sanity Test
Summary:
@kailiu mentioned in a meeting yesterday that we sometimes have trouble opening a DB created by an old version with the new version. This will be very important to test for column families, since I'm changing the disk format of the MANIFEST.

I added a tool that can help us test that. Usage:
./db_sanity_test <path> create
will create a bunch of DBs under <path>
<change RocksDB version>
./db_sanity_test <path> verify
will verify consistency of DBs created under <path>

Test Plan: ran the db_sanity_test

Reviewers: kailiu, dhruba, haobo

Reviewed By: kailiu

CC: leveldb, kailiu, xjin

Differential Revision: https://reviews.facebook.net/D16605
2014-03-06 11:36:39 -08:00
Igor Canadi 97eddef235 Reopen DB in crash test
Summary:
Why don't we automatically reopen the DB when running the crash test (which runs in our nightly build)? If I understand correctly, the crash test manually reopens the DB, but then the DB does not check its consistency when you kill the db_stress process and re-run it.
Does this make sense?

Test Plan: not really

Reviewers: dhruba, haobo, emayanke

Reviewed By: emayanke

CC: leveldb

Differential Revision: https://reviews.facebook.net/D16167
2014-03-03 17:10:30 -08:00
Igor Canadi 8e634d3ea4 Merge pull request #74 from alberts/lz4
Support for LZ4 compression.
2014-02-10 15:46:56 -08:00
Albert Strasheim df2f92214a Support for LZ4 compression. 2014-02-08 14:15:51 -08:00
Kai Liu b8ea5e36b3 Fix incompatible compilation in Linux server 2014-02-07 19:47:48 -08:00
kailiu 161ab42a8a Make table properties shareable
Summary:
We are going to expose properties of all tables to end users through "some" db interface.
However, the current design doesn't naturally fit this need, because:

1. If a table is present in the table cache, we cannot simply return a reference to its table properties, because the table may be destroyed after compaction (and we don't want to hold a ref to the version).
2. Copying table properties is OK, but it's slow.

Thus, in this diff I change the table reader's interface to return a shared pointer (to const table properties) instead of a const reference.
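
A minimal sketch of the interface change (hypothetical class; the real TableReader API may differ in detail):

    #include <memory>

    struct TableProperties { /* ... */ };

    class TableReader {
     public:
      // Before (sketch): a const reference tied the caller's lifetime to the
      // table reader, which may be destroyed after a compaction.
      //   const TableProperties& GetTableProperties() const;

      // After (sketch): a shared_ptr keeps the properties alive on their own.
      std::shared_ptr<const TableProperties> GetTableProperties() const {
        return table_properties_;
      }

     private:
      std::shared_ptr<const TableProperties> table_properties_ =
          std::make_shared<TableProperties>();
    };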

Test Plan: `make check` passed

Reviewers: haobo, sdong, dhruba

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D15999
2014-02-07 19:26:49 -08:00
Yueh-Hsuan Chiang 3ce8d9a988 Add support for plain table format to sst_dump.
Summary:
This diff enables the command line tool `sst_dump` to work for sst files
under plain table format.  Changes include:
  * In tools/sst_dump.cc:
    - add support for plain table format
    - display prefix_extractor information when --show_properties is on
  * In table/format.cc
    - Now the table magic number of a Footer can be later initialized
      via ReadFooterFromFile().
  * In table/meta_blocks:
    - add function ReadTableMagicNumber() that reads the magic number of
      the specified file.

Minor fixes:
 - remove a duplicate #include in table/table_test.cc
 - fix a commentary typo in include/rocksdb/memtablerep.h
 - fix lint errors.

Test Plan:
Runs sst_dump with both block-based and plain-table format files with
different arguments, specifically those with --show-properties and --from.

* sample output:
  https://reviews.facebook.net/P261

Reviewers: kailiu, sdong, xjin

CC: leveldb

Differential Revision: https://reviews.facebook.net/D15903
2014-02-07 11:15:00 -08:00
kailiu 84f8185fc0 Merge branch 'master' into performance
Conflicts:
	HISTORY.md
	db/db_impl.cc
	db/memtable.cc
2014-02-05 21:21:00 -08:00
Igor Canadi 2966d764cd Fix some 32-bit compile errors
Summary: RocksDB apparently doesn't compile on 32-bit architectures. This is an attempt to fix some of the 32-bit errors. They are reported here: https://gist.github.com/paxos/8789697

Test Plan: RocksDB still compiles on 64-bit :)

Reviewers: kailiu

Reviewed By: kailiu

CC: leveldb

Differential Revision: https://reviews.facebook.net/D15825
2014-02-03 13:48:30 -08:00
Siying Dong d169b67680 [Performance Branch] PlainTable to encode rows with seqID 0, value type using 1 internal byte.
Summary: In PlainTable, use a single byte to represent the 8 internal bytes when seqID = 0 and the entry is of value type (which should be common for bottom-most files). This saves 7 bytes per key in uncompressed cases.
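
A hedged sketch of the idea (not the real PlainTable encoder; the marker value is hypothetical): the 8-byte internal-key suffix packs (sequence << 8) | type, so when the sequence is 0 and the type is "value" a single marker byte can stand in for all 8 bytes.

    #include <cstdint>
    #include <string>

    constexpr uint8_t kTypeValue = 0x1;            // value-type tag (sketch)
    constexpr uint8_t kSeqZeroValueMarker = 0xFF;  // hypothetical 1-byte marker

    void AppendInternalSuffix(std::string* out, uint64_t seq, uint8_t type) {
      if (seq == 0 && type == kTypeValue) {
        out->push_back(static_cast<char>(kSeqZeroValueMarker));  // 1 byte
        return;
      }
      uint64_t packed = (seq << 8) | type;         // full 8-byte suffix
      for (int i = 0; i < 8; ++i) {                // little-endian fixed64
        out->push_back(static_cast<char>((packed >> (8 * i)) & 0xFF));
      }
    }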

Test Plan: make all check

Reviewers: haobo, dhruba, kailiu

Reviewed By: haobo

CC: igor, leveldb

Differential Revision: https://reviews.facebook.net/D15489
2014-02-03 12:19:30 -08:00
kailiu 4f6cb17bdb First phase API clean up
Summary:
Addressed all the issues in https://reviews.facebook.net/D15447.
Now most table-related modules are hidden from user land.

Test Plan: make check

Reviewers: sdong, haobo, dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D15525
2014-02-03 00:30:43 -08:00
kailiu 4e0298f23c Clean up arena API
Summary:
Easy things go first. This patch moves arena to an internal dir; based
on that, the coming patch will deal with memtable_rep.

Test Plan: make check

Reviewers: haobo, sdong, dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D15615
2014-01-30 22:10:10 -08:00
Yueh-Hsuan Chiang a46ac92138 Allow command line tool sst-dump to display table properties.
Summary:
Add option '--show_properties' to sst_dump tool to allow displaying
property block of the specified files.

Test Plan:
Run sst_dump with the following arguments, which covers cases affected by
this diff:

  1. with only --file
  2. with both --file and --show_properties
  3. with --file, --show_properties, and --from

Reviewers: kailiu, xjin

Differential Revision: https://reviews.facebook.net/D15453
2014-01-30 01:50:34 -08:00
Siying Dong 8477255da3 Moving Some includes from options.h to forward declaration
Summary: By removing some includes from options.h and relying on forward declarations, we can more easily reason about the dependencies.
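
A small generic illustration of the technique (not the actual options.h contents):

    // options.h (sketch)
    class Comparator;      // forward declaration instead of an #include;
    class MergeOperator;   // enough because only pointers are stored here

    struct Options {
      const Comparator* comparator = nullptr;
      MergeOperator* merge_operator = nullptr;
      // ... callers that dereference these include the full headers instead
    };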

Test Plan: make all check

Reviewers: kailiu, haobo, igor, dhruba

Reviewed By: kailiu

CC: leveldb

Differential Revision: https://reviews.facebook.net/D15411
2014-01-24 17:16:22 -08:00
Igor Canadi e832e72b31 Revert "Moving to glibc-fb"
This reverts commit d24961b65e.

For some reason, glibc2.17-fb breaks gflags. Reverting for now
2014-01-24 11:50:38 -08:00
Igor Canadi d24961b65e Moving to glibc-fb
Summary:
It looks like we might have some trouble when building the new release with 4.8, since fbcode is using glibc2.17-fb by default and we are using glibc2.17. It was reported by Benjamin Renard in our internal group.

This diff moves our fbcode build to use glibc2.17-fb by default. I got some linker errors when compiling, complaining that `google::SetUsageMessage()` was undefined. After deleting all offending lines, the compile was successful and everything works.

Test Plan:
Compiled
Ran ./db_bench ./db_stress ./db_repl_stress

Reviewers: kailiu

Reviewed By: kailiu

CC: leveldb

Differential Revision: https://reviews.facebook.net/D15405
2014-01-24 10:24:08 -08:00
Igor Canadi 83681bf9ef Statistics code cleanup
Summary: I'm separating code-cleanup part of https://reviews.facebook.net/D14517. This will make D14517 easier to understand and this diff easier to review.

Test Plan: make check

Reviewers: haobo, kailiu, sdong, dhruba, tnovak

Reviewed By: tnovak

CC: leveldb

Differential Revision: https://reviews.facebook.net/D15099
2014-01-17 12:46:06 -08:00
Igor Canadi eb12e47e0e Killing Transform Rep
Summary:
Let's get rid of TransformRep and its children. We have confirmed that HashSkipListRep works better with multifeed, so there is no benefit to keeping this around.

This diff is mostly just deleting references to the obsoleted functions. I also have a diff for fbcode that we'll need to push when we switch to the new release.

I had to expose HashSkipListRepFactory in the client header files because db_impl.cc needs access to GetTransform() function for SanitizeOptions.

Test Plan: make check

Reviewers: dhruba, haobo, kailiu, sdong

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D14397
2013-12-03 12:42:15 -08:00
Igor Canadi 469a9f32a7 Fix two nasty use-after-free-bugs
Summary:
These bugs were caught by ASAN crash test.
1. The first one, in table/filter_block.cc, is very nasty. We take a reference into entries_ and store it in the Slice prev. Then we call entries_.append(), which can reallocate the underlying buffer. The Slice prev now points to junk (see the sketch after this list).
2. The second one is a bug in a test, so it's not very serious. Once we set read_opts.prefix, we never clear it, so some other function might still reference it.
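
A self-contained illustration of bug (1) above (generic types, not the filter_block.cc code):

    #include <cstddef>
    #include <string>

    struct View { const char* data; std::size_t size; };  // Slice-like view

    int main() {
      std::string entries = "old-filter-data";
      View prev{entries.data(), entries.size()};  // points into entries' buffer
      entries.append(std::string(1024, 'x'));     // may reallocate that buffer
      // prev.data may now dangle; reading through it is undefined behaviour.
      // Fix: re-take the view (or copy the bytes) after any mutation.
      View fixed{entries.data(), entries.size()};
      (void)prev; (void)fixed;
      return 0;
    }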

Test Plan: asan crash test now runs more than 5 mins. Before, it failed immediately. I will run the full one, but the full one takes quite some time (5 hours)

Reviewers: dhruba, haobo, kailiu

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D14223
2013-11-19 21:01:48 -08:00
kailiu 97d8e573a6 make util/env_posix.cc work under mac
Summary: This diff involves some more complicated issues in the posix environment.

Test Plan: works under Mac OS; will need to verify on a dev box.

Reviewers: dhruba

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D14061
2013-11-16 23:44:39 -08:00
Pascal Borreli 443e04e62d Fixed typos 2013-11-16 11:21:34 +00:00
Kai Liu 88ba331c1a Add the index/filter block cache
Summary: This diff leverages the existing block cache and extends it to cache index/filter blocks.

Test Plan:
Added new tests in db_test and table_test

The correctness is checked by:

1. make check
2. make valgrind_check

Performance is tested by:

1. 10 runs of build_tools/regression_build_test.sh on two versions of rocksdb, before and after the code change. Test results suggest no significant difference between them. For the two key operations `overwrite` and `readrandom`, the average iops of both versions are 20k and ~260k respectively, with very small variance.
2. db_stress.

Reviewers: dhruba

Reviewed By: dhruba

CC: leveldb, haobo, xjin

Differential Revision: https://reviews.facebook.net/D13167
2013-11-12 22:46:51 -08:00
Kai Liu 22e1b04deb Quick fix for a string format
Summary:

Fix one more string format issue that throws a warning on Mac.
2013-11-12 21:22:32 -08:00
shamdor c2be2cba04 WAL log retention policy based on archive size.
Summary:
Archive cleaning will still happen every WAL_ttl seconds,
but archived logs will be deleted only if the archive size
is greater than the WAL_size_limit value.
Empty archived logs will be deleted every WAL_ttl seconds.
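
A hedged example of the resulting knobs (option names as spelled in current RocksDB; the original diff may differ slightly):

    #include "rocksdb/options.h"

    rocksdb::Options MakeWalRetentionOptions() {
      rocksdb::Options options;
      options.WAL_ttl_seconds = 3600;    // sweep the archive every hour...
      options.WAL_size_limit_MB = 1024;  // ...but trim only once it exceeds 1 GB
      return options;
    }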

Test Plan:
1. Unit tests pass.
2. Benchmark.

Reviewers: emayanke, dhruba, haobo, sdong, kailiu, igor

Reviewed By: emayanke

CC: leveldb

Differential Revision: https://reviews.facebook.net/D13869
2013-11-06 18:46:28 -08:00
Dhruba Borthakur b4ad5e89ae Implement a compressed block cache.
Summary:
RocksDB can now support an uncompressed block cache, a compressed
block cache, or both. Lookups first look for a block in the
uncompressed cache; only if it is not found there is it looked up
in the compressed cache. If it is found in the compressed cache,
it is uncompressed and inserted into the uncompressed cache.

It is possible that the same block resides in the compressed cache
as well as the uncompressed cache at the same time. Both caches
have their own individual LRU policy.
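
A hedged configuration sketch (option names as they appeared on Options around this era; later releases moved them to BlockBasedTableOptions):

    #include "rocksdb/cache.h"
    #include "rocksdb/options.h"

    rocksdb::Options MakeTwoTierCacheOptions() {
      rocksdb::Options options;
      // Uncompressed cache: checked first, holds ready-to-use blocks.
      options.block_cache = rocksdb::NewLRUCache(512 * 1024 * 1024);
      // Compressed cache: checked on a miss; hits are uncompressed and then
      // promoted into the uncompressed cache.
      options.block_cache_compressed = rocksdb::NewLRUCache(1024 * 1024 * 1024);
      return options;
    }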

Test Plan: Unit test case attached.

Reviewers: kailiu, sdong, haobo, leveldb

Reviewed By: haobo

CC: xjin, haobo

Differential Revision: https://reviews.facebook.net/D12675
2013-11-01 14:31:35 -07:00
Piyush Garg 1e4375d2ef Task #3071144 Enhance ldb (db dump tool for leveldb) to report row counters for each row type
Summary: Added an option --count_delim=<char> which takes the given character as a delimiter ('.' by default) and reports the count of each row type found in the db

Test Plan:
1. Created test in file (for DBDumperCommand) rocksdb/tools/ldb_test.py which puts various key value pair in db and checks the output using dump --count_delim ,--count_delim="." and --count_delim=",".
2. Created test in file (for InternalDumperCommand) rocksdb/tools/ldb_test.py which puts various key value pair in db and checks the output using dump --count_delim ,--count_delim="." and --count_delim=",".
3. Manually created a database with several keys of several type and verified by running the command
   ./ldb db=<path> dump --count_delim="<char>"
   ./ldb db=<path> idump --count_delim="<char>"

Reviewers: vamsi, dhruba, emayanke, kailiu

Reviewed By: vamsi

CC: leveldb

Differential Revision: https://reviews.facebook.net/D13815
2013-11-01 13:59:14 -07:00
Igor Canadi 138a8eee8e Fix make release
Summary: Don't define if already defined.

Test Plan: Running make release in parallel, will not commit if it fails.

Reviewers: emayanke

Reviewed By: emayanke

CC: leveldb

Differential Revision: https://reviews.facebook.net/D13833
2013-10-31 11:47:22 -07:00
Siying Dong f03b2df010 Follow-up Cleaning-up After D13521
Summary:
This patch is to address @haobo's comments on D13521:
1. rename Table to be TableReader and make its factory function to be GetTableReader
2. move the compression type selection logic out of TableBuilder but to compaction logic
3. more accurate comments
4. Move stat name constants into BlockBasedTable implementation.
5. remove some leftover code in simple_table_db_test

Test Plan: pass test suites.

Reviewers: haobo, dhruba, kailiu

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D13785
2013-10-30 10:52:33 -07:00
Siying Dong d4eec30ed0 Make "Table" pluggable
Summary: This patch makes Table and TableBuilder abstract classes and moves the current table implementation into BlockBasedTable and BlockBasedTableBuilder.

Test Plan: Make db_test.cc work with the block-based table. Add a new test, simple_table_db_test.cc, where a different simple table format is implemented.

Reviewers: dhruba, haobo, kailiu, emayanke, vamsi

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D13521
2013-10-28 17:54:09 -07:00
Igor Canadi 8ace6b0f91 Run benchmark with no debug
Summary: assert(Overlap) significantly slows down the benchmark. Ignore assertions when executing blob_store_bench.

Test Plan: Ran the benchmark

Reviewers: dhruba, haobo, kailiu

Reviewed By: kailiu

CC: leveldb

Differential Revision: https://reviews.facebook.net/D13737
2013-10-28 17:42:33 -07:00
Igor Canadi 17991cd5a0 Fix data race in BlobStore benchmark
Summary: Apparently C++ doesn't like it if you copy around its atomic<> variables. When running the benchmark for a longer time, it used to stall. Changed WorkerThread in the config to WorkerThread*. It works now.
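
A self-contained illustration of why the switch to pointers helps (generic struct, not the benchmark's actual config):

    #include <atomic>
    #include <cstdint>
    #include <vector>

    struct WorkerThread {
      std::atomic<bool> stopped{false};
      std::atomic<uint64_t> ops{0};
    };

    int main() {
      // std::atomic<> is neither copyable nor movable, so passing WorkerThread
      // around by value (e.g. copying a config struct that contains it) does
      // not do what you want -- store and share pointers instead.
      std::vector<WorkerThread*> workers;
      workers.push_back(new WorkerThread());
      workers[0]->ops.fetch_add(1);
      for (auto* w : workers) delete w;
      return 0;
    }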

Test Plan: Ran benchmark

Reviewers: dhruba

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D13731
2013-10-28 16:31:44 -07:00
Slobodan Predolac e44976b199 Conversion of db_bench, db_stress and db_repl_stress to use gflags
Summary: Converted db_stress, db_repl_stress and db_bench to use gflags

Test Plan: I tested by printing out all the flags from old and new versions. Tried defaults, + various combinations with "interesting flags". Also, tested by running db_crashtest.py and db_crashtest2.py.

Reviewers: emayanke, dhruba, haobo, kailiu, sdong

Reviewed By: emayanke

CC: leveldb, xjin

Differential Revision: https://reviews.facebook.net/D13581
2013-10-24 07:43:14 -07:00
Igor Canadi 7e2c1ba173 BlobStore Benchmark
Summary:
Finally, arc diff works again! This has been sitting in my repo for a while.

I would like some comments on my BlobStore benchmark. We don't have to check this in.

Also, I don't do any fsync in the BlobStore, so this is all extremely fast. I'm not sure what durability guarantees we need from the BlobStore.

Test Plan: Nope

Reviewers: dhruba, haobo, kailiu, emayanke

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D13527
2013-10-23 17:31:12 -07:00
Dhruba Borthakur 9cd221094c Add appropriate LICENSE and Copyright message.
Summary:
Add appropriate LICENSE and Copyright message.

Test Plan:
make check

Reviewers:

CC:

Task ID: #

Blame Rev:
2013-10-16 17:48:41 -07:00
Siying Dong 88f2f89068 Change Function names from Compaction->Flush When they really mean Flush
Summary: While debugging the unit test failures that show up when the background flush thread is enabled, I felt the function names could be made clearer for people to understand. Also, once the names are fixed, the bugs in some tests become obvious in many places (and some of those tests are failing). This patch cleans this up for future maintenance.

Test Plan: Run test suites.

Reviewers: haobo, dhruba, xjin

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D13431
2013-10-14 15:12:15 -07:00
Dhruba Borthakur 4463b11cad Migrate names of properties from 'leveldb' prefix to 'rocksdb' prefix.
Summary: Migrate names of properties from 'leveldb' prefix to 'rocksdb' prefix.
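
A hedged usage sketch of the renamed properties (GetProperty is the real API; "rocksdb.stats" is one of the renamed names):

    #include <string>
    #include "rocksdb/db.h"

    void DumpStats(rocksdb::DB* db) {
      std::string stats;
      if (db->GetProperty("rocksdb.stats", &stats)) {  // was "leveldb.stats"
        // print or log `stats`
      }
    }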

Test Plan: make check

Reviewers: emayanke, haobo

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D13311
2013-10-06 00:14:26 -07:00
Dhruba Borthakur a143ef9b38 Change namespace from leveldb to rocksdb
Summary:
Change namespace from leveldb to rocksdb. This allows a single
application to link in open-source leveldb code as well as
rocksdb code into the same process.

Test Plan: compile rocksdb

Reviewers: emayanke

Reviewed By: emayanke

CC: leveldb

Differential Revision: https://reviews.facebook.net/D13287
2013-10-04 11:59:26 -07:00
Mayank Agarwal 6b34021fc2 Triggering verify for gets also
Summary: Will use iterators to verify keys in the db for half of its keys and Gets for the other half.

Test Plan: ./db_stress --max_key=1000 --ops_per_thread=100

Reviewers: dhruba, haobo

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D13227
2013-10-02 11:22:17 -07:00
Natalie Hildebrandt 7edb92b843 Phase 2 of iterator stress test
Summary: Using an iterator instead of the Get method, each thread goes through a portion of the database and verifies values by comparing to the shared state.

Test Plan:
./db_stress --db=/tmp/tmppp --max_key=10000 --ops_per_thread=10000

To test some basic cases, the following lines can be added (each set in turn) to the verifyDb method with the following expected results:

    // Should abort with "Unexpected value found"
    shared.Delete(start);

    // Should abort with "Value not found"
    WriteOptions write_opts;
    db_->Delete(write_opts, Key(start));

    // Should succeed
    WriteOptions write_opts;
    shared.Delete(start);
     db_->Delete(write_opts, Key(start));

    // Should abort with "Value not found"
    WriteOptions write_opts;
    db_->Delete(write_opts, Key(start + (end-start)/2));

    // Should abort with "Value not found"
    db_->Delete(write_opts, Key(end-1));

    // Should abort with "Unexpected value"
    shared.Delete(end-1);

    // Should abort with "Unexpected value"
    shared.Delete(start + (end-start)/2);

    // Should abort with "Value not found"
    db_->Delete(write_opts, Key(start));
    shared.Delete(start);
    db_->Delete(write_opts, Key(end-1));
    db_->Delete(write_opts, Key(end-2));

To test the out of range abort, change the key in the for loop to Key(i+1), so that the key defined by the index i is now outside of the supposed range of the database.

Reviewers: emayanke

Reviewed By: emayanke

CC: dhruba, xjin

Differential Revision: https://reviews.facebook.net/D13071
2013-09-30 16:48:00 -07:00
Natalie Hildebrandt 9262061b0d Fixing crashing tests to include iterpercent param
Summary: Adding in the iterpercent flag to tests.

Test Plan: make crash_test

Reviewers: emayanke

Reviewed By: emayanke

Differential Revision: https://reviews.facebook.net/D13035
2013-09-20 16:27:22 -07:00
Natalie Hildebrandt 433541823c Phase 1 of an iterator stress test
Summary:
Added MultiIterate() which does a seek and some Next/Prev
calls.  Iterator status is checked only, no data integrity check

Test Plan:
make db_stress
./db_stress --iterpercent=<nonzero value> --readpercent=, etc.

Reviewers: emayanke, dhruba, xjin

Reviewed By: emayanke

CC: leveldb

Differential Revision: https://reviews.facebook.net/D12915
2013-09-19 16:47:24 -07:00
Dhruba Borthakur 4012ca1c7b Added a parameter to limit the maximum space amplification for universal compaction.
Summary:
Added a new field called max_size_amplification_ratio in the
CompactionOptionsUniversal structure. This determines the maximum
percentage overhead of space amplification.

Size amplification is defined as the ratio between the sum of the sizes
of all files other than the oldest file and the size of the oldest file.
If the size amplification exceeds the specified value, then min_merge_width
and max_merge_width are ignored and a full compaction of all files is done.
A value of 10 means that a database that stores 100 bytes
of user data could occupy 110 bytes of physical storage.
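
A worked sketch of that check (not the actual compaction-picker code): amplification in percent is 100 * (sum of all newer files) / (size of the oldest file).

    #include <cstdint>
    #include <numeric>
    #include <vector>

    bool ExceedsSizeAmplification(
        const std::vector<uint64_t>& sizes_newest_first,
        uint64_t max_size_amplification_ratio /* percent */) {
      if (sizes_newest_first.size() < 2) return false;
      uint64_t oldest = sizes_newest_first.back();
      uint64_t newer = std::accumulate(sizes_newest_first.begin(),
                                       sizes_newest_first.end() - 1,
                                       uint64_t{0});
      // Example: newer = 10, oldest = 100 -> 10% amplification, i.e. 110 bytes
      // of physical storage for 100 bytes of user data.
      return newer * 100 > max_size_amplification_ratio * oldest;
    }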

Test Plan: Unit test DBTest.UniversalCompactionSpaceAmplification added.

Reviewers: haobo, emayanke, xjin

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D12825
2013-09-13 16:27:18 -07:00
Dhruba Borthakur 1186192ed1 Replace include/leveldb with include/rocksdb.
Summary: Replace include/leveldb with include/rocksdb.

Test Plan:
make clean; make check
make clean; make release

Differential Revision: https://reviews.facebook.net/D12489
2013-08-23 10:51:00 -07:00
Jim Paton 74781a0c49 Add three new MemTableRep's
Summary:
This patch adds three new MemTableRep's: UnsortedRep, PrefixHashRep, and VectorRep.

UnsortedRep stores keys in an std::unordered_map of std::sets. When an iterator is requested, it dumps the keys into an std::set and iterates over that.

VectorRep stores keys in an std::vector. When an iterator is requested, it creates a copy of the vector and sorts it using std::sort. The iterator accesses that new vector.

PrefixHashRep stores keys in an unordered_map mapping prefixes to ordered sets.

I also added one API change. I added a function MemTableRep::MarkImmutable. This function is called when the rep is added to the immutable list. It doesn't do anything yet, but it seems like that could be useful. In particular, for the vectorrep, it means we could elide the extra copy and just sort in place. The only reason I haven't done that yet is because the use of the ArenaAllocator complicates things (I can elaborate on this if needed).

Test Plan:
make -j32 check
./db_stress --memtablerep=vector
./db_stress --memtablerep=unsorted
./db_stress --memtablerep=prefixhash --prefix_size=10

Reviewers: dhruba, haobo, emayanke

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D12117
2013-08-22 23:10:02 -07:00
Xing Jin af732c7ba8 Add universal compaction to db_stress nightly build
Summary:
Most of the code change in this diff is cleanup/rewrite. The logic changes include:

(1) add universal compaction to db_crashtest2.py
(2) randomly set --test_batches_snapshots to be 0 or 1 in db_crashtest2.py. Old codes always use 1.
(3) use a different tmp directory as the db directory in different runs. I saw some intermittent errors in my local tests; using a different tmp directory seems to solve the issue.

Test Plan: Have run "make crashtest" for multiple times. Also run "make all check"

Reviewers: emayanke, dhruba, haobo

Reviewed By: emayanke

Differential Revision: https://reviews.facebook.net/D12369
2013-08-20 17:37:49 -07:00
Deon Nicholas b87dcae1a3 Made merge_oprator a shared_ptr; and added TTL unit tests
Test Plan:
- make all check;
- make release;
- make stringappend_test; ./stringappend_test

Reviewers: haobo, emayanke

Reviewed By: haobo

CC: leveldb, kailiu

Differential Revision: https://reviews.facebook.net/D12381
2013-08-20 13:35:28 -07:00
Mayank Agarwal 8a3547d38e API for getting archived log files
Summary: Also expanded class LogFile to have startSequence and FileSize and exposed it publicly

Test Plan: make all check

Reviewers: dhruba, haobo

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D12087
2013-08-19 13:37:04 -07:00
Deon Nicholas ad48c3c262 Benchmarking for Merge Operator
Summary:
Updated db_bench and utilities/merge_operators.h to allow for dynamic benchmarking
of merge operators in db_bench. Added a new test (--benchmarks=mergerandom), which performs
a bunch of random Merge() operations over random keys. Also added a "--merge_operator=" flag
so that the tester can easily benchmark different merge operators. Currently supports
the PutOperator and UInt64Add operator. Support for stringappend or list append may come later.

Test Plan:
	1. make db_bench
	2. Test the PutOperator (simulating Put) as follows:
./db_bench --benchmarks=fillrandom,readrandom,updaterandom,readrandom,mergerandom,readrandom --merge_operator=put
--threads=2

3. Test the UInt64AddOperator (simulating numeric addition) similarly:
./db_bench --value_size=8 --benchmarks=fillrandom,readrandom,updaterandom,readrandom,mergerandom,readrandom
--merge_operator=uint64add --threads=2

Reviewers: haobo, dhruba, zshao, MarkCallaghan

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D11535
2013-08-15 17:13:07 -07:00
Tyler Harter 85d83a150b Update crashtests to match D12267
Summary: I changed the db_stress configs, but forgot to update the scripts using the old configs.

Test Plan: 'make blackbox_crash_test' and 'make whitebox_crash_test' start running normally now (I haven't run them til the end, though).

Reviewers: vamsi

Reviewed By: vamsi

CC: leveldb, dhruba

Differential Revision: https://reviews.facebook.net/D12303
2013-08-15 10:14:32 -07:00
Xing Jin 0a5afd1afc Minor fix to current codes
Summary:
Minor fixes to the current code, including coding style, output format, and
comments. No major logic change. There are only 2 real changes; please see my inline comments.

Test Plan: make all check

Reviewers: haobo, dhruba, emayanke

Differential Revision: https://reviews.facebook.net/D12297
2013-08-14 23:03:57 -07:00
Tyler Harter 7612d496ff Add prefix scans to db_stress (and bug fix in prefix scan)
Summary: Added support for prefix scans.

Test Plan: ./db_stress --max_key=4096 --ops_per_thread=10000

Reviewers: dhruba, vamsi

Reviewed By: vamsi

CC: leveldb

Differential Revision: https://reviews.facebook.net/D12267
2013-08-14 16:58:36 -07:00
Dhruba Borthakur f3967a5132 Merge remote-tracking branch 'origin' into performance 2013-08-12 09:58:50 -07:00
Haobo Xu 58a0ae06dc [RocksDB] Improve sst_dump to take user key range
Summary: The ability to dump internal keys associated with certain user keys, directly from sst files, is very useful for diagnosis. Will incorporate it directly into ldb later.

Test Plan: run it

Reviewers: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D12075
2013-08-08 15:31:12 -07:00
Dhruba Borthakur f5fa26b6a9 Merge branch 'performance' of github.com:facebook/rocksdb into performance
Conflicts:
	db/builder.cc
	db/db_impl.cc
	db/version_set.cc
	include/leveldb/statistics.h
2013-08-07 11:58:06 -07:00
Mayank Agarwal 1d7b4765c3 Expose base db object from ttl wrapper
Summary: rocksdb replication will need this when writing value+TS from master to slave 'as is'

Test Plan: make

Reviewers: dhruba, vamsi, haobo

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D11919
2013-08-05 18:44:14 -07:00
Dhruba Borthakur 711a30cb30 Merge branch 'master' into performance
Conflicts:
	include/leveldb/options.h
	include/leveldb/statistics.h
	util/options.cc
2013-08-02 10:22:08 -07:00
Mayank Agarwal 59d0b02f8b Expand KeyMayExist to return the proper value if it can be found in memory and also check block_cache
Summary: Removed KeyMayExistImpl because KeyMayExist demands Get-like semantics now. Removed no_io from memtable and imm because we need the proper value now and shouldn't just stop when we see a Merge in the memtable. Added checks to block_cache. Updated documentation and unit tests.
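
A hedged usage sketch of the expanded contract (KeyMayExist and Get are the real APIs; the wrapper function is hypothetical):

    #include <string>
    #include "rocksdb/db.h"

    bool FastLookup(rocksdb::DB* db, const rocksdb::Slice& key,
                    std::string* value) {
      bool value_found = false;
      if (!db->KeyMayExist(rocksdb::ReadOptions(), key, value, &value_found)) {
        return false;  // bloom filters guarantee the key is absent
      }
      if (value_found) {
        return true;   // value came from memtables/block cache, no extra I/O
      }
      // "May exist" but the value is not in memory: fall back to a full Get.
      return db->Get(rocksdb::ReadOptions(), key, value).ok();
    }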

Test Plan: make all check;db_stress for 1 hour

Reviewers: dhruba, haobo

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D11853
2013-08-01 09:07:46 -07:00
Mayank Agarwal f3baeecd44 Adding filter_deletes to crash_tests run in jenkins
Summary: The filter_deletes option introduced in db_stress makes it drop Deletes on a key if KeyMayExist(key) returns false for that key. The code change was simple and tested, so not wasting reviewers' time.

Test Plan: make crash_test; python tools/db_crashtest[1|2].py

CC: dhruba, vamsi

Differential Revision: https://reviews.facebook.net/D11769
2013-07-23 13:49:16 -07:00
Mayank Agarwal bf66c10b13 Use KeyMayExist for WriteBatch-Deletes
Summary:
Introduced KeyMayExist checking during writebatch-delete and removed from Outer Delete API because it uses writebatch-delete.
Added code to skip getting Table from disk if not already present in table_cache.
Some renaming of variables.
Introduced KeyMayExistImpl which allows checking since specified sequence number in GetImpl useful to check partially written writebatch.
Changed KeyMayExist to not be pure virtual and provided a default implementation.
Expanded unit-tests in db_test to check appropriately.
Ran db_stress for 1 hour with ./db_stress --max_key=100000 --ops_per_thread=10000000 --delpercent=50 --filter_deletes=1 --statistics=1.

Test Plan: db_stress;make check

Reviewers: dhruba, haobo

Reviewed By: dhruba

CC: leveldb, xjin

Differential Revision: https://reviews.facebook.net/D11745
2013-07-23 13:36:50 -07:00
Dhruba Borthakur 4a745a5666 Merge branch 'master' into performance
Conflicts:
	db/version_set.cc
	include/leveldb/options.h
	util/options.cc
2013-07-17 15:05:57 -07:00
Mayank Agarwal 2a986919d6 Make rocksdb-deletes faster using bloom filter
Summary:
Wrote a new function in db_impl.cc, CheckKeyMayExist, that calls Get with a new parameter turned on which makes Get return false only if bloom filters can guarantee that the key is not in the database. Delete calls this function, and if the option deletes_use_filter is turned on and CheckKeyMayExist returns false, the delete will be dropped, saving:
1. Put of delete type
2. Space in the db, and
3. Compaction time

Test Plan:
make all check;
will run db_stress and db_bench and enhance unit-test once the basic design gets approved

Reviewers: dhruba, haobo, vamsi

Reviewed By: haobo

CC: leveldb

Differential Revision: https://reviews.facebook.net/D11607
2013-07-11 12:11:11 -07:00
Mayank Agarwal 821889e207 Print complete statistics in db_stress
Summary: db_stress should also print complete statistics like db_bench. Needed this when I wanted to measure the number of delete-IOs dropped due to CheckKeyMayExist (to be introduced to the rocksdb codebase later) to make deletes in rocksdb faster.

Test Plan: make db_stress;./db_stress --max_key=100 --ops_per_thread=1000 --statistics=1

Reviewers: sheki, dhruba, vamsi, haobo

Reviewed By: dhruba

Differential Revision: https://reviews.facebook.net/D11655
2013-07-10 18:07:13 -07:00
Dhruba Borthakur 116ec527f2 Renamed 'hybrid_compaction' to be 'Universal Compaction'.
Summary:
All the universal compaction parameters are encapsulated in
a new file universal_compaction.h

Test Plan:
make check
2013-07-03 15:47:53 -07:00
Dhruba Borthakur 47c4191fe8 Reduce write amplification by merging files in L0 back into L0
Summary:
There is a new option called hybrid_mode which, when switched on,
causes HBase-style compactions.  Files from L0 are
compacted back into L0. The meat of this compaction algorithm
is in PickCompactionHybrid().

All files reside in L0. That means all files have overlapping
keys. Each file has a time-bound, i.e. each file contains a
range of keys that were inserted around the same time. The
start-seqno and the end-seqno refers to the timeframe when
these keys were inserted.  Files that have contiguous seqno
are compacted together into a larger file. All files are
ordered from most recent to the oldest.

The current compaction algorithm starts to look for
candidate files starting from the most recent file. It continues to
add more files to the same compaction run as long as the
sum of the files chosen till now is smaller than the next
candidate file size. This logic needs to be debated
and validated.

The above logic should reduce write amplification to a
large extent... will publish numbers shortly.

Test Plan: dbstress runs for 6 hours with no data corruption (tested so far).

Differential Revision: https://reviews.facebook.net/D11289
2013-06-30 20:07:04 -07:00
Haobo Xu 0f78fad9f5 [RocksDB] add back --mmap_read options to crashtest
Summary: As title, now that db_stress supports --mmap_read properly

Test Plan: make crash_test

Reviewers: vamsi, emayanke, dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D11391
2013-06-19 16:15:59 -07:00
Haobo Xu 96be2c4ee0 [RocksDB] Add mmap_read option for db_stress
Summary: as title, also removed an incorrect assertion

Test Plan: make check; db_stress --mmap_read=1; db_stress --mmap_read=0

Reviewers: dhruba, emayanke

CC: leveldb

Differential Revision: https://reviews.facebook.net/D11367
2013-06-19 10:28:32 -07:00
Dhruba Borthakur 836534debd Enhance dbstress to allow specifying compaction trigger for L0.
Summary:
Rocksdb allows specifying the number of files in L0 that triggers
compactions. Expose this api as a command line parameter for
running db_stress.

Test Plan: Run test

Reviewers: sheki, emayanke

Reviewed By: emayanke

CC: leveldb

Differential Revision: https://reviews.facebook.net/D11343
2013-06-17 14:15:09 -07:00
Haobo Xu 0c2a2dd5e8 [RocksDB] Fix build. Removed deprecated option --mmap_read from db_crashtest
Summary: As title

Test Plan: db_crashtest

Reviewers: vamsi, emayanke

CC: leveldb

Differential Revision: https://reviews.facebook.net/D11271
2013-06-13 13:48:35 -07:00
Haobo Xu bdf1085944 [RocksDB] cleanup EnvOptions
Summary:
This diff simplifies EnvOptions by treating it as POD, similar to Options.
- virtual functions are removed and member fields are accessed directly.
- StorageOptions is removed.
- Options.allow_readahead and Options.allow_readahead_compactions are deprecated.
- Unused global variables are removed: useOsBuffer, useFsReadAhead, useMmapRead, useMmapWrite

Test Plan: make check; db_stress

Reviewers: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D11175
2013-06-12 11:17:19 -07:00
Mayank Agarwal 7a6bd8e975 Modifying options to db_stress when it is run with db_crashtest
Summary: These extra options caught some bugs. Will be run via Jenkins now with the crash_test

Test Plan: ./make crashtest

Reviewers: dhruba, vamsi

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D11151
2013-06-09 09:58:46 -07:00
Vamsi Ponnekanti 3bb9449906 [Fix whilebox crash test failure]
Summary:
I think the check for "error" that I added had caused
a false alarm. Fixed that.

Test Plan:
Revert Plan: OK

Task ID: #

Reviewers: emayanke, dhruba

Reviewed By: emayanke

Differential Revision: https://reviews.facebook.net/D11139
2013-06-07 11:34:46 -07:00
Vamsi Ponnekanti 5cf7a00bda [Make most of the changes suggested by Aaron]
Summary: $title

Test Plan:
Revert Plan: OK

Task ID: #

Reviewers: emayanke, akushner

Reviewed By: akushner

Differential Revision: https://reviews.facebook.net/D10923
2013-06-06 17:31:45 -07:00
Haobo Xu c2e2460f8a [RocksDB] Expose DBStatistics
Summary: Make Statistics usable by client

Test Plan: make check; db_bench

Reviewers: dhruba

Reviewed By: dhruba

Differential Revision: https://reviews.facebook.net/D10899
2013-05-23 11:49:38 -07:00
Vamsi Ponnekanti 760dd4750f [Kill randomly at various points in source code for testing]
Summary:
This is the initial version. A few ways in which this could
be extended in the future are:
(a) Killing from more places in source code
(b) Hashing stack and using that hash in determining whether to crash.
    This is to avoid crashing more often at source lines that are executed
    more often.
(c) Raising exceptions or returning errors instead of killing

Test Plan:
This whole thing is for testing.

Here is part of output:

python2.7 tools/db_crashtest2.py -d 600
Running db_stress

db_stress retncode -15 output LevelDB version     : 1.5
Number of threads   : 32
Ops per thread      : 10000000
Read percentage     : 50
Write-buffer-size   : 4194304
Delete percentage   : 30
Max key             : 1000
Ratio #ops/#keys    : 320000
Num times DB reopens: 0
Batches/snapshots   : 1
Purge redundant %   : 50
Num keys per lock   : 4
Compression         : snappy
------------------------------------------------
No lock creation because test_batches_snapshots set
2013/04/26-17:55:17  Starting database operations
Created bg thread 0x7fc1f07ff700
... finished 60000 ops
Running db_stress

db_stress retncode -15 output LevelDB version     : 1.5
Number of threads   : 32
Ops per thread      : 10000000
Read percentage     : 50
Write-buffer-size   : 4194304
Delete percentage   : 30
Max key             : 1000
Ratio #ops/#keys    : 320000
Num times DB reopens: 0
Batches/snapshots   : 1
Purge redundant %   : 50
Num keys per lock   : 4
Compression         : snappy
------------------------------------------------
Created bg thread 0x7ff0137ff700
No lock creation because test_batches_snapshots set
2013/04/26-17:56:15  Starting database operations
... finished 90000 ops

Revert Plan: OK

Task ID: #2252691

Reviewers: dhruba, emayanke

Reviewed By: emayanke

CC: leveldb, haobo

Differential Revision: https://reviews.facebook.net/D10581
2013-05-21 18:21:49 -07:00
Mayank Agarwal 3827403c51 Check to db_stress to not allow disable_wal and reopens set together
Summary: db can't reopen safely with disable_wal set!

Test Plan: make db_stress; run db_stress with disable_wal and reopens set and see error

Reviewers: dhruba, vamsi

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D10857
2013-05-21 11:49:29 -07:00
Mayank Agarwal 15ccd10c7f A nit to db_stress to terminate generated value at proper length
Summary: It will help while debugging if the generated value is truncated at the proper length.

Test Plan: make db_stress; ./db_stress --max_key=10000 --db=/tmp/mcr --threads=1 --ops_per_thread=10000

Reviewers: dhruba, vamsi

Reviewed By: vamsi

Differential Revision: https://reviews.facebook.net/D10845
2013-05-20 18:13:32 -07:00
Mayank Agarwal 8a48410f09 Enhance the ldb tool to support ttl databases
Summary: ldb works with raw data from the database and needs to be aware of a ttl-database to work with it meaningfully. The '-ttl' option now tells it that. Also added onto the ldb_test.py test. This option may be specified along with put, get, scan or dump. There is no support for providing a ttl value; it uses the default (forever) because there is no use-case for this currently.

Test Plan: make ldb_test; python tools/ldb_test.py

Reviewers: dhruba, sheki, haobo, vamsi

Reviewed By: sheki

CC: leveldb

Differential Revision: https://reviews.facebook.net/D10797
2013-05-15 12:10:00 -07:00
Mayank Agarwal d786b25e2d Timestamp and TTL Wrapper for rocksdb
Summary:
When opened with the DBTimestamp::Open call, timestamps are prepended to and stripped from the value during subsequent Put and Get calls respectively. The timestamp is used, in Get and in a custom compaction filter, to discard values which have exceeded their TTL, which is specified during Open.
Have made a temporary change to the Makefile to let us test with the temporary file TestTime.cc. Have also changed the private members of db_impl.h to protected to let them be inherited by the new class DBTimestamp.
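
A hedged sketch of the value wrapping described above (not the real wrapper code; a 4-byte timestamp is assumed):

    #include <cstdint>
    #include <cstring>
    #include <ctime>
    #include <string>

    // Put path: prepend the creation timestamp to the user's value.
    std::string WrapValue(const std::string& user_value) {
      uint32_t now = static_cast<uint32_t>(std::time(nullptr));
      std::string wrapped(reinterpret_cast<const char*>(&now), sizeof(now));
      wrapped += user_value;
      return wrapped;
    }

    // Get path: strip the timestamp and drop values older than the TTL.
    bool UnwrapValue(const std::string& stored, int32_t ttl,
                     std::string* user_value) {
      if (stored.size() < sizeof(uint32_t)) return false;
      uint32_t ts;
      std::memcpy(&ts, stored.data(), sizeof(ts));
      if (ttl > 0 &&
          std::time(nullptr) > static_cast<std::time_t>(ts) + ttl) {
        return false;  // exceeded its TTL; treat as not found
      }
      user_value->assign(stored.data() + sizeof(ts),
                         stored.size() - sizeof(ts));
      return true;
    }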

Test Plan: make db_timestamp; TestTime.cc(will not check it in) shows how to use the apis currently, but I will write unit-tests shortly

Reviewers: dhruba, vamsi, haobo, sheki, heyongqiang, vkrest

Reviewed By: vamsi

CC: zshao, xjin, vkrest, MarkCallaghan

Differential Revision: https://reviews.facebook.net/D10311
2013-05-02 16:34:42 -07:00
Haobo Xu eb6d139666 [RocksDB] Move table.h to table/
Summary:
- don't see a point exposing table.h to the public.
- fixed make clean to also remove *.d files.

Test Plan: make check; db_stress

Reviewers: dhruba, heyongqiang

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D10479
2013-04-22 16:07:56 -07:00
Abhishek Kona dae7379050 [RocksDB] Expose LDB functioanality as a library call - clients can build their own LDB binary with additional options
Summary: Primarily a refactor. Introduced an LDBTool interface into which customers can plug their options; this lets them create their own version of the ldb tool.

Test Plan: made ldb tool and tried it.

Reviewers: dhruba, heyongqiang

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D10191
2013-04-11 20:21:49 -07:00
Mayank Agarwal f51b375062 Printing the options that db_crashtest.py is run with
Summary: To know which options the crashtest was run with. Also changed print to sys.stdout.write which is more standard.

Test Plan: python tools/db_crashtest.py

Reviewers: vamsi, akushner, dhruba

Reviewed By: akushner

Differential Revision: https://reviews.facebook.net/D10119
2013-04-10 14:03:10 -07:00
Mayank Agarwal faa32a72a6 Invoke crash test from the Makefile
Summary: make crash_test will now invoke the crash_test. Also some cleanup in the db_crashtest.py file

Test Plan: make crash_test

Reviewers: akushner, vamsi, sheki, dhruba

Reviewed By: vamsi

Differential Revision: https://reviews.facebook.net/D9987
2013-04-08 18:11:11 -07:00
Mayank Agarwal 9b3134f5ca Make provision for db_stress to work with a pre-existing dir
Summary: The crash_test depends on db_stress to work with pre-existing dir

Test Plan: make db_stress; Run db_stress with 'destroy_db_initially=0'

Reviewers: vamsi, dhruba

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D10041
2013-04-08 13:12:27 -07:00
Mayank Agarwal 26f68d3939 db_stress #reopens should be less than ops_per_thread
Summary: For sanity w.r.t. the way we split up the reopens equally among the ops/thread

Test Plan: make db_stress; db_stress --ops_per_thread=10 --reopens=10 => error

Reviewers: vamsi, dhruba

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D10023
2013-04-08 12:09:17 -07:00
Vamsi Ponnekanti 2b9a360c8b [Getting warning while running db_crashtest]
Summary:
When I run db_crashtest, I am seeing a lot of warnings that say db_stress completed
before it was killed. To fix that, I made ops per thread a very large value so that it keeps
running until it is killed.

I also set #reopens to 0. Since we are killing the process anyway, the 'simulated crash'
that happens during reopen may not add additional value.

I usually see 10-25K ops happening before the kill. So I increased max_key from 100 to
1000 so that we use more distinct keys.

Test Plan:
Ran a few times.

Revert Plan: OK

Task ID: #

Reviewers: emayanke

Reviewed By: emayanke

CC: leveldb

Differential Revision: https://reviews.facebook.net/D9909
2013-04-04 00:17:05 -07:00
Mayank Agarwal e937d47180 Python script to periodically run and kill the db_stress test
Summary: The script runs and kills the stress test periodically. Default values have been used in the script now. Should I make this a part of the Makefile or automated rocksdb build? The values can be easily changed in the script right now, but should I add some support for variable values or input to the script? I believe the script achieves its objective of unsafe crashes and reopening to expect sanity in the database.

Test Plan: python tools/db_crashtest.py

Reviewers: dhruba, vamsi, MarkCallaghan

Reviewed By: vamsi

CC: leveldb

Differential Revision: https://reviews.facebook.net/D9369
2013-04-01 12:18:46 -07:00
Abhishek Kona 63f216ee0a memory manage statistics
Summary:
Earlier, the Statistics object was a raw pointer. This meant the user had to clean up
the Statistics object after creating the database. In most use cases the database is created in a function and the statistics pointer goes out of scope; hence the statistics object would never be deleted.
Now using a shared_ptr to manage this.

Want this in before the next release.
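
A hedged sketch of the resulting usage (using today's rocksdb:: spelling; at the time of this commit the namespace was still leveldb):

    #include "rocksdb/options.h"
    #include "rocksdb/statistics.h"

    rocksdb::Options MakeOptionsWithStats() {
      rocksdb::Options options;
      // The shared_ptr keeps the Statistics object alive for as long as either
      // the caller or the DB holds it -- no manual delete needed.
      options.statistics = rocksdb::CreateDBStatistics();
      return options;
    }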

Test Plan: make all check.

Reviewers: dhruba, emayanke

Reviewed By: emayanke

CC: leveldb

Differential Revision: https://reviews.facebook.net/D9735
2013-03-27 11:27:39 -07:00
Simon Marlow a8bf8fe504 Integrate the manifest_dump command with ldb
Summary:
Syntax:

   manifest_dump [--verbose] --num=<manifest_num>

e.g.

$ ./ldb --db=/home/smarlow/tmp/testdb manifest_dump --num=12
manifest_file_number 13 next_file_number 14 last_sequence 3 log_number
11  prev_log_number 0
--- level 0 --- version# 0 ---
 6:116['a1' @ 1 : 1 .. 'a1' @ 1 : 1]
 10:130['a3' @ 2 : 1 .. 'a4' @ 3 : 1]
--- level 1 --- version# 0 ---
--- level 2 --- version# 0 ---
--- level 3 --- version# 0 ---
--- level 4 --- version# 0 ---
--- level 5 --- version# 0 ---
--- level 6 --- version# 0 ---

Test Plan: - Tested on an example DB (see output in summary)

Reviewers: sheki, dhruba

Reviewed By: sheki

CC: leveldb, heyongqiang

Differential Revision: https://reviews.facebook.net/D9609
2013-03-22 09:17:30 -07:00
Abhishek Kona 91660e048f [Rocksdb] codemod NULL to nullptr in tools/*.cc
Summary: Simple sed command to replace NULL in the tools directory. It was missed by the previous codemod.

Test Plan: it compiles

Reviewers: emayanke

Reviewed By: emayanke

CC: leveldb

Differential Revision: https://reviews.facebook.net/D9621
2013-03-21 10:45:57 -07:00
Dhruba Borthakur ad96563b79 Ability to configure bufferedio-reads, filesystem-readaheads and mmap-read-write per database.
Summary:
This patch allows an application to specify whether to use bufferedio,
reads-via-mmaps and writes-via-mmaps per database. Earlier, there
was a global static variable that was used to configure this functionality.

The default setting remains the same (and is backward compatible):
 1. use bufferedio
 2. do not use mmaps for reads
 3. use mmap for writes
 4. use readaheads for reads needed for compaction

I also added a parameter to db_bench to be able to explicitly specify
whether to do readaheads for compactions or not.

Test Plan: make check

Reviewers: sheki, heyongqiang, MarkCallaghan

Reviewed By: sheki

CC: leveldb

Differential Revision: https://reviews.facebook.net/D9429
2013-03-20 23:14:03 -07:00
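A rough sketch of the per-database knobs this exposes; the field names below follow the Options struct in later public rocksdb releases (allow_os_buffer was eventually replaced by direct-I/O options), so treat them as an approximation of the interface introduced here:

  #include "rocksdb/options.h"

  rocksdb::Options MakeIoOptions() {
    rocksdb::Options options;
    options.allow_os_buffer = true;    // 1. keep using buffered I/O (default)
    options.allow_mmap_reads = false;  // 2. do not mmap files for reads (default)
    options.allow_mmap_writes = true;  // 3. write new files via mmap (default)
    // 4. readahead for compaction reads stays enabled by default; db_bench
    //    gains a flag to toggle it explicitly for experiments.
    return options;
  }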
Mayank Agarwal 4e581c6ab4 Fix ldb_test.py to hide garbage from std output
Summary: ldb_test.py did a lot of assertFalse checks and displayed all of the failure messages on standard output, making it hard to tell a successful run from a failed one. Many empty lines also used to be printed needlessly. Also added some "feel-good" progress lines to the tests.

Test Plan: python ldb_test.py

Reviewers: dhruba, sheki, dilipj, chip

Reviewed By: dilipj

CC: leveldb

Differential Revision: https://reviews.facebook.net/D9297
2013-03-12 21:07:07 -07:00
Vamsi Ponnekanti 8ade935971 [Report the #gets and #founds in db_stress]
Summary:
Also added some comments and fixed some bugs in
stats reporting. Now the stats seem to match what is expected.

Test Plan:
[nponnekanti@dev902 /data/users/nponnekanti/rocksdb] ./db_stress --test_batches_snapshots=1 --ops_per_thread=1000 --threads=1 --max_key=320
LevelDB version     : 1.5
Number of threads   : 1
Ops per thread      : 1000
Read percentage     : 10
Delete percentage   : 30
Max key             : 320
Ratio #ops/#keys    : 3
Num times DB reopens: 10
Batches/snapshots   : 1
Num keys per lock   : 4
Compression         : snappy
------------------------------------------------
No lock creation because test_batches_snapshots set
2013/03/04-15:58:56  Starting database operations
2013/03/04-15:58:56  Reopening database for the 1th time
2013/03/04-15:58:56  Reopening database for the 2th time
2013/03/04-15:58:56  Reopening database for the 3th time
2013/03/04-15:58:56  Reopening database for the 4th time
Created bg thread 0x7f4542bff700
2013/03/04-15:58:56  Reopening database for the 5th time
2013/03/04-15:58:56  Reopening database for the 6th time
2013/03/04-15:58:56  Reopening database for the 7th time
2013/03/04-15:58:57  Reopening database for the 8th time
2013/03/04-15:58:57  Reopening database for the 9th time
2013/03/04-15:58:57  Reopening database for the 10th time
2013/03/04-15:58:57  Reopening database for the 11th time
2013/03/04-15:58:57  Limited verification already done during gets
Stress Test : 1811.551 micros/op 552 ops/sec
            : Wrote 0.10 MB (0.05 MB/sec) (598% of 1011 ops)
            : Wrote 6050 times
            : Deleted 3050 times
            : 500/900 gets found the key
            : Got errors 0 times

[nponnekanti@dev902 /data/users/nponnekanti/rocksdb] ./db_stress --ops_per_thread=1000 --threads=1 --max_key=320
LevelDB version     : 1.5
Number of threads   : 1
Ops per thread      : 1000
Read percentage     : 10
Delete percentage   : 30
Max key             : 320
Ratio #ops/#keys    : 3
Num times DB reopens: 10
Batches/snapshots   : 0
Num keys per lock   : 4
Compression         : snappy
------------------------------------------------
Creating 80 locks
2013/03/04-15:58:17  Starting database operations
2013/03/04-15:58:17  Reopening database for the 1th time
2013/03/04-15:58:17  Reopening database for the 2th time
2013/03/04-15:58:17  Reopening database for the 3th time
2013/03/04-15:58:17  Reopening database for the 4th time
Created bg thread 0x7fc0f5bff700
2013/03/04-15:58:17  Reopening database for the 5th time
2013/03/04-15:58:17  Reopening database for the 6th time
2013/03/04-15:58:18  Reopening database for the 7th time
2013/03/04-15:58:18  Reopening database for the 8th time
2013/03/04-15:58:18  Reopening database for the 9th time
2013/03/04-15:58:18  Reopening database for the 10th time
2013/03/04-15:58:18  Reopening database for the 11th time
2013/03/04-15:58:18  Starting verification
Stress Test : 1836.258 micros/op 544 ops/sec
            : Wrote 0.01 MB (0.01 MB/sec) (59% of 1011 ops)
            : Wrote 605 times
            : Deleted 305 times
            : 50/90 gets found the key
            : Got errors 0 times
2013/03/04-15:58:18  Verification successful

Revert Plan: OK

Task ID: #

Reviewers: emayanke, dhruba

Reviewed By: emayanke

CC: leveldb

Differential Revision: https://reviews.facebook.net/D9081
2013-03-10 21:57:00 -07:00
amayank 3b6653b1f8 Make db_stress Not purge redundant keys on some opens
Summary: In light of the new option introduced by commit 806e264350, where the database has an option to compact before flushing to disk, we want the stress test to exercise both sides of the option. Made it change that option deterministically and configurably across reopens.

Test Plan: make db_stress; ./db_stress with some different options

Reviewers: dhruba, vamsi

Reviewed By: dhruba

CC: leveldb, sheki

Differential Revision: https://reviews.facebook.net/D9165
2013-03-08 04:55:07 -08:00
Abhishek Kona d68880a1b9 Do not allow Transaction Log Iterator to fall ahead when writer is writing the same file
Summary:
Store the last flushed seq no. in db_impl and check against it in the
transaction log iterator. Do not attempt to read ahead if we do not know
whether the data has been flushed completely.
Does not work if flush is disabled. Any ideas on fixing that?
* Minor change: iter->Next() is now called automatically the first time.

Test Plan:
Existing tests pass.
More ideas on testing this?
Planning to run some stress tests.

Reviewers: dhruba, heyongqiang

CC: leveldb

Differential Revision: https://reviews.facebook.net/D9087
2013-03-06 14:05:53 -08:00
Abhishek Kona a9866b721b Refactor statistics. Remove individual functions like incNumFileOpens
Summary:
Use only the counter mechanism. Do away with
incNumFileOpens, incNumFileClose, incNumFileErrors.
s/NULL/nullptr/g in db/table_cache.cc

Test Plan: make clean check

Reviewers: dhruba, heyongqiang, emayanke

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D8841
2013-02-25 13:58:34 -08:00
Vamsi Ponnekanti 465b9103f8 [Add a second kind of verification to db_stress]
Summary:
Currently the test tracks all writes in memory and
uses them for verification at the end. This has 4 problems:
(a) It needs a mutex for each write to ensure the in-memory update
and the leveldb update are done atomically. This slows down the
benchmark.
(b) Verification phase at the end is time consuming as well
(c) Does not test batch writes or snapshots
(d) We cannot kill the test and restart multiple times in a
loop because in-memory state will be lost.

I am adding a FLAGS_multi that does MultiGet/MultiPut/MultiDelete
instead of get/put/delete to get/put/delete a group of related
keys with the same value atomically. Every get retrieves the group
of keys and checks that their values are the same. This does not have
the above problems, but the downside is that it does less
validation than the other approach.

Test Plan:
This whole thing is a test! Here is a small run. I am doing a larger run now.

[nponnekanti@dev902 /data/users/nponnekanti/rocksdb] ./db_stress --ops_per_thread=10000 --multi=1 --ops_per_key=25
LevelDB version     : 1.5
Number of threads   : 32
Ops per thread      : 10000
Read percentage     : 10
Delete percentage   : 30
Max key             : 2147483648
Num times DB reopens: 10
Num keys per lock   : 4
Compression         : snappy
------------------------------------------------
Creating 536870912 locks
2013/02/20-16:59:32  Starting database operations
Created bg thread 0x7f9ebcfff700
2013/02/20-16:59:37  Reopening database for the 1th time
2013/02/20-16:59:46  Reopening database for the 2th time
2013/02/20-16:59:57  Reopening database for the 3th time
2013/02/20-17:00:11  Reopening database for the 4th time
2013/02/20-17:00:25  Reopening database for the 5th time
2013/02/20-17:00:36  Reopening database for the 6th time
2013/02/20-17:00:47  Reopening database for the 7th time
2013/02/20-17:00:59  Reopening database for the 8th time
2013/02/20-17:01:10  Reopening database for the 9th time
2013/02/20-17:01:20  Reopening database for the 10th time
2013/02/20-17:01:31  Reopening database for the 11th time
2013/02/20-17:01:31  Starting verification
Stress Test : 109.125 micros/op 22191 ops/sec
            : Wrote 0.00 MB (0.23 MB/sec) (59% of 32 ops)
            : Deleted 10 times
2013/02/20-17:01:31  Verification successful

Revert Plan: OK

Task ID: #

Reviewers: dhruba, emayanke

Reviewed By: emayanke

CC: leveldb

Differential Revision: https://reviews.facebook.net/D8733
2013-02-22 12:20:11 -08:00
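A minimal sketch of the group-write/group-check idea using WriteBatch and MultiGet from the public API; the real db_stress derives keys, values, and locking differently, so the key naming below is illustrative only:

  #include <string>
  #include <vector>
  #include "rocksdb/db.h"
  #include "rocksdb/write_batch.h"

  // Write a group of related keys with the same value in one atomic batch,
  // then read them back with MultiGet and confirm the values still agree.
  bool PutAndVerifyGroup(rocksdb::DB* db, const std::string& base,
                         const std::string& value, int group_size) {
    rocksdb::WriteBatch batch;
    std::vector<std::string> keys;
    for (int i = 0; i < group_size; ++i) {
      keys.push_back(std::to_string(i) + base);
      batch.Put(keys.back(), value);
    }
    if (!db->Write(rocksdb::WriteOptions(), &batch).ok()) return false;

    std::vector<rocksdb::Slice> slices(keys.begin(), keys.end());
    std::vector<std::string> values;
    db->MultiGet(rocksdb::ReadOptions(), slices, &values);
    for (const std::string& v : values) {
      if (v != value) return false;  // group members must never diverge
    }
    return true;
  }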
amayank 1052ea236f Exploring the rocksdb stress test
Summary:
Fixed a bug in the stress test where the correct size was not being
passed to GenerateValue. This bug had been there since the beginning, but assertions
were switched on in our code base only recently.
Added comments at the top detailing how the stress test works and how to
speed it up or slow it down after investigation.

Test Plan: make all check. ./db_stress

Reviewers: dhruba, asad

Reviewed By: dhruba

CC: vamsi, sheki, heyongqiang, zshao

Differential Revision: https://reviews.facebook.net/D8727
2013-02-21 11:27:28 -08:00
Abhishek Kona fe10200ddc Introduce histogram in statistics.h
Summary:
* Introduce a histogram in statistics.h
* A stop watch to measure time.
* Introduce two timers as a POC.
Replaced NULL with nullptr to fight some lint errors.
Should be useful for google.

Test Plan:
ran db_bench and check stats.
make all check

Reviewers: dhruba, heyongqiang

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D8637
2013-02-20 10:43:32 -08:00
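For illustration, reading back one of the new histograms after a workload might look like the following (field names follow the public HistogramData struct; the choice of DB_GET is just an example):

  #include <cstdio>
  #include <memory>
  #include "rocksdb/statistics.h"

  void DumpGetLatency(const std::shared_ptr<rocksdb::Statistics>& stats) {
    rocksdb::HistogramData hist;
    stats->histogramData(rocksdb::DB_GET, &hist);  // Get() latency histogram
    printf("get latency: median %.2f us, p99 %.2f us, avg %.2f us\n",
           hist.median, hist.percentile99, hist.average);
  }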
Dilip Antony Joseph 11ce6a060e Enhanced ldb to support data access commands
Summary: Added put/get/scan/batchput/delete/approxsize

Test Plan: Added pyunit script to test the newly added commands

Reviewers: chip, leveldb

Reviewed By: chip

CC: zshao, emayanke

Differential Revision: https://reviews.facebook.net/D7947
2013-01-28 11:38:26 -08:00
Chip Turner 0b83a83191 Fix poor error on num_levels mismatch and few other minor improvements
Summary:
Previously, if you opened a db with num_levels set lower than what
the database was created with, you received the unhelpful message "Corruption:
VersionEdit: new-file entry."  Now you get a more verbose message
describing the issue.

Also, fix handling of compression_levels (both the run-over-the-end
issue and the memory management of it).

Lastly, unique_ptr'ify a couple of minor calls.

Test Plan: make check

Reviewers: dhruba

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D8151
2013-01-25 15:37:26 -08:00
Chip Turner 2fdf91a4f8 Fix a number of object lifetime/ownership issues
Summary:
Replace manual memory management with std::unique_ptr in a
number of places; not exhaustive, but this fixes a few leaks with file
handles as well as clarifies semantics of the ownership of file handles
with log classes.

Test Plan: db_stress, make check

Reviewers: dhruba

Reviewed By: dhruba

CC: zshao, leveldb, heyongqiang

Differential Revision: https://reviews.facebook.net/D8043
2013-01-23 16:54:11 -08:00
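As a hedged sketch of the ownership pattern this moves to, using the unique_ptr-based NewWritableFile signature from the rocksdb Env API (the function name OpenLogFile is made up for the example):

  #include <memory>
  #include <string>
  #include "rocksdb/env.h"

  rocksdb::Status OpenLogFile(rocksdb::Env* env, const std::string& fname,
                              std::unique_ptr<rocksdb::WritableFile>* out) {
    std::unique_ptr<rocksdb::WritableFile> file;
    rocksdb::Status s =
        env->NewWritableFile(fname, &file, rocksdb::EnvOptions());
    if (s.ok()) {
      *out = std::move(file);  // caller owns the handle; closed exactly once
    }
    return s;  // on error, any partially opened handle is released automatically
  }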
Kosie van der Merwe 3c3df7402f Fixed issues Valgrind found.
Summary:
Found issues with `db_test` and `db_stress` when running valgrind.

`DBImpl` had an issue where, if a compaction failed, the uninitialised file size of an output file was used. This manifested as the final call to output to the log in `DoCompactionWork()` branching on uninitialized memory (all the way down in printf's innards).

Test Plan:
Ran `valgrind --track-origins=yes ./db_test` and `valgrind ./db_stress` to see if the issues disappeared.

Ran `make check` to see if there were no regressions.

Reviewers: vamsi, dhruba

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D8001
2013-01-17 10:04:45 -08:00
Abhishek Kona 7d5a4383bb rollover manifest file.
Summary:
Check in LogAndApply if the file size is more than the limit set in
Options.
Things to consider: will this be expensive?

Test Plan: make all check. Inputs on a new unit test?

Reviewers: dhruba

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D7701
2013-01-16 12:09:44 -08:00
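A small sketch of how an application might cap the descriptor file, assuming the option landed as max_manifest_file_size in the public Options (the 4 MB figure is arbitrary):

  #include "rocksdb/options.h"

  rocksdb::Options ManifestRolloverOptions() {
    rocksdb::Options options;
    // LogAndApply checks this limit and rolls over to a new MANIFEST file
    // once the current one grows past it.
    options.max_manifest_file_size = 4 * 1024 * 1024;
    return options;
  }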
Zheng Shao 0f762ac573 ldb: Add command "ldb query" to support random read from the database
Summary: The queries will come from stdin.  One key per line.  The output will be in stdout, in the format of "<key> ==> <value>" if found, or "<key>" if not found.  "--hex" uses HEX-encoded keys and values in both input and output.

Test Plan: ldb query --db=leveldb_db --hex

Reviewers: dhruba, emayanke, sheki

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D7617
2012-12-26 20:37:42 -08:00
Zheng Shao 04832dbc02 sst_dump: Fix incorrect cmd args parsing
Summary: The current parsing logic ignores any additional chars after the arg.

Test Plan: "./sst_dump --verify_checksumAAA" now outputs error.

Reviewers: dhruba, sheki, emayanke

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D7611
2012-12-26 10:57:04 -08:00
Abhishek Kona 396c3aa08e Create a long running test to check GetUpdatesSince.
Summary:
Create an executable with
* A thread to do Puts
* A thread to use GetUpdatesSince. Check if we miss any sequence
  numbers.

Test Plan: It runs.

Reviewers: dhruba

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D7383
2012-12-21 14:10:06 -08:00
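A hedged sketch of the checker side, using DB::GetUpdatesSince and TransactionLogIterator from the public API; error handling is elided, and the contiguity rule (the next batch starts at the previous sequence plus that batch's record count) is an assumption of the example:

  #include <memory>
  #include "rocksdb/db.h"
  #include "rocksdb/transaction_log.h"

  bool SequenceNumbersAreContiguous(rocksdb::DB* db,
                                    rocksdb::SequenceNumber from_seq) {
    std::unique_ptr<rocksdb::TransactionLogIterator> iter;
    if (!db->GetUpdatesSince(from_seq, &iter).ok()) return false;

    rocksdb::SequenceNumber expected = from_seq;
    for (; iter->Valid(); iter->Next()) {
      rocksdb::BatchResult batch = iter->GetBatch();
      if (batch.sequence != expected) return false;  // missed sequence numbers
      expected = batch.sequence + batch.writeBatchPtr->Count();
    }
    return true;
  }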
Zheng Shao 127ee2e677 manifest_dump: Fix the help message and make it consistent
Summary: ldb uses --output_hex so make manifest_dump do the same thing.

Test Plan:
[zshao@dev485 ~/git/rocksdb] ./manifest_dump --output_hex --file=/data/users/zshao/test_leveldb/MANIFEST-000034
manifest_file_number 42 next_file_number 43 last_sequence 2311567 log_number 36  prev_log_number 0
--- level 0 --- version# 0 ---
--- level 1 --- version# 0 ---
--- level 2 --- version# 0 ---
--- level 3 --- version# 0 ---
 5:27788699['0000027F4FBE0000' @ 1 : 1 .. '11CE749602C90000' @ 160642 : 1]
 7:27785313['11CE773DA7E00000' @ 160643 : 1 .. '23A4C63EC55D0000' @ 321094 : 1]
 9:27784288['23A4D581FCD30000' @ 321095 : 1 .. '3576291D12D00000' @ 481428 : 1]
 38:64378271['35762BF0E0CE0000' @ 481429 : 1 .. '5E987E0604700000' @ 852910 : 1]
 39:64379046['5E987EB0BDD50000' @ 852911 : 1 .. '87C954330E840000' @ 1224603 : 1]
 40:10169201['87C95507E49C0000' @ 1224604 : 1 .. '8E48DC0933B70000' @ 1283317 : 1]
 21:27798825['8E48DFB0D7CE0000' @ 1283318 : 1 .. 'A00675F8AD7E0000' @ 1443826 : 1]
 23:27793751['A006777536E30000' @ 1443827 : 1 .. 'B1D1787FE8670000' @ 1604553 : 1]
 25:27801659['B1D179289BB30000' @ 1604554 : 1 .. 'C396D3A69DCE0000' @ 1765012 : 1]
 27:27792661['C396DA1E03B10000' @ 1765013 : 1 .. 'D55C9974FCC10000' @ 1925513 : 1]
 29:27789095['D55C9B47CBC00000' @ 1925514 : 1 .. 'E71F67D11CCC0000' @ 2085789 : 1]
 31:27793145['E71F7A667E740000' @ 2085790 : 1 .. 'F8D4712EF3D90000' @ 2246454 : 1]
 41:11246031['F8D4715916A70000' @ 2246455 : 1 .. 'FFFFFCAE97DF0000' @ 2311567 : 1]
--- level 4 --- version# 0 ---
--- level 5 --- version# 0 ---
--- level 6 --- version# 0 ---

Reviewers: dhruba, sheki, emayanke

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D7575
2012-12-21 10:45:24 -08:00
Zheng Shao 01d57a33fe sst_dump: Add --output_hex option and output the same format as ldb dump
Summary: Now sst_dump has the same option --output_hex as "ldb dump" and also share the same output format.  So we can do "sst_dump ... | ldb load ..." for an experiment.

Test Plan:
[zshao@dev485 ~/git/rocksdb] ./sst_dump --file=/data/users/zshao/test_leveldb/000005.sst  --output_hex | head -n 2
0000027F4FBE00000101000000000000 ==> D901000000000000000057596F7520726563656976656420746F6461792773207370656369616C20676966742120436C69636B2041636365707420746F207669657720796F75722047696674206265666F72652069742064697361707065617273210000000000000000
000007F9C2D400000102000000000000 ==> D1010000000000000000544974277320676F6F6420746F206265204B696E67E280A6206F7220517565656E212048657265277320796F75722054756573646179204D79737465727920476966742066726F6D20436173746C6556696C6C65219B7B227A636F6465223A22633566306531633039663764222C227470223A22613275222C227A6B223A22663638663061343262666264303966383435666239626235366365396536643024246234506C345139382E734C4D33522169482D4F31315A64794C4B7A4F4653766D7863746534625F2A3968684E3433786521776C427636504A414355795F70222C227473223A313334343936313737357D0000000000000000

Reviewers: dhruba, sheki, emayanke

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D7587
2012-12-21 10:44:19 -08:00
Zheng Shao be9b862d47 ldb: add "ldb load" command
Summary: This command accepts key-value pairs from stdin in the same format as the "ldb dump" command.  This allows us to try out different compression algorithms/block sizes easily.

Test Plan: dump, load, dump, verify the data is the same.

Reviewers: dhruba

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D7443
2012-12-19 01:45:59 -08:00
Zheng Shao c28097538a manifest_dump: Add --hex=1 option
Summary: Without this option, manifest_dump does not print binary keys for files in a human-readable way.

Test Plan:
./manifest_dump --hex=1 --verbose=0 --file=/data/users/zshao/fdb_comparison/leveldb/fbobj.apprequest-0_0_original/MANIFEST-000002
manifest_file_number 589 next_file_number 590 last_sequence 2311567 log_number 543  prev_log_number 0
--- level 0 --- version# 0 ---
 532:1300357['0000455BABE20000' @ 2183973 : 1 .. 'FFFCA5D7ADE20000' @ 2184254 : 1]
 536:1308170['000198C75CE30000' @ 2203313 : 1 .. 'FFFCF94A79E30000' @ 2206463 : 1]
 542:1321644['0002931AA5E50000' @ 2267055 : 1 .. 'FFF77B31C5E50000' @ 2270754 : 1]
 544:1286390['000410A309E60000' @ 2278592 : 1 .. 'FFFE470A73E60000' @ 2289221 : 1]
 538:1298778['0006BCF4D8E30000' @ 2217050 : 1 .. 'FFFD77DAF7E30000' @ 2220489 : 1]
 540:1282353['00090D5356E40000' @ 2231156 : 1 .. 'FFFFF4625CE40000' @ 2231969 : 1]
--- level 1 --- version# 0 ---
 510:2112325['000007F9C2D40000' @ 1782099 : 1 .. '146F5B67B8D80000' @ 1905458 : 1]
 511:2121742['146F8A3023D60000' @ 1824388 : 1 .. '28BC8FBB9CD40000' @ 1777993 : 1]
 512:801631['28BCD396F1DE0000' @ 2080191 : 1 .. '3082DBE9ADDB0000' @ 1989927 : 1]

Reviewers: dhruba, sheki, emayanke

Reviewed By: dhruba

CC: leveldb

Differential Revision: https://reviews.facebook.net/D7425
2012-12-16 08:58:28 -08:00
Dhruba Borthakur 7889e09455 Enhance manifest_dump to print each individual edit.
Summary:
The manifest file contains a series of edits. If the verbose
option is switched on, then print each individual edit in the
manifest file. This helps in debugging.

Test Plan: make clean manifest_dump

Reviewers: emayanke, sheki

Reviewed By: sheki

CC: leveldb

Differential Revision: https://reviews.facebook.net/D6807
2012-11-19 12:04:35 -08:00
Abhishek Kona 30742e1692 LDB can read WAL.
Summary:
Add option to read WAL and print a summary for each record.
facebook task => #1885013

E.g. output:
./ldb dump_wal --walfile=/tmp/leveldbtest-5907/dbbench/026122.log --header
Sequence,Count,ByteSize
49981,1,100033
49981,1,100033
49982,1,100033
49981,1,100033
49982,1,100033
49983,1,100033
49981,1,100033
49982,1,100033
49983,1,100033
49984,1,100033
49981,1,100033
49982,1,100033

Test Plan:
Works when run:
./ldb read_wal --wal-file=/tmp/leveldbtest-5907/dbbench/000078.log --header

Reviewers: dhruba, heyongqiang

Reviewed By: dhruba

CC: emayanke, leveldb, zshao

Differential Revision: https://reviews.facebook.net/D6675
2012-11-19 12:04:34 -08:00
Dhruba Borthakur 62e7583f94 enhance dbstress to simulate hard crash
Summary:
db_stress has an option to reopen the database. Make it such that the
previous handle is not closed before we reopen; this simulates a
situation similar to a process crash.

Added a new API to DBImpl to remove the lock file.

Test Plan: run db_stress

Reviewers: emayanke

Reviewed By: emayanke

CC: leveldb

Differential Revision: https://reviews.facebook.net/D6777
2012-11-18 23:16:17 -08:00
Dhruba Borthakur c3392c9fe0 The db_stress test should also test multi-threaded compaction.
Summary:
Create more than one background compaction thread if specified.
This code piece is similar to what exists in db_bench.

Test Plan: make check

Differential Revision: https://reviews.facebook.net/D6753
2012-11-14 22:01:39 -08:00
Dhruba Borthakur 6c5a4d646a Merge branch 'master' into performance
Conflicts:
	db/db_impl.h
2012-11-14 21:39:52 -08:00
amayank e626261742 Introducing "database reopens" into the stress test. The database will reopen after a specified (configurable) number of iterations of each thread, when all threads will wait for the database to reopen.
Summary: FLAGS_reopen (configurable) specifies the number of times the database is to be reopened. FLAGS_ops_per_thread is divided into points based on that reopen field. At these points all threads come together to wait for the database to reopen. Each thread "votes" for the database to reopen and when all have voted, the database reopens.

Test Plan: make all;./db_stress

Reviewers: dhruba, MarkCallaghan, sheki, asad, heyongqiang

Reviewed By: dhruba

Differential Revision: https://reviews.facebook.net/D6627
2012-11-12 12:26:32 -08:00
heyongqiang c64796fd34 Fix test failure of reduce_num_levels
Summary:
I changed the reduce_num_levels logic to avoid the "compactRange()" call if the current number of levels in use (levels that contain files) is smaller than the new number of levels.
That change breaks the assert in reduce_levels_test.

Test Plan: run reduce_levels_test

Reviewers: dhruba, MarkCallaghan

Reviewed By: dhruba

CC: emayanke, sheki

Differential Revision: https://reviews.facebook.net/D6651
2012-11-12 12:05:38 -08:00
heyongqiang 20d18a89a3 disable size compaction in ldb reduce_levels and added compression and file size parameter to it
Summary:
Disable size compaction in ldb reduce_levels; this will avoid compactions other than the manual compaction.

Added --compression=none|snappy|zlib|bzip2 and --file_size= (per-file size) to the ldb reduce_levels command.

Test Plan: run ldb

Reviewers: dhruba, MarkCallaghan

Reviewed By: dhruba

CC: sheki, emayanke

Differential Revision: https://reviews.facebook.net/D6597
2012-11-09 10:14:47 -08:00
amayank 9e97bfdcde Introducing deletes for stress test
Summary: Stress test modified to do deletes and later verify them

Test Plan: running the test: db_stress

Reviewers: dhruba, heyongqiang, asad, sheki, MarkCallaghan

Reviewed By: dhruba

Differential Revision: https://reviews.facebook.net/D6567
2012-11-08 16:55:18 -08:00
Dhruba Borthakur 8143062edd Merge branch 'master' into performance
Conflicts:
	db/db_impl.cc
	db/version_set.cc
	util/options.cc
2012-11-07 15:11:37 -08:00
Dhruba Borthakur aa42c66814 Fix all warnings generated by -Wall option to the compiler.
Summary:
The default compilation process now uses "-Wall" to compile.
Fix all compilation errors generated by gcc.

Test Plan: make all check

Reviewers: heyongqiang, emayanke, sheki

Reviewed By: heyongqiang

CC: MarkCallaghan

Differential Revision: https://reviews.facebook.net/D6525
2012-11-06 14:07:31 -08:00
Dhruba Borthakur 5f91868cee Merge branch 'master' into performance
Conflicts:
	db/version_set.cc
	util/options.cc
2012-11-05 16:51:55 -08:00
heyongqiang d55c2ba305 Add a tool to change number of levels
Summary: as subject.

Test Plan: manually test it, will add a testcase

Reviewers: dhruba, MarkCallaghan

Differential Revision: https://reviews.facebook.net/D6345
2012-11-05 10:17:39 -08:00
Dhruba Borthakur 1ca0584345 This is the mega-patch multi-threaded compaction
published in https://reviews.facebook.net/D5997.

Summary:
This patch allows compaction to occur in multiple background threads
concurrently.

If a manual compaction is issued, the system falls back to a
single-compaction-thread model. This is done to ensure correctness
and simplicity of code. When the manual compaction is finished,
the system resumes its concurrent-compaction mode automatically.

The updates to the manifest are done via group-commit approach.

Test Plan: run db_bench
2012-10-19 14:00:53 -07:00
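A small sketch of how an application opts into concurrent compactions, assuming the knobs exposed as max_background_compactions and Env::SetBackgroundThreads in the public API (the count of 4 is arbitrary):

  #include "rocksdb/db.h"
  #include "rocksdb/env.h"

  rocksdb::Options ConcurrentCompactionOptions() {
    rocksdb::Options options;
    // Allow up to four compactions to run concurrently in the low-priority
    // background thread pool; manual compactions still serialize.
    options.max_background_compactions = 4;
    options.env->SetBackgroundThreads(4, rocksdb::Env::Priority::LOW);
    return options;
  }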
Dhruba Borthakur 5dc784c233 Fix compilation problem with db_stress when using C11 compiler.
Summary:

Test Plan:

Reviewers:

CC:

Task ID: #

Blame Rev:
2012-10-12 17:00:25 -07:00
Asad K Awan 24f7983b1f [tools] Add a tool to stress test concurrent writing to levelDB
Summary:
Created a tool that runs multiple threads that concurrently read and write to levelDB.
All writes to the DB are stored in an in-memory hashtable and verified at the end of the
test. All writes for a given key are serialized.

Test Plan:
 - Verified by writing only a few keys and logging all writes and verifying that values read and written are correct.
 - Verified correctness of value generator.
 - Ran with various parameters of number of keys, locks, and threads.

Reviewers: dhruba, MarkCallaghan, heyongqiang

Reviewed By: dhruba

Differential Revision: https://reviews.facebook.net/D5829
2012-10-10 12:12:55 -07:00
Mark Callaghan 98804f914f Fix compiler warnings and errors in ldb.c
Summary:
stdlib.h is needed for exit()
--readhead --> --readahead

Task ID: #

Blame Rev:

Test Plan:
compile

Revert Plan:

Database Impact:

Memcache Impact:

Other Notes:

EImportant:

- begin *PUBLIC* platform impact section -
Bugzilla: #
- end platform impact -
fix compiler warnings & errors

Reviewers: dhruba

Reviewed By: dhruba

CC: heyongqiang

Differential Revision: https://reviews.facebook.net/D5805
2012-10-03 06:46:59 -07:00
Abhishek Kona fec81318b0 Commandline tool to compact LevelDB databases.
Summary:
A simple CLI which calls DB->CompactRange().
It can take string keys as the range.

Test Plan:
Inserted data into a table.
Waited for a minute, then used the compact tool on it. File modification times
changed, so Compact did something to the files.

Existing unit tests work.

Reviewers: heyongqiang, dhruba

Reviewed By: dhruba

Differential Revision: https://reviews.facebook.net/D5697
2012-10-01 10:49:19 -07:00
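For reference, the underlying call the tool wraps looks roughly like this; the snippet uses the present-day signature with CompactRangeOptions, whereas the original tool used the earlier options-free form:

  #include "rocksdb/db.h"

  void CompactStringRange(rocksdb::DB* db) {
    // Compact the key range ["a", "z"); passing nullptr for either bound
    // compacts from the start or to the end of the keyspace.
    rocksdb::Slice begin("a");
    rocksdb::Slice end("z");
    db->CompactRange(rocksdb::CompactRangeOptions(), &begin, &end);
  }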
gjain 92368ab8a2 Add db_dump tool to dump DB keys
Summary:
Create a tool to iterate through keys and dump values. Current options
as follows:

db_dump --start=[START_KEY] --end=[END_KEY] --max_keys=[NUM] --stats
[PATH]

START_KEY: First key to start at
END_KEY: Key to end at (not inclusive)
NUM: Maximum number of keys to dump
PATH: Path to leveldb DB

The --stats command line argument prints out the DB stats before dumping
the keys.

Test Plan:
- Tested with invalid args
- Tested with invalid path
- Used empty DB
- Used filled DB
- Tried various permutations of command line options

Reviewers: dhruba, heyongqiang

Reviewed By: dhruba

Differential Revision: https://reviews.facebook.net/D5643
2012-09-27 09:53:58 -07:00
Dhruba Borthakur 407727b75f Fix compiler warnings. Use uint64_t instead of uint.
Summary: Fix compiler warnings. Use uint64_t instead of uint.

Test Plan: build using -Wall

Reviewers: heyongqiang

Reviewed By: heyongqiang

Differential Revision: https://reviews.facebook.net/D5355
2012-09-12 14:42:36 -07:00
Dhruba Borthakur fe93631678 Clean up compiler warnings generated by -Wall option.
Summary:
Clean up compiler warnings generated by -Wall option.
make clean all OPT=-Wall

This is a pre-requisite before making a new release.

Test Plan: compile and run unit tests

Reviewers: heyongqiang

Reviewed By: heyongqiang

Differential Revision: https://reviews.facebook.net/D5019
2012-08-29 14:24:51 -07:00
heyongqiang e675351ad7 number to read is not respected
Summary: as subject

Test Plan: sst_dump --command=scan --file=

Reviewers: dhruba

Differential Revision: https://reviews.facebook.net/D4887
2012-08-24 18:29:40 -07:00
heyongqiang 1de83cc2ac add more logs
Summary:
As subject (add more logs).

Also add a tool to read sst files:

./sst_reader --command=check --file=
./sst_reader --command=scan --file=

Test Plan:
db_test

run this command

Reviewers: dhruba

Reviewed By: dhruba

Differential Revision: https://reviews.facebook.net/D4881
2012-08-24 15:20:49 -07:00
Dhruba Borthakur f3ee54526f Utility to dump manifest contents.
Summary:
./manifest_dump --file=/tmp/dbbench/MANIFEST-000002

Output looks like

manifest_file_number 30 next_file_number 31 last_sequence 388082 log_number 28  prev_log_number 0
--- level 0 ---
--- level 1 ---
--- level 2 ---
 5:3244155['0000000000000000' @ 1 : 1 .. '0000000000028220' @ 28221 : 1]
 7:3244177['0000000000028221' @ 28222 : 1 .. '0000000000056441' @ 56442 : 1]
 9:3244156['0000000000056442' @ 56443 : 1 .. '0000000000084662' @ 84663 : 1]
 11:3244178['0000000000084663' @ 84664 : 1 .. '0000000000112883' @ 112884 : 1]
 13:3244158['0000000000112884' @ 112885 : 1 .. '0000000000141104' @ 141105 : 1]
 15:3244176['0000000000141105' @ 141106 : 1 .. '0000000000169325' @ 169326 : 1]
 17:3244156['0000000000169326' @ 169327 : 1 .. '0000000000197546' @ 197547 : 1]
 19:3244178['0000000000197547' @ 197548 : 1 .. '0000000000225767' @ 225768 : 1]
 21:3244155['0000000000225768' @ 225769 : 1 .. '0000000000253988' @ 253989 : 1]
 23:3244179['0000000000253989' @ 253990 : 1 .. '0000000000282209' @ 282210 : 1]
 25:3244157['0000000000282210' @ 282211 : 1 .. '0000000000310430' @ 310431 : 1]
 27:3244176['0000000000310431' @ 310432 : 1 .. '0000000000338651' @ 338652 : 1]
 29:3244156['0000000000338652' @ 338653 : 1 .. '0000000000366872' @ 366873 : 1]
--- level 3 ---
--- level 4 ---
--- level 5 ---
--- level 6 ---

Test Plan: run on test directory created by dbbench

Reviewers: heyongqiang

Reviewed By: heyongqiang

CC: hustliubo

Differential Revision: https://reviews.facebook.net/D4743
2012-08-24 15:17:09 -07:00
bol 89f0c25360 Merge remote-tracking branch 'origin/master' into task1181723
Conflicts:
	Makefile
2012-08-24 03:34:55 -07:00
Dhruba Borthakur 2b443ba887 Title: a command line shell to read/write data from a leveldb thrift server
Summary: Implemented a command line shell to talk with a leveldb thrift server; it is based on the state pattern and can be easily extended.

Test Plan: build and run

Reviewers: dhruba, zshao, heyongqiang

Differential Revision: https://reviews.facebook.net/D4713
2012-08-23 09:41:05 -07:00
Dhruba Borthakur 2aa514ec8c Utility to dump manifest contents.
Summary:
./manifest_dump --file=/tmp/dbbench/MANIFEST-000002

Output looks like

manifest_file_number 30 next_file_number 31 last_sequence 388082 log_number 28  prev_log_number 0
--- level 0 ---
--- level 1 ---
--- level 2 ---
 5:3244155['0000000000000000' @ 1 : 1 .. '0000000000028220' @ 28221 : 1]
 7:3244177['0000000000028221' @ 28222 : 1 .. '0000000000056441' @ 56442 : 1]
 9:3244156['0000000000056442' @ 56443 : 1 .. '0000000000084662' @ 84663 : 1]
 11:3244178['0000000000084663' @ 84664 : 1 .. '0000000000112883' @ 112884 : 1]
 13:3244158['0000000000112884' @ 112885 : 1 .. '0000000000141104' @ 141105 : 1]
 15:3244176['0000000000141105' @ 141106 : 1 .. '0000000000169325' @ 169326 : 1]
 17:3244156['0000000000169326' @ 169327 : 1 .. '0000000000197546' @ 197547 : 1]
 19:3244178['0000000000197547' @ 197548 : 1 .. '0000000000225767' @ 225768 : 1]
 21:3244155['0000000000225768' @ 225769 : 1 .. '0000000000253988' @ 253989 : 1]
 23:3244179['0000000000253989' @ 253990 : 1 .. '0000000000282209' @ 282210 : 1]
 25:3244157['0000000000282210' @ 282211 : 1 .. '0000000000310430' @ 310431 : 1]
 27:3244176['0000000000310431' @ 310432 : 1 .. '0000000000338651' @ 338652 : 1]
 29:3244156['0000000000338652' @ 338653 : 1 .. '0000000000366872' @ 366873 : 1]
--- level 3 ---
--- level 4 ---
--- level 5 ---
--- level 6 ---

Test Plan: run on test directory created by dbbench

Reviewers: heyongqiang

Reviewed By: heyongqiang

CC: hustliubo

Differential Revision: https://reviews.facebook.net/D4743
2012-08-17 22:36:59 -07:00