Commit graph

61 commits

zhutao aeda36e925 add exe and script path check (#11621)
Summary:
Add a path existence check to the script so it does not run when the db_bench executable does not exist or the relative path is wrong.
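
A minimal sketch of such a guard, assuming the script locates db_bench via an environment variable (names here are illustrative, not the PR's exact code):

```
# Hypothetical guard; benchmark.sh's actual variable names may differ.
db_bench_cmd=${DB_BENCH:-./db_bench}
if [ ! -x "$db_bench_cmd" ]; then
  echo "Error: db_bench not found or not executable at $db_bench_cmd" >&2
  exit 1
fi
```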

Pull Request resolved: https://github.com/facebook/rocksdb/pull/11621

Reviewed By: jowlyzhang

Differential Revision: D47552590

Pulled By: ajkr

fbshipit-source-id: f09ea069f69e067212b249a22ad755b76bc6063a
2023-07-19 12:05:24 -07:00
Peter Dillinger a2eea18fc9 Fix file modes (#10815)
Summary:
*.sh files need execute permission. The benchmark-linux job is failing in CircleCI due to https://github.com/facebook/rocksdb/issues/10803

Pull Request resolved: https://github.com/facebook/rocksdb/pull/10815

Test Plan: CI

Reviewed By: ltamasi

Differential Revision: D40346922

Pulled By: pdillinger

fbshipit-source-id: 658f185b5d2e906ee50e1de1b12f27fa9968ba5d
2022-10-13 09:00:37 -07:00
Mark Callaghan 6ff0c204cb Several small improvements (#10803)
Summary:
This has several small improvements.

benchmark.sh
* add BYTES_PER_SYNC as an env variable
* use --prepopulate_block_cache when O_DIRECT is used
* use --undefok to list options that don't work for all 7.x releases
* print "failure" in report.tsv when a benchmark fails
* parse the slightly different throughput line used by db_bench for multireadrandom
* remove the trailing comma for BlobDB size before printing it in report.tsv
* use the last line of the output from /bin/time, as there can be more than one line when db_bench has a non-zero exit (see the sketch after these lists)
* fix more bash lint warnings
* add ",stats" to the --benchmark=... lines to get stats at the end of each benchmark

benchmark_compare.sh
* run revrange immediately after fillseq to let compaction debt get removed
* add --multiread_batched when --benchmarks=multireadrandom is used
* use --benchmarks=overwriteandwait when supported to get a more accurate measure of write-amp
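
For the /bin/time item above, a minimal sketch of taking only the last line (the actual parsing in benchmark.sh may differ):

```
# /bin/time reports on the last line of the combined output, even when
# db_bench printed extra lines before exiting non-zero.
/usr/bin/time -f '%e %U %S' ./db_bench --benchmarks=overwrite,stats 2>&1 \
  | tail -1 > time_out.txt
```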

Pull Request resolved: https://github.com/facebook/rocksdb/pull/10803

Test Plan: Run it for leveled, universal and BlobDB

Reviewed By: jay-zhuang

Differential Revision: D40278315

Pulled By: mdcallag

fbshipit-source-id: 793134ddc7d48d05a07436cd8942c375a23983a7
2022-10-12 15:13:28 -07:00
Gang Liao 275cd80cdb Add a blob-specific cache priority (#10461)
Summary:
RocksDB's `Cache` abstraction currently supports two priority levels for items: high (used for frequently accessed/highly valuable SST metablocks like index/filter blocks) and low (used for SST data blocks). Blobs are typically lower-value targets for caching than data blocks, since 1) with BlobDB, data blocks containing blob references conceptually form an index structure which has to be consulted before we can read the blob value, and 2) cached blobs represent only a single key-value, while cached data blocks generally contain multiple KVs. Since we would like to make it possible to use the same backing cache for the block cache and the blob cache, it would make sense to add a new, lower-than-low cache priority level (bottom level) for blobs so data blocks are prioritized over them.

This task is a part of https://github.com/facebook/rocksdb/issues/10156

Pull Request resolved: https://github.com/facebook/rocksdb/pull/10461

Reviewed By: siying

Differential Revision: D38672823

Pulled By: ltamasi

fbshipit-source-id: 90cf7362036563d79891f47be2cc24b827482743
2022-08-12 17:59:06 -07:00
Peter Dillinger 65036e4217 Revert "Add a blob-specific cache priority (#10309)" (#10434)
Summary:
This reverts commit 8d178090be
because of a clear performance regression seen in internal dashboard
https://fburl.com/unidash/tpz75iee

Pull Request resolved: https://github.com/facebook/rocksdb/pull/10434

Reviewed By: ltamasi

Differential Revision: D38256373

Pulled By: pdillinger

fbshipit-source-id: 134aa00f50dd7b1bbe037c227884a351342ec44b
2022-07-29 07:18:15 -07:00
Gang Liao 8d178090be Add a blob-specific cache priority (#10309)
Summary:
RocksDB's `Cache` abstraction currently supports two priority levels for items: high (used for frequently accessed/highly valuable SST metablocks like index/filter blocks) and low (used for SST data blocks). Blobs are typically lower-value targets for caching than data blocks, since 1) with BlobDB, data blocks containing blob references conceptually form an index structure which has to be consulted before we can read the blob value, and 2) cached blobs represent only a single key-value, while cached data blocks generally contain multiple KVs. Since we would like to make it possible to use the same backing cache for the block cache and the blob cache, it would make sense to add a new, lower-than-low cache priority level (bottom level) for blobs so data blocks are prioritized over them.

This task is a part of https://github.com/facebook/rocksdb/issues/10156

Pull Request resolved: https://github.com/facebook/rocksdb/pull/10309

Reviewed By: ltamasi

Differential Revision: D38211655

Pulled By: gangliao

fbshipit-source-id: 65ef33337db4d85277cc6f9782d67c421ad71dd5
2022-07-27 19:09:24 -07:00
Gang Liao ec4ebeff30 Support prepopulating/warming the blob cache (#10298)
Summary:
Many workloads have temporal locality, where recently written items are read back in a short period of time. When using remote file systems, this is inefficient since it involves network traffic and higher latencies. Because of this, we would like to support prepopulating the blob cache during flush.
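
A hypothetical db_bench invocation exercising this once the support lands (the blob cache flag names below are assumptions, not confirmed from this PR):

```
# Assumed flags: --use_blob_cache and --prepopulate_blob_cache may not match
# the exact options added by this PR series.
./db_bench --benchmarks=fillrandom,readrandom,stats --num=1000000 \
  --enable_blob_files=1 --use_blob_cache=1 --prepopulate_blob_cache=1
```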

This task is a part of https://github.com/facebook/rocksdb/issues/10156

Pull Request resolved: https://github.com/facebook/rocksdb/pull/10298

Reviewed By: ltamasi

Differential Revision: D37908743

Pulled By: gangliao

fbshipit-source-id: 9feaed234bc719d38f0c02975c1ad19fa4bb37d1
2022-07-17 07:13:59 -07:00
Mark Callaghan 9eced1a344 Add the git hash and full RocksDB version to report.tsv (#10277)
Summary:
Previously the version was displayed as $major.$minor.
This changes it to $major.$minor.$patch.

This also adds the git hash of the sources RocksDB was built from to the end of report.tsv. I confirmed that benchmark_log_tool.py still parses it and that the people who consume/graph these results are OK with it.
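
A sketch of deriving the two new fields, assuming a source checkout is available (not necessarily how benchmark.sh actually does it):

```
# Assumed approach: read the full version from version.h, the hash from git.
version=$(awk '/^#define ROCKSDB_(MAJOR|MINOR|PATCH)/ { print $3 }' \
  include/rocksdb/version.h | paste -sd. -)
githash=$(git rev-parse HEAD)
echo "version=$version githash=$githash"
```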

Example output:
ops_sec	mb_sec	lsm_sz	blob_sz	c_wgb	w_amp	c_mbps	c_wsecs	c_csecs	b_rgb	b_wgb	usec_op	p50	p99	p99.9	p99.99	pmax	uptime	stall%	Nstall	u_cpu	s_cpu	rss	test	date	version	job_id	githash
609488	244.1	1GB	0.0GB,	1.4	0.7	93.3	39	38	0	0	1.6	1.0	4	15	26	5365	15	0.0	0	0.1	0.0	0.5	fillseq.wal_disabled.v400	2022-06-29T13:36:05	7.5.0		6115254416

Pull Request resolved: https://github.com/facebook/rocksdb/pull/10277

Test Plan: Run it

Reviewed By: jay-zhuang

Differential Revision: D37532418

Pulled By: mdcallag

fbshipit-source-id: 55e472640d51265819b228d3373c9fa9b62b660d
2022-07-05 11:46:36 -07:00
Mark Callaghan 720ab355f9 Add undefok for BlobDB options not supported prior to 7.5 (#10276)
Summary:
This adds --undefok so this script can be used with BlobDB against db_bench versions prior
to 7.5, the release in which these options land.

While there is a limit to how far back this script can go WRT backwards compatibility,
this is an easy change to support early 7.x releases.
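
A sketch of the mechanism (which specific options go into the list is abbreviated here):

```
# --undefok makes db_bench builds that don't define the listed flags ignore
# them instead of aborting, so one script can drive multiple 7.x releases.
./db_bench --benchmarks=overwrite,stats --num=1000000 \
  --undefok=enable_blob_files,enable_blob_garbage_collection \
  --enable_blob_files=1 --enable_blob_garbage_collection=1
```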

Pull Request resolved: https://github.com/facebook/rocksdb/pull/10276

Test Plan: Run it with versions of db_bench that do not and then do support these options

Reviewed By: gangliao

Differential Revision: D37529299

Pulled By: mdcallag

fbshipit-source-id: 7bb1feec5c68760e6d64792c585bfbde4f5e52d8
2022-06-30 14:07:26 -07:00
Mark Callaghan 28f2d3cca6 Benchmark fix write amplification computation (#10236)
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/10236

Reviewed By: ajkr

Differential Revision: D37489898

Pulled By: mdcallag

fbshipit-source-id: 4b4565973b1f2c47342b4d1b857c8f89e91da145
2022-06-29 07:22:22 -07:00
Gang Liao 2352e2dfda Add the blob cache to the stress tests and the benchmarking tool (#10202)
Summary:
In order to facilitate correctness and performance testing, we would like to add the new blob cache to our stress test tool `db_stress` and our continuously running crash test script `db_crashtest.py`, as well as our synthetic benchmarking tool `db_bench` and the BlobDB performance testing script `run_blob_bench.sh`.
As part of this task, we would also like to utilize these benchmarking tools to get some initial performance numbers about the effectiveness of caching blobs.
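
For db_bench, the kind of invocation this enables might look like the following (flag names are assumptions based on the PR description):

```
# Assumed flags for enabling and sizing the new blob cache in db_bench.
./db_bench --benchmarks=readrandom,stats --num=1000000 \
  --enable_blob_files=1 --use_blob_cache=1 --blob_cache_size=1073741824
```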

This PR is a part of https://github.com/facebook/rocksdb/issues/10156

Pull Request resolved: https://github.com/facebook/rocksdb/pull/10202

Reviewed By: ltamasi

Differential Revision: D37325739

Pulled By: gangliao

fbshipit-source-id: deb65d0d414502270dd4c324d987fd5469869fa8
2022-06-22 16:04:03 -07:00
Mark Callaghan 04bd347995 Increase num_levels for universal from 8 to 40 (#10158)
Summary:
See https://github.com/facebook/rocksdb/issues/10082 for more details. Trivial moves
aren't done for universal compaction when the compaction is from L0 into L0, so too
small a value for num_levels with db_bench means fewer trivial moves with universal
compaction, and that means write-amp will increase.
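
As a sketch, the relevant db_bench flags (the benchmark choice is illustrative):

```
# Universal compaction (--compaction_style=1) with enough levels that
# compactions out of L0 can use trivial moves instead of rewriting data.
./db_bench --benchmarks=overwrite,stats --num=1000000 \
  --compaction_style=1 --num_levels=40
```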

Pull Request resolved: https://github.com/facebook/rocksdb/pull/10158

Test Plan: run it

Reviewed By: siying

Differential Revision: D37122519

Pulled By: mdcallag

fbshipit-source-id: 1cb39049676f68a6cc3ea8d105a9965f89d4d09e
2022-06-13 16:24:32 -07:00
Mark Callaghan 9efae14428 Fix parsing of db_bench output (#10124)
Summary:
A recent diff added a few more fields to one of the db_bench output lines that gets parsed.
This diff updates tools/benchmark.sh to handle that.

Before:
overwrite    :       7.939 micros/op 125963 ops/sec;   50.5 MB/s

After:
overwrite    :       7.854 micros/op 127320 ops/sec 1800.001 seconds 229176999 operations;   51.0 MB/s
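
A sketch of parsing that tolerates both formats (benchmark.sh's actual awk likely differs):

```
# ops/sec stays in the 5th field in both the old and new formats, so key on
# the "ops/sec" token rather than on the fields that follow it.
line='overwrite : 7.854 micros/op 127320 ops/sec 1800.001 seconds 229176999 operations; 51.0 MB/s'
echo "$line" | awk '/ops\/sec/ { print $5 }'    # prints 127320
```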

Pull Request resolved: https://github.com/facebook/rocksdb/pull/10124

Test Plan: Run it

Reviewed By: jay-zhuang

Differential Revision: D36945137

Pulled By: mdcallag

fbshipit-source-id: 9c96f79491411da997e369a3be9c6b921a21d0fa
2022-06-08 09:23:36 -07:00
Mark Callaghan 5506954b1f Enhance to support more tuning options, and universal and integrated… (#9704)
Summary:
… BlobDB for all tests

This does two big things:
* provides more tuning options
* supports universal and integrated BlobDB for all of the benchmarks that are leveled-only

It does several smaller things, and I will list a few:
* sets l0_slowdown_writes_trigger, which wasn't set before this diff
* improves readability in report.tsv by using smaller field names in the header
* adds more columns to report.tsv

report.tsv before this diff:
```
ops_sec mb_sec  total_size_gb   level0_size_gb  sum_gb  write_amplification     write_mbps      usec_op percentile_50   percentile_75   percentile_99   percentile_99.9 percentile_99.99        uptime  stall_time      stall_percent   test_name       test_date      rocksdb_version  job_id
823294  329.8   0.0     21.5    21.5    1.0     183.4   1.2     1.0     1.0     3       6       14      120     00:00:0.000     0.0     fillseq.wal_disabled.v400       2022-03-16T15:46:45.000-07:00   7.0
326520  130.8   0.0     0.0     0.0     0.0     0       12.2    139.8   155.1   170     234     250     60      00:00:0.000     0.0     multireadrandom.t4      2022-03-16T15:48:47.000-07:00   7.0
86313   345.7   0.0     0.0     0.0     0.0     0       46.3    44.8    50.6    75      84      108     60      00:00:0.000     0.0     revrangewhilewriting.t4 2022-03-16T15:50:48.000-07:00   7.0
101294  405.7   0.0     0.1     0.1     1.0     1.6     39.5    40.4    45.9    64      75      103     62      00:00:0.000     0.0     fwdrangewhilewriting.t4 2022-03-16T15:52:50.000-07:00   7.0
258141  103.4   0.0     0.1     1.2     18.2    19.8    15.5    14.3    18.1    28      34      48      62      00:00:0.000     0.0     readwhilewriting.t4     2022-03-16T15:54:51.000-07:00   7.0
334690  134.1   0.0     7.6     18.7    4.2     308.8   12.0    11.8    13.7    21      30      62      62      00:00:0.000     0.0     overwrite.t4.s0 2022-03-16T15:56:53.000-07:00   7.0
```
report.tsv with this diff:
```
ops_sec mb_sec  lsm_sz  blob_sz c_wgb   w_amp   c_mbps  c_wsecs c_csecs b_rgb   b_wgb   usec_op p50     p99     p99.9   p99.99  pmax    uptime  stall%  Nstall  u_cpu   s_cpu   rss     test    date    version job_id
831144  332.9   22GB    0.0GB,  21.7    1.0     185.1   264     262     0       0       1.2     1.0     3       6       14      9198    120     0.0     0       0.4     0.0     0.7     fillseq.wal_disabled.v400       2022-03-16T16:21:23     7.0
325229  130.3   22GB    0.0GB,  0.0             0.0     0       0       0       0       12.3    139.8   170     237     249     572     60      0.0     0       0.4     0.1     1.2     multireadrandom.t4      2022-03-16T16:23:25     7.0
312920  125.3   26GB    0.0GB,  11.1    2.6     189.3   115     113     0       0       12.8    11.8    21      34      1255    6442    60      0.2     1       0.7     0.1     0.6     overwritesome.t4.s0     2022-03-16T16:25:27     7.0
81698   327.2   25GB    0.0GB,  0.0             0.0     0       0       0       0       48.9    46.2    79      246     369     9445    60      0.0     0       0.4     0.1     1.4     revrangewhilewriting.t4 2022-03-16T16:30:21     7.0
92484   370.4   25GB    0.0GB,  0.1     1.5     1.1     1       0       0       0       43.2    42.3    75      103     110     9512    62      0.0     0       0.4     0.1     1.4     fwdrangewhilewriting.t4 2022-03-16T16:32:24     7.0
241661  96.8    25GB    0.0GB,  0.1     1.5     1.1     1       0       0       0       16.5    17.1    30      34      49      9092    62      0.0     0       0.4     0.1     1.4     readwhilewriting.t4     2022-03-16T16:34:27     7.0
305234  122.3   30GB    0.0GB,  12.1    2.7     201.7   127     124     0       0       13.1    11.8    21      128     1934    6339    62      0.0     0       0.7     0.1     0.7     overwrite.t4.s0 2022-03-16T16:36:30     7.0
```

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9704

Test Plan: run it

Reviewed By: jay-zhuang

Differential Revision: D36864627

Pulled By: mdcallag

fbshipit-source-id: d5af1cfc258a16865210163fa6fd1b803ab1a7d3
2022-06-03 08:20:10 -07:00
Hui Xiao 42cca28ebb Remove deprecated API AdvancedColumnFamilyOptions::rate_limit_delay_max_milliseconds (#9455)
Summary:
**Context/Summary:**
AdvancedColumnFamilyOptions::rate_limit_delay_max_milliseconds has been marked as deprecated and it's time to actually remove the code.
- Keep `soft_rate_limit`/`hard_rate_limit` in `cf_mutable_options_type_info` to prevent throwing `InvalidArgument` in `GetColumnFamilyOptionsFromMap` when reading an option file that still contains these options (e.g., an old option file generated by RocksDB before the deprecation)
- Keep `soft_rate_limit`/`hard_rate_limit` under `OptionsOldApiTest.GetOptionsFromMapTest` to test the case mentioned above.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9455

Test Plan: Rely on my eyeball and CI

Reviewed By: ajkr

Differential Revision: D33811664

Pulled By: hx235

fbshipit-source-id: 866859427fe710354a90f1095057f80116365ff0
2022-01-28 16:47:08 -08:00
Hui Xiao 1e0e883ca5 Remove deprecated API AdvancedColumnFamilyOptions::soft_rate_limit/hard_rate_limit (#9452)
Summary:
**Context/Summary:**
AdvancedColumnFamilyOptions::soft_rate_limit/hard_rate_limit have been marked as deprecated and it's time to actually remove the code.
- Keep `soft_rate_limit`/`hard_rate_limit` in `cf_mutable_options_type_info` to prevent throwing `InvalidArgument` in `GetColumnFamilyOptionsFromMap` when reading an option file that still contains these options (e.g., an old option file generated by RocksDB before the deprecation)
- Keep `soft_rate_limit`/`hard_rate_limit` under `OptionsOldApiTest.GetOptionsFromMapTest` to test the case mentioned above.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9452

Test Plan: Rely on my eyeball and CI

Reviewed By: ajkr

Differential Revision: D33804938

Pulled By: hx235

fbshipit-source-id: 133d49f7ec5238d7efceeb0a3122a5792a2b9945
2022-01-27 13:01:09 -08:00
Levi Tamasi b4e59a48fd Add a benchmarking wrapper script for BlobDB (#9015)
Summary:
The patch adds a new BlobDB benchmarking script called `run_blob_bench.sh`.
It is a thin wrapper around `benchmark.sh` (similarly to `run_flash_bench.sh`):
it actually calls `benchmark.sh` a number of times, cycling through six workloads,
two write-only ones (bulk load and overwrite), two read/write ones (point lookups
while writing, range scans while writing), and two read-only ones (point lookups
and range scans).

Note: this is a simpler/cleaned up/reworked version of the script used to produce the
benchmark results in http://rocksdb.org/blog/2021/05/26/integrated-blob-db.html .
The new version takes advantage of several recent `benchmark.sh` improvements
like the ability to pass in arbitrary `db_bench` options or the possibility of using a
job ID.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9015

Test Plan: Ran the script manually with different parameter combinations.

Reviewed By: riversand963

Differential Revision: D31555277

Pulled By: ltamasi

fbshipit-source-id: 0e151b2f7b2cf6f66ed7f95455571492ad7ea87f
2021-10-12 11:36:03 -07:00
Levi Tamasi 8df334342e Use the write amplification value calculated by RocksDB in benchmark.sh (#8915)
Summary:
Currently, `benchmark.sh` computes write amplification itself; the patch
changes the script to use the value calculated by RocksDB (which is
printed as part of the periodic statistics). This also has the benefit
of being correct for BlobDB as well, since it also considers the amount
of data written to blob files.
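
A sketch of pulling the value out of the periodic statistics, assuming the "Sum" row layout shown in the Test Plan below (W-Amp lands in awk field 12 because the Size value splits into two tokens):

```
# Assumed extraction; benchmark.sh's actual parsing may differ.
w_amp=$(grep '^ Sum' db_bench_output.log | tail -1 | awk '{ print $12 }')
echo "w_amp=$w_amp"
```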

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8915

Test Plan:
```
DB_DIR=/tmp/rocksdbtest/dbbench/ WAL_DIR=/tmp/rocksdbtest/dbbench/ NUM_KEYS=20000000 NUM_THREADS=32 tools/benchmark.sh overwrite --enable_blob_files=1 --enable_blob_garbage_collection=1

...

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      7/5   43.93 MB   0.5      0.3     0.0      0.3       0.5      0.3       0.0   1.0      1.3     59.9    201.35            101.88       109    1.847     22M   499K       0.0      11.2
  L4      4/4   244.03 MB   0.0     11.4     0.3      1.6       1.6      0.0       0.0   1.1     50.6     49.3    231.10            288.84         7   33.014    156M    26M       9.5       9.5
  L5     36/0    3.28 GB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Sum     47/9    3.56 GB   0.0     11.7     0.3      1.8       2.2      0.3       0.0   2.0     27.6     54.3    432.45            390.72       116    3.728    179M    26M       9.5      20.8
 Int      0/0    0.00 KB   0.0      3.5     0.1      0.5       0.6      0.1       0.0   2.2     31.2     55.6    115.01            109.53        29    3.966     51M  7353K       2.9       5.6

...

Completed overwrite (ID: ) in 289 seconds
ops/sec	mb/sec	Size-GB	L0_GB	Sum_GB	W-Amp	W-MB/s	usec/op	p50	p75	p99	p99.9	p99.99	Uptime	Stall-time	Stall%	Test	Date	Version	Job-ID
111784	44.8	0.0	0.5	2.2	2.0	9.2	285.9	215.3	264.4	1232	13299	23310	243	00:00:0.000	0.0	overwrite.t32.s0	2021-09-14T11:58:26.000-07:00	6.24
```

Reviewed By: zhichao-cao

Differential Revision: D30940352

Pulled By: ltamasi

fbshipit-source-id: ae7f5cd5440c8529788dda043266121fc2be0853
2021-09-15 12:16:59 -07:00
Adam Retter e10e4162c8 Improve benchmark.sh (#8730)
Summary:
* Started on some proper usage text to document the options
* Added a `JOB_ID` parameter, so that we can trace jobs and relate them to other assets (see the sketch after this list)
* Now generates a correct TSV file of the summary
* Summary has new additional fields:
    * RocksDB Version
    * Date
    * Job ID
* db_bench log files now also include the Job ID
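
A hypothetical invocation using the new parameter (environment-variable style assumed from how benchmark.sh takes its other settings):

```
# JOB_ID tags each summary row and db_bench log so runs can be correlated
# with other assets; the value format is up to the caller.
JOB_ID="perf-$(date +%Y%m%d)" NUM_KEYS=20000000 NUM_THREADS=32 \
  tools/benchmark.sh readrandom
```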

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8730

Reviewed By: mrambacher

Differential Revision: D30747344

Pulled By: jay-zhuang

fbshipit-source-id: 87eb78d20959b6d95804aebf129606fa9c71f407
2021-09-14 11:09:55 -07:00
Adam Retter 48c468c22e Use non-zero exit codes in benchmark.sh when the benchmark cannot be run (#8554)
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/8554
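
There is no summary text here, so as a minimal sketch of the pattern the title describes (the check and message are illustrative, not the PR's exact code):

```
# Illustrative guard: exit with a non-zero status instead of running on.
if [ -z "$DB_DIR" ]; then
  echo "ERROR: DB_DIR is not defined" >&2
  exit 1
fi
```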

Reviewed By: ajkr

Differential Revision: D29756562

Pulled By: mrambacher

fbshipit-source-id: ab2f5ef988c8ac7ea7c633e6a3dacaf16f021529
2021-08-16 06:25:28 -07:00
HappyUncle d56f74a4db Update benchmark.sh (#8615)
Summary:
Fix help message.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8615

Reviewed By: siying

Differential Revision: D30136092

Pulled By: mrambacher

fbshipit-source-id: edf4112570514d709560baaf96a47c5f36f00665
2021-08-06 14:35:34 -07:00
mrambacher da90e23998 Improvements to benchmark.sh script (#8346)
Summary:
1.  Fix printing of stats when there are no writes (w_amp=0).  Previously this hit a divide-by-zero error

2.  Added multireadrandom command as a valid target

3.  Added the ability to pass additional command-line options to db_bench.  Now one can run things like `benchmark.sh readrandom --mmap_read` and the option will be passed to db_bench.
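
Usage per item 3, with the flag taken from the summary itself:

```
# Extra arguments after the benchmark name are forwarded to db_bench.
tools/benchmark.sh readrandom --mmap_read
```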

Pull Request resolved: https://github.com/facebook/rocksdb/pull/8346

Reviewed By: zhichao-cao

Differential Revision: D29500436

Pulled By: mrambacher

fbshipit-source-id: 54e90708aae9133be3a903e35efdf8f8abbd86fa
2021-07-12 12:18:17 -07:00
Remington Brasga a993cc3a62 Fixed typo in benchmark.sh (#6434)
Summary:
TB =  1024 * GB
Pull Request resolved: https://github.com/facebook/rocksdb/pull/6434

Differential Revision: D19978339

Pulled By: zhichao-cao

fbshipit-source-id: 5a89890110b23f0ebda4a95223f66da6736321ac
2020-02-19 17:08:02 -08:00
Fosco Marotto 6c2bf9e916 Add copyright headers per FB open-source checkup tool. (#5199)
Summary:
internal task: T35568575
Pull Request resolved: https://github.com/facebook/rocksdb/pull/5199

Differential Revision: D14962794

Pulled By: gfosco

fbshipit-source-id: 93838ede6d0235eaecff90d200faed9a8515bbbe
2019-04-18 10:55:01 -07:00
Fosco Marotto 311cd8cf2f Updated benchmark script (#4134)
Summary:
When producing the updated performance on flash results for the wiki, these are the updates which were made.

https://github.com/facebook/rocksdb/wiki/Performance-Benchmarks
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4134

Differential Revision: D13491052

Pulled By: gfosco

fbshipit-source-id: dcd92f24659e0917cb1ac54a4446aa8e7aac8b0d
2018-12-17 16:34:30 -08:00
Young Tack Jin c648d90f8e benchmark.sh: to fix divide by zero runtime error (#4442)
Summary:
"Write (GB)" of $9 rather than "Rnp1 (GB)" of $8
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4442

Differential Revision: D10318193

Pulled By: yiwu-arbug

fbshipit-source-id: 03a7ef1938d9332e06fb3fd8490ca212f61fac6b
2018-10-10 21:03:19 -07:00
Andrew Kryczka d56070d875 Fix benchmark script with vector memtable (#4428)
Summary:
I guess we didn't update this script when `--allow_concurrent_memtable_write` became true by default.

Fixes #4413.
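
A sketch of the combination in question (the exact flags the script sets may differ):

```
# Vector memtables don't support concurrent writes, so turn the (now
# default-on) concurrent memtable writes off when using them.
./db_bench --benchmarks=fillrandom --num=1000000 --memtablerep=vector \
  --allow_concurrent_memtable_write=false
```
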
Pull Request resolved: https://github.com/facebook/rocksdb/pull/4428

Differential Revision: D10036452

Pulled By: ajkr

fbshipit-source-id: f464be0642bd096d9040f82cdc3eae614a902183
2018-09-26 13:22:45 -07:00
Siying Dong 6383e42362 benchmark.sh to use --max_background_job
Summary: Closes https://github.com/facebook/rocksdb/pull/3632

Differential Revision: D7347012

Pulled By: siying

fbshipit-source-id: 46230ec4a917ccf4c478825b07e92b4665a4820b
2018-03-20 18:57:55 -07:00
Mark Isaacson b8eb32f8cf Suppress lint in old files
Summary: Grandfather in super old lint issues to make a clean slate for moving forward that allows us to have stronger enforcement on new issues.

Reviewed By: yiwu-arbug

Differential Revision: D6821806

fbshipit-source-id: 22797d31ec58e9eb0255d3b66fedfcfcb0dc127c
2018-01-29 12:56:42 -08:00
Alan Somers 5883a1ae24 Fix /bin/bash shebangs
Summary:
"/bin/bash" is a Linuxism.  "/usr/bin/env bash" is portable.
Closes https://github.com/facebook/rocksdb/pull/2646

Differential Revision: D5556259

Pulled By: ajkr

fbshipit-source-id: cbffd38ecdbfffb2438969ec007ab345ed893ccb
2017-08-03 15:56:46 -07:00
Leonidas Galanis a2a883318b remove deleted option from benchmark.sh
Summary:
Removed max_grandparent_overlap_factor from benchmark.sh since it is not a valid option anymore.
Closes https://github.com/facebook/rocksdb/pull/2015

Differential Revision: D4748229

Pulled By: lgalanis

fbshipit-source-id: c3869ea
2017-03-21 12:54:13 -07:00
Sagar Vemuri eb912a927e Remove disableDataSync option
Summary:
Remove disableDataSync, and the similarly named disable_data_sync option.
This is being done to simplify options, and also because the performance gains of this feature can be achieved by other methods.
Closes https://github.com/facebook/rocksdb/pull/1859

Differential Revision: D4541292

Pulled By: sagar0

fbshipit-source-id: 5b3a6ca
2017-02-13 11:09:13 -08:00
Yueh-Hsuan Chiang fca5aa6fcc Initial script for the new regression test
Summary:
This diff includes an initial script that runs a set of benchmarks for
regression testing.  The script does the following things:

  checkout the specified rocksdb commit (or origin/master as default)
  make clean && DEBUG_LEVEL=0 make db_bench
  setup test directories
  run set of benchmarks and store results

Currently, the script runs a couple of benchmarks, stores all the benchmark
output, extracts micros-per-op and percentile information for each benchmark,
and stores it in a single SUMMARY.csv file.  The SUMMARY.csv will make the
follow-up regression detection easier.

In addition, the current script only takes env arguments to set important
attributes of db_bench.  I will follow up with a patch that allows db_bench
to construct options from an options file.

Test Plan:
NUM_KEYS=100 ./tools/regression_test.sh

  Sample SUMMARY.csv file:

                                     commit id,                      benchmark,  ms-per-op,        p50,        p75,        p99,      p99.9,     p99.99
      7e23ddf575890510e7d2fc7a79b31a1bbf317917,                        fillseq,      15.28,      54.66,      77.14,    5000.00,   17900.00,   18483.00
      7e23ddf575890510e7d2fc7a79b31a1bbf317917,                      overwrite,      13.54,      57.69,      86.39,    3000.00,   15600.00,   17013.00
      7e23ddf575890510e7d2fc7a79b31a1bbf317917,                     readrandom,       1.04,       0.80,       1.67,     293.33,     395.00,     504.00
      7e23ddf575890510e7d2fc7a79b31a1bbf317917,               readwhilewriting,       2.75,       1.01,       1.87,     200.00,     460.00,     485.00
      7e23ddf575890510e7d2fc7a79b31a1bbf317917,                   deleterandom,       3.64,      48.12,      70.09,     200.00,     336.67,     347.00
      7e23ddf575890510e7d2fc7a79b31a1bbf317917,                     seekrandom,      24.31,     391.87,     513.69,     872.73,     990.00,    1048.00
      7e23ddf575890510e7d2fc7a79b31a1bbf317917,         seekrandomwhilewriting,      14.02,     185.14,     294.15,     700.00,    1440.00,    1527.00

Reviewers: sdong, IslamAbdelRahman, kradhakrishnan, yiwu, andrewkr, gunnarku

Reviewed By: gunnarku

Subscribers: gunnarku, MarkCallaghan, andrewkr, dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D57597
2016-05-09 13:32:57 -07:00
Andrew Kryczka 4032145adc Configurable compression in db_bench
Summary:
Made compression type and dictionary size configurable via environment
variables.
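
A sketch of how a script can map these environment variables to db_bench flags (the defaults are assumed; see the Test Plan below for real usage):

```
# Assumed fallbacks; the script's actual defaults may differ.
compression_type=${COMPRESSION_TYPE:-snappy}
max_dict_bytes=${COMPRESSION_MAX_DICT_BYTES:-0}
./db_bench --benchmarks=filluniquerandom --num=10000000 \
  --compression_type="$compression_type" \
  --compression_max_dict_bytes="$max_dict_bytes"
```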

Depends on D52287.

Test Plan:
check these options are passed to the db.

  $ COMPRESSION_MAX_DICT_BYTES=65536 COMPRESSION_TYPE=LZ4 NUM_KEYS=10000000 DB_DIR=./tmp/ WAL_DIR=./tmp/ ./tools/benchmark.sh filluniquerandom
  ...
  $ grep Options.compression tmp/LOG
  2016/04/22-19:11:30.397829 7f5f263a2980          Options.compression: LZ4
  ...
  2016/04/22-19:11:30.397837 7f5f263a2980         Options.compression_opts.max_dict_bytes: 65536

Reviewers: IslamAbdelRahman, sdong

Reviewed By: sdong

Subscribers: andrewkr, dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D57141
2016-04-27 17:39:18 -07:00
Andrew Kryczka c3c389d542 Fix column label for L0 write sum
Summary:
This is taken from the "Write(GB)" column in compaction stats, so the
units should be GB, not MB.

Test Plan: none

Reviewers: sdong, yhchiang, IslamAbdelRahman

Reviewed By: IslamAbdelRahman

Subscribers: leveldb, andrewkr, dhruba

Differential Revision: https://reviews.facebook.net/D56889
2016-04-18 14:34:45 -07:00
Marton Trencseni 9b51987521 Adding pin_l0_filter_and_index_blocks_in_cache feature and related fixes.
Summary:
When a block based table file is opened, if prefetch_index_and_filter is true, it will prefetch the index and filter blocks, putting them into the block cache.
What this feature adds: when an L0 block based table file is opened, if pin_l0_filter_and_index_blocks_in_cache is true in the options (and prefetch_index_and_filter is true), then the filter and index blocks aren't released back to the block cache at the end of BlockBasedTableReader::Open(). Instead the table reader takes ownership of them, hence pinning them, i.e. the LRU cache will never push them out. Meanwhile, in the table reader, further accesses will not hit the block cache, thus avoiding lock contention.
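
A db_bench invocation exercising the feature (db_bench exposes flags under matching names; treat the combination as an assumption):

```
# Pinning only matters when index/filter blocks live in the block cache.
./db_bench --benchmarks=readrandom --num=1000000 \
  --cache_index_and_filter_blocks=true \
  --pin_l0_filter_and_index_blocks_in_cache=true
```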

Test Plan:
'export TEST_TMPDIR=/dev/shm/ && DISABLE_JEMALLOC=1 OPT=-g make all valgrind_check -j32' is OK.
I didn't run the Java tests, I don't have Java set up on my devserver.

Reviewers: sdong

Reviewed By: sdong

Subscribers: andrewkr, dhruba

Differential Revision: https://reviews.facebook.net/D56133
2016-04-01 10:42:39 -07:00
sdong b1fafcaca6 Revert "Adding pin_l0_filter_and_index_blocks_in_cache feature."
This reverts commit 522de4f59e.

It has bug of index block cleaning up.
2016-03-21 11:50:42 -07:00
Marton Trencseni 522de4f59e Adding pin_l0_filter_and_index_blocks_in_cache feature.
Summary:
When a block based table file is opened, if prefetch_index_and_filter is true, it will prefetch the index and filter blocks, putting them into the block cache.
What this feature adds: when an L0 block based table file is opened, if pin_l0_filter_and_index_blocks_in_cache is true in the options (and prefetch_index_and_filter is true), then the filter and index blocks aren't released back to the block cache at the end of BlockBasedTableReader::Open(). Instead the table reader takes ownership of them, hence pinning them, i.e. the LRU cache will never push them out. Meanwhile, in the table reader, further accesses will not hit the block cache, thus avoiding lock contention.
When the table reader is destroyed, it releases the pinned blocks (if there were any). This has to happen before the cache is destroyed, so I had to introduce a TableReader::Close() to guarantee the order of destruction.

Test Plan:
Added two unit tests for this. Existing unit tests run fine (default is pin_l0_filter_and_index_blocks_in_cache=false).

DISABLE_JEMALLOC=1 OPT=-g make all valgrind_check -j32
  Mac: OK.
  Linux: with D55287 patched in it's OK.

Reviewers: sdong

Reviewed By: sdong

Subscribers: andrewkr, leveldb, dhruba

Differential Revision: https://reviews.facebook.net/D54801
2016-03-17 22:40:01 +00:00
Gunnar Kudrjavets 90aff0c444 Update --max_write_buffer_number for compaction benchmarks
Summary: For compaction benchmarks (both level and universal) we'll use `--max_write_buffer_number=4`. For all the other benchmarks which don't customize the value of `--max_background_flushes` we'll continue using `--max_write_buffer_number=8`.
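
A sketch of the selection logic (variable and benchmark names are assumptions):

```
bench_name=${1:-overwrite}
case "$bench_name" in
  *compaction*) max_write_buffer_number=4 ;;  # level/universal compaction runs
  *)            max_write_buffer_number=8 ;;  # everything else keeps the old value
esac
echo "--max_write_buffer_number=$max_write_buffer_number"
```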

Test Plan:
To validate basic correctness and command-line options:

```
cd ~/rocksdb
NKEYS=10000000 ./tools/run_flash_bench.sh
```

Reviewers: MarkCallaghan

Reviewed By: MarkCallaghan

Subscribers: andrewkr, dhruba

Differential Revision: https://reviews.facebook.net/D55497
2016-03-17 10:14:23 -07:00
Gunnar Kudrjavets 697fab820a Updates to RocksDB subcompaction benchmarking script
Summary: Set of updates to the subcompaction benchmark script which are based on our internal discussions. The intent behind the changes is to make sure that the scripts will correctly reflect how we're doing the actual benchmarking.

Test Plan: Tested by exercising the full set of compaction benchmarks and validating the execution and consistency of results.

Reviewers: MarkCallaghan, sdong, yhchiang

Reviewed By: yhchiang

Subscribers: andrewkr, dhruba

Differential Revision: https://reviews.facebook.net/D55461
2016-03-14 23:09:04 -07:00
Gunnar Kudrjavets 68189f7e1b Update benchmarks used to measure subcompaction performance
Summary: After closely working with Mark, Siying, and Yueh-Hsuan this set of changes reflects the updates needed to measure RocksDB subcompaction performance in a correct manner. The essence of the benchmark is executing `fillrandom` followed by `compact` with the correct set of options for various number of subcompactions specified.
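
The benchmark shape, as a sketch (counts and sizes assumed):

```
# fillrandom builds up compaction debt; the "compact" benchmark then runs a
# full manual compaction with the requested number of subcompactions.
for subs in 1 2 4 8 16; do
  ./db_bench --benchmarks=fillrandom,compact --subcompactions="$subs" \
    --num=10000000 --db=/tmp/subcompaction_test_$subs
done
```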

Test Plan: Tested internally to verify correctness and reliability.

Reviewers: sdong, yhchiang, MarkCallaghan

Reviewed By: MarkCallaghan

Subscribers: dhruba

Differential Revision: https://reviews.facebook.net/D55089
2016-03-04 12:32:11 -08:00
Gunnar Kudrjavets 337671b688 Add universal compaction benchmarks to run_flash_bench.sh
Summary:
Implement a benchmark for universal compaction based on the feature description (see below), in-person discussions, and reading source code:

https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide
https://github.com/facebook/rocksdb/wiki/Universal-Compaction
https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide#universal-compaction

Universal compaction benchmark is based on `overwrite` benchmark, adding compaction specific options to it, and executing it for different values of subcompaction to understand the impact of scaling out subcompactions for a particular scenario.

Test Plan:
  - Execute the benchmark on various machines for multiple iterations to verify the reliability.
  - Observe the output to make sure that compaction is taking place.
  - Observe the execution to make sure that arguments passed to `db_bench` are correct.

Reviewers: sdong, MarkCallaghan

Reviewed By: MarkCallaghan

Subscribers: dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D54045
2016-02-10 15:30:47 -08:00
Gunnar Kudrjavets 8ed3438778 Add option to run fillseq with WAL enabled in addition to WAL disabled
Summary: This set of changes is part of the work to introduce benchmark for universal style compaction in RocksDB. It's conceptually separate from the compaction work, so sending it out as a separate diff to get it out of the way.
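
The two variants, as a sketch:

```
# Same workload with and without the write-ahead log.
./db_bench --benchmarks=fillseq --num=10000000 --disable_wal=0
./db_bench --benchmarks=fillseq --num=10000000 --disable_wal=1
```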

Test Plan:
  - Run `./tools/run_flash_bench.sh`.
  - Look at the contents of `report.txt` and `report2.txt` to make sure that data is reported and attributed correctly.
  - During `db_bench` execution time make sure that the correct flags are passed to `--disable_wal` depending on the benchmark being executed.

Reviewers: MarkCallaghan

Reviewed By: MarkCallaghan

Subscribers: dhruba, leveldb

Differential Revision: https://reviews.facebook.net/D53865
2016-02-05 13:20:56 -08:00
Mark Callaghan 4041903ecd Enhance db_bench write rate limit
Summary:
1) changes tools/{benchmark,run_flash_bench}.sh to optionally use the write rate limit
2) removes code for --writes_per_second and switches the 'background' write rate limit
to use --benchmark_write_rate_limit
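
A sketch of the replacement flag in use (the rate value is illustrative):

```
# Rate-limit the background writers via db_bench's own limiter (bytes/sec)
# instead of the removed --writes_per_second mechanism.
./db_bench --benchmarks=readwhilewriting --num=10000000 \
  --benchmark_write_rate_limit=$(( 2 * 1024 * 1024 ))
```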

Replaces https://reviews.facebook.net/D49113

Task ID: #9555881

Test Plan:
tools/run_flash_bench.sh

Reviewers: igor

Reviewed By: igor

Subscribers: dhruba

Differential Revision: https://reviews.facebook.net/D52485
2016-01-04 12:01:27 -08:00
Mark Callaghan 4c81ac0c59 Fix benchmark report script
Summary:
db_bench output now displays Percentile many times with --statistics, after
read IO latency histograms were added, so I only need the last one in the report output.
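
A sketch of keeping only the final occurrence:

```
# --statistics now emits a "Percentile" line per IO-latency histogram as
# well; only the last one belongs in the report.
grep Percentile db_bench_output.log | tail -1
```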

Test Plan:
run run_flash_bench.sh

Reviewers: igor

Reviewed By: igor

Subscribers: dhruba

Differential Revision: https://reviews.facebook.net/D45093
2015-08-22 12:18:00 -07:00
Mark Callaghan 41a0e2811d Improve defaults for benchmarks
Summary:
Changes include (expressed as db_bench flags in the sketch after this list):
* don't sync-on-commit for the single writer thread in the readwhile... tests
* make the default block size 8KB rather than 4KB, to avoid blocks that are too small after compression
* use snappy instead of zlib to avoid stalls from compression latency
* disable statistics
* use bytes_per_sync=8M to reduce throughput loss on disk
* use open_files=-1 to reduce mutex contention
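
The same defaults expressed as flags (a sketch; the sync-on-commit change is per-test, so it is omitted here):

```
# 8KB blocks, snappy, no stats, 8MB bytes_per_sync, unlimited open files.
./db_bench --benchmarks=readwhilewriting --num=10000000 \
  --block_size=8192 --compression_type=snappy --statistics=0 \
  --bytes_per_sync=$(( 8 * 1024 * 1024 )) --open_files=-1
```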

Test Plan:
run benchmark

Reviewers: igor

Reviewed By: igor

Subscribers: dhruba

Differential Revision: https://reviews.facebook.net/D44961
2015-08-20 18:59:10 -07:00
agiardullo dc9d70de65 Optimistic Transactions
Summary: Optimistic transactions supporting begin/commit/rollback semantics.  Currently relies on checking the memtable to determine if there are any collisions at commit time.  Not yet implemented would be a way of ensuring the memtable has some minimum amount of history so that we won't fail to commit when the memtable is empty.  You should probably start with transaction.h to get an overview of what is currently supported.

Test Plan: Added a new test, but still need to look into stress testing.

Reviewers: yhchiang, igor, rven, sdong

Reviewed By: sdong

Subscribers: adamretter, MarkCallaghan, leveldb, dhruba

Differential Revision: https://reviews.facebook.net/D33435
2015-05-29 14:36:35 -07:00
Mark Callaghan 88044340c1 Add Size-GB column to benchmark reports
Summary:
See https://gist.github.com/mdcallag/b867ee051d765760be0d for a sample

Reviewers: igor

Reviewed By: igor

Subscribers: dhruba

Differential Revision: https://reviews.facebook.net/D37971
2015-05-02 07:46:12 -07:00
Mark Callaghan 283a042969 Set --seed per test
Summary:
This is done to avoid having each thread use the same seed between runs
of db_bench. Without this we can inflate the OS filesystem cache hit rate on
reads for read-heavy tests and generally see the same key sequences get generated
between test runs.
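
A sketch of per-run seeding (benchmark.sh's exact scheme may differ):

```
# A time-based seed changes between runs, so threads don't replay the same
# key sequences and inflate the OS filesystem cache hit rate.
./db_bench --benchmarks=readrandom --num=10000000 --seed=$(date +%s)
```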

Reviewers: igor

Reviewed By: igor

Subscribers: dhruba

Differential Revision: https://reviews.facebook.net/D37563
2015-04-23 09:18:25 -07:00
Mark Callaghan 78dbd087d1 Improve benchmark scripts
Summary:
This adds:
1) use of --level_compaction_dynamic_level_bytes=true
2) use of --bytes_per_sync=2M
The second is a big win for disks. The first helps in general.

This also adds a new test, fillseq with 32KB values, to increase the peak
ingest and make it more likely that storage limits throughput.
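
The two options and the new test expressed as db_bench flags (a sketch):

```
# Dynamic level sizing plus 2MB bytes_per_sync; 32KB values raise the peak
# ingest rate so storage is more likely to be the bottleneck.
./db_bench --benchmarks=fillseq --num=10000000 --value_size=32768 \
  --level_compaction_dynamic_level_bytes=true \
  --bytes_per_sync=$(( 2 * 1024 * 1024 ))
```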

Sample output from the first 3 tests - https://gist.github.com/mdcallag/e793bd3038e367b05d6f

Reviewers: igor

Reviewed By: igor

Subscribers: dhruba

Differential Revision: https://reviews.facebook.net/D37509
2015-04-22 13:23:08 -07:00