De-template block based table iterator (#6531)
Summary:
Right now the block-based table iterator is used both for iterating data in a block-based table and as the index iterator for the partitioned index. This was initially convenient for introducing a new iterator and block type for the new index format while reducing code change, but the two usages do not fit together very well. For example, Prev() is never called on the partitioned index iterator, and other complexity maintained in the block-based iterator is not needed for the index iterator, yet maintainers still have to reason about it. Furthermore, the template usage does not follow the Google C++ Style Guide that we follow, and it tangles a large chunk of code together. This commit separates the two iterators. Here is what is done:
1. Copy the block-based iterator code into the partitioned index iterator, and de-template both.
2. Remove some code not needed for the partitioned index. The upper bound check and related tricks are removed; we never measured their performance with the partitioned index enabled in the first place, and a regression is unlikely because new partitioned index blocks are created much more rarely than data blocks.
3. Separate the prefetch logic out into a helper class that both iterator classes call (a rough sketch of this split follows the summary).
This commit enables future follow-ups. One direction is to separate the iterator interfaces for data blocks and index blocks, as they are quite different.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/6531
Test Plan: build using make and cmake, and build release.
Differential Revision: D20473108
fbshipit-source-id: e48011783b339a4257c204cc07507b171b834b0f
2020-03-16 19:17:34 +00:00
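As a rough illustration of item 3, here is a minimal sketch of pulling shared prefetch bookkeeping out of a templated iterator into a plain helper that both concrete iterators own. All class and member names below are hypothetical stand-ins, not the classes introduced by this commit.

```cpp
// Hypothetical sketch only: the shape of the split, not RocksDB's classes.
#include <cstddef>
#include <cstdint>

class PrefetchHelper {
 public:
  // Track whether reads are sequential; a real helper would also own the
  // prefetch buffer and readahead sizing.
  void OnBlockRead(uint64_t offset, size_t len) {
    sequential_reads_ =
        (offset == last_end_offset_) ? sequential_reads_ + 1 : 0;
    last_end_offset_ = offset + len;
  }
  bool ShouldPrefetch() const { return sequential_reads_ >= 2; }

 private:
  uint64_t last_end_offset_ = 0;
  int sequential_reads_ = 0;
};

// After de-templating, each iterator is a plain class that simply owns the
// helper instead of sharing templated logic.
class DataBlockIterSketch {
  PrefetchHelper prefetch_;
};
class PartitionedIndexIterSketch {
  PrefetchHelper prefetch_;
};
```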
// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).
//
// Copyright (c) 2011 The LevelDB Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file. See the AUTHORS file for names of contributors.

#include "table/block_based/block_prefetcher.h"

Set Read rate limiter priority dynamically and pass it to FS (#9996)
Summary:
### Context:
Background compactions and flushes generate large reads and writes and can be long running, especially for universal compaction. In some cases, this can impact foreground reads and writes by users.
### Solution
User, Flush, and Compaction reads share some code paths. For this task, we update the rate_limiter_priority in ReadOptions for those code paths (e.g. FindTable (mainly in BlockBasedTable::Open()) and various iterators), and eventually update the rate_limiter_priority in IOOptions for FSRandomAccessFile.
**This PR is for the Read path.** The dynamic **Read** priority for each state is listed as follows (a rough sketch of this mapping appears after the summary):
| State | Normal | Delayed | Stalled |
| ----- | ------ | ------- | ------- |
| Flush (verification read in BuildTable()) | IO_USER | IO_USER | IO_USER |
| Compaction | IO_LOW | IO_USER | IO_USER |
| User | User provided | User provided | User provided |
We respect the read_options that the user provided and do not override it.
The only sst read for Flush is the verification read in BuildTable(), which is regarded as a user read.
**Details**
1. Set read_options.rate_limiter_priority dynamically:
- User: Do not update the read_options; use the read_options that the user provided.
- Compaction: Update read_options in CompactionJob::ProcessKeyValueCompaction().
- Flush: Update read_options in BuildTable().
2. Pass the rate limiter priority to FSRandomAccessFile functions:
- After calling FindTable(), read_options is passed through GetTableReader() (table_cache.cc), BlockBasedTableFactory::NewTableReader() (block_based_table_factory.cc), and BlockBasedTable::Open(). Open() needs some updates for the ReadOptions variable, as do the functions it calls, including PrefetchTail(), PrepareIOOptions(), ReadFooterFromFile(), ReadMetaIndexblock(), ReadPropertiesBlock(), PrefetchIndexAndFilterBlocks(), and ReadRangeDelBlock().
- In RandomAccessFileReader, the functions to be updated include Read(), MultiRead(), ReadAsync(), and Prefetch().
- Update the downstream functions of NewIndexIterator(), NewDataBlockIterator(), and BlockBasedTableIterator().
### Test Plans
Add unit tests.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9996
Reviewed By: anand1976
Differential Revision: D36452483
Pulled By: gitbw95
fbshipit-source-id: 60978204a4f849bb9261cb78d9bc1cb56d6008cf
2022-05-19 02:41:44 +00:00
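A hedged sketch of the priority table above. The enum values and function below are illustrative stand-ins, not the actual ReadOptions/IOOptions plumbing described in the Details.

```cpp
// Illustrative sketch of the state-to-priority mapping in the table above.
// Enum and function names are hypothetical, not RocksDB's API.
enum class IOPriority { kIOLow, kIOUser };
enum class WriteStallState { kNormal, kDelayed, kStalled };

IOPriority ReadRateLimiterPriority(bool is_flush_verification_read,
                                   bool is_compaction_read,
                                   WriteStallState state,
                                   IOPriority user_provided) {
  if (is_flush_verification_read) {
    // The BuildTable() verification read acts as a user read.
    return IOPriority::kIOUser;
  }
  if (is_compaction_read) {
    // IO_LOW normally; bumped to IO_USER when writes are delayed or stalled.
    return state == WriteStallState::kNormal ? IOPriority::kIOLow
                                             : IOPriority::kIOUser;
  }
  // User reads keep whatever the caller set in ReadOptions.
  return user_provided;
}
```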
#include "rocksdb/file_system.h"

Improve / clean up meta block code & integrity (#9163)
Summary:
* Checksums are now checked on meta blocks unless specifically
suppressed or not applicable (e.g. plain table). (It was the other way
around before.) This means a number of cases that were not checking
checksums now are, including direct read of TableProperties in
Version::GetTableProperties (fixed in meta_blocks ReadTableProperties),
reading any block from PersistentCache (fixed in BlockFetcher), and
reading TableProperties in SstFileDumper (ldb/sst_dump/BackupEngine)
before table reader open, maybe more. (A loose sketch of the
verification idea follows this summary.)
* For that to work, I moved the global_seqno+TableProperties checksum
logic to the shared table/ code, because that is used by many utilities
such as SstFileDumper.
* Also for that to work, we have to know when we're dealing with a block
that has a checksum (trailer), so added that capability to Footer based
on magic number, and from there BlockFetcher.
* Knowledge of trailer presence has also fixed a problem where other
table formats were reading blocks including bytes for a non-existent
trailer--and awkwardly kind-of not using them, e.g. no shared code
checking checksums. (BlockFetcher compression type was populated
incorrectly.) Now we only read what is needed.
* Minimized code duplication and differing/incompatible/awkward
abstractions in meta_blocks.{cc,h} (e.g. SeekTo in metaindex block
without parsing block handle)
* Moved some meta block handling code from table_properties*.*
* Moved some code specific to block-based table from shared table/ code
to BlockBasedTable class. The checksum stuff means we can't completely
separate it, but things that don't need to be in shared table/ code
should not be.
* Use unique_ptr rather than raw ptr in more places. (Note: you can
std::move from unique_ptr to shared_ptr.)
Without enhancements to GetPropertiesOfAllTablesTest (see below),
net reduction of roughly 100 lines of code.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9163
Test Plan:
existing tests and
* Enhanced DBTablePropertiesTest.GetPropertiesOfAllTablesTest to verify that
checksums are now checked on direct read of table properties by TableCache
(new test would fail before this change)
* Also enhanced DBTablePropertiesTest.GetPropertiesOfAllTablesTest to test
putting table properties under the old meta name
* Also generally enhanced that same test to actually test what it was
supposed to be testing already, by kicking things out of the table cache
when we don't want them there.
Reviewed By: ajkr, mrambacher
Differential Revision: D32514757
Pulled By: pdillinger
fbshipit-source-id: 507964b9311d186ae8d1131182290cbd97a99fa9
2021-11-18 19:42:12 +00:00
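A loose sketch of the first bullet (checksums checked unless suppressed or not applicable): block verification might look roughly like this. The struct, the toy checksum, and the helper are stand-ins for illustration, not the code this change adds.

```cpp
// Hedged sketch: verify a block's trailer checksum only when the table
// format actually carries a trailer (known from the footer). Everything
// here is a stand-in.
#include <cstddef>
#include <cstdint>
#include <string>

// Toy checksum (FNV-1a); real code would use CRC32C or XXH3.
uint32_t ToyChecksum(const char* data, size_t n) {
  uint32_t h = 2166136261u;
  for (size_t i = 0; i < n; ++i) {
    h = (h ^ static_cast<unsigned char>(data[i])) * 16777619u;
  }
  return h;
}

struct FetchedBlock {
  std::string contents;      // block payload
  char compression_type;     // trailer byte
  uint32_t stored_checksum;  // trailer checksum
  bool has_trailer;          // deduced from the footer's magic number
};

bool VerifyBlockChecksum(const FetchedBlock& b, bool verification_suppressed) {
  if (verification_suppressed || !b.has_trailer) {
    return true;  // e.g. plain table: no trailer, nothing to verify
  }
  // Conventionally the checksum covers the payload plus the compression byte.
  std::string buf = b.contents;
  buf.push_back(b.compression_type);
  return ToyChecksum(buf.data(), buf.size()) == b.stored_checksum;
}
```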
#include "table/block_based/block_based_table_reader.h"

namespace ROCKSDB_NAMESPACE {

void BlockPrefetcher::PrefetchIfNeeded(
    const BlockBasedTable::Rep* rep, const BlockHandle& handle,
    const size_t readahead_size, bool is_for_compaction,
    const bool no_sequential_checking, const ReadOptions& read_options,
    const std::function<void(uint64_t, size_t, size_t&)>& readaheadsize_cb) {
  const size_t len = BlockBasedTable::BlockSizeWithTrailer(handle);
  const size_t offset = handle.offset();

  if (is_for_compaction) {
    if (!rep->file->use_direct_io() && compaction_readahead_size_ > 0) {
      // If FS supports prefetching (readahead_limit_ will be non zero in that
      // case) and current block exists in prefetch buffer then return.
      if (offset + len <= readahead_limit_) {
        return;
      }
Group rocksdb.sst.read.micros stat by different user read IOActivity + misc (#11444)
Summary:
**Context/Summary:**
- Similar to https://github.com/facebook/rocksdb/pull/11288 but for user reads such as `Get(), MultiGet(), DBIterator::XXX(), Verify(File)Checksum()`. (A small sketch of the grouping idea follows this summary.)
- For this, I refactored some user-facing `MultiGet` calls in `TransactionBase` and various types of `DB` so that they call `GetImpl()` rather than a user-facing `Get()`, in order to pass the `ReadOptions::io_activity` check (see PR conversation).
- The new user read stats breakdown is guarded by `kExceptDetailedTimers`, since measurement shows a 4-5% regression relative to upstream/main.
- Misc
  - More refactoring: with https://github.com/facebook/rocksdb/pull/11288, we have completed passing `ReadOptions/IOOptions` down to the FS level, so we can now replace the previously [added](https://github.com/facebook/rocksdb/pull/9424) `rate_limiter_priority` parameter in `RandomAccessFileReader`'s `Read/MultiRead/Prefetch()` with `IOOptions::rate_limiter_priority`.
  - Also, `ReadAsync()` call time is now measured in `SST_READ_MICRO`.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11444
Test Plan:
- CI fake db crash/stress test
- Microbenchmarking
**Build** `make clean && ROCKSDB_NO_FBCODE=1 DEBUG_LEVEL=0 make -jN db_basic_bench`
- google benchmark version: https://github.com/google/benchmark/commit/604f6fd3f4b34a84ec4eb4db81d842fa4db829cd
- db_basic_bench_base: upstream
- db_basic_bench_pr: db_basic_bench_base + this PR
- asyncread_db_basic_bench_base: upstream + [db basic bench patch for IteratorNext](https://github.com/facebook/rocksdb/compare/main...hx235:rocksdb:micro_bench_async_read)
- asyncread_db_basic_bench_pr: asyncread_db_basic_bench_base + this PR
**Test**
Get
```
TEST_TMPDIR=/dev/shm ./db_basic_bench_{null_stat|base|pr} --benchmark_filter=DBGet/comp_style:0/max_data:134217728/per_key_size:256/enable_statistics:1/negative_query:0/enable_filter:0/mmap:1/threads:1 --benchmark_repetitions=1000
```
Result
```
Coming soon
```
AsyncRead
```
TEST_TMPDIR=/dev/shm ./asyncread_db_basic_bench_{base|pr} --benchmark_filter=IteratorNext/comp_style:0/max_data:134217728/per_key_size:256/enable_statistics:1/async_io:1/include_detailed_timers:0 --benchmark_repetitions=1000 > syncread_db_basic_bench_{base|pr}.out
```
Result
```
Base:
1956,1956,1968,1977,1979,1986,1988,1988,1988,1990,1991,1991,1993,1993,1993,1993,1994,1996,1997,1997,1997,1998,1999,2001,2001,2002,2004,2007,2007,2008,
PR (2.3% regression, due to measuring `SST_READ_MICRO` that wasn't measured before):
1993,2014,2016,2022,2024,2027,2027,2028,2028,2030,2031,2031,2032,2032,2038,2039,2042,2044,2044,2047,2047,2047,2048,2049,2050,2052,2052,2052,2053,2053,
```
Reviewed By: ajkr
Differential Revision: D45918925
Pulled By: hx235
fbshipit-source-id: 58a54560d9ebeb3a59b6d807639692614dad058a
2023-08-09 00:26:50 +00:00
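A small sketch of the grouping idea from the summary above: record SST read latency under a histogram chosen by the read's IO activity. This is detached from the surrounding function; the enum, Histogram, and helper are hypothetical stand-ins, not the statistics objects this PR touches.

```cpp
// Hedged sketch: bucket SST read latency by the activity that issued the
// read. Names are stand-ins for the per-activity stats described above.
#include <chrono>
#include <cstdint>
#include <map>
#include <vector>

enum class IOActivitySketch {
  kGet, kMultiGet, kDBIterator, kVerifyChecksum, kUnknown
};

struct LatencyHistogram {
  std::vector<uint64_t> micros;
  void Add(uint64_t v) { micros.push_back(v); }
};

std::map<IOActivitySketch, LatencyHistogram> sst_read_micros_by_activity;

template <typename ReadFn>
void TimedSstRead(IOActivitySketch activity, ReadFn&& do_read) {
  auto start = std::chrono::steady_clock::now();
  do_read();  // the actual file read
  auto elapsed = std::chrono::duration_cast<std::chrono::microseconds>(
                     std::chrono::steady_clock::now() - start)
                     .count();
  sst_read_micros_by_activity[activity].Add(static_cast<uint64_t>(elapsed));
}
```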
      IOOptions opts;
      Status s = rep->file->PrepareIOOptions(read_options, opts);
      if (!s.ok()) {
        return;
      }
      s = rep->file->Prefetch(opts, offset, len + compaction_readahead_size_);
      if (s.ok()) {
        readahead_limit_ = offset + len + compaction_readahead_size_;
        return;
      } else if (!s.IsNotSupported()) {
        return;
      }
    }
    // If FS prefetch is not supported, fall back to use internal prefetch
    // buffer.
    //
    // num_file_reads is used by FilePrefetchBuffer only when
    // implicit_auto_readahead is set.
    rep->CreateFilePrefetchBufferIfNotExists(
        compaction_readahead_size_, compaction_readahead_size_,
        &prefetch_buffer_, /*implicit_auto_readahead=*/false,
        /*num_file_reads=*/0, /*num_file_reads_for_auto_readahead=*/0,
        /*upper_bound_offset=*/0, /*readaheadsize_cb=*/nullptr);
    return;
  }

  // Explicit user requested readahead.
  if (readahead_size > 0) {
    rep->CreateFilePrefetchBufferIfNotExists(
        readahead_size, readahead_size, &prefetch_buffer_,
        /*implicit_auto_readahead=*/false, /*num_file_reads=*/0,
        /*num_file_reads_for_auto_readahead=*/0, upper_bound_offset_,
        readaheadsize_cb,
        /*usage=*/FilePrefetchBufferUsage::kUserScanPrefetch);
    return;
  }

  // Implicit readahead.

  // If max_auto_readahead_size is set to 0 by the user, no data will be
  // prefetched.
  size_t max_auto_readahead_size = rep->table_options.max_auto_readahead_size;
  if (max_auto_readahead_size == 0 || initial_auto_readahead_size_ == 0) {
    return;
  }

  if (initial_auto_readahead_size_ > max_auto_readahead_size) {
    initial_auto_readahead_size_ = max_auto_readahead_size;
  }

  // In case of no_sequential_checking, skip the num_file_reads_ check and
  // always create the FilePrefetchBuffer.
  if (no_sequential_checking) {
    rep->CreateFilePrefetchBufferIfNotExists(
        initial_auto_readahead_size_, max_auto_readahead_size,
        &prefetch_buffer_, /*implicit_auto_readahead=*/true,
        /*num_file_reads=*/0,
        rep->table_options.num_file_reads_for_auto_readahead,
        upper_bound_offset_, readaheadsize_cb,
        /*usage=*/FilePrefetchBufferUsage::kUserScanPrefetch);
    return;
  }

  // If FS supports prefetching (readahead_limit_ will be non zero in that
  // case) and current block exists in prefetch buffer then return.
  if (offset + len <= readahead_limit_) {
    UpdateReadPattern(offset, len);
    return;
  }

  if (!IsBlockSequential(offset)) {
    UpdateReadPattern(offset, len);
    ResetValues(rep->table_options.initial_auto_readahead_size);
    return;
  }
  UpdateReadPattern(offset, len);

  // Implicit auto readahead, which will be enabled if the number of reads
  // reached `table_options.num_file_reads_for_auto_readahead` (default: 2) and
  // scans are sequential.
  num_file_reads_++;
  if (num_file_reads_ <=
      rep->table_options.num_file_reads_for_auto_readahead) {
    return;
  }

  if (rep->file->use_direct_io()) {
    rep->CreateFilePrefetchBufferIfNotExists(
        initial_auto_readahead_size_, max_auto_readahead_size,
        &prefetch_buffer_, /*implicit_auto_readahead=*/true, num_file_reads_,
        rep->table_options.num_file_reads_for_auto_readahead,
        upper_bound_offset_, readaheadsize_cb,
        /*usage=*/FilePrefetchBufferUsage::kUserScanPrefetch);
    return;
  }

  if (readahead_size_ > max_auto_readahead_size) {
    readahead_size_ = max_auto_readahead_size;
  }

  // If prefetch is not supported, fall back to use internal prefetch buffer.
  IOOptions opts;
  Status s = rep->file->PrepareIOOptions(read_options, opts);
  if (!s.ok()) {
    return;
  }
  s = rep->file->Prefetch(
      opts, handle.offset(),
      BlockBasedTable::BlockSizeWithTrailer(handle) + readahead_size_);
  if (s.IsNotSupported()) {
    rep->CreateFilePrefetchBufferIfNotExists(
        initial_auto_readahead_size_, max_auto_readahead_size,
        &prefetch_buffer_, /*implicit_auto_readahead=*/true, num_file_reads_,
        rep->table_options.num_file_reads_for_auto_readahead,
        upper_bound_offset_, readaheadsize_cb,
        /*usage=*/FilePrefetchBufferUsage::kUserScanPrefetch);
    return;
  }

  readahead_limit_ = offset + len + readahead_size_;
  // Keep exponentially increasing readahead size until
  // max_auto_readahead_size.
  readahead_size_ = std::min(max_auto_readahead_size, readahead_size_ * 2);
}

}  // namespace ROCKSDB_NAMESPACE