// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).
//
// Copyright (c) 2011 The LevelDB Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file. See the AUTHORS file for names of contributors.
//
// Thread-safe (provides internal synchronization)

#pragma once

#include <cstdint>
#include <string>
#include <vector>

#include "db/dbformat.h"
#include "db/range_del_aggregator.h"
#include "options/cf_options.h"
#include "port/port.h"
#include "rocksdb/cache.h"
#include "rocksdb/env.h"
#include "rocksdb/options.h"
#include "rocksdb/table.h"
#include "table/table_reader.h"
#include "trace_replay/block_cache_tracer.h"

namespace ROCKSDB_NAMESPACE {

class Env;
class Arena;
struct FileDescriptor;
class GetContext;
class HistogramImpl;

// Manages caching for TableReader objects for a column family. The actual
// cache is allocated separately and passed to the constructor. TableCache
// wraps around the underlying SST file readers by providing Get(),
// MultiGet() and NewIterator() methods that hide the instantiation,
// caching and access to the TableReader. The main purpose of this is
// performance - by caching the TableReader, it avoids unnecessary file opens
// and object allocation and instantiation. One exception is compaction, where
// a new TableReader may be instantiated - see NewIterator() comments
//
// Another service provided by TableCache is managing the row cache - if the
// DB is configured with a row cache, and the lookup key is present in the row
// cache, lookup is very fast. The row cache is obtained from
// ioptions.row_cache
class TableCache {
 public:
  TableCache(const ImmutableOptions& ioptions,
             const FileOptions* storage_options, Cache* cache,
             BlockCacheTracer* const block_cache_tracer,
             const std::shared_ptr<IOTracer>& io_tracer,
             const std::string& db_session_id);
  ~TableCache();
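
  // Illustrative construction sketch (hypothetical caller code, not part of
  // this header; `ioptions`, `file_options`, `io_tracer` and `db_session_id`
  // are assumed to be in scope). Each column family owns a TableCache built
  // on top of a Cache instance that may be shared across column families:
  //
  //   std::shared_ptr<Cache> c = NewLRUCache(TableCache::kInfiniteCapacity);
  //   TableCache table_cache(ioptions, &file_options, c.get(),
  //                          /*block_cache_tracer=*/nullptr, io_tracer,
  //                          db_session_id);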

  // Return an iterator for the specified file number (the corresponding
  // file length must be exactly "file_size" bytes). If "table_reader_ptr"
  // is non-nullptr, also sets "*table_reader_ptr" to point to the Table object
  // underlying the returned iterator, or nullptr if no Table object underlies
  // the returned iterator. The returned "*table_reader_ptr" object is owned
  // by the cache and should not be deleted, and is valid for as long as the
  // returned iterator is live.
  // @param options Must outlive the returned iterator.
  // @param range_del_agg If non-nullptr, adds range deletions to the
  //    aggregator. If an error occurs, returns it in a NewErrorInternalIterator
  // @param caller If TableReaderCaller::kCompaction, a new TableReader may be
  //    allocated (but not cached), depending on the CF options
  // @param skip_filters Disables loading/accessing the filter block
  // @param level The level this table is at, -1 for "not set / don't know"
  InternalIterator* NewIterator(
      const ReadOptions& options, const FileOptions& toptions,
      const InternalKeyComparator& internal_comparator,
      const FileMetaData& file_meta, RangeDelAggregator* range_del_agg,
      const SliceTransform* prefix_extractor, TableReader** table_reader_ptr,
      HistogramImpl* file_read_hist, TableReaderCaller caller, Arena* arena,
      bool skip_filters, int level, size_t max_file_size_for_l0_meta_pin,
      const InternalKey* smallest_compaction_key,
      const InternalKey* largest_compaction_key, bool allow_unprepared_value);
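
  // Illustrative sketch (hypothetical caller; `icmp`, `file_meta`, `arena`
  // and the options objects are assumed to be in scope): opening a scan over
  // a single SST file on behalf of a user iterator:
  //
  //   InternalIterator* it = table_cache->NewIterator(
  //       read_options, file_options, icmp, file_meta,
  //       /*range_del_agg=*/nullptr, /*prefix_extractor=*/nullptr,
  //       /*table_reader_ptr=*/nullptr, /*file_read_hist=*/nullptr,
  //       TableReaderCaller::kUserIterator, &arena, /*skip_filters=*/false,
  //       /*level=*/-1, /*max_file_size_for_l0_meta_pin=*/0,
  //       /*smallest_compaction_key=*/nullptr,
  //       /*largest_compaction_key=*/nullptr,
  //       /*allow_unprepared_value=*/false);
  //   for (it->SeekToFirst(); it->Valid(); it->Next()) { /* ... */ }
  //
  // When `arena` is non-null the iterator is placement-allocated in the
  // arena and must not be deleted directly.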

  // If a seek to internal key "k" in specified file finds an entry,
  // call get_context->SaveValue() repeatedly until
  // it returns false. As a side effect, it will insert the TableReader
  // into the cache and potentially evict another entry
  // @param get_context Context for get operation. The result of the lookup
  //    can be retrieved by calling get_context->State()
  // @param file_read_hist If non-nullptr, the file reader statistics are
  //    recorded
  // @param skip_filters Disables loading/accessing the filter block
  // @param level The level this table is at, -1 for "not set / don't know"
  Status Get(const ReadOptions& options,
             const InternalKeyComparator& internal_comparator,
             const FileMetaData& file_meta, const Slice& k,
             GetContext* get_context,
             const SliceTransform* prefix_extractor = nullptr,
             HistogramImpl* file_read_hist = nullptr, bool skip_filters = false,
             int level = -1, size_t max_file_size_for_l0_meta_pin = 0);
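
  // Illustrative sketch (hypothetical caller; `icmp`, `file_meta`, an
  // internal key `ikey` and a fully initialized `get_context` are assumed to
  // be in scope): a point lookup against a single file, inspected through
  // the GetContext:
  //
  //   Status s = table_cache->Get(read_options, icmp, file_meta, ikey,
  //                               &get_context);
  //   if (s.ok() && get_context.State() == GetContext::kFound) {
  //     /* value was saved into the context */
  //   }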

  // Return the range delete tombstone iterator of the file specified by
  // `file_meta`.
  Status GetRangeTombstoneIterator(
      const ReadOptions& options,
      const InternalKeyComparator& internal_comparator,
      const FileMetaData& file_meta,
      std::unique_ptr<FragmentedRangeTombstoneIterator>* out_iter);
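
  // Illustrative sketch (hypothetical caller): draining the range tombstones
  // of one file, e.g. to feed a RangeDelAggregator:
  //
  //   std::unique_ptr<FragmentedRangeTombstoneIterator> tombstone_iter;
  //   Status s = table_cache->GetRangeTombstoneIterator(
  //       read_options, icmp, file_meta, &tombstone_iter);
  //   if (s.ok() && tombstone_iter) {
  //     for (tombstone_iter->SeekToFirst(); tombstone_iter->Valid();
  //          tombstone_iter->Next()) { /* ... */ }
  //   }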

  // If a seek to internal key "k" in specified file finds an entry,
  // call get_context->SaveValue() repeatedly until
  // it returns false. As a side effect, it will insert the TableReader
  // into the cache and potentially evict another entry
  // @param mget_range Pointer to the structure describing a batch of keys to
  //    be looked up in this table file. The result is stored
  //    in the embedded GetContext
  // @param skip_filters Disables loading/accessing the filter block
  // @param level The level this table is at, -1 for "not set / don't know"
  Status MultiGet(const ReadOptions& options,
                  const InternalKeyComparator& internal_comparator,
                  const FileMetaData& file_meta,
                  const MultiGetContext::Range* mget_range,
                  const SliceTransform* prefix_extractor = nullptr,
                  HistogramImpl* file_read_hist = nullptr,
                  bool skip_filters = false, int level = -1);
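
  // Illustrative sketch (hypothetical caller; `mget_range` is an assumed
  // in-scope MultiGetContext::Range): batched lookup against one file, with
  // per-key results left in the GetContext embedded in each range entry:
  //
  //   Status s = table_cache->MultiGet(read_options, icmp, file_meta,
  //                                    &mget_range);
  //   // On return, iterate `mget_range` and read each key's GetContext.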

  // Evict any entry for the specified file number
  static void Evict(Cache* cache, uint64_t file_number);

  // Clean table handle and erase it from the table cache
  // Used in DB close, or the file is not live anymore.
  void EraseHandle(const FileDescriptor& fd, Cache::Handle* handle);

  // Find table reader
  // @param skip_filters Disables loading/accessing the filter block
  // @param level == -1 means not specified
  Status FindTable(const ReadOptions& ro, const FileOptions& toptions,
                   const InternalKeyComparator& internal_comparator,
                   const FileDescriptor& file_fd, Cache::Handle**,
                   const SliceTransform* prefix_extractor = nullptr,
                   const bool no_io = false, bool record_read_stats = true,
                   HistogramImpl* file_read_hist = nullptr,
                   bool skip_filters = false, int level = -1,
                   bool prefetch_index_and_filter_in_cache = true,
                   size_t max_file_size_for_l0_meta_pin = 0,
                   Temperature file_temperature = Temperature::kUnknown);

  // Get TableReader from a cache handle.
  TableReader* GetTableReaderFromHandle(Cache::Handle* handle);
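
  // Illustrative sketch of the typical handle life cycle (hypothetical
  // caller): FindTable() pins the entry, GetTableReaderFromHandle() borrows
  // the reader, and ReleaseHandle() (declared below) unpins it:
  //
  //   Cache::Handle* handle = nullptr;
  //   Status s = table_cache->FindTable(read_options, file_options, icmp, fd,
  //                                     &handle);
  //   if (s.ok()) {
  //     TableReader* reader = table_cache->GetTableReaderFromHandle(handle);
  //     /* ... use reader ... */
  //     table_cache->ReleaseHandle(handle);
  //   }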

  // Get the table properties of a given table.
  // @no_io: if true, do not load the table into the cache if it is not
  //         already present.
  // @returns: `properties` will be reset on success. Please note that we will
  //            return Status::Incomplete() if table is not present in cache
  //            and we set `no_io` to be true.
  Status GetTableProperties(const FileOptions& toptions,
                            const InternalKeyComparator& internal_comparator,
                            const FileDescriptor& file_meta,
                            std::shared_ptr<const TableProperties>* properties,
                            const SliceTransform* prefix_extractor = nullptr,
                            bool no_io = false);
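
  // Illustrative sketch (hypothetical caller): reading properties without
  // doing IO, and tolerating a table-cache miss:
  //
  //   std::shared_ptr<const TableProperties> props;
  //   Status s = table_cache->GetTableProperties(
  //       file_options, icmp, fd, &props, /*prefix_extractor=*/nullptr,
  //       /*no_io=*/true);
  //   if (s.IsIncomplete()) { /* not in cache; retry with no_io=false */ }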

  // Return total memory usage of the table reader of the file.
  // 0 if table reader of the file is not loaded.
  size_t GetMemoryUsageByTableReader(
      const FileOptions& toptions,
      const InternalKeyComparator& internal_comparator,
      const FileDescriptor& fd,
      const SliceTransform* prefix_extractor = nullptr);

  // Returns approximated offset of a key in a file represented by fd.
  uint64_t ApproximateOffsetOf(
      const Slice& key, const FileDescriptor& fd, TableReaderCaller caller,
      const InternalKeyComparator& internal_comparator,
      const SliceTransform* prefix_extractor = nullptr);

  // Returns approximated data size between start and end keys in a file
  // represented by fd (the start key must not be greater than the end key).
  uint64_t ApproximateSize(const Slice& start, const Slice& end,
                           const FileDescriptor& fd, TableReaderCaller caller,
                           const InternalKeyComparator& internal_comparator,
                           const SliceTransform* prefix_extractor = nullptr);
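
  // Illustrative sketch (hypothetical caller; `start_ikey`/`end_ikey` are
  // assumed in-scope internal keys): estimating how many bytes of `fd` fall
  // between two keys:
  //
  //   uint64_t bytes = table_cache->ApproximateSize(
  //       start_ikey, end_ikey, fd, TableReaderCaller::kUserApproximateSize,
  //       icmp);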

  // Release the handle from a cache
  void ReleaseHandle(Cache::Handle* handle);

  Cache* get_cache() const { return cache_; }

  // Capacity of the backing Cache that indicates infinite TableCache capacity.
  // For example when max_open_files is -1 we set the backing Cache to this.
  static const int kInfiniteCapacity = 0x400000;
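  // (0x400000 is 4 << 20 = 4,194,304, effectively unbounded in practice.)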

  // The tables opened with this TableCache will be immortal, i.e., their
  // lifetime is as long as that of the DB.
  void SetTablesAreImmortal() {
    if (cache_->GetCapacity() >= kInfiniteCapacity) {
      immortal_tables_ = true;
    }
  }

 private:
  // Build a table reader
  Status GetTableReader(const ReadOptions& ro, const FileOptions& file_options,
                        const InternalKeyComparator& internal_comparator,
                        const FileDescriptor& fd, bool sequential_mode,
                        bool record_read_stats, HistogramImpl* file_read_hist,
                        std::unique_ptr<TableReader>* table_reader,
                        const SliceTransform* prefix_extractor = nullptr,
                        bool skip_filters = false, int level = -1,
                        bool prefetch_index_and_filter_in_cache = true,
                        size_t max_file_size_for_l0_meta_pin = 0,
                        Temperature file_temperature = Temperature::kUnknown);

  // Create a key prefix for looking up the row cache. The prefix is of the
  // format row_cache_id + fd_number + seq_no. Later, the user key can be
  // appended to form the full key
  void CreateRowCacheKeyPrefix(const ReadOptions& options,
                               const FileDescriptor& fd,
                               const Slice& internal_key,
                               GetContext* get_context, IterKey& row_cache_key);
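
  // For example (hypothetical values), looking up user key "foo" in file
  // number 42 at snapshot sequence number 7 builds a prefix of the form
  //
  //   row_cache_id_ ++ encode(42) ++ encode(7)
  //
  // and GetFromRowCache() below later appends "foo" to complete the key.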

  // Helper function to lookup the row cache for a key. It appends the
  // user key to row_cache_key at offset prefix_size
  bool GetFromRowCache(const Slice& user_key, IterKey& row_cache_key,
                       size_t prefix_size, GetContext* get_context);

  const ImmutableOptions& ioptions_;
  const FileOptions& file_options_;
  Cache* const cache_;
  std::string row_cache_id_;
  bool immortal_tables_;
  BlockCacheTracer* const block_cache_tracer_;
  Striped<port::Mutex, Slice> loader_mutex_;
  std::shared_ptr<IOTracer> io_tracer_;
  std::string db_session_id_;
};

}  // namespace ROCKSDB_NAMESPACE