mirror of
https://github.com/facebook/rocksdb.git
synced 2024-11-29 18:33:58 +00:00
8f763bdeab
Summary:
**Context:** We prefetch the tail part of an SST file (i.e., the blocks after the data blocks up to the end of the file) during each SST file open, in the hope of prefetching everything needed for later reads at once (e.g., footer, meta index, filter/index blocks). The existing approach estimates the tail size to prefetch through the `TailPrefetchStats` heuristics introduced in https://github.com/facebook/rocksdb/pull/4156. This has caused small reads in unlucky cases (e.g., a small read into the tail buffer during table open in thread 1 under the same BlockBasedTableFactory object can make thread 2's tail prefetching use a smaller size than it should) and is hard to debug. Therefore we decided to record the exact tail size and use it directly to prefetch the tail of the SST, instead of relying on heuristics.

**Summary:**
- Obtain the tail size in `BlockBasedTableBuilder::Finish()` and record it in the manifest.
- For backward compatibility, fall back to `TailPrefetchStats`, and last to a simple heuristic that the tail size is a linear portion of the file size; see the PR conversation for more.
- Make `tail_start_offset` part of the table properties and deduce the tail size from it for recording in the manifest for external files (e.g., file ingestion, import CF) and db repair (with no access to the manifest).

Pull Request resolved: https://github.com/facebook/rocksdb/pull/11406

Test Plan:
1. New UT
2. db bench

Note: db bench on /tmp/, where direct read is supported, is too slow to finish, and the default pinning setting in db bench is not helpful for profiling the number of SST reads per Get. Therefore I hacked the following to obtain the comparison below.

```
diff --git a/table/block_based/block_based_table_reader.cc b/table/block_based/block_based_table_reader.cc
index bd5669f0f..791484c1f 100644
--- a/table/block_based/block_based_table_reader.cc
+++ b/table/block_based/block_based_table_reader.cc
@@ -838,7 +838,7 @@ Status BlockBasedTable::PrefetchTail(
       &tail_prefetch_size);

   // Try file system prefetch
-  if (!file->use_direct_io() && !force_direct_prefetch) {
+  if (false && !file->use_direct_io() && !force_direct_prefetch) {
     if (!file->Prefetch(prefetch_off, prefetch_len, ro.rate_limiter_priority)
              .IsNotSupported()) {
       prefetch_buffer->reset(new FilePrefetchBuffer(
diff --git a/tools/db_bench_tool.cc b/tools/db_bench_tool.cc
index ea40f5fa0..39a0ac385 100644
--- a/tools/db_bench_tool.cc
+++ b/tools/db_bench_tool.cc
@@ -4191,6 +4191,8 @@ class Benchmark {
           std::shared_ptr<TableFactory>(NewCuckooTableFactory(table_options));
     } else {
       BlockBasedTableOptions block_based_options;
+      block_based_options.metadata_cache_options.partition_pinning =
+          PinningTier::kAll;
       block_based_options.checksum =
           static_cast<ChecksumType>(FLAGS_checksum_type);
       if (FLAGS_use_hash_search) {
```

Create DB
```
./db_bench --bloom_bits=3 --use_existing_db=1 --seed=1682546046158958 --partition_index_and_filters=1 --statistics=1 -db=/dev/shm/testdb/ -benchmarks=readrandom -key_size=3200 -value_size=512 -num=1000000 -write_buffer_size=6550000 -disable_auto_compactions=false -target_file_size_base=6550000 -compression_type=none
```

ReadRandom
```
./db_bench --bloom_bits=3 --use_existing_db=1 --seed=1682546046158958 --partition_index_and_filters=1 --statistics=1 -db=/dev/shm/testdb/ -benchmarks=readrandom -key_size=3200 -value_size=512 -num=1000000 -write_buffer_size=6550000 -disable_auto_compactions=false -target_file_size_base=6550000 -compression_type=none
```

(a) Existing (use `TailPrefetchStats` for tail size + use a separate prefetch buffer in `PartitionedFilter/IndexReader::CacheDependencies()`)
```
rocksdb.table.open.prefetch.tail.hit COUNT : 3395
rocksdb.sst.read.micros P50 : 5.655570 P95 : 9.931396 P99 : 14.845454 P100 : 585.000000 COUNT : 999905 SUM : 6590614
```

(b) This PR (record tail size + use the same tail buffer in `PartitionedFilter/IndexReader::CacheDependencies()`)
```
rocksdb.table.open.prefetch.tail.hit COUNT : 14257
rocksdb.sst.read.micros P50 : 5.173347 P95 : 9.015017 P99 : 12.912610 P100 : 228.000000 COUNT : 998547 SUM : 5976540
```

As we can see, this PR increases the prefetch tail hit count and decreases the SST read count.

3. Test backward compatibility by stepping through reading with post-PR code on a db generated pre-PR.

Reviewed By: pdillinger

Differential Revision: D45413346

Pulled By: hx235

fbshipit-source-id: 7d5e36a60a72477218f79905168d688452a4c064
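For reference, a minimal sketch of the fallback chain described in the summary (the function name, `GetSuggestedPrefetchSize`, and the final fraction are illustrative assumptions, not the exact code):

```
// Sketch of the backward-compatibility fallback: prefer the tail size
// recorded in the manifest, then the legacy TailPrefetchStats heuristic,
// then a linear portion of the file size.
uint64_t ChooseTailPrefetchSize(uint64_t recorded_tail_size,
                                TailPrefetchStats* stats, uint64_t file_size) {
  if (recorded_tail_size > 0) {
    return recorded_tail_size;  // exact size recorded by this PR
  }
  if (stats != nullptr) {
    uint64_t suggested = stats->GetSuggestedPrefetchSize();
    if (suggested > 0) {
      return suggested;  // heuristic introduced in PR #4156
    }
  }
  return file_size / 8;  // last resort: assume the tail is a fixed fraction
}
```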
210 lines
8.1 KiB
C++
// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).
//
// Copyright (c) 2011 The LevelDB Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file. See the AUTHORS file for names of contributors.

#pragma once

#include <stdint.h>

#include <array>
#include <limits>
#include <string>
#include <utility>
#include <vector>

#include "db/version_edit.h"
#include "rocksdb/flush_block_policy.h"
#include "rocksdb/listener.h"
#include "rocksdb/options.h"
#include "rocksdb/status.h"
#include "rocksdb/table.h"
#include "table/meta_blocks.h"
#include "table/table_builder.h"
#include "util/compression.h"

namespace ROCKSDB_NAMESPACE {

class BlockBuilder;
class BlockHandle;
class WritableFile;
struct BlockBasedTableOptions;

extern const uint64_t kBlockBasedTableMagicNumber;
extern const uint64_t kLegacyBlockBasedTableMagicNumber;

class BlockBasedTableBuilder : public TableBuilder {
 public:
  // Create a builder that will store the contents of the table it is
  // building in *file. Does not close the file. It is up to the
  // caller to close the file after calling Finish().
  BlockBasedTableBuilder(const BlockBasedTableOptions& table_options,
                         const TableBuilderOptions& table_builder_options,
                         WritableFileWriter* file);

  // No copying allowed
  BlockBasedTableBuilder(const BlockBasedTableBuilder&) = delete;
  BlockBasedTableBuilder& operator=(const BlockBasedTableBuilder&) = delete;

  // REQUIRES: Either Finish() or Abandon() has been called.
  ~BlockBasedTableBuilder();

  // Add key,value to the table being constructed.
  // REQUIRES: Unless key has type kTypeRangeDeletion, key is after any
  //           previously added non-kTypeRangeDeletion key according to
  //           comparator.
  // REQUIRES: Finish(), Abandon() have not been called
  void Add(const Slice& key, const Slice& value) override;

  // Return non-ok iff some error has been detected.
  Status status() const override;

  // Return non-ok iff some error happens during IO.
  IOStatus io_status() const override;

  // Finish building the table. Stops using the file passed to the
  // constructor after this function returns.
  // REQUIRES: Finish(), Abandon() have not been called
  Status Finish() override;

  // Indicate that the contents of this builder should be abandoned. Stops
  // using the file passed to the constructor after this function returns.
  // If the caller is not going to call Finish(), it must call Abandon()
  // before destroying this builder.
  // REQUIRES: Finish(), Abandon() have not been called
  void Abandon() override;
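
  // Illustrative lifecycle sketch (an assumption-laden example, not part of
  // this API): add entries in comparator order, then call exactly one of
  // Finish() or Abandon() before destruction; the caller still owns and must
  // close the file afterwards. In real use the DB layer supplies internal
  // keys rather than plain strings.
  //
  //   Status WriteSortedEntries(
  //       BlockBasedTableBuilder* builder,
  //       const std::vector<std::pair<std::string, std::string>>& entries) {
  //     for (const auto& [key, value] : entries) {
  //       builder->Add(key, value);
  //       if (!builder->status().ok()) {
  //         Status s = builder->status();
  //         builder->Abandon();  // destructor requires Finish() or Abandon()
  //         return s;
  //       }
  //     }
  //     return builder->Finish();  // file is flushed but not closed
  //   }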

  // Number of calls to Add() so far.
  uint64_t NumEntries() const override;

  bool IsEmpty() const override;

  // Size of the file generated so far. If invoked after a successful
  // Finish() call, returns the size of the final generated file.
  uint64_t FileSize() const override;

  // Estimated size of the file generated so far. This is used when
  // FileSize() cannot estimate final SST size, e.g. parallel compression
  // is enabled.
  uint64_t EstimatedFileSize() const override;
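
  // For example (hypothetical helper, not part of this API), a caller sizing
  // the in-progress output would pick between the two accessors above:
  //
  //   uint64_t CurrentOutputSize(const BlockBasedTableBuilder& b,
  //                              bool parallel_compression_enabled) {
  //     // With parallel compression, some blocks are still in flight, so
  //     // FileSize() undercounts; EstimatedFileSize() accounts for them.
  //     return parallel_compression_enabled ? b.EstimatedFileSize()
  //                                         : b.FileSize();
  //   }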

  // Get the size of the "tail" part of a SST file. "Tail" refers to
  // all blocks after data blocks till the end of the SST file.
  uint64_t GetTailSize() const override;
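
  // Per the PR summary, the tail size can also be deduced from the
  // `tail_start_offset` table property when the manifest is unavailable
  // (file ingestion, import CF, db repair). A sketch, assuming the post-PR
  // property exists (helper name is hypothetical):
  //
  //   uint64_t DeduceTailSize(const TableProperties& props,
  //                           uint64_t file_size) {
  //     // Everything from the first post-data block to EOF is "tail".
  //     return file_size - props.tail_start_offset;
  //   }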

  bool NeedCompact() const override;

  // Get table properties
  TableProperties GetTableProperties() const override;

  // Get file checksum
  std::string GetFileChecksum() const override;

  // Get file checksum function name
  const char* GetFileChecksumFuncName() const override;

  void SetSeqnoTimeTableProperties(
      const std::string& encoded_seqno_to_time_mapping,
      uint64_t oldest_ancestor_time) override;

 private:
  bool ok() const { return status().ok(); }

  // Transition state from buffered to unbuffered. See `Rep::State` API comment
  // for details of the states.
  // REQUIRES: `rep_->state == kBuffered`
  void EnterUnbuffered();

  // Call block's Finish() method and then
  // - in buffered mode, buffer the uncompressed block contents.
  // - in unbuffered mode, write the compressed block contents to file.
  void WriteBlock(BlockBuilder* block, BlockHandle* handle,
                  BlockType block_type);
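
  // Schematic sketch of the two modes described above (the state enum,
  // buffer, and CompressAndWrite() are stand-ins, not the real Rep
  // internals):
  //
  //   enum class BuilderState { kBuffered, kUnbuffered };
  //
  //   void WriteBlockSketch(BlockBuilder* block, BuilderState state,
  //                         std::vector<std::string>* buffered_blocks) {
  //     Slice raw = block->Finish();
  //     if (state == BuilderState::kBuffered) {
  //       // Keep raw bytes, e.g. to train a compression dictionary before
  //       // anything is written; flushed later by EnterUnbuffered().
  //       buffered_blocks->emplace_back(raw.data(), raw.size());
  //     } else {
  //       CompressAndWrite(raw);  // hypothetical: compress + append to SST
  //     }
  //     block->Reset();  // ready for the next block
  //   }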

  // Compress and write block content to the file.
  void WriteBlock(const Slice& block_contents, BlockHandle* handle,
                  BlockType block_type);
  // Directly write data to the file.
  void WriteMaybeCompressedBlock(
      const Slice& block_contents, CompressionType, BlockHandle* handle,
      BlockType block_type, const Slice* uncompressed_block_data = nullptr);

  void SetupCacheKeyPrefix(const TableBuilderOptions& tbo);

  template <typename TBlocklike>
  Status InsertBlockInCache(const Slice& block_contents,
                            const BlockHandle* handle, BlockType block_type);

  Status InsertBlockInCacheHelper(const Slice& block_contents,
                                  const BlockHandle* handle,
                                  BlockType block_type);

  Status InsertBlockInCompressedCache(const Slice& block_contents,
                                      const CompressionType type,
                                      const BlockHandle* handle);

  void WriteFilterBlock(MetaIndexBuilder* meta_index_builder);
  void WriteIndexBlock(MetaIndexBuilder* meta_index_builder,
                       BlockHandle* index_block_handle);
  void WritePropertiesBlock(MetaIndexBuilder* meta_index_builder);
  void WriteCompressionDictBlock(MetaIndexBuilder* meta_index_builder);
  void WriteRangeDelBlock(MetaIndexBuilder* meta_index_builder);
  void WriteFooter(BlockHandle& metaindex_block_handle,
                   BlockHandle& index_block_handle);

  struct Rep;
  class BlockBasedTablePropertiesCollectorFactory;
  class BlockBasedTablePropertiesCollector;
  Rep* rep_;

  struct ParallelCompressionRep;

  // Advanced operation: flush any buffered key/value pairs to file.
  // Can be used to ensure that two adjacent entries never live in
  // the same data block. Most clients should not need to use this method.
  // REQUIRES: Finish(), Abandon() have not been called
  void Flush();

  // Some compression libraries fail when the uncompressed size is bigger than
  // int. If uncompressed size is bigger than kCompressionSizeLimit, don't
  // compress it
  const uint64_t kCompressionSizeLimit = std::numeric_limits<int>::max();
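
  // A minimal sketch of that guard (hypothetical helper, not the real code
  // path):
  //
  //   CompressionType ChooseCompression(const Slice& uncompressed,
  //                                     CompressionType configured) {
  //     if (uncompressed.size() >
  //         static_cast<uint64_t>(std::numeric_limits<int>::max())) {
  //       return kNoCompression;  // avoid int overflow in compression libs
  //     }
  //     return configured;
  //   }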

  // Get blocks from mem-table walking thread, compress them and
  // pass them to the write thread. Used in parallel compression mode only
  void BGWorkCompression(const CompressionContext& compression_ctx,
                         UncompressionContext* verify_ctx);

  // Given uncompressed block content, try to compress it and return result and
  // compression type
  void CompressAndVerifyBlock(const Slice& uncompressed_block_data,
                              bool is_data_block,
                              const CompressionContext& compression_ctx,
                              UncompressionContext* verify_ctx,
                              std::string* compressed_output,
                              Slice* result_block_contents,
                              CompressionType* result_compression_type,
                              Status* out_status);

  // Get compressed blocks from BGWorkCompression and write them into SST
  void BGWorkWriteMaybeCompressedBlock();

  // Initialize parallel compression context and
  // start BGWorkCompression and BGWorkWriteMaybeCompressedBlock threads
  void StartParallelCompression();

  // Stop BGWorkCompression and BGWorkWriteMaybeCompressedBlock threads
  void StopParallelCompression();
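
  // Schematic stand-in for the pipeline above (std::async replaces the real
  // ParallelCompressionRep plumbing; all names are illustrative): workers
  // compress independently while a single writer drains results in FIFO
  // order, so blocks land in the file in the order they were added.
  //
  //   std::string CompressOneBlock(std::string raw) {
  //     return raw;  // compression elided in this sketch
  //   }
  //
  //   void ParallelCompressSketch(std::vector<std::string> raw_blocks) {
  //     std::queue<std::future<std::string>> in_flight;
  //     for (auto& raw : raw_blocks) {
  //       in_flight.push(std::async(std::launch::async, CompressOneBlock,
  //                                 std::move(raw)));
  //     }
  //     while (!in_flight.empty()) {
  //       std::string compressed = in_flight.front().get();
  //       in_flight.pop();
  //       // WriteMaybeCompressedBlock(compressed, ...) would go here.
  //     }
  //   }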
};

Slice CompressBlock(const Slice& uncompressed_data, const CompressionInfo& info,
                    CompressionType* type, uint32_t format_version,
                    bool do_sample, std::string* compressed_output,
                    std::string* sampled_output_fast,
                    std::string* sampled_output_slow);

}  // namespace ROCKSDB_NAMESPACE