cb08a682d4
Summary: The SeqnoToTimeMapping class (RocksDB internal) used by the preserve_internal_time_seconds / preclude_last_level_data_seconds options was essentially in a prototype state, with some significant flaws that would risk biting us some day. This is a big, complicated change because both the implementation and the behavioral requirements of the class needed to be upgraded together. In short, this makes SeqnoToTimeMapping more internally responsible for maintaining good invariants, so that callers don't easily encounter dangerous scenarios.

* Some API functions were confusingly named and structured, so I fully refactored the APIs to use clear naming (e.g. `DecodeFrom` and `CopyFromSeqnoRange`), object states, function preconditions, etc.
* Previously the object could informally be sorted / compacted or not, with limited checking or enforcement of these states. Now there's a well-defined "enforced" state that is consistently checked in debug mode for applicable operations. (I attempted to create a separate "builder" class for unenforced states, but IIRC found that more cumbersome for existing uses than it was worth.)
* Previously, operations would coalesce data in a way that was better for `GetProximalTimeBeforeSeqno` than for `GetProximalSeqnoBeforeTime`, which is odd because the latter is the only one currently used by DB code (what is the seqno cut-off for data definitely older than a given time?). This is now reversed to consistently favor `GetProximalSeqnoBeforeTime`, with that logic concentrated in one place: `SeqnoToTimeMapping::SeqnoTimePair::Merge()`. Unfortunately, a lot of unit test logic was specifically testing the old, suboptimal behavior.
* Previously, the natural behavior of SeqnoToTimeMapping was to THROW AWAY data needed to give reasonable answers to the important `GetProximalSeqnoBeforeTime` queries, because SeqnoToTimeMapping only had a FIFO policy for staying within the entry capacity (except in aggregate+sort+serialize mode). If the DB wasn't extremely careful to avoid gathering too many time mappings, it could lose track of where the seqno cutoff was for cold data (`GetProximalSeqnoBeforeTime()` returning 0), preventing all further data migration to the cold tier until enough time passed for the mappings to catch up with the FIFO purging. (The problem is not so acute because SST files contain relevant snapshots of the mappings, but it would apply to long-lived memtables.)
* Now the SeqnoToTimeMapping class has fully-integrated smarts for keeping a sufficiently complete history, within capacity limits, to give good answers to `GetProximalSeqnoBeforeTime` queries.
* Fixes old `// FIXME: be smarter about how we erase to avoid data falling off the front prematurely.`
* Fix an apparent bug in how entries are selected for storing into SST files. Previously, only entries within the seqno range of the file were selected, but that could easily leave a gap at the beginning of the timeline for the file's data when answering GetProximalXXX queries with reasonable accuracy. This could probably lead to the same problem discussed above with naively throwing away entries in FIFO order in the old SeqnoToTimeMapping. The updated testing of GetProximalSeqnoBeforeTime in BasicSeqnoToTimeMapping relies on the fixed behavior.
* Fix a potential compaction CPU efficiency/scaling issue in which each compaction output file would iterate over and sort all seqno-to-time mappings from all compaction input files. Now we distill the input file entries to a constant size before processing each compaction output file.

Intended follow-up (me or others):
* Expand some direct testing of SeqnoToTimeMapping APIs. Here I've focused on updating existing tests to make sense.
* There are likely more gaps in the availability of needed SeqnoToTimeMapping data when the DB shuts down and is restarted, at least with the WAL.
* The data tracked in the DB could be kept more accurate and limited if it used the oldest seqno of unflushed data. This might require some more API refactoring.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/12253

Test Plan: unit tests updated

Reviewed By: jowlyzhang

Differential Revision: D52913733

Pulled By: pdillinger

fbshipit-source-id: 020737fcbbe6212f6701191a6ab86565054c9593
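To make the failure mode and the fix concrete, here is a minimal, self-contained sketch in the spirit of the class described above. It is not the RocksDB implementation: the class name, the halve-by-merging capacity policy, and the exact merge rule are illustrative assumptions, demonstrating only the conservative direction that favors `GetProximalSeqnoBeforeTime` (the real `SeqnoTimePair::Merge()` may differ in detail). An entry (seqno, time) records that `seqno` already existed at wall time `time`, so every seqno at or below it was written at or before that time.

#include <cstdint>
#include <iostream>
#include <vector>

// Illustrative stand-in for RocksDB's SeqnoToTimeMapping (not the real class).
class TinySeqnoToTimeMapping {
 public:
  explicit TinySeqnoToTimeMapping(size_t capacity) : capacity_(capacity) {}

  // Record that `seqno` existed at wall time `time`. Samples must arrive
  // with non-decreasing seqno and time.
  void Append(uint64_t seqno, uint64_t time) {
    entries_.push_back({seqno, time});
    if (entries_.size() > capacity_) {
      HalveByMerging();
    }
  }

  // Seqno cutoff for data written at or before `time`; 0 if unknown.
  // Answers only shrink (never overstate data age) as entries are merged.
  uint64_t GetProximalSeqnoBeforeTime(uint64_t time) const {
    uint64_t result = 0;
    for (const auto& e : entries_) {
      if (e.time > time) break;  // entries are sorted by time
      result = e.seqno;
    }
    return result;
  }

 private:
  struct SeqnoTimePair {
    uint64_t seqno;
    uint64_t time;
  };

  // Stay within capacity by merging adjacent pairs rather than dropping the
  // oldest entries (the old FIFO policy), so the early timeline survives at
  // reduced precision.
  void HalveByMerging() {
    std::vector<SeqnoTimePair> merged;
    for (size_t i = 0; i + 1 < entries_.size(); i += 2) {
      // Conservative merge: smaller seqno paired with larger time. Both
      // inputs support the claim "all seqnos <= s written at or before t".
      merged.push_back({entries_[i].seqno, entries_[i + 1].time});
    }
    if (entries_.size() % 2 != 0) {
      merged.push_back(entries_.back());
    }
    entries_ = std::move(merged);
  }

  size_t capacity_;
  std::vector<SeqnoTimePair> entries_;  // sorted by seqno and by time
};

int main() {
  TinySeqnoToTimeMapping m(/*capacity=*/4);
  for (uint64_t i = 1; i <= 8; ++i) {
    m.Append(/*seqno=*/i * 100, /*time=*/i * 60);  // one sample per minute
  }
  // Prints 100: a lower but still safe cutoff (the exact answer from the
  // raw samples would be 400).
  std::cout << m.GetProximalSeqnoBeforeTime(250) << "\n";
  return 0;
}

With a pure FIFO policy and the same capacity, the early samples would have been purged outright and this query would return 0, the "no further data migration to the cold tier" failure mode described above; merging instead keeps a nonzero, conservative cutoff.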
209 lines
8.1 KiB
C++
// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).
//
// Copyright (c) 2011 The LevelDB Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file. See the AUTHORS file for names of contributors.

#pragma once

#include <stdint.h>

#include <array>
#include <limits>
#include <string>
#include <utility>
#include <vector>

#include "db/version_edit.h"
#include "rocksdb/flush_block_policy.h"
#include "rocksdb/listener.h"
#include "rocksdb/options.h"
#include "rocksdb/status.h"
#include "rocksdb/table.h"
#include "table/meta_blocks.h"
#include "table/table_builder.h"
#include "util/compression.h"

namespace ROCKSDB_NAMESPACE {

class BlockBuilder;
class BlockHandle;
class WritableFile;
struct BlockBasedTableOptions;

extern const uint64_t kBlockBasedTableMagicNumber;
extern const uint64_t kLegacyBlockBasedTableMagicNumber;

class BlockBasedTableBuilder : public TableBuilder {
 public:
  // Create a builder that will store the contents of the table it is
  // building in *file. Does not close the file. It is up to the
  // caller to close the file after calling Finish().
  BlockBasedTableBuilder(const BlockBasedTableOptions& table_options,
                         const TableBuilderOptions& table_builder_options,
                         WritableFileWriter* file);

  // No copying allowed
  BlockBasedTableBuilder(const BlockBasedTableBuilder&) = delete;
  BlockBasedTableBuilder& operator=(const BlockBasedTableBuilder&) = delete;

  // REQUIRES: Either Finish() or Abandon() has been called.
  ~BlockBasedTableBuilder();

  // Add key,value to the table being constructed.
  // REQUIRES: Unless key has type kTypeRangeDeletion, key is after any
  //           previously added non-kTypeRangeDeletion key according to
  //           comparator.
  // REQUIRES: Finish(), Abandon() have not been called
  void Add(const Slice& key, const Slice& value) override;

  // Return non-ok iff some error has been detected.
  Status status() const override;

  // Return non-ok iff some error happens during IO.
  IOStatus io_status() const override;

  // Finish building the table. Stops using the file passed to the
  // constructor after this function returns.
  // REQUIRES: Finish(), Abandon() have not been called
  Status Finish() override;

  // Indicate that the contents of this builder should be abandoned. Stops
  // using the file passed to the constructor after this function returns.
  // If the caller is not going to call Finish(), it must call Abandon()
  // before destroying this builder.
  // REQUIRES: Finish(), Abandon() have not been called
  void Abandon() override;

  // Number of calls to Add() so far.
  uint64_t NumEntries() const override;

  bool IsEmpty() const override;

  // Size of the file generated so far. If invoked after a successful
  // Finish() call, returns the size of the final generated file.
  uint64_t FileSize() const override;

  // Estimated size of the file generated so far. This is used when
  // FileSize() cannot estimate final SST size, e.g. parallel compression
  // is enabled.
  uint64_t EstimatedFileSize() const override;

  // Get the size of the "tail" part of a SST file. "Tail" refers to
  // all blocks after data blocks till the end of the SST file.
  uint64_t GetTailSize() const override;

  bool NeedCompact() const override;

  // Get table properties
  TableProperties GetTableProperties() const override;

  // Get file checksum
  std::string GetFileChecksum() const override;

  // Get file checksum function name
  const char* GetFileChecksumFuncName() const override;

  void SetSeqnoTimeTableProperties(const SeqnoToTimeMapping& relevant_mapping,
                                   uint64_t oldest_ancestor_time) override;

 private:
  bool ok() const { return status().ok(); }

  // Transition state from buffered to unbuffered. See `Rep::State` API comment
  // for details of the states.
  // REQUIRES: `rep_->state == kBuffered`
  void EnterUnbuffered();

  // Call block's Finish() method and then
  // - in buffered mode, buffer the uncompressed block contents.
  // - in unbuffered mode, write the compressed block contents to file.
  void WriteBlock(BlockBuilder* block, BlockHandle* handle,
                  BlockType blocktype);

  // Compress and write block content to the file.
  void WriteBlock(const Slice& block_contents, BlockHandle* handle,
                  BlockType block_type);
  // Directly write data to the file.
  void WriteMaybeCompressedBlock(
      const Slice& block_contents, CompressionType, BlockHandle* handle,
      BlockType block_type, const Slice* uncompressed_block_data = nullptr);

  void SetupCacheKeyPrefix(const TableBuilderOptions& tbo);

  template <typename TBlocklike>
  Status InsertBlockInCache(const Slice& block_contents,
                            const BlockHandle* handle, BlockType block_type);

  Status InsertBlockInCacheHelper(const Slice& block_contents,
                                  const BlockHandle* handle,
                                  BlockType block_type);

  Status InsertBlockInCompressedCache(const Slice& block_contents,
                                      const CompressionType type,
                                      const BlockHandle* handle);

  void WriteFilterBlock(MetaIndexBuilder* meta_index_builder);
  void WriteIndexBlock(MetaIndexBuilder* meta_index_builder,
                       BlockHandle* index_block_handle);
  void WritePropertiesBlock(MetaIndexBuilder* meta_index_builder);
  void WriteCompressionDictBlock(MetaIndexBuilder* meta_index_builder);
  void WriteRangeDelBlock(MetaIndexBuilder* meta_index_builder);
  void WriteFooter(BlockHandle& metaindex_block_handle,
                   BlockHandle& index_block_handle);

  struct Rep;
  class BlockBasedTablePropertiesCollectorFactory;
  class BlockBasedTablePropertiesCollector;
  Rep* rep_;

  struct ParallelCompressionRep;

  // Advanced operation: flush any buffered key/value pairs to file.
  // Can be used to ensure that two adjacent entries never live in
  // the same data block. Most clients should not need to use this method.
  // REQUIRES: Finish(), Abandon() have not been called
  void Flush();

  // Some compression libraries fail when the uncompressed size is bigger than
  // int. If uncompressed size is bigger than kCompressionSizeLimit, don't
  // compress it
  const uint64_t kCompressionSizeLimit = std::numeric_limits<int>::max();

  // Get blocks from mem-table walking thread, compress them and
  // pass them to the write thread. Used in parallel compression mode only
  void BGWorkCompression(const CompressionContext& compression_ctx,
                         UncompressionContext* verify_ctx);

  // Given uncompressed block content, try to compress it and return result and
  // compression type
  void CompressAndVerifyBlock(const Slice& uncompressed_block_data,
                              bool is_data_block,
                              const CompressionContext& compression_ctx,
                              UncompressionContext* verify_ctx,
                              std::string* compressed_output,
                              Slice* result_block_contents,
                              CompressionType* result_compression_type,
                              Status* out_status);

  // Get compressed blocks from BGWorkCompression and write them into SST
  void BGWorkWriteMaybeCompressedBlock();

  // Initialize parallel compression context and
  // start BGWorkCompression and BGWorkWriteMaybeCompressedBlock threads
  void StartParallelCompression();

  // Stop BGWorkCompression and BGWorkWriteMaybeCompressedBlock threads
  void StopParallelCompression();
};

Slice CompressBlock(const Slice& uncompressed_data, const CompressionInfo& info,
                    CompressionType* type, uint32_t format_version,
                    bool do_sample, std::string* compressed_output,
                    std::string* sampled_output_fast,
                    std::string* sampled_output_slow);

}  // namespace ROCKSDB_NAMESPACE
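For orientation, here is a minimal sketch of the call sequence that the contracts in this header describe: keys are added in comparator order, and exactly one of Finish() or Abandon() is called before the builder is destroyed. Assembling the constructor arguments is involved in practice, so `OpenExampleFileWriter()` and `MakeExampleTableBuilderOptions()` below are hypothetical stand-ins (not RocksDB APIs), and the keys are schematic (real callers pass encoded internal keys).

#include <memory>
#include <string>

#include "table/block_based/block_based_table_builder.h"

namespace ROCKSDB_NAMESPACE {

// Hypothetical helpers standing in for real setup code (ImmutableOptions,
// comparators, file-system plumbing, etc.); not part of RocksDB.
std::unique_ptr<WritableFileWriter> OpenExampleFileWriter(
    const std::string& path);
TableBuilderOptions MakeExampleTableBuilderOptions();

Status BuildExampleSst() {
  auto file = OpenExampleFileWriter("example.sst");
  BlockBasedTableOptions table_options;  // block size, filter policy, ...

  BlockBasedTableBuilder builder(table_options,
                                 MakeExampleTableBuilderOptions(), file.get());

  // REQUIRES: non-range-deletion keys arrive in comparator order. Shown
  // schematically; real callers pass encoded internal keys.
  builder.Add(Slice("k1"), Slice("v1"));
  builder.Add(Slice("k2"), Slice("v2"));

  // Exactly one of Finish() or Abandon() before the builder is destroyed.
  Status s = builder.Finish();
  if (s.ok()) {
    // After a successful Finish(), FileSize() is the final SST size.
    uint64_t final_size = builder.FileSize();
    (void)final_size;
  }
  return s;  // per the constructor comment, the caller still closes *file
}

}  // namespace ROCKSDB_NAMESPACE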