rocksdb/db/internal_stats.h
Peter Dillinger 5724348689 Revamp, optimize new experimental clock cache (#10626)
Summary:
* Consolidates most metadata into a single word per slot so that more
can be accomplished with a single atomic update. In the common case,
Lookup was previously about 4 atomic updates, now just 1 atomic update.
Common case Release was previously 1 atomic read + 1 atomic update,
now just 1 atomic update.
* Eliminates spins / waits / yields, which likely threaten some "lock free"
benefits. Compare-exchange loops are only used in explicit Erase, and
strict_capacity_limit=true Insert. Eviction uses opportunistic compare-
exchange.
* Relaxes some aggressiveness and guarantees. For example,
  * Duplicate Inserts will sometimes go undetected and the shadow duplicate
    will age out with eviction.
  * In many cases, the older Inserted value for a given cache key will be kept
    (i.e. Insert does not support overwrite).
  * Entries explicitly erased (rather than evicted) might not be freed
    immediately in some rare cases.
  * With strict_capacity_limit=false, capacity limit is not tracked/enforced as
    precisely as LRUCache, but is self-correcting and should only deviate by a
    very small number of extra or fewer entries.
* Uses a smaller "computed default" number of cache shards in many cases,
because benefits to larger usage tracking / eviction pools outweigh the small
cost of more lock-free atomic contention. The improvement in CPU and I/O
is dramatic in some limit-memory cases.
* Even without the sharding change, the eviction algorithm is likely more
effective than LRU overall because it's more stateful, even though the
"hot path" state tracking for it is essentially free with ref counting. It
is like a generalized CLOCK with aging (see code comments, and the simplified
sketch after this list). I don't have
performance numbers showing a specific improvement, but in theory, for a
Poisson access pattern to each block, keeping some state allows better
estimation of time to next access (Poisson interval) than strict LRU. The
bounded randomness in CLOCK can also reduce "cliff" effect for repeated
range scans approaching and exceeding cache size.
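
To illustrate the "generalized CLOCK with aging" idea, here is a simplified, hypothetical sketch; the names (`Slot`, `countdown`, `EvictOne`) and the counter bound are assumptions for illustration, not the actual ClockCache code, and references, visibility, sharding, and atomicity are all omitted. Each entry carries a small bounded usage counter that accesses bump and the clock hand decays, so an entry is evicted only after enough hand passes without reuse, which estimates time-to-next-access better than a single second-chance bit.
```
#include <cstddef>
#include <cstdint>
#include <vector>

// Simplified CLOCK-with-aging sketch (illustrative structure, not the actual
// ClockCache slot; references, visibility, and atomics are omitted).
struct Slot {
  bool occupied = false;
  uint32_t countdown = 0;  // "age": bumped on access, decayed by the hand
};

constexpr uint32_t kMaxCountdown = 3;  // bound keeps a hot streak from pinning an entry

void RecordAccess(Slot& s) {
  if (s.countdown < kMaxCountdown) {
    ++s.countdown;
  }
}

// Advance the clock hand until a victim is found. Assumes at least one
// occupied slot. Entries with a higher countdown survive more hand passes.
size_t EvictOne(std::vector<Slot>& table, size_t& hand) {
  for (;;) {
    Slot& s = table[hand];
    size_t victim = hand;
    hand = (hand + 1) % table.size();
    if (s.occupied) {
      if (s.countdown == 0) {
        s.occupied = false;
        return victim;
      }
      --s.countdown;  // age it and keep scanning
    }
  }
}
```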

## Hot path algorithm comparison
Rough descriptions, focusing on the number and kind of atomic operations (a simplified C++ sketch of the new single-word scheme follows these comparisons):
* Old `Lookup()` (2-5 atomic updates per probe):
```
Loop:
  Increment internal ref count at slot
  If possible hit:
    Check flags atomic (and non-atomic fields)
    If cache hit:
      Three distinct updates to 'flags' atomic
      Increment refs for internal-to-external
      Return
  Decrement internal ref count
while atomic read 'displacements' > 0
```
* New `Lookup()` (1-2 atomic updates per probe):
```
Loop:
  Increment acquire counter in meta word (optimistic)
  If visible entry (already read meta word):
    If match (read non-atomic fields):
      Return
    Else:
      Decrement acquire counter in meta word
  Else if invisible entry (rare, already read meta word):
    Decrement acquire counter in meta word
while atomic read 'displacements' > 0
```
* Old `Release()` (1 atomic update, conditional on atomic read, rarely more):
```
Read atomic ref count
If last reference and invisible (rare):
  Use CAS etc. to remove
  Return
Else:
  Decrement ref count
```
* New `Release()` (1 unconditional atomic update, rarely more):
```
Increment release counter in meta word
If last reference and invisible (rare):
  Use CAS etc. to remove
  Return
```
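
To make the single-word scheme concrete, here is a minimal, hypothetical sketch; the `SlotMeta` name, bit layout, and constants are assumptions for illustration rather than the actual ClockCache code, and the rare "last reference and invisible" cleanup path is omitted. The point is that a common-case Lookup hit and a Release are each a single unconditional `fetch_add` on one atomic word packing the acquire/release counters and a visibility bit (outstanding references = acquires - releases).
```
#include <atomic>
#include <cstdint>

// Hypothetical packed per-slot metadata word (illustrative layout only):
//   bits  0..30 : acquire counter (incremented on Lookup / new reference)
//   bits 31..61 : release counter (incremented on Release)
//   bit      63 : visible flag
struct SlotMeta {
  std::atomic<uint64_t> word{0};

  static constexpr uint64_t kAcquire = uint64_t{1};
  static constexpr uint64_t kRelease = uint64_t{1} << 31;
  static constexpr uint64_t kVisible = uint64_t{1} << 63;

  // Common-case Lookup probe: one unconditional atomic update. If the slot
  // does not hold a visible entry (or the caller later finds a key mismatch),
  // the acquire is undone with another single update -- no CAS loop.
  bool AcquireIfVisible() {
    uint64_t old = word.fetch_add(kAcquire, std::memory_order_acquire);
    if (old & kVisible) {
      return true;  // caller now compares the (non-atomic) key fields
    }
    word.fetch_sub(kAcquire, std::memory_order_release);
    return false;
  }

  // Common-case Release: one unconditional atomic update.
  void Release() { word.fetch_add(kRelease, std::memory_order_release); }
};
```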

## Performance test setup
Build DB with
```
TEST_TMPDIR=/dev/shm ./db_bench -benchmarks=fillrandom -num=30000000 -disable_wal=1 -bloom_bits=16
```
Test with
```
TEST_TMPDIR=/dev/shm ./db_bench -benchmarks=readrandom -readonly -num=30000000 -bloom_bits=16 -cache_index_and_filter_blocks=1 -cache_size=${CACHE_MB}000000 -duration 60 -threads=$THREADS -statistics
```
Numbers are from a single-socket Skylake Xeon system with 48 hardware threads, built with DEBUG_LEVEL=0 PORTABLE=0. The story is very similar on a dual-socket system with 80 hardware threads. Cache sizes are (every 2nd) Fibonacci numbers of MB, to sample the territory between powers of two. Configurations:

* base: LRUCache before this change, but with db_bench change to default cache_numshardbits=-1 (instead of fixed at 6)
* folly: LRUCache before this change, with folly enabled (distributed mutex) but on an old compiler (sorry)
* gt_clock: experimental ClockCache before this change
* new_clock: experimental ClockCache with this change

## Performance test results
First, test "hot path" read performance, with the block cache large enough for the whole DB:
4181MB 1thread base -> kops/s: 47.761
4181MB 1thread folly -> kops/s: 45.877
4181MB 1thread gt_clock -> kops/s: 51.092
4181MB 1thread new_clock -> kops/s: 53.944

4181MB 16thread base -> kops/s: 284.567
4181MB 16thread folly -> kops/s: 249.015
4181MB 16thread gt_clock -> kops/s: 743.762
4181MB 16thread new_clock -> kops/s: 861.821

4181MB 24thread base -> kops/s: 303.415
4181MB 24thread folly -> kops/s: 266.548
4181MB 24thread gt_clock -> kops/s: 975.706
4181MB 24thread new_clock -> kops/s: 1205.64 (~= 24 * 53.944)

4181MB 32thread base -> kops/s: 311.251
4181MB 32thread folly -> kops/s: 274.952
4181MB 32thread gt_clock -> kops/s: 1045.98
4181MB 32thread new_clock -> kops/s: 1370.38

4181MB 48thread base -> kops/s: 310.504
4181MB 48thread folly -> kops/s: 268.322
4181MB 48thread gt_clock -> kops/s: 1195.65
4181MB 48thread new_clock -> kops/s: 1604.85 (~= 24 * 1.25 * 53.944)

4181MB 64thread base -> kops/s: 307.839
4181MB 64thread folly -> kops/s: 272.172
4181MB 64thread gt_clock -> kops/s: 1204.47
4181MB 64thread new_clock -> kops/s: 1615.37

4181MB 128thread base -> kops/s: 310.934
4181MB 128thread folly -> kops/s: 267.468
4181MB 128thread gt_clock -> kops/s: 1188.75
4181MB 128thread new_clock -> kops/s: 1595.46

Whether we have just one thread on a quiet system or an overload of threads, the new version wins every time in thousands of ops per second, sometimes dramatically so. The mutex-based implementations quickly become contention-limited. The new clock cache shows essentially perfect scaling up to the number of physical cores (24), and beyond that each hyperthreaded core adds about 1/4 the throughput of an additional physical core (see the 48-thread case). Block cache miss rates (omitted above) are negligible across the board. With partitioned instead of full filters, the maximum speed-up vs. base is more like 2.5x rather than 5x.

Now test a large block cache with a low miss ratio, but where some eviction is required:
1597MB 1thread base -> kops/s: 46.603 io_bytes/op: 1584.63 miss_ratio: 0.0201066 max_rss_mb: 1589.23
1597MB 1thread folly -> kops/s: 45.079 io_bytes/op: 1530.03 miss_ratio: 0.019872 max_rss_mb: 1550.43
1597MB 1thread gt_clock -> kops/s: 48.711 io_bytes/op: 1566.63 miss_ratio: 0.0198923 max_rss_mb: 1691.4
1597MB 1thread new_clock -> kops/s: 51.531 io_bytes/op: 1589.07 miss_ratio: 0.0201969 max_rss_mb: 1583.56

1597MB 32thread base -> kops/s: 301.174 io_bytes/op: 1439.52 miss_ratio: 0.0184218 max_rss_mb: 1656.59
1597MB 32thread folly -> kops/s: 273.09 io_bytes/op: 1375.12 miss_ratio: 0.0180002 max_rss_mb: 1586.8
1597MB 32thread gt_clock -> kops/s: 904.497 io_bytes/op: 1411.29 miss_ratio: 0.0179934 max_rss_mb: 1775.89
1597MB 32thread new_clock -> kops/s: 1182.59 io_bytes/op: 1440.77 miss_ratio: 0.0185449 max_rss_mb: 1636.45

1597MB 128thread base -> kops/s: 309.91 io_bytes/op: 1438.25 miss_ratio: 0.018399 max_rss_mb: 1689.98
1597MB 128thread folly -> kops/s: 267.605 io_bytes/op: 1394.16 miss_ratio: 0.0180286 max_rss_mb: 1631.91
1597MB 128thread gt_clock -> kops/s: 691.518 io_bytes/op: 9056.73 miss_ratio: 0.0186572 max_rss_mb: 1982.26
1597MB 128thread new_clock -> kops/s: 1406.12 io_bytes/op: 1440.82 miss_ratio: 0.0185463 max_rss_mb: 1685.63

610MB 1thread base -> kops/s: 45.511 io_bytes/op: 2279.61 miss_ratio: 0.0290528 max_rss_mb: 615.137
610MB 1thread folly -> kops/s: 43.386 io_bytes/op: 2217.29 miss_ratio: 0.0289282 max_rss_mb: 600.996
610MB 1thread gt_clock -> kops/s: 46.207 io_bytes/op: 2275.51 miss_ratio: 0.0290057 max_rss_mb: 637.934
610MB 1thread new_clock -> kops/s: 48.879 io_bytes/op: 2283.1 miss_ratio: 0.0291253 max_rss_mb: 613.5

610MB 32thread base -> kops/s: 306.59 io_bytes/op: 2250 miss_ratio: 0.0288721 max_rss_mb: 683.402
610MB 32thread folly -> kops/s: 269.176 io_bytes/op: 2187.86 miss_ratio: 0.0286938 max_rss_mb: 628.742
610MB 32thread gt_clock -> kops/s: 855.097 io_bytes/op: 2279.26 miss_ratio: 0.0288009 max_rss_mb: 733.062
610MB 32thread new_clock -> kops/s: 1121.47 io_bytes/op: 2244.29 miss_ratio: 0.0289046 max_rss_mb: 666.453

610MB 128thread base -> kops/s: 305.079 io_bytes/op: 2252.43 miss_ratio: 0.0288884 max_rss_mb: 723.457
610MB 128thread folly -> kops/s: 269.583 io_bytes/op: 2204.58 miss_ratio: 0.0287001 max_rss_mb: 676.426
610MB 128thread gt_clock -> kops/s: 53.298 io_bytes/op: 8128.98 miss_ratio: 0.0292452 max_rss_mb: 956.273
610MB 128thread new_clock -> kops/s: 1301.09 io_bytes/op: 2246.04 miss_ratio: 0.0289171 max_rss_mb: 788.812

The new version is still winning every time, sometimes dramatically so, and we can tell from the maximum resident memory numbers (which contain some noise, by the way) that the new cache is not cheating on memory usage. IMPORTANT: The previous generation experimental clock cache appears to hit a serious bottleneck in the higher thread count configurations, presumably due to some of its waiting functionality. (The same bottleneck is not seen with partitioned index+filters.)

Now we consider even smaller cache sizes, with higher miss ratios, eviction work, etc.

233MB 1thread base -> kops/s: 10.557 io_bytes/op: 227040 miss_ratio: 0.0403105 max_rss_mb: 247.371
233MB 1thread folly -> kops/s: 15.348 io_bytes/op: 112007 miss_ratio: 0.0372238 max_rss_mb: 245.293
233MB 1thread gt_clock -> kops/s: 6.365 io_bytes/op: 244854 miss_ratio: 0.0413873 max_rss_mb: 259.844
233MB 1thread new_clock -> kops/s: 47.501 io_bytes/op: 2591.93 miss_ratio: 0.0330989 max_rss_mb: 242.461

233MB 32thread base -> kops/s: 96.498 io_bytes/op: 363379 miss_ratio: 0.0459966 max_rss_mb: 479.227
233MB 32thread folly -> kops/s: 109.95 io_bytes/op: 314799 miss_ratio: 0.0450032 max_rss_mb: 400.738
233MB 32thread gt_clock -> kops/s: 2.353 io_bytes/op: 385397 miss_ratio: 0.048445 max_rss_mb: 500.688
233MB 32thread new_clock -> kops/s: 1088.95 io_bytes/op: 2567.02 miss_ratio: 0.0330593 max_rss_mb: 303.402

233MB 128thread base -> kops/s: 84.302 io_bytes/op: 378020 miss_ratio: 0.0466558 max_rss_mb: 1051.84
233MB 128thread folly -> kops/s: 89.921 io_bytes/op: 338242 miss_ratio: 0.0460309 max_rss_mb: 812.785
233MB 128thread gt_clock -> kops/s: 2.588 io_bytes/op: 462833 miss_ratio: 0.0509158 max_rss_mb: 1109.94
233MB 128thread new_clock -> kops/s: 1299.26 io_bytes/op: 2565.94 miss_ratio: 0.0330531 max_rss_mb: 361.016

89MB 1thread base -> kops/s: 0.574 io_bytes/op: 5.35977e+06 miss_ratio: 0.274427 max_rss_mb: 91.3086
89MB 1thread folly -> kops/s: 0.578 io_bytes/op: 5.16549e+06 miss_ratio: 0.27276 max_rss_mb: 96.8984
89MB 1thread gt_clock -> kops/s: 0.512 io_bytes/op: 4.13111e+06 miss_ratio: 0.242817 max_rss_mb: 119.441
89MB 1thread new_clock -> kops/s: 48.172 io_bytes/op: 2709.76 miss_ratio: 0.0346162 max_rss_mb: 100.754

89MB 32thread base -> kops/s: 5.779 io_bytes/op: 6.14192e+06 miss_ratio: 0.320399 max_rss_mb: 311.812
89MB 32thread folly -> kops/s: 5.601 io_bytes/op: 5.83838e+06 miss_ratio: 0.313123 max_rss_mb: 252.418
89MB 32thread gt_clock -> kops/s: 0.77 io_bytes/op: 3.99236e+06 miss_ratio: 0.236296 max_rss_mb: 396.422
89MB 32thread new_clock -> kops/s: 1064.97 io_bytes/op: 2687.23 miss_ratio: 0.0346134 max_rss_mb: 155.293

89MB 128thread base -> kops/s: 4.959 io_bytes/op: 6.20297e+06 miss_ratio: 0.323945 max_rss_mb: 823.43
89MB 128thread folly -> kops/s: 4.962 io_bytes/op: 5.9601e+06 miss_ratio: 0.319857 max_rss_mb: 626.824
89MB 128thread gt_clock -> kops/s: 1.009 io_bytes/op: 4.1083e+06 miss_ratio: 0.242512 max_rss_mb: 1095.32
89MB 128thread new_clock -> kops/s: 1224.39 io_bytes/op: 2688.2 miss_ratio: 0.0346207 max_rss_mb: 218.223

^ Now something interesting has happened: the new clock cache has gained a dramatic lead in the single-threaded case. This is because the cache is so small, and full filters are so big, that dividing the cache into 64 shards leads to significant (random) imbalances between cache shards and excessive churn in the overloaded ones. The new clock cache uses only two shards for this configuration, which helps ensure that entries belong to a pool big enough that their eviction order resembles the single-shard order. (This effect is not seen with partitioned index+filters.)

Even smaller cache size:
34MB 1thread base -> kops/s: 0.198 io_bytes/op: 1.65342e+07 miss_ratio: 0.939466 max_rss_mb: 48.6914
34MB 1thread folly -> kops/s: 0.201 io_bytes/op: 1.63416e+07 miss_ratio: 0.939081 max_rss_mb: 45.3281
34MB 1thread gt_clock -> kops/s: 0.448 io_bytes/op: 4.43957e+06 miss_ratio: 0.266749 max_rss_mb: 100.523
34MB 1thread new_clock -> kops/s: 1.055 io_bytes/op: 1.85439e+06 miss_ratio: 0.107512 max_rss_mb: 75.3125

34MB 32thread base -> kops/s: 3.346 io_bytes/op: 1.64852e+07 miss_ratio: 0.93596 max_rss_mb: 180.48
34MB 32thread folly -> kops/s: 3.431 io_bytes/op: 1.62857e+07 miss_ratio: 0.935693 max_rss_mb: 137.531
34MB 32thread gt_clock -> kops/s: 1.47 io_bytes/op: 4.89704e+06 miss_ratio: 0.295081 max_rss_mb: 392.465
34MB 32thread new_clock -> kops/s: 8.19 io_bytes/op: 3.70456e+06 miss_ratio: 0.20826 max_rss_mb: 519.793

34MB 128thread base -> kops/s: 2.293 io_bytes/op: 1.64351e+07 miss_ratio: 0.931866 max_rss_mb: 449.484
34MB 128thread folly -> kops/s: 2.34 io_bytes/op: 1.6219e+07 miss_ratio: 0.932023 max_rss_mb: 396.457
34MB 128thread gt_clock -> kops/s: 1.798 io_bytes/op: 5.4241e+06 miss_ratio: 0.324881 max_rss_mb: 1104.41
34MB 128thread new_clock -> kops/s: 10.519 io_bytes/op: 2.39354e+06 miss_ratio: 0.136147 max_rss_mb: 1050.52

As the miss ratio gets higher (say, above 10%), the CPU time spent in eviction starts to erode the advantage of using fewer shards (even though new_clock's ~13% miss rate above is still far lower than base's ~94%). LRU's O(1) eviction time can eventually pay off when there is enough block cache churn:

13MB 1thread base -> kops/s: 0.195 io_bytes/op: 1.65732e+07 miss_ratio: 0.946604 max_rss_mb: 45.6328
13MB 1thread folly -> kops/s: 0.197 io_bytes/op: 1.63793e+07 miss_ratio: 0.94661 max_rss_mb: 33.8633
13MB 1thread gt_clock -> kops/s: 0.519 io_bytes/op: 4.43316e+06 miss_ratio: 0.269379 max_rss_mb: 100.684
13MB 1thread new_clock -> kops/s: 0.176 io_bytes/op: 1.54148e+07 miss_ratio: 0.91545 max_rss_mb: 66.2383

13MB 32thread base -> kops/s: 3.266 io_bytes/op: 1.65544e+07 miss_ratio: 0.943386 max_rss_mb: 132.492
13MB 32thread folly -> kops/s: 3.396 io_bytes/op: 1.63142e+07 miss_ratio: 0.943243 max_rss_mb: 101.863
13MB 32thread gt_clock -> kops/s: 2.758 io_bytes/op: 5.13714e+06 miss_ratio: 0.310652 max_rss_mb: 396.121
13MB 32thread new_clock -> kops/s: 3.11 io_bytes/op: 1.23419e+07 miss_ratio: 0.708425 max_rss_mb: 321.758

13MB 128thread base -> kops/s: 2.31 io_bytes/op: 1.64823e+07 miss_ratio: 0.939543 max_rss_mb: 425.539
13MB 128thread folly -> kops/s: 2.339 io_bytes/op: 1.6242e+07 miss_ratio: 0.939966 max_rss_mb: 346.098
13MB 128thread gt_clock -> kops/s: 3.223 io_bytes/op: 5.76928e+06 miss_ratio: 0.345899 max_rss_mb: 1087.77
13MB 128thread new_clock -> kops/s: 2.984 io_bytes/op: 1.05341e+07 miss_ratio: 0.606198 max_rss_mb: 898.27

gt_clock is clearly blowing way past its memory budget for lower miss rates and best throughput. new_clock also seems to be exceeding budgets, which warrants more investigation but is not the use case we are targeting with the new cache. With partitioned index+filters, the miss ratio is much better, though still high enough that the eviction CPU time clearly offsets the reduced mutex contention:

13MB 1thread base -> kops/s: 16.326 io_bytes/op: 23743.9 miss_ratio: 0.205362 max_rss_mb: 65.2852
13MB 1thread folly -> kops/s: 15.574 io_bytes/op: 19415 miss_ratio: 0.184157 max_rss_mb: 56.3516
13MB 1thread gt_clock -> kops/s: 14.459 io_bytes/op: 22873 miss_ratio: 0.198355 max_rss_mb: 63.9688
13MB 1thread new_clock -> kops/s: 16.34 io_bytes/op: 24386.5 miss_ratio: 0.210512 max_rss_mb: 61.707

13MB 128thread base -> kops/s: 289.786 io_bytes/op: 23710.9 miss_ratio: 0.205056 max_rss_mb: 103.57
13MB 128thread folly -> kops/s: 185.282 io_bytes/op: 19433.1 miss_ratio: 0.184275 max_rss_mb: 116.219
13MB 128thread gt_clock -> kops/s: 354.451 io_bytes/op: 23150.6 miss_ratio: 0.200495 max_rss_mb: 102.871
13MB 128thread new_clock -> kops/s: 295.359 io_bytes/op: 24626.4 miss_ratio: 0.212452 max_rss_mb: 121.109

Pull Request resolved: https://github.com/facebook/rocksdb/pull/10626

Test Plan: updated unit tests, stress/crash test runs including with TSAN, ASAN, UBSAN

Reviewed By: anand1976

Differential Revision: D39368406

Pulled By: pdillinger

fbshipit-source-id: 5afc44da4c656f8f751b44552bbf27bd3ca6fef9
2022-09-16 00:24:11 -07:00

// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).
//
// Copyright (c) 2011 The LevelDB Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file. See the AUTHORS file for names of contributors.
//
#pragma once
#include <map>
#include <memory>
#include <string>
#include <vector>
#include "cache/cache_entry_roles.h"
#include "db/version_set.h"
#include "rocksdb/system_clock.h"
#include "util/hash_containers.h"
namespace ROCKSDB_NAMESPACE {
template <class Stats>
class CacheEntryStatsCollector;
class DBImpl;
class MemTableList;
// Config for retrieving a property's value.
struct DBPropertyInfo {
bool need_out_of_mutex;
// gcc had an internal error when initializing a union of pointer-to-member-
// functions. The workaround is to populate exactly one of the following
// function pointers with a non-nullptr value.
// @param value Value-result argument for storing the property's string value
// @param suffix Argument portion of the property. For example, suffix would
// be "5" for the property "rocksdb.num-files-at-level5". So far, only
// certain string properties take an argument.
bool (InternalStats::*handle_string)(std::string* value, Slice suffix);
// @param value Value-result argument for storing the property's uint64 value
// @param db Many of the int properties rely on DBImpl methods.
// @param version Version is needed in case the property is retrieved without
// holding db mutex, which is only supported for int properties.
bool (InternalStats::*handle_int)(uint64_t* value, DBImpl* db,
Version* version);
// @param props Map of general properties to populate
// @param suffix Argument portion of the property. (see handle_string)
bool (InternalStats::*handle_map)(std::map<std::string, std::string>* props,
Slice suffix);
// Handles the string type properties that rely on DBImpl methods
// @param value Value-result argument for storing the property's string value
bool (DBImpl::*handle_string_dbimpl)(std::string* value);
};
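// Illustrative usage sketch (not a specific call site): for the property
// "rocksdb.num-files-at-level5", GetPropertyInfo() below matches the
// "rocksdb.num-files-at-level" entry in InternalStats::ppt_name_to_info, and
// the caller then invokes the registered handle_string member
// (InternalStats::HandleNumFilesAtLevel) with suffix == "5".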
extern const DBPropertyInfo* GetPropertyInfo(const Slice& property);
#ifndef ROCKSDB_LITE
#undef SCORE
enum class LevelStatType {
INVALID = 0,
NUM_FILES,
COMPACTED_FILES,
SIZE_BYTES,
SCORE,
READ_GB,
RN_GB,
RNP1_GB,
WRITE_GB,
W_NEW_GB,
MOVED_GB,
WRITE_AMP,
READ_MBPS,
WRITE_MBPS,
COMP_SEC,
COMP_CPU_SEC,
COMP_COUNT,
AVG_SEC,
KEY_IN,
KEY_DROP,
R_BLOB_GB,
W_BLOB_GB,
TOTAL // total number of types
};
struct LevelStat {
// This is what will be L?.property_name in the flat map returned to the user
std::string property_name;
// This is what we will print as the column header in the CLI
std::string header_name;
};
struct DBStatInfo {
// This is what will be property_name in the flat map returned to the user
std::string property_name;
};
class InternalStats {
public:
static const std::map<LevelStatType, LevelStat> compaction_level_stats;
enum InternalCFStatsType {
L0_FILE_COUNT_LIMIT_SLOWDOWNS,
LOCKED_L0_FILE_COUNT_LIMIT_SLOWDOWNS,
MEMTABLE_LIMIT_STOPS,
MEMTABLE_LIMIT_SLOWDOWNS,
L0_FILE_COUNT_LIMIT_STOPS,
LOCKED_L0_FILE_COUNT_LIMIT_STOPS,
PENDING_COMPACTION_BYTES_LIMIT_SLOWDOWNS,
PENDING_COMPACTION_BYTES_LIMIT_STOPS,
WRITE_STALLS_ENUM_MAX,
BYTES_FLUSHED,
BYTES_INGESTED_ADD_FILE,
INGESTED_NUM_FILES_TOTAL,
INGESTED_LEVEL0_NUM_FILES_TOTAL,
INGESTED_NUM_KEYS_TOTAL,
INTERNAL_CF_STATS_ENUM_MAX,
};
enum InternalDBStatsType {
kIntStatsWalFileBytes,
kIntStatsWalFileSynced,
kIntStatsBytesWritten,
kIntStatsNumKeysWritten,
kIntStatsWriteDoneByOther,
kIntStatsWriteDoneBySelf,
kIntStatsWriteWithWal,
kIntStatsWriteStallMicros,
kIntStatsNumMax,
};
static const std::map<InternalDBStatsType, DBStatInfo> db_stats_type_to_info;
InternalStats(int num_levels, SystemClock* clock, ColumnFamilyData* cfd);
// Per level compaction stats
struct CompactionOutputsStats {
uint64_t num_output_records = 0;
uint64_t bytes_written = 0;
uint64_t bytes_written_blob = 0;
uint64_t num_output_files = 0;
uint64_t num_output_files_blob = 0;
void Add(const CompactionOutputsStats& stats) {
this->num_output_records += stats.num_output_records;
this->bytes_written += stats.bytes_written;
this->bytes_written_blob += stats.bytes_written_blob;
this->num_output_files += stats.num_output_files;
this->num_output_files_blob += stats.num_output_files_blob;
}
};
// Per level compaction stats. comp_stats_[level] stores the stats for
// compactions that produced data for the specified "level".
struct CompactionStats {
uint64_t micros;
uint64_t cpu_micros;
// The number of bytes read from all non-output levels (table files)
uint64_t bytes_read_non_output_levels;
// The number of bytes read from the compaction output level (table files)
uint64_t bytes_read_output_level;
// The number of bytes read from blob files
uint64_t bytes_read_blob;
// Total number of bytes written to table files during compaction
uint64_t bytes_written;
// Total number of bytes written to blob files during compaction
uint64_t bytes_written_blob;
// Total number of bytes moved to the output level (table files)
uint64_t bytes_moved;
// The number of compaction input files in all non-output levels (table
// files)
int num_input_files_in_non_output_levels;
// The number of compaction input files in the output level (table files)
int num_input_files_in_output_level;
// The number of compaction output files (table files)
int num_output_files;
// The number of compaction output files (blob files)
int num_output_files_blob;
// Total incoming entries during compaction between levels N and N+1
uint64_t num_input_records;
// Accumulated diff number of entries
// (num input entries - num output entries) for compaction levels N and N+1
uint64_t num_dropped_records;
// Total output entries from compaction
uint64_t num_output_records;
// Number of compactions done
int count;
// Number of compactions done per CompactionReason
int counts[static_cast<int>(CompactionReason::kNumOfReasons)]{};
explicit CompactionStats()
: micros(0),
cpu_micros(0),
bytes_read_non_output_levels(0),
bytes_read_output_level(0),
bytes_read_blob(0),
bytes_written(0),
bytes_written_blob(0),
bytes_moved(0),
num_input_files_in_non_output_levels(0),
num_input_files_in_output_level(0),
num_output_files(0),
num_output_files_blob(0),
num_input_records(0),
num_dropped_records(0),
num_output_records(0),
count(0) {
int num_of_reasons = static_cast<int>(CompactionReason::kNumOfReasons);
for (int i = 0; i < num_of_reasons; i++) {
counts[i] = 0;
}
}
explicit CompactionStats(CompactionReason reason, int c)
: micros(0),
cpu_micros(0),
bytes_read_non_output_levels(0),
bytes_read_output_level(0),
bytes_read_blob(0),
bytes_written(0),
bytes_written_blob(0),
bytes_moved(0),
num_input_files_in_non_output_levels(0),
num_input_files_in_output_level(0),
num_output_files(0),
num_output_files_blob(0),
num_input_records(0),
num_dropped_records(0),
num_output_records(0),
count(c) {
int num_of_reasons = static_cast<int>(CompactionReason::kNumOfReasons);
for (int i = 0; i < num_of_reasons; i++) {
counts[i] = 0;
}
int r = static_cast<int>(reason);
if (r >= 0 && r < num_of_reasons) {
counts[r] = c;
} else {
count = 0;
}
}
CompactionStats(const CompactionStats& c)
: micros(c.micros),
cpu_micros(c.cpu_micros),
bytes_read_non_output_levels(c.bytes_read_non_output_levels),
bytes_read_output_level(c.bytes_read_output_level),
bytes_read_blob(c.bytes_read_blob),
bytes_written(c.bytes_written),
bytes_written_blob(c.bytes_written_blob),
bytes_moved(c.bytes_moved),
num_input_files_in_non_output_levels(
c.num_input_files_in_non_output_levels),
num_input_files_in_output_level(c.num_input_files_in_output_level),
num_output_files(c.num_output_files),
num_output_files_blob(c.num_output_files_blob),
num_input_records(c.num_input_records),
num_dropped_records(c.num_dropped_records),
num_output_records(c.num_output_records),
count(c.count) {
int num_of_reasons = static_cast<int>(CompactionReason::kNumOfReasons);
for (int i = 0; i < num_of_reasons; i++) {
counts[i] = c.counts[i];
}
}
CompactionStats& operator=(const CompactionStats& c) {
micros = c.micros;
cpu_micros = c.cpu_micros;
bytes_read_non_output_levels = c.bytes_read_non_output_levels;
bytes_read_output_level = c.bytes_read_output_level;
bytes_read_blob = c.bytes_read_blob;
bytes_written = c.bytes_written;
bytes_written_blob = c.bytes_written_blob;
bytes_moved = c.bytes_moved;
num_input_files_in_non_output_levels =
c.num_input_files_in_non_output_levels;
num_input_files_in_output_level = c.num_input_files_in_output_level;
num_output_files = c.num_output_files;
num_output_files_blob = c.num_output_files_blob;
num_input_records = c.num_input_records;
num_dropped_records = c.num_dropped_records;
num_output_records = c.num_output_records;
count = c.count;
int num_of_reasons = static_cast<int>(CompactionReason::kNumOfReasons);
for (int i = 0; i < num_of_reasons; i++) {
counts[i] = c.counts[i];
}
return *this;
}
void Clear() {
this->micros = 0;
this->cpu_micros = 0;
this->bytes_read_non_output_levels = 0;
this->bytes_read_output_level = 0;
this->bytes_read_blob = 0;
this->bytes_written = 0;
this->bytes_written_blob = 0;
this->bytes_moved = 0;
this->num_input_files_in_non_output_levels = 0;
this->num_input_files_in_output_level = 0;
this->num_output_files = 0;
this->num_output_files_blob = 0;
this->num_input_records = 0;
this->num_dropped_records = 0;
this->num_output_records = 0;
this->count = 0;
int num_of_reasons = static_cast<int>(CompactionReason::kNumOfReasons);
for (int i = 0; i < num_of_reasons; i++) {
counts[i] = 0;
}
}
void Add(const CompactionStats& c) {
this->micros += c.micros;
this->cpu_micros += c.cpu_micros;
this->bytes_read_non_output_levels += c.bytes_read_non_output_levels;
this->bytes_read_output_level += c.bytes_read_output_level;
this->bytes_read_blob += c.bytes_read_blob;
this->bytes_written += c.bytes_written;
this->bytes_written_blob += c.bytes_written_blob;
this->bytes_moved += c.bytes_moved;
this->num_input_files_in_non_output_levels +=
c.num_input_files_in_non_output_levels;
this->num_input_files_in_output_level +=
c.num_input_files_in_output_level;
this->num_output_files += c.num_output_files;
this->num_output_files_blob += c.num_output_files_blob;
this->num_input_records += c.num_input_records;
this->num_dropped_records += c.num_dropped_records;
this->num_output_records += c.num_output_records;
this->count += c.count;
int num_of_reasons = static_cast<int>(CompactionReason::kNumOfReasons);
for (int i = 0; i < num_of_reasons; i++) {
counts[i] += c.counts[i];
}
}
void Add(const CompactionOutputsStats& stats) {
this->num_output_files += static_cast<int>(stats.num_output_files);
this->num_output_records += stats.num_output_records;
this->bytes_written += stats.bytes_written;
this->bytes_written_blob += stats.bytes_written_blob;
this->num_output_files_blob +=
static_cast<int>(stats.num_output_files_blob);
}
void Subtract(const CompactionStats& c) {
this->micros -= c.micros;
this->cpu_micros -= c.cpu_micros;
this->bytes_read_non_output_levels -= c.bytes_read_non_output_levels;
this->bytes_read_output_level -= c.bytes_read_output_level;
this->bytes_read_blob -= c.bytes_read_blob;
this->bytes_written -= c.bytes_written;
this->bytes_written_blob -= c.bytes_written_blob;
this->bytes_moved -= c.bytes_moved;
this->num_input_files_in_non_output_levels -=
c.num_input_files_in_non_output_levels;
this->num_input_files_in_output_level -=
c.num_input_files_in_output_level;
this->num_output_files -= c.num_output_files;
this->num_output_files_blob -= c.num_output_files_blob;
this->num_input_records -= c.num_input_records;
this->num_dropped_records -= c.num_dropped_records;
this->num_output_records -= c.num_output_records;
this->count -= c.count;
int num_of_reasons = static_cast<int>(CompactionReason::kNumOfReasons);
for (int i = 0; i < num_of_reasons; i++) {
counts[i] -= c.counts[i];
}
}
void ResetCompactionReason(CompactionReason reason) {
int num_of_reasons = static_cast<int>(CompactionReason::kNumOfReasons);
assert(count == 1); // only support update one compaction reason
for (int i = 0; i < num_of_reasons; i++) {
counts[i] = 0;
}
int r = static_cast<int>(reason);
assert(r >= 0 && r < num_of_reasons);
counts[r] = 1;
}
};
// Compaction stats. For per_key_placement compaction, it includes stats for
// 2 levels: the last level and the penultimate level.
struct CompactionStatsFull {
// the stats for the target primary output level
CompactionStats stats;
// stats for penultimate level output, if it exists
bool has_penultimate_level_output = false;
CompactionStats penultimate_level_stats;
explicit CompactionStatsFull() : stats(), penultimate_level_stats() {}
explicit CompactionStatsFull(CompactionReason reason, int c)
: stats(reason, c), penultimate_level_stats(reason, c){};
uint64_t TotalBytesWritten() const {
uint64_t bytes_written = stats.bytes_written + stats.bytes_written_blob;
if (has_penultimate_level_output) {
bytes_written += penultimate_level_stats.bytes_written +
penultimate_level_stats.bytes_written_blob;
}
return bytes_written;
}
uint64_t DroppedRecords() {
uint64_t output_records = stats.num_output_records;
if (has_penultimate_level_output) {
output_records += penultimate_level_stats.num_output_records;
}
if (stats.num_input_records > output_records) {
return stats.num_input_records - output_records;
}
return 0;
}
void SetMicros(uint64_t val) {
stats.micros = val;
penultimate_level_stats.micros = val;
}
void AddCpuMicros(uint64_t val) {
stats.cpu_micros += val;
penultimate_level_stats.cpu_micros += val;
}
};
// For use with CacheEntryStatsCollector
struct CacheEntryRoleStats {
uint64_t cache_capacity = 0;
uint64_t cache_usage = 0;
size_t table_size = 0;
size_t occupancy = 0;
std::string cache_id;
std::array<uint64_t, kNumCacheEntryRoles> total_charges;
std::array<size_t, kNumCacheEntryRoles> entry_counts;
uint32_t collection_count = 0;
uint32_t copies_of_last_collection = 0;
uint64_t last_start_time_micros_ = 0;
uint64_t last_end_time_micros_ = 0;
void Clear() {
// Wipe everything except collection_count
uint32_t saved_collection_count = collection_count;
*this = CacheEntryRoleStats();
collection_count = saved_collection_count;
}
void BeginCollection(Cache*, SystemClock*, uint64_t start_time_micros);
std::function<void(const Slice&, void*, size_t, Cache::DeleterFn)>
GetEntryCallback();
void EndCollection(Cache*, SystemClock*, uint64_t end_time_micros);
void SkippedCollection();
std::string ToString(SystemClock* clock) const;
void ToMap(std::map<std::string, std::string>* values,
SystemClock* clock) const;
private:
UnorderedMap<Cache::DeleterFn, CacheEntryRole> role_map_;
uint64_t GetLastDurationMicros() const;
};
void Clear() {
for (int i = 0; i < kIntStatsNumMax; i++) {
db_stats_[i].store(0);
}
for (int i = 0; i < INTERNAL_CF_STATS_ENUM_MAX; i++) {
cf_stats_count_[i] = 0;
cf_stats_value_[i] = 0;
}
for (auto& comp_stat : comp_stats_) {
comp_stat.Clear();
}
per_key_placement_comp_stats_.Clear();
for (auto& h : file_read_latency_) {
h.Clear();
}
blob_file_read_latency_.Clear();
cf_stats_snapshot_.Clear();
db_stats_snapshot_.Clear();
bg_error_count_ = 0;
started_at_ = clock_->NowMicros();
}
void AddCompactionStats(int level, Env::Priority thread_pri,
const CompactionStats& stats) {
comp_stats_[level].Add(stats);
comp_stats_by_pri_[thread_pri].Add(stats);
}
void AddCompactionStats(int level, Env::Priority thread_pri,
const CompactionStatsFull& comp_stats_full) {
AddCompactionStats(level, thread_pri, comp_stats_full.stats);
if (comp_stats_full.has_penultimate_level_output) {
per_key_placement_comp_stats_.Add(
comp_stats_full.penultimate_level_stats);
}
}
void IncBytesMoved(int level, uint64_t amount) {
comp_stats_[level].bytes_moved += amount;
}
void AddCFStats(InternalCFStatsType type, uint64_t value) {
cf_stats_value_[type] += value;
++cf_stats_count_[type];
}
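// Add `value` to the DB-wide stat `type`. Pass `concurrent` = true if another
// thread may update the same stat at the same time; that path uses an atomic
// fetch_add, while the default path uses a cheaper relaxed load + store.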
void AddDBStats(InternalDBStatsType type, uint64_t value,
bool concurrent = false) {
auto& v = db_stats_[type];
if (concurrent) {
v.fetch_add(value, std::memory_order_relaxed);
} else {
v.store(v.load(std::memory_order_relaxed) + value,
std::memory_order_relaxed);
}
}
uint64_t GetDBStats(InternalDBStatsType type) {
return db_stats_[type].load(std::memory_order_relaxed);
}
HistogramImpl* GetFileReadHist(int level) {
return &file_read_latency_[level];
}
HistogramImpl* GetBlobFileReadHist() { return &blob_file_read_latency_; }
uint64_t GetBackgroundErrorCount() const { return bg_error_count_; }
uint64_t BumpAndGetBackgroundErrorCount() { return ++bg_error_count_; }
bool GetStringProperty(const DBPropertyInfo& property_info,
const Slice& property, std::string* value);
bool GetMapProperty(const DBPropertyInfo& property_info,
const Slice& property,
std::map<std::string, std::string>* value);
bool GetIntProperty(const DBPropertyInfo& property_info, uint64_t* value,
DBImpl* db);
bool GetIntPropertyOutOfMutex(const DBPropertyInfo& property_info,
Version* version, uint64_t* value);
// Unless there is a recent enough collection of the stats, collect and
// save new cache entry stats. If `foreground`, require the data to be more
// recent to skip re-collection.
//
// This should only be called while NOT holding the DB mutex.
void CollectCacheEntryStats(bool foreground);
const uint64_t* TEST_GetCFStatsValue() const { return cf_stats_value_; }
const std::vector<CompactionStats>& TEST_GetCompactionStats() const {
return comp_stats_;
}
const CompactionStats& TEST_GetPerKeyPlacementCompactionStats() const {
return per_key_placement_comp_stats_;
}
void TEST_GetCacheEntryRoleStats(CacheEntryRoleStats* stats, bool foreground);
// Store a mapping from the user-facing DB::Properties string to our
// DBPropertyInfo struct used internally for retrieving properties.
static const UnorderedMap<std::string, DBPropertyInfo> ppt_name_to_info;
private:
void DumpDBMapStats(std::map<std::string, std::string>* db_stats);
void DumpDBStats(std::string* value);
void DumpCFMapStats(std::map<std::string, std::string>* cf_stats);
void DumpCFMapStats(
const VersionStorageInfo* vstorage,
std::map<int, std::map<LevelStatType, double>>* level_stats,
CompactionStats* compaction_stats_sum);
void DumpCFMapStatsByPriority(
std::map<int, std::map<LevelStatType, double>>* priorities_stats);
void DumpCFMapStatsIOStalls(std::map<std::string, std::string>* cf_stats);
void DumpCFStats(std::string* value);
void DumpCFStatsNoFileHistogram(std::string* value);
void DumpCFFileHistogram(std::string* value);
Cache* GetBlockCacheForStats();
Cache* GetBlobCacheForStats();
// Per-DB stats
std::atomic<uint64_t> db_stats_[kIntStatsNumMax];
// Per-ColumnFamily stats
uint64_t cf_stats_value_[INTERNAL_CF_STATS_ENUM_MAX];
uint64_t cf_stats_count_[INTERNAL_CF_STATS_ENUM_MAX];
// Initialize/reference the collector in constructor so that we don't need
// additional synchronization in InternalStats, relying on synchronization
// in CacheEntryStatsCollector::GetStats. This collector is pinned in cache
// (through a shared_ptr) so that it does not get immediately ejected from
// a full cache, which would force a re-scan on the next GetStats.
std::shared_ptr<CacheEntryStatsCollector<CacheEntryRoleStats>>
cache_entry_stats_collector_;
// Per-ColumnFamily/level compaction stats
std::vector<CompactionStats> comp_stats_;
std::vector<CompactionStats> comp_stats_by_pri_;
CompactionStats per_key_placement_comp_stats_;
std::vector<HistogramImpl> file_read_latency_;
HistogramImpl blob_file_read_latency_;
// Used to compute per-interval statistics
struct CFStatsSnapshot {
// ColumnFamily-level stats
CompactionStats comp_stats;
uint64_t ingest_bytes_flush; // Bytes written to L0 (Flush)
uint64_t stall_count; // Stall count
// Stats from compaction jobs - bytes written, bytes read, duration.
uint64_t compact_bytes_write;
uint64_t compact_bytes_read;
uint64_t compact_micros;
double seconds_up;
// AddFile specific stats
uint64_t ingest_bytes_addfile; // Total Bytes ingested
uint64_t ingest_files_addfile; // Total number of files ingested
uint64_t ingest_l0_files_addfile; // Total number of files ingested to L0
uint64_t ingest_keys_addfile; // Total number of keys ingested
CFStatsSnapshot()
: ingest_bytes_flush(0),
stall_count(0),
compact_bytes_write(0),
compact_bytes_read(0),
compact_micros(0),
seconds_up(0),
ingest_bytes_addfile(0),
ingest_files_addfile(0),
ingest_l0_files_addfile(0),
ingest_keys_addfile(0) {}
void Clear() {
comp_stats.Clear();
ingest_bytes_flush = 0;
stall_count = 0;
compact_bytes_write = 0;
compact_bytes_read = 0;
compact_micros = 0;
seconds_up = 0;
ingest_bytes_addfile = 0;
ingest_files_addfile = 0;
ingest_l0_files_addfile = 0;
ingest_keys_addfile = 0;
}
} cf_stats_snapshot_;
struct DBStatsSnapshot {
// DB-level stats
uint64_t ingest_bytes; // Bytes written by user
uint64_t wal_bytes; // Bytes written to WAL
uint64_t wal_synced; // Number of times WAL is synced
uint64_t write_with_wal; // Number of writes that request WAL
// These count the number of writes processed by the calling thread or
// another thread.
uint64_t write_other;
uint64_t write_self;
// Total number of keys written. write_self and write_other measure the
// number of write requests written; each write request can contain updates
// to multiple keys. num_keys_written is the total number of keys updated by
// all those writes.
uint64_t num_keys_written;
// Total time writes delayed by stalls.
uint64_t write_stall_micros;
double seconds_up;
DBStatsSnapshot()
: ingest_bytes(0),
wal_bytes(0),
wal_synced(0),
write_with_wal(0),
write_other(0),
write_self(0),
num_keys_written(0),
write_stall_micros(0),
seconds_up(0) {}
void Clear() {
ingest_bytes = 0;
wal_bytes = 0;
wal_synced = 0;
write_with_wal = 0;
write_other = 0;
write_self = 0;
num_keys_written = 0;
write_stall_micros = 0;
seconds_up = 0;
}
} db_stats_snapshot_;
// Handler functions for getting property values. They use "value" as a value-
// result argument, and return true upon successfully setting "value".
bool HandleNumFilesAtLevel(std::string* value, Slice suffix);
bool HandleCompressionRatioAtLevelPrefix(std::string* value, Slice suffix);
bool HandleLevelStats(std::string* value, Slice suffix);
bool HandleStats(std::string* value, Slice suffix);
bool HandleCFMapStats(std::map<std::string, std::string>* compaction_stats,
Slice suffix);
bool HandleCFStats(std::string* value, Slice suffix);
bool HandleCFStatsNoFileHistogram(std::string* value, Slice suffix);
bool HandleCFFileHistogram(std::string* value, Slice suffix);
bool HandleDBMapStats(std::map<std::string, std::string>* compaction_stats,
Slice suffix);
bool HandleDBStats(std::string* value, Slice suffix);
bool HandleSsTables(std::string* value, Slice suffix);
bool HandleAggregatedTableProperties(std::string* value, Slice suffix);
bool HandleAggregatedTablePropertiesAtLevel(std::string* value, Slice suffix);
bool HandleAggregatedTablePropertiesMap(
std::map<std::string, std::string>* values, Slice suffix);
bool HandleAggregatedTablePropertiesAtLevelMap(
std::map<std::string, std::string>* values, Slice suffix);
bool HandleNumImmutableMemTable(uint64_t* value, DBImpl* db,
Version* version);
bool HandleNumImmutableMemTableFlushed(uint64_t* value, DBImpl* db,
Version* version);
bool HandleMemTableFlushPending(uint64_t* value, DBImpl* db,
Version* version);
bool HandleNumRunningFlushes(uint64_t* value, DBImpl* db, Version* version);
bool HandleCompactionPending(uint64_t* value, DBImpl* db, Version* version);
bool HandleNumRunningCompactions(uint64_t* value, DBImpl* db,
Version* version);
bool HandleBackgroundErrors(uint64_t* value, DBImpl* db, Version* version);
bool HandleCurSizeActiveMemTable(uint64_t* value, DBImpl* db,
Version* version);
bool HandleCurSizeAllMemTables(uint64_t* value, DBImpl* db, Version* version);
bool HandleSizeAllMemTables(uint64_t* value, DBImpl* db, Version* version);
bool HandleNumEntriesActiveMemTable(uint64_t* value, DBImpl* db,
Version* version);
bool HandleNumEntriesImmMemTables(uint64_t* value, DBImpl* db,
Version* version);
bool HandleNumDeletesActiveMemTable(uint64_t* value, DBImpl* db,
Version* version);
bool HandleNumDeletesImmMemTables(uint64_t* value, DBImpl* db,
Version* version);
bool HandleEstimateNumKeys(uint64_t* value, DBImpl* db, Version* version);
bool HandleNumSnapshots(uint64_t* value, DBImpl* db, Version* version);
bool HandleOldestSnapshotTime(uint64_t* value, DBImpl* db, Version* version);
bool HandleOldestSnapshotSequence(uint64_t* value, DBImpl* db,
Version* version);
bool HandleNumLiveVersions(uint64_t* value, DBImpl* db, Version* version);
bool HandleCurrentSuperVersionNumber(uint64_t* value, DBImpl* db,
Version* version);
bool HandleIsFileDeletionsEnabled(uint64_t* value, DBImpl* db,
Version* version);
bool HandleBaseLevel(uint64_t* value, DBImpl* db, Version* version);
bool HandleTotalSstFilesSize(uint64_t* value, DBImpl* db, Version* version);
bool HandleLiveSstFilesSize(uint64_t* value, DBImpl* db, Version* version);
bool HandleEstimatePendingCompactionBytes(uint64_t* value, DBImpl* db,
Version* version);
bool HandleEstimateTableReadersMem(uint64_t* value, DBImpl* db,
Version* version);
bool HandleEstimateLiveDataSize(uint64_t* value, DBImpl* db,
Version* version);
bool HandleMinLogNumberToKeep(uint64_t* value, DBImpl* db, Version* version);
bool HandleMinObsoleteSstNumberToKeep(uint64_t* value, DBImpl* db,
Version* version);
bool HandleActualDelayedWriteRate(uint64_t* value, DBImpl* db,
Version* version);
bool HandleIsWriteStopped(uint64_t* value, DBImpl* db, Version* version);
bool HandleEstimateOldestKeyTime(uint64_t* value, DBImpl* db,
Version* version);
bool HandleBlockCacheCapacity(uint64_t* value, DBImpl* db, Version* version);
bool HandleBlockCacheUsage(uint64_t* value, DBImpl* db, Version* version);
bool HandleBlockCachePinnedUsage(uint64_t* value, DBImpl* db,
Version* version);
bool HandleBlockCacheEntryStats(std::string* value, Slice suffix);
bool HandleBlockCacheEntryStatsMap(std::map<std::string, std::string>* values,
Slice suffix);
bool HandleLiveSstFilesSizeAtTemperature(std::string* value, Slice suffix);
bool HandleNumBlobFiles(uint64_t* value, DBImpl* db, Version* version);
bool HandleBlobStats(std::string* value, Slice suffix);
bool HandleTotalBlobFileSize(uint64_t* value, DBImpl* db, Version* version);
bool HandleLiveBlobFileSize(uint64_t* value, DBImpl* db, Version* version);
bool HandleLiveBlobFileGarbageSize(uint64_t* value, DBImpl* db,
Version* version);
bool HandleBlobCacheCapacity(uint64_t* value, DBImpl* db, Version* version);
bool HandleBlobCacheUsage(uint64_t* value, DBImpl* db, Version* version);
bool HandleBlobCachePinnedUsage(uint64_t* value, DBImpl* db,
Version* version);
// Total number of background errors encountered. Every time a flush task
// or compaction task fails, this counter is incremented. The failure can
// be caused by any possible reason, including file system errors, running
// out of resources, or input file corruption. Failing again when retrying
// the same flush or compaction also increases the counter.
uint64_t bg_error_count_;
const int number_levels_;
SystemClock* clock_;
ColumnFamilyData* cfd_;
uint64_t started_at_;
};
#else
class InternalStats {
public:
enum InternalCFStatsType {
L0_FILE_COUNT_LIMIT_SLOWDOWNS,
LOCKED_L0_FILE_COUNT_LIMIT_SLOWDOWNS,
MEMTABLE_LIMIT_STOPS,
MEMTABLE_LIMIT_SLOWDOWNS,
L0_FILE_COUNT_LIMIT_STOPS,
LOCKED_L0_FILE_COUNT_LIMIT_STOPS,
PENDING_COMPACTION_BYTES_LIMIT_SLOWDOWNS,
PENDING_COMPACTION_BYTES_LIMIT_STOPS,
WRITE_STALLS_ENUM_MAX,
BYTES_FLUSHED,
BYTES_INGESTED_ADD_FILE,
INGESTED_NUM_FILES_TOTAL,
INGESTED_LEVEL0_NUM_FILES_TOTAL,
INGESTED_NUM_KEYS_TOTAL,
INTERNAL_CF_STATS_ENUM_MAX,
};
enum InternalDBStatsType {
kIntStatsWalFileBytes,
kIntStatsWalFileSynced,
kIntStatsBytesWritten,
kIntStatsNumKeysWritten,
kIntStatsWriteDoneByOther,
kIntStatsWriteDoneBySelf,
kIntStatsWriteWithWal,
kIntStatsWriteStallMicros,
kIntStatsNumMax,
};
InternalStats(int /*num_levels*/, SystemClock* /*clock*/,
ColumnFamilyData* /*cfd*/) {}
// Per level compaction stats
struct CompactionOutputsStats {
uint64_t num_output_records = 0;
uint64_t bytes_written = 0;
uint64_t bytes_written_blob = 0;
uint64_t num_output_files = 0;
uint64_t num_output_files_blob = 0;
void Add(const CompactionOutputsStats& stats) {
this->num_output_records += stats.num_output_records;
this->bytes_written += stats.bytes_written;
this->bytes_written_blob += stats.bytes_written_blob;
this->num_output_files += stats.num_output_files;
this->num_output_files_blob += stats.num_output_files_blob;
}
};
struct CompactionStats {
uint64_t micros;
uint64_t cpu_micros;
uint64_t bytes_read_non_output_levels;
uint64_t bytes_read_output_level;
uint64_t bytes_read_blob;
uint64_t bytes_written;
uint64_t bytes_written_blob;
uint64_t bytes_moved;
int num_input_files_in_non_output_levels;
int num_input_files_in_output_level;
int num_output_files;
int num_output_files_blob;
uint64_t num_input_records;
uint64_t num_dropped_records;
uint64_t num_output_records;
int count;
explicit CompactionStats() {}
explicit CompactionStats(CompactionReason /*reason*/, int /*c*/) {}
explicit CompactionStats(const CompactionStats& /*c*/) {}
void Add(const CompactionStats& /*c*/) {}
void Add(const CompactionOutputsStats& /*c*/) {}
void Subtract(const CompactionStats& /*c*/) {}
};
struct CompactionStatsFull {
// the stats for the target primary output level (per level stats)
CompactionStats stats;
// stats for output_to_penultimate_level level (per level stats)
bool has_penultimate_level_output = false;
CompactionStats penultimate_level_stats;
explicit CompactionStatsFull(){};
explicit CompactionStatsFull(CompactionReason /*reason*/, int /*c*/){};
uint64_t TotalBytesWritten() const { return 0; }
uint64_t DroppedRecords() { return 0; }
void SetMicros(uint64_t /*val*/){};
void AddCpuMicros(uint64_t /*val*/){};
};
void AddCompactionStats(int /*level*/, Env::Priority /*thread_pri*/,
const CompactionStats& /*stats*/) {}
void AddCompactionStats(int /*level*/, Env::Priority /*thread_pri*/,
const CompactionStatsFull& /*unmerged_stats*/) {}
void IncBytesMoved(int /*level*/, uint64_t /*amount*/) {}
void AddCFStats(InternalCFStatsType /*type*/, uint64_t /*value*/) {}
void AddDBStats(InternalDBStatsType /*type*/, uint64_t /*value*/,
bool /*concurrent */ = false) {}
HistogramImpl* GetFileReadHist(int /*level*/) { return nullptr; }
HistogramImpl* GetBlobFileReadHist() { return nullptr; }
uint64_t GetBackgroundErrorCount() const { return 0; }
uint64_t BumpAndGetBackgroundErrorCount() { return 0; }
bool GetStringProperty(const DBPropertyInfo& /*property_info*/,
const Slice& /*property*/, std::string* /*value*/) {
return false;
}
bool GetMapProperty(const DBPropertyInfo& /*property_info*/,
const Slice& /*property*/,
std::map<std::string, std::string>* /*value*/) {
return false;
}
bool GetIntProperty(const DBPropertyInfo& /*property_info*/, uint64_t* /*value*/,
DBImpl* /*db*/) const {
return false;
}
bool GetIntPropertyOutOfMutex(const DBPropertyInfo& /*property_info*/,
Version* /*version*/, uint64_t* /*value*/) const {
return false;
}
};
#endif // !ROCKSDB_LITE
} // namespace ROCKSDB_NAMESPACE