Mirror of https://github.com/google/snappy.git

Commit f540673740
Whenever we try to enter a copy fast path, there is a certain cost in checking
that all the preconditions are in place, but it's normally offset by the fact
that we can usually take the cheaper path. However, in a certain path we've
already established that "avail < literal_length", which usually means that
either the available space is small, or the literal is big. Both will
disqualify us from taking the fast path, and thus we take the hit from the
precondition checking without gaining much from having a fast path. Thus,
simply don't try the fast path in this situation -- we're already on a slow
path anyway (one where we need to refill more data from the reader).

I'm a bit surprised at how much this gained; it could be that this path is
more common than I thought, or that the simpler structure somehow makes the
compiler happier. I haven't looked at the assembler, but it's a win across
the board on Core 2, Core i7 and Opteron alike, at least for the cases we
typically care about. The gains seem to be the largest on Core i7, though.

Results from my Core i7 workstation:

  Benchmark         Time(ns)    CPU(ns)  Iterations
  ---------------------------------------------------
  BM_UFlat/0           73337      73091      190996  1.3GB/s    html     [ +1.7%]
  BM_UFlat/1          696379     693501       20173  965.5MB/s  urls     [ +2.7%]
  BM_UFlat/2            9765       9734     1472135  12.1GB/s   jpg      [ +0.7%]
  BM_UFlat/3           29720      29621      472973  3.0GB/s    pdf      [ +1.8%]
  BM_UFlat/4          294636     293834       47782  1.3GB/s    html4    [ +2.3%]
  BM_UFlat/5           28399      28320      494700  828.5MB/s  cp       [ +3.5%]
  BM_UFlat/6           12795      12760     1000000  833.3MB/s  c        [ +1.2%]
  BM_UFlat/7            3984       3973     3526448  893.2MB/s  lsp      [ +5.7%]
  BM_UFlat/8          991996     989322       14141  992.6MB/s  xls      [ +3.3%]
  BM_UFlat/9          228620     227835       61404  636.6MB/s  txt1     [ +4.0%]
  BM_UFlat/10         197114     196494       72165  607.5MB/s  txt2     [ +3.5%]
  BM_UFlat/11         605240     603437       23217  674.4MB/s  txt3     [ +3.7%]
  BM_UFlat/12         804157     802016       17456  573.0MB/s  txt4     [ +3.9%]
  BM_UFlat/13         347860     346998       40346  1.4GB/s    bin      [ +1.2%]
  BM_UFlat/14          44684      44559      315315  818.4MB/s  sum      [ +2.3%]
  BM_UFlat/15           5120       5106     2739726  789.4MB/s  man      [ +3.3%]
  BM_UFlat/16          76591      76355      183486  1.4GB/s    pb       [ +2.8%]
  BM_UFlat/17         238564     237828       58824  739.1MB/s  gaviota  [ +1.6%]
  BM_UValidate/0       42194      42060      333333  2.3GB/s    html     [ -0.1%]
  BM_UValidate/1      433182     432005       32407  1.5GB/s    urls     [ -0.1%]
  BM_UValidate/2         197        196    71428571  603.3GB/s  jpg      [ +0.5%]
  BM_UValidate/3       14494      14462      972222  6.1GB/s    pdf      [ +0.5%]
  BM_UValidate/4      168444     167836       83832  2.3GB/s    html4    [ +0.1%]

R=jeff

Revision created by MOE tool push_codebase.

git-svn-id: https://snappy.googlecode.com/svn/trunk@42 03e5f5b5-db94-4691-08a0-1a8bf15f6143
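To make the control-flow change concrete, here is a minimal, self-contained
sketch of the idea. The function name, the 16-byte fast-path threshold, and
the buffer handling are illustrative assumptions, not the actual snappy.cc
code:

  #include <cstddef>
  #include <cstring>

  // Illustrative sketch only; names and the 16-byte threshold are
  // assumptions, not the real snappy.cc implementation.
  //
  // Copy a literal of `len` bytes from `ip` to `op`, given `avail` bytes of
  // buffered input. Returns false if the caller must refill the input
  // buffer. Assumes the caller guarantees at least 16 writable bytes at
  // `op` and advances `op` by exactly `len` afterwards.
  bool CopyLiteral(const char* ip, char* op, size_t len, size_t avail) {
    if (avail < len) {
      // We already know the fast path cannot apply: either the buffer is
      // nearly empty or the literal is long. Checking the remaining
      // fast-path preconditions here would be pure overhead, so go straight
      // to the slow path (refill from the reader and retry).
      return false;
    }
    if (len <= 16 && avail >= 16) {
      // Fast path: a fixed-size copy lets the compiler emit a couple of
      // wide unconditional moves instead of a variable-length memcpy. It
      // may copy a few garbage bytes past `len`, which is harmless because
      // the caller only advances `op` by `len`.
      std::memcpy(op, ip, 16);
    } else {
      std::memcpy(op, ip, len);  // general-purpose copy for long literals
    }
    return true;
  }

The point of the change is simply that the `avail < len` test now comes
first, so the cost of the remaining fast-path checks is never paid on the
refill path.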
Repository contents:

  m4
  testdata
  AUTHORS
  autogen.sh
  ChangeLog
  configure.ac
  COPYING
  format_description.txt
  Makefile.am
  NEWS
  README
  snappy-c.cc
  snappy-c.h
  snappy-internal.h
  snappy-sinksource.cc
  snappy-sinksource.h
  snappy-stubs-internal.cc
  snappy-stubs-internal.h
  snappy-stubs-public.h.in
  snappy-test.cc
  snappy-test.h
  snappy.cc
  snappy.h
  snappy_unittest.cc
Snappy, a fast compressor/decompressor.


Introduction
============

Snappy is a compression/decompression library. It does not aim for maximum
compression, or compatibility with any other compression library; instead,
it aims for very high speeds and reasonable compression. For instance,
compared to the fastest mode of zlib, Snappy is an order of magnitude faster
for most inputs, but the resulting compressed files are anywhere from 20% to
100% bigger. (For more information, see "Performance", below.)

Snappy has the following properties:

 * Fast: Compression speeds at 250 MB/sec and beyond, with no assembler
   code. See "Performance" below.
 * Stable: Over the last few years, Snappy has compressed and decompressed
   petabytes of data in Google's production environment. The Snappy
   bitstream format is stable and will not change between versions.
 * Robust: The Snappy decompressor is designed not to crash in the face of
   corrupted or malicious input.
 * Free and open source software: Snappy is licensed under a BSD-type
   license. For more information, see the included COPYING file.

Snappy has previously been called "Zippy" in some Google presentations
and the like.


Performance
===========

Snappy is intended to be fast. On a single core of a Core i7 processor in
64-bit mode, it compresses at about 250 MB/sec or more and decompresses at
about 500 MB/sec or more. (These numbers are for the slowest inputs in our
benchmark suite; others are much faster.) In our tests, Snappy is usually
faster than algorithms in the same class (e.g. LZO, LZF, FastLZ, QuickLZ,
etc.) while achieving comparable compression ratios.

Typical compression ratios (based on the benchmark suite) are about 1.5-1.7x
for plain text, about 2-4x for HTML, and of course 1.0x for JPEGs, PNGs and
other already-compressed data. Similar numbers for zlib in its fastest mode
are 2.6-2.8x, 3-7x and 1.0x, respectively. More sophisticated algorithms are
capable of achieving yet higher compression rates, although usually at the
expense of speed. Of course, compression ratio will vary significantly with
the input.

Although Snappy should be fairly portable, it is primarily optimized for
64-bit x86-compatible processors, and may run slower in other environments.
In particular:

 - Snappy uses 64-bit operations in several places to process more data at
   once than would otherwise be possible.
 - Snappy assumes unaligned 32- and 64-bit loads and stores are cheap. On
   some platforms, these must be emulated with single-byte loads and stores,
   which is much slower.
 - Snappy assumes little-endian throughout, and needs to byte-swap data in
   several places if running on a big-endian platform.

Experience has shown that even heavily tuned code can be improved.
Performance optimizations, whether for 64-bit x86 or other platforms, are of
course most welcome; see "Contact", below.


Usage
=====

Note that Snappy, both the implementation and the main interface, is written
in C++. However, several third-party bindings to other languages are
available; see the Google Code page at http://code.google.com/p/snappy/ for
more information. Also, if you want to use Snappy from C code, you can use
the included C bindings in snappy-c.h.

To use Snappy from your own C++ program, include the file "snappy.h" from
your calling file, and link against the compiled library.

There are many ways to call Snappy, but the simplest possible is

  snappy::Compress(input.data(), input.size(), &output);

and similarly

  snappy::Uncompress(input.data(), input.size(), &output);

where "input" and "output" are both instances of std::string.
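As a concrete starting point, here is a minimal, self-contained round-trip
sketch using just those two calls. The file name and the `-lsnappy` link
flag are assumptions about a typical installed build, not part of the
library itself:

  #include <iostream>
  #include <string>

  #include "snappy.h"

  int main() {
    const std::string input = "Hello, Snappy! Hello, Snappy! Hello, Snappy!";

    // Compress into `compressed`; the output string is resized as needed.
    std::string compressed;
    snappy::Compress(input.data(), input.size(), &compressed);

    // Uncompress back; returns false on corrupted input.
    std::string uncompressed;
    const bool ok = snappy::Uncompress(compressed.data(), compressed.size(),
                                       &uncompressed);

    std::cout << input.size() << " -> " << compressed.size() << " bytes\n";
    return (ok && uncompressed == input) ? 0 : 1;
  }

Compiled with something like "g++ example.cc -lsnappy" (exact flags depend
on where the library is installed), this prints the size reduction and
verifies that decompression recovers the original data.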
There are other interfaces that are more flexible in various ways, including
support for custom (non-array) input sources. See the header file for more
information.


Tests and benchmarks
====================

When you compile Snappy, snappy_unittest is compiled in addition to the
library itself. You do not need it to use the compressor from your own
library, but it contains several useful components for Snappy development.

First of all, it contains unit tests, verifying correctness on your machine
in various scenarios. If you want to change or optimize Snappy, please run
the tests to verify you have not broken anything. Note that if you have the
Google Test library installed, unit test behavior (especially failures) will
be significantly more user-friendly. You can find Google Test at

  http://code.google.com/p/googletest/

You probably also want the gflags library for handling of command-line
flags; you can find it at

  http://code.google.com/p/google-gflags/

In addition to the unit tests, snappy contains microbenchmarks used to tune
compression and decompression performance. These are automatically run
before the unit tests, but you can disable them using the flag
--run_microbenchmarks=false if you have gflags installed (otherwise you will
need to edit the source).

Finally, snappy can benchmark Snappy against a few other compression
libraries (zlib, LZO, LZF, FastLZ and QuickLZ), if they were detected at
configure time. To benchmark using a given file, give the compression
algorithm you want to test Snappy against (e.g. --zlib) and then a list of
one or more file names on the command line.

The testdata/ directory contains the files used by the microbenchmarks,
which should provide a reasonably balanced starting point for benchmarking.
(Note that baddata[1-3].snappy are not intended as benchmarks; they are used
to verify correctness in the presence of corrupted data in the unit test.)


Contact
=======

Snappy is distributed through Google Code. For the latest version, a bug
tracker, and other information, see

  http://code.google.com/p/snappy/