README

This directory contains the HDFS extensions needed to make rocksdb store its
files in HDFS.

It has been compiled and tested against CDH 4.4 (2.0.0+1475-1.cdh4.4.0.p0.23~precise-cdh4.4.0).

The configuration assumes that the packages libhdfs0 and libhdfs0-dev are
installed, which means that hdfs.h is in /usr/include and libhdfs is in /usr/lib.

The env_hdfs.h file defines the rocksdb objects that are needed to talk to the
underlying HDFS filesystem.
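
For orientation, here is a minimal sketch of pointing a database at HDFS
through that Env. It assumes the HdfsEnv constructor in env_hdfs.h takes the
HDFS URI as a string; the include path, namenode URI, and database path below
are placeholders, so check them against your tree and cluster:

   #include "rocksdb/db.h"
   #include "rocksdb/options.h"
   #include "hdfs/env_hdfs.h"

   int main() {
     // Placeholder namenode URI; point this at your own cluster.
     rocksdb::HdfsEnv* hdfs_env = new rocksdb::HdfsEnv("hdfs://namenode:9000/");

     rocksdb::Options options;
     options.env = hdfs_env;           // route all file I/O through HDFS
     options.create_if_missing = true;

     rocksdb::DB* db = nullptr;
     rocksdb::Status s = rocksdb::DB::Open(options, "/rocksdb/testdb", &db);
     if (!s.ok()) {
       return 1;                       // handle the error in real code
     }

     delete db;
     delete hdfs_env;
     return 0;
   }

A sketch like this only works in a binary built with USE_HDFS=1, as described
below.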

If you want to compile rocksdb with HDFS support, set the following
environment variables appropriately (they are also defined in setup.sh for
convenience) and then build:
   USE_HDFS=1
   JAVA_HOME=/usr/local/jdk-6u22-64
   LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/jdk-6u22-64/jre/lib/amd64/server:/usr/local/jdk-6u22-64/jre/lib/amd64/:./snappy/libs
   make clean all db_bench

To run db_bench:
  set CLASSPATH to include the jars from your Hadoop distribution, then run
  db_bench --hdfs="hdfs://hbaseudbperf001.snc1.facebook.com:9000"