Minor typos in benchmark page.
git-svn-id: https://leveldb.googlecode.com/svn/trunk@44 62dab493-f737-651d-591e-8d6aee1b9529
commit b9ef9141ba
parent e8dee348b6
@@ -103,7 +103,7 @@ div.bsql {
 </ul>
 
 <h2>1. Baseline Performance</h2>
-<p>This section gives the baseline performance of a all of the
+<p>This section gives the baseline performance of all the
 databases. Following sections show how performance changes as various
 parameters are varied. For the baseline:</p>
 <ul>
@@ -234,7 +234,7 @@ of sequential writes. However SQLite3 sees a significant slowdown
 writes. This is because each random batch write in SQLite3 has to
 update approximately as many pages as there are keys in the batch.</p>
 
-<h3>C. Synchronous writes</h3>
+<h3>C. Synchronous Writes</h3>
 <p>In the following benchmark, we enable the synchronous writing modes
 of all of the databases. Since this change significantly slows down the
 benchmark, we stop after 10,000 writes.</p>
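The "synchronous writing modes" this hunk refers to map, for LevelDB, to the WriteOptions::sync flag. A minimal sketch of a synchronous write loop, assuming a hypothetical database path (/tmp/benchdb); this illustrates the setting, not the benchmark's actual driver code:

    #include <cassert>
    #include <string>

    #include "leveldb/db.h"

    int main() {
      leveldb::Options options;
      options.create_if_missing = true;

      leveldb::DB* db;
      // Path is illustrative only.
      leveldb::Status s = leveldb::DB::Open(options, "/tmp/benchdb", &db);
      assert(s.ok());

      // sync = true forces each write to reach stable storage before the
      // call returns; this is the mode the benchmark enables here.
      leveldb::WriteOptions write_options;
      write_options.sync = true;

      for (int i = 0; i < 10000; i++) {
        s = db->Put(write_options, "key" + std::to_string(i), "value");
        assert(s.ok());
      }
      delete db;
    }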
@@ -329,7 +329,7 @@ better without compression than with compression. Presumably this is
 because TreeDB's compression library (LZO) is more expensive than
 LevelDB's compression library (Snappy).<p>
 
-<h3>E. Using more memory</h3>
+<h3>E. Using More Memory</h3>
 <p>We increased the overall cache size for each database to 128 MB. For LevelDB, we partitioned 128 MB into a 120 MB write buffer and 8 MB of cache (up from 2 MB of write buffer and 2 MB of cache). For SQLite3, we kept the page size at 1024 bytes, but increased the number of pages to 131,072 (up from 4096). For TreeDB, we also kept the page size at 1024 bytes, but increased the cache size to 128 MB (up from 4 MB).</p>
 <h4>Sequential Writes</h4>
 <table class="bn">
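The 120 MB write buffer and 8 MB cache described in the hunk above correspond, in LevelDB's C++ API, to Options::write_buffer_size and a block cache built with leveldb::NewLRUCache(). A minimal sketch, again with an illustrative path:

    #include "leveldb/cache.h"
    #include "leveldb/db.h"

    int main() {
      leveldb::Options options;
      options.create_if_missing = true;
      options.write_buffer_size = 120 * 1048576;                // 120 MB write buffer
      options.block_cache = leveldb::NewLRUCache(8 * 1048576);  // 8 MB block cache

      leveldb::DB* db;
      leveldb::Status s = leveldb::DB::Open(options, "/tmp/benchdb", &db);
      if (s.ok()) delete db;
      delete options.block_cache;  // the caller owns the cache object
    }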
@@ -370,8 +370,8 @@ because a larger write buffer reduces the need to merge sorted files
 performance goes up because the entire database is available in memory
 for fast in-place updates.</p>
 
-<h2>2. Read Performance under Different Configurations</h2>
-<h3>A. Larger caches</h3>
+<h2>3. Read Performance under Different Configurations</h2>
+<h3>A. Larger Caches</h3>
 <p>We increased the overall memory usage to 128 MB for each database.
 For LevelDB, we allocated 8 MB to LevelDB's write buffer and 120 MB
 to LevelDB's cache. The other databases don't differentiate between a
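For the read benchmarks the split flips: a small write buffer and a large block cache. A sketch of the read-side configuration, with a sequential scan of the kind the benchmark measures (again an illustration, assuming a database populated in an earlier write phase):

    #include <cassert>

    #include "leveldb/cache.h"
    #include "leveldb/db.h"

    int main() {
      leveldb::Options options;
      options.write_buffer_size = 8 * 1048576;                    // 8 MB write buffer
      options.block_cache = leveldb::NewLRUCache(120 * 1048576);  // 120 MB block cache

      leveldb::DB* db;
      leveldb::Status s = leveldb::DB::Open(options, "/tmp/benchdb", &db);
      assert(s.ok());

      // One sequential pass over the whole database; repeated scans are
      // served largely from the enlarged block cache.
      leveldb::Iterator* it = db->NewIterator(leveldb::ReadOptions());
      for (it->SeekToFirst(); it->Valid(); it->Next()) {
        // consume it->key() / it->value()
      }
      assert(it->status().ok());
      delete it;
      delete db;
      delete options.block_cache;
    }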
@@ -414,7 +414,7 @@ when the caches are enlarged. In particular, TreeDB seems to make
 very effective use of a cache that is large enough to hold the entire
 database.</p>
 
-<h3>B. No compression reads </h3>
+<h3>B. No Compression Reads </h3>
 <p>For this benchmark, we populated a database with 1 million entries consisting of 16 byte keys and 100 byte values. We compiled LevelDB and Kyoto Cabinet without compression support, so results that are read out from the database are already uncompressed. We've listed the SQLite3 baseline read performance as a point of comparison.</p>
 <h4>Sequential Reads</h4>
 <table class="bn">
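LevelDB also exposes a run-time switch for this: Options::compression can be set to kNoCompression so newly written blocks are stored uncompressed. The benchmark instead compiled compression support out entirely, but the effect on stored data is similar; a sketch:

    #include "leveldb/db.h"

    int main() {
      leveldb::Options options;
      options.create_if_missing = true;
      // Store all newly written blocks uncompressed.
      options.compression = leveldb::kNoCompression;

      leveldb::DB* db;
      leveldb::Status s = leveldb::DB::Open(options, "/tmp/benchdb", &db);  // illustrative path
      if (s.ok()) delete db;
    }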