
Column-Based RLE in Row-Oriented Database

Mingyuan An
Key Laboratory of Computer System and Architecture, Institute of Computing Technology, Chinese Academy of Sciences
Graduate University of Chinese Academy of Sciences
Beijing, China
anmingyuan@ncic.ac.cn

Abstract—In database systems, disk I/O performance is usually the bottleneck of the whole query processing. Among many techniques, compression is one of the most important ones to reduce disk accesses and so improve system performance. RLE (Run-Length Encoding) is a light-weight compression algorithm which incurs negligible CPU cost. A lot of work shows that, although RLE is one of the most effective compression techniques in column-oriented systems, it is very hard to use in row-oriented systems, where values from multiple attributes are stored in the same page and value locality is therefore poor. We propose CRLE (Column-based RLE), a compression algorithm that applies RLE to row-oriented data storage. On a row-oriented storage page, CRLE exploits the value locality within each individual column and encodes values from the same column in run-length format. Experiments show that CRLE leads to a very good compression ratio and performance in spite of the row-oriented data storage.

Keywords-RLE; column-based compression; row-oriented database; storage

I. INTRODUCTION

In database systems, the cost of disk I/O is one of the most important factors influencing overall query performance. Many database techniques aim at reducing disk I/O, and among them compression is an effective approach to improve system performance. Research on database compression has been around nearly as long as research on databases themselves [1], and there has been much work in this field [2][3][4][5][6]. Data compression trades CPU power for disk bandwidth, and one should balance the CPU and I/O capacity to choose an appropriate compression method for a real system. Although some heavy-weight compression techniques can achieve much better compression ratios, they may incur too many CPU instructions and tend to make the system CPU bound [7]. But with the gap between CPU and disk performance growing larger and larger [1], more so-called heavy-weight compression techniques have become popular in practice, and database compression has received much attention recently.

Database compression has different effects in row-oriented systems and column-oriented systems. Research comparing the two architectures shows that column-oriented storage is better suited for compression [1][8][9], and some important compression approaches, such as run-length encoding (RLE), that work well in column-oriented systems are hard to use in row-oriented systems. Some work [1][8][9][10] concludes that RLE can barely be used in a row-oriented system.

We propose the column-based RLE (CRLE) algorithm, which applies RLE column by column inside a row-oriented database. CRLE incorporates RLE into the row-oriented compression framework. Experiments on a real dataset and a synthetic dataset show that CRLE brings the full advantage of RLE to row-oriented storage.

II. BACKGROUND AND RELATED WORK

A. Database Compression

According to whether data semantics can be used, database compression techniques are classified into physical compression and logical compression [11]. Physical compression treats data as an unstructured byte stream. Before being read or after being written by the upper query engine, the data goes through a compression module that is transparent to the query engine, and the compression module needs no extra information about the data. Any compression technique can be used directly as physical compression in a database system. In contrast, logical compression makes use of the schema information about the data when compressing and decompressing. For example, the data can be treated as fields rather than raw bytes, so that better value locality can benefit the compression. Because of this better use of data semantics, logical compression is more effective than physical compression and is more widely used in database systems. The database compression discussed in this paper refers to logical compression.

In a compression process, one can choose fields or whole records as the basic compression unit. For example, in Huffman coding, either one field or one record can be a message. Reference [3] was the first to study compression that takes the field as the basic compression unit. This kind of compression only needs to decompress the required fields when querying the data, but it incurs some space overhead to record the variable field lengths after compression. Commercial database systems usually use the record as the basic compression unit [2][5][10][12], so the length of the compressed record can be stored in the length field of the record header. The whole record then has to be decompressed during a query, instead of just the involved fields.

Popular implementations take the page as the compression granularity [10][12]. Sometimes, such as for text attributes, the granularity may be a single field [1], which means a single field value is compressed as a complete compression process.
The most widely used compression approach is dictionary coding [1]. It writes all distinct values into a dictionary, and each value in the source dataset is represented by its index in the dictionary. For example, an IP address normally takes 4 bytes, but if one compression granularity contains only 4 distinct IP values, just 2 bits are enough to represent each occurrence of an IP address. Dictionary coding is a kind of fixed-length encoding, and decompression only needs to look up the dictionary using the index code, so it has very high decompression performance.
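As a concrete illustration of this scheme, the sketch below dictionary-codes one page of 32-bit values under the assumption of at most 4 distinct values, so each occurrence fits in a 2-bit code. The function and variable names are our own, not part of any system described in this paper.

    /* Illustrative sketch only: a minimal dictionary coder for one page of
     * 32-bit values, assuming at most 4 distinct values so each occurrence
     * fits in a 2-bit code. Names and layout are ours. */
    #include <stdint.h>
    #include <stdio.h>

    #define MAX_DICT 4                 /* 4 distinct values -> 2-bit codes */

    static int dict_code(uint32_t dict[], int *dict_size, uint32_t v)
    {
        for (int i = 0; i < *dict_size; i++)   /* look up existing entry */
            if (dict[i] == v)
                return i;
        dict[*dict_size] = v;                  /* append new entry       */
        return (*dict_size)++;
    }

    /* Encode n values; returns the number of bytes of packed 2-bit codes. */
    static size_t dict_encode(const uint32_t *in, size_t n,
                              uint32_t dict[], int *dict_size, uint8_t *out)
    {
        size_t bits = 0;
        for (size_t i = 0; i < n; i++) {
            int code = dict_code(dict, dict_size, in[i]);
            out[bits / 8] |= (uint8_t)(code << (bits % 8));  /* pack 2 bits */
            bits += 2;
        }
        return (bits + 7) / 8;
    }

    int main(void)
    {
        uint32_t page[] = { 0x0A000001, 0x0A000002, 0x0A000001, 0x0A000003 };
        uint32_t dict[MAX_DICT]; int dict_size = 0;
        uint8_t packed[2] = { 0 };
        size_t bytes = dict_encode(page, 4, dict, &dict_size, packed);
        printf("4 IPs (16 bytes) -> %d dictionary entries + %zu byte(s) of codes\n",
               dict_size, bytes);
        return 0;
    }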
Huffman coding [13] builds a Huffman tree according to the word frequencies: the higher the frequency of a word, the shorter its Huffman code. Using these codes, the compressed data has the smallest expected length. Although Huffman coding can reach a high compression ratio, it is a heavy-weight approach that needs more CPU cycles.

Run-length encoding (RLE) packs consecutive identical values into a (value, length) pair to compress the data. For example, the sequence '1, 1, 1, 2, 2' is encoded as '(1, 3), (2, 2)'. When there are many runs of the same value, RLE can lead to a very high compression ratio. At the same time, RLE is very light-weight in terms of both compression and decompression performance.
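For reference, a minimal sketch of plain RLE over a flat array of 32-bit values, the starting point that CRLE later adapts to row-oriented pages, could look as follows; it is our own illustration and the names are hypothetical.

    /* Illustrative sketch: plain RLE over a flat array of 32-bit values.
     * Emits (value, length) pairs; '1 1 1 2 2' becomes (1,3)(2,2). */
    #include <stdint.h>
    #include <stdio.h>

    typedef struct { uint32_t value; uint32_t length; } Run;

    static size_t rle_encode(const uint32_t *in, size_t n, Run *out)
    {
        size_t runs = 0;
        for (size_t i = 0; i < n; ) {
            size_t j = i + 1;
            while (j < n && in[j] == in[i])   /* extend the current run */
                j++;
            out[runs].value  = in[i];
            out[runs].length = (uint32_t)(j - i);
            runs++;
            i = j;
        }
        return runs;                          /* number of (value, length) pairs */
    }

    int main(void)
    {
        uint32_t data[] = { 1, 1, 1, 2, 2 };
        Run runs[5];
        size_t k = rle_encode(data, 5, runs);
        for (size_t r = 0; r < k; r++)
            printf("(%u, %u) ", runs[r].value, runs[r].length);
        printf("\n");
        return 0;
    }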
Besides the approaches mentioned above, there are other compression techniques used in real systems, such as null suppression, delta coding, and bit packing [1][8].

B. Column-Oriented Storage, Row-Oriented Storage and Database Compression

Traditional databases use row-oriented storage (row-store), under which all the fields of the same record are stored consecutively on the same page. Some other databases adopt column-oriented storage (column-store) [14], under which the values from the same attribute of all records are stored consecutively on a page, and values from different attributes are on different pages. Compared to row-oriented storage, column-oriented storage is read-optimized: during a query, only the required fields are accessed, so disk I/O is reduced.

Research on column-oriented systems shows that compression is more effective in a column-store than in a row-store. In a row-oriented system, values from different attributes are stored together, and different attributes often have different value domains. This makes the value locality on the page very poor. In contrast, a column-oriented system naturally avoids this disadvantage, which leads to good compression results. A dense-packed rather than slotted page format also makes column-oriented pages more compressible [8]. RLE, one of the most important compression approaches in column-oriented systems, is rarely used in row-oriented systems [1][8][10]. The reason is that consecutive identical records are rare in the storage, and there has been no effective method to compress values from the same attribute across different records, so RLE in row-oriented systems is usually only applied to text fields [1].

There is much debate about which one is better, column-store or row-store [7][8][9][10]. Actually, column-store and row-store are optimized for different workloads, and neither can completely replace the other. Many techniques from the column-store can also be used in the row-store. A row-store can use a dense-packed page format to improve read performance. In terms of compression, a row-store can compress the data column-wise; for example, MySQL builds a different Huffman tree for each field during compression. A dense-packed page in the row-store also facilitates compression, as it does in the column-store [10][17].

All of this seems applicable except for RLE. When some attribute contains many consecutive identical values, it is unclear how to exploit this property in row-oriented storage. No matter whether we take the record or the field as the compression unit, using RLE directly makes no sense. As far as we know, there is no prior work on this topic. In the following sections we introduce our approach to run-length encoding the data column by column in a row-store.

III. COLUMN-BASED RLE

In this section, we first give a brief introduction to the storage module and the whole process of compression and decompression, and then we introduce CRLE. For the basic algorithm, we further make some improvements.

A. Storage Module and the Compression/Decompression Process

CRLE is not limited to a specific storage architecture, but the whole compression and decompression process involves many environmental details, so we briefly introduce the storage module and the compression/decompression module.

We have a read-optimized storage implementation in MySQL built with the pluggable storage engine mechanism. Our storage is row-oriented and dense-packed. In traditional storage, each record has a header containing metadata such as the record length and variable-length field offsets. We design our storage for fixed-length records, so the header is omitted. The page size is set appropriately so that no record crosses a page boundary.

Compression and decompression use the page as granularity. Compressed pages are variable in length, so we need a compression index for the compressed data. The offset of every block after compression is recorded in the compression index. When accessing the compressed data, the required block can be located conveniently through this index.
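To make the role of the compression index concrete, the following sketch shows one possible in-memory layout and the two lookups a scan needs: mapping a record offset to a block id, and mapping a block id to the extent of its compressed block in the data file. It is only an illustration under the assumptions above (fixed-size uncompressed blocks, variable-size compressed blocks); the structure and names are not the engine's actual format.

    /* Illustrative sketch: a compression index mapping block ids to the file
     * offsets of variable-length compressed blocks. Assumes fixed-size
     * uncompressed blocks; names and layout are ours. */
    #include <stdint.h>

    #define UNCOMP_BLOCK_SIZE (32 * 1024)   /* uncompressed block/page size */

    typedef struct {
        uint64_t *block_offset;   /* block_count + 1 entries; the last entry
                                     is the end-of-file offset (sentinel)   */
        uint32_t  block_count;
    } CompressionIndex;

    /* Translate a record offset in the logical uncompressed file into the
     * block id and the offset of the record inside that block. */
    void locate_record(uint64_t record_offset,
                       uint32_t *block_id, uint32_t *offset_in_block)
    {
        *block_id        = (uint32_t)(record_offset / UNCOMP_BLOCK_SIZE);
        *offset_in_block = (uint32_t)(record_offset % UNCOMP_BLOCK_SIZE);
    }

    /* Where to read the compressed block, and how many bytes it occupies. */
    uint64_t compressed_extent(const CompressionIndex *idx,
                               uint32_t block_id, uint64_t *length)
    {
        uint64_t start = idx->block_offset[block_id];
        *length = idx->block_offset[block_id + 1] - start;
        return start;
    }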
Fig. 1 shows the storage module. When a full table scan is performed, the compressor scans and decompresses the data with the support of the compression index. Each fixed-length block is sent to the page buffer after decompression. The tuple reader parses the records and sends them up to the query engine in an iterative manner. When an index scan is used, the record offset obtained from the index is transformed into a block id and an offset within the block. Through the compression index, the block id is mapped to the block offset in the data file. After reading the compressed block at that offset, the compressor decompresses the whole block and sends it to the page buffer. The tuple reader then gets the specific record using the offset within the block. For index scans, the page-level decompression incurs some extra cost. This problem exists for compression approaches such as RLE and delta coding; one size does not fit all, and here we only care about the performance of table scan queries.

Figure 1. The storage module.
B. Basic Algorithm

The basic process is to scan the fields of all records and write uncompressed fields out directly; for RLE fields, one run-length record is written for each run of identical values from the same field. In this way, values from different records are mixed together while being run-length encoded.

Fig. 2 shows an example. The left side is the block before compression. There are 4 records, each consisting of 4 fields. There are long runs of the same value in the second and the fourth fields, so CRLE should be used on these two fields. The right side is the data layout after compression. Within the row-store page, CRLE packs identical values from the same attribute into run-length format. Next we describe the CRLE algorithm.

Figure 2. Example of CRLE compression.

Assume a record consists of m attributes. Set a counter li (1 <= i <= m) for each attribute. We refer to the fields on which RLE runs as compressed fields and to the other ones as uncompressed fields. All counters for compressed fields are initially set to 0, and the other counters are set to -1.

Compression. Sequentially scan every field of each record. If the current field is an uncompressed field, it is written to the output buffer without any further operation. If it is a compressed field, read the counter li for this attribute: if li equals 0, compute, starting from the current record, the run length len of this field over the subsequent records, write the (value, len) pair into the output buffer, and assign len - 1 to li; if li is greater than 0, this value has already been coded into the previous run-length code, so it is skipped and li is only decreased by 1 for this attribute.

Decompression. In decompression, we read the input buffer and fill the fields of the records in the output buffer. For uncompressed fields, the next value in the input buffer is put directly into the output buffer. For compressed fields, if li equals 0, read the (value, len) pair from the input buffer, put the value into the next len records in the output buffer, and assign len - 1 to li; if li is greater than 0, the value of the current field in the output buffer has already been set previously, so we just skip this field and decrease li by 1.
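The following sketch renders the basic algorithm in C for a block of fixed-length records with 32-bit fields; compress[c] marks the RLE columns. The output layout (plain values and (value, len) pairs emitted in scan order) follows the description above, but the concrete types and names are our own illustration, not the paper's code.

    /* Illustrative sketch of the basic CRLE compression pass over a block of
     * fixed-length records with 32-bit fields. 'compress[c]' marks the RLE
     * (compressed) columns. Uncompressed fields are copied as-is in scan
     * order; each run of a compressed field is emitted once as a (value, len)
     * pair. Types and names are ours. */
    #include <stdint.h>
    #include <stddef.h>

    size_t crle_basic_compress(const uint32_t *in, size_t rec_count,
                               size_t field_count, const int *compress,
                               uint32_t *out)
    {
        long   l[64];                      /* per-attribute counters, field_count <= 64 */
        size_t o = 0;

        for (size_t c = 0; c < field_count; c++)
            l[c] = compress[c] ? 0 : -1;

        for (size_t r = 0; r < rec_count; r++) {
            for (size_t c = 0; c < field_count; c++) {
                uint32_t v = in[r * field_count + c];
                if (l[c] == -1) {          /* uncompressed field: copy through   */
                    out[o++] = v;
                } else if (l[c] == 0) {    /* start of a new run on this column  */
                    uint32_t len = 1;
                    while (r + len < rec_count &&
                           in[(r + len) * field_count + c] == v)
                        len++;             /* look ahead down the same column    */
                    out[o++] = v;
                    out[o++] = len;
                    l[c] = (long)len - 1;  /* skip the rest of the run later     */
                } else {
                    l[c]--;                /* value already covered by the run   */
                }
            }
        }
        return o;                          /* number of 32-bit words written */
    }

Note how the look-ahead down the same column is exactly the record-jumping access pattern that the refinement below removes.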
C. Refinement

The basic algorithm is intuitive, but it does not access the buffer in a strictly sequential manner; instead it jumps across records to handle the run-length codes. This causes two problems.

First, random access to memory leads to poor cache performance. Especially when the working set exceeds the L2 cache capacity, L2 cache misses impose a considerable penalty. This means the compression granularity should be smaller than the L2 cache size, while a small granularity costs RLE much of its compression efficiency in terms of compression ratio.

Second, for fixed-length records random access is easy, but for variable-length records jumping between records is not a trivial task. Random access to the fields of variable-length records has to resort to some kind of index, such as record offsets or length information, so the basic algorithm incurs considerable overhead on variable-length records.

We adjusted the basic algorithm to overcome these problems. In compression, we use a new counter structure li{value, len, offset}, where value and len record the run-length information and offset records the location reserved in the output buffer for this (value, len) pair. The initial value of li is set while scanning the first record: for uncompressed fields, len is set to -1; for compressed fields, value is set to the field value, len is set to 1, and offset is set to the current output buffer pointer, after which the buffer pointer is advanced to pre-allocate space for the later storage of the (value, len) pair. The subsequent records are then scanned sequentially one by one. Uncompressed fields are written into the output buffer, and compressed fields need some processing: if the current field value is identical to li.value, increase li.len by 1; otherwise, write the (li.value, li.len) pair into the output buffer at li.offset, then assign the field value to li.value, set li.len to 1, set li.offset to the current output buffer pointer, and advance the pointer again for pre-allocation. Table I gives a brief description of the algorithm.

Accordingly, in decompression, when the (value, len) pair for some compressed field is read, the value is not immediately assigned to the fields of all the subsequent len records; instead this information is recorded in the counter li. While the records are written sequentially into the output buffer, li.value is put into the corresponding field and li.len is decreased by 1 each time, until it reaches 0 and a new (value, len) pair is read from the input buffer. Table II gives the description of the decompression algorithm.

TABLE I. CRLE COMPRESSION ALGORITHM

    Counter l[fieldCount];
    for (c = 0; c < fieldCount; c++)   // Initialize counters.
    {
        if (column c needs compressing)
            l[c].len = 0;
        else
            l[c].len = -1;
    }
    for (r = 0; r < recCount; r++)
    {
        for (c = 0; c < fieldCount; c++)
        {
            Read v from input buffer;
            if (l[c].len == -1)        // uncompressed field
            {
                Write v into output buffer;
                current_offset_in_outBuf += size_of_v;
            }
            else                       // compressed field
            {
                if (v == l[c].value)
                    l[c].len++;
                else
                {
                    if (this is not the first record)   // The first-record case can be
                                                        // handled separately in a real implementation.
                        Write (l[c].value, l[c].len) into l[c].offset of the output buffer;
                    l[c].value = v;
                    l[c].len = 1;
                    l[c].offset = current_offset_in_outBuf;
                    current_offset_in_outBuf += size_of_value_and_len;
                }
            }
        }
    }
    for (c = 0; c < fieldCount; c++)   // Flush the last pending run of each compressed column.
    {
        if (l[c].len > 0)
            Write (l[c].value, l[c].len) into l[c].offset of the output buffer;
    }

TABLE II. CRLE DECOMPRESSION ALGORITHM

    Counter l[fieldCount];
    for (c = 0; c < fieldCount; c++)   // Initialize counters.
    {
        if (column c needs compressing)
            l[c].len = 0;
        else
            l[c].len = -1;
    }
    for (r = 0; r < recCount; r++)
    {
        for (c = 0; c < fieldCount; c++)
        {
            if (l[c].len == -1)        // uncompressed field
            {
                Read v from input buffer;
                Write v into output buffer;
            }
            else                       // compressed field
            {
                if (l[c].len == 0)
                    Read l[c].value, l[c].len from input buffer;
                Write l[c].value into output buffer;
                l[c].len--;
            }
        }
    }
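For concreteness, a compact C rendering of the refined pass of Table I on fixed-length records with 32-bit fields might look as follows. The counter mirrors li{value, len, offset}; the word-based buffer layout, the names, and the explicit final flush of pending runs are our own illustrative choices rather than the paper's implementation.

    /* Illustrative sketch of the refined (CRLE-2) compression pass of Table I:
     * one strictly sequential scan, with each pending run back-patched into the
     * slot that was pre-allocated for it. 32-bit fields; names are ours. */
    #include <stdint.h>
    #include <stddef.h>

    typedef struct {
        uint32_t value;    /* current run value                            */
        long     len;      /* run length; -1 marks an uncompressed column  */
        size_t   offset;   /* pre-allocated slot for the (value, len) pair */
    } Counter;

    size_t crle_refined_compress(const uint32_t *in, size_t rec_count,
                                 size_t field_count, const int *compress,
                                 uint32_t *out)
    {
        Counter l[64];                     /* assumes field_count <= 64 */
        size_t  o = 0;

        if (rec_count == 0)
            return 0;

        /* Initialize the counters from the first record. */
        for (size_t c = 0; c < field_count; c++) {
            uint32_t v = in[c];
            if (!compress[c]) {
                l[c].len = -1;
                out[o++] = v;              /* uncompressed: copy through */
            } else {
                l[c].value  = v;
                l[c].len    = 1;
                l[c].offset = o;
                o += 2;                    /* pre-allocate space for (value, len) */
            }
        }

        /* Scan the remaining records strictly sequentially. */
        for (size_t r = 1; r < rec_count; r++) {
            for (size_t c = 0; c < field_count; c++) {
                uint32_t v = in[r * field_count + c];
                if (l[c].len == -1) {
                    out[o++] = v;
                } else if (v == l[c].value) {
                    l[c].len++;            /* run continues */
                } else {
                    out[l[c].offset]     = l[c].value;     /* back-patch old run */
                    out[l[c].offset + 1] = (uint32_t)l[c].len;
                    l[c].value  = v;
                    l[c].len    = 1;
                    l[c].offset = o;
                    o += 2;                /* pre-allocate the next pair */
                }
            }
        }

        /* Flush the last pending run of every compressed column. */
        for (size_t c = 0; c < field_count; c++) {
            if (l[c].len > 0) {
                out[l[c].offset]     = l[c].value;
                out[l[c].offset + 1] = (uint32_t)l[c].len;
            }
        }
        return o;                          /* number of 32-bit words written */
    }

Every write to the output advances monotonically except for the two-word back-patches, which is why the access pattern stays cache-friendly even when the block exceeds the L2 cache.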
D. Discussion

There is another possible approach to compressing column by column: set a separate buffer for each field and physically split the record into field buffers. This is essentially a column-store approach. The problem is allocating buffers for all the fields: because fields have variable lengths, a large enough buffer has to be allocated for each attribute, which causes much memory overhead. Alternatively, using just one buffer and compressing all attributes together greatly reduces the memory usage.

In the CRLE process, the non-RLE fields can also be compressed with other approaches. For example, if we want to use Huffman coding on the first column, we just need to encode every value of this column before writing it into the output buffer. So CRLE does not conflict with other compression approaches; it actually incorporates RLE into the row-store compression framework.
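One way to picture this combination is a per-column dispatch in the inner loop of the compressor: RLE columns go through the counter logic of Table I, while every other column is passed through its own encoder before being written. The sketch below is only our illustration of that idea; the callback type and names are hypothetical, and the RLE branch is left as a placeholder referring back to Table I.

    /* Illustrative sketch: combining CRLE with other per-column encoders in
     * the same pass. Names and the callback type are hypothetical. */
    #include <stdint.h>
    #include <stddef.h>

    typedef enum { COL_PLAIN, COL_RLE, COL_ENCODED } ColumnCodec;

    /* Encodes one value of a non-RLE column into 'out'; returns bytes written. */
    typedef size_t (*ValueEncoder)(uint32_t value, uint8_t *out);

    typedef struct {
        ColumnCodec  codec;     /* how this column is compressed       */
        ValueEncoder encode;    /* used only when codec == COL_ENCODED */
    } ColumnSpec;

    void compress_field(uint32_t v, const ColumnSpec *col,
                        uint8_t *out, size_t *o)
    {
        switch (col->codec) {
        case COL_RLE:
            /* The run-length counter logic of Table I goes here: compare v with
             * the column's pending run and back-patch the (value, len) pair. */
            break;
        case COL_ENCODED:
            *o += col->encode(v, out + *o);   /* e.g. Huffman- or dictionary-code v */
            break;
        default:                              /* plain copy of the 4-byte value */
            out[(*o)++] = (uint8_t)v;
            out[(*o)++] = (uint8_t)(v >> 8);
            out[(*o)++] = (uint8_t)(v >> 16);
            out[(*o)++] = (uint8_t)(v >> 24);
            break;
        }
    }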
IV. EXPERIMENTS

The experiments compare CRLE with other compression approaches in terms of compression ratio and compression/decompression performance. We also choose different compression granularities to analyze their impact on the compression.

A. Experiments Configuration

We use two datasets, a real dataset A and a synthetic dataset B.

Dataset A is from a network security monitoring application. The whole dataset consists of about 181,000,000 records, each of which has 8 integer attributes. The original storage used by A is 5.4GB. There are large runs of the same value in five attributes, including the timestamp attribute; this property is what motivated the research in this paper. We do not describe the detailed data distribution of A here; the experiments on A are meant to show the effect of CRLE in a real application.

We give a more detailed analysis in the experiments on B. Dataset B is the one used in [10]. It is a TPC-H dataset with some skew adjustments. The data comes from the natural join of lineitem, customer and supplier. Seven attributes are extracted: partkey, l_extendedprice, o_quantity, o_totalprice, o_orderdate, s_nationkey and c_nationkey. Some skew is introduced into o_orderdate, s_nationkey and c_nationkey: (1) 99% of the o_orderdate values are weekdays, and 40% of those fall in the two weeks before Christmas Day and Mother's Day respectively; (2) s_nationkey and c_nationkey follow the distribution of the statistics released by the WTO. There are two reasons for these adjustments. First, to evaluate compression, some data skew is required. Second, the data is more realistic after the adjustments. To better exploit the locality of o_orderdate, this attribute is split into the week, the day of the week and the year. The final schema of B thus becomes (LPK, LPR, QTY, OPR, WK, DAYOFWK, YR, SNAT, CNAT). The data consists of about 134,000,000 records sorted on the skewed attributes and uses 4.5GB of storage.

We choose two other compression approaches that fit the data properties. One is the heavy-weight Huffman coding, and the other is the light-weight dictionary coding. The Huffman coding we use is canonical Huffman coding [8, 14], the variant optimized for decoding.

The compression and decompression experiments are run in an integrated system to avoid the disparity between paper results and real scenarios. Compression is done in an offline manner, running the compression program directly on the data file. Decompression is evaluated with 'select count(*) from T'. The whole running time is broken down to give a detailed analysis.

The hardware configuration is a two-way Dual-Core AMD Opteron™ Processor 2216 with 4GB DRAM and a 135GB SCSI disk. The operating system is Linux with kernel 2.6.9-42. We use OPROFILE to collect performance data.

B. Experiments on Dataset A

We choose a 32KB compression granularity and compress the five compressible attributes. Of the whole 5.4GB, the data of the five compressible attributes occupies 3.4GB.

Fig. 3 gives the data size of the compressible attributes after compression. All three approaches achieve a very high compression ratio; the compressed sizes are below 150MB. Huffman coding and dictionary coding have a similar effect. The Huffman-coded data is a little bigger than the dictionary-coded data because of the Huffman symbol table cost. CRLE is 10 times better than the other two. If we used traditional RLE, the data size would be larger than the original, because there are barely any consecutive identical field values or records.

Figure 3. Data size of compressible attributes after compression.

Fig. 4 shows the compression time of the different approaches. We denote the basic and the improved CRLE by CRLE-1 and CRLE-2 respectively. Before compression, the source data is not in the memory buffer, and during compression the compressed data is written to the output file continuously. We break the whole time down into compression time, OS-relevant time spent in system calls, and disk I/O time. The disk I/O time is the I/O time excluding the part that overlaps with CPU time. Although Huffman coding needs to build the Huffman tree, the number of distinct values is very small and this cost is negligible, so the performance of Huffman coding and dictionary coding is almost the same. CRLE is 25% faster than the other two in terms of completion time, and 1.5~2.5 times faster in terms of compression time. Because of its better cache efficiency, the compression time of CRLE-2 is 33% faster than that of CRLE-1.

Figure 4. Compression time.

Fig. 5 shows the data access time. Before reading, the data is not in the memory buffer. 'init' represents the access time when the data is not compressed at all. When the data is compressed, the disk I/O is greatly reduced, so data access is 60%~90% faster than init. If all the data were in memory before access, compression would add decompression overhead compared with no compression at all. However, in applications with large-scale data, compression lets more data reside in memory and hence leads to better overall performance. The access time of CRLE is 20% faster than that of Huffman coding and dictionary coding. For the decompression time, dictionary coding is 20% faster than Huffman coding, and CRLE-2 is twice as fast as dictionary coding and 40% faster than CRLE-1.

Figure 5. Data access time.
C. Experiments on Dataset B

As with dataset A, we run compression/decompression on the data, and the data of the five compressible attributes is actually compressed. Of the whole 4.5GB, the data size of the five compressible attributes is 2.5GB. We choose different compression granularities to give a detailed analysis.

Fig. 6 shows the data size of the compressible attributes after compression. Across the different granularities, CRLE achieves a compression ratio up to 10 times higher than Huffman coding and dictionary coding in the best case. For Huffman coding and dictionary coding, the compression ratio grows as the granularity increases from 4KB to 32KB. The Huffman symbol table cost makes its compression ratio smaller than that of dictionary coding. When the granularity is bigger than 32KB, the compression ratio begins to drop, because the number of distinct values in one page grows, which makes the codes for the values longer. For CRLE, the compression ratio keeps going up with the growth of the granularity.

Figure 6. Data size of compressible fields after compression.

Fig. 7 shows the compression time under different granularities. In each granularity group of the chart, the bars represent CRLE-1, CRLE-2, Huffman coding and dictionary coding from left to right. Because the CPU's L2 cache size is 1MB, the performance of every approach degrades when the granularity is larger than 512KB. For Huffman coding and dictionary coding, the growth of the symbol table size is another important reason for the longer running time at large granularities. Compared to CRLE-1, the running time of CRLE-2 does not vary much, because of its better cache behavior.

Figure 7. Compression time.

The decompression performance under different granularities behaves similarly to compression. Fig. 8 shows the access time of the compressed data. The access time becomes longer beyond a 1MB granularity. The decompression performance of Huffman coding is related to the Huffman symbol table size, so a bigger granularity leads to a longer decompression time. In contrast, dictionary coding is free from this degradation because of its direct index-based decompression. The decompression time of CRLE-1 varies greatly with the growth of the granularity; especially when the granularity exceeds the L2 cache, its decompression time almost reaches that of Huffman coding. CRLE-2 keeps a steady performance whatever the granularity is.

Figure 8. Data access time.

We can see from Fig. 9 that CRLE-1 has very poor cache efficiency when the granularity is larger than 512KB. This supports the decompression performance figures of CRLE-1.

Figure 9. Number of L2 cache misses during decompression.
Fig. 10 shows the micro-ops executed by the CPU during decompression. CRLE needs far fewer operations than Huffman coding and dictionary coding. Due to the implementation, CRLE-2 executes slightly fewer ops than CRLE-1, so even under small granularities CRLE-2 is still a little faster than CRLE-1.

Figure 10. Micro-ops executed by CPU during decompression.

V. CONCLUSION

Traditional RLE is mostly used in column-oriented systems. In row-oriented systems, RLE compresses data from different columns together, so it is hard to achieve a good compression ratio due to the poor value locality. CRLE exploits the value locality within columns and thereby brings the efficiency of RLE to row-oriented systems. CRLE does not conflict with other compression approaches; rather, it incorporates RLE into the row-store compression framework.

The refinement of the basic CRLE further improves the performance by optimizing cache efficiency. Experiments show that the adjustments keep CRLE from degrading when the granularity exceeds the L2 cache capacity.

REFERENCES

[1] D. J. Abadi, S. R. Madden, and M. C. Ferreira, "Integrating compression and execution in column-oriented database systems," Proc. SIGMOD 2006, ACM, 2006, pp. 671-682.
[2] G. V. Cormack, "Data compression on a database system," Commun. ACM, 28(12):1336-1342, 1985.
[3] G. Graefe and L. Shapiro, "Data compression and database performance," Proc. ACM/IEEE-CS Symp. on Applied Computing, 1991, pp. 22.
[4] J. Goldstein, R. Ramakrishnan, and U. Shaft, "Compressing relations and indexes," Proc. ICDE 1998, IEEE Computer Society, 1998, pp. 370-379.
[5] B. R. Iyer and D. Wilhite, "Data compression support in databases," Proc. VLDB 1994, Morgan Kaufmann, 1994, pp. 695-704.
[6] G. Ray, J. R. Haritsa, and S. Seshadri, "Database compression: A performance enhancement tool," Proc. COMAD 1995, 1995.
[7] S. Harizopoulos, V. Liang, D. J. Abadi, and S. Madden, "Performance tradeoffs in read-optimized databases," Proc. VLDB 2006, Morgan Kaufmann, 2006, pp. 487-498.
[8] A. L. Holloway and D. J. DeWitt, "Read-optimized databases, in depth," Proc. VLDB 2008, Morgan Kaufmann, 2008, pp. 502-513.
[9] D. J. Abadi, S. R. Madden, and N. Hachem, "Column-stores vs. row-stores: how different are they really?" Proc. SIGMOD 2008, ACM, 2008, pp. 967-980.
[10] A. L. Holloway, V. Raman, G. Swart, and D. J. DeWitt, "How to barter bits for chronons: compression and bandwidth trade offs for database scans," Proc. SIGMOD 2007, ACM, 2007, pp. 389-400.
[11] Oracle 11g Data Compression Tips for the DBA. http://www.dba-oracle.com/oracle11g/sf_Oracle_11g_Data_Compression_Tips_for_the_DBA.html.
[12] M. Poess et al., "Data compression in Oracle," Proc. VLDB 2003, Morgan Kaufmann, 2003, pp. 937-947.
[13] D. Huffman, "A method for the construction of minimum redundancy codes," Proc. I.R.E., 40(9), pp. 1098-1101, September 1952.
[14] M. Stonebraker et al., "C-Store: a column-oriented DBMS," Proc. VLDB 2005, Morgan Kaufmann, 2005, pp. 553-564.
[15] P. O'Neil and E. O'Neil, "Database: Principles, Programming, and Performance," 2nd edition.
[16] The MySQL documentation. http://dev.mysql.com/doc.
[17] V. Raman and G. Swart, "How to wring a table dry: entropy compression of relations and querying of compressed relations," Proc. VLDB 2006, Morgan Kaufmann, 2006, pp. 858-869.