Release date: 2025-10-22
Project due: 2025-11-12 23:59:59-05
Write-up due: 2025-11-13 23:59:59-05
Last updated: 2025-10-21
Update 2025-10-21: The submission for project 4 will open in about two weeks; please check this website for updates. Most of the tests in this project will not be visible to you, so you should write simple tests of your own to exercise your bulk loading implementation. The write-up templates will also be released in about two weeks.
This project will be built upon your code for previous projects. Starting from this project, the solution code for the previous projects will not be released.
To get started with project 4, extract
/ro-data/labs/lab4.tar.xz when your current
working directory is your local repository root, and import the
supplemental files as you did
in previous projects.
The code should
build without compilation errors once the supplemental files
are imported, but most of the additional tests are likely to
fail. Note: some of the past test cases may also fail because we
enabled new features that you haven't implemented (e.g.,
BasicTestRepoCompilesAndRuns.TestShouldAlwaysSucceed).
Once you complete the tasks below,
these past test cases should pass automatically. If not, it is highly
likely that your code has undiscovered bugs and may not pass
the hidden system tests. You may list
all the tests in your build directory using the ctest
-N command.
Please also run ./import_data.sh in the repository root to import
data if you're working within the dev container. If you're not working within
the dev container, please copy the data tarballs into the data
subdirectory (which may not exist yet) under the repository root, and extract
all of them. Once you successfully import the data, you should find the
following directories under data/:

- preloaded_btree
- tpch_s01
- tpch_s01_ans
- tpch_s1
- tpch_s1_ans

(IMPORTANT!) Before committing, make sure the following lines are added to .gitignore in your repository to avoid committing these large files to GitHub:

/data/preloaded_btree
/data/tpch_s01
/data/tpch_s01_ans
/data/tpch_s1
/data/tpch_s1_ans
A few useful hints:
- Many test cases in this project depend on a correct implementation of VarlenDataPage. Some test cases also depend on a correct implementation of Table for storing test data. Hence, you need to make sure your implementations of them conform to the specification. However, we do not recommend overwriting VarlenDataPage with the solution code, because you have new tasks to complete in VarlenDataPage, and it might be harder for you to implement and debug new functionality on top of the solution code than on top of your own code.
- The new tests for this project can be found in the tests/ directory.
In this project, you will build a B-Tree index, one of the
index access methods in Taco-DB. More specifically, your
goal is to understand the structure of the B-Tree and
implement the core pieces of insert/delete/search/scan
functions. This B-Tree supports variable-length keys, thus
does not have a stable fanout. To simplify our implementation and maximize
code reuse, TacoDB’s B-Tree utilizes
VarlenDataPage to organize all its nodes.
Source files to read:
- include/index/Index.h
- include/index/IndexKey.h
- include/index/tuple_compare.h
- src/index/Index.cpp
- src/index/tuple_compare.cpp
- include/index/volatiletree/VolatileTree.h
- src/index/volatiletree/VolatileTree.cpp

Source files to modify: none
As a general and important interface in an RDBMS, Index is the base class of all types of indexes in Taco-DB. Its public interface is defined in Index.h. The metadata of an index, including its base table ID (the table over which the index is built), its index type, its virtual file FID (the vfile that contains the index data), etc., are stored in the catalog. These metadata are passed around in the system as an IndexDesc when they are loaded into memory.
When a new index is created for a table, a new virtual file is
allocated for it and all the metadata including the virtual
file ID are stored into the catalog. The system will then
create an IndexDesc filled with the metadata and
call a static function Index::Initialize() to
properly initialize the index, which in turn dispatches the
call to the specified index's initialization function. The
initialize function should perform essential checks and perform
the necessary initialization such that the vfile becomes
a valid empty index. At this point, the system should
be able to create any number of instances of index objects that
reference this index vfile. As the final step, the system
calls Index::BulkLoad() to load the existing data
in the base table into the newly created index. The default, and often less efficient, implementation of bulk loading, Index::BulkLoad(), scans the base table and inserts data entries one by one using the Index::InsertKey() interface. (Note: in Task 2 of this project, you will implement a more efficient B-tree-specific bulk loading algorithm in BTree::BulkLoad().)
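For intuition, the default strategy is conceptually just a loop over the base table. Below is a minimal sketch of that idea; the iterator type and accessor names are hypothetical stand-ins, not the actual interfaces in Index.cpp:

    // Conceptual sketch of the default (slow) bulk loading strategy:
    // scan the base table and insert entries one at a time.
    // `BulkLoadSource`, `GetKey()`, and `GetRecordId()` are hypothetical
    // placeholders for whatever iterator your codebase provides.
    void SlowBulkLoad(Index *index, BulkLoadSource &src) {
        while (src.Next()) {
            const IndexKey *key = src.GetKey();  // key built from the heap record
            RecordId rid = src.GetRecordId();    // the heap record's RID
            index->InsertKey(key, rid);          // one insert per data entry
        }
    }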
To implement an index, one should implement the static Initialize and Create functions and override the public interfaces for inserting, deleting, searching for, and scanning through data entries.
The data entries in all the indexes in TacoDB are organized in
Alternative 2, where each index entry is a
(key(s), rid) pair. Note that rid is
unique for each record. When we compare data entries,
rid is included as the last key field and thus
serves as a tie-breaker in case there are duplicate index keys.
In other words, all data entries are unique even though keys in
the records may have duplicates.
Data entries or index entries are collectively called index
records in Taco-DB. An index record is usually encoded as a
record with a fixed-size header followed by a record payload
that stores the actual index key values.
Index::StartScan() will initialize an iterator
that may be used to scan through all data entries falling into
a particular search range. When scanning through the iterator,
you need to ensure that each data entry will be touched
EXACTLY ONCE. Besides, since all indexes in the current version of TacoDB are range indexes, your index iterators should scan through data entries in ascending order.
There is a flag in the catalog information for indexes to indicate whether this index is a unique index. If an index is non-unique, you should reject an insertion only if you are inserting a duplicate data entry with the same key and same record ID. However, if an index is unique, you should reject an insertion if there exists some data entry in the index with the same key but different record id.
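To make the two cases concrete, consider the following sketch of the rejection rule (the helper functions are hypothetical placeholders for a B-tree descent plus leaf-level search, not functions in the codebase):

    // Sketch of the insertion rejection rules described above.
    // ExistsKeyWithOtherRid / FindExactEntry are hypothetical helpers.
    bool ShouldRejectInsert(bool is_unique_index,
                            const IndexKey *key, RecordId rid) {
        if (is_unique_index) {
            // Unique index: reject if some entry has the same key but a
            // different record ID.
            return ExistsKeyWithOtherRid(key, rid);
        }
        // Non-unique index: reject only an exact duplicate of (key, rid).
        return FindExactEntry(key, rid);
    }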
We provided a volatile (in-memory) tree index implemented
using std::multimap in index/volatiletree as a
reference implementation. If you are unsure about what the
specification of a specific interface means, you may always
refer to the volatile tree implementation. Any sequence of insert/delete/update/scan operations in your B-Tree implementation should produce exactly the same result in the eyes of the user (i.e., the index contains the same set of data entries in the same order, the same errors are raised if there are any, and an index scan produces the same set of index key and record ID pairs). However, you MUST NOT use any STL container,
Abseil container, or any other in-memory container to
implement your B-Tree.
We want to make a quick note about the difference between
IndexKey and the key payload of an index record.
IndexKey is an in-memory structure that stores an
array of references to in-memory Datum
objects, which are either primitive C/C++ values or pointers to
variable-length data. You may think of an
IndexKey as an
std::vector<NullableDatumRef> with a more
compact memory layout and a trivial destructor. We use that to
pass search keys or keys constructed from heap records.
A key payload in an index
record is the keys of this record stored in a serialized record
format (as defined by the Schema class, see also
Project 2, Section
1.3). To use a value in the record, you must use
Schema::GetField()/Schema::DissemblePayload()
or other related functions to extract the field into a
Datum. In this project, you don't have to worry about extracting individual fields from a record to form an index key, as that code is provided. However, it is important that you understand the distinction. Never copy an IndexKey into an index record directly; instead, use the key schema to serialize it into a record payload.
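To illustrate the distinction, here is a short sketch (the function is made up for illustration; the exact GetField signature should be checked against Schema.h):

    // Sketch: extract the first key field from a serialized key payload.
    // A key payload is serialized bytes; fields must be deserialized into
    // Datum before use.
    Datum FirstKeyField(const Schema *key_schema, const char *key_payload) {
        return key_schema->GetField(0, key_payload);  // assumed signature
    }
    // An IndexKey, by contrast, merely references in-memory Datum objects;
    // turning one into an index record must go through the key schema's
    // serialization routines, never a raw copy of the IndexKey itself.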
One last thing to note is the ordering of data entries. For any two data entries with m key fields, say d = (k1, k2, ..., km, rid) and d' = (k1', k2', ..., km', rid'), assuming the type of any key field has a total order among all its non-NULL values, we order the two data entries lexicographically using the following rules:

- NULL is always smaller than any non-NULL value, and NULL values compare equal to each other (which is different from the semantics of SQL expressions).
- If there is some i such that kj == kj' for all j < i, and ki < ki', then d < d'.
- If there is some i such that kj == kj' for all j < i, and ki > ki', then d > d'.
- Otherwise, all the key fields compare equal, and the two entries are ordered by their record IDs (see the comparison operators defined in include/storage/Record.h). However, an invalid record ID should always be treated as smaller than any other record ID (which is not handled by the comparison operators of RecordId).
Because the majority of comparisons in indexes are between a search key and a serialized key payload, we provide two routines for comparing an IndexKey with a key payload buffer in include/index/tuple_compare.h and src/index/tuple_compare.cpp. Note that, if there are missing key fields in the search key, we ignore the extra key fields in the key payload. Because of that, you'll need to handle partial key searches and the comparison of the tie-breaker (record ID) on top of these routines in later tasks for the B-tree.
Source file to read:

- include/index/btree/BTreeInternal.h

Source files to modify:

- include/index/btree/BTree.h
- src/index/btree/BTree.cpp
- src/index/btree/BTreeBulkload.cpp
- src/index/btree/BTreeIterator.cpp
The physical structure of the B-Tree in TacoDB is very similar
to the one explained during our lectures, which contains
internal nodes and leaf nodes. All the structures of a B-Tree
live in a virtual File managed by
FileManager, whose first page is a meta page
BTreeMetaPageData containing a page number
pointing to the root. Note the root node of a B-Tree can be
either a leaf or internal node, depending on whether there is
enough space for a leaf page to fit all the data entries.
One key deviation from the B-tree described in the lecture is that every level in this tree is doubly-linked, rather than just the leaf level. Your code should maintain these horizontal links when performing structural modification (split/merge of pages).
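For instance, when splitting a page on any level, you will need pointer updates along these lines (a sketch only: the header field names are hypothetical; consult BTreeInternal.h for the real BTreePageHeaderData layout):

    // Sketch: maintain the doubly-linked sibling pointers after splitting
    // `left` into `left` + a newly allocated `right` page. Field names
    // (m_prev_pid, m_next_pid) are hypothetical stand-ins.
    void LinkSiblingsAfterSplit(BTreePageHeaderData *left_hdr,
                                BTreePageHeaderData *right_hdr,
                                PageNumber left_pid, PageNumber right_pid,
                                BTreePageHeaderData *old_next_hdr /* may be null */) {
        right_hdr->m_next_pid = left_hdr->m_next_pid; // right inherits old next
        right_hdr->m_prev_pid = left_pid;             // right points back to left
        left_hdr->m_next_pid  = right_pid;            // left now points to right
        if (old_next_hdr != nullptr) {
            old_next_hdr->m_prev_pid = right_pid;     // fix the old neighbor
        }
    }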
In this project, you will need to rely on your
VarlenDataPage implementation in project 2 to
construct both leaf and internal nodes. A B-Tree page is
encoded as a VarlenDataPage. The figure above
shows the physical layout of the page representing a B-Tree
node. You may find its first 24 bytes are exactly the same as
those of a regular heap file page. The main differences are:

- There is a BTreePageHeaderData stored in the user-defined data region. You should reserve enough space for that region by passing usr_data_sz when calling VarlenDataPage::Initialize. Specifically, the header includes flags indicating whether this is the root node and whether this is a leaf node, the pointers to its two neighboring siblings on the same level, and the total number of bytes used by index records in the page.
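A possible way to format a fresh B-tree node page, assuming your project 2 VarlenDataPage interface (the exact initialization function name and the accessor below are assumptions to verify against your own code and BTreeInternal.h):

    // Sketch: format `pagebuf` (a pinned, page-sized buffer) as a B-tree node.
    void InitBTreeNodePage(char *pagebuf) {
        // Reserve the user-defined data region for the B-tree page header
        // (function name per the text above; check your project 2 code).
        VarlenDataPage::Initialize(pagebuf, sizeof(BTreePageHeaderData));
        VarlenDataPage pg(pagebuf);
        auto *hdr = (BTreePageHeaderData *) pg.GetUserData(); // assumed accessor
        // Then set the root/leaf flags, the sibling pointers (initially
        // invalid), and the byte count of index records (initially 0).
        (void) hdr;
    }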
To compare two data entries or index entries, you should compare not only the index key but also the record ID. You can compare the index keys using TupleCompare() and then further compare the record IDs if the index key comparison turns out to be equal. Note that an invalid record ID should be treated as smaller than any other record ID (including any other invalid record ID; there are many different possible invalid record IDs). We will never try to insert a data entry with an invalid record ID. Different from the three-valued logic in SQL queries, NULL values are considered smaller than any non-NULL values in an index, and NULL values compare equal in an index, because we need to index NULL values as well.
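Putting these rules together, the full comparison might look like the following sketch (the TupleCompare signature and the RecordId validity check are assumptions to verify against tuple_compare.h and Record.h):

    // Sketch: compare (key, rid) against an index record's key payload and
    // its record ID. Returns <0, ==0, >0 like strcmp. The NULL ordering
    // rules are assumed to be handled inside TupleCompare itself.
    int CompareDataEntry(const IndexKey *key, RecordId rid,
                         const char *payload, RecordId rec_rid,
                         const Schema *key_schema) {
        int cmp = TupleCompare(key, payload, key_schema);  // assumed signature
        if (cmp != 0)
            return cmp;
        // Keys equal (note: a partial key also compares equal to any payload
        // it is a prefix of) -- fall back to the record ID tie-breaker.
        if (!rid.IsValid())                    // invalid RID sorts first
            return rec_rid.IsValid() ? -1 : 0;
        if (!rec_rid.IsValid())
            return 1;
        if (rid < rec_rid) return -1;
        if (rec_rid < rid) return 1;
        return 0;
    }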
Further, we will enforce lexicographical order for partial key
matches in B-Tree. This means a key like (1, 2) is
always greater than its prefix (1, phi) (where
phi means unspecified). If you compare a full key
with its prefix using TupleCompare, it will
indicate they are equal, which is not what we want. Therefore,
you have to handle such special cases in your code.
You should write a simple binary search to look up a key in a B-Tree node.
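A minimal sketch of such a lower-bound binary search over a node's slot array follows; the slot accessors and the CompareWithSlot helper are hypothetical (Taco-DB slot IDs are typically 1-based, so double-check the bounds against your VarlenDataPage):

    // Sketch: return the first slot in `pg` whose entry is >= (key, rid),
    // or GetMaxSlotId() + 1 if every entry is smaller (lower-bound search).
    // CompareWithSlot is a hypothetical wrapper around the comparison
    // routine sketched earlier: negative means the slot's entry < (key, rid).
    SlotId LowerBoundSlot(VarlenDataPage &pg, const IndexKey *key, RecordId rid) {
        SlotId lo = pg.GetMinSlotId();      // assumed accessor
        SlotId hi = pg.GetMaxSlotId() + 1;  // one past the last slot
        while (lo < hi) {
            SlotId mid = lo + (hi - lo) / 2;
            if (CompareWithSlot(pg, mid, key, rid) < 0)
                lo = mid + 1;   // entry at mid is too small
            else
                hi = mid;       // mid might be the answer
        }
        return lo;
    }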
When a B-tree search reaches the correct leaf page, you may use the sibling pointers on the leaf nodes to iterate through data entries in sorted order. You should also check whether the scan is complete (either there are no more data entries, or the next data entry falls outside the query range).
Task 1: Complete the B-tree initialization, creation, deallocation, and search logic.

- BTree::Initialize()
- BTree::Create()
- BTree::~BTree()
- BTree::StartScan()
- BTree::Iterator::Next()
- BTree::Iterator::IsAtValidItem()
- BTree::Iterator::GetCurrentItem()
- BTree::Iterator::GetCurrentRecordId()
- BTree::Iterator::EndScan()

Your code is likely to pass the following tests at this point:

- BasicTestBtreeSearch1.TestBTreeFullScanFixedlen
- BasicTestBtreeSearch2.TestBTreeRangeScanFixedlen
- BasicTestBtreeSearch3.TestBTreeFullScanVarlen
- BasicTestBtreeSearch4.TestBTreeRangeScanVarlen
Source files to modify:

- src/index/btree/BTreeBulkLoad.cpp

Bulk loading loads existing data in a table into an index, which happens when you create an index over an existing database table. Inserting the records one by one using the standard insert interface is less efficient: it not only incurs additional disk reads and writes but also often creates a B-tree that is below the desired load factor. In this project, your next task is to implement the bulk load operation. It should first sort the given data by performing an external sort; the data can be accessed through the bulk loading iterator, and you may create temporary files to host intermediate results. After that, you should implement a linear-time algorithm that creates the tree structure in a single pass, as we discussed in class, i.e., building the tree from the bottom level up to the upper levels. The details are up to you: whether to use the buffer pool for I/O or bypass it and write to the underlying file directly, and whether to build the tree level by level or incrementally.
The only hard constraints are:

- You must not fall back on inserting data entries one by one through Index::InsertKey as in the default Index::BulkLoad (which does exactly that).
- Your external sort should perform n-way merges as given in FLAGS_sort_nways.
- The fraction of space in each page occupied by index records (excluding the VarlenDataPage metadata and slot arrays) should be between 70% and 90%.
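At a high level, the single-pass build phase (after the external sort) could be organized like the sketch below; every helper here is a hypothetical placeholder for your own code, and this is only one of several valid designs:

    #include <vector>

    // Sketch: one-pass, bottom-up construction over sorted data entries.
    // AppendToLeaf packs a leaf up to the target fill (70%-90%) and returns
    // false once the page should be closed; BuildUpperLevel creates one
    // internal level from the (separator key, page number) pairs below.
    void BuildTreeBottomUp(SortedRunIterator &sorted) {
        std::vector<PageNumber> level;
        PageNumber cur = AllocateLeafPage();          // also link siblings here
        while (sorted.HasNext()) {
            if (AppendToLeaf(cur, sorted.Current())) {
                sorted.Advance();
            } else {
                level.push_back(cur);                 // leaf reached target fill
                cur = AllocateLeafPage();
            }
        }
        level.push_back(cur);
        while (level.size() > 1)                      // build internal levels
            level = BuildUpperLevel(level);
        UpdateMetaPageRoot(level.front());            // record the new root
    }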
Task 2: Complete the B-tree bulk loading logic.
BTree::BulkLoad()
Source files to modify:

- include/plan/IndexScan.h
- src/plan/IndexScan.cpp
- include/execution/IndexScanState.h
- src/execution/IndexScanState.cpp
Now that we have indexes available in Taco-DB, we will be able
to introduce a new operator, IndexScan, which
leverages an alternative-2 index (B-tree or volatile tree in
our code base), to perform selection scans over a range
condition. More specifically, this operator will allow you to
draw heap records from a table R using an index on
a set of columns A that satisfy a range condition
l < A < r, where the comparison is done in
lexicographical order, and the lower and upper bound may be
missing or made non-strict (i.e., allowing equality). For
instance, given a B-tree index on (major,
adm_year) over the student table we use as example
in class S(sid, name, login, major, adm_year),
the following shows a few examples of how we can use the operator.
(Recall that phi denotes unspecified and is different from NULL
in our codebase).
- ('CS', phi) <= (major, adm_year) <= ('CS', phi)
- ('C', phi) <= (major, adm_year) < ('D', NULL)
- ('CS', 2020) <= (major, adm_year) <= ('CS', 2024)
- ('CS', 2019) < (major, adm_year) <= ('CS', phi)

Your task in this bonus project is to implement the relevant functions in these files and pass all of the following tests.
- IndexScan.TestIndexScanEmpty
- IndexScan.TestIndexScanAll
- IndexScan.TestScanSome
- SystemTestTPCHQ1Small.*
- SystemTestTPCHQ1.*
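For reference, driving such a range scan through the index iterator interface might look like this sketch (the StartScan argument list, its return type, and the Next() semantics are assumptions; check Index.h for the exact signatures):

    #include <vector>

    // Sketch: collect the record IDs of all data entries within the range
    // defined by `lower`/`upper` (either may be null for an open bound).
    std::vector<RecordId> CollectMatches(Index *index,
                                         const IndexKey *lower, bool lower_strict,
                                         const IndexKey *upper, bool upper_strict) {
        std::vector<RecordId> rids;
        auto iter = index->StartScan(lower, lower_strict,
                                     upper, upper_strict);  // assumed signature
        // Next() is assumed to return whether it moved onto a valid entry;
        // each entry must be visited exactly once, in ascending order.
        while (iter->Next()) {
            rids.push_back(iter->GetCurrentRecordId());
        }
        iter->EndScan();
        return rids;
    }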