CSE 462/562: Database Systems (Spring 2024)

Project 3: B-Tree Index

Release date: Monday, 3/4/2024
Project due: Monday, 4/1/2024, 23:59 EDT (new)
Last updated: 3/15/2024

[Project 3 Nightly Testing Results]

Updates (3/8/24): We will run the large system tests at 10 pm and 2 am every day starting on 3/8/2024 until the project deadline; these runs include the tests that are not available on Autolab. You may find the test results and nightly test logs through the link above (UB VPN required). Note that we always test the latest submission on Autolab. To test a specific commit, submit the commit that you'd like to be tested to Autolab before 10 pm or 2 am and make no other submissions until around 10:05 pm or 2:05 am. The nightly test board may not be updated until all the groups included in the test have finished, so it may take a while before the results appear at the link above.

0. Getting Started

This project will be built upon your code for previous projects. If you need the solution code for projects 1 and/or 2, please download it from Autolab, under the Attachments of project 2: Project 1 Solution, Project 2 Solution. You may want to import these in project order, before importing the lab3 supplemental files.

To get started, extract /local/Spring_2024/cse562/lab3.tar.xz on the student servers and import the supplemental files as you did in previous projects. The code should build without compilation errors once the supplemental files are imported, but most of the additional tests are likely to fail. Note: some of the past test cases may also fail because we enabled new features that you haven't implemented (e.g., BasicTestRepoCompilesAndRuns.TestShouldAlwaysSucceed). Once you complete the tasks below, these past test cases should pass automatically. If not, it is highly likely that your code has undiscovered bugs and may not pass the hidden system tests. You may list all the tests in your build directory using the ctest -N command. As before, you may make any number of code submissions on Autolab, project 3 (the 10 per hour per group rate limit still applies). To do so, please create a new tag of the commit you want to submit and provide the tag name in Autolab.

Bonus: the additional tests for the bonus project are at /local/Spring_2024/cse562/lab3_bonus.tar.xz. You should only extract and import the bonus supplemental files after the files in lab3.tar.xz have been imported. However, you may skip importing anything from lab3_bonus.tar.xz if you do not wish to do the bonus project. Skipping the bonus project will have no impact on your later projects.

A few useful hints:

  1. The project requires you to understand both the B-Tree data structure in theory and the specific design and requirements of the indexing layer in Taco-DB, so you should start very early. We strongly encourage you to read through the entire document before you start coding, and at least before the Spring recess, because we won't have any office hours during Spring recess (3/18 - 3/23). Note that some of the tasks involve source files distributed to you in the previous projects. You might also find it useful to review the lecture slides on indexing and tree-based indexes. Please also consider joining the office hours if you are unsure how to proceed or have any questions about the tasks.
  2. This lab will heavily rely on your implementation from the previous projects. In particular, we provide preloaded B-Tree indexes in some of the test cases, where the B-Tree pages are encoded using the VarlenDataPage. Some test cases also depend on a correct implementation of Table for storing test data. Hence, you need to make sure your implementations of them conform to the specification. However, we do not recommend overwriting VarlenDataPage with the solution code, because you have new tasks to complete in VarlenDataPage. It might be harder for you to implement and debug new functionality on top of the solution code than on top of your own code.
  3. The description below is meant to be a brief overview of the tasks. Always refer to the special document blocks in the headers and source files for a more detailed specification of how the functions should behave. While we try to provide as many hints as possible to alleviate your burden of reading code in the provided code, there might always be some declarations or definitions you need to look for in the code docs (in the navbar of the course page), or in the source code.
  4. While you may add any definitions/declarations to the classes or functions you're implementing, it is unlikely you'll need to do that in project 3, and your design is probably incorrect if you do. In case you do need to add definitions/declarations, you should not change the signatures of any existing public or private functions. Also, do not edit source files that are not listed under the source files to modify below, because they will be replaced during tests.
  5. You do not have to consider concurrency control for multi-threading in your implementation.
  6. We provide a list of tasks throughout this document, which suggests an order of implementation such that your code may pass a few more tests after you complete each task. It is, however, not the only possible order, and you may find an alternative order that makes more sense to you. It is also possible that your code still cannot pass a certain test that it is supposed to pass, because of unexpected dependencies on later tasks or undiscovered bugs in your implementation from previous projects, in which case you might want to refer to the test case implementation in the tests/ directory.
  7. If you're working in a team of two, you might want to first go through the list of tasks, agree on a big picture of the design, and divide the implementation into independent pieces. One strategy is to implement tasks that may be tested independently. Alternatively, you could design the algorithms and data structures together, and identify independent components to complete. You may also come up with better ways of collaborating that make more sense to you. In any case, the grade for the code implementation will be based on the test results of your group's final submission.

1. Project Overview

In this project, you will build a B-Tree index, one of the index access methods in Taco-DB. More specifically, your goal is to understand the structure of the B-Tree and implement the core pieces of the insert/delete/search/scan functions. This B-Tree supports variable-length keys and thus does not have a stable fanout: splitting and merging are based on the actual load of internal and leaf pages rather than the number of entries. To simplify our implementation and maximize code reuse, Taco-DB's B-Tree uses VarlenDataPage to organize all its nodes. In the lab handout, you will find the basic tests for B-Trees. During online and offline grading, there will be larger-scale system tests (which are hidden from you) to further evaluate the correctness and efficiency of your implementation. Note that some of the system tests are not available on Autolab and will only be run during the offline tests, because they are too large to run on the Autolab VMs. Your grade on those test cases will be posted on UBLearns after the final submission deadline, once we finish the offline tests. The following table lists a breakdown of the tests:

Test suite                         Max score   Hidden?   Available on Autolab?   Available in offline test?
BTreeBasicTests                    7.0         No        Yes                     Yes
  • BasicTestSortedDataPage
  • BasicTestBTreeUtils
  • BasicTestBTreeInsert
  • BasicTestBTreeSearch
  • BasicTestBTreeDelete
  • BasicTestBTreeSplitPage
  • BasicTestBTreeMergePages
BTreeSystestSmall                  0.6         Yes       Yes                     Yes
BTreeSystestLarge                  2.4         Yes       No                      Yes
BTreeBonusTests                    3.4         No        Yes                     Yes
  • BonusTestSortedDataPage
  • BonusTestBTreeRebalancePages

1.1 Index Interface

Source files to read:

Source files to modify: none

As a general and important interface in an RDBMS, Index is the base class of all the different types of indexes. Its public interface is defined in Index.h. The metadata, including the base table ID (the table over which the index is built), the index type, the virtual file FID (the vfile that contains the index data), etc., are stored in the catalog. These metadata are passed around in the system as an IndexDesc once they are loaded into memory.

When a new index is created for a table, a new virtual file is allocated for it and all the metadata, including the virtual file ID, are stored in the catalog. The system then creates an IndexDesc filled with the metadata and calls the static function Index::Initialize() to properly initialize the index, which in turn dispatches the call to the specified index type's initialization function. The initialization function should perform essential checks and carry out the necessary setup so that the vfile becomes a valid empty index. At this point, the system should be able to create any number of index object instances that reference this index vfile. As the final step, the system calls Index::BulkLoad() to load the existing data in the base table into the newly created index. As a default, and often less efficient, implementation of bulk loading, Index::BulkLoad() scans the base table and inserts data entries one by one using the Index::InsertKey() interface. (Note that you do not need to implement the more efficient B-Tree bulk loading algorithm in this project.)

To implement an index, one should implement the static Initialize and Create functions and override the public interfaces for inserting, deleting, searching for, and scanning through data entries. The data entries in all the indexes in Taco-DB are organized in Alternative 2, where each index entry is a (key(s), rid) pair. Note that rid is unique for each record. When we compare data entries, rid is included as the last key field and thus serves as a tie-breaker in case there are duplicate index keys. In other words, all data entries are unique even though keys in the records may have duplicates.

Data entries or index entries are collectively called index records in Taco-DB. An index record is usually encoded as a record with a fixed-size header followed by a record payload that stores the actual index key values. Index::StartScan() initializes an iterator that may be used to scan through all data entries falling into a particular search range. When scanning through the iterator, you need to ensure that each data entry is visited EXACTLY ONCE. In addition, since all indexes in the current version of Taco-DB are range indexes, your index iterators should scan through data entries in ascending order.

There is a flag in the catalog information for indexes that indicates whether an index is a unique index. If an index is non-unique, you should reject an insertion only if you are inserting a duplicate data entry with the same key and the same record ID. However, if an index is unique, you should reject an insertion if there exists some data entry in the index with the same key but a different record ID.
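
The acceptance rule can be summarized by the following self-contained sketch. The key and record ID types and the linear scan over existing entries are simplified placeholders (in your B-Tree the check is performed against serialized index records on a leaf page), but the branching logic is the one described above; the NULL-key exception discussed in Section 1.4 is not modeled here.

    #include <vector>

    // Simplified stand-ins: a key is a vector of ints and a record ID is a
    // (page id, slot id) pair. In Taco-DB the check is done against the
    // serialized index records on a leaf page instead of an std::vector.
    struct Rid { int pid; int sid; };
    inline bool SameRid(Rid a, Rid b) { return a.pid == b.pid && a.sid == b.sid; }

    struct Entry { std::vector<int> key; Rid rid; };

    // Returns true if inserting (key, rid) must be rejected under the rule above.
    bool MustRejectInsertion(const std::vector<Entry>& existing,
                             const std::vector<int>& key, Rid rid,
                             bool is_unique) {
        for (const Entry& e : existing) {
            if (e.key != key) continue;
            if (SameRid(e.rid, rid)) return true;   // duplicate data entry
            if (is_unique) return true;             // duplicate key in a unique index
        }
        return false;
    }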

We provide a volatile (in-memory) tree index implemented using std::multimap in index/volatiletree as a reference implementation. If you are unsure about what the specification of a specific interface means, you may always refer to the volatile tree implementation. Any sequence of insert/delete/update/scan operations in your B-Tree implementation should produce exactly the same result in the eyes of the user (i.e., the same set of data entries is in the index in the same order, the same error is raised if there is any, and an index scan produces the same set of index key and record ID pairs). However, you MUST NOT use any STL container, Abseil container, or any other in-memory container to implement your B-Tree.

We want to make a quick note about the difference between IndexKey and the key payload of an index record. IndexKey is an in-memory structure that stores an array of references to in-memory Datum objects, which are either primitive C/C++ values or pointers to variable-length data. You may think of an IndexKey as an std::vector<NullableDatumRef> with a more compact memory layout and a trivial destructor. We use it to pass search keys or keys constructed from heap records. A key payload in an index record stores the keys of that record in the serialized record format (as defined by the Schema class, see also Project 2, Section 1.3). To use a value in the record, you must use Schema::GetField()/Schema::DissemblePayload() or other related functions to extract the field into a Datum. In this project, you don't have to worry about extracting individual fields from a record to form an index key, as that code is provided. However, it is important that you understand the distinction: never copy an IndexKey into an index record. Instead, you need to use the key schema to serialize it into a record payload.

One last thing to note is the ordering of data entries. For any two data entries with m key fields, say d = (k1, k2, ..., km, rid) and d' = (k1', k2', ..., km', rid'), assuming the type of each key field has a total order among all its non-NULL values, we order the two data entries lexicographically using the following rule: d < d' if and only if there is some i such that kj = kj' for all j < i and ki < ki', or all m key fields compare equal and rid < rid'.

Because the majority of comparisons in indexes are between a search key and a serialized key payload, we provide two routines for comparing an IndexKey with a key payload buffer in include/index/tuple_compare.h and src/index/tuple_compare.c. Note that if there are missing keys in the search key, we ignore the extra key fields in the key payload. Because of that, you'll need to handle partial key searches and the comparison of the tie-breaker (record ID) on top of these routines in the later B-Tree tasks.
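
For reference, the ordering rule boils down to a plain lexicographic comparison with the record ID appended as the last field. The sketch below uses int key fields and a simplified record ID as stand-ins; NULL handling (NULL < non-NULL, NULL == NULL) and variable-length fields are omitted, and in Taco-DB the key-field portion of this comparison is what TupleCompare() performs against a serialized key payload.

    #include <cstddef>
    #include <vector>

    struct Rid { int pid; int sid; };

    inline int CompareRid(Rid a, Rid b) {
        if (a.pid != b.pid) return a.pid < b.pid ? -1 : 1;
        if (a.sid != b.sid) return a.sid < b.sid ? -1 : 1;
        return 0;
    }

    // Lexicographic order over (k1, ..., km, rid): compare the key fields from
    // left to right and use the record ID as the final tie-breaker.
    // Returns <0, 0, or >0.
    int CompareDataEntry(const std::vector<int>& d_key, Rid d_rid,
                         const std::vector<int>& e_key, Rid e_rid) {
        for (std::size_t i = 0; i < d_key.size() && i < e_key.size(); ++i) {
            if (d_key[i] != e_key[i]) return d_key[i] < e_key[i] ? -1 : 1;
        }
        // Two full data entries have the same number of key fields, so reaching
        // this point means all key fields compare equal.
        return CompareRid(d_rid, e_rid);
    }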

1.2 B-Tree Structure

Source file to read:

Source files to modify:

The physical structure of the B-Tree in Taco-DB is very similar to the one explained in our lectures: it contains internal nodes and leaf nodes. All the structures of a B-Tree live in a virtual file managed by the FileManager, whose first page is a meta page (BTreeMetaPageData) containing the page number of the root node. Note that the root node of a B-Tree can be either a leaf or an internal node, depending on whether a single leaf page has enough space to fit all the data entries.

One key deviation from the B-Tree described in the lecture is that every level in this tree is doubly-linked, rather than just the leaf level. Your code should maintain these horizontal links when performing structural modifications (page splits and merges).

[Figure: B-Tree page format]

In this project, you will need to rely on your VarlenDataPage implementation in project 2 to construct both leaf and internal nodes. A B-Tree page is encoded as a VarlenDataPage. The figure above shows the physical layout of the page representing a B-Tree node. You may find its first 24 bytes are exactly the same as those of a regular heap file page. The main differences are:

  1. B-Tree nodes store their specialized meta information BTreePageHeaderData in the user-defined data region. You should reserve enough space for that region by passing usr_data_sz when calling VarlenDataPage::Initialize (see the sketch after this list). Specifically, the header includes flags indicating whether this is the root node and whether this is a leaf node, the pointers to its two neighboring siblings on the same level, and the total number of bytes used by index records in the page.
  2. The records stored in B-Tree pages are index records (internal records on internal nodes, or leaf records on leaf nodes) rather than heap records. In contrast to a heap record, which does not have a header in Taco-DB, an index record has a header preceding its key payload.
  3. There must not be any unused slots on the data page. Index records in the slots are always sorted in ascending order, with the smallest record in the lowest-numbered slot.
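
As a rough illustration of item 1 above, the sketch below formats a fresh B-Tree page by reserving the user data area for the per-node header and then filling it in. The struct fields and the two helper declarations are assumptions standing in for BTreePageHeaderData and the real VarlenDataPage member functions; check the headers for the actual names and layout.

    #include <cstddef>
    #include <cstdint>

    using PageNumberT = uint32_t;
    constexpr PageNumberT INVALID_PID_T = 0;

    // Stand-in for BTreePageHeaderData; the real field names/layout may differ.
    struct BTreePageHeaderStub {
        uint16_t    m_flags;      // root / leaf flags
        PageNumberT m_prev_pid;   // left sibling on the same level
        PageNumberT m_next_pid;   // right sibling on the same level
        uint32_t    m_totrlen;    // total bytes used by index records on the page
    };

    // Declaration-only stand-ins for the VarlenDataPage interface.
    void InitializeVarlenDataPage(char* pagebuf, size_t usr_data_sz);
    char* GetVarlenDataPageUserData(char* pagebuf);

    void FormatBTreePage(char* pagebuf, bool is_root, bool is_leaf) {
        // Reserve exactly enough user data space for the per-node header.
        InitializeVarlenDataPage(pagebuf, sizeof(BTreePageHeaderStub));

        auto* hdr = reinterpret_cast<BTreePageHeaderStub*>(
            GetVarlenDataPageUserData(pagebuf));
        hdr->m_flags    = static_cast<uint16_t>((is_root ? 1u : 0u) |
                                                (is_leaf ? 2u : 0u));
        hdr->m_prev_pid = INVALID_PID_T;
        hdr->m_next_pid = INVALID_PID_T;
        hdr->m_totrlen  = 0;
    }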

Task 1: Implement functions for laying out B-Tree pages and some other basic utilities:

Your code should allocate a new page and pin it when BTree::CreateNewBTreePage() is called, and return the pinned buffer frame. BTree::GetBTreeMetaPage() should pin the meta page of this B-Tree and return the pinned frame. BTree::CreateLeafRecord() and BTree::CreateInternalRecord() are used for creating index records in B-Tree pages. In particular, index entries in internal nodes should be contiguous bytes containing BTreeInternalRecordHeaderData followed by the index key (strictly in this order, with alignment). Note that m_heap_rec_id is also part of BTreeInternalRecordHeaderData, serving as a tie-breaker when there are duplicate key values. The first record in any internal node should only contain BTreeInternalRecordHeaderData, without any key payload. The keys of all records stored in the subtree between two neighboring keys k1 and k2 of an internal node fall in the left-closed, right-open range [k1, k2). Leaf nodes are simpler: each data entry consists of contiguous bytes containing BTreeLeafRecordHeaderData (which is effectively just a RID) followed by the index key (in this order, with alignment).
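
As a concrete, self-contained example of the leaf record layout, the sketch below builds a leaf index record as a header followed by the serialized key payload at an aligned offset. The struct names, the 8-byte alignment, and the exact header contents are illustrative assumptions; internal records are analogous but use BTreeInternalRecordHeaderData, which also carries the child page number and m_heap_rec_id.

    #include <cstddef>
    #include <cstdint>
    #include <cstring>
    #include <vector>

    // Stand-ins for the real record header structs; field layout is illustrative.
    struct RidStub { uint32_t pid; uint16_t sid; };
    struct LeafRecHeaderStub { RidStub m_recid; };   // ~ BTreeLeafRecordHeaderData

    constexpr std::size_t ALIGN = 8;
    inline std::size_t MaxAlign(std::size_t x) { return (x + ALIGN - 1) / ALIGN * ALIGN; }

    // Build a leaf index record: the header, then the serialized key payload,
    // with the payload starting at an aligned offset. In your code the payload
    // comes from serializing the IndexKey with the key schema.
    std::vector<char> MakeLeafRecordSketch(RidStub rid,
                                           const char* key_payload,
                                           std::size_t key_payload_len) {
        std::size_t hdr_sz = MaxAlign(sizeof(LeafRecHeaderStub));
        std::vector<char> rec(hdr_sz + key_payload_len, 0);
        LeafRecHeaderStub hdr;
        hdr.m_recid = rid;
        std::memcpy(rec.data(), &hdr, sizeof(hdr));
        std::memcpy(rec.data() + hdr_sz, key_payload, key_payload_len);
        return rec;
    }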

A B-Tree page, as a VarlenDataPage, must not have any unoccupied valid slot. In other words, the valid SIDs on a B-Tree page must form a consecutive range [GetMinSlotId(), GetMaxSlotId()] (if there are any slots). In addition, the records on a B-Tree page must be in ascending order. If you need to insert or remove an index record on an internal or a leaf B-Tree node, you should do so by specifying the correct insert/delete slot ID in the VarlenDataPage::InsertRecordAt() and VarlenDataPage::RemoveSlot() functions so that the keys are still sorted after the insertion/deletion. You should not use the VarlenDataPage::InsertRecord() and VarlenDataPage::EraseRecord() functions you implemented in project 2 on any B-Tree page. Updating an index record with VarlenDataPage::UpdateRecord() is only safe when you are sure the updated index record will fit into the page. In addition, you should implement a utility function VarlenDataPage::ComputeFreeSpace() to compute the page usage given the total number of slots and the total length of record payloads, so that you can accurately determine whether records fit into a page for the B-Tree split, merge, and rebalance operations in later tasks.
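
The accounting behind a ComputeFreeSpace()-style helper is roughly the arithmetic below. The constants (page size, fixed header size, per-slot size, alignment) and the exact alignment rules are assumptions for illustration; derive the real values from your VarlenDataPage layout rather than hard-coding them.

    #include <cstddef>

    // Illustrative constants; derive the real values from your VarlenDataPage layout.
    constexpr std::size_t PAGE_SIZE        = 4096;
    constexpr std::size_t PAGE_HEADER_SIZE = 24;   // fixed-size page header
    constexpr std::size_t SLOT_ENTRY_SIZE  = 4;    // one slot array entry
    constexpr std::size_t ALIGNMENT        = 8;

    inline std::size_t MaxAlign(std::size_t x) {
        return (x + ALIGNMENT - 1) / ALIGNMENT * ALIGNMENT;
    }

    // Free space left on a page that stores `num_slots` records whose payloads
    // (already individually aligned) occupy `total_reclen` bytes in total, with
    // `usr_data_sz` bytes reserved for user data (the B-Tree node header).
    // A negative return value means the records cannot fit.
    long ComputeFreeSpaceSketch(std::size_t num_slots, std::size_t total_reclen,
                                std::size_t usr_data_sz) {
        std::size_t used = MaxAlign(PAGE_HEADER_SIZE)
                         + num_slots * SLOT_ENTRY_SIZE
                         + MaxAlign(total_reclen)
                         + MaxAlign(usr_data_sz);
        return static_cast<long>(PAGE_SIZE) - static_cast<long>(used);
    }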

Task 2: Implement the following additional utility functions for the variable-length data page. These utility functions are useful for maintaining a data page with sorted (index) records, especially when handling B-Tree insertions and deletions. You may find the specification for these functions in the header file include/storage/VarlenDataPage.h.

Your code is likely to pass the following tests at this point:

1.3 B-Tree Search

Source files to modify:

To compare two data entries or index entries, you should compare not only the index key but also the record ID. You can compare the index keys using TupleCompare() and then fall back to comparing the record IDs if the key comparison turns out equal. Note that an invalid record ID should be treated as smaller than any record ID (including any other invalid record ID; there are many different possible invalid record IDs). We will never try to insert a data entry with an invalid record ID. Unlike the three-valued logic in SQL queries, NULL values are considered smaller than any non-NULL value in an index, and NULL values compare equal to each other, because we need to index NULL values as well.

Further, we enforce lexicographical order for partial key matches in the B-Tree. This means a key like (1, 2) is always greater than its prefix (1, φ) (where φ means an unspecified field). If you compare a full key with its prefix using TupleCompare(), it will indicate that they are equal, which is not what we want. Therefore, you have to handle such special cases in BTree::BTreeTupleCompare() after you call TupleCompare().
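
Putting the last two paragraphs together, the intended semantics of the comparison look like the sketch below. The int-vector keys, the record ID struct, and the notion of "valid" are placeholders (and NULL handling is omitted); in your code the field-by-field part is delegated to TupleCompare() against the serialized key payload, and the record ID comes from the index record header.

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct Rid { int pid; int sid; };
    inline bool RidIsValid(Rid r) { return r.pid > 0 && r.sid > 0; }  // placeholder validity test
    inline int CompareRid(Rid a, Rid b) {
        if (a.pid != b.pid) return a.pid < b.pid ? -1 : 1;
        if (a.sid != b.sid) return a.sid < b.sid ? -1 : 1;
        return 0;
    }

    // Semantics of BTreeTupleCompare(): compare a (possibly partial) search key
    // plus an optional record ID against an index record's key payload and its
    // record ID. Returns <0, 0, or >0.
    int BTreeTupleCompareSketch(const std::vector<int>& search_key, Rid search_rid,
                                const std::vector<int>& rec_key, Rid rec_rid) {
        // Field-by-field comparison over the common prefix; this is what
        // TupleCompare() does (extra fields in the key payload are ignored
        // when the search key has fewer fields).
        std::size_t n = std::min(search_key.size(), rec_key.size());
        for (std::size_t i = 0; i < n; ++i) {
            if (search_key[i] != rec_key[i])
                return search_key[i] < rec_key[i] ? -1 : 1;
        }
        // A strict prefix orders BEFORE any record whose key extends it.
        if (search_key.size() < rec_key.size()) return -1;
        // An invalid record ID orders before any record ID; data entries stored
        // in the tree always carry a valid record ID.
        if (!RidIsValid(search_rid)) return -1;
        return CompareRid(search_rid, rec_rid);
    }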

You should write a simple binary search to look up a key in a B-Tree node. The second overload of BTree::FindLeafPage() implements the core recursive logic for traversing the tree. Please be careful about pinning and unpinning pages during the traversal. At any time during a search, there should be ONLY ONE B-Tree node pinned in the buffer.
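
A lower-bound style binary search over the slot IDs is all that is needed on a single node. The sketch below is generic and self-contained: cmp(sid) stands for comparing the record at slot sid against the search key (e.g., via BTreeTupleCompare()), and the slot range would be [GetMinSlotId(), GetMaxSlotId()] in your code. For internal nodes you typically want the last slot whose record compares less than or equal to the search key, which you can derive from the returned position.

    #include <functional>

    using SlotIdT = unsigned;

    // Find the first slot in [lo, hi] whose record compares >= the search key,
    // returning hi + 1 if no such slot exists. `cmp(sid)` is assumed to return
    // a negative value if the record at slot `sid` is smaller than the search
    // key, zero if equal, and a positive value if greater.
    SlotIdT LowerBoundSlot(SlotIdT lo, SlotIdT hi,
                           const std::function<int(SlotIdT)>& cmp) {
        SlotIdT ans = hi + 1;
        while (lo <= hi) {
            SlotIdT mid = lo + (hi - lo) / 2;
            if (cmp(mid) >= 0) {          // record at mid >= search key
                ans = mid;
                if (mid == lo) break;     // avoid unsigned underflow below
                hi = mid - 1;
            } else {
                lo = mid + 1;
            }
        }
        return ans;
    }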

Task 3: Implement the utility function to compare two keys and complete the recursive B-Tree point lookup logic:

Your code is likely to pass the following tests at this point:

When a B-Tree search reaches the correct leaf page, you may use the sibling pointers on the leaf node to iterate through the data entries in sorted order (see Index::StartScan() and BTree::StartScan()). You should also check whether the scan is completed (either there are no more data entries or the next data entry falls outside the query range).
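
Conceptually, advancing the scan looks like the sketch below. All helpers are declaration-only, hypothetical stand-ins for the buffer-manager, page-header, and comparison calls (the upper-bound check would use BTreeTupleCompare() against the scan's end key); the caller advances pos.sid past the entry it just consumed before calling again.

    #include <cstdint>

    using PageNumberT = uint32_t;
    using SlotIdT = uint16_t;
    constexpr PageNumberT INVALID_PID_T = 0;

    struct LeafPos { PageNumberT pid; SlotIdT sid; };

    // Declaration-only stand-ins.
    SlotIdT MaxSlotOnPage(PageNumberT pid);
    PageNumberT RightSiblingOf(PageNumberT pid);   // reads m_next_page from the leaf header
    bool BeyondUpperBound(PageNumberT pid, SlotIdT sid);  // compares against the scan range
    void PinPage(PageNumberT pid);
    void UnpinPage(PageNumberT pid);

    // Position the scan at the next data entry within the scan range. Returns
    // false when the scan is exhausted. Each data entry is visited exactly once
    // because the scan only ever moves forward (next slot / next leaf).
    bool NextEntry(LeafPos& pos) {
        for (;;) {
            if (pos.sid > MaxSlotOnPage(pos.pid)) {
                PageNumberT next = RightSiblingOf(pos.pid);
                UnpinPage(pos.pid);
                if (next == INVALID_PID_T) return false;  // no more leaves
                PinPage(next);
                pos = LeafPos{next, 1};                   // assuming the minimum slot ID is 1
                continue;
            }
            if (BeyondUpperBound(pos.pid, pos.sid)) {
                UnpinPage(pos.pid);
                return false;                             // past the query range
            }
            return true;                                  // pos now points at the next entry
        }
    }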

Task 4: Finish the B-Tree iterator scan logic:

Your code is likely to pass the following tests at this point:

1.4 B-Tree Insertion

Source files to modify:

You may reuse the B-Tree search code to locate the correct leaf page to perform the insertion. In addition, you should remember the path down the tree by keeping track of which B-Tree nodes (i.e., B-Tree page IDs) you have visited and which slot number you followed during the traversal. This information is useful in case of page splits.
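
A simple way to remember the path is a vector of per-level entries, as in the sketch below. PathItemStub stands in for the PathItem type declared in the B-Tree headers; its exact fields may differ.

    #include <cstdint>
    #include <vector>

    // Stand-in for PathItem: which page was visited at each level and which
    // slot (child pointer) was followed on that page.
    struct PathItemStub {
        uint32_t pid;   // B-Tree page visited at this level
        uint16_t sid;   // slot we descended through on that page
    };

    // Record one level of the root-to-leaf descent. When a split later
    // propagates upward, path.back() identifies the parent page and the slot
    // position for the new separator record; pop it and repeat if the parent
    // itself splits.
    inline void RememberLevel(std::vector<PathItemStub>& path,
                              uint32_t pid, uint16_t chosen_slot) {
        path.push_back(PathItemStub{pid, chosen_slot});
    }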

The uniqueness check of the data entry (for a non-unique index) or the key (for a unique index) happens in BTree::FindInsertionSlotIdOnLeaf(). You should return an appropriate slot ID if a duplicate is found. A small nuance to note here is that we allow duplicate NULL values to appear in a unique index, which is SQL-compliant but differs from the practice of several major DBMSs. (Think about why. Hint: unique indexes are used to enforce UNIQUE constraints.)

Task 5: Implement the functions to find the correct slot on a leaf node for insertion and to insert an index record on a B-Tree page. You may initially implement these functions without the logic for recursively splitting pages and add that logic in Task 6 instead.

Your code is likely to pass the following tests at this point:

You should also add the logic for splitting a page in BTree::InsertRecordOnPage() when there is no space left to insert a new index record. Consider the local case of splitting one node into two: two nodes originally in the tree are affected, the node you are splitting and its parent. During this procedure, BTree::SplitPage() will trigger another BTree::InsertRecordOnPage() on the parent node. This can in turn trigger another split on the parent node, so splitting can be recursive and may go all the way up to the root. You will need to handle the special case of a root node split with BTree::CreateNewRoot().

Again, during this recursive procedure, you should be very careful about pinning and unpinning B-Tree pages. At any time during insertion, there should be AT MOST FOUR pages pinned at the same time.
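
At a high level, one split step looks like the sketch below. Every helper here is a declaration-only, hypothetical stand-in; the real logic belongs in BTree::SplitPage(), BTree::InsertRecordOnPage(), and BTree::CreateNewRoot(), and you must additionally maintain the total-record-length field in the page headers and the pin/unpin discipline described above.

    #include <cstdint>
    #include <vector>

    using PageNumberT = uint32_t;
    using SlotIdT = uint16_t;

    // Declaration-only stand-ins for the operations a split needs.
    PageNumberT AllocateBTreePage(bool is_leaf);                  // ~ BTree::CreateNewBTreePage()
    SlotIdT ChooseSplitPoint(PageNumberT pid, SlotIdT insert_sid,
                             const std::vector<char>& new_rec);   // both halves must fit
    void MoveUpperHalf(PageNumberT from, PageNumberT to, SlotIdT split_sid);
    void FixSiblingLinks(PageNumberT left, PageNumberT right);    // keep the level doubly-linked
    std::vector<char> MakeSeparatorRecord(PageNumberT right);     // internal record pointing to `right`
    void InsertIntoParentOrCreateRoot(PageNumberT left, PageNumberT right,
                                      const std::vector<char>& separator);

    // Split page `pid` (leaf or internal) so that `new_rec` can be inserted at
    // slot `insert_sid`, then push a separator record up. Inserting into the
    // parent may trigger another split there; if `pid` was the root, a new root
    // must be created instead (~ BTree::CreateNewRoot()).
    void SplitAndPropagate(PageNumberT pid, bool is_leaf,
                           SlotIdT insert_sid, const std::vector<char>& new_rec) {
        PageNumberT right = AllocateBTreePage(is_leaf);
        SlotIdT split_sid = ChooseSplitPoint(pid, insert_sid, new_rec);
        MoveUpperHalf(pid, right, split_sid);   // records at or after the split point move right
        FixSiblingLinks(pid, right);
        // Insert new_rec into whichever half it now belongs to (omitted here),
        // then insert the separator (derived from the first key on `right`)
        // into the parent, creating a new root if there is no parent.
        InsertIntoParentOrCreateRoot(pid, right, MakeSeparatorRecord(right));
    }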

Task 6: Implement node splitting and complete BTree::InsertRecordOnPage() by adding triggers for recursive splitting:

1.5 B-Tree Deletion

Source files to modify:

When we locate the record to delete at the leaf level, we should also remember the traversal path for possible page merging and rebalancing. Please note that BTree::DeleteKey() may be called with a key but without a valid record ID, in which case it is expected to delete an arbitrary one of the data entries with a matching key. When it is called with a valid record ID, there is at most one matching data entry to be deleted.

When we search for a leaf page to locate a key to delete without specifying a valid record ID, the search might end up on a leaf page that is to the left of all matching keys, because an invalid record ID is treated as smaller than any valid record ID. In this case, you may have to move to the right sibling page (m_next_page) to find a matching key to delete (if there is one, it must be in the first slot of the right page). One tricky issue is that the parent of the original leaf page may be different from the parent of the leaf page to the right. If that happens, we ask you to simply add one to the slot ID of the last PathItem of the search path in BTree::FindDeletionSlotIdOnLeaf(), because BTree::HandleMinPageUsage() handles the logic of moving to the right to find the correct parent.

Task 7: Implement the function to find the location of the data entry to delete and the function for erasing an index record on a B-Tree page:

Your code is likely to pass the following tests at this point:

Merging should happen when a B-Tree node is under-utilized (it has free space of more than 60% of the page size); it is triggered by BTree::HandleMinPageUsage(), which is called in BTree::DeleteKey(). Since node rebalancing is assigned as a bonus, you don't have to implement it to get the full score. The tests that are not part of the bonus project use a flag to disable rebalancing so that your rebalancing code will not be called there.

Node merging involves two adjacent sibling pages that share the same parent node, and it also requires deleting an index record from the parent. This may recursively trigger another BTree::MergePages() at the parent level, which is already handled recursively in BTree::HandleMinPageUsage(). The merge may go all the way up to the root and eventually change the root node PID stored in the B-Tree meta page. You don't have to deal with this either, since we have already handled it for you in BTree::HandleMinPageUsage(). During this recursive deletion and merging process, there should be AT MOST FOUR pages pinned at the same time.
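
The merge step itself conceptually follows the sketch below, again with declaration-only, hypothetical helpers; the precondition check would use VarlenDataPage::ComputeFreeSpace() to decide whether the combined contents fit into a single page.

    #include <cstdint>

    using PageNumberT = uint32_t;
    using SlotIdT = uint16_t;

    // Declaration-only stand-ins.
    bool CombinedContentsFit(PageNumberT left, PageNumberT right);
    void MoveAllRecords(PageNumberT from, PageNumberT to);        // append to the left page
    void UnlinkFromSiblingChain(PageNumberT right);               // relink left <-> right's right sibling
    void FreeBTreePage(PageNumberT pid);
    void DeleteSeparatorInParent(PageNumberT parent, SlotIdT sep_sid);

    // Merge `right` into `left` (two neighboring children of `parent`) and
    // remove the separator record for `right` from the parent. Deleting from
    // the parent may in turn leave it under-utilized; BTree::HandleMinPageUsage()
    // drives that recursion for you.
    bool MergeSiblings(PageNumberT left, PageNumberT right,
                       PageNumberT parent, SlotIdT sep_sid) {
        if (!CombinedContentsFit(left, right))
            return false;                 // fall back to rebalancing (bonus) or do nothing
        MoveAllRecords(right, left);
        UnlinkFromSiblingChain(right);    // keep the doubly-linked level intact
        FreeBTreePage(right);
        DeleteSeparatorInParent(parent, sep_sid);
        return true;
    }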

Task 8: Implement node merging and complete BTree::DeleteSlotOnPage():

Your code is likely to pass the following tests at this point:

RebalancePages() is called when we have an under-utilized page that cannot be merged with a sibling page because the combined page usage would exceed the page size. RebalancePages() should try to move records from the page with higher page usage to the one with lower page usage, minimizing the absolute difference in page usage. Unlike SplitPage(), which can always succeed, RebalancePages() may fail because the new separator key after the rebalancing may not fit into the parent page (as it might be longer than the old separator key). You should not make a choice such that the new separator key cannot fit into the parent page. Out of all valid choices, you should choose the one that minimizes the absolute difference in page usage between the two pages. If there is no valid choice, you should not change either of the pages and should return false from RebalancePages().
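
One way to choose how many records to move is to try every candidate and keep the best feasible one, as in the self-contained sketch below. It works purely on record lengths: capacity stands for the per-page budget for records plus their slot entries (compute it with ComputeFreeSpace() in your code), and separator_fits abstracts the check that the key of the record becoming the new first record on the right page still fits into the parent. It assumes the left page is the fuller one; the mirrored direction is symmetric.

    #include <cstddef>
    #include <cstdlib>
    #include <functional>
    #include <vector>

    // Choose how many trailing records of the (fuller) left page to move to the
    // front of the right page so that the absolute difference in page usage is
    // minimized, subject to (1) both pages staying within `capacity` and (2) the
    // new separator fitting into the parent. Returns -1 if there is no valid
    // choice, in which case RebalancePages() should leave both pages unchanged
    // and return false.
    int ChooseRecordsToMove(const std::vector<std::size_t>& left_sizes,
                            const std::vector<std::size_t>& right_sizes,
                            std::size_t capacity,
                            const std::function<bool(std::size_t)>& separator_fits) {
        std::size_t left_used = 0, right_used = 0;
        for (std::size_t s : left_sizes) left_used += s;
        for (std::size_t s : right_sizes) right_used += s;

        int best_n = -1;
        long best_diff = 0;
        std::size_t moved = 0;
        for (std::size_t n = 1; n < left_sizes.size(); ++n) {  // keep at least one record on the left
            std::size_t idx = left_sizes.size() - n;           // record that moves this round
            moved += left_sizes[idx];
            std::size_t new_left = left_used - moved;
            std::size_t new_right = right_used + moved;
            if (new_left > capacity || new_right > capacity) continue;
            if (!separator_fits(idx)) continue;   // left_sizes[idx]'s key becomes the new separator
            long diff = std::labs(static_cast<long>(new_left) -
                                  static_cast<long>(new_right));
            if (best_n < 0 || diff < best_diff) {
                best_diff = diff;
                best_n = static_cast<int>(n);
            }
        }
        return best_n;
    }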

During this rebalancing process, there should be AT MOST THREE pages pinned at the same time.

You should also implement VarlenDataPage::ShiftSlots() to help you preallocate invalid slots at the start of the slot array or remove a batch of slots from the start of the slot array. Note that if ShiftSlots() is called on an empty page (with zero slots), with some positive number n indicating the number of slots to pre-allocate, and with truncate == false, it is expected to reserve n unoccupied slots on the page, even though that temporarily violates the constraint that the left-most slot on the page must be occupied. This is only allowed in this very specific corner case. It is expected that the caller will immediately fill in these slots by calling InsertRecordAt() (and possibly GetMaxSlotId()) before calling any other functions on the page. That said, this corner case never appears in our test cases and should not appear in your implementation of B-Tree rebalancing (think about why), so whether your code passes the bonus tests does not depend on correct handling of this corner case.

Task 9 (Bonus, optional): Implement node rebalancing and extend BTree::DeleteSlotOnPage() with the rebalancing logic:

Your code is likely to pass the following tests at this point: