Abstract
The cost of computing a join using a join index in a sequential system with limited buffer space depends primarily on the page access sequence used to fetch the pages of the base relations. In this paper, we introduce a graph-partitioning model that, given a fixed buffer, minimizes the length of the page access sequence and thereby the redundant I/O. Experiments with Sequoia 2000 data sets show that the graph-partitioning method outperforms existing methods based on sorting and online clustering, particularly for a small number of buffers and high join selectivities.
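To make the abstract's claim concrete, the following is a minimal sketch (not the paper's algorithm) of why the page access sequence matters: it simulates an LRU buffer of fixed size over a hypothetical join index (a list of matching R-page/S-page pairs, with made-up page names) and counts page fetches for two orderings of the same pairs. Clustering accesses to the same pages reduces redundant I/O.

```python
from collections import OrderedDict

def count_fetches(access_sequence, buffer_size):
    """Simulate an LRU buffer and count page fetches (I/O)."""
    buffer = OrderedDict()
    fetches = 0
    for page in access_sequence:
        if page in buffer:
            buffer.move_to_end(page)            # hit: refresh LRU position
        else:
            fetches += 1                        # miss: fetch page from disk
            if len(buffer) >= buffer_size:
                buffer.popitem(last=False)      # evict least recently used
            buffer[page] = None
    return fetches

# Hypothetical join index: (R-page, S-page) pairs needed to form the join.
join_index = [("r1", "s1"), ("r2", "s2"), ("r1", "s2"), ("r2", "s1")]

# Page access sequence in the given order ...
naive = [p for pair in join_index for p in pair]
# ... versus a reordering that clusters accesses to the same R-pages.
clustered = [p for pair in sorted(join_index) for p in pair]

print(count_fetches(naive, buffer_size=2))      # more fetches
print(count_fetches(clustered, buffer_size=2))  # fewer fetches
```

With a buffer of two pages, the clustered ordering incurs fewer fetches than the naive one; the paper's graph-partitioning model searches for such low-I/O orderings systematically.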
| Original language | English |
|---|---|
| Pages (from-to) | 302-308 |
| Number of pages | 7 |
| Journal | Proceedings of the IEEE Symposium on Reliable Distributed Systems |
| Publication status | Published - 1998 |
| Externally published | Yes |
| Event | Proceedings of the 1998 IEEE 17th Symposium on Reliable Distributed Systems, SRDS - West Lafayette, IN, USA Duration: 20 Oct 1998 → 23 Oct 1998 |