Lecture notes for CSC 173, Tues. Sept. 6 -- Thurs. Sept. 15, 2005

-------------------------------------------------
READING ASSIGNMENT: AU chapter 8

FIRST PROGRAMMING ASSIGNMENT is on the web
    also the "ZEROth" assignment, for those unfamiliar with Unix
-------------------------------------------------
OPTIONAL LAB sessions will be held next week; see the web site.
These sessions are STRONGLY recommended for students without
significant prior Unix experience.
-------------------------------------------------

Let's begin with a little review from CSC 172.

Abstract Data Types v. Data Structures
    The former present a "black box" interface to information and the
    operations on it; the latter are ways of organizing that
    information in a computer program.

ADT examples: stack, queue, list, set, function, mapping, tree
    (embodying information about parents and children), graph

DS examples: array, singly-linked list, doubly-linked list, hash
    table, AVL tree, 2-3 tree, splay tree, skip list, binary tree
    embedded in an array, ...

One of the key insights of modern CS is that it is very important to
maintain a clear distinction between ADTs and DSes.  That's what lets
us build reusable code (e.g. the Java and C++ standard libraries).

You presumably spent a lot of time in 172 on sets and their relatives.

    set: collection of items, no two the same
    multiset: collection of items, possibly with repeats
    function: mapping from domain elements to (unique) range elements
    binary relation: collection of (unique) (domain, range) pairs,
        not necessarily one-one

These were implemented in a variety of ways, including arrays, linked
lists, balanced trees, and hash tables.

A relation is a generalization of a binary relation: it's simply a
collection of unique n-tuples.  The formal study of relations
revolutionized databases in the mid 1970s.  Almost all modern
databases are based on the relational model.

========================================================================

Relations

------------------------------------------------------------------------
Example: A Simple Database

Consider a database that might be maintained by the Computer Science
Department, describing the people and machines in the laboratories.
The database contains information about the workstations, printers,
servers, and registered users.  Here are tables that describe the
various parts of the database (info adapted from the CS research
network a year ago):

Workstations
    Name       Room  Mem    OS       Model  Speed
    ================================================
    crabgrass  727   128MB  Solaris  U10    333MHz
    fusion     715   176MB  MacOS    G3     500MHz
    ripple     724   128MB  Solaris  U5     270MHz
    chemical   735   96MB   MacOS    G3     300MHz
    saturn     727   64MB   Win2000  P2     265MHz

Printers
    Name     Room  Status  Model
    ===============================
    chaucer  737   up      LJ4
    shake    602   up      LJ4
    keats    706   up      LJ2
    poe      707   down    LJ2
    picasso  737   up      P560

Cycle Servers
    Name   Status  Free Mem  #Users  OS
    =======================================
    heart  up      95MB      23      Solaris
    brain  up      1.2GB     4       Solaris
    aorta  down    3.2GB     0       Solaris
    ganga  down    2.0GB     0       Tru64
    tapti  up      850MB     2       Tru64

File Servers
    Name     Status  Free Disk  OS
    ==================================
    nuclear  up      5.8GB      MacOS
    steward  up      8.5GB      Solaris
    slate    up      452MB      Solaris
    butler   down    12.6GB     Solaris

Users
    Id        Name              Status   File Server  Console   Idle
    ===================================================================
    scott     M. Scott          Faculty  nuclear      fusion    0
    blaylock  N. Blaylock       Grad     steward      ripple    1:13
    papathan  A. Papathanasiou  Grad     steward      heart     12:06:14
    cluis     C. Luis           UG       butler       heart     :07
    pawlicki  T. Pawlicki       Faculty  butler       heart     0
    marty     M. Guenther       Staff    nuclear      chemical  :15
    kparkins  K. Parkins        Staff    butler       saturn    0

----------------
Notes

Information is in tables.  Each row is called a *tuple*.  The entries
within a tuple are its *attributes*.  Attributes have names, which
appear at the tops of columns.  Attributes in a given column are drawn
from the column's *domain*.  Because a tuple is often implemented as a
record (struct), I'll often refer to attributes as *fields*.

Order of rows doesn't matter.

Individual data fields aren't interesting in isolation: it's the
*relationships* that matter.  E.g. marty has been idle 15 minutes.

Each row describes a unique set of relationships: two rows in the same
table may have the same information in one or more columns, but no two
rows in the same table are exactly the same.

The number of columns is constant (within a given relation).  The
number of rows can be arbitrarily large.  (Imagine a table containing
all workstations in the world, or all users on the Internet.)

Different tables describe different sorts of relationships.  We may
want columns of similar info in multiple tables, so we can navigate
among relationships of different kinds.  E.g. scott's console machine
runs MacOS.  The matching columns may or may not have the same name
(Users: Console and Workstations: Name, v. File Servers: OS and
Workstations: OS).  It is also possible for columns in two tables to
have the same name even when they don't represent the same information
(e.g. Users: Status and File Servers: Status).

Some columns uniquely determine the other entries in a row (such as a
user's login or a machine's, printer's, or server's name, all of which
must be unique in the lab).  It's also possible to have relations in
which no single column uniquely determines the rest, but some
combination does.  Any column or _minimal_set_ of columns that
uniquely determines the data in all tuples is called a *key*.  Worst
case, the entire set of columns is a key, because tuples within a
relation are unique.

As a general rule, we try to avoid storing the same relationship in
more than one place (i.e. no two columns the same in two places),
because that produces redundancy and the possibility of inconsistency.

----------------
Basic operations:

* insert
* delete
* lookup (simple query)

Lookup operations can
- use a single entry: how long has scott been idle?
- use a whole table: how many servers are up?
- cross tables: Suppose we want a list of all the users who have been
  idle less than an hour, and whose file server is currently down.

Here's how that operation might be implemented (see the C sketch at
the end of this section):

* Extract from the server table a list of servers that are down.
* Extract from the user table a list of users idle less than an hour.
* For each such user, extract the server field from the entry, and see
  if that server name appears on the list of servers that are
  currently down.
* If so, output the name of the user.

There are other ways to get the same information: we could, for
example, iterate over each server that is down, looking for users of
that server.

----------------
How can Relations and Relational Algebra Help?

- Conceptual framework for organizing information -- helps you decide
  what information to keep together, based on expected relationships
  and queries.

- Abstract language for expressing queries and other operations on
  databases, against which implementations can be compared (e.g. for
  completeness).

- Algebra for expression simplification, to optimize queries for
  performance.
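Here's what the cross-table lookup above might look like in C -- a
minimal sketch, assuming the relations are in-memory arrays of
structs.  The struct layouts, field names, and the representation of
idle time in minutes are illustrative inventions, not part of the
database above:

    #include <stdio.h>
    #include <string.h>

    /* Toy in-memory relations: one struct per tuple, one array per
       relation.  Idle time is kept in minutes for simplicity. */
    struct user   { char name[32]; char server[16]; int idle_min; };
    struct server { char name[16]; int is_up; };

    struct user users[] = {
        { "C. Luis",     "butler",  7  },
        { "N. Blaylock", "steward", 73 },
        { "K. Parkins",  "butler",  0  },
    };
    struct server servers[] = { { "butler", 0 }, { "steward", 1 } };
    int nusers   = sizeof users / sizeof users[0];
    int nservers = sizeof servers / sizeof servers[0];

    int main(void) {
        /* Step 1: extract a list of the servers that are down. */
        const char *down[sizeof servers / sizeof servers[0]];
        int ndown = 0;
        for (int i = 0; i < nservers; i++)
            if (!servers[i].is_up)
                down[ndown++] = servers[i].name;

        /* Steps 2-4: for each user idle less than an hour, see whether
           that user's file server appears on the "down" list. */
        for (int i = 0; i < nusers; i++) {
            if (users[i].idle_min >= 60) continue;
            for (int j = 0; j < ndown; j++)
                if (strcmp(users[i].server, down[j]) == 0) {
                    printf("%s\n", users[i].name);
                    break;
                }
        }
        return 0;
    }

Run on this toy data, the sketch prints C. Luis and K. Parkins: both
are idle under an hour and their server (butler) is down.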
----------------
Getting formal

Remember our example ADTs:

    function: mapping from domain elements to (unique) range elements
    binary relation: collection of (unique) (domain, range) pairs,
        not necessarily one-one

Relations are the n-ary extension of binary relations.  Formally, a
relation on the domains D1, D2, ..., Dn is a set of n-tuples, each of
which is an element of the Cartesian product D1 x D2 x D3 x ... x Dn.

Informally, we can think of a relation as a table, where the columns
(attributes) correspond to the domains of the relation, and the rows
correspond to the tuples.

* Since a relation is a set, it doesn't matter in what order we list
  the tuples (rows) in the table.
* Since a relation is NOT a multiset, no two rows can be the same.
* Since each of the domains (attributes) has a distinct name, we can
  list the columns in any order in the table.

The *scheme* of a relation is the list of domain/attribute names
D1...Dn.

    Workstations: Name-Room-Mem-OS-Model-Speed
    Printers: Name-Room-Status-Model
    Cycle Servers: Name-Status-Free Mem-#Users-OS
    File Servers: Name-Status-Free Disk-OS
    Users: Id-Name-Status-File Server-Console-Idle

A *database* is a collection of relations.  The scheme for a database
is the set of schemes for the relations in the database.  We require
domain names within a relation to be unique, but do not require this
of all domain names in the database as a whole.

----------------
Keys

As noted informally above, a key K for relation R on domains D1...Dn
is a subset of the domains D1...Dn such that

    1. the value of K uniquely identifies each tuple in R
    2. no proper subset of K exhibits property 1

Since every tuple in R must be unique, it follows that the set
{D1,D2,...,Dn} obeys property 1.  To find a key, then, we need only
look for subsets that obey property 2.

IMPORTANT NOTE: we cannot in general tell whether an attribute or set
of attributes is a key simply by inspecting the current contents of
the relation, because some future inserts might introduce tuples that
have the same "key" but different other attributes.  "Keyness" depends
on the semantics (meaning) of the relation as determined by the human
user; it's an assertion about *all* data that might *ever* be put in
the relation.  (Consider the Printers relation above, with and without
picasso.  Is room number a key?)

In our simple database

    The various Name domains can serve as keys for the Workstations,
    Printers, Cycle Servers, and File Servers relations.

    Id can serve as a key for the Users relation.  Name does NOT work
    as a key for the Users relation: two users could have the same
    name.

    If we assume that two people with the same name are never assigned
    to the same file server, {Name, File Server} is a key for the
    Users relation.  (That may not be a safe assumption!)

    If we assume no room has more than one printer of the same model,
    then {Room, Model} is a key for the Printers relation.  (Again,
    this may not be a safe assumption!)

------------------------------------------------------------------------
Implementing Relations

Tuples are basically records, one field per domain.  As an ADT,
they're objects with put_field and get_field methods.  The natural
data structure uses contiguous allocation, like a C struct.

A relation is a set of tuples.  It can be realized as
    an array
    a linked list
    a tree
    a hash table

In most (but not all) cases, the relation is sorted or hashed on a
particular *primary* key, and the tree or hash structure constitutes a
*primary index*.  The alternatives are (a) not to sort (but this
forces most operations to take time linear in the size of the
relation), or (b) to sort or hash on something other than a key, in
which case we have to accommodate duplicates.

We can of course build as many tree- or hash-table-based indices as we
want.  We can also choose to sort without keeping an index.  If we
have random access, that at least allows us to do binary search, but
it makes inserts and deletes really expensive.  Indices that don't
determine the layout of the data in memory (disk, tape) are called
*secondary* indices.

NB: the distinction between primary and secondary indices is not a
strong one.  We can easily imagine organizations (e.g. a master doubly
linked list with a bunch of trees or hash tables pointing into the
list) in which there is no discernible difference between primary and
secondary.  The difference matters a little more in databases
maintained in secondary storage -- especially on tape -- where
sequential access is much faster than random access.

Ultimately the decision on how best to implement a relation depends on
    the size of the relation
    the keys
    the sorts of lookup operations (queries) that must be supported
    the expected frequency of each sort of query
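For instance, a tuple can be a C struct and the relation an array of
structs kept sorted on the primary key, so that the layout itself
serves as the primary index.  This is only a sketch; the field names
(following the Workstations table) and the function names are mine:

    #include <stdio.h>
    #include <string.h>

    /* One tuple of the Workstations relation: one field per domain. */
    struct workstation {
        char name[32];
        int  room;
        int  mem_mb;       /* memory, in MB */
        char os[16];
        char model[16];
        int  speed_mhz;    /* clock, in MHz */
    };

    /* The relation as an array kept sorted on the primary key (name):
       the sorted layout itself acts as the primary index. */
    struct workstation workstations[] = {
        { "chemical",  735,  96, "MacOS",   "G3",  300 },
        { "crabgrass", 727, 128, "Solaris", "U10", 333 },
        { "fusion",    715, 176, "MacOS",   "G3",  500 },
        { "ripple",    724, 128, "Solaris", "U5",  270 },
        { "saturn",    727,  64, "Win2000", "P2",  265 },
    };
    int nws = sizeof workstations / sizeof workstations[0];

    /* Binary search on the primary key: O(log N) lookup. */
    struct workstation *lookup(const char *name) {
        int lo = 0, hi = nws - 1;
        while (lo <= hi) {
            int mid = (lo + hi) / 2;
            int cmp = strcmp(name, workstations[mid].name);
            if (cmp == 0) return &workstations[mid];
            if (cmp < 0) hi = mid - 1; else lo = mid + 1;
        }
        return NULL;   /* no such machine */
    }

    int main(void) {
        struct workstation *w = lookup("ripple");
        if (w) printf("%s is in room %d\n", w->name, w->room);
        return 0;
    }

As noted above, the sorted array makes inserts and deletes expensive;
the linked-list, tree, and hash alternatives trade that off
differently (see the tradeoffs below).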
------------------------------------------------------------------------
More on operations

* insert(t,R) - Insert tuple t into the relation R (if not already
  present).
* delete(P,R) - Delete every tuple from relation R that satisfies the
  predicate P.
* lookup(P,R) - Return a relation consisting of the tuples in R that
  satisfy the predicate P.

Depending on the complexity of the queries to be supported, the
predicates can be based on either equality [room == 727] or more
general (e.g. arithmetic [idle > 60]) comparisons.  They may be
expressed either as general logical statements
["status == faculty && file_server == butler"] or as patterns with
"wildcard" (don't care) indicators ["(*,*,faculty,butler,*,*)"].  In
the latter case queries are restricted to conjunctions (ANDs) of
equality tests.

----------------
Implementation Tradeoffs

Insert requires that we not insert a tuple that is already present;
therefore it requires an efficient test of membership.  Delete and
lookup require only those tuples that match a predicate.  This is fast
if the specified (non-wildcard) fields constitute a key for which we
have an index.  It's still reasonably fast if some subset of the
specified fields constitutes a key.

    unsorted linked list
        insert (without lookup) O(N) to avoid duplicates
        lookup, delete O(N)
    balanced search tree
        insert O(log N)
        lookup, delete O(k log N), when k tuples have a matching key,
            or O(log N + k) if the tree is threaded
    characteristic vector (limited domains)
        insert O(1), assuming the characteristic field is a key or
            tuples are randomly distributed
        lookup, delete O(k), when k tuples have a matching key
    hash table
        insert O(1), assuming a good hash function and well
            distributed keys
        lookup, delete O(k), when k tuples have a matching key

Note that absent an index for at least some of the fields specified in
a query, lookup (and thus delete) will require that we search the
entire relation: O(N) time.
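Here's a sketch of the unsorted-linked-list row of that table in C:
insert scans the whole list to avoid duplicates (O(N)), and lookup
applies a predicate to every tuple (O(N)).  The two-field schema and
all the names are invented for illustration:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct tuple { char name[32]; int room; };   /* toy two-field schema */
    struct node  { struct tuple t; struct node *next; };

    /* insert(t,R): O(N), because we must first scan the whole list to
       make sure the tuple isn't already present.  (memcmp is safe here
       because compound literals zero-fill the unused array bytes and
       this struct has no padding.) */
    void insert(struct node **rel, struct tuple t) {
        for (struct node *p = *rel; p; p = p->next)
            if (memcmp(&p->t, &t, sizeof t) == 0)
                return;                          /* duplicate: do nothing */
        struct node *n = malloc(sizeof *n);
        n->t = t;
        n->next = *rel;
        *rel = n;
    }

    /* lookup(P,R): print every tuple satisfying predicate P; O(N),
       since without an index we must examine every tuple.  delete(P,R)
       would traverse the same way, unlinking matches instead. */
    void lookup(struct node *rel, int (*pred)(const struct tuple *)) {
        for (struct node *p = rel; p; p = p->next)
            if (pred(&p->t))
                printf("%s %d\n", p->t.name, p->t.room);
    }

    /* An example equality predicate: room == 727. */
    int in_727(const struct tuple *t) { return t->room == 727; }

    int main(void) {
        struct node *rel = NULL;
        insert(&rel, (struct tuple){ "crabgrass", 727 });
        insert(&rel, (struct tuple){ "saturn",    727 });
        insert(&rel, (struct tuple){ "crabgrass", 727 });  /* ignored */
        lookup(rel, in_727);
        return 0;
    }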
------------------------------------------------------------------------
------------------------------------------------------------------------
Hash Table Example for the Workstations relation

    Name       Room  Mem    OS       Model  Speed
    ================================================
    crabgrass  727   128MB  Solaris  U10    333MHz
    fusion     715   176MB  MacOS    G3     500MHz
    ripple     724   128MB  Solaris  U5     270MHz
    chemical   735   96MB   MacOS    G3     300MHz
    saturn     727   64MB   Win2000  P2     265MHz

Assume a simple (too simple!) hash function

    * h(machine name) = first letter of name

The resulting table:

    a --> NIL
    b --> NIL
    c --> [crabgrass, 727, 128MB, Solaris, U10, 333MHz]
      --> [chemical, 735, 96MB, MacOS, G3, 300MHz] --> NIL
    d --> NIL
    e --> NIL
    f --> [fusion, 715, 176MB, MacOS, G3, 500MHz] --> NIL
    :
    r --> [ripple, 724, 128MB, Solaris, U5, 270MHz] --> NIL
    s --> [saturn, 727, 64MB, Win2000, P2, 265MHz] --> NIL
    :
    z --> NIL

* To insert a tuple, make sure it is not already on the list in the
  selected bucket, then add it to the list.
* To delete a tuple, examine each entry on the list for the selected
  bucket, evaluate the predicate for each entry, and delete those that
  match.
* To look up tuples, examine each entry on the list for the selected
  bucket, evaluate the predicate for each entry, and return as a
  result those tuples that match.

Given this structure, lookups in which only the machine name is
specified will take O(1) time.  If the relation were indexed by room,
lookups in which room and OS are specified would take O(number of
tuples in the room).  In either case, lookups in which only OS is
specified would take O(N).
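A C sketch of this bucket structure, using the same (too simple!)
first-letter hash.  Only the name and room fields are kept, and all
the identifiers are illustrative:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <ctype.h>

    struct ws { char name[32]; int room; struct ws *next; };

    /* 26 buckets, one per letter: h(machine name) = first letter. */
    struct ws *bucket[26];

    int h(const char *name) {
        return tolower((unsigned char)name[0]) - 'a';
    }

    /* insert: search the one relevant bucket for a duplicate, then
       prepend -- O(1) expected, given short chains. */
    void insert(const char *name, int room) {
        int b = h(name);
        for (struct ws *p = bucket[b]; p; p = p->next)
            if (strcmp(p->name, name) == 0)
                return;                       /* already present */
        struct ws *n = malloc(sizeof *n);
        strncpy(n->name, name, sizeof n->name - 1);
        n->name[sizeof n->name - 1] = '\0';
        n->room = room;
        n->next = bucket[b];
        bucket[b] = n;
    }

    /* lookup by machine name: we examine only one bucket's chain. */
    struct ws *lookup(const char *name) {
        for (struct ws *p = bucket[h(name)]; p; p = p->next)
            if (strcmp(p->name, name) == 0)
                return p;
        return NULL;
    }

    int main(void) {
        insert("crabgrass", 727);
        insert("chemical",  735);   /* collides with crabgrass: same chain */
        insert("fusion",    715);
        struct ws *w = lookup("chemical");
        if (w) printf("%s: room %d\n", w->name, w->room);
        return 0;
    }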
----------------
Secondary Index Structures

Remember that we can create as many indices as we like.  They work
especially well when indexing by a key, but are possible even for
non-keys.  If a primary index determines the layout of the data in
memory, secondary indices contain pointers into the primary structure,
rather than the data itself.

----------------
Operations that Span Relations

All of the operations we've considered so far (insert, delete, lookup)
operate on a single relation.  Operations may also span relations in
order to query relationships among tuples in different relations.

* look up users whose server is down
* look up machines in a room with a printer

----------------
Multi-Relation Operations: Example

Find the names of all users who have been idle less than 1 hour, and
whose server is down.

    for all tuples t in User relation do
        for all tuples s in Server relation do
            if (t.Idle < 1:00) and (s.Name = t.Server) then
                if s.Status = down then
                    print t.Name

The running time of this algorithm is O(|Users| * |Servers|).  In
general, any query that requires that we look at every item in one
relation for every item in another relation is inefficient.

To improve the algorithm, we can
* reorder the steps taken
* use an index

----------------
Reordering Operations: Example

We can improve the previous implementation by reordering operations so
that we select items from one relation before iterating over the other
relation.

Find the names of all users who have been idle less than 1 hour, and
whose server is down.

    for all tuples t in User relation do
        if (t.Idle < 1:00) then
            for all tuples s in Server relation do
                if (s.Name = t.Server) then
                    if (s.Status = down) then
                        print t.Name

Assume that there are k users idle less than 1 hour.  The running time
of this algorithm is O(|Users| + k|Servers|).  This implementation is
considerably more efficient than the previous one in cases where
k << |Users|.

----------------
Reordering Again: Example

Suppose we reorder the two loops, and select from the Server relation
before iterating over the User relation.

Find the names of all users who have been idle less than 1 hour, and
whose server is down.

    for all tuples s in Server relation do
        if (s.Status = down) then
            for all tuples t in User relation do
                if (t.Idle < 1:00) then
                    if (s.Name = t.Server) then
                        print t.Name

Assume that j servers are down.  The running time of this algorithm is
O(|Servers| + j|Users|).

Which is better?
* Are there more servers or users?
* Are there more idle users or down servers?

----------------
Indexes for Complex Queries

Rather than search among all tuples in a relation, we can use an index
to quickly find the tuples that match a given predicate.

To implement a query about idle time and server status, we would have
to implement a secondary index for each of those domains.  Such
queries are not likely to be common, and such an index may not be
worthwhile.  An index based on the room number may make sense for both
the Workstations and Printers relations.  An index based on User Name
(the character string) may be valuable for the Users relation.

In general, we would like to answer queries in time linear in the size
of the answer.  We can do this only if we have a (good) hash table for
every attribute in the query, and all comparisons in the query are
based on equality or inequality, rather than < or >, which hash tables
don't support.  (< and > queries can benefit from tree-structured
indices, but those are O(log N), instead of the O(1) that hash tables
give us for == and != queries.)

Note the downside of extra indices:
- space
- time to update on insert and delete operations

------------------------------------------------------------------------
Database Design

A database designer must define
* the database scheme
* the keys for each relation
* primary index structures for each relation
* (optional) secondary index structures for each relation

These decisions depend on
* the information to be supported
* the relationships between pieces of information
* the expected size of the relations
* the types of queries expected

Two important principles guide scheme design:

1. Two pieces of information that are independent should reside in two
   different relations.
2. Two pieces of information that are related should reside in the
   same relation.

Violations of principle 1 result in a database full of redundant
information.
* Course prerequisite information is independent of the time and day
  the course is offered.
* A server's status is independent of the users it serves.
* Company information is independent of the mutual fund that owns
  shares in the company.

Violations of principle 2 result in a loss of information in the
database.
* A Course-Day-Hour relation cannot be split into Course-Day and
  Course-Hour relations without losing track of the meeting time of a
  course on a particular day.

Key (index) selection is also important, as it dictates what will be
fast.
* Unnecessary indices slow down the database, because they have to be
  updated on every insert and delete.
* Remember that you can't identify keys simply by looking at the
  values of a relation; only the database designer knows for sure.
* The best choice of indices depends on knowledge of the data in the
  relations, and the expected pattern of queries.
* If the common queries don't use a key (e.g. User Name in the Users
  relation), you can still use a hash table based on this domain, but
  the distribution of entries in hash buckets may be less even than if
  you used a key (such as user login).
------------------------------------------------------------------------
Relational Algebra

An algebra is a formal structure consisting of sets and operations on
those sets.  Relational algebra is a formal system for manipulating
relations.

* Operands of this algebra are relations.
* Operations of this algebra include the usual set operations (since
  relations are sets of tuples) -- union, intersection, and difference
  -- and special operations defined for relations:
  - selection
  - projection
  - join

------------------------------------------------------------------------
------------------------------------------------------------------------
Selection: chooses tuples from a relation whose attributes meet the
selection criteria, which are normally expressed as a predicate.

    R2 = select (R1, P)

That is, from R1 we create a new relation R2 containing those tuples
from R1 that satisfy (make true) the predicate P.  Clearly R2 is a
subset of R1.

A predicate is a boolean expression whose operators are the logical
connectives (and, or, not) and arithmetic comparisons (LT, LE, GT, GE,
EQ, NE), and whose operands are either domain names or domain
constants.

select (Workstations, Room = 727) =

    Name       Room  Mem    OS       Model  Speed
    ================================================
    crabgrass  727   128MB  Solaris  U10    333MHz
    saturn     727   64MB   Win2000  P2     265MHz

select (Users, Status = Staff and Idle < :10) =

    Id        Name        Status  File Server  Console  Idle
    ===========================================================
    kparkins  K. Parkins  Staff   butler       saturn   0

----------------
Projection: chooses a subset of the columns in a relation, and
discards the rest.

    R2 = project (R1, D1, D2, ..., Dn)

That is, from the tuples in R1 we create a new relation R2 containing
attributes from the domains D1, D2, ..., Dn only.

project (Cycle Servers, Name, Status) =

    Name   Status
    ==============
    heart  up
    brain  up
    aorta  down
    ganga  down
    tapti  up

project (select (Users, Console = heart), Name, Status) =

    Name              Status
    =========================
    A. Papathanasiou  Grad
    C. Luis           UG
    T. Pawlicki       Faculty

(C sketches of selection and projection appear below, after the
definition of join.)

----------------
Join: combines the attributes of two relations into one.

    R3 = join (R1, D1, R2, D2)

Given a domain from each relation, join considers all possible pairs
of tuples from the two relations, and if their values for the chosen
domains are equal, it adds a tuple to the result containing all the
attributes of both tuples (discarding the duplicate domain D2).

Natural join: if the two relations being joined have exactly one
attribute (domain) name in common, then we assume that the single
attribute in common is the one being compared to see whether a new
tuple will be inserted in the result.
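As promised, here is a C sketch of selection and projection over a toy
three-column Cycle Servers relation.  The function names, the fixed
schema, and the quadratic duplicate check are all simplifications of
my own:

    #include <stdio.h>
    #include <string.h>

    struct server { char name[16]; char status[8]; char os[16]; };

    struct server servers[] = {
        { "heart", "up",   "Solaris" },
        { "brain", "up",   "Solaris" },
        { "aorta", "down", "Solaris" },
        { "ganga", "down", "Tru64"   },
        { "tapti", "up",   "Tru64"   },
    };
    int nservers = sizeof servers / sizeof servers[0];

    /* select(R,P): copy into out[] each tuple satisfying predicate P.
       No duplicates can arise, since R itself has none. */
    int select_rel(struct server out[],
                   int (*pred)(const struct server *)) {
        int n = 0;
        for (int i = 0; i < nservers; i++)
            if (pred(&servers[i]))
                out[n++] = servers[i];
        return n;
    }

    /* project(R, Name, Status): keep two columns, and discard any
       duplicate rows that result (a relation is a set).  The O(N^2)
       scan could be replaced by a hash table to make this O(N). */
    struct ns { char name[16]; char status[8]; };
    int project_name_status(struct ns out[]) {
        int n = 0;
        for (int i = 0; i < nservers; i++) {
            int dup = 0;
            for (int j = 0; j < n; j++)
                if (strcmp(out[j].name, servers[i].name) == 0 &&
                    strcmp(out[j].status, servers[i].status) == 0) {
                    dup = 1;
                    break;
                }
            if (!dup) {
                strcpy(out[n].name, servers[i].name);
                strcpy(out[n].status, servers[i].status);
                n++;
            }
        }
        return n;
    }

    int up(const struct server *s) { return strcmp(s->status, "up") == 0; }

    int main(void) {
        struct server sel[5];
        struct ns proj[5];
        int n = select_rel(sel, up);        /* select (Servers, Status = up) */
        for (int i = 0; i < n; i++) printf("%s\n", sel[i].name);
        n = project_name_status(proj);      /* project (Servers, Name, Status) */
        for (int i = 0; i < n; i++) printf("%s %s\n", proj[i].name, proj[i].status);
        return 0;
    }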
Suppose we augment the domain names in our lab database so that
columns that really indicate different things have different names.
The Workstations scheme might be

    WorkstationName-Room-Mem-OS-WorkstationModel-Speed

The File Servers scheme might be

    FileServerName-Status-FreeDisk-OS

A natural join of the Workstations and File Servers relations would
then produce a relation of all workstation/file server pairs that are
running the same OS:

    Work-                                          File-
    station-                                       Server-          Free
    Name       Room  Mem    OS       Model  Speed  Name     Status  Disk
    =====================================================================
    crabgrass  727   128MB  Solaris  U10    333MHz steward  up      8.5GB
    crabgrass  727   128MB  Solaris  U10    333MHz slate    up      452MB
    crabgrass  727   128MB  Solaris  U10    333MHz butler   down    12.6GB
    fusion     715   176MB  MacOS    G3     500MHz nuclear  up      5.8GB
    ripple     724   128MB  Solaris  U5     270MHz steward  up      8.5GB
    ripple     724   128MB  Solaris  U5     270MHz slate    up      452MB
    ripple     724   128MB  Solaris  U5     270MHz butler   down    12.6GB
    chemical   735   96MB   MacOS    G3     300MHz nuclear  up      5.8GB

Note that entries for saturn do not appear, because there is no
Win2000 file server in the database.  Note also that we have
introduced a LOT of redundant information.

Joins generally produce wider relations than those chosen (presumably
with care) by the database designer.  That's OK, however, because
we'll usually be performing joins only as an intermediate step in
answering a query; there will almost always be a project that tosses
the columns we don't need.

For example: find all workstations for which there exists a file
server, running the same OS, that is currently up and has more than
5GB of disk space free; return (Workstation, FileServer, Room)
triples:

R1 = the join as above

R2 = select (R1, Status = up and FreeDisk > 5GB)

    Work-                                          File-
    station-                                       Server-          Free
    Name       Room  Mem    OS       Model  Speed  Name     Status  Disk
    =====================================================================
    crabgrass  727   128MB  Solaris  U10    333MHz steward  up      8.5GB
    fusion     715   176MB  MacOS    G3     500MHz nuclear  up      5.8GB
    ripple     724   128MB  Solaris  U5     270MHz steward  up      8.5GB
    chemical   735   96MB   MacOS    G3     300MHz nuclear  up      5.8GB

R3 = project (R2, WorkstationName, FileServerName, Room)

    WorkstationName  FileServerName  Room
    =====================================
    crabgrass        steward         727
    fusion           nuclear         715
    ripple           steward         724
    chemical         nuclear         735

Note that this is not the only way to get the result (and in
particular it's not the best way!); more on this later.

----------------
Implementing Set Operations  (largely review of 172 material)

Assume our relations are implemented as lists.  To implement R1 U R2
(while eliminating duplicates) we can

* sort R1 in O(N lg N)
* sort R2 in O(M lg M)
* merge R1 and R2 in O(N+M)

This takes care of the data.  What about the indices?  Hash tables
using the same hash function can be merged in linear time.  Trees can
be merged in O(N lg N) time.  So O(N lg N + M lg M) definitely covers
the whole thing.

If our indices are hash tables, we can actually do the whole thing in
O(N + M):

* copy R1 to the destination in O(N)
* for each tuple in R2 (which is O(M)), add it to the destination iff
  it isn't already there

Intersection and set difference have corresponding implementations.
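A minimal C sketch of the hash-based union.  To keep it short, the
"tuples" here are just strings; the bucket count, hash function, and
all the names are arbitrary choices of mine:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define NBUCKETS 64

    struct node { const char *s; struct node *next; };
    struct node *table[NBUCKETS];   /* the destination relation + its index */

    unsigned hash(const char *s) {
        unsigned h = 0;
        while (*s) h = h * 31 + (unsigned char)*s++;
        return h % NBUCKETS;
    }

    /* Add s to the destination iff it isn't already there:
       O(1) expected per tuple. */
    void add(const char *s) {
        unsigned b = hash(s);
        for (struct node *p = table[b]; p; p = p->next)
            if (strcmp(p->s, s) == 0)
                return;                       /* already present */
        struct node *n = malloc(sizeof *n);
        n->s = s;
        n->next = table[b];
        table[b] = n;
    }

    int main(void) {
        const char *r1[] = { "heart", "brain", "aorta" };
        const char *r2[] = { "brain", "tapti" };

        /* R1 U R2 in O(N + M): copy R1, then add R2's tuples one by one. */
        for (int i = 0; i < 3; i++) add(r1[i]);
        for (int i = 0; i < 2; i++) add(r2[i]);

        for (int b = 0; b < NBUCKETS; b++)
            for (struct node *p = table[b]; p; p = p->next)
                printf("%s\n", p->s);
        return 0;
    }

Intersection would keep a tuple of R2 only if a lookup in R1's table
succeeds; difference is analogous.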
----------------
Implementing Projection

To implement projection we must
* process every tuple in the relation
* remove any duplicates that result

To avoid duplicates we can
* sort the result and remove consecutive tuples that are equal
  - requires time O(N log N), where N is the size of the original
    relation
* implement the result as a set
  - set insertion guarantees no duplicates
  - by using a hash table, insertion is O(1), so projection is O(N)

Note that we have to have a key for the hash table.  If nothing else,
the entire tuple can always be used as the key.

----------------
Implementing Selection

In the absence of an index we
* apply the predicate to every tuple in the relation
* insert matches in the resulting relation (duplicates can't occur)
* take O(N) time

Given an index, and an (in)equality-based predicate that uses the
index key, we
* look up tuples using the key
* evaluate only those tuples with the predicate
* take O(K) time, where K tuples match the key

Note that hash tables won't support < or > predicates, and no index
helps if it doesn't cover the fields used in the predicates.

Why not just build an index any time we have an operation that might
benefit from it?  (After all, asymptotically we can hide the cost of
index creation in any one such operation.)  Because then we have to
maintain the index on every future insert and delete.  It's only worth
it if we need the index for a reasonable fraction of all operations.

----------------
Implementing Join with Nested Loops

A nested-loop join on relations R1 (with N attributes) and R2 (with M
attributes) considers all |R1| x |R2| pairs of tuples.

R3 := join (R1, Ai, R2, Bj):

    for each tuple t in R1 do
        for each tuple s in R2 do
            if t.Ai = s.Bj then
                insert (R3, t.A1, t.A2, ..., t.AN,
                        s.B1, ..., s.B(j-1), s.B(j+1), ..., s.BM)

This implementation takes time O(|R1|*|R2|).

----------------
Index Join

An index join exploits the existence of an index for one of the
domains used in the join to find matching tuples more quickly.

R3 := join (R1, Ai, R2, Bj):

    for each tuple t in R1 do
        for each tuple s in R2 at index(t.Ai) do
            insert (R3, t.A1, t.A2, ..., t.AN,
                    s.B1, ..., s.B(j-1), s.B(j+1), ..., s.BM)

We could instead use an index for R1, and reverse the order of the
loops.  Presumably we'd choose to do the outer loop on the smaller
relation, and make use of the index on the larger relation.  As usual,
if we want any secondary indices for the joined relation we should
update them along the way.

----------------
Sort Join

If we don't have an index for a domain in the join, we can still
improve on the nested-loop join using a sort join.

R3 := join (R1, Ai, R2, Bj):

* Sort the tuples of each relation on the join attribute.
* Peruse the two lists in tandem.  If the initial tuples of one
  relation have a join attribute less than that of the initial tuples
  of the other relation, toss them.  Else the values are equal: pair
  all (consecutive) tuples from the two relations with the same value
  in the join attribute.  (A C sketch appears after the comparison
  below.)

NB: the book suggests merging the tuples of the two relations into a
single list and sorting that.  This works, and may be faster, but only
by a small constant factor, and it's harder to understand.

----------------
Comparison of Join Implementations

Assumptions
* Join R1 and R2 on some common attribute to produce R3
* R1 has i tuples, R2 has j tuples
* |R3| = m, 0 <= m <= i * j
* Every implementation takes at least time O(m)

Comparison
* Nested-loop join takes time O(i * j)
* Index join (using R2's index) takes time O(i + m)
  - lookup is O(1) for each tuple in R1
  - at most O(m) tuples match
* Sort join takes time O(i log i + j log j + m)
  - O(i log i + j log j) to sort the relations
  - O(i + j + m) to produce the output; the i + j term is absorbed by
    the sorting cost
  [NB: time is O((i+j) log (i+j) + m) if you do it the way the book
  suggests.]
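And here is the promised C sketch of the sort join, joining toy
Users(Name, Server) and Servers(Name, Status) relations on the server
name.  The schemas, the qsort comparators, and the two-pointer walk
are my own illustrative choices:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct user   { char name[32]; char server[16]; };
    struct server { char name[16]; char status[8]; };

    int by_server(const void *a, const void *b) {
        return strcmp(((const struct user *)a)->server,
                      ((const struct user *)b)->server);
    }
    int by_name(const void *a, const void *b) {
        return strcmp(((const struct server *)a)->name,
                      ((const struct server *)b)->name);
    }

    int main(void) {
        struct user users[] = {
            { "C. Luis",     "butler"  },
            { "N. Blaylock", "steward" },
            { "K. Parkins",  "butler"  },
        };
        struct server servers[] = {
            { "steward", "up" }, { "butler", "down" },
        };
        int nu = 3, ns = 2;

        /* Step 1: sort both relations on the join attribute. */
        qsort(users, nu, sizeof users[0], by_server);
        qsort(servers, ns, sizeof servers[0], by_name);

        /* Step 2: walk the two sorted arrays in tandem.  Unmatched
           tuples are tossed; each user is paired with the consecutive
           run of servers sharing its join-attribute value. */
        int i = 0, j = 0;
        while (i < nu && j < ns) {
            int cmp = strcmp(users[i].server, servers[j].name);
            if (cmp < 0) i++;           /* no server matches this user */
            else if (cmp > 0) j++;      /* no user matches this server */
            else {
                for (int k = j;
                     k < ns && strcmp(users[i].server, servers[k].name) == 0;
                     k++)
                    printf("%s %s %s\n", users[i].name,
                           users[i].server, servers[k].status);
                i++;
            }
        }
        return 0;
    }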
------------------------------------------------------------------------
Expressing Queries in Relational Algebra

Relational algebra is an unambiguous notation (or formalism) for
expressing queries.  Queries are simply expressions in relational
algebra.  Expressions can be manipulated symbolically to produce
simpler or easier-to-compute expressions according to the laws of
relational algebra.  In particular

* early selection reduces the number of tuples
* early projection reduces the number of domains

Notation: infix operators are easier to manipulate than prefix
(function-call) style operators.  It is traditional to use lower-case
sigma for selection, lower-case pi for projection, and a bowtie for
join.  Since I'm typing in ascii, I'll use

    +       for union
    *       for intersection
    -       for difference
    $P      for selection on predicate P
    #A,B,C  for projection onto attributes A, B, and C
    ><      for join, with a stacked equate for non-natural join

[Note from last year: please DO NOT feed dollar signs and pound signs
back at me on an exam.  The real symbols are lower-case sigma and pi!]

There are a bunch of algebraic laws for set union, intersection, and
difference that you all probably remember from grade school.  E.g.:

    S + T == T + S
    S * T == T * S
    (R + S) + T == R + (S + T)
    (R * S) * T == R * (S * T)
    R * (S + T) == (R * S) + (R * T)
    R + (S * T) == (R + S) * (R + T)
    R - (S + T) == (R - S) - T
    (R + S) - T == (R - T) + (S - T)
    S + 0 == S
    S * 0 == 0
    S + S == S
    S * S == S
    S - S == 0
    0 - S == 0

See section 7.3 of the text.

------------------------------------------------------------------------
------------------------------------------------------------------------
Algebraic Laws for Join

Commutativity (assuming the order of columns doesn't matter -- this
doesn't work for the pseudo-join in the assignment):

    R1 >< R2   ==   R2 >< R1
       A=B             B=A

Associativity:

    (R1 >< R2) >< R3   ==   R1 >< (R2 >< R3)
        A=B    C=D              A=B    C=D

so long as B and C are separate (non-missing) attributes of R2.  If R2
doesn't have a C attribute, but R1 does, the first expression may be
well defined while the second one isn't.  If B and C are the same, we
have a potential naming problem: if we keep the left-hand attribute
name in each join, then the second expression may be well defined
while the first one isn't.  A similar problem arises with natural
join: if R1 and R2 have a field in common, and R1 and R3 have a field
in common, but R2 and R3 do not, then the first expression is well
defined and the second is not.

----------------
Algebraic Laws for Selection

Commutativity:

    $P1 ($P2 (R)) == $P2 ($P1 (R))

Selection pushing:

    $P (R >< S) == $P (R) >< S

so long as all attributes mentioned in P are really members of R.
Similarly,

    $P (R >< S) == R >< $P (S)

so long as all attributes mentioned in P are really members of S.  And

    $P (R >< S) == $P (R) >< $P (S)

so long as P mentions only the attribute on which we are performing
the (natural) join.  Similarly,

    $P (R + S) == $P (R) + $P (S)
    $P (R * S) == $P (R) * $P (S)
    $P (R - S) == $P (R) - $P (S)

Selection splitting, where P == A and B:

    $P (R) == $A ($B (R)) == $B ($A (R))

This rule sometimes allows us to push part of a selection down into
one of the arguments of a join, and the other part down into the other
argument.  If P is the conjunction of A and B, where A mentions
attributes found only in R, and B mentions attributes found only in S,
then

    $P (R >< S) == $A (R) >< $B (S)

----------------
Example from the book

Consider the following 4-relation database:

* CSG: Course-StudentID-Grade
* SNAP: StudentID-Name-Address-Phone
* CDH: Course-Day-Hour
* CR: Course-Room

Implement the query "Where is Amy at Noon on Monday?"
Let P be (Name="Amy" and Day="Monday" and Hour="Noon").

We can use a brute-force approach that joins all the data in the
relations into a single large relation, selects those tuples that meet
the query criteria, and then isolates the answer field using
projection:

    #Room ($P (((CSG >< SNAP) >< CDH) >< CR))           [fig 8.18]

The selection uses only the Name, Day, and Hour attributes (and not
Course or Room), so we can push the selection inside the outermost
join:

    #Room (($P ((CSG >< SNAP) >< CDH)) >< CR)           [fig 8.19a]

We cannot push the selection further, because the predicate involves
attributes from both operands of the next innermost join.  We can,
however, split the selection into two, one based on Name, and the
other based on Day and Hour.  Let P1 be (Name="Amy") and P2 be
(Day="Monday" and Hour="Noon"):

    #Room (($P1 ($P2 ((CSG >< SNAP) >< CDH))) >< CR)    [fig 8.19b]

Now we can push the selection on P2 inside the join, since it involves
only attributes of the CDH relation:

    #Room (($P1 ((CSG >< SNAP) >< $P2 (CDH))) >< CR)

Similarly, we can push the selection on P1 inside the preceding join,
since it involves no attributes of CDH:

    #Room (($P1 (CSG >< SNAP) >< $P2 (CDH)) >< CR)      [fig 8.19c]

Finally, we can push the selection on P1 inside the first join, since
it really needs attributes of only SNAP:

    #Room (((CSG >< $P1 (SNAP)) >< $P2 (CDH)) >< CR)    [fig 8.19d]

----------------
Algebraic Laws for Projection

Idempotence:

    #SR (R) == R, where SR is the scheme of R

Projection pushing: projection pushes down into unions,

    #A (R + S) == #A (R) + #A (S)

but not into intersections or differences.  The problem with
intersections and differences is that projection may make tuples that
used to look different look the same.

To push a projection inside a join, we push down into both arguments.
We have to be sure to keep the attributes used in the join, and then,
if necessary, get rid of them in a final step.  (If we're going to
keep that final step, we have the option of pushing down into only one
of the arguments, if we want.)  Suppose we're given

    #W (R >< S)
          A=B

Let WR be the subset of W found in R, plus A.  Let WS be the subset of
W found in S, plus B.  Then

    #W (R >< S)  ==  #W (#WR (R) >< #WS (S))
          A=B                    A=B

This gets rid of unneeded columns early, but adds a bunch of
operations.  When is it a good idea?  In general, we want to get rid
of unneeded columns whenever a subsequent operation is going to have
to look at the whole relation anyway.  It does *not* make sense to get
rid of columns if we're going to have to look at the whole relation to
do so, and a subsequent operation is going to use an index to select
only a subset of the tuples.  We'll see instances of this tradeoff in
the example below.

----------------
Continuing the example from the book

Originally we had

    #Room ($P (((CSG >< SNAP) >< CDH) >< CR))           [fig 8.18]

After selection pushing, we had

    #Room (((CSG >< $P1 (SNAP)) >< $P2 (CDH)) >< CR)    [fig 8.19d]

This is likely to be a lot faster than the original, but it still
carries a bunch of useless information through the joins.  We can (and
should) push the projection down into the outermost join.  We have to
project onto both room *and* course, because course is the attribute
of the join.  Note, however, that room doesn't appear as an attribute
of the left argument ((CSG >< $P1 (SNAP)) >< $P2 (CDH)), so we just
project onto course.  Moreover, course and room are the *only*
attributes of the right argument (CR), so that projection is a no-op,
and can be dropped.
This gives us:

    #Room (#Course ((CSG >< $P1 (SNAP)) >< $P2 (CDH)) >< CR)
                                                        [fig 8.20a]

Now we push the projection into both arguments of the next join.
Since course is the attribute of that join, we don't have to add
anything, nor do we need a cleanup projection to get rid of the join
attribute afterward:

    #Room ((#Course (CSG >< $P1 (SNAP)) >< #Course ($P2 (CDH))) >< CR)
                                                        [fig 8.20b]

One more push is possible:

    #Room ((#Course (#Course,StudentID (CSG) >< #StudentID ($P1 (SNAP))))
           >< #Course ($P2 (CDH))) >< CR)               [fig 8.20c]

Here course isn't an attribute of the right argument ($P1 (SNAP)), so
we project onto only the join attribute, namely StudentID.

Finally, we note that the innermost projection of CSG onto Course and
StudentID will require that we peruse the whole relation.  Immediately
afterward, however, we will do a join on StudentID, which is
presumably an attribute with an index, so we won't have to look at all
the tuples.  It will be cheaper to hang onto the grade fields for a
while than to get rid of them up front; we'll get rid of them in the
projection onto Course one level out anyway.  Here's the final query:

    #Room ((#Course (CSG >< #StudentID ($P1 (SNAP))))
           >< #Course ($P2 (CDH))) >< CR)               [fig 8.20d]

The book goes through both the selection and the projection
"optimizations" with tree-based diagrams.  They are pretty helpful.
Take a look at figures 8.19 and 8.20, pp. 445 and 447.

----------------
A final aside: there are important parallels between query
"optimization" in a DBMS and code "optimization" in a compiler.
(Neither, of course, necessarily produces a provably optimal result.
"Improvement" would be a more apt term.)  In both cases

(1) we have a formal mathematical framework that characterizes the
    transformations that are *semantically correct*, and

(2) we have a set of techniques, most often heuristic in nature, that
    drive decisions as to what is *computationally beneficial*.

Code optimizations are discussed (briefly) in CSC 254.  There has been
preliminary discussion about the possibility of creating a new course
(perhaps numbered 255) that would discuss them in considerably more
depth.