Lecture notes for CSC 173, Thurs. Nov. 8 -- Tues. Nov. 13, 2001

-------------------------------------------------

READING: Aho & Ullman chapter 9

New assignment goes on the web tomorrow.

-------------------------------------------------

Graphs

A graph is a set of nodes (or points) connected by edges (or arcs).

A simple example of a graph is a map of cities connected by roads.
The cities are nodes; the roads are edges.  Most questions you might
pose about such a map can be phrased in terms of operations on a
graph.  For example, finding the shortest route between two cities,
or the shortest route that visits all cities, are common graph
problems.

In fact, a wide variety of problems can be posed as operations on
graphs, including network routing, city planning, VLSI layout,
deadlock detection, and register allocation in a compiler.

As with most data structures, there are several different
implementations of graphs, with different tradeoffs in time and
space.

Several classical graph problems (and their solutions) have been
studied extensively because they arise frequently in practical
settings, including

   * breadth-first and depth-first search
   * the single-source-shortest-path problem
   * the all-pairs-shortest-path problem
   * transitive closure
   * the minimum-spanning-tree problem

========================================================================

Definitions

A graph consists of a set of N nodes (or vertices) and a set E of
edges.  Each element of E is a pair of nodes (u,v), which means there
is an edge (or arc) between u and v.

   * In a directed graph (or digraph), each edge (u,v) is an ordered
     pair, and there is an arc from u to v.
        o u is a predecessor of v
        o v is a successor of u
   * In an undirected graph, each edge is an unordered pair, and
     there is an undirected arc between u and v.
        o u and v are said to be adjacent
   * In a weighted graph, each edge has an associated value (weight).
   * A path in a directed graph is a list of nodes (v1, v2, ..., vn)
     such that
        o there is an arc from v(i) to v(i+1), for all 1 <= i < n
        o the length of the path is the number of arcs in it (n-1)
   * A simple path visits no node more than once.
   * A cycle in a directed graph is a path of length >= 1 that begins
     and ends with the same node.
        o A path of length 0 is not a cycle.
        o If there is an arc from a node v to itself, there is a
          cycle v -> v.
        o A cyclic graph is a graph that has at least one cycle; an
          acyclic graph has no cycles.
        o Notice that directions matter.  Edges a->b, b->c, and a->c
          do NOT make a cycle.

========================================================================

Operations on Graphs

   * Breadth-first and depth-first search: visit every node in a
     graph.
   * Finding cycles: does the graph have a cycle?
       EX: Given a list of processes currently executing on the
       system, the resources each holds, and the resources each
       needs, determine whether or not progress is possible.  (This
       example works on a so-called bipartite graph, in which every
       node belongs to one of two classes, and every edge connects a
       node in one class to a node in the other class.)
   * Connected components of an undirected graph: separate the nodes
     into equivalence classes, so that there is a path between any
     two nodes in the same class.
       EX: I just lost a link in the company network.  Is it still
       possible for everybody to reach everybody else?
   * Minimal spanning tree: find a tree that connects all the nodes
     in a weighted graph with minimal cost.
       EX: Design the layout of the resnet backbone, minimizing the
       amount of fibre that must be laid.
   * Topological sorting: assign a linear ordering to the nodes in a
     directed acyclic graph (DAG) in such a way that if there is a
     path from u to v in the graph, then v comes after u in the
     linear order.
       EX: Given a list of courses required for the major, and the
       prerequisite list for each course, find a schedule for taking
       the courses that obeys the prerequisite lists.
   * Single-source-shortest-path: find the shortest (lowest weight)
     path from a given node to all other reachable nodes.
   * All-pairs-shortest-path: find the shortest paths between all
     pairs of nodes.
       EX: Find the shortest distance between all pairs of cities in
       the country, for publication in a traveler's guidebook.

   ------

   * Minimal graph coloring: assign a color to each node so that no
     two nodes sharing an edge have the same color, and the total
     number of distinct colors is as small as possible.
       EX: Assign the fewest registers needed to store variables and
       temporary results in a procedure.
   * Maximum clique: find the largest subset of the nodes in which
     every pair of nodes shares an edge.
   * Maximum independent set: find the largest subset of the nodes in
     which no pair of nodes shares an edge.
   * Hamiltonian circuit: find a cycle, if there is one, on which
     every node appears exactly once.
   * Euler circuit: find a cycle, if there is one, on which every
     edge appears exactly once.
   * Traveling sales circuit: find a minimum-cost cycle (not
     necessarily a simple one) that visits every node at least once.
       EX: minimum-cost circuit board drilling.
   * Planar subgraph: find the largest subgraph of a given graph that
     can be rendered in the plane without any edge crossings.
       EX: circuit board layout.

These are just a sampling of some of the most important problems;
there are many, many others.  The ones above the short horizontal
line have polynomial-time solutions.  The ones below the line are
NP-complete.  (Technically, the decision-problem versions of them are
NP-complete; the general versions are NP-hard.  Don't worry about the
difference for now.)

========================================================================

Graph Implementations

There are two common implementations for graphs:

   * An adjacency matrix represents a graph G of N nodes and E edges
     using an NxN boolean matrix A, where A[i,j] is true if (i,j) is
     an edge in G.
   * An adjacency list represents a graph G of N nodes and E edges as
     an array A of size N, where A[i] is a pointer to a list of the
     vertices that are successors of vertex i.

If we are implementing an undirected graph,

   * the adjacency matrix is symmetric, which means A[i,j] = A[j,i]
   * each edge (u,v) appears on the adjacency lists of both u and v

------------------------------------------------------------------------

Comparison of Implementations

Consider a graph G with N nodes and E edges (0 <= E <= N^2).

   * Adjacency matrix
        o Is (u,v) an edge?  -- O(1)
        o Successors(u)      -- O(N)
        o Predecessors(u)    -- O(N)
        o Space              -- O(N^2) (bits) to store the matrix
        o Best for dense graphs (E ~= N^2)
   * Adjacency lists
        o Is (u,v) an edge?  -- O(E/N) on average
        o Successors(u)      -- O(E/N) on average
        o Predecessors(u)    -- O(N+E)
        o Space              -- O(N+E)
        o Best for sparse graphs (E << N^2)
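To make the adjacency-list representation concrete, here is a minimal
C sketch.  It is not from Aho & Ullman; the type and function names
(edge_node, graph, add_edge, is_edge) are our own.  Successor lists
are singly linked, with new edges pushed on the front.

    #include <stdlib.h>

    typedef struct edge_node {
        int target;                /* successor vertex */
        struct edge_node *next;    /* next entry on this list */
    } edge_node;

    typedef struct {
        int N;                     /* vertices are numbered 0..N-1 */
        edge_node **adj;           /* adj[i] = list of successors of i */
    } graph;

    graph *graph_create(int N)
    {
        graph *G = malloc(sizeof(graph));
        G->N = N;
        G->adj = calloc(N, sizeof(edge_node *));
        return G;
    }

    /* add the directed edge (u,v); for an undirected graph,
       call twice, once in each direction */
    void add_edge(graph *G, int u, int v)
    {
        edge_node *e = malloc(sizeof(edge_node));
        e->target = v;
        e->next = G->adj[u];
        G->adj[u] = e;
    }

    /* is (u,v) an edge?  O(out-degree of u) -- O(E/N) on average */
    int is_edge(graph *G, int u, int v)
    {
        edge_node *e;
        for (e = G->adj[u]; e != NULL; e = e->next)
            if (e->target == v)
                return 1;
        return 0;
    }

Note that each list holds successors in reverse order of insertion;
none of the algorithms below care about the order.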
========================================================================

Searching a Graph

Many problems can be described using graphs, where the solution to
the problem requires that we search the graph, looking for nodes (or
paths) with a certain property.

Two important graph exploration techniques are

   * breadth-first search: like breadth-first search in a tree, we
     search as broadly as possible by visiting a node, and then
     immediately visiting all nodes adjacent to that node.
   * depth-first search: like depth-first search in a tree, we search
     as deeply as possible by visiting a node, and then recursively
     performing depth-first search on each adjacent node.

In both algorithms we keep track of the nodes we've already seen, and
decline to visit a node twice.  If the graph is not connected, we may
need to employ multiple starting points in order to explore it all.

Depth-first search is naturally recursive.  Breadth-first search
requires a queue.

------------------------------------------------------------------------

Breadth-First Search

The Algorithm

    BFS(vertex u)
        queue Q
        u.marked = true
        // do whatever is appropriate upon first visiting u
        Q.enqueue(u)
        while not Q.empty()
            v = Q.dequeue()
            for all neighbors w of v
                if not w.marked
                    w.marked = true
                    // do whatever is appropriate upon first visiting w
                    Q.enqueue(w)

    main
        for all nodes u
            u.marked = false
        for all nodes u
            if not u.marked
                BFS(u)

------------------------------------------------------------------------

Analysis of the Algorithm

Each vertex is placed in the queue once, so the while loop in BFS is
executed at most N times.  The same holds for the two loops in main.

Each edge is examined once in the "for all neighbors" loop, whose
body is therefore executed at most E times.

Assuming we maintain head and tail pointers for the queue, enqueue,
dequeue, and empty are all O(1).

The algorithm therefore requires O(N + E) time.
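As an unofficial illustration, the BFS pseudocode above maps onto C
as follows, reusing the graph and edge_node types from the
adjacency-list sketch earlier.  Since each vertex is enqueued at most
once, an array of N slots can serve as the queue.

    #include <stdio.h>
    #include <stdlib.h>

    /* uses the graph and edge_node types from the earlier sketch */

    void visit(int u)       /* whatever is appropriate on first visit */
    {
        printf("visiting %d\n", u);
    }

    void bfs(graph *G, int start, int *marked)
    {
        int *Q = malloc(G->N * sizeof(int));  /* N slots suffice */
        int head = 0, tail = 0;
        marked[start] = 1;
        visit(start);
        Q[tail++] = start;                    /* enqueue */
        while (head < tail) {                 /* queue not empty */
            int v = Q[head++];                /* dequeue */
            edge_node *e;
            for (e = G->adj[v]; e != NULL; e = e->next)
                if (!marked[e->target]) {
                    marked[e->target] = 1;
                    visit(e->target);
                    Q[tail++] = e->target;    /* enqueue */
                }
        }
        free(Q);
    }

    void bfs_all(graph *G)
    {
        int *marked = calloc(G->N, sizeof(int));
        int u;
        for (u = 0; u < G->N; u++)
            if (!marked[u])
                bfs(G, u, marked);
        free(marked);
    }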
------------------------------------------------------------------------

Breadth-First Search and Die Hard

Use breadth-first search to solve the problem posed in Die Hard with
a Vengeance: measure 4 gallons using two jugs that hold 5 and 3
gallons.  Each node (b,s) records the gallons currently in the big
and small jugs.  The six possible operations -- empty a jug (EmpB,
EmpS), fill a jug (FillB, FillS), or pour one jug into the other
(PourSB, PourBS) -- define the arcs out of each node:

    Node    EmpB   EmpS   FillB  FillS  PourSB PourBS
    (0,0)   (0,0)  (0,0)  (5,0)  (0,3)  (0,0)  (0,0)
    (0,1)   (0,1)  (0,0)  (5,1)  (0,3)  (1,0)  (0,1)
    (0,2)   (0,2)  (0,0)  (5,2)  (0,3)  (2,0)  (0,2)
    (0,3)   (0,3)  (0,0)  (5,3)  (0,3)  (3,0)  (0,3)
    (1,0)   (0,0)  (1,0)  (5,0)  (1,3)  (1,0)  (0,1)
    (1,1)   (0,1)  (1,0)  (5,1)  (1,3)  (2,0)  (0,2)
    (1,2)   (0,2)  (1,0)  (5,2)  (1,3)  (3,0)  (0,3)
    (1,3)   (0,3)  (1,0)  (5,3)  (1,3)  (4,0)  (1,3)
    (2,0)   (0,0)  (2,0)  (5,0)  (2,3)  (2,0)  (0,2)
    (2,1)   (0,1)  (2,0)  (5,1)  (2,3)  (3,0)  (0,3)
    (2,2)   (0,2)  (2,0)  (5,2)  (2,3)  (4,0)  (1,3)
    (2,3)   (0,3)  (2,0)  (5,3)  (2,3)  (5,0)  (2,3)
    (3,0)   (0,0)  (3,0)  (5,0)  (3,3)  (3,0)  (0,3)
    (3,1)   (0,1)  (3,0)  (5,1)  (3,3)  (4,0)  (1,3)
    (3,2)   (0,2)  (3,0)  (5,2)  (3,3)  (5,0)  (2,3)
    (3,3)   (0,3)  (3,0)  (5,3)  (3,3)  (5,1)  (3,3)
    (4,0)   (0,0)  (4,0)  (5,0)  (4,3)  (4,0)  (1,3)
    (4,1)   (0,1)  (4,0)  (5,1)  (4,3)  (5,0)  (2,3)
    (4,2)   (0,2)  (4,0)  (5,2)  (4,3)  (5,1)  (3,3)
    (4,3)   (0,3)  (4,0)  (5,3)  (4,3)  (5,2)  (4,3)
    (5,0)   (0,0)  (5,0)  (5,0)  (5,3)  (5,0)  (2,3)
    (5,1)   (0,1)  (5,0)  (5,1)  (5,3)  (5,1)  (3,3)
    (5,2)   (0,2)  (5,0)  (5,2)  (5,3)  (5,2)  (4,3)
    (5,3)   (0,3)  (5,0)  (5,3)  (5,3)  (5,3)  (5,3)

------------------------------------------------------------------------

Breadth-First Search in Die Hard

Here is the sequence in which nodes in the graph would be visited,
starting from (0,0).  A '*' marks a successor that is being seen (and
enqueued) for the first time; we stop upon generating a state with 4
gallons in the big jug.

    Node    EmpB   EmpS   FillB  FillS  PourSB PourBS
    (0,0)   (0,0)  (0,0)  (5,0)* (0,3)* (0,0)  (0,0)
    (5,0)   (0,0)  (5,0)  (5,0)  (5,3)* (5,0)  (2,3)*
    (0,3)   (0,3)  (0,0)  (5,3)  (0,3)  (3,0)* (0,3)
    (5,3)   (0,3)  (5,0)  (5,3)  (5,3)  (5,3)  (5,3)
    (2,3)   (0,3)  (2,0)* (5,3)  (2,3)  (5,0)  (2,3)
    (3,0)   (0,0)  (3,0)  (5,0)  (3,3)* (3,0)  (0,3)
    (2,0)   (0,0)  (2,0)  (5,0)  (2,3)  (2,0)  (0,2)*
    (3,3)   (0,3)  (3,0)  (5,3)  (3,3)  (5,1)* (3,3)
    (0,2)   (0,2)  (0,0)  (5,2)* (0,3)  (2,0)  (0,2)
    (5,1)   (0,1)* (5,0)  (5,1)  (5,3)  (5,1)  (3,3)
    (5,2)   (0,2)  (5,0)  (5,2)  (5,3)  (5,2)  (4,3)*

The queue contents over time would be:

    (0,0) (5,0) (0,3) (5,3) (2,3) (3,0) (2,0) (3,3) (0,2) (5,1)
    (5,2) (0,1) (4,3)

------------------------------------------------------------------------

Depth-First Search

The Algorithm

    DFS(vertex u)
        u.marked = true
        // do whatever is appropriate upon first visiting u
        for all neighbors v of u
            if not v.marked
                DFS(v)

    main
        for all nodes u
            u.marked = false
        for all nodes u
            if not u.marked
                DFS(u)

------------------------------------------------------------------------

Analysis of the Algorithm

The number of calls to DFS is O(N), since we never call DFS on a
marked node, and we mark a node on entering DFS.

The total time spent traversing adjacency lists in the for loop of
DFS is O(E).

The algorithm therefore requires O(N + E) time.

------------------------------------------------------------------------

Depth-First Search in Die Hard

Here is the sequence in which nodes in the graph would be visited by
DFS, starting at (0,0) (and stopping once we reach a state with 4
gallons in the big jug):

    (0,0) (5,0) (5,3) (0,3) (3,0) (3,3) (5,1) (0,1) (1,0) (1,3) (4,0)

------------------------------------------------------------------------

Depth-First Search Trees

Since we never visit a node twice, our exploration of a graph using
DFS resembles a tree:

   * if DFS(v) causes a recursive call DFS(u), then u is a child of v
     in the tree.
   * the children of v appear left to right in the tree in the order
     they are marked.

DFS(v) produces a depth-first search tree with node v at the root.

In some graphs, it isn't possible to reach all nodes from a given
start node; that is, a single call to DFS may not visit all nodes in
the graph.  This is why the main program given above calls DFS for
every unmarked node in the graph.  Each such call produces a
different depth-first search tree.  (In an undirected graph, these
are the connected components.)  The main program thus produces a
depth-first search *forest* for the graph.

A DFS forest allows us to classify all edges in the graph.  Tree
edges end up in the DFS forest.  Back edges point from a node to an
ancestor in the forest.  Forward edges point from a node to a
(non-child) descendant.  Cross edges point from a node to something
to the left -- a node that is neither an ancestor nor a descendant.

------------------------------------------------------------------------

It is sometimes handy to assign *postorder numbers* to the nodes of a
graph.  These are induced by a DFS forest: nodes are given numbers in
the order in which they are *last* visited by the DFS algorithm:

    postorder_DFS(vertex u, ref int nextnum)
        u.marked = true
        for all neighbors v of u
            if not v.marked
                postorder_DFS(v, nextnum)
        u.ponum = nextnum++

    postorder_main
        for all nodes u
            u.marked = false
        int nextnum = 1
        for all nodes u
            if not u.marked
                postorder_DFS(u, nextnum)
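Here is a rough C rendering of DFS with postorder numbering, again
reusing the graph type from the earlier sketch; the names
postorder_dfs, postorder_all, and ponum are our own.

    #include <stdlib.h>

    /* uses the graph and edge_node types from the earlier sketch */

    void postorder_dfs(graph *G, int u, int *marked,
                       int *ponum, int *nextnum)
    {
        edge_node *e;
        marked[u] = 1;
        for (e = G->adj[u]; e != NULL; e = e->next)
            if (!marked[e->target])
                postorder_dfs(G, e->target, marked, ponum, nextnum);
        ponum[u] = (*nextnum)++;   /* numbered when LAST visited */
    }

    /* fills in ponum[0..N-1] for the whole DFS forest */
    void postorder_all(graph *G, int *ponum)
    {
        int *marked = calloc(G->N, sizeof(int));
        int nextnum = 1, u;
        for (u = 0; u < G->N; u++)
            if (!marked[u])
                postorder_dfs(G, u, marked, ponum, &nextnum);
        free(marked);
    }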
------------------------------------------------------------------------

Testing for Cycles

We can test for cycles during DFS by keeping two marks.  One says
whether a node has been visited.  The other says whether the node is
on the path from the root to the current node.  Alternatively, if we
already have postorder numbers, we can use them to test for cycles.

In either case, we look for an edge (u,v) in the graph such that v is
an ancestor of u in the search tree.  Such an arc represents a cycle:
follow the tree edges from v to u (which must be possible, since v is
an ancestor of u), and then take the back edge from u to v to
complete the cycle.  (In an undirected graph, we have to pay
attention only to back edges that go more than one level up -- going
from u to v and then back to u over the same edge doesn't constitute
a cycle.)

Here's the direct algorithm:

    cycle_test_DFS(vertex u, p)
        // p is u's parent; needed only for the undirected case
        u.marked = true
        u.onpath = true
        for all neighbors v of u
            if v.onpath and (graph is directed or v != p)
                announce cycle
                halt
            if not v.marked
                cycle_test_DFS(v, u)
        u.onpath = false

    cycle_test_main
        for all nodes u
            u.marked = false
            u.onpath = false
        for all nodes u
            if not u.marked
                cycle_test_DFS(u, nil)
        announce no cycle

--------

Alternatively: if there is an edge (u,v) in E, and the postorder
number of u is less than or equal to the postorder number of v, then
the graph has a cycle.

   * If you had visited u before v, then (because of the edge (u,v))
     v would have been numbered first in postorder, giving
     u.ponum > v.ponum.  So you must have visited v first.
   * If DFS(v) did not visit u, then v would finish before u started,
     giving v.ponum < u.ponum -- again a contradiction.
   * So DFS(v) does visit u, and the cycle consists of the path
     produced by DFS(v) from v to u, followed by the edge (u,v).

Here's the alternative algorithm:

    cycle_test_alternate_main
        postorder_main()
        for all nodes u
            for all neighbors v of u
                if u.ponum <= v.ponum    // = catches self-loops
                    announce cycle
                    halt
        announce no cycle
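The postorder-based test is easy to render in C, building on
postorder_all from the previous sketch (the function name has_cycle
is our own):

    #include <stdlib.h>

    /* uses the graph type and postorder_all from earlier sketches */

    int has_cycle(graph *G)
    {
        int *ponum = malloc(G->N * sizeof(int));
        int u, found = 0;
        postorder_all(G, ponum);
        for (u = 0; u < G->N && !found; u++) {
            edge_node *e;
            for (e = G->adj[u]; e != NULL; e = e->next)
                if (ponum[u] <= ponum[e->target]) {
                    found = 1;      /* = catches self-loops */
                    break;
                }
        }
        free(ponum);
        return found;
    }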
------------------------------------------------------------------------

Topological Sort

Topological sorting assigns a linear ordering to the vertices in a
directed acyclic graph (DAG), such that if (i,j) is an edge, i
appears before j in the ordering.

We can do a direct algorithm (using a stack to get the order right):

    stack S

    top_sort_DFS(vertex u)
        u.marked = true
        for all neighbors v of u
            if not v.marked
                top_sort_DFS(v)
        S.push(u)

    top_sort_main
        for all nodes u
            u.marked = false
        for all nodes u
            if not u.marked
                top_sort_DFS(u)
        while not S.empty()
            print S.pop()

Alternatively, if we use postorder numbers to order the nodes, then
the reverse of that ordering is a topological sort (probably not the
only one, but one):

    top_sort_alternate_main
        postorder_main()
        L = list of nodes, with postorder numbers
        sort(L, postorder_reverse_compare())
        print(L)

------------------------------------------------------------------------

Reachability

Given a directed graph G and a vertex v in G, the reachability
problem is to find all vertices in G that can be reached from v by
following arcs.

The answer to the reachability problem is exactly the set of nodes
explored from v using depth-first search:

    reachable(u)
        for all nodes v
            v.marked = false
        DFS(u)
        for all nodes v
            if v.marked
                print v

========================================================================

Single Source Shortest Path (SSSP) Problem

Given a directed graph G = (V,E) with non-negative costs on each
edge, and a selected source node v in V, for all w in V find the cost
of the least-cost path from v to w.  The cost of a path is simply the
sum of the costs on the edges traversed by the path.

This problem is a generalization of the more common single-pair
problem, in which we seek the least-cost path from v to one
particular node w in V.  In the worst case, the single-pair problem
is no easier to solve than the full SSSP problem.

------------------------------------------------------------------------

Dijkstra's Algorithm

Dijkstra's algorithm is a greedy algorithm for the SSSP problem.

   * A "greedy" algorithm always makes the locally optimal choice,
     under the assumption that this will lead to an optimal solution
     overall.
   * Example: in making change using the fewest number of coins,
     always start with the largest coin possible.

Abstractions used by Dijkstra's algorithm include:

   * adjacency lists, to find the neighbors of a node and the costs
     of the edges to them
   * a priority queue of the nodes for which we have not yet
     identified a cheapest path from our source node
   * a notion of "lowest currently known cost" to each such
     "unsettled" node

The Algorithm

    DijkstraSSSP(vertex u)
        vertex_set unsettled = V - {u}
        for all nodes v
            v.cost = infinity
        u.cost = 0
        for all neighbors v of u
            v.cost = weight(u,v)
        while not unsettled.empty()
            find v in unsettled s.t. v.cost is minimal
            unsettled -= {v}
            for all neighbors w of v
                // shorter path from u to w through v?
                if v.cost + weight(v,w) < w.cost
                    w.cost = v.cost + weight(v,w)
                    unsettled.adjust()    // re-order heap

------------------------------------------------------------------------

How Dijkstra's Algorithm Works

On each iteration of the main loop, we remove from unsettled the
vertex v whose known cost from the source u (v.cost) is lowest --
the cost of a path involving only settled nodes.

At that point, v.cost really is the cost of the least-cost path from
the source u to v.  If there were a lower-cost path from u to v going
through some unsettled node x, then

   * x.cost would be less than v.cost,
   * so x would have been selected before v,
   * so x would already be settled -- a contradiction.

------------------------------------------------------------------------

Analysis of Dijkstra's Algorithm

Consider the time spent in the two loops:

  1. The initialization loop has O(N) iterations, where N is the
     number of nodes.
  2. The while loop is executed O(N) times.  If we use a partially
     ordered tree (a heap) for the priority queue, the find operation
     is O(log N), for a total of O(N log N) over all iterations of
     the while loop.  The nested for loop is executed O(E) times,
     summed over all iterations of the while loop; inside, it does an
     O(log N) re-adjustment of the priority queue.

The whole algorithm, carefully implemented, is therefore O(E log N).
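For concreteness, here is an unofficial C sketch of Dijkstra's
algorithm over a weighted adjacency list (the wedge and wgraph types
are our own).  To keep it short, it finds the cheapest unsettled
vertex by linear scan rather than with a heap, so it runs in
O(N^2 + E) time instead of the O(E log N) bound derived above; for
dense graphs the simpler version is actually competitive.

    #include <stdlib.h>

    #define INFTY 1000000000       /* effectively infinite cost */

    typedef struct wedge {
        int target, weight;        /* successor and edge cost */
        struct wedge *next;
    } wedge;

    typedef struct {
        int N;
        wedge **adj;               /* adj[i] = weighted successor list */
    } wgraph;

    /* on return, cost[w] is the cost of the least-cost path from u
       to w (INFTY if w is unreachable) */
    void dijkstra(wgraph *G, int u, int *cost)
    {
        int *settled = calloc(G->N, sizeof(int));
        int i, j, v;
        wedge *e;
        for (i = 0; i < G->N; i++)
            cost[i] = INFTY;
        cost[u] = 0;
        for (i = 0; i < G->N; i++) {
            /* settle the unsettled vertex of minimal known cost */
            v = -1;
            for (j = 0; j < G->N; j++)
                if (!settled[j] && (v < 0 || cost[j] < cost[v]))
                    v = j;
            settled[v] = 1;
            if (cost[v] == INFTY)  /* everything left is unreachable */
                break;
            /* shorter path from u to w through v? */
            for (e = G->adj[v]; e != NULL; e = e->next)
                if (cost[v] + e->weight < cost[e->target])
                    cost[e->target] = cost[v] + e->weight;
        }
        free(settled);
    }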
========================================================================

All Pairs Shortest Path (APSP) Problem

This is a generalization of the SSSP problem: find the cheapest way
from here to there for *all* combinations of here and there.

One way to solve this is to run Dijkstra's algorithm N times, once
for each possible source; that's O(NE log N).

An alternative (described in the book) is to use Floyd's algorithm,
which is O(N^3).

Which is better depends on the relative magnitudes of E log N and
N^2.  For large sparse graphs, Dijkstra's algorithm is preferable.

========================================================================

Transitive Closure

Floyd's algorithm finds the cost of the least-cost path (or the path
itself) between every pair of vertices.  Often we only want to know
whether there *is* a path between any two vertices, and ignore costs.
Warshall's algorithm is a specialized (but earlier) version of
Floyd's algorithm that solves this problem, called the transitive
closure of a graph.

Given a directed graph G = (V,E), represented by an adjacency matrix
A[i,j], where A[i,j] = true if (i,j) is in E, compute the matrix P,
where P[i,j] is true if there is a path of length greater than or
equal to 1 from i to j.

------------------------------------------------------------------------

Warshall's Algorithm

    Warshall(int N, matrix A, ref matrix P)
        // initialize: there's a path if there's an edge
        for i in 0..N-1
            for j in 0..N-1
                P[i,j] = A[i,j]
        // consider intermediate nodes k, one at a time
        for k in 0..N-1
            for i in 0..N-1
                for j in 0..N-1
                    if (! P[i,j])
                        P[i,j] = P[i,k] && P[k,j]

Clearly the algorithm is O(N^3).

Note the similarity to Kleene's algorithm to turn a DFA into an RE.
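The triple loop above is easy to render as runnable C; the flat array
layout (M[i*N + j] for row i, column j) is our own choice:

    /* A and P are N*N arrays of 0/1 flags, indexed as M[i*N + j].
       On return, P[i*N + j] is 1 iff there is a path of length >= 1
       from i to j. */
    void warshall(int N, const char *A, char *P)
    {
        int i, j, k;
        for (i = 0; i < N; i++)        /* a path if there's an edge */
            for (j = 0; j < N; j++)
                P[i*N + j] = A[i*N + j];
        for (k = 0; k < N; k++)        /* allow k as an intermediate */
            for (i = 0; i < N; i++)
                for (j = 0; j < N; j++)
                    if (!P[i*N + j])
                        P[i*N + j] = P[i*N + k] && P[k*N + j];
    }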
========================================================================

Minimum Cost Spanning Tree

Let G = (V,E) be a connected graph in which each edge (u,v) in E has
an associated cost weight[u,v].

   * A graph is connected if every pair of vertices is connected by a
     path.

A spanning tree for G is a free (unrooted) tree that connects all the
vertices in G.  The cost of a spanning tree is the sum of the costs
of its edges.  We usually want to find a spanning tree of minimum
cost.

Example applications:

   * Computer networks -- vertices in the graph might represent
     computer installations; edges represent connections between
     computers.  We want to allow messages from any computer to get
     to any other, possibly with routing through an intermediate
     computer, with minimum cost in connections.
   * Trucking routes -- vertices in the graph are cities, and edges
     are courier delivery routes between cities.  We want to service
     a connected set of cities, with minimum cost.

------------------------------------------------------------------------

Properties of Free Trees

Property 1: A free tree with N >= 1 vertices has exactly N-1 edges.

Proof by contradiction: Let G = (V,E) be the smallest free tree that
doesn't satisfy property 1.

G must have more than 1 vertex, since the only 1-vertex free tree has
0 edges, which satisfies property 1.

Furthermore, there must be a vertex v with exactly one incident edge
(v,w):

   * No vertex can have 0 incident edges, since such a vertex
     wouldn't be connected to the rest of the graph, and G would not
     be a free tree.
   * If every vertex had at least two incident edges, consider a walk
     that starts at some vertex and always leaves each vertex by a
     different edge than the one used to enter it.  Such a walk must
     eventually revisit a vertex, forming a cycle, so G wouldn't be a
     free tree.

If we delete the edge (v,w) and the vertex v, we get a smaller free
tree G2, which must satisfy property 1 (otherwise G2, not G, would be
the smallest free tree violating property 1).  Since G2 has N-1
vertices, it has N-2 edges, so G has N vertices and N-1 edges --
contradicting our assumption that G violates property 1.

Property 2: Adding an edge to a free tree introduces a cycle.

Proof: According to property 1, a free tree with N vertices has N-1
edges.  If we added an edge to a free tree, we would have a connected
graph with N vertices and N edges.  If the new edge did not introduce
a cycle, we would have a connected, acyclic graph -- a free tree --
with N vertices and N edges, which violates property 1 (a
contradiction).

------------------------------------------------------------------------

Minimum Spanning Tree Property

Let G = (V,E) be a connected graph with a cost function on the edges.
Let U be a proper, nonempty subset of V.  If (u,v) is an edge of
lowest cost such that u is in U and v is in V-U, then there is a
minimum-cost spanning tree that includes (u,v).

Proof by contradiction: Assume the contrary, and consider T, a
minimum-cost spanning tree (MCST) for G.  According to property 2 of
free trees, adding (u,v) to T introduces a cycle involving (u,v).
There must be another edge (u2,v2) on that cycle such that u2 is in U
and v2 is in V-U; otherwise there would be no way for the cycle to
leave the vertices in U, enter the vertices in V-U, and return,
without using (u,v) twice.

If we delete (u2,v2), we break the cycle and get a spanning tree T2
with

    cost(T2) = cost(T) - weight[u2,v2] + weight[u,v]

Since (u,v) is a least-cost edge between vertices in U and V-U, the
cost of T2 is less than or equal to the cost of T.  So T2 is a
minimum-cost spanning tree that includes (u,v), contradicting the
assumption that no MCST includes (u,v).

Prim's algorithm to find a minimum-cost spanning tree exploits this
property.

------------------------------------------------------------------------

Prim's Algorithm

Initially, Prim's algorithm has one node in the spanning tree and no
edges.  The algorithm adds nodes to the spanning tree one at a time,
in order of the cost of the edge connecting them to the nodes already
in the tree.

Note the superficial similarity to Dijkstra's SSSP algorithm.

    PrimMCST(ref edge_set T)
        // T = set of edges in the spanning tree
        int closest[N]    // closest[v] = vertex u in U closest to v
        int lowcost[N]    // lowcost[v] = weight[v, closest[v]]
        // U is, implicitly, node 0 and the nodes connected to it
        // by edges of T
        T = empty
        for i in 1..N-1
            lowcost[i] = weight[0,i]
            closest[i] = 0
        N-1 times do
            // find the node closest to U and add it to U
            min = lowcost[1]
            k = 1
            for j in 2..N-1          // consider all nodes other than 0,
                if lowcost[j] < min  // including (needlessly) all those
                    min = lowcost[j] // already in U
                    k = j
            // k is now the node outside U closest to something in U;
            // add it to U
            T += {(closest[k], k)}
            lowcost[k] = infinity
                // k is now in U; make sure we never choose it again
            for j in 1..N-1
                if weight[k,j] < lowcost[j] && lowcost[j] < infinity
                    lowcost[j] = weight[k,j]
                    closest[j] = k

------------------------------------------------------------------------

Analysis of Prim's Algorithm

Prim's algorithm is O(N^2).

The main loop is executed N-1 times:

   * We add one vertex to U on each iteration.
   * We exit the loop when U = V.

Each iteration takes O(N) time: we find the lowest-cost edge from U
to V-U in O(N) time and, similarly, update lowcost in O(N) time.

We might be able to reduce constant overhead by using a slightly more
complicated data structure to keep track of which nodes are in V-U,
so that we don't consider them in the two inner 'for' loops.  This
would not change the asymptotic complexity of the algorithm, however,
because 1 + 2 + 3 + ... + N is still O(N^2).
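The pseudocode maps almost line for line onto C.  The sketch below
assumes (our choice) that the weights are stored in a full N x N
matrix with INFTY marking absent edges, which matches the O(N^2)
analysis above:

    #include <stdlib.h>

    #define INFTY 1000000000

    /* On return, (closest[k], k) for k in 1..N-1 are the N-1 tree
       edges.  Assumes weight is a full N x N matrix with
       weight[i][j] == INFTY if the edge (i,j) is absent. */
    void prim_mcst(int N, int **weight, int *closest)
    {
        int *lowcost = malloc(N * sizeof(int));
        int i, j, k, iter, min;
        for (i = 1; i < N; i++) {
            lowcost[i] = weight[0][i];
            closest[i] = 0;
        }
        for (iter = 0; iter < N - 1; iter++) {
            min = lowcost[1];
            k = 1;
            for (j = 2; j < N; j++)     /* find node closest to U */
                if (lowcost[j] < min) {
                    min = lowcost[j];
                    k = j;
                }
            /* edge (closest[k], k) joins the tree; k enters U */
            lowcost[k] = INFTY;         /* never choose k again */
            for (j = 1; j < N; j++)
                if (weight[k][j] < lowcost[j] && lowcost[j] < INFTY) {
                    lowcost[j] = weight[k][j];
                    closest[j] = k;
                }
        }
        free(lowcost);
    }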
------------------------------------------------------------------------

Kruskal's Algorithm

Prim's algorithm requires O(N^2) time.  We can do better, using
Kruskal's algorithm, if E << N^2.

Kruskal's algorithm constructs an MCST incrementally.  Initially,
each node is in its own MCST, consisting of that node and no edges.
At each step in the algorithm, the two MCSTs that can be connected
together with the least cost are combined, adding the lowest-cost
edge that links a vertex in one tree with a vertex in another tree.
When there is only one MCST that includes all vertices, the algorithm
terminates.

Since we must consider the edges in order of their cost, we must sort
them, which requires O(E log E) time.  All the merges together can
also be implemented in O(E log E) time.

The key is being able to tell which tree a node is in.  We do this
using what are often called UNION-FIND trees.  These are maintained
separately from the trees we're gluing together to make the MCST.
The UF tree for an initial, one-node set is trivial.  Non-trivial UF
trees have parent pointers but no child pointers; the root of a UF
tree has a null parent pointer.  When merging two UF trees, we make
the root of the shorter tree a child of the root of the taller tree.
This guarantees that the height of a tree is worst-case logarithmic
in the number of nodes it contains.

To tell whether two nodes are already connected by the partially
completed MCST, we follow UF parent pointers from each node as far as
we can, and see whether we end up at the same root.  If not, we add
the edge between the nodes to our MCST and merge their UF trees.

Kruskal's algorithm is O(E log E), which is better than Prim's
algorithm if the graph is not dense (i.e., E << N^2).
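To make the UNION-FIND machinery concrete, here is a C sketch of
find, union-by-height, and the Kruskal loop that uses them.  All the
names are our own, and the edge array is assumed to be pre-sorted by
weight (e.g., with qsort).

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { int u, v, weight; } edge;

    /* parent[x] == -1 marks a root; height[x] is the UF tree height */

    int uf_find(int *parent, int x)
    {
        while (parent[x] != -1)    /* follow parent pointers to root */
            x = parent[x];
        return x;
    }

    void uf_union(int *parent, int *height, int r1, int r2)
    {
        if (height[r1] < height[r2])       /* shorter under taller */
            parent[r1] = r2;
        else if (height[r2] < height[r1])
            parent[r2] = r1;
        else {                             /* equal heights: pick one, */
            parent[r2] = r1;               /* and that tree grows */
            height[r1]++;
        }
    }

    /* edges[0..E-1] must already be sorted by nondecreasing weight.
       Prints the N-1 edges of an MCST of a connected graph. */
    void kruskal(int N, int E, edge *edges)
    {
        int *parent = malloc(N * sizeof(int));
        int *height = calloc(N, sizeof(int));
        int i;
        for (i = 0; i < N; i++)
            parent[i] = -1;                /* each node its own tree */
        for (i = 0; i < E; i++) {
            int r1 = uf_find(parent, edges[i].u);
            int r2 = uf_find(parent, edges[i].v);
            if (r1 != r2) {                /* different trees: merge */
                printf("(%d,%d)\n", edges[i].u, edges[i].v);
                uf_union(parent, height, r1, r2);
            }
        }
        free(parent);
        free(height);
    }

With the O(E log E) sort plus E find operations at O(log N) apiece,
the whole thing stays within the O(E log E) bound claimed above.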