The Integers

* d|n means d divides n, i.e. n is a multiple of d.
* Two numbers are relatively prime if the only integers that divide both are +-1.
* The Euler phi-function phi(n) is the number of positive integers <= n that are relatively prime to n. This comes up in later applications when we want to get a handle on the number of primes.
* Easy results:
  1. if a|b and b|c then a|c
  2. if d|x and d|y then d|(ax + by) for any integers a and b
  3. if n|N then (x%N)%n = x%n

Greatest common divisors

* For every pair of positive integers m, n the following conditions on a positive integer d are equivalent:
  1. d is the greatest common divisor of m and n, written gcd(m,n)
  2. d is a common divisor of m and n, and every common divisor of m and n divides d
  3. d is the smallest positive integer of the form xm + yn where x, y are integers
* Two integers are relatively prime if their gcd is 1.

The Euclidean Algorithm

* Efficient method for finding the gcd (much more efficient than factoring).
* Also an efficient method for finding x, y such that mx + ny = gcd(m,n).
* Basically, repeated remaindering: reduce the larger number mod the smaller, and repeat.
* To find gcd(210, 119):
      210 - 1 * 119 = 91
      119 - 1 * 91  = 28
       91 - 3 * 28  = 7
       28 - 4 * 7   = 0    ==> gcd is 7
* For gcd(26, 19):
      26 - 1 * 19 = 7
      19 - 2 * 7  = 5
       7 - 1 * 5  = 2
       5 - 2 * 2  = 1
       2 - 2 * 1  = 0      ==> gcd is 1
* To work backward to find a multiplicative inverse (a code sketch of this computation appears after the discussion of the integers mod m below):
      1 = 5 - 2 * 2
        = 5 - 2 * (7 - 1 * 5)         = -2 * 7 + 3 * 5
        = -2 * 7 + 3 * (19 - 2 * 7)   =  3 * 19 - 8 * 7
        =  3 * 19 - 8 * (26 - 1 * 19) = -8 * 26 + 11 * 19
  From this we get that the multiplicative inverse of 19 mod 26 is 11.

Equivalence Relations

* A "relation" R on a set S is a subset of S x S, i.e. a set of ordered pairs (x,y) of elements of S.
* A function is a specific sort of relation, namely a set R of pairs such that for every s in S there is exactly one pair (s,t) in R. Typically we write f(s) = t.
* An "equivalence relation" is another special sort of relation, namely one satisfying the following three properties:
  1. Reflexivity: for all x in S, (x,x) is in R.
  2. Symmetry: if (x,y) is in R, then (y,x) is in R.
  3. Transitivity: if (x,y) and (y,z) are in R, then (x,z) is in R.
  Often we will write x R y, or generically x ~ y, rather than (x,y) in R.
* The canonical example of an equivalence relation is ordinary equality (of integers, sets, or most other objects we are used to).
* For any member x of a set S, we can define the "equivalence class" x\bar of x relative to an equivalence relation ~ on S as the set of all elements y of S such that x ~ y.
* The set of all equivalence classes of ~ on S is denoted S/~. These classes are mutually disjoint, and their union is all of S. Such a set of subsets is called a "partition" of S.
* Conversely, any partition of S defines an equivalence relation.

The integers mod m

* We can use the notion of an equivalence class to define the integers mod m more formally. In particular, congruence mod m is an equivalence relation on the integers.
* The integers mod m is the collection of equivalence classes with respect to the equivalence relation "congruence mod m". It is denoted Z/m, and consists of m subsets, which are sometimes called "residue classes".
* One set of representatives for Z/m is 0\bar, 1\bar, ..., (m-1)\bar (but there are others as well).
* We can use the fact that the elementary arithmetic operations (+, -, *) behave well with respect to congruence mod m to define analogous "operations" on the equivalence classes, which inherit familiar properties such as distributivity, associativity of + and *, etc.
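The following is a small, hedged Python sketch of the two computational ideas above: the extended Euclidean algorithm (the "work backward" bookkeeping done incrementally) and the modular inverse it yields. The function names xgcd and inverse_mod are mine, not from the notes.

    def xgcd(m, n):
        # Extended Euclidean algorithm: returns (g, x, y) with x*m + y*n == g == gcd(m, n).
        # Each pass performs one remaindering step and updates the x, y coefficients.
        old_r, r = m, n
        old_x, x = 1, 0
        old_y, y = 0, 1
        while r != 0:
            q = old_r // r
            old_r, r = r, old_r - q * r
            old_x, x = x, old_x - q * x
            old_y, y = y, old_y - q * y
        return old_r, old_x, old_y

    def inverse_mod(a, m):
        # Multiplicative inverse of a mod m; it exists only when gcd(a, m) == 1.
        g, x, _ = xgcd(a, m)
        if g != 1:
            raise ValueError("no inverse: gcd(a, m) != 1")
        return x % m

    print(xgcd(210, 119))       # (7, 4, -7):  4*210 - 7*119 = 7, so gcd is 7
    print(inverse_mod(19, 26))  # 11, matching the back-substitution worked above
    print((19 * 11) % 26)       # 1

The final reduction x % m is itself an instance of working in Z/m: any representative of the class of x will do.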
This leads straight back to the observation we made before that we can perform arithmetic mod m anywhere in a computation without "messing things up". In particular, we have
  1. (x + y) % m = ((x % m) + (y % m)) % m
  2. (x * y) % m = ((x % m) * (y % m)) % m
* These relations can be used in some interesting ways:
  1. We can find the ones-place digit of the decimal expansion of 3^999.
     Write 3^999 = 3^3 * (3^4)^249; then working mod 10,
     3^999 % 10 = (27 % 10) * ((81 % 10)^249 % 10) % 10 = 7 * 1^249 % 10 = 7
  2. We can prove that x^2 - y^2 = 2002 has no integer solutions.
     Note that mod 4, 0^2 = 0, 1^2 = 1, 2^2 = 0, 3^2 = 1; hence x^2 - y^2 must be 0, 1, or -1 = 3 mod 4.
     But 2002 = 2 mod 4, so it cannot be such a difference.

Primitive roots and discrete logs

* For a modulus m, an integer g such that for every x relatively prime to m there is an integer l for which g^l = x mod m is called a "primitive root modulo m".
* Any l with this property is said to be a discrete logarithm of x base g modulo m.
* In other words, we can generate EVERY x (relatively prime to m) by some power of a primitive root.
* For example, 2 is a primitive root mod 11 (see the sketch below):
  2^1 = 2; 2^2 = 4; 2^3 = 8; 2^4 = 5; 2^5 = 10; 2^6 = 9; 2^7 = 7; 2^8 = 3; 2^9 = 6; 2^10 = 1; 2^11 = 2.
* It will be shown later that the only integers n that have primitive roots are those satisfying one of the following:
  1. n = p^e for prime p > 2 and e >= 1
  2. n = 2p^e for prime p > 2 and e >= 1
  3. n = 2 or 4
* So all primes have primitive roots.
* It can also be shown that for a prime p and any g not divisible by p, g^(p-1) = 1 mod p (Fermat's little theorem).
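A quick, hedged Python check of the examples above: the ones-place digit of 3^999 by modular exponentiation, a brute-force verification that 2 is a primitive root mod 11, and a toy discrete-log search. The helper names (is_primitive_root, discrete_log) are mine, and the brute-force search is only sensible for tiny moduli.

    from math import gcd

    def is_primitive_root(g, m):
        # g is a primitive root mod m if its powers hit every residue class
        # that is relatively prime to m.
        targets = {x for x in range(1, m) if gcd(x, m) == 1}
        powers = {pow(g, l, m) for l in range(1, m)}
        return powers == targets

    def discrete_log(x, g, m):
        # Brute force: smallest l >= 1 with g^l = x (mod m).
        for l in range(1, m):
            if pow(g, l, m) == x:
                return l
        return None

    print(pow(3, 999, 10))           # 7 -- the ones-place digit of 3^999
    print(is_primitive_root(2, 11))  # True
    print(discrete_log(9, 2, 11))    # 6, since 2^6 = 64 = 9 mod 11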
---------------------------------------------------------------

Notions of Complexity

* Big oh: f is O(g) if there are C and x_0 such that f(x) <= C*g(x) for all x >= x_0. In other words, C*g(x) eventually bounds f(x).
* Note that this bound does not have to be met. To indicate a bound that is met, we write f ~ g (or f is big theta of g), which means that f is O(g) and g is O(f).
* Little oh: f is o(g) if lim (x -> inf) f(x)/g(x) = 0. In other words, g(x) eventually dominates f(x).

Such notations are useful in describing the number of operations, amount of memory required, etc. for algorithms, typically as a function of the length of an input string (though occasionally as a function of the magnitude of an input value). Since there is generally an exponential/logarithmic relation between the two, it is important to be clear on which we are using.

Complexity classes:

* P: Informally, the set of all problems with algorithms that run in time O(p), where p is some polynomial function of the input size. More specifically, the set of all decision problems that are decidable by a deterministic Turing machine running in time polynomial in the size of the input.
  Examples: multiplication of binary or decimal integers; sorting a set of integers; graph shortest path.
  Note that the polynomial bound must hold in the worst case. Problems with polynomial time algorithms are generally considered "tractable" or "easy". Oddly enough, most practical problems for which polynomial algorithms are known have relatively low order, often n^1 to n^3. This tends to reinforce the "easy" implication - note that an n^1000 algorithm could hardly be considered easy by any practical standard.
* NP: Informally, the set of all problems for which the correctness of a proposed solution can be checked in time polynomial in the size of the problem instance. More specifically, the set of all decision problems that are decidable by a non-deterministic Turing machine in a number of steps that is bounded by a polynomial function of the input size. (A problem is decidable by an NDTM if there is at least one accepting path.)
  Examples: Boolean formula satisfiability (SAT); Hamiltonian circuit; bin packing or "knapsack" problem (exact version).
  It is strongly suspected, but NOT proven, that NP strictly contains P. The best known algorithms for a large group of problems in NP (including those mentioned above) have exponential worst-case behavior. Interestingly, some of these problems do seem to have efficient AVERAGE case algorithms.
* NP complete: Informally, the "hard" problems in NP. More formally, a problem in NP to which any other problem in NP can be reduced in polynomial time. Note that the existence of such problems is NOT obvious, and the proof that SAT is such a problem is, in some sense, a stunningly insightful result.
* NP hard: A problem (not necessarily in NP) to which any other problem in NP can be reduced in polynomial time.
* Note that several "classic" examples of NP complete problems (e.g. traveling salesman in its shortest-tour form, maximal clique) are not, in their popular optimization forms, known to be in NP.
* Co-NP: The class of all problems whose complement is in NP, for example unsatisfiability. It is suspected, but not proven, that Co-NP is different from NP. (Naively, it seems harder to prove that a formula has no satisfying assignment than to prove that it has one.)
* E: The class of all problems with algorithms that run in time bounded by an exponential function of the input size. Two forms: LEXP, based on 2^(kn), and PEXP, based on 2^(n^k). It is known that E properly contains P (trivially, some outputs are just too long).
* Subexponential algorithms: algorithms whose time requirements exceed any fixed polynomial bound but are dominated by anything in LEXP. Garrett defines the classes L(a,b) as problems with runtime that is O(exp((b + o(1)) * n^a * (ln n)^(1-a))), with b >= 0 and 0 <= a <= 1. Note that L(0,b) is the polynomial class n^b, and L(1,b) is the exponential class 2^(bn) (up to a constant in the exponent). The union of the L(a,b) for 0 < a < 1 fills the subexponential range between the two.
* Probabilistic classes: BPP >= RP >= ZPP >= P (BPP = polynomial time with bounded two-sided error, RP = one-sided error, ZPP = zero error in expected polynomial time). Curiously, it is not known whether NP contains BPP, though it does contain RP.

Kolmogorov Complexity

* A string (or set) is characterized by the length of the shortest program (in some language) that produces (or decides) it.
* For finite sets, we are interested in objects that have a description shorter than the object itself.
* For example, the average description length of "random" strings of 1,000,000 bits can't be less than 1,000,000 bits; however, some strings (e.g. 01010101...) have a short description ("even-numbered positions are 1").
* In particular, the average description length of a subset of a set of N elements is at least order N (the average length of a list of all the elements). However, some are shorter (e.g. the set of all odd integers in the range 0-1,000,000).
* Relationship to information theory: a set with low entropy has low Kolmogorov complexity, but not vice versa.
* The Kolmogorov complexity of any set in two different Turing-complete languages differs by at most a constant (fixed for the pair of languages).
* The Kolmogorov complexity is not computable for almost all sets.

Linear Complexity

* A "linear feedback shift register" (LFSR) produces its next character as a linear combination of the previous n characters modulo some base; usually the base is binary. Clearly such a machine needs to be seeded with an initial string of n characters. The "linear complexity" of a finite sequence is the smallest n for which some LFSR produces the string.
* For example, the seed 10, acted on by the weight vector 10, produces 1010101010....
* There is an algorithm (Berlekamp-Massey) for finding the linear complexity of a sequence.
* "Most" sequences have linear complexity proportional to their own length.
* All LFSRs produce eventually repeating sequences.
* There are LFSRs of length n that produce sequences of period 2^n - 1 before repeating.
  For example, the weight vector 1001 on seed 0001 produces 000111101011001 0001111... (period 15 = 2^4 - 1).
* Sequences with low linear complexity obviously have low Kolmogorov complexity, but not vice versa.
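Here is a minimal Python sketch of a binary LFSR. The pairing convention (weights[0] applied to the oldest bit in the current window) is my assumption, since the notes do not pin one down, but it reproduces both examples above.

    def lfsr(seed, weights, length):
        # Binary LFSR: each new bit is the mod-2 weighted sum of the previous
        # len(weights) bits, with weights[0] applied to the oldest bit in the window.
        bits = list(seed)
        n = len(weights)
        while len(bits) < length:
            window = bits[-n:]  # last n bits, oldest first
            new = sum(w * b for w, b in zip(weights, window)) % 2
            bits.append(new)
        return "".join(str(b) for b in bits)

    print(lfsr([1, 0], [1, 0], 10))              # 1010101010
    print(lfsr([0, 0, 0, 1], [1, 0, 0, 1], 19))  # 0001111010110010001 -- period 15, then repeats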
Entropy

* Informally, the amount of "information" in a string, expressed in "bits".
* More formally, if we have a random process that produces several symbols, each with probability P_n, then the entropy of the process is - sum P_n * log2(P_n) bits per character.
* Note that the entropy for equiprobable binary values is 1 (bit per character), as would be expected, but the entropy decreases if the probabilities are biased. For P0 = 7/8, P1 = 1/8, the entropy is 7/8 * log2(8/7) + 1/8 * log2(8) = 7/8 * 0.193 + 1/8 * 3 = 0.169 + 0.375 = 0.544 bits per character (see the sketch at the end of these notes).
* It can be shown that this is a lower bound on the number of bits per character needed to transmit such a sequence using any encoding. The bound can be approached using "Huffman" (variable-length) coding on groups of characters.
* By computing the entropy of strings using larger and larger blocks as "macro characters", we can get an idea of the overall order in the sequence.
* Random sequences of letters have an entropy of about 4.7 (= log2 26) bits per letter. English, in 8-letter chunks, has an entropy of about 2.3 bits per letter, dropping to 1.0 to 1.5 bits per letter for larger chunks.
* Thus English is redundant at a rate of around 3.5 bits per character. This redundancy is what is exploited by a cryptanalyst.
* A somewhat startling, if theoretical, way of looking at this: a ciphertext string that represents a plaintext string encrypted so that each character of plaintext becomes a character of ciphertext contains, in some sense, 3.5 bits of information per character ABOUT THE KEY.
* This suggests that compressing text before encryption is a good idea.
* We can produce an estimate of how much ciphertext would be needed so that there is likely only one key that would produce a sensible decryption of the message. The "unicity distance" is the amount of ciphertext such that the entropy of the plaintext plus the entropy of the key equals the number of ciphertext bits used. Generally, this works out to the entropy of the cryptosystem (the key) divided by the redundancy of the language.
* As redundancy approaches 0, even a trivial cipher may be unbreakable with a ciphertext-only attack.
* It seems information theory might be a useful tool for attacking cipher systems in general. Most work on this still seems to be classified, however, including Shannon's papers from the 1940s...

Finally, in many cryptographic protocols we care more about average-case complexity (e.g. to crack a cipher), or even best-case, rather than worst-case complexity, as is traditional in complexity analysis.
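To make the entropy numbers above concrete, here is a short, hedged Python sketch; the function name is mine, and the last line is my own illustration of the unicity-distance formula applied to a simple substitution cipher (key entropy log2(26!) divided by the 3.5 bits per character of redundancy quoted above).

    from math import log2, factorial

    def entropy(probs):
        # Shannon entropy in bits per character: -sum P_n * log2(P_n), skipping zero terms.
        return -sum(p * log2(p) for p in probs if p > 0)

    print(entropy([0.5, 0.5]))   # 1.0 -- unbiased binary source
    print(entropy([7/8, 1/8]))   # ~0.544 -- the biased example worked above
    print(log2(26))              # ~4.70 -- entropy of random letters

    # Illustrative unicity distance for a simple substitution cipher on English:
    # key entropy log2(26!) ~ 88.4 bits, redundancy ~ 3.5 bits/char  ==>  ~25 characters.
    print(log2(factorial(26)) / 3.5)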