This file contains bibliographic citations (with abstracts) for selected
papers produced at the University of Rochester. Most citations end
with a link to a PDF or (formerly) compressed postscript file.
These files are also available via anonymous ftp from
ftp.cs.rochester.edu (user anonymous, password your_name), in the
directory pub/.
Copyright on many of these papers may be owned by organizations other
than the University of Rochester, as indicated in the citations below.
For more information or for help obtaining technical
reports not available online, please contact
tr@cs.rochester.edu.
Keywords: computational complexity; frequency of correctness; heuristic algorithms; nondeterministic computation; randomized computation; SAT solvers.
Heuristic approaches often do so well that they seem to pretty much always give the right answer. How close can heuristic algorithms get to always giving the right answer, without inducing seismic complexity-theoretic consequences? This article first discusses how a series of results by Berman, Buhrman, Hartmanis, Homer, Longpr\'{e}, Ogiwara, Sch\"{o}ning, and Watanabe, from the early 1970s through the early 1990s, explicitly or implicitly limited how well heuristic algorithms can do on NP-hard problems. In particular, many desirable levels of heuristic success cannot be obtained unless severe, highly unlikely complexity class collapses occur. Second, we survey work initiated by Goldreich and Wigderson, who showed how under plausible assumptions deterministic heuristics for randomized computation can achieve a very high frequency of correctness. Finally, we consider formal ways in which theory can help explain the effectiveness of heuristics that solve NP-hard problems in practice.
Keywords: computational social choice; multiagent systems; elections; fixed-parameter tractability.
Schulze and ranked-pairs elections have received attention recently, with the former having quickly become the most widely used Condorcet method. For many cases these systems have been proven resistant to bribery, control, and manipulation, with ranked pairs being particularly praised for being NP-hard for all three of those. Nonetheless, the present paper shows that with respect to the number of candidates, both Schulze and ranked-pairs elections are fixed-parameter tractable to bribe, control, and manipulate: we obtain uniform, polynomial-time algorithms whose degree does not depend on the number of candidates.
Keywords: computational social choice; Schulze voting; elections; manipulation; control.
Schulze voting is a recently introduced voting system with a high level of real-world use. It is a Condorcet voting system that determines the winners of an election using information about paths in a graph representation of the election. We build on what is known about the control and manipulation problems for Schulze voting.
Keywords: computational social choice; electoral control.
Previous work on voter control, which refers to situations where a chair seeks to change the outcome of an election by deleting, adding, or partitioning voters, takes for granted that the chair knows all the voters' preferences and that all votes are cast simultaneously. However, elections are often held sequentially and the chair thus knows only the previously cast votes and not the future ones, yet needs to decide instantaneously which control action to take. We introduce a framework that models online voter control in sequential elections. We show that the related problems can be much harder than in the standard (non-online) case: For certain election systems, even with efficient winner problems, online control by deleting, adding, or partitioning voters is PSPACE-complete, even if there are only two candidates. In addition, we obtain completeness for coNP in the deleting/adding cases with a bounded deletion/addition limit, and for NP in the partition cases with only one candidate. Finally, we show that for plurality, online control by deleting or adding voters is in P, and for partitioning voters is coNP-hard.
Keywords: computational social choice; electoral control.
All previous work on ``candidate-control'' manipulation of elections has been in the model of full-information, simultaneous voting. This is a problem, since in quite a few real-world settings---from TV singing/dancing talent shows to university faculty-hiring processes---candidates are introduced, and appraised by the voters, in sequence. We provide a natural model for sequential candidate evaluation, a framework for evaluating the computational complexity of controlling the outcome within that framework, and some initial results on the range such complexity can take on. We hope our work will lead to further examination of temporally involved candidate control.
Keywords: computational social choice; electoral control.
Most work on manipulation assumes that all preferences are known to the manipulators. However, in many settings elections are open and sequential, and manipulators may know the already cast votes but may not know the future votes. We introduce a framework, in which manipulators can see the past votes but not the future ones, to model online coalitional manipulation of sequential elections, and we show that in this setting manipulation can be extremely complex even for election systems with simple winner problems. Yet we also show that for some of the most important election systems such manipulation is simple in certain settings. This suggests that when using sequential voting, one should pay great attention to the details of the setting in choosing one's voting rule. Among the highlights of our classifications are: We show that, depending on the size of the manipulative coalition, the online manipulation problem can be complete for each level of the polynomial hierarchy or even for PSPACE. We obtain the most dramatic contrast to date between the nonunique-winner and unique-winner models: Online weighted manipulation for plurality is in P in the nonunique-winner model, yet is coNP-hard (constructive case) and NP-hard (destructive case) in the unique-winner model. And we obtain what to the best of our knowledge are the first P^NP[1]-completeness and P^NP-completeness results in the field of computational social choice, in particular proving such completeness for, respectively, the complexity of 3-candidate and 4-candidate (and unlimited-candidate) online weighted coalition manipulation of veto elections.
Keywords: computational social choice; elections; voting; control; bribery; manipulation; computational complexity.
Most theoretical definitions about the complexity of manipulating elections focus on the decision problem of recognizing which instances can be successfully manipulated, rather than the search problem of finding the successful manipulative actions. Since the latter is a far more natural goal for manipulators, that definitional focus may be misguided if these two complexities can differ. Our main result is that they probably do differ: If integer factoring is hard, then for election manipulation, election bribery, and some types of election control, there are election systems for which recognizing which instances can be successfully manipulated is in polynomial time but producing the successful manipulations cannot be done in polynomial time.
Keywords: computational complexity; P versus NP; promise problems; uniform time bounds.
This note is a commentary on, and critique of, Andre Luiz Barbosa's paper entitled "P != NP Proof." Despite its provocative title, what the paper is seeking to do is not to prove P \neq NP in the standard sense in which that notation is used in the literature. Rather, Barbosa is (and is aware that he is) arguing that a different meaning should be associated with the notation P \neq NP, and he claims to prove the truth of the statement P \neq NP in his quite different sense of that statement. However, we note that (1) the paper fails even on its own terms, as due to a uniformity problem, the paper's proof does not establish, even in its unusual sense of the notation, that P \neq NP; and (2) what the paper means by the claim P \neq NP in fact implies that P \neq NP holds even under the standard meaning that that notation has in the literature (and so it is exceedingly unlikely that Barbosa's proof can be fixed any time soon).
Keywords: computational social choice; elections; control; bribery; manipulation; nearly single-peaked preferences.
Many electoral bribery, control, and manipulation problems (which we will refer to in general as "manipulative actions" problems) are NP-hard in the general case. It has recently been noted that many of these problems fall into polynomial time if the electorate is single-peaked (i.e., is polarized along some axis/issue). However, real-world electorates are not truly single-peaked. There are usually some mavericks, and so real-world electorates tend to merely be nearly single-peaked. This paper studies the complexity of manipulative-action algorithms for elections over nearly single-peaked electorates, for various notions of nearness and various election systems. We provide instances where even one maverick jumps the manipulative-action complexity up to NP-hardness, but we also provide many instances where a reasonable number of mavericks can be tolerated without increasing the manipulative-action complexity.
Keywords: ACC^k; circuit complexity; computational complexity; uniformity.
We note that for each k \in {0,1,2, ...} the following holds: NE has (nonuniform) ACC^k circuits if and only if NE has P^{NE}-uniform ACC^k circuits. And we mention how to get analogous results for other circuit and complexity classes.
Keywords: computational social choice; voting; elections; control; computational complexity.
In 1992, Bartholdi, Tovey, and Trick opened the study of control attacks on elections---attempts to improve the election outcome by such actions as adding/deleting candidates or voters. That work has led to many results on how algorithms can be used to find attacks on elections and how complexity-theoretic hardness results can be used as shields against attacks. However, all the work in this line has assumed that the attacker employs just a single type of attack. In this paper, we model and study the case in which the attacker launches a multipronged (i.e., multimode) attack. We do so to more realistically capture the richness of real-life settings. For example, an attacker might simultaneously try to suppress some voters, attract new voters into the election, and introduce a spoiler candidate. Our model provides a unified framework for such varied attacks, and by constructing polynomial-time multiprong attack algorithms we prove that for various election systems even such concerted, flexible attacks can be perfectly planned in deterministic polynomial time.
Keywords: computational social choice; range voting; elections; voting; control.
We study the behavior of Range Voting and Normalized Range Voting with respect to electoral control. Electoral control encompasses attempts from an election chair to alter the structure of an election in order to change the outcome. We show that a voting system resists a case of control by proving that performing that case of control is computationally infeasible. Range Voting is a natural extension of approval voting, and Normalized Range Voting is a simple variant which alters each vote to maximize the potential impact of each voter. We show that Normalized Range Voting has one of the largest numbers of control resistances among natural voting systems.
Keywords: computational social choice; elections; algorithms; bribery; control; single-peaked preferences; manipulation; dichotomy theorems; computational complexity; voting.
For many election systems, bribery (and related) attacks have been shown to be NP-hard using constructions on combinatorially rich structures such as partitions and covers. It is important to learn how robust these hardness protection results are, in order to determine whether they can be relied on in practice. This paper shows that for voters who follow the most central political-science model of electorates---single-peaked preferences---those protections vanish. By using single-peaked preferences to simplify combinatorial covering challenges, we for the first time show that NP-hard bribery problems---including those for Kemeny and Llull elections---fall to polynomial time for single-peaked electorates. By using single-peaked preferences to simplify combinatorial partition challenges, we for the first time show that NP-hard partition-of-voters problems fall to polynomial time for single-peaked electorates. We show that for single-peaked electorates, the winner problems for Dodgson and Kemeny elections, though \Theta_2^p-complete in the general case, fall to polynomial time. And we completely classify the complexity of weighted coalition manipulation for scoring protocols in single-peaked electorates.
Keywords: elections; control; manipulations; single-peaked preferences.
Much work has been devoted, during the past twenty years, to using complexity to protect elections from manipulation and control. Many results have been obtained showing NP-hardness shields, and recently there has been much focus on whether such worst-case hardness protections can be bypassed by frequently correct heuristics or by approximations. This paper takes a very different approach: We argue that when electorates follow the canonical political science model of societal preferences the complexity shield never existed in the first place. In particular, we show that for electorates having single-peaked preferences, many existing NP-hardness results on manipulation and control evaporate.
Keywords: computational social choice; approval voting; computational complexity; elections; voting; control.
This paper is concerned with the computational aspects of approval voting and some of its variants, with a particular focus on the complexity of problems that model various ways of tampering with the outcome of an election: manipulation, control, and bribery. For example, in control settings, the election's chair seeks to alter the outcome of an election via control actions such as adding/deleting/partitioning either candidates or voters. In particular, sincere-strategy preference-based approval voting (SP-AV), a variant of approval voting proposed by Brams and Sanver [BS06], is computationally resistant to 19 of the 22 common types of control. Thus, among those natural voting systems for which winner determination is easy, SP-AV is the system currently known to display the broadest resistance to control. We also present the known complexity results for various types of bribery. Finally, we study local search heuristics for minimax approval voting, a variant of approval voting proposed by Brams, Kilgour, and Sanver [BKS04] (see also [BKS07a,BKS07b]) for the purpose of electing a committee of fixed size.
Keywords: elections; manipulations; bribery; classification; NP-completeness; computational social choice.
Voting and elections are at the core of democratic societies. People vote to elect leaders, decide policies, and organize their lives, but elections also have natural applications in computer science. For example, agents in multiagent systems often need to work together to complete some task, but each agent may have its own set of beliefs, preferences, and goals. Voting provides agents with a natural way to reach decisions that take all their preferences into account. With elections playing such an important role both in real-life political settings and in computer science, it is natural to ask about their resistance to misuse.
Two particular types of election misuse are manipulation and bribery. In manipulation, a group of voters chooses to misrepresent its preferences in order to obtain a more desirable outcome, and in bribery an outside agent, the briber, asks (possibly at a cost) a group of voters to change its votes, to obtain some outcome desirable for the briber. Classical results from political science show that, for any reasonable election system, there are scenarios where at least some voters have an incentive to attempt manipulation.
In this thesis we seek to protect elections from manipulators and bribers by making their computational task of finding good manipulations/bribes prohibitively expensive. When this is not possible, we seek to better understand (and even improve) the algorithmic attacks that manipulators and bribers can employ. In doing so, we develop new models of manipulation and bribery, and provide new approaches to studying the computational complexity of bribery and manipulation in elections.
Keywords: frequently self-knowingly correct algorithms; greedy algorithms; junta distributions.
We prove that every distributional problem solvable in polynomial time on the average with respect to the uniform distribution has a frequently self-knowingly correct polynomial-time algorithm. We also study some features of probability weight of correctness with respect to generalizations of Procaccia and Rosenschein's junta distributions [PR07b].
Keywords: computational social choice; bribery; Copeland elections; control; voting.
Control and bribery are settings in which an external agent seeks to influence the outcome of an election. Constructive control of elections refers to attempts by an agent to, via such actions as addition/deletion/partition of candidates or voters, ensure that a given candidate wins [BTT92]. Destructive control refers to attempts by an agent to, via the same actions, preclude a given candidate's victory [HHR07a]. An election system in which an agent can sometimes affect the result and it can be determined in polynomial time on which inputs the agent can succeed is said to be vulnerable to the given type of control. An election system in which an agent can sometimes affect the result, yet in which it is NP-hard to recognize the inputs on which the agent can succeed, is said to be resistant to the given type of control.
Aside from election systems with an NP-hard winner problem, the only systems previously known to be resistant to all the standard control types were highly artificial election systems created by hybridization [HHR07b]. This paper studies a parameterized version of Copeland voting, denoted by Copeland^\alpha, where the parameter \alpha is a rational number between 0 and 1 that specifies how ties are valued in the pairwise comparisons of candidates. In every previously studied constructive or destructive control scenario, we determine which of resistance or vulnerability holds for Copeland^\alpha for each rational \alpha, 0 \leq \alpha \leq 1. In particular, we prove that Copeland^{0.5}, the system commonly referred to as ``Copeland voting,'' provides full resistance to constructive control, and we prove the same for Copeland^\alpha, for all rational \alpha, 0 < \alpha < 1. Among systems with a polynomial-time winner problem, Copeland voting is the first natural election system proven to have full resistance to constructive control. In addition, we prove that both Copeland^0 and Copeland^1 (interestingly, Copeland^1 is an election system developed by the thirteenth-century mystic Ramon Llull) are resistant to all standard types of constructive control other than one variant of addition of candidates. Moreover, we show that for each rational \alpha, 0 \leq \alpha \leq 1, Copeland^\alpha voting is fully resistant to bribery attacks, and we establish fixed-parameter tractability of bounded-case control for Copeland^\alpha.
We also study Copeland^\alpha elections under more flexible models such as microbribery and extended control, we integrate the potential irrationality of voter preferences into many of our results, and we prove our results in both the unique-winner model and the nonunique-winner model. Our vulnerability results for microbribery are proven via a novel technique involving min-cost network flow.
Keywords: power index; computational complexity; #P; completeness.
We study the complexity of the following problem: Given two weighted voting games G' and G'' that each contain a player p, in which of these games is p's power index value higher? We study this problem with respect to both the Shapley-Shubik power index [SS54] and the Banzhaf power index [Ban65,DS79]. Our main result is that for both of these power indices the problem is complete for probabilistic polynomial time (i.e., is PP-complete). We apply our results to partially resolve some recently proposed problems regarding the complexity of weighted voting games. We also study the complexity of the raw Shapley-Shubik power index. Deng and Papadimitriou [DP94] showed that the raw Shapley-Shubik power index is #P-metric-complete. We strengthen this by showing that the raw Shapley-Shubik power index is many-one complete for #P. And our strengthening cannot possibly be further improved to parsimonious completeness, since we observe that, in contrast with the raw Banzhaf power index, the raw Shapley-Shubik power index is not #P-parsimonious-complete.
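For reference, the raw Banzhaf power index discussed above counts the coalitions for which a player is critical. The following is a minimal brute-force sketch in Python (the representation and function name are ours, not the paper's); its enumeration is exponential in the number of players, which is consistent with the hardness results above:

    from itertools import combinations

    def raw_banzhaf(weights, quota, p):
        # Count coalitions S (not containing p) that lose on their own
        # but win once p joins, i.e., coalitions for which p is critical.
        others = [i for i in range(len(weights)) if i != p]
        critical = 0
        for r in range(len(others) + 1):
            for coalition in combinations(others, r):
                s = sum(weights[i] for i in coalition)
                if s < quota <= s + weights[p]:
                    critical += 1
        return critical

    # Example: the weighted voting game [quota 5; weights 3, 2, 1, 1].
    print(raw_banzhaf([3, 2, 1, 1], 5, 0))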
Keywords: data structures; approximate counting; streaming algorithms.
We present the Bitwise Bloom Filter, a data structure for maintaining counts for a large number of items. The bitwise filter is an extension of the Bloom filter, a space-efficient data structure that stores a large set by discarding the identities of the items it holds while still being able to determine, with high probability, whether a given item is in the set. We show how this idea can be extended to maintaining counts of items by maintaining a separate Bloom filter for every position in the bit representations of all the counts. We give both a theoretical analysis of the accuracy of the Bitwise filter and validation via experiments on real network data.
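The read path of the structure described above is straightforward to sketch. Below is a minimal, illustrative Python version (our own, with hypothetical parameter choices): one standard Bloom filter per bit position, with bit j of an item's count taken to be 1 exactly when filter j reports membership. Updates, and the effect of false positives on reconstructed counts, are where the paper's analysis does the real work.

    import hashlib

    class BloomFilter:
        def __init__(self, m, k):
            self.m, self.k = m, k          # m bits, k hash functions
            self.bits = [False] * m

        def _positions(self, item):
            for i in range(self.k):        # derive k hashes by salting
                h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
                yield int(h, 16) % self.m

        def add(self, item):
            for pos in self._positions(item):
                self.bits[pos] = True

        def contains(self, item):
            return all(self.bits[pos] for pos in self._positions(item))

    def read_count(filters, item):
        # Reconstruct a count bit by bit: filters[j] holds the items
        # whose count has bit j set.
        count = 0
        for j, f in enumerate(filters):
            if f.contains(item):
                count |= 1 << j
        return count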
Keywords: Copeland; election manipulations; tie resolution; computational complexity.
We study the complexity of manipulation for a family of election systems derived from Copeland voting via introducing a parameter alpha that describes how ties in head-to-head contests are valued. We show that the problem of manipulation for unweighted Copeland^alpha elections is NP-complete even if the size of the manipulating coalition is limited to two. Our result holds for all rational values of alpha such that 0 < alpha < 1 except for alpha = 1/2. We contrast our result with the fact that microbribery for Copeland^alpha is currently known to be in P exactly for alpha in {0,1/2,1} (complexity results for other values of alpha are unknown). Microbribery is a problem very closely related to manipulation. Since it is well known that manipulation via a single voter is easy for Copeland, our result is the first one where an election system originally known to be vulnerable to manipulation via a single voter is shown to be resistant to manipulation via a coalition of a constant number of voters. We also study the complexity of manipulation for Copeland^alpha for the case of a constant number of candidates. We show that here the exact complexity of manipulation often depends closely on the winner model as well as on the parameter alpha: Depending on whether we try to make our favorite candidate a winner or a unique winner and whether alpha is 0, 1, or strictly between these values, the problem of weighted manipulation for Copeland^alpha with three candidates is either in P or is NP-complete. Our results show that ways in which ties are treated in an election system, here Copeland voting, can be crucial to establishing complexity results for this system.
Keywords: computational social choice; bribery; Copeland elections; control; voting.
Control and bribery are settings in which an external agent seeks to influence the outcome of an election. Faliszewski et al. [FHHR07] proved that Llull voting (which is here denoted by Copeland^1) and a variant (here denoted by Copeland^0) of Copeland voting are computationally resistant to many, yet not all, types of constructive control and that they also provide broad resistance to bribery. We study a parameterized version of Copeland voting, denoted by Copeland^alpha, where the parameter alpha is a rational number between 0 and 1 that specifies how ties are valued in the pairwise comparisons of candidates in Copeland elections. We establish resistance or vulnerability results, in every previously studied control scenario, for Copeland^alpha, for each rational alpha, 0 < alpha < 1. In particular, we prove that Copeland^0.5, the system commonly referred to as ``Copeland voting,'' provides full resistance to constructive control. Among the systems with a polynomial-time winner problem, this is the first natural election system proven to have full resistance to constructive control. Results on bribery and fixed-parameter tractability of bounded-case control proven for Copeland^0 and Copeland^1 in [FHHR07] are extended to Copeland^alpha for each rational alpha, 0 < alpha < 1; we also give results in more flexible models such as microbribery and extended control.
Keywords: computational social choice; bribery; plurality voting; utility-based voting.
We study the concept of bribery in the situation where voters are willing to change their votes as we ask them, but where their prices depend on the nature of the change we request. Our model is an extension of that of Faliszewski et al. [FHH06], where each voter has a single price for any change we may ask for. We show polynomial-time algorithms for our version of bribery for a broad range of voting protocols, including plurality, veto, approval, and utility-based voting. In addition to our polynomial-time algorithms we provide NP-completeness results for a couple of our nonuniform bribery problems for weighted voters, and a couple of approximation algorithms for NP-complete bribery problems defined in [FHH06] (in particular, an FPTAS for the plurality-weighted-$bribery problem).
Keywords: complexity theory; cryptography; interval functions; small-world networks.
Redundancy is a basic property of many computational settings. This thesis concerns techniques for eliminating redundancy in some cases, and exploiting it in others.
We study one-way functions, i.e., functions that are easy to compute but hard to invert. Such functions were previously studied as cryptographic primitives. Since it remains an open question whether one-way functions exist, we study the question of their existence in relation to a variety of complexity-theoretic hypotheses.
Starting with one-way functions in which redundancy in the preimage is absolutely minimal, i.e., one-to-one functions, we provide the first characterization of the existence of one-way permutations by a complexity class separation hypothesis, namely $P \neq UP \cap coUP$.
Next, we study a type of one-way function that provably can never be one-to-one. Strong, total, associative, one-way functions are two-argument one-way functions that are hard to invert even if one of their arguments is known. Such special one-way functions were originally used to construct secret-key agreement and digital signature protocols. We study techniques for creating such functions whose amount of preimage redundancy (as a function of the length of the corresponding image element) is minimized. We show that, if $P \neq UP$, then such special one-way functions exist and that we can go from total, associative, polynomial-time computable functions to strong, total, associative, one-way functions at no cost in increased preimage redundancy.
Continuing our study of eliminating redundancy in functions, we examine the complexity of counting the sizes of intervals over orders having certain natural computational and redundancy properties. We show that having redundancy in the adjacency relations of the order adds almost nothing to the computational complexity of computing such intervals.
Finally, we look at a problem in routing on ad-hoc networks whose solution exploits redundancy. We provide a theoretical framework for analyzing the behavior of a variety of tableless routing schemes. We show that such schemes work well when there is redundancy between the network distance and the objective functions used to make routing decisions.
Keywords: approximation; Dodgson elections; election systems; frequently self-knowingly correct algorithms; greedy algorithms; optimal lobbying; preference aggregation.
We investigate two hard problems related to voting: the optimal weighted lobbying problem and the winner problem for Dodgson elections. Regarding the former, Christian et al. [CFRS06] showed that optimal lobbying is intractable in the sense of parameterized complexity. We provide an efficient greedy algorithm that achieves a logarithmic approximation ratio for this problem and even for a more general variant---optimal weighted lobbying. We prove that essentially no better approximation ratio than ours can be proven for this greedy algorithm.
The problem of determining Dodgson winners is known to be complete for parallel access to NP [HHR97]. Homan and Hemaspaandra [HH06] proposed an efficient greedy heuristic for finding Dodgson winners with a guaranteed frequency of success, and their heuristic is a ``frequently self-knowingly correct algorithm.'' We prove that every distributional problem solvable in polynomial time on the average with respect to the uniform distribution has a frequently self-knowingly correct polynomial-time algorithm. Furthermore, we study some features of probability weight of correctness with respect to Procaccia and Rosenschein's junta distributions [PR07].
Keywords: computational social choice; bribery; Copeland elections; control; Llull elections; voting.
Control of elections refers to attempts by an agent to, via such actions as addition/deletion/partition of candidates or voters, ensure that a given candidate wins [BTT92]. An election system in which such an agent's computational task is NP-hard is said to be resistant to the given type of control. The only election systems known to be resistant to all the standard control types are highly artificial election systems created by hybridization [HHR07]. In this paper, we prove that an election system developed by the 13th-century mystic Ramon Llull and the well-studied Copeland election system are both resistant to all the standard types of (constructive) electoral control other than one variant of addition of candidates. This is the most comprehensive resistance to control yet achieved by any natural election system. In addition, we show that Llull and Copeland voting are very broadly resistant to bribery attacks, and we integrate the potential irrationality of voter preferences into many of our results.
Keywords: autoreducibility; length-decreasing self-reducibility; reductions; function classes; complete functions.
This paper studies the notions of autoreducibility and length-decreasing self-reducibility of functions and languages. Recently Glasser et al. have shown that for many classes C, including PSPACE and NP, it holds that all nontrivial complete languages are polynomial-time many-one autoreducible. In contrast, this paper shows that for many classes C such that P is a subset of C (e.g., PSPACE and NP), some complete languages in C are not polynomial-time length-decreasing self-reducible unless C is a subset of P, and that for classes C such that L is a subset of C and C is a subset of P (e.g., P and NL), some complete languages in C are not logarithmic-space length-decreasing self-reducible unless C is a subset of L.
This paper also shows a similar contrast between autoreducibility and length-decreasing self-reducibility in the case of functions. In particular, the paper shows that many function complexity classes FC (including the well-studied #P, SpanP, and GapP and the not-so-well-studied but highly natural #PE and TotP) have the property that all complete functions in FC are polynomial-time Turing-autoreducible. For #P and TotP, the autoreductions can be made polynomial-time one-Turing (one query per input).
These results show that, under reasonable assumptions, the notions of length-decreasing self-reducibility and autoreducibility differ both on complete languages and on complete functions. In a similar vein, this paper shows that under reasonable assumptions autoreducibility and random-self-reducibility differ with respect to functions.
Keywords: computational complexity; graph diameter; kings; graph radius; initial components.
A king in a directed graph is a vertex from which each vertex in the graph can be reached via paths of length at most two. There is a broad literature on tournaments (completely oriented digraphs), and it has been known for more than half a century that all tournaments have at least one king [Lan53]. Recently, kings have proven useful in theoretical computer science, in particular in the study of the complexity of reachability problems [NT05] and semifeasible sets [HNP98, HT06, HOZZ06].
In this paper, we study the complexity of recognizing kings. For each succinctly specified family of tournaments, the king problem is already known to belong to $\Pi_2^{\mathrm p}$ [HOZZ06]. We prove that the complexity of kingship problems is a rich enough vocabulary to pinpoint every nontrivial many-one degree in $\Pi_2^{\mathrm p}$. That is, we show that \emph{every} set in $\Pi_2^{\mathrm p}$ other than $\emptyset$ and $\Sigma^*$ is equivalent to a king problem under $\leq_{\mathrm m}^{\mathrm p}$-reductions. Indeed, we show that the equivalence can even be instantiated via relatively simple padding, and holds even if the notion of kings is redefined to refer to $k$-kings (for any fixed $k \geq 2$)---vertices from which all vertices can be reached via paths of length at most $k$. In contrast, we prove that recognizing whether a given vertex is a source (i.e., there exists a $k$ such that it is a $k$-king) yields languages that also fall within $\Pi_2^{\mathrm p}$, yet cannot be $\Pi_2^{\mathrm p}$-complete---or even $\Class{NP}$-hard---unless $\Class{P} = \Class{NP}$.
Using these and related techniques, we obtain a broad range of additional results about the complexity of king problems, diameter problems, and radius problems. It follows easily from our proof approach that the problem of testing kingship in succinctly specified graphs (which need not be tournaments) is $\Pi_2^{\mathrm p}$-complete. We show that the radius problem for arbitrary succinctly represented graphs is $\Sigma_3^{\mathrm p}$-complete, but that in contrast the diameter problem for arbitrary succinctly represented graphs (or even tournaments) is $\Pi_2^{\mathrm p}$-complete.
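When the graph is given explicitly rather than succinctly, testing kingship is easy, which is worth contrasting with the $\Pi_2^{\mathrm p}$-completeness results above (those concern succinctly specified graphs, whose explicit form can be exponentially large). A minimal explicit-representation test in Python (our own sketch, not from the paper):

    def is_king(adj, v):
        # adj maps each vertex to the set of its out-neighbors.
        # v is a king iff every vertex is reachable from v by a path
        # of length at most two.
        reach = {v} | adj[v]
        for u in adj[v]:
            reach |= adj[u]
        return reach == set(adj)

    # In the 3-cycle tournament a -> b -> c -> a, every vertex is a king.
    adj = {"a": {"b"}, "b": {"c"}, "c": {"a"}}
    print([x for x in adj if is_king(adj, x)])  # ['a', 'b', 'c']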
Keywords: bribery; computational social choice; control; manipulation; voting.
We provide an overview of some recent progress on the complexity of election systems. The issues studied include the complexity of the winner, manipulation, bribery, and control problems.
Keywords: computational social choice; multiagent systems; preference aggregation; computational complexity; elections; control; vulnerability; resistance; immunity; susceptibility.
Electoral control refers to attempts by an election's organizer (``the chair'') to influence the outcome by adding/deleting/partitioning voters or candidates. The groundbreaking work of Bartholdi, Tovey, and Trick on (constructive) control proposes computational complexity as a means of resisting control attempts: Look for election systems where the chair's task in seeking control is itself computationally infeasible.
We introduce and study a method of combining two or more candidate-anonymous election schemes in such a way that the combined scheme possesses all the resistances to control (i.e., all the NP-hardnesses of control) possessed by any of its constituents: It combines their strengths. From this and new resistance constructions, we prove for the first time that there exists an election scheme that is resistant to all twenty standard types of electoral control.
Keywords: nonuniform complexity; Kolmogorov random sets; sparse sets; leaf languages.
Unger studied the balanced leaf languages defined via poly-logarithmically sparse leaf pattern sets. Unger showed that NP-complete sets are not polynomial-time many-one reducible to such a balanced leaf language unless the polynomial hierarchy collapses to Theta^p_2, and that Sigma^p_2-complete sets are not polynomial-time bounded-truth-table reducible (respectively, polynomial-time Turing reducible) to any such balanced leaf language unless the polynomial hierarchy collapses to Delta^p_2 (respectively, Sigma^p_4).
This paper studies the complexity of the class of such balanced leaf languages, which will be denoted by VSLL. In particular, the following tight upper and lower bounds on VSLL are shown:
1. coNP is included in VSLL and VSLL is included in coNP/poly (the former inclusion was already shown by Unger).
2. coNP/1 is not included in VSLL unless PH collapses to Theta^p_2.
3. For no constant c > 0, VSLL is included in coNP/n^c.
4. P/(loglog(n) + O(1)) is included in VSLL.
5. For no h(n) = loglog(n) + omega(1), P/h is included in VSLL.
Keywords: closure property; computational complexity; integer division; NPMV; NPSV; proper subtraction; refinement; solution elimination; solution reduction; #P.
Given a function based on the computation of an NP machine, can one in general eliminate some solutions? That is, can one in general decrease the ambiguity? This simple question remains, even after extensive study by many researchers over many years, mostly unanswered. However, complexity-theoretic consequences and enabling conditions are known. In this tutorial-style article we look at some of those, focusing on the most natural framings: reducing the number of solutions of NP functions, refining the solutions of NP functions, and subtracting from or otherwise shrinking #P functions. We will see how small advice strings are important here, but we also will see how increasing advice size to achieve robustness is central to the proof of a key ambiguity-reduction result for NP functions.
Keywords: comparison network; comparator network; oblivious sorting; parallel sorting; analysis of algorithms; sorting network.
We further simplify Paterson's version of the Ajtai-Komlos-Szemeredi sorting network, and its analysis, mainly by tuning the invariant to be maintained.
Keywords: approval voting; bribery; computational complexity; Condorcet winner; dichotomy theorem; distributed artificial intelligence; Dodgson election; election manipulation; election system; Kemeny election; voting rule; multiagent system; plurality rule; preference aggregation; scoring system; Young election.
We study the complexity of influencing elections through bribery: How computationally complex is it for an external actor to determine whether by a certain amount of bribing voters a specified candidate can be made the election's winner? We study this problem for election systems as varied as scoring protocols and Dodgson voting, and in a variety of settings regarding homogeneous-vs.-nonhomogeneous electorate bribability, bounded-size-vs.-arbitrary-sized candidate sets, weighted-vs.-unweighted voters, and succinct-vs.-nonsuccinct input specification. We obtain both polynomial-time bribery algorithms and proofs of the intractability of bribery, and indeed our results show that the complexity of bribery is extremely sensitive to the setting. For example, we find settings in which bribery is NP-complete but manipulation (by voters) is in P, and we find settings in which bribing weighted voters is NP-complete but bribing voters with individual bribe thresholds is in P. For the broad class of elections (including plurality, Borda, k-approval, and veto) known as scoring protocols, we prove a dichotomy result for bribery of weighted voters: We find a simple-to-evaluate condition that classifies every case as either NP-complete or in P.
Keywords: network monitoring; network security; streaming algorithms; data streams; entropy.
Entropy of traffic distributions has been shown to aid a wide variety of network monitoring applications such as anomaly detection, clustering to reveal interesting patterns, and traffic classification. However, realizing this potential benefit in practice requires accurate algorithms that can operate on high-speed links, with low CPU and memory requirements. Estimating the entropy in a streaming model to enable such fine-grained traffic analysis has been a challenging problem. We give lower bounds for this problem, showing that neither approximation nor randomization alone will let us compute the entropy efficiently.
We present two algorithms for randomly approximating the entropy in a time and space efficient manner, applicable for use on very high speed (greater than OC-48) links. Our first algorithm for entropy estimation, inspired by the seminal work of Alon et al. for estimating frequency moments, has strong theoretical guarantees on the error and resource usage. Our second algorithm utilizes the observation that the efficiency can be substantially enhanced by separating the high-frequency items (or elephants) from the low-frequency items (or mice). Evaluations on real-world traffic traces from different deployment scenarios demonstrate the utility of our approaches.
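For orientation, the quantity these streaming algorithms approximate is the empirical entropy of the traffic distribution. A minimal non-streaming computation in Python (our own illustration; the paper's algorithms avoid storing per-item counts):

    import math
    from collections import Counter

    def empirical_entropy(stream):
        # H = -sum over items of (f_i/n) * log2(f_i/n),
        # where f_i is item i's frequency and n is the stream length.
        counts = Counter(stream)
        n = len(stream)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    print(empirical_entropy(["a", "a", "b", "c"]))  # 1.5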
Keywords: computational complexity; selector functions; positive reducibility; self-reducibility; p-selective sets.
We eliminate some special cases from the proofs of two theorems in which a machine instantiating a many-query reduction to a p-selective set is made to use only one query. The first theorem, originally proved by Buhrman, Torenvliet, and van Emde Boas [BTvEB93], states that any set that positively reduces to a p-selective set has a many-one reduction to that same set. The second, originally proved by Buhrman and Torenvliet [BT96], states that self-reducible p-selective sets are in P.
Keywords: self-knowing correctness; greedy algorithms; heuristic algorithms; frequently self-knowingly correct algorithms; Dodgson elections; Dodgson winner; Dodgson score.
In the year 1876 the mathematician Charles Dodgson, who wrote fiction under the now more famous name of Lewis Carroll, devised a beautiful voting system that has long fascinated political scientists. However, determining the winner of a Dodgson election is known to be complete for the \Theta_2^p level of the polynomial hierarchy. This implies that unless P=NP no polynomial-time solution to this problem exists, and unless the polynomial hierarchy collapses to NP the problem is not even in NP. Nonetheless, we prove that when the number of voters is much greater than the number of candidates---although the number of voters may still be polynomial in the number of candidates---a simple greedy algorithm very frequently finds the Dodgson winners in such a way that it ``knows'' that it has found them, and furthermore the algorithm never incorrectly declares a nonwinner to be a winner.
Keywords: computational complexity; unambiguous computing; unique discovery; closure properties; cluster computing; edge detection.
We study the robustness---the invariance under definition changes---of the cluster class CL#P [HHKW05]. This class contains each #P function that is computed by a balanced Turing machine whose accepting paths always form a cluster with respect to some length-respecting total order with efficient adjacency checks. The definition of CL#P is heavily influenced by the defining paper's focus on (global) orders. In contrast, we define a cluster class, CLU#P, to capture what seems to us a more natural model of cluster computing. We prove that the naturalness is costless: CL#P = CLU#P. Then we exploit the more natural, flexible features of CLU#P to prove new robustness results for CL#P and to expand what is known about the closure properties of CL#P.
The complexity of recognizing edges---of an ordered collection of computation paths or of a cluster of accepting computation paths---is central to this study. Most particularly, our proofs exploit the power of unique discovery of edges---the ability of nondeterministic functions to, in certain settings, discover on exactly one (in some cases, on at most one) computation path a critical piece of information regarding edges of orderings or clusters.
Keywords: reference affinity; NP-complete; divide-and-conquer computation; sampling method; data locality.
In POPL 2002, Petrank and Rawitz showed a universal result---finding optimal data placement is not only NP-hard but also impossible to approximate within a constant factor if P \neq NP. Here we study a recently published concept called reference affinity, which characterizes a group of data that are always accessed together in computation. On the theoretical side, we give the complexity for finding reference affinity in program traces, using a novel reduction that converts the notion of distance into satisfiability. We also prove that reference affinity automatically captures the hierarchical locality in divide-and-conquer computations including matrix solvers and N-body simulation. The proof establishes formal links between computation patterns in time and locality relations in space.
On the practical side, we show that efficient heuristics exist. In particular, we present a sampling method and show that it is more effective than the previously published technique, especially for data that are often but not always accessed together. We show the effect on generated and real traces. These theoretical and empirical results demonstrate that effective data placement is still attainable in general-purpose programs because common (albeit not all) locality patterns can be precisely modeled and efficiently analyzed.
Keywords: self-reducibility; autoreducibility; PSPACE-complete; NP-complete; NL-complete.
Recently Gla{\ss}er et al. have shown that for many classes $C$ including PSPACE and NP it holds that all of their nontrivial many-one complete languages are autoreducible. This immediately raises the question of whether all many-one complete languages are Turing self-reducible for such classes $C$.
This paper considers a simpler version of this question---whether all PSPACE-complete (NP-complete) languages are length-decreasing self-reducible. We show that if all PSPACE-complete languages are length-decreasing self-reducible then PSPACE = P, and that if all NP-complete languages are length-decreasing self-reducible then NP = P.
The same type of result holds for many other natural complexity classes. In particular, we show that (1) not all NL-complete sets are logspace length-decreasing self-reducible, (2) unconditionally not all PSPACE-complete languages are logspace length-decreasing self-reducible, and (3) unconditionally not all EXP-complete languages are polynomial-time length-decreasing self-reducible.
Keywords: approval voting; computational complexity; computational resistance; computational vulnerability; Condorcet voting; destructive control; election systems; immunity; plurality voting; vote suppression; preference aggregation; multiagent systems; tie-breaking rules; voting systems; distributed artificial intelligence.
Preference aggregation in a multiagent setting is a central issue in both human and computer contexts. In this paper, we study in terms of complexity the vulnerability of preference aggregation to destructive control. That is, we study the ability of an election's chair to, through such mechanisms as voter/candidate addition/suppression/partition, ensure that a particular candidate (equivalently, alternative) does not win. And we study the extent to which election systems can make it impossible, or computationally costly (NP-complete), for the chair to execute such control. Among the systems we study---plurality, Condorcet, and approval voting---we find cases where systems immune or computationally resistant to a chair choosing the winner nonetheless are vulnerable to the chair blocking a victory. Beyond that, we see that among our studied systems no one system offers the best protection against destructive control. Rather, the choice of a preference aggregation system will depend closely on which types of control one wishes to be protected against. We also find concrete cases where the complexity of or susceptibility to control varies dramatically based on the choice among natural tie-handling rules.
Keywords: advice classes; associative selector functions; function refinement; linear advice; low hierarchy; NP-hardness; NPSV-selective sets; P-selective sets; semifeasible algorithms.
The study of semifeasible algorithms was initiated by Selman's work a quarter of a century ago [Sel79,Sel81,Sel82]. Informally put, this research stream studies the power of those sets L for which there is a polynomial-time function f (deterministic, or in some cases belonging to one of various nondeterministic function classes) such that when at least one of x and y belongs to L, then f(x,y) \in L \cap \{x,y\}. The intuition here is that it is saying: "Regarding membership in L, if you put a gun to my head and forced me to bet on one of x or y as belonging to L, my money would be on f(x,y)."
In this article, we present a number of open problems from the theory of semifeasible algorithms. For each we present its background and review what partial results, if any, are known.
Keywords: tournaments; Pi-Two completeness; P-selectivity; succinctly specified graphs; kings; complexity classification.
A king in a directed graph is a node from which each node in the graph can be reached via paths of length at most two. There is a broad literature on tournaments (completely oriented digraphs), and it has been known for more than half a century that all tournaments have at least one king [Lan53]. Recently, kings have proven useful in theoretical computer science, in particular in the study of the complexity of the semifeasible sets [HNP98,HT05] and in the study of the complexity of reachability problems [Tan01,NT02].
In this paper, we study the complexity of recognizing kings. For each succinctly specified family of tournaments, the king problem is known to belong to $\Pi_2^p$ [HOZZ]. We prove that this bound is optimal: We construct a succinctly specified tournament family whose king problem is $\Pi_2^p$-complete. It follows easily from our proof approach that the problem of testing kingship in succinctly specified graphs (which need not be tournaments) is $\Pi_2^p$-complete. We also obtain $\Pi_2^p$-completeness results for k-kings in succinctly specified j-partite tournaments, $k,j \geq 2$, and we generalize our main construction to show that $\Pi_2^p$-completeness holds for testing k-kingship in succinctly specified families of tournaments for all $k \geq 2$.
Keywords: computational complexity; complexity classes; relativization; polynomial degree bounds; graph reconstruction; directed hypergraphs.
We attain two main objectives in this thesis. First, we employ test languages to prove limitations of proof techniques to resolve certain questions in complexity theory. In this part of the thesis, we study the relationship between quantum classes and counting classes via closure properties, collapses, and relativized separations. We show that the best known classical bounds for quantum classes such as EQP and BQP cannot be significantly improved using relativizable proof techniques. In some cases, we strengthen known relativized separations between quantum and counting classes to relativized immunity separations. Furthermore, using the closure properties of certain gap-definable counting classes, we prove strong consequences, in terms of the complexity of the polynomial hierarchy, of the following hypotheses: NQP is contained in BQP, and EQP equals NQP. Aside from using test languages to study the relationship between quantum and counting classes, we use test languages to construct, via degree bounds of polynomials, relativized worlds that exhibit separations of classes and nonexistence of complete sets.
Second, we study certain concrete problems and characterize their complexity either by showing completeness results for complexity classes or by relating their complexity to some well-studied computational problem (e.g., the graph isomorphism problem). In this part of the thesis, we study concrete problems related to the reconstruction of a graph from a collection of vertex-deleted or edge-deleted subgraphs, and concrete problems related to a notion of linear connectivity in directed hypergraphs. We show that the problems we study related to the reconstruction of graphs either are isomorphic (in a complexity-theoretic sense) to the graph isomorphism problem or are many-one hard for the graph isomorphism problem. In our study related to directed hypergraphs, we introduce a notion of linear hyperconnectivity, denoted by L-hyperpath, in directed hypergraphs and show how this notion can be used to model problems in diverse domains. We study problems related to the cyclomatic number of directed hypergraphs with respect to L-hypercycles (the minimum number of hyperedges that need to be deleted so that the directed hypergraph becomes free of L-hypercycles) and obtain completeness results for different levels of the polynomial hierarchy.
Keywords: semifeasible algorithms; advice complexity; P-selectivity; immunity.
We prove that P-sel, the class of all P-selective sets, is EXP-immune, but is not EXP/1-immune. That is, we prove that some infinite P-selective set has no infinite EXP-time subset, but we also prove that every infinite P-selective set has some infinite subset in EXP/1. Informally put, the immunity of P-sel is so fragile that it is pierced by a single bit of information.
The above claims follow from broader results that we obtain about the immunity of the P-selective sets. In particular, we prove that for every recursive function f, P-sel is DTIME(f)-immune. Yet we also prove that P-sel is not \Pi_2^p/1-immune.
Keywords: computational complexity; elections; election manipulation; scoring systems; dichotomy theorems; voting.
Scoring protocols are a broad class of voting systems. Each is defined by a vector $(\alpha_1,\alpha_2,\ldots,\alpha_m)$, $\alpha_1 \geq \alpha_2 \geq \cdots \geq \alpha_m$, of integers such that each voter contributes $\alpha_1$ points to his/her first choice, $\alpha_2$ points to his/her second choice, and so on, and any candidate receiving the most points is a winner.
What is it about scoring-protocol election systems that makes some have the desirable property of being NP-complete to manipulate, while others can be manipulated in polynomial time? We find the complete, dichotomizing answer: Diversity of dislike. Every scoring-protocol election system having two or more point values assigned to candidates other than the favorite---i.e., having $||\{\alpha_i : 2 \leq i \leq m\}|| \geq 2$---is NP-complete to manipulate. Every other scoring-protocol election system can be manipulated in polynomial time. In effect, we show that---other than trivial systems (where all candidates always tie), plurality voting, and plurality voting's transparently disguised translations---\emph{every} scoring-protocol election system is NP-complete to manipulate.
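The dichotomy condition itself is simple to evaluate. A minimal Python sketch (our own illustration, mirroring the criterion stated above) classifying a scoring vector:

    def manipulation_is_np_complete(alphas):
        # alphas = (alpha_1, ..., alpha_m), nonincreasing.
        # NP-complete to manipulate iff the point values assigned to
        # candidates other than the favorite take at least two
        # distinct values.
        return len(set(alphas[1:])) >= 2

    print(manipulation_is_np_complete((1, 0, 0)))  # plurality: False (in P)
    print(manipulation_is_np_complete((2, 1, 0)))  # Borda: True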
Keywords: reference affinity; data locality; NPC; N-body simulation; memory hierarchy.
To study data placement on memory hierarchy, we present a model called {\em reference affinity}. Given a program trace, the model divides program data into hierarchical partitions (called affinity groups) based on a parameter $k$, which specifies the number of distinct data elements between accesses to members of each affinity group. Trivial solutions exist for the two ends of the hierarchy. At the top, when $k$ is no less than the data size, all program data belong to one affinity group. At the bottom, when $k$ is 0, each element is an affinity group.
We present two theoretical results. The first is the complexity. We show that finding and checking affinity groups are in P when $k=1$ and $k=2$. When $k=3$, the checking problem is NP-complete, and the finding problem is NP-hard. The second is the uses. We show that reference affinity captures the hierarchical data locality from the trace of a hierarchical computation. As additional evidence, we cite empirical results for general-purpose programs.
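The basic primitive underlying the model is a distance measured in distinct data elements rather than in time. A minimal Python sketch (our own; the full definition of affinity groups builds chains of such links, and conventions on endpoints vary, so this is illustrative only):

    def volume_distance(trace, i, j):
        # Number of distinct data elements accessed strictly between
        # positions i and j of the trace (i < j). Reference affinity
        # groups together data whose accesses are linked within
        # volume distance k.
        return len(set(trace[i + 1:j]))

    trace = ["a", "x", "b", "y", "y", "a", "b"]
    print(volume_distance(trace, 0, 2))  # 1: only "x" lies between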
Keywords: computational complexity; counting complexity; interval size functions; p-orders; adjacency checks; number of divisors; cluster computation.
Given a p-order A over a universe of strings (i.e., a transitive, reflexive, antisymmetric relation such that if (x, y) is an element of A then |x| is polynomially bounded by |y|), an interval size function of A returns, for each string x in the universe, the number of strings in the interval between strings b(x) and t(x) (with respect to A), where b(x) and t(x) are functions that are polynomial-time computable in the length of x.
By choosing sets of interval size functions based on feasibility requirements for their underlying p-orders, we obtain new characterizations of complexity classes. We prove that the set of all interval size functions whose underlying p-orders are polynomial-time decidable is exactly #P. We show that the interval size functions for orders with polynomial-time adjacency checks are closely related to the class FPSPACE(poly). Indeed, FPSPACE(poly) is exactly the class of all nonnegative functions that are an interval size function minus a polynomial-time computable function.
We study two important functions in relation to interval size functions. The function #DIV maps each natural number n to the number of nontrivial divisors of n. We show that #DIV is an interval size function of a polynomial-time decidable partial p-order with polynomial-time adjacency checks. The function #MONSAT maps each monotone boolean formula F to the number of satisfying assignments of F. We show that #MONSAT is an interval size function of a polynomial-time decidable total p-order with polynomial-time adjacency checks.
Finally, we explore the related notion of cluster computation.
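In symbols, the interval size function induced by $A$, $b$, and $t$ can be written as follows (our restatement of the definition above; whether the endpoints $b(x)$ and $t(x)$ are themselves counted follows the paper's convention):

    \[
      f(x) \;=\; \bigl\|\,\{\, z \;:\; b(x) \leq_A z \leq_A t(x) \,\}\,\bigr\|,
    \]

where $\leq_A$ denotes the p-order and $b$ and $t$ are polynomial-time computable in $|x|$.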
Keywords: simple stochastic games; Hoffman-Karp algorithm; algorithm analysis.
We obtain the first nontrivial worst-case upper bound on the number of iterations required by the well-known Hoffman-Karp algorithm for the simple stochastic game problem. We also describe a randomized variant of the Hoffman-Karp algorithm and analyze the expected number of iterations required by it in the worst case.
Keywords: computational complexity; complexity-theoretic one-way functions; associativity; commutativity; strong noninvertibility.
Rabi and Sherman [RS97,RS93] proved that the hardness of factoring is a sufficient condition for there to exist one-way functions (i.e., p-time computable, honest, p-time noninvertible functions; this paper is in the worst-case model, not the average-case model) that are total, commutative, and associative but not strongly noninvertible. In this paper we improve the sufficient condition to ``P does not equal NP.'' More generally, in this paper we completely characterize which types of one-way functions stand or fall together with (plain) one-way functions---equivalently, stand or fall together with P not equaling NP. We look at the four attributes used in Rabi and Sherman's seminal work on algebraic properties of one-way functions (see [RS97,RS93]) and subsequent papers---strongness (of noninvertibility), totality, commutativity, and associativity---and for each attribute, we allow it to be required to hold, required to fail, or ``don't care.'' In this categorization there are 3^4 = 81 potential types of one-way functions. We prove that each of these 81 feature-laden types stands or falls together with the existence of (plain) one-way functions.
Keywords: legitimate deck; graph isomorphism; reconstruction numbers; graph reconstruction.
We investigate the relative complexity of the graph isomorphism problem (GI) and problems related to the reconstruction of a graph from its vertex-deleted or edge-deleted subgraphs (in particular, the deck checking (DC) and legitimate deck (LD) problems). We show that these problems are closely related for all amounts $c \geq 1$ of deletion:
1) $GI \equiv^{l}_{iso} VDC_{c}$, $GI \equiv^{l}_{iso} EDC_{c}$, $GI \leq^{l}_{m} LVD_c$, and $GI \equiv^{p}_{iso} LED_c$.
2) For all $k \geq 2$, $GI \equiv^{p}_{iso} k-VDC_c$ and $GI \equiv^{p}_{iso} k-EDC_c$.
3) For all $k \geq 2$, $GI \leq^{l}_{m} k-LVD_c$.
4) $GI \equiv^{p}_{iso} 2-LVD_c$.
5) For all $k \geq 2$, $GI \equiv^{p}_{iso} k-LED_c$.
For many of these results, even the $c = 1$ case was not previously known.
Similar to the definition of reconstruction numbers $vrn_{\exists}(G)$ [HP85] and $ern_{\exists}(G)$ (see page 120 of [LS03]), we introduce two new graph parameters, $vrn_{\forall}(G)$ and $ern_{\forall}(G)$, and give an example of a family $\{G_n\}_{n \geq 4}$ of graphs on $n$ vertices for which $vrn_{\exists}(G_n) < vrn_{\forall}(G_n)$. For every $k \geq 2$ and $n \geq 1$, we show that there exists a collection of $k$ graphs on $(2^{k-1}+1)n+k$ vertices with $2^{n}$ 1-vertex-preimages, i.e., one has families of graph collections whose number of 1-vertex-preimages is huge relative to the size of the graphs involved.
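To make the deck-checking problem from this abstract concrete, here is a brute-force (exponential-time, purely illustrative) sketch of vertex-deleted deck checking; all function names are ours:

    from itertools import permutations

    def isomorphic(g, h, n):
        # Brute-force isomorphism test for graphs on vertex set range(n),
        # with edges given as frozensets of two vertices.
        if len(g) != len(h):
            return False
        return any({frozenset({p[u], p[v]}) for u, v in map(tuple, g)} == h
                   for p in permutations(range(n)))

    def card(g, n, v):
        # The vertex-deleted subgraph G - v, relabeled onto range(n - 1).
        relabel = {u: i for i, u in enumerate(x for x in range(n) if x != v)}
        return {frozenset({relabel[a], relabel[b]})
                for a, b in map(tuple, (e for e in g if v not in e))}

    def is_deck(g, n, deck):
        # Deck checking: is deck (n graphs on n - 1 vertices) the multiset
        # of vertex-deleted subgraphs of g, up to isomorphism?  Greedy
        # matching is sound because isomorphism is an equivalence relation.
        if len(deck) != n:
            return False
        remaining = list(deck)
        for v in range(n):
            c = card(g, n, v)
            i = next((i for i, d in enumerate(remaining)
                      if isomorphic(c, d, n - 1)), None)
            if i is None:
                return False
            del remaining[i]
        return True

    g = {frozenset({0, 1}), frozenset({1, 2})}  # path on 3 vertices
    print(is_deck(g, 3, [{frozenset({0, 1})}, set(), {frozenset({0, 1})}]))  # True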
Keywords: structural complexity; unambiguous computation; alternation; relativization.
Unambiguity in alternating Turing machines has received considerable attention in the context of analyzing globally-unique games by Aida et al. [ACRW04] and in the design of efficient protocols involving globally-unique games by Crasmaru et al. [CGRS04]. This paper explores the power of unambiguity in alternating Turing machines in the following settings:
(1) We show that unambiguity-based hierarchies---AUPH, UPH, and \slant{UPH}---are infinite in some relativized world. For each $k \geq 2$, we construct another relativized world where the unambiguity-based hierarchies collapse so that they have exactly $k$ distinct levels and their $k$th levels coincide with PSPACE. These results shed light on the relativized power of the unambiguity-based hierarchies, and parallel the results known for the case of the polynomial hierarchy.
(2) We define the bounded-level unambiguous alternating solution class UAS(k), for every $k \geq 1$, as the class of sets for which strings in the set are accepted unambiguously by some polynomial-time alternating Turing machine $N$ with at most $k$ alternations, while strings not in the set either are rejected by $N$ or are accepted with ambiguity by $N$. We construct a relativized world where, for all $k \geq 1$, $UP_{\leq k}$ is a proper subset of $UP_{\leq k+1}$ and $UAS(k)$ is a proper subset of $UAS(k+1)$.
(3) Finally, we show that robustly $k$-level unambiguous alternating polynomial-time Turing machines accept languages that are computable in $P^{\Sigma^{p}_{k} \oplus A}$, for every oracle $A$. This generalizes a result of Hartmanis and Hemachandra [HH90].
Keywords: computational complexity; problem classification.
Computer scientists, programmers, and engineers need to determine the complexity of computational problems on a daily basis, and they typically ask the following questions: Is the problem easy or hard? If it is easy, is there a really efficient algorithm for the problem? If the problem is hard, how hard is it? Are there large subclasses of problems that are easy? Are there efficient approximation algorithms for the problem? Finding the answers to these questions pertaining to problem classification can be arduous and daunting for someone who is not an expert in the domain. Different problems, even from the same domain, may require vastly different proof techniques for problem classification. Thus, it is highly desirable to have easily applicable tools (theorems, classification tests, algorithms, and dichotomy results) that classify a wide range of problems. In this thesis we provide such general tools for determining the complexity of problems arising in the following settings: boolean circuits, language properties of central complexity classes (such as NP, PP, and ParityP), cycles in graphs, oracle (database) access, theoretical models of computer simulation, and structural restrictions on the witness functions of nondeterministic polynomial-time Turing machines.
Keywords: cannibalistic computation; context-free languages; linear space; overhead-free computation; CFL; deterministic context-free languages; DCFL; in-place algorithms; space overhead; two-stack automata; DLINSPACE; restarting automata; RRW-automata; editing Turing machines; space reuse.
We study Turing machines that are allowed absolutely no space overhead. The only work space the machines have, beyond the fixed amount of memory implicit in their finite-state control, is that which they can create by cannibalizing the input bits' own space. This model more closely reflects the fixed-sized memory of real computers than does the standard complexity-theoretic model of linear space. Though some context-sensitive languages cannot be accepted by such machines, we show that all context-free languages can be accepted nondeterministically in polynomial time with absolutely no space overhead, and that all deterministic context-free languages can be accepted deterministically in polynomial time with absolutely no space overhead.
Keywords: linear advice; selector functions; P-selectivity; associativity; commutativity; P/linear; NP/linear; advice complexity; semifeasible computation; algebraic properties.
This paper provides a tutorial overview of the advice complexity of the semifeasible sets---informally put, the class of sets having a polynomial-time algorithm that, given as input any two strings of which at least one belongs to the set, will choose one that does belong to the set. No previous familiarity with either the semifeasible sets or advice complexity will be assumed, and when we include proofs we will try to make the material as accessible as possible by providing intuitive, informal presentations. Karp and Lipton (1980) introduced advice complexity about a quarter of a century ago. Advice complexity asks, for a given power of interpreter, how many bits of ``help'' suffice to accept a given set. Thus, this is a notion that contains aspects both of informational complexity and of computational complexity. We will see that for some powers of interpreter the (worst-case) complexity of the semifeasible sets is known right down to the bit (and beyond), but that for the most central power of interpreter---deterministic polynomial time---the complexity is currently known only to be at least linear and at most quadratic.
While overviewing the advice complexity of the semifeasible sets, we will stress also the issue of whether the functions at the core of semifeasibility---so-called selector functions---can without cost be chosen to possess such algebraic properties as commutativity and associativity. We will see that this is relevant, in ways both potential and actual, to the study of the advice complexity of the semifeasible sets.
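A standard folklore example (ours to include here, not specific to this paper) of a selector having these algebraic properties: for a threshold set $A = \{x : x \leq T\}$ with $T$ unknown, the minimum function is a selector, and it is both commutative and associative:

    def selector(x, y):
        # P-selector for A = {n : n <= T}, T unknown: if at least one of
        # x and y lies in A, then so does min(x, y).  Note that min is
        # commutative and associative, the properties discussed above.
        return min(x, y)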
Keywords: certificates; P-producible sets; complexity theory; inverse problems; coNP-hardness; NP.
How hard is it to invert NP problems? We show that all superlinearly certified inverses of NP problems are coNP-hard. As part of our work we develop a novel proof technique that builds diagonalizations against certificates directly into a circuit.
Keywords: odd-even merge; merging networks; sorting networks; comparison networks; oblivious merging; oblivious sorting; parallel sorting; bitonic sort; parallel processing; analysis of algorithms.
Batcher's bitonic merge has been presented in two distinct recursive ways. We show that a transparently equivalent redefinition of bitonicity clarifies the correctness of the more elegant of the approaches, and we outline a proof that the two approaches do yield the same networks.
Keywords: complexity classes; gap-definability; polynomial degree bounds; Turing hardness; relativization theory.
Resolving an issue open since Fenner, Fortnow, and Kurtz raised it in [FFK94], we prove that LWPP is not uniformly gap-definable and that WPP is not uniformly gap-definable. We do so in the context of a broader investigation, via the polynomial degree bound technique, of the lowness, Turing hardness, and inclusion relationships of counting and other central complexity classes.
Keywords: semi-feasible algorithms; advice complexity; relativization theory; computational complexity.
Ko proved that the P-selective sets are in the advice class P/quadratic. Hemaspaandra et al. showed that P-selective sets are in PP/linear. Hemaspaandra and Torenvliet improved this upper bound and proved that P-selective sets are in NP/linear. From this result it follows that if P-sel \not\subseteq P/linear then P \neq NP, and so that separation cannot be proven using relativizable techniques. They also raised the following question: Is P-sel \subseteq P/linear? That is, they asked whether each P-selective set has linear advice. This question is interesting in light of the fact that the P-selective sets constructed using the classic left-cut technique all have linear advice complexity. In this paper, we prove that no relativizable technique can resolve this question. In fact, we prove that there is an oracle A such that PP^A \cap P-sel^A \not\subseteq P^A/linear. In our proof, we use Kolmogorov random permutations in conjunction with random tournaments to construct a P-selective set with the desired properties. This construction may be of independent interest in relativization theory.
Keywords: Turing reduction; oracle (database) access; padding functions; computational complexity.
We study reductions that limit the extreme adaptivity of Turing reductions. In particular, we study reductions that make a rapid, structured progression through the set to which they are reducing: Each query is strictly longer (shorter) than the previous one. We call these reductions query-increasing (query-decreasing) Turing reductions. We also study query-nonincreasing (query-nondecreasing) Turing reductions. These are Turing reductions in which the sequence of query lengths is nonincreasing (nondecreasing). We ask whether these restrictions in fact limit the power of reductions. We prove that query-increasing and query-decreasing Turing reductions are incomparable with (that is, are neither strictly stronger than nor strictly weaker than) truth-table reductions and are strictly weaker than Turing reductions. In addition, we prove that query-nonincreasing and query-nondecreasing Turing reductions are strictly stronger than truth-table reductions and strictly weaker than Turing reductions. Despite the fact that we prove query-increasing and query-decreasing Turing reductions to be, in the general case, strictly weaker than Turing reductions, we identify a broad class of sets A for which any set that Turing reduces to A will also reduce to A via both query-increasing and query-decreasing Turing reductions. In particular, this holds for all tight paddable sets, where a set is said to be tight paddable exactly if it is paddable via a function whose output length is bounded tightly both from above and from below in the length of the input. We prove that many natural NP-complete problems such as satisfiability, clique, and vertex cover are tight paddable.
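The role tight paddability plays can be sketched as follows (a toy illustration of ours; it assumes pad(q, n) returns a string of length exactly n that is in the target set iff q is):

    def make_increasing(queries, pad):
        # Replay an arbitrary query sequence with strictly increasing
        # query lengths, using a tight padding function pad(q, n).
        out, length = [], 0
        for q in queries:
            length = max(length + 1, len(q))
            out.append(pad(q, length))
        return out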
Keywords: complexity classes; hyperconnection; cyclomatic number; directed hypergraphs.
We introduce a notion of linear hyperconnection (formally denoted L-hyperpath) between nodes in a directed hypergraph and relate this notion to existing notions of hyperpaths in directed hypergraphs. We show that many interesting questions in problem domains such as secret transfer protocols, routing in packet filtered networks, and propositional satisfiability are basically questions about the existence of L-hyperpaths or about the cyclomatic number of directed hypergraphs w.r.t. L-hypercycles (the minimum number of hyperedges that need to be deleted to make a directed hypergraph free of L-hypercycles). We prove that the L-hyperpath existence problem, the cyclomatic number problem, the minimum cyclomatic set problem, and the minimal cyclomatic set problem are each complete for a different level (respectively, NP, $\Sigma^{p}_{2}$, $\Pi^{p}_{2}$, and DP) of the polynomial hierarchy.
Keywords: computational complexity; kings; ranking; immunity; bi-immunity; Toda equivalence classes; semi-feasible computation; P-selectivity; left cuts; P-printability tournaments.
We identify two properties that are effectively computable for P-selective sets. Namely, we show that, for any P-selective set, finding a string that is in a given length's top Toda equivalence class (very informally put, a string from $\Sigma^n$ that the set's P-selector function declares to be most likely to belong to the set) is $FP^{\Sigma_2^p}$ computable, and we show that each P-selective set contains a weakly-$P^{\Sigma_2^p}$-rankable subset.
Keywords: cycle modularity problems; graph theory; computational complexity; algorithms.
The even cycle problem for both undirected [Tho88] and directed [RST99] graphs has been the topic of intense research in the last decade. In this paper, we study the computational complexity of cycle length modularity problems. Roughly speaking, in a cycle length modularity problem, given an input (undirected or directed) graph, one has to determine whether the graph has a cycle $C$ of a specific length (or one of several different lengths), modulo a fixed integer. We denote the two families (one for undirected graphs and one for directed graphs) of problems by $(S,m)-UC$ and $(S,m)-DC$, where $m \in \mathbb{N}$ and $S \subseteq \{0, 1, \ldots, m-1\}$. $(S,m)-UC$ (respectively, $(S,m)-DC$) is defined as follows: Given an undirected (respectively, directed) graph $G$, is there a cycle in $G$ whose length, modulo $m$, is a member of $S$? In this paper, we fully classify (i.e., as either polynomial-time solvable or as NP-complete) each problem $(S,m)-UC$ such that $0 \in S$ and each problem $(S,m)-DC$ such that $0 \notin S$. We also give a sufficient condition on $S$ and $m$ for the following problem to be polynomial-time computable: $(S,m)-UC$ such that $0 \notin S$.
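For small instances the directed problem can be decided by brute-force enumeration of simple cycles; the following sketch (ours, exponential time, illustration only) decides $(S,m)-DC$:

    def has_cycle_mod(adj, S, m):
        # adj: directed graph as {vertex: set of successors}.  Decide
        # whether some simple cycle has length l with (l mod m) in S.
        def extend(start, v, visited, length):
            for w in adj.get(v, ()):
                if w == start and (length + 1) % m in S:
                    return True
                if w not in visited and extend(start, w, visited | {w}, length + 1):
                    return True
            return False
        return any(extend(s, s, {s}, 0) for s in adj)

    # The triangle 0 -> 1 -> 2 -> 0 has a cycle of length 0 mod 3.
    print(has_cycle_mod({0: {1}, 1: {2}, 2: {0}}, {0}, 3))  # True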
Keywords: computational complexity; quantum complexity classes; gap-definable counting classes; relativization theory; strong separations; reduction closure properties.
We study the complexity of quantum complexity classes such as EQP, BQP, and NQP (quantum analogs of P, BPP, and NP, respectively) using classical complexity classes such as ZPP, WPP, and C_{=}P. The contributions of this paper are threefold. First, via oracle constructions, we show that no relativizable proof technique can improve the best known classical upper bound for BQP (BQP \subseteq AWPP [FR99]) to BQP \subseteq WPP and the best known classical lower bound for EQP (P \subseteq EQP) to ZPP \subseteq EQP. Second, we prove that there are oracles A and B such that, relative to A, coRP is immune to NQP and relative to B, BQP is immune to P^{C_{=}P}. Extending a result of de Graaf and Valiant [dGV02], we construct a relativized world where EQP is immune to Mod_pP. Third, motivated by the fact that counting classes (e.g., LWPP, AWPP, etc.) are the best known classical upper bounds on quantum complexity classes, we study properties of these counting classes. We prove that WPP is closed under polynomial-time truth-table reductions, while we construct an oracle relative to which WPP is not closed under polynomial-time Turing reductions. The latter result implies that proving the equality of the similar-appearing classes LWPP and WPP would require nonrelativizable proof techniques. We also prove that both AWPP and APP are closed under UP-Turing reductions. We use closure properties of WPP and AWPP to prove interesting consequences, in terms of the complexity of the polynomial hierarchy, of the following hypotheses: NQP \subseteq BQP and EQP = NQP.
Keywords: greedy search; heuristic search; ad-hoc routing; small worlds; landscapes.
Kleinberg provides the first theoretical characterization of the algorithmic aspects of small-world graphs embedded in metric spaces. The algorithms that Kleinberg studies are closely related to decentralized routing schemes used in ad-hoc networking environments. We study decentralized routing on fitness landscapes, which are a model that generalizes the properties of metric graphs and allows us to consider factors other than distance in designing decentralized routing schemes. We show that certain features of landscapes upper bound the amount of flooding necessary in order for greedy, decentralized routing schemes to successfully deliver messages. Finally, we show that, in Kleinberg's model, there is a phase transition in the amount of flooding necessary for efficient routing.
Keywords: computational complexity; selective; membership comparable; self-reduction; low information content; sparse; graph isomorphism; graph automorphism; circuit value problem; reachability problem.
We study whether sets inside NP can be reduced to sets with low information content but possibly still high computational complexity. Examples of sets with low information content are tally sets, sparse sets, P-selective sets and membership comparable sets. For the graph automorphism and isomorphism problems GA and GI, for the directed graph reachability problem GAP, for the determinant function det, and for logspace self-reducible languages we establish the following results:
o If GA is polynomial-time truth-table reducible to a P-selective set, then GA is in P.
o If GI is O(log n)-membership comparable, then GI is in RP.
o If GAP is logspace O(1)-membership comparable, then GAP is in L.
o If det is logspace Turing reducible to an L-selective set, then det is in FL.
o If a language A is logspace self-reducible and logspace Turing reducible to an L-selective set, then A is in L.
The last result is a strong logspace version of the characterization of P as the class of self-reducible P-selective languages. As P and NL have logspace self-reducible complete sets, it also establishes a logspace analogue of the conjecture that if SAT is polynomial-time Turing reducible to a P-selective set, then SAT is in P.
Keywords: associations; maximally frequent itemset; distributed data mining; heterogeneous; similarity.
This paper proposes a new measure for similarity between basket datasets. The new measure is calculated from support counts using a formula inspired by information entropy. Experiments on both real and synthetic datasets show the effectiveness of the measure. This paper also studies the problem of finding a mapping between categorical database attribute sets using similarity measures. A generic approach for identifying such a mapping is proposed. The approach is implemented based on the similarity measure proposed in the paper, and its performance has been evaluated and validated. Moreover, this paper explores applications of the similarity measure to mining distributed datasets.
Keywords: space overhead; space reuse; overhead-free computation; linear space; context-sensitive languages; context-free languages; deterministic linear languages; meta-linear languages.
We study Turing machines that are allowed absolutely no space overhead. The only work space the machines have, beyond the fixed amount of memory implicit in their finite-state control, is that which they can create by cannibalizing the input bits' own space. This model more closely reflects the fixed-sized memory of real computers than does the standard complexity-theoretic model of linear space. Though some context-sensitive languages cannot be accepted by such machines, we show that subclasses of the context-free languages can even be accepted in polynomial time with absolutely no space overhead.
Keywords: P-selectivity; NP-selectivity; nondeterministic selectivity; selector functions; advice complexity; nonuniform complexity; semi-feasible computation; algebraic properties; associativity; commutativity; immunity; printability; tournaments; digraphs.
The nondeterministic advice complexity of the P-selective sets is known to be exactly linear. Regarding the deterministic advice complexity of the P-selective sets---i.e., the amount of Karp--Lipton advice needed for polynomial-time machines to recognize them in general---the best current upper bound is quadratic [Ko, 1983] and the best current lower bound is linear [Hemaspaandra and Torenvliet, 1996]. We prove that every associatively P-selective set is commutatively, associatively P-selective. Using this, we establish an algebraic sufficient condition for the P-selective sets to have a linear upper bound (which thus would match the existing lower bound) on their deterministic advice complexity: If all P-selective sets are associatively P-selective then the deterministic advice complexity of the P-selective sets is linear. The weakest previously known sufficient condition was P=NP.
We also establish related results for algebraic properties of, and advice complexity of, the nondeterministically selective sets.
Keywords: determinant; rank; enumerative approximation; counting logspace classes.
We investigate the complexity of enumerative approximation of two elementary problems in linear algebra, computing the rank and the determinant of a matrix. In particular, we show that if there exists an enumerator that, given a matrix, outputs a list of constantly many numbers, one of which is guaranteed to be the rank of the matrix, then it can be determined in AC^0 (with oracle access to the enumerator) which of these numbers is the rank. Thus, for example, if the enumerator is an FL function, then the problem of computing the rank is in FL. The result holds for matrices over any commutative ring whose size grows at most polynomially with the size of the matrix. The existence of such an enumerator also implies a slightly stronger collapse of the exact counting logspace hierarchy. For the determinant function Det we establish the following two results: 1. If Det is poly-enumerable in logspace, then Det is in FL. 2. For any prime p, if Det-mod-p is (p-1)-enumerable in Mod_pL, then Det-mod-p is in FL. These results give a perspective on the approximability of many elementary linear algebra problems equivalent to computing the rank or the determinant. Due to the close connection between the determinant function and #L, as well as between the rank function and AC^0(C_=L), our results might yield a better understanding of the exact power of counting in logspace and the relationships among the complexity classes sandwiched between NL and uniform TC^1.
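The enumerator notion can be made concrete as follows (a brute-force illustration of ours over the rationals; it is emphatically not the paper's AC^0 selection procedure): given candidate values, one of which is guaranteed to be the rank, the true rank is the largest candidate that survives verification.

    from itertools import combinations
    import numpy as np

    def rank_at_least(M, k):
        # rank(M) >= k iff some k x k submatrix is nonsingular (brute
        # force; the 1e-9 tolerance guards against floating-point noise).
        if k == 0:
            return True
        rows, cols = M.shape
        return any(abs(np.linalg.det(M[np.ix_(r, c)])) > 1e-9
                   for r in combinations(range(rows), k)
                   for c in combinations(range(cols), k))

    def rank_from_enumerator(M, candidates):
        # candidates: a list guaranteed to contain rank(M).  Every k up
        # to the true rank passes the test; every larger k fails.
        return max(k for k in candidates if rank_at_least(M, k))

    M = np.array([[1.0, 2.0], [2.0, 4.0]])     # rank 1
    print(rank_from_enumerator(M, [0, 1, 2]))  # 1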
Keywords: counting properties; lower bounds; ambiguity; circuits; UP; NP; nondeterministic computation; Rice's Theorem; computation paths.
Rice's Theorem states that all nontrivial language properties of recursively enumerable sets are undecidable. Borchert and Stephan started the search for complexity-theoretic analogs of Rice's Theorem, and proved that every nontrivial counting property of boolean circuits is UP-hard. Hemaspaandra and Rothe improved the UP-hardness lower bound to UP_{O(1)}-hardness. The present paper raises the lower bound for nontrivial counting properties from UP_{O(1)}-hardness to FewP-hardness, i.e., from constant-ambiguity nondeterminism to polynomial-ambiguity nondeterminism. Furthermore, we prove that no relativizable technique can raise this lower bound to FewP-1-truth-table-hardness. We also prove a Rice-style theorem for NP, namely that every nontrivial language property of NP sets is NP-hard.
Keywords: estimation; joint probability; support count, minAB; prodAB; data mining.
Estimating joint probabilities plays an important role in many data mining and machine learning tasks. In this paper we introduce two methods, minAB and prodAB, to estimate joint probabilities. Both methods are based on a light-weight structure, partition support. The core idea is to maintain the partition support of itemsets over logically disjoint partitions and then use it to estimate joint probabilities of itemsets of higher cardinalities. We present extensive mathematical analyses of both methods and compare their performance on synthetic datasets. We also demonstrate a case study of using the estimation methods in the Apriori algorithm for fast association mining. Moreover, we explore the usefulness of the estimation methods in other mining/learning tasks. Experimental results show the effectiveness of the estimation methods.
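The flavor of such estimates can be sketched as follows; note that this is our guess based only on the method names and the partition-support idea, and the paper's actual formulas may differ:

    def min_ab(a, b):
        # minAB-style estimate: within each partition the joint count is
        # at most the smaller marginal count; sum that bound over partitions.
        return sum(min(ai, bi) for ai, bi in zip(a, b))

    def prod_ab(a, b, sizes):
        # prodAB-style estimate: assume independence within each partition.
        return sum(ai * bi / n for ai, bi, n in zip(a, b, sizes))

    # a[i], b[i]: support counts of itemsets A and B in partition i;
    # sizes[i]: number of transactions in partition i.
    print(min_ab([3, 5], [4, 2]))             # 5
    print(prod_ab([3, 5], [4, 2], [10, 10]))  # 2.2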
Keywords: one-way functions; one-to-one functions; complexity-theoretic cryptography; permutations; self-witnessing languages.
A desirable property of one-way functions is that they be total, one-to-one, and onto---in other words, that they be permutations. We prove that one-way permutations exist exactly if P does not equal the intersection of UP and coUP. This provides the first characterization of the existence of one-way permutations based on a complexity-class separation and shows that their existence is equivalent to a number of previously studied complexity-theoretic hypotheses. We also study permutations in the context of witness functions of nondeterministic Turing machines. A language is in PermUP if, relative to some unambiguous, nondeterministic, polynomial-time Turing machine accepting the language, the function mapping each string to its unique witness is a permutation of the members of the language. We show that under standard complexity-theoretic assumptions PermUP is a nontrivial subset of UP.
We study SelfNP, the set of all languages such that, relative to some nondeterministic, polynomial-time Turing machine that accepts the language, the set of all witnesses of strings in the language is identical to the language itself. We show that SAT is a member of SelfNP and that, under standard complexity-theoretic assumptions, SelfNP is not equal to NP.
Keywords: structural complexity; competing provers; nonuniform complexity; symmetric alternation; Karp-Lipton Theorem; Yap's Theorem; Kaemper-AFK Theorem; lowness.
Via competing provers, we show that if a language A is self-reducible and has polynomial-size circuits then S2(A)=S2. Building on this, we strengthen the Kaemper-AFK Theorem, namely, we prove that if NP subseteq (NP intersect coNP)/poly then the polynomial hierarchy collapses to S2(NP intersect coNP). We also strengthen Yap's Theorem, namely, we prove that if NP subseteq coNP/poly then the polynomial hierarchy collapses to S2(NP). Under the same assumptions, the best previously known collapses were to ZPP(NP) and ZPP(NP(NP)) respectively ([KW98,BCK+94], building on [KL80,AFK89,Kaem91,Yap83]). It is known that S2 subseteq ZPP(NP) [Cai01]. That result and its relativized version show that our new collapses indeed improve the previously known results. Since the Kaemper-AFK Theorem and Yap's Theorem are used in the literature as bridges in a variety of results---ranging from the study of unique solutions to issues of approximation---our results implicitly strengthen all those results.
Keywords: Rice's Theorem; counting properties; boolean circuits; ambiguity-bounded computation; computational complexity.
Rice's Theorem states that all nontrivial language properties of recursively enumerable sets are undecidable. Borchert and Stephan [BS00] started the search for complexity-theoretic analogs of Rice's Theorem, and proved that every nontrivial counting property of boolean circuits is UP-hard. Hemaspaandra and Rothe [HR00] improved the UP-hardness lower bound to UP_{O(1)}-hardness. The present paper raises the lower bound for nontrivial counting properties from UP_{O(1)}-hardness to FewP-hardness, i.e., from constant-ambiguity nondeterminism to polynomial-ambiguity nondeterminism. We also prove a Rice-style theorem for NP, namely that every nontrivial language property of NP sets is NP-hard, and we prove that every P-constructibly semi-switching counting property of circuits is PP-hard.
Keywords: quantum computing; computational complexity; almost-everywhere superiority.
We prove that, relative to some black box, there are languages for which polynomial-time quantum machines are exponentially faster than each classical machine almost everywhere.
Keywords: no-search easy-hard technique; downward-collapse; computational complexity.
The top part of the preceding figure [figure appears in actual paper] shows some classes from the (truth-table) bounded-query and boolean hierarchies. It is well-known that if either of these hierarchies collapses at a given level, then all higher levels of that hierarchy collapse to that same level. This is a standard ``upward translation of equality'' that has been known for over a decade. The issue of whether these hierarchies can translate equality {\em downwards\/} has proven vastly more challenging. In particular, with regard to the figure above, consider the following claim: $$P_{m-tt}^{\Sigma_k^p} = P_{m+1-tt}^{\Sigma_k^p} \implies DIFF_m(\Sigma_k^p) = coDIFF_m(\Sigma_k^p) = BH(\Sigma_k^p).~~~~(*)$$ This claim, if true, says that equality translates downwards between levels of the bounded-query hierarchy and the boolean hierarchy levels that (before the fact) are immediately below them. Until recently, it was not known whether (*) {\em ever\/} held, except for the degenerate cases $m=0$ and $k=0$. Then Hemaspaandra, Hemaspaandra, and Hempel~\cite{hem-hem-hem:j:downward-translation} proved that (*) holds for all $m$, for $k > 2$. Buhrman and Fortnow~\cite{buh-for:j:two-queries} then showed that, when $k=2$, (*) holds for the case $m = 1$. In this paper, we prove that for the case $k=2$, (*) holds for all values of $m$. Since there is an oracle relative to which ``for $k=1$, (*) holds for all $m$'' fails~\cite{buh-for:j:two-queries}, our achievement of the $k=2$ case cannot be strengthened to $k=1$ by any relativizable proof technique. The new downward translation we obtain also tightens the collapse in the polynomial hierarchy implied by a collapse in the bounded-query hierarchy of the second level of the polynomial hierarchy.
Keywords: P-selectivity; selector functions; advice complexity; nonuniform complexity; semi-feasible computation; algebraic properties; associativity; commutativity; immunity; printability; tournaments.
Karp and Lipton, in their seminal 1980 paper, introduced the notion of advice (nonuniform) complexity, which since has been of central importance in complexity theory. Nonetheless, much remains unknown about the optimal advice complexity of classes having polynomial advice complexity. In particular, let P-sel denote the class of all P-selective sets [Selman 1979]. For the nondeterministic advice complexity of P-sel, linear upper and lower bounds are known [Hemaspaandra and Torenvliet 1996]. However, for the deterministic advice complexity of P-sel, the best known upper bound is quadratic [Ko 1983], and the best known lower bound is the linear lower bound inherited from the nondeterministic case. This paper establishes an algebraic sufficient condition for P-sel to have a linear upper bound: If all P-selective sets are associatively P-selective then the deterministic advice complexity of P-sel is linear. (The weakest previously known sufficient condition was P=NP.)
Relatedly, we prove that every associatively P-selective set is commutatively, associatively P-selective.
Keywords: computational complexity; easiness bands; emptiness testing; exponential gaps; NP-hardness; P-immunity; positive reductions; printability; self-reducibility.
No P-immune set having exponential gaps is positive-Turing self-reducible.
Keywords: computational complexity; sparse complete sets; polynomial-time reductions.
This paper discusses advances, due to the work of Cai, Naik, and Sivakumar, and of Glasser, in the complexity class collapses that follow if NP has sparse hard sets under reductions weaker than (full) truth-table reductions.
Keywords: space-bounded computation; probabilistic computation; probabilistic plus nondeterministic computation; parallel computation; Turing machines; finite-state automata; multihead finite-state automata; auxiliary pushdown automata; residue number system; matrix inversion; derandomization; class hierarchy; head hierarchy; logspace reductions; probabilistic automata; stochastic languages; Markov chains; Arthur-Merlin games; games against nature.
We present hierarchical characterizations and reductions of classes of languages recognized by logarithmic-space (logspace, for short) probabilistic Turing machines, and by Arthur-Merlin games and games against Nature, both with logspace probabilistic verifiers. We decompose each logspace complexity class into a hierarchy based on the corresponding multihead two-way finite automata, and we prove that most of these hierarchies are strict, even with respect to languages over a single-letter alphabet. We also obtain efficient reductions of our logspace complexity classes to low levels in the corresponding hierarchies. Another focus is on space-efficient deterministic simulation of space-bounded Turing machines with probabilistic and mixed (i.e., probabilistic and nondeterministic) transitions. We present new results for two classes of machines, defined respectively in terms of logarithmic and sublogarithmic space bounds:
1) For logarithmic bounds, we obtain several new complete problems. These include variants of Savitch's maze-threading problem that are solvable by surprisingly simple devices. In particular, it follows that matrix inversion problems, which seem computationally hard, are efficiently reducible to languages recognized by one-way probabilistic devices that seem quite weak.
2) For sublogarithmic bounds, we find deterministic simulations significantly more space-efficient than the previously known simulations. In particular, for one-head probabilistic finite automata, we obtain an optimal, logspace deterministic simulation. Since our simulations are presented in the more general setting of Markov chains, they may have other applications as well. For use in the simulations, we develop space-efficient (and parallel-time-efficient) deterministic techniques for working with succinct residue representations of large natural numbers.
We extend our study to pushdown automata and auxiliary-pushdown automata with probabilistic and mixed transitions. We give characterizations in terms of well-known complexity classes for the classes of languages recognized by these automata. It follows that the differences between classes of languages such as P and PSPACE, NL and SAC^1, and PL and Diff_>(#SAC^1) all derive from the difference between using one symbol and using two symbols on a pushdown store, in certain settings.
Finally, we define and investigate probabilistic automata with "logspace-constructible" transition probabilities.
Keywords: cryptography; one-way functions; worst-case cryptocomplexity.
Rabi, Rivest, and Sherman alter the standard notion of noninvertibility to a new notion they call strong noninvertibility, and show---via explicit cryptographic protocols for secret-key agreement ([Rabi and Sherman 1993; Rabi and Sherman 1997] attribute this to Rivest and Sherman) and digital signatures [Rabi and Sherman 1993; Rabi and Sherman 1997]---that strongly noninvertible functions would be very useful components in protocol design. Their definition of strong noninvertibility has a small twist (``respecting the argument given'') that is needed to ensure cryptographic usefulness. In this paper, we show that this small twist has a large, unexpected consequence: Unless P=NP, some strongly noninvertible functions are invertible.
Keywords: parallel access; polynomial ambiguity; computational complexity; parallel census technique; USAT; FewP; nondeterministic computation; exponentially length-decreasing reductions.
We discuss the history and uses of the parallel census technique---an elegant tool in the study of certain computational objects having polynomially bounded census functions. A sequel will discuss advances (including [Cai, Naik, and Sivakumar, 1995] and [Glasser, 2000]), some related to the parallel census technique and some due to other approaches, in the complexity-class collapses that follow if NP has sparse hard sets under reductions weaker than (full) truth-table reductions.
Keywords: associativity; computational complexity; cryptocomplexity; cryptography; ambiguity; algebraic cryptography; one-way functions.
Rabi and Sherman [1997] present a cryptographic paradigm based on associative, one-way functions that are strong (i.e., hard to invert even if one of their arguments is given) and total. Hemaspaandra and Rothe [1999] proved that such powerful one-way functions exist exactly if (standard) one-way functions exist, thus showing that the associative one-way function approach is as plausible as previous approaches. In the present paper, we study the degree of ambiguity of one-way functions. Rabi and Sherman showed that no associative one-way function (over a universe having at least two elements) can be unambiguous (i.e., one-to-one). Nonetheless, we prove that if standard, unambiguous, one-way functions exist, then there exist strong, total, associative, one-way functions that are \mathcal{O}(n)-to-one. This puts a reasonable upper bound on the ambiguity. Our other main results are: (1) P \neq FewP if and only if there exists an (n^{\mathcal{O}(1)})-to-one, strong, total AOWF. (2) No \mathcal{O}(1)-to-one total, associative functions exist in \Sigma^* \times \Sigma^* \rightarrow \Sigma^*. (3) For every nondecreasing, unbounded, total, recursive function g : \mathbb{N} \rightarrow \mathbb{N}, there is a g(n)-to-one, total, commutative, associative, recursive function in \Sigma^* \times \Sigma^* \rightarrow \Sigma^*.
(Superseded by TR 746)
Keywords: solution reduction; solution-pruning algorithms; cardinality types; function refinement; lowness; semi-feasible computation; selectivity theory; computational complexity.
We study whether one can prune solutions from NP functions. Though it is known that, unless surprising complexity class collapses occur, one cannot reduce the number of accepting paths of NP machines [Ogihara and Hemachandra 1993], we nonetheless show that it often is possible to reduce the number of solutions of NP functions. For finite cardinality types, we give a sufficient condition for such solution reduction. We also give absolute and conditional necessary conditions for solution reduction, and in particular we show that in many cases solution reduction is impossible unless the polynomial hierarchy collapses.
Keywords: one-way functions; cryptography; associativity of one-way functions; commutativity of one-way functions; strongly noninvertible functions.
We survey recent developments in the study of (worst-case) one-way functions having strong algebraic and security properties. According to [Rabi and Sherman, 1993], this line of research was initiated in 1984 by Rivest and Sherman who designed two-party secret-key agreement protocols that use strongly noninvertible, total, associative one-way functions as their key building blocks. If commutativity is added as an ingredient, these protocols can be used by more than two parties, as noted by Rabi and Sherman [1993] who also developed digital signature protocols that are based on such enhanced one-way functions. Until recently, it was an open question whether one-way functions having the algebraic and security properties that these protocols require could be created from any given one-way function. Recently, Hemaspaandra and Rothe [1999] resolved this open issue in the affirmative, by showing that one-way functions exist if and only if strong, total, commutative, associative one-way functions exist. We discuss this result, and the work of Rabi, Rivest, and Sherman, and recent work of Homan [1999] that makes progress on related issues.
Keywords: quantum computing; lower bounds; almost-everywhere hardness; computational complexity.
Simon [Sim97], as extended by Brassard and H{\o}yer [BH97], shows that there are tasks on which polynomial-time quantum machines are exponentially faster than each classical machine infinitely often. The present paper shows that there are tasks on which polynomial-time quantum machines are exponentially faster than each classical machine almost everywhere.
Keywords: computational complexity; space complexity; probabilistic Turing machine; nondeterministic Turing machine; multihead finite automaton.
Nondeterministic Turing acceptors can be viewed as probabilistic acceptors with errors that are one-sided but not significantly bounded. In his seminal work on resource-bounded probabilistic Turing machines, Gill showed how to transform such a machine to one that does have a good error bound, at only modest cost in space usage. We describe a simpler transformation that incurs absolutely no cost in space usage. If we change each probabilistic transition to a nondeterministic one, then we get a space-preserving transformation in the opposite direction as well, assuming the definitions are right. Thus the complexity classes are exactly the same, and we have the same space-bound hierarchy for bounded-one-sided-error probabilistic computation that we have for nondeterministic computation. Similarly, we have the same number-of-heads complexity classes and hierarchy for bounded-one-sided-error probabilistic multihead finite automata as for nondeterministic ones.
Keywords: computational complexity theory; graphs of functions; parallel versus sequential access; polynomial-time reductions.
We provide optimal inclusions and separations between parallel and sequential self-checking, i.e., regarding the parallel and sequential reduction relationships between functions and their graphs. In particular, we show that there are functions for which parallel self-checking is exponentially more expensive than sequential self-checking. Prior to this work, it had not been established that parallel self-checking ever needed to be even one query more expensive than sequential self-checking.
Keywords: computational complexity; cryptography; security of secret-key agreement and digital signature protocols; complexity-theoretic one-way functions; associativity.
Rabi and Sherman (1997) presented novel digital signature and unauthenticated secret-key agreement protocols, developed by themselves and by Rivest and Sherman. These protocols use "strong," total, commutative (in the case of multi-party secret-key agreement), associative one-way functions as their key building blocks. Though Rabi and Sherman did prove that associative one-way functions exist if P \neq NP, they left as an open question whether any natural complexity-theoretic assumption is sufficient to ensure the existence of "strong," total, commutative, associative one-way functions. In this paper, we prove that if P \neq NP then "strong," total, commutative, associative one-way functions exist.
Keywords: computational complexity; enumerative counting; census functions; tally NP sets.
We study the question of whether every P set has an easy (i.e., polynomial-time computable) census function. We characterize this question in terms of unlikely collapses of language and function classes such as the containment of #P_1 in FP, where #P_1 is the class of functions that count the witnesses for tally NP sets. We prove that every #P_{1}^{PH} function can be computed in FP^{#P_{1}^{#P_{1}}}. Consequently, every P set has an easy census function if and only if every set in the polynomial hierarchy does. We show that the assumption that #P_1 is contained in FP implies P = BPP and that PH is contained in MOD_{k}P for each k \geq 2, which provides further evidence that not all sets in P have an easy census function. We also relate a set's property of having an easy census function to other well-studied properties of sets, such as rankability and scalability (the closure of the rankable sets under P-isomorphisms). Finally, we prove that it is no more likely that the census function of any set in P can be approximated (more precisely, can be n^{\alpha}-enumerated in time n^{\beta} for fixed \alpha and \beta) than that it can be precisely computed in polynomial time.
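For concreteness, the census function of a set A maps each length n to the number of length-n strings in A; the following brute-force sketch (ours; exponential in n, whereas the paper asks when time polynomial in n suffices) makes the definition explicit:

    from itertools import product

    def census(in_A, n, alphabet="01"):
        # Number of strings of length n in A, where in_A is A's
        # membership test, by enumerating all |alphabet|^n strings.
        return sum(1 for s in product(alphabet, repeat=n)
                   if in_A("".join(s)))

    # Strings with an even number of 1s: census is 2^(n-1) for n >= 1.
    print(census(lambda s: s.count("1") % 2 == 0, 4))  # 8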
Keywords: downward translation of equality; polynomial hierarchy; boolean hierarchy; easy-hard technique; downward collapse; computational complexity.
During the past decade, nine papers have obtained increasingly strong consequences from the assumption that boolean or bounded-query hierarchies collapse. The final four papers of this nine-paper progression actually achieve downward collapse---that is, they show that high-level collapses induce collapses at (what beforehand were thought to be) lower complexity levels. For example, for each $k \geq 2$ it is now known that if $P^{\Sigma^p_k[1]} = P^{\Sigma^p_k[2]}$ then PH = $\Sigma^p_k$. This article surveys the history, the results, and the technique---the so-called easy-hard method---of these nine papers.
J. Kadin. The polynomial time hierarchy collapses if the boolean hierarchy collapses. SIAM Journal on Computing, 17(6):1263-1282, 1988. Erratum appears in the same journal, 20(2):404.
K. Wagner. Number-of-query hierarchies. Technical Report 158, Univ. Augsburg, Inst. Mathematik, Augsburg, Germany, October 1987.
K. Wagner. Number-of-query hierarchies. Technical Report 4, Univ. Wurzburg, Inst. Informatik, Wurzburg, Germany, February 1989.
R. Chang and J. Kadin. The boolean hierarchy and the polynomial hierarchy: A closer connection. SIAM Journal on Computing, 25(2):340-354, 1996.
R. Beigel, R. Chang, and M. Ogiwara. A relationship between difference hierarchies and relativized polynomial hierarchies. Mathematical Systems Theory, 26(3):293-310, 1993.
E. Hemaspaandra, L. Hemaspaandra, and H. Hempel. An upward separation in the polynomial hierarchy. Technical Report Math/Inf/96/15, Friedrich-Schiller-Univ. Jena, Fak. Mathematik und Informatik, Jena, Germany, June 1996.
E. Hemaspaandra, L. Hemaspaandra, and H. Hempel. A downward collapse within the polynomial hierarchy. SIAM Journal on Computing. To appear.
H. Buhrman and L. Fortnow. Two queries. Proc., 13th Annual IEEE Conf. on Computational Complexity. To appear.
E. Hemaspaandra, L. Hemaspaandra, and H. Hempel. Translating equality downwards. Technical Report 657, University of Rochester, Department of Computer Science, Rochester, NY, April 1997.
Keywords: easy-hard technique; polynomial hierarchy; translations of equality; computational complexity; downward collapse.
Hemaspaandra et al. (1997) proved that, for m > 0 and 0 < i < k - 1: if $\Sigma^p_i \Delta DIFF_m(\Sigma^p_k)$ is closed under complementation, then $DIFF_m(\Sigma^p_k) = coDIFF_m(\Sigma^p_k)$. This sharply asymmetric result fails to apply to the case in which the hypothesis is weakened by allowing $\Sigma^p_i$ to be replaced by any class in its difference hierarchy. We extend the result by proving that, for s,m > 0 and 0 < i < k - 1: if $DIFF_s(\Sigma^p_i) \Delta DIFF_m(\Sigma^p_k)$ is closed under complementation, then $DIFF_m(\Sigma^p_k) = coDIFF_m(\Sigma^p_k)$.
Keywords: computational complexity; immunity; relativized computation; circuit lower bounds; counting classes.
Ko and Bruschi showed that in some relativized world, PSPACE (in fact, ParityP) contains a set that is immune to the polynomial hierarchy (PH). In this paper, we study and settle the question of (relativized) separations with immunity for PH and the counting classes PP, C_=P, and ParityP in all possible pairwise combinations. Our main result is that there is an oracle A relative to which C_=P contains a set that is immune to BPP^{ParityP}. In particular, this C_=P^A set is immune to PH^A and ParityP^A. Strengthening results of Tor\'{a}n and Green, we also show that, in suitable relativizations, NP contains a C_=P-immune set, and ParityP contains a PP^{PH}-immune set. This implies the existence of a C_=P^B-simple set for some oracle B, which extends results of Balc\'{a}zar et al. and provides the first example of a simple set in a class not known to be contained in PH. Our proof technique requires a circuit lower bound for "exact counting" that is derived from Razborov's lower bound for majority.
Keywords: DNA computation; Boolean circuits.
This paper studies what is seemingly the smallest DNA computational model. This model assumes as its computation basis merge, detect, synthesize, anneal, and length-specific separation, but does not assume sequence-specific separation as in many other DNA computational models. Uncertainty occurring in some of the operations is taken into consideration, and the decisions made by computation under the model are defined in terms of robustness. This paper shows tight upper bounds on the power of this computational model in terms of circuits. For every $k \geq 1$, the languages robustly accepted by programs under this model in $O(\log^k n)$ steps using polynomially many DNA molecules reside between $NC^{k}$ and $SAC^{k+1}$.
Keywords: satisfiability threshold; Horn satisfiability; positive unit resolution.
This paper studies the phase transition in random Horn satisfiability: under the random model $\Omega(n,m)$, in which a formula is obtained by choosing $m$ clauses independently, uniformly at random, and with repetition from all Horn clauses in $n$ variables, $\theta(n)=2^{n}$ is the satisfiability threshold for Horn satisfiability. The threshold is coarse since, if $\mu(n) = c\cdot 2^{n}$, then \[ \lim_{n\to \infty} \Pr_{\Phi \in \Omega(n,\mu(n))}\, [\mbox{$\Phi$ is satisfiable}] = 1-F(e^{-c}), \] where $F(x)=(1-x)(1-x^2)(1-x^4)(1-x^8)\cdots$. This resolves both of the two remaining cases of the problem of analyzing phase transitions of the six maximally tractable cases in Schaefer's Dichotomy Theorem.
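The limiting probability is easy to evaluate numerically; a minimal sketch of ours, truncating the infinite product (which converges rapidly for 0 < x < 1):

    from math import exp

    def F(x, terms=64):
        # F(x) = (1 - x)(1 - x^2)(1 - x^4)(1 - x^8)..., truncated.
        p, power = 1.0, x
        for _ in range(terms):
            p *= 1.0 - power
            power *= power        # exponents double: x, x^2, x^4, ...
        return p

    def sat_probability(c):
        # Limiting probability that a formula with m = c * 2^n clauses
        # is satisfiable: 1 - F(e^{-c}).
        return 1.0 - F(exp(-c))

    print(sat_probability(1.0))   # probability at clause density c = 1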
Keywords: selectivity theory; tournaments; computational complexity theory; P-selective sets; linear advice classes; limited nondeterminism.
Hemaspaandra and Torenvliet showed that each P-selective set can be accepted by a polynomial-time nondeterministic machine using linear advice and quasilinear nondeterminism. We extend this by showing that each P-selective set can be accepted by a polynomial-time nondeterministic machine using linear advice and linear nondeterminism.
Keywords: robust reductions; overproductive reductions; underproductive reductions; strong nondeterministic reductions; sparse complete sets; Karp-Lipton Theorem; computational complexity theory.
We continue the study of robust reductions initiated by Gavald\`{a} and Balc\'{a}zar. In particular, a 1991 paper of Gavald\`{a} and Balc\'{a}zar claimed an optimal separation between the power of robust and nondeterministic strong reductions. Unfortunately, their proof is invalid. We re-establish their theorem. Generalizing robust reductions, we note that robustly strong reductions are built from two restrictions, robust underproductivity and robust overproductivity, both of which have been separately studied before in other contexts. By systematically analyzing the power of these reductions, we explore the extent to which each restriction weakens the power of reductions. We show that one of these reductions yields a new, strong form of the Karp-Lipton Theorem.
Keywords: query order; boolean hierarchy; polynomial hierarchy; translations of equality.
Hemaspaandra, Hempel, and Wechsung raised the following questions: If one is allowed one question to each of two different information sources, does the order in which one asks the questions affect the class of problems that one can solve with the given access? If so, which order yields the greater computational power? The answers to these questions have been learned---insofar as they can be learned without resolving whether or not the polynomial hierarchy collapses---for both the polynomial hierarchy and the boolean hierarchy. In the polynomial hierarchy, query order never matters. In the boolean hierarchy, query order sometimes does not matter and, unless the polynomial hierarchy collapses, sometimes does matter. Furthermore, the study of query order has yielded dividends in seemingly unrelated areas, such as bottleneck computations and downward translation of equality. In this article, we present some of the central results on query order. The article is written in such a way as to encourage the reader to try his or her own hand at proving some of these results. We also give literature pointers to the quickly growing set of related results and applications.
Keywords: Rice's Theorem; circuits; counting classes; computational complexity theory.
Rice's theorem states that every nontrivial language property of the recursively enumerable sets is undecidable. Borchert and Stephan initiated a search for complexity-theoretic analogs of Rice's Theorem. In particular, they proved that every nontrivial counting property of circuits is UP-hard. We extend their result by proving that every nontrivial counting property of circuits is UP_{O(1)}-hard; that is, we raise the lower bound from unambiguous nondeterminism to constant-ambiguity nondeterminism. We show that this conclusion cannot be strengthened to SPP-hardness unless unlikely complexity class containments hold. Nonetheless, we prove that every P-constructibly bi-infinite counting property of circuits is SPP-hard.
A lot of current research in DNA computing has been directed towards solving difficult combinatorial search problems. However, for DNA computing to be applicable to a wider range of problems, support for basic computational operations is necessary: logic operations such as AND, OR, and NOT, and arithmetic operations such as addition and subtraction. Unlike search problems, which can be solved by generating all possible combinations and extracting the correct output, these operations mandate that a unique output be generated by specific inputs. The question of the suitability of DNA for such simple operations has so far largely been unaddressed. In this paper we describe a novel method for using DNA molecules to perform the basic arithmetic and logic operations. We also show that multiple rounds of operations can be performed in a single test tube, utilizing the output of one operation as an input for the next. Furthermore, the operations can be performed in a linear series or a series-parallel fashion, and operators can be mixed to form any operation sequence.
Keywords: DNA computing; SAT; counting.
The potential of DNA as a truly parallel computing device is enormous. Solution-phase DNA chemistry, though not unlimited, provides the only currently available experimental system. Its practical feasibility, however, is controversial. We have sought to extend the feasibility and generality of DNA computing by a novel application of the theory of counting. The biochemically equivalent operation for DNA counting is well known. We propose a DNA algorithm that employs this new operation. We also present an implementation of this algorithm by a novel DNA-chemical method. Preliminary computer simulations suggest that the algorithm can significantly reduce the DNA space complexity (i.e., the maximum number of DNA molecules that must be present in the test tube during computation) for solving 3SAT to $O(2^{0.4n})$. If the observation is correct, our algorithm can solve 3SAT instances of size up to or exceeding 120 variables.
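For scale, under the rough assumption that a test tube can hold at most on the order of Avogadro's number (about $6 \times 10^{23} \approx 2^{79}$) of DNA strands: at $n = 120$ the bound gives $2^{0.4 \cdot 120} = 2^{48} \approx 2.8 \times 10^{14}$ molecules, comfortably below that limit, whereas a brute-force approach generating all $2^{120} \approx 1.3 \times 10^{36}$ assignments would exceed it by many orders of magnitude.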
Keywords: lower bounds; parallel NP; Lewis Carroll elections; greedy algorithms; minimum equivalent expression; computational complexity.
A decade ago, a beautiful paper by Wagner developed a ``toolkit'' that in certain cases allows one to prove problems hard for parallel access to NP. However, the problems his toolkit applies to most directly are not overly natural. During the past year, problems that previously were known only to be NP-hard or coNP-hard have been shown to be hard even for the class of sets solvable via parallel access to NP. Many of these problems are longstanding and extremely natural, such as the Minimum Equivalent Expression problem (which was the original motivation for creating the polynomial hierarchy), the problem of determining the winner in the election system introduced by Lewis Carroll in 1876, and the problem of determining on which inputs heuristic algorithms perform well. In the present article, we survey this recent progress in raising lower bounds.
Keywords: downward collapse; upward separation; easy-hard technique; polynomial hierarchy; boolean hierarchy; computational complexity.
Downward translation of equality refers to cases where a collapse of some pair of complexity classes would induce a collapse of some other pair of complexity classes that (a priori) one expects are smaller. Recently, the first downward translation of equality was obtained that applied to the polynomial hierarchy---in particular, to bounded access to its levels. In this paper, we provide a much broader downward translation that subsumes not only that downward translation but also that translation's elegant enhancement by Buhrman and Fortnow. Our work also sheds light on previous research on the structure of refined polynomial hierarchies.
Keywords: constraint satisfaction; counting; structure identification; approximation.
Using a model inspired by Schaefer's work on generalized satisfiability, Cooper, Cohen, and Jeavons studied the complexity of binary constraint satisfaction problems in the special case when the set of constraints is closed under permutation of labels and domain restriction, and precisely identified the tractable (and intractable) cases. Using the same model, we characterize the complexity of three related problems: (1) counting the number of solutions; (2) structure identification (Dechter and Pearl (1992)); (3) approximating the maximum number of satisfiable constraints.
Keywords: computational complexity; counting; ordered computation; SelfPath; SelfOutput; NP; translations of equality.
We study the computational power of machines that specify their own acceptance types, and show that they accept exactly the languages in R^(#P)_m(NP). A natural variant accepts exactly the languages in R^(#P)_m(P). We show that these two classes coincide if and only if P^(#P[1]) = P^(#P[1]:NP[\cal O (1)]), where the latter class denotes the sets acceptable via at most one question to #P followed by at most a constant number of questions to NP.
Keywords: Schaefer's dichotomy theorem.
Schaefer (1978) introduced a generalized satisfiability problem SAT(S) and showed that, depending on the nature of the relations in S, SAT(S) is either in P or NP-complete. A similar result holds for generalized satisfiability with constants, SAT_{C}(S) (the version of the above problem where constants are allowed). We study the possibility of obtaining a version of Schaefer's dichotomy theorem for instances satisfying an additional constraint, namely that each variable appears at most twice. We prove several partial results on the complexity of the versions SAT(S,2) and SAT_{C}(S,2) of the above two problems that take this restriction into account. We obtain a dichotomy theorem for SAT_{C}(S,2) in the case when all relations in S are symmetric.
Keywords: DNA computing; arithmetic operations; boolean operations; truth tables; DNA hybridization.
The use of DNA molecules to solve hard computational problems has been demonstrated in recent studies. However, the question of suitability of DNA for solving simple computer operations, such as boolean or arithmetic operations, has largely been unaddressed. Incorporation of these operations in DNA computing is essential for solving a wide range of applications. We present a fixed bit encoding scheme, modeling the input/output mechanisms of an electronic computer, and show how a sequence of such operations can be executed in a single test tube producing a unique result.
Keywords: two-head tape; multihead tape; buffer; queue; heads vs. tapes; multitape Turing machine; real-time simulation; on-line simulation; lower bound; Kolmogorov complexity; overlap.
We show that a Turing machine with two single-head one-dimensional tapes cannot recognize the set {x2x' \mid x \in \{0,1\}^* and x' is a prefix of x} in real time, although it can do so with three tapes, two two-dimensional tapes, or one two-head tape, or in linear time with just one tape. In particular, this settles the longstanding conjecture that a two-head Turing machine can recognize more languages in real time if its heads are on the same one-dimensional tape than if they are on separate one-dimensional tapes.
Keywords: resource-bounded measure; autoreducibility.
We prove that the following classes have resource-bounded measure zero: (1) the class of self-reducible sets; (2) the class of committable sets; (3) the class of sets that are non-adaptively autoreducible with a linear number of queries; (4) the class of disjunctively autoreducible sets.
Keywords: positive political science; complexity-theoretic political science; voting systems; election schemes; majority rule; Condorcet winners; parallel access to NP.
In 1876, Lewis Carroll proposed a voting system in which the winner is the candidate who with the fewest changes in voters' preferences becomes a Condorcet winner---a candidate who beats all other candidates in pairwise majority-rule elections. Bartholdi, Tovey, and Trick provided a lower bound---NP-hardness---on the computational complexity of determining the election winner in Carroll's system. We provide a stronger lower bound and an upper bound that matches our lower bound. In particular, determining the winner in Carroll's system is complete for parallel access to NP, i.e., it is complete for \Theta^p_2, for which it becomes the most natural complete problem known. It follows that determining the winner in Carroll's elections is not NP-complete unless the polynomial hierarchy collapses.
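To make Carroll's rule concrete, the following brute-force sketch (ours, not the paper's, and exponential-time, which is consistent with the hardness results above) computes a candidate's score as the minimum number of exchanges of adjacent candidates in voters' rankings after which the candidate beats every rival by a strict majority:

from collections import deque

def beats(c, d, profile):
    # c beats d if a strict majority of voters rank c above d
    return 2 * sum(1 for pref in profile if pref.index(c) < pref.index(d)) > len(profile)

def condorcet_winner(c, candidates, profile):
    return all(beats(c, d, profile) for d in candidates if d != c)

def carroll_score(c, candidates, profile):
    # breadth-first search over profiles reachable by adjacent swaps; the
    # depth at which c first becomes a Condorcet winner is c's score
    start = tuple(tuple(p) for p in profile)
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        prof, swaps = queue.popleft()
        if condorcet_winner(c, candidates, prof):
            return swaps
        for i, pref in enumerate(prof):
            for j in range(len(pref) - 1):
                p = list(pref)
                p[j], p[j + 1] = p[j + 1], p[j]
                nxt = prof[:i] + (tuple(p),) + prof[i + 1:]
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, swaps + 1))

# Condorcet cycle: one adjacent swap in the second ballot makes a the winner.
print(carroll_score("a", ("a", "b", "c"),
                    [("a", "b", "c"), ("b", "c", "a"), ("c", "a", "b")]))  # 1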
Keywords: flexible scheduling; bottleneck computation; computational complexity; polynomial hierarchy; exact counting.
Cai and Furst proved that every PSPACE language can be solved via a large number of identical, simple tasks, each of which is provided with the original input, its own unique task number, and at most three bits of output from the previous task. In the Cai-Furst model, the tasks are required to be run in the order specified by the task numbers. To study the extent to which the Cai-Furst PSPACE result is due to this strict scheduling, we remove their ordering restriction, allowing tasks to execute in any serial order. That is, we study the extent to which complex tasks can be decomposed into large numbers of simple tasks that can be scheduled arbitrarily. We provide upper bounds on the complexity of the sets thus accepted. Our bounds suggest that Cai and Furst's surprising PSPACE result is due in large part to the fixed order of their task execution. In fact, our bounds suggest the possibility that even relatively low levels of the polynomial hierarchy cannot be accepted via large numbers of simple tasks that can be scheduled arbitrarily. However, adding randomization recaptures the polynomial hierarchy: the entire polynomial hierarchy can be accepted by large numbers of arbitrarily scheduled probabilistic tasks passing only a single bit of information between successive tasks (and using J. Simon's ``exact counting'' acceptance mechanism). In fact, we show that the class of languages so accepted is exactly NP^{PP}.
Keywords: power indices; Congressional apportionment; simulated annealing; voting; #P; equal proportions.
We measure the performance, in the task of apportioning the Congress of the United States, of an algorithm combining a simulated-annealing-driven search with an exact-computation dynamic programming evaluation of the apportionments visited in the search. We compare this with the actual algorithm currently used in the United States to apportion Congress, and with a number of other algorithms that have been proposed. We conclude that on every set of census data in this country's history, the simulated-annealing approach provably yields far fairer apportionments than those of any other algorithm considered, including the algorithm currently used for Congressional apportionment.
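For reference, a minimal sketch (ours) of the method of equal proportions, the Huntington-Hill scheme referred to above as the algorithm currently in use; the populations below are invented for illustration:

import heapq
from math import sqrt

def equal_proportions(pop, house_size):
    # Every state gets one seat; each remaining seat goes to the state with
    # the highest priority value pop[s] / sqrt(n * (n + 1)), where n is the
    # number of seats that state currently holds.
    seats = {s: 1 for s in pop}
    heap = [(-p / sqrt(2), s) for s, p in pop.items()]
    heapq.heapify(heap)
    for _ in range(house_size - len(pop)):
        _, s = heapq.heappop(heap)
        seats[s] += 1
        n = seats[s]
        heapq.heappush(heap, (-pop[s] / sqrt(n * (n + 1)), s))
    return seats

print(equal_proportions({"A": 500_000, "B": 300_000, "C": 200_000}, 10))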
Keywords: Cook versus Karp; completeness; query order; boolean hierarchy; computational complexity.
Do complexity classes have many-one complete sets if and only if they have Turing-complete sets? We prove that there is a relativized world in which a relatively natural complexity class---namely a downward closure of NP, R^{SN}_{1-tt}(NP)---has Turing-complete sets but has no many-one complete sets. In fact, we show that in the same relativized world this class has 2-truth-table complete sets but lacks 1-truth-table complete sets. As part of the groundwork for our result, we prove that R^{SN}_{1-tt}(NP) has many equivalent forms having to do with ordered and parallel access to NP and NP \cap coNP.
Keywords: database access order; polynomial hierarchy; query order.
We study query order within the polynomial hierarchy. $P^{\cal C : \cal D}$ denotes the class of languages computable by a polynomial-time machine that is allowed one query to $\cal C$ followed by one query to $\cal D$. We prove that the levels of the polynomial hierarchy are order-oblivious: $P^{\sum^p_j:\sum^p_k} = P^{\sum^p_k:\sum^p_j}$. Yet, we also show that these ordered query classes form new levels in the polynomial hierarchy unless the polynomial hierarchy collapses. We prove that all leaf language classes---and thus essentially all standard complexity classes---inherit all order-obliviousness results that hold for P.
Keywords: computational complexity; abstract complexity; speed-up theorem; gap theorem; polynomial-degrees; random oracle; one-way functions; p-selective sets; pseudo-random generator; Kolmogorov complexity; AC^0; NP optimization problems; descriptive complexity; approximation algorithms.
How strong are the results in computational complexity that assert, under certain hypotheses, the existence of an object? Are there many such objects, or are there few? To what extent can we relax the hypotheses and still maintain the same conclusions? These are the types of questions that are studied in this thesis. More precisely, we investigate some of the central existential results in computational complexity from the point of view of size and robustness. Below is a sample of the results in the thesis. We show that for any effective enumeration of computational devices that cover the whole set of computable functions and for any complexity measure satisfying a single axiom, neither the set of speedable functions nor the set of functions that generate complexity gaps is small from a topological point of view. We show that, with probability one on the set of oracles, there is a set in NP^A that asymptotically splits in half any infinite set in P^A. This is the strongest currently known relativized separation of NP from P. We also show that most (in the resource-bounded measure sense) sets that are computable in exponential time do not have even very weak membership-related properties that are computable in polynomial time. We prove that in almost all relativized worlds, there are very strong hard functions and pseudo-random generators. This result is quite relevant in cryptography: it displays an efficient method that takes as input an exponentially long public random string and a polynomially long private random string and outputs an exponentially long public string that can be used as a private key because it looks truly random to any adversary circuit of exponential size. We show that all NP optimization problems admit a normal-form characterization involving the language of first-order logic and a unique system of weights. Various restrictions of the syntax generate classes containing important natural problems. Some such restrictions dictate good approximation properties for the corresponding classes in the case when positive weights are used. When negative weights are also allowed, the good approximation properties are not preserved.
Keywords: Boolean circuits, NC, SAC, DNA computing.
We demonstrate that DNA computers can simulate Boolean circuits with a small overhead. Boolean circuits embody the notion of massively parallel signal processing and are frequently encountered in many parallel algorithms. Many important problems such as sorting, integer arithmetic, and matrix multiplication are known to be computable by small size Boolean circuits much faster than by ordinary sequential digital computers. This paper shows that DNA chemistry allows one to simulate large semi-unbounded fan-in Boolean circuits with a logarithmic slowdown in computation time. Also, for the class NC$^1$, the slowdown can be reduced to a constant. In this algorithm we have encoded the inputs, the Boolean AND gates, and the OR gates to DNA oligonucleotide sequences. We operate on the gates and the inputs by standard molecular techniques of sequence-specific annealing, ligation, separation by size, limited amplification, sequence-specific cleavage, and detection by size. Preliminary biochemical experiments on a small test circuit have produced encouraging results. Further confirmatory experiments are in progress.
Keywords: downward translation; computational complexity theory; boolean hierarchy.
Downward collapse (a.k.a. upward separation) refers to cases where the equality of two larger classes implies the equality of two smaller classes. We provide an unqualified downward collapse result completely within the polynomial hierarchy. In particular, we prove that, for k > 2, if P^{\sum^p_k[1]} = P^{\sum^p_k[2]} then \sum^p_k = \prod^p_k = PH. We extend this to obtain a more general downward collapse result.
Keywords: NP-complete problems; DNA computing; SAT.
This paper demonstrates that some practical 3SAT algorithms on conventional computers can be implemented on a DNA computer as a polynomial-time breadth-first search procedure based only on the fundamental chemical operations identified by Adleman and Lipton's method. In particular, the Monien-Speckenmeyer algorithm, when implemented on DNA, becomes an $O(n\cdot\max\{m^2,n\})$ time, $2^{0.6942 n}$ space algorithm, with a significant increase in time and a significant decrease in space. This paper also proposes a fast breadth-first search method with fixed split points. The running time is at most twice that of Lipton's method. Although theoretical analysis of the algorithm is yet to be done, simulations on a conventional computer suggest that the algorithm could significantly reduce the search space for 3SAT in most cases. If the observation is correct, the algorithm would allow DNA computers to handle 3SAT formulas of more than 120 variables, thereby doubling the limit given by Lipton.
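One way to read the constant in the space bound (an observation of ours, not a claim from the paper): $2^{0.6942} \approx 1.618 \approx (1+\sqrt{5})/2 = \phi$, so up to rounding the bound is $\phi^n$, where $\phi$ is the largest root of $x^2 = x + 1$; that equation is the characteristic equation of a branching recurrence of the shape $T(n) = T(n-1) + T(n-2)$, the kind of recurrence that splitting-based SAT algorithms typically satisfy.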
Keywords: 2-dag interchange equivalence; boolean negation equivalence; EP; ES; ordered binary decision diagram negation equivalence.
We study EP, the subclass of NP consisting of those languages accepted by NP machines that when they accept always have a number of accepting paths that is a power of two. We show that the negation equivalence problem for OBDDs (ordered binary decision diagrams) and the interchange equivalence problem for 2-dags are in EP. We also show that for boolean negation the equivalence problem is in EP^{NP}, thus tightening the existing NP^{NP} upper bound. We show that FewP, bounded ambiguity polynomial time, is contained in EP, a result that seems incomparable with the previous SPP upper bound. Finally, we show that EP can be viewed as the promise-class analog of C_=P.
Keywords: disjoint union; information encoding; computational complexity; extended low hierarchy.
We prove that the join of sets may actually be simpler than the sets themselves: There exist sets that are not in the second level of the extended low hierarchy, EL_2, yet their join is in EL_2. That is, in terms of extended lowness, the join operator can lower complexity. We study the closure properties of EL_2 and prove that EL_2 is not closed under certain Boolean operations. To this end, we establish the first known (and optimal) EL_2 lower bounds for certain notions generalizing Selman's P-selectivity, which may be regarded as an interesting result in its own right.
Keywords: local search; optimization; witness-isomorphic reductions.
We study witness-isomorphic reductions, a type of structure-preserving reduction between NP decision problems. We completely determine the relative power of the different models of witness-isomorphic reduction, and we show that witness-isomorphic reductions can be used in a uniform approach to the local search problem.
Keywords: one-way function; pseudo-random generator; hard function.
This paper investigates the extent to which a public source of random bits can be used to obtain private random bits that can be safely used in cryptographic protocols. We consider two cases: (a) the case in which the party privatizing random bits is computationally more powerful than the adversary, and (b) the case in which the party privatizing random bits has a small number of private random bits. The first case corresponds to randomized hard functions and the second corresponds to randomized pseudo-random generators. We show the existence of strong randomized hard functions and pseudo-random generators. The randomized pseudo-random generator takes as input an exponentially long random string from a public source and a polynomially long private random string and outputs an exponentially long string which looks random to any adversary circuit of exponential size. The construction is very efficient and has provable safety. As a side effect, it is shown that relative to a random oracle P/poly is not measurable in $EXP$ in the resource-bounded measure-theoretic sense, and a very strong separation between sublinear time and $AC^0$ is obtained.
Keywords: computational complexity theory; complete sets; polynomial hierarchy; pseudorandom generators; relativization; time hierarchies.
We survey the background and challenges of a number of open problems in the theory of relativization. Among the topics covered are pseudorandom generators, time hierarchies, the potential collapse of the polynomial hierarchy, and the existence of complete sets.
Keywords: ordered computation; self-specifying machines; mind changes; bounded queries; NP; #P; computational complexity theory.
We study the computational power of machines that specify their own acceptance types, and we show that they accept exactly the languages in R_m^{#P}(NP). We study the effect of query order on computational power, and show that P^{BH_j[1]:BH_k[1]}---the languages computable via a polynomial-time machine given one query to the $j$th level of the boolean hierarchy followed by one query to the $k$th level of the boolean hierarchy---equals R^p_{j+2k-1-tt}(NP) if $j$ is even and $k$ is odd, and equals R^p_{j+2k-tt}(NP) otherwise. Thus, unless the polynomial hierarchy collapses, it holds that for each 1 \leq j \leq k: P^{BH_j[1]:BH_k[1]} = P^{BH_k[1]:BH_j[1]} \Longleftrightarrow (j=k) \vee (j is even \wedge k = j+1). We extend our analysis to apply to more general query classes.
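For a concrete instance of the formula: with j = 2 and k = 3 (j even, k odd), the first case gives P^{BH_2[1]:BH_3[1]} = R^p_{2+6-1-tt}(NP) = R^p_{7-tt}(NP), while the reversed order has j = 3 odd and so falls under the second case, giving P^{BH_3[1]:BH_2[1]} = R^p_{3+4-tt}(NP) = R^p_{7-tt}(NP); the two orders coincide, exactly as the displayed equivalence predicts for j even and k = j+1.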
Keywords: probabilistic complexity classes; nondeterministic complexity classes; logspace reducibility.
It is shown that the PL hierarchy PLH = PL \cup PL^{PL} \cup PL^{PL^{PL}} \cup \cdots, defined in terms of the Ruzzo-Simon-Tompa relativization, collapses to PL. Also, it is shown that PL is closed under logspace-uniform AC^0-reductions.
Keywords: pseudo-random generator; hard function; one-way function.
I investigate the extent to which a public source of random bits can be used to obtain some basic cryptographic primitives: hard functions, pseudo-random generators, and one-way functions. Strong randomized hard functions and one-way functions are exhibited. The existence of a randomized pseudo-random generator with analogous safety parameters remains open, but a weaker variant is presented. As a side effect, I show a very strong separation between sublinear time and $AC^0$.
Keywords: computational complexity; certificate complexity.
Can easy sets only have easy certificate schemes? In this paper, we study the class of sets that, for all NP certificate schemes (i.e., NP machines), always have easy acceptance certificates (i.e., accepting paths) that can be computed in polynomial time. We also study the class of sets that, for all NP certificate schemes, infinitely often have easy acceptance certificates. We give structural conditions that control the size of these classes.
Keywords: semi-feasible sets; P-selectivity; ranking; closure properties; NNT.
We study the polynomial-time semi-rankable sets (P-sr), the ranking analog of the P-selective sets. We prove that P-sr is a strict subset of the P-selective sets, and indeed that the two classes differ with respect to closure under complementation, closure under union with P sets, closure under join with P sets, and closure under P-isomorphism. While P/poly is equal to the closure of P-selective sets under polynomial-time Turing reductions, we build a tally set that is not polynomial-time reducible to any P-sr set. We also show that though P-sr falls between the P-rankable and the weakly-P-rankable sets in its inclusiveness, it equals neither of these classes.
Keywords: nondeterministic functions; reducibility; the polynomial hierarchy.
It is shown that, for any constant c < 1, if all NPMV functions have refinements in the class of multivalued functions that are computable in polynomial time with c \log n queries to NPSV, then the polynomial hierarchy collapses to its second level.
Keywords: space complexity; complexity specification; complexity hierarchy; compression theorem; speedup theorem; gap theorem; Turing machine; Fundamental Theorem.
This is a complete exposition of a tight version of a fundamental theorem of computational complexity due to Levin: The inherent space complexity of any partial function is very accurately specifiable in a \Pi_1 way, and every such specification that is even \Sigma_2 does characterize the complexity of some partial function, even one that assumes only the values 0 and 1.
Keywords: the sparse hard set problems; P-complete sets; reducibilities.
In 1978, Hartmanis conjectured that there exist no sparse complete sets for P under logspace many-one reductions. In this paper, in support of the conjecture, it is shown that if P has sparse hard sets under logspace many-one reductions, then P \subseteq DSPACE[log^2 n]. The result is derived from a more general statement that if P has 2^{polylog} sparse hard sets under poly-logarithmic space-computable many-one reductions, then P \subseteq DSPACE[polylog].
Keywords: computational complexity; P-selectivity; closure properties.
We introduce a generalization of Selman's P-selectivity that yields a more flexible notion of selectivity, called (polynomial-time) multi-selectivity, in which the selector is allowed to operate on multiple input strings. Since our introduction of this class, it has been used to prove the first known (and optimal) lower bounds for generalized selectivity-like classes in terms of EL_2, the second level of the extended low hierarchy. We study the resulting selectivity hierarchy, denoted by SH, which we prove does not collapse. In particular, we study the internal structure and the properties of SH and completely establish, in terms of incomparability and strict inclusion, the relations between our generalized selectivity classes and Ogihara's P-mc (polynomial-time membership-comparable) classes. Although SH is a strictly increasing infinite hierarchy, we show that the core results that hold for the P-selective sets and that prove them structurally simple also hold for SH. In particular, all sets in SH have small circuits; the NP sets in SH are in Low_2, the second level of the low hierarchy within NP; and SAT cannot be in SH unless P = NP. Finally, it is known that the P-selective sets are not closed under union or intersection. We provide an extended selectivity hierarchy that is based on SH and that is large enough to capture those closures of the P-selective sets, and yet, in contrast with the P-mc classes, is refined enough to distinguish them.
Keywords: counting classes; #P; computational complexity; unambiguous computation.
We explore the potentially ``off-by-one'' nature of the definitions of counting (#P versus #NP), difference (DP versus DNP), and unambiguous (UP versus UNP; FewP versus FewNP) classes, and make suggestions as to logical approaches in each case. We discuss the strangely differing representations that oracle and predicate models give for counting classes, and we survey the properties of counting classes beyond #P. We ask whether subtracting a #P function from a P function that it never exceeds necessarily yields a #P function.
Keywords: resource-bounded measure; P-selective sets; P-multiselective sets; cheatable sets; easily countable sets; easily approximable sets; near-testable sets; nearly near-testable sets; P-bi-immune sets.
It is shown that the following classes have measure 0 in E: the class of P-selective sets, the class of P-multiselective sets, the class of cheatable sets, the class of easily countable sets, the class of easily approximable sets, the class of near-testable sets, the class of nearly near-testable sets, the class of sets that are not P-bi-immune. These are corollaries of a more general result stating that the class of sets that are p-isomorphic to P-quasi-approximable sets has measure 0 in E. By considering the recent approach of Allender and Strauss for measuring in subexponential classes, we obtain similar results with respect to P for classes having weak logarithmic time membership properties.
Keywords: AC^0; Kolmogorov complexity; strong separation.
If A is a set in AC^0 such that for some q > 0 and infinitely many n, |A^n| > 2^{n - log^q n}, then A contains more than quasipolynomially many strings with polynomial-time bounded length-conditioned Kolmogorov complexity below n^{\epsilon}, for arbitrary \epsilon > 0, at each length n where A satisfies the above density condition. As a consequence, we exhibit a set A in NP such that all sets consisting of strings that have small Hamming distance to some string in A are bi-immune to sets in AC^0 that satisfy the above density condition.
Keywords: auxiliary pushdown automata; Arthur-Merlin games; games against nature; context-free languages; semi-unbounded circuits; exponential-time.
Properties of probabilistic as well as ``probabilistic plus nondeterministic'' pushdown automata and auxiliary pushdown automata are studied. These models are analogous to their counterparts with nondeterministic and alternating states. Complete characterizations in terms of well-known complexity classes are given for the classes of languages recognized by polynomial time-bounded, logarithmic space-bounded auxiliary pushdown automata with probabilistic states and with ``probabilistic plus nondeterministic'' states. Also, complexity lower bounds are given for the classes of languages recognized by these automata with unlimited running time. It follows that, by fixing an appropriate mode of computation, the difference between classes of languages such as P and PSPACE, NL and SAC^1, PL and Diff_>(#SAC^1) is characterized as the difference between the number of stack symbols; that is, whether the stack alphabet contains one versus two distinct symbols.
Keywords: robust machines; counting complexity classes.
It is shown for any prime power k, that the class of languages recognized by robust oracle Turing machines that are P-helped by MOD_kP coincides with the class MOD_kP.
Keywords: P-selective sets; semi-recursive sets; polynomial-time reducibilities; polynomial-size circuits.
This paper studies a notion called polynomial-time membership comparable sets. For a function g, a set A is polynomial-time g-membership comparable if there is a polynomial-time computable function f that, for any x_1, \ldots, x_m with m \geq g(\max\{|x_1|, \ldots, |x_m|\}), outputs some b \in \{0,1\}^m such that (A(x_1), \ldots, A(x_m)) \neq b. The following is a list of major results proven in the paper. (1) Polynomial-time membership comparable sets form a proper hierarchy according to the bound on the number of arguments. (2) Polynomial-time membership comparable sets have polynomial-size circuits. (3) For any function f and for any constant c > 0, if a set is \leq^p_{f(n)-tt}-reducible to a P-selective set, then the set is polynomial-time (1+c) \log f(n)-membership comparable. (4) For any {\cal C} chosen from \{PSPACE, UP, FewP, NP, C_=P, PP, MOD_2P, MOD_3P, \cdots\}, if {\cal C} \subseteq P-mc(c \log n) for some c < 1, then {\cal C} = P. As a corollary of the last two results, it is shown that if there is some constant c < 1 such that all sets in {\cal C} are polynomial-time n^c-truth-table reducible to some P-selective set, then {\cal C} = P, which resolves a question that has been left open for a long time.
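A worked special case showing where results of the flavor of (3) come from: if A is P-selective via selector f, then on any pair (x_1, x_2) one may output b = (0,1) when f(x_1,x_2) = x_1 and b = (1,0) otherwise; this b is never the true characteristic vector, since the selector's output is in A whenever either argument is. Hence every P-selective set is polynomial-time 2-membership comparable.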
Keywords: logspace reductions; query models; computational complexity; Kolmogorov complexity; self-reducibility.
We study the relative computational power of logspace reduction models. In particular, we study the relationships between one-way and two-way oracle tapes, resetting of the oracle head, and blanking of the oracle tape. We show that oracle models letting information persist between queries can be quite powerful, even if the information is not readable by the querying machine. We show that logspace f(n)-Turing reductions are stronger than polynomial-time f(n)-Turing reductions when f(n) = \omega (log n), and that this is optimal if P = L.
Keywords: computational complexity theory; education.
This note describes and discusses textbooks, monographs, and collections that are excellent resources from which to teach courses on computational complexity theory.
Keywords: Kolmogorov complexity; Martin-L\"{o}f test; logical definability; probabilistic computation; probability asymptotics.
We argue that Martin-L\"{o}f tests provide useful techniques in proofs involving Kolmogorov complexity. Our examples cover three areas: (1) logical definability; we prove an analogue of the 0-1 Law for various logics in terms of structures with high Kolmogorov complexity; (2) probabilistic computation; we display a relation between the Kolmogorov complexity of the random strings on which a probabilistic algorithm errs and the error probability of the algorithm, as well as a trade-off relation between the error probability, the length of the random bits, the number of provers, and the length of the provers' answers in one-round multiprover interactive proof systems for NP; and (3) convergence rates of probability asymptotics; we find a general lower bound on the convergence rate of such probabilities in terms of Kolmogorov complexity and provide a concrete example for the probability that a random graph with n vertices is connected.
Keywords: complexity measure; operator speed-up theorem; operator gap theorem; compression theorem; effective Baire classification; effective measure.
Strong variants of the Operator Speed-up Theorem, Operator Gap Theorem and Compression Theorem are obtained using an effective version of Baire Category Theorem. It is also shown that all complexity classes of recursive predicates have effective measure zero in the space of recursive predicates and, on the other hand, the class of predicates with almost everywhere complexity above an arbitrary recursive threshold has recursive measure one in the class of recursive predicates.
Keywords: optimal advice; computational complexity theory; semi-feasible sets; P-selective sets; advice classes.
Ko [1983] proved that the P-selective sets are in the advice class P/quadratic. We prove that the P-selective sets are in NP/linear \cap coNP/linear. We show this to be optimal in terms of the amount of advice needed.
Keywords: semi-feasible sets; selector functions; computational complexity.
A semi-membership algorithm for a set A is, informally, a program that when given any two strings determines which is logically more likely to be in A. A flurry of interest in this topic in the late seventies and early eighties was followed by a relatively quiescent half-decade. However, in the 1990s there has been a resurgence of interest in this topic. We survey recent work on the theory of semi-membership algorithms.
Keywords: complexity theory; semi-decision algorithms; membership testing; selector functions.
A set is P-selective if there is a polynomial-time semi-decision algorithm for the set---an algorithm that given any two strings decides which is ``more likely'' to be in the set. This paper establishes a strict hierarchy among the various reductions and equivalences to P-selective sets.
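A standard illustrative example (a sketch of ours, not from the paper): left cuts. For a fixed real r in [0,1), the set A = { z : 0.z \leq r }, where 0.z is the dyadic rational with binary expansion z, is P-selective no matter how hard r is to compute, because comparing two candidates never requires knowing r:

def left_cut_selector(x: str, y: str) -> str:
    # P-selector for A = { z : 0.z <= r }: return the input with the smaller
    # dyadic value. If either input is in A, the smaller one is too, so the
    # output is the one "more likely" to be in A; membership in A itself is
    # never decided. Inputs are assumed to be nonempty binary strings.
    smaller = int(x, 2) * 2 ** len(y) <= int(y, 2) * 2 ** len(x)  # exact: 0.x <= 0.y
    return x if smaller else y

print(left_cut_selector("101", "011"))   # "011", since 0.011 < 0.101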
Keywords: branching program; ACC-circuits; counting polynomial-time hierarchy.
Cai and Furst introduced the notion of bottleneck Turing machines and showed that the languages recognized by width-5 bottleneck Turing machines are exactly those in PSPACE. We investigate the computational power of bottleneck Turing machines of width less than 5. It is shown that width-2 bottleneck Turing machines capture the polynomial-time many-one closure of the nearly near-testable sets. For languages recognized by bottleneck Turing machines of intermediate widths 3 and 4, some lower and upper bounds are shown.
Keywords: NP optimization problems; approximation algorithms; logical definability; MAXSNP; multiprover interactive proof systems.
An optimization problem A is defined by: (1) a set {\cal I}_A of input instances; we assume that this set can be recognized in polynomial time, (2) for each I \in {\cal I}_A, a set {\cal F}_A(I) of feasible solutions associated to each input instance; we assume that each element in {\cal F}_A(I) has size polynomially bounded in the size of I, and (3) an objective function f_A which maps each feasible solution to a real number; we assume that this function is computable in polynomial time. Problem A is an NP optimization problem if the associated decision problem (e.g., given I \in {\cal I}_A and a real value k, does there exist J \in {\cal F}_A(I) with f_A(J) \geq k?) is in NP. Extending a well-known property of NP optimization problems in which the value of the optimum is guaranteed to be polynomially bounded in the length of the input, we observe that all NP optimization problems admit a logical characterization. NP optimization problems in which the optimum is not polynomially bounded in the length of the input are called weighted NP optimization problems since they usually arise when the input includes numerical weights like costs, distances, penalties or bonuses, etc. The logical representation of such a problem also uses weights attached to tuples over the domain of the finite structure that encodes the input. We show that any NP optimization problem can be stated as a problem in which the constraint conditions can be expressed by a \Pi_2 first-order formula, and this is the best possible result. We further analyze the weighted analogue of all syntactically defined classes of optimization problems that are known to have good approximation properties in the case of NP optimization problems with polynomially bounded optimum value: MAX~NP, MAX~SNP, MAX~SNP(\pi), MIN~F^+\Pi_1 and MIN~F^+\Pi_2(1). The focus is on the difference between the case when only positive weights are allowed versus the case when both positive and negative weights are legal. All the classes above continue to have the same approximation properties in the case of positive weights. More precisely, the first four classes are approximable in polynomial time within a constant factor and the fifth class is approximable within a logarithmic factor. Using reductions from multiprover interactive proof systems, we show that if NP \not\subseteq DTIME[2^{\log^{O(1)} n}], the approximation properties of the above classes devaluate considerably when negative weights are also allowed (with the exception of MIN~F^+\Pi_1, where only a weaker deterioration could be proven). It follows that the general weighted versions of MAX 2SAT, SET COVER, PRIORITY ORDERING (given a finite set X and real-valued weights w(\cdot, \cdot) on all pairs of distinct elements in X, find the maximum over all permutations \pi of X of \sum_{\{(x,y)~:~ x,y \in X, \pi(x) < \pi(y)\}} w(x,y)) and of some other closely related natural problems are not approximable in quasipolynomial time within a factor of 2^{\log^\mu n} for some \mu > 0 (depending on the problem), unless NP \subseteq DTIME[2^{\log^{O(1)} n}].
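As a small instantiation of these definitions (illustrative, using the weighted MAX 2SAT problem discussed above): {\cal I}_A is the set of 2CNF formulas with a rational weight attached to each clause; {\cal F}_A(I) is the set of truth assignments to the variables of I, each of size linear in |I|; and f_A maps an assignment to the total weight of the clauses it satisfies. The associated decision problem (given I and k, is there an assignment of satisfied weight at least k?) is in NP, and since large weights can make the optimum exceed any polynomial in |I|, this is a weighted NP optimization problem in the above sense.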
Keywords: complexity theory; security; reductions; sparse sets; nonadaptive queries to NP.
In computational complexity theory, an oracle is an abstract form of database that a computation with limited resources can consult to get useful information at unit cost per access. Throughout this thesis, we are interested in studying the power of various methods of access to certain types of oracles. In particular, we study three main issues: secure computation in the presence of a powerful eavesdropper, access to oracles of low information content, and filtered access to NP oracles. In the presence of a very powerful eavesdropper who watches over all transactions between a client and its oracle, it is reasonable to assume that even the algorithm that the client uses is known to the eavesdropper. Thus, the only thing the client can hope to hide from its adversary is the identity of its input. We say that a client has secure access to its oracle if no eavesdropper can distinguish between input strings of the same length merely by looking at the transactions between the client and the oracle. We say that a client has oblivious access to its oracle if the client achieves secure access by blinding its own algorithm; while accessing the oracle, the algorithm is prevented from seeing the identity of the input string other than its length. For the cases of bounded-error probabilistic and bounded-error threshold clients, though it seems likely that more languages can be recognized by secure access to an oracle than by oblivious access to the same oracle, we show that the existence of an oracle for which this is true would imply that P \neq PSPACE. We also show that the class of languages recognized by bounded-error probabilistic clients via secure access to oracles is identical to the class of languages that have one-oracle instance-hiding schemes that leak at most the length of their inputs.
Since a sparse oracle may contain only a small number of strings at each length, it can be considered a database of low information content. In view of this observation, one can expect that hard problems in NP do not reduce to any sparse set. We confirm this expectation by showing that, unless P = NP, SAT does not conjunctively reduce to a sparse set. Similar results have been shown in the literature regarding certain types of truth-table reductions such as many-one and bounded truth-table reductions. We also show that the sets that reduce to a sparse set can indeed be reduced to a relatively simple sparse set.
Lastly, we consider bounded truth-table reductions to NP oracles. In the case of deterministic polynomial-time evaluators, a single bit of parity information can be as powerful as unfiltered full information from an NP oracle. In contrast, in the case of nondeterministic polynomial-time evaluators, we show that, unless the polynomial hierarchy collapses, the parity information is strictly less powerful than the full information obtained from queries to an NP oracle. We also study the effect of filtering, by taking the remainder, modulo some constant, of the count of positive answers obtained from queries to NP oracles.
Keywords: cryptography; pseudorandom generators; injectivity.
Allender [1989] showed that if there are dense P languages containing only a finite set of Kolmogorov-simple strings, then all pseudorandom generators are insecure. We extend this by proving that if there are dense P (or even BPP) languages containing only a sparse set of Kolmogorov-simple strings, then all pseudorandom generators are insecure.
Keywords: probabilistic computation; Arthur-Merlin games; games against nature; multihead finite automata; translational methods; heads hierarchy; maze threading problem.
We investigate hierarchical properties and log-space reductions of languages recognized by log-space probabilistic Turing machines, Arthur-Merlin games, and Games against Nature with log-space probabilistic verifiers. For each log-space complexity class, we decompose it into a hierarchy based on corresponding multihead two-way finite automata and we (eventually) prove the separation of the hierarchy levels (even over a one-letter alphabet); furthermore, we show log-space reductions of each log-space complexity class to low levels of its corresponding hierarchy. We find probabilistic (and ``probabilistic+nondeterministic'') variants of Savitch's maze threading problem which are log-space complete for PL (respectively, P) and can be recognized by two-head one-way and one-way one-counter finite automata with probabilistic (probabilistic and nondeterministic) states.
Keywords: computational complexity; log-space probabilistic Turing machines; log-space complete problem.
Adapting the competitions method of Freivalds to the setting of unbounded-error probabilistic computation, we prove that, for any \epsilon \in (0, 1], Band-Mat-Inv(n^{\epsilon}) is log-space complete for the class of languages recognized by log-space unbounded-error probabilistic Turing machines (PL). This extends the result of Jung that Band-Mat-Inv(n) is log-space complete for PL, and may open new possibilities for space-efficient deterministic simulation of space-bounded probabilistic Turing machines.
Keywords: pattern recognition; Fourier descriptors; moment invariants; projective invariants.
The determination of invariant characteristics is an important problem in pattern recognition. Many invariants are known: Hu's moment invariants are invariant to shifts, to changes of scale, and to rotations [Hu 1962]; Burkhardt's Fourier descriptors to shifts, to changes of scale, and to rotations [Burkhardt 1979]; Arbter's Fourier descriptors to affine transformations [Arbter 1990]; Wang's invariant moments to affine transformations [Wang 1977]; the two well-known 5-point invariants to projective transformations [Mundy and Zisserman 1992], etc. This paper shows that all these results, which were obtained independently and by different methods, derive from a common basic principle that allows the determination of new invariants as well.
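As a concrete sample of the kind of invariant under discussion, here is a small sketch (ours, not the paper's) of the first two of Hu's moment invariants for a planar point set; both are unchanged under translation and rotation of the points, and under the standard normalization shown they are also scale-invariant for densely sampled filled shapes:

def hu_first_two(points):
    # points: list of (x, y) pairs describing a shape
    n = len(points)
    xbar = sum(x for x, _ in points) / n
    ybar = sum(y for _, y in points) / n
    def mu(p, q):   # central moments: translation-invariant
        return sum((x - xbar) ** p * (y - ybar) ** q for x, y in points)
    def eta(p, q):  # normalized central moments
        return mu(p, q) / mu(0, 0) ** (1 + (p + q) / 2)
    phi1 = eta(2, 0) + eta(0, 2)                             # rotation-invariant
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2

shape = [(0, 0), (1, 0), (2, 0), (0, 1)]
moved = [(-y + 5, x - 3) for x, y in shape]   # rotate 90 degrees, then shift
print(hu_first_two(shape))
print(hu_first_two(moved))                    # same values, up to rounding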
Keywords: computational complexity.
We prove that the class NP has coNP-immune sets and the class coNP has NP-immune sets relative to a random oracle.
Keywords: computational complexity.
We present a method to prove oracle results of the following type. Let K_1, \ldots, K_{2n} and L_1, \ldots, L_{2m} be complexity classes. Our method provides a general framework for constructing an oracle A such that K^A_{2i-1} \neq K^A_{2i} for i=1, \ldots, n and L^A_{2j-1} = L^A_{2j} for j=1, \ldots, m. Using that method we prove several results of this kind. The hardest of them is the existence of an oracle A such that P^A \neq NP^A, P^A = BPP^A, and both coNP^A-sets and NP^A-sets are P^A-separable. We also exhibit two theorems that cannot be proved by that method.
Keywords: complexity theory; P-selectivity; closeness; sparse sets; lowness.
P/poly, the class of sets with polynomial size circuits, has been the subject of considerable study in complexity theory. Two important subclasses of P/poly are the class of sparse sets [Berman and Hartmanis, 1977] and the class of P-selective sets [Selman, 1979]. A large number of results have been proved about both these classes, but it has been observed (for example, [Hemaspaandra et al., 1993]) that despite their similarity, proofs about one class generally do not translate easily to proofs regarding the other class. In this note, we propose to resolve this asymmetry by investigating the class PSEL-close of sets that are polynomially close to P-selective sets; by definition, PSEL-close includes both sparse sets and P-selective sets, thereby providing a unifying platform for proving results applicable to both. Intuitively, PSEL-close is the class of sets that can in a certain sense be approximated by P-selective sets. We prove several results separating PSEL-close from known classes within and including P/poly, and establish its optimal location in the extended low hierarchy. We illustrate the naturalness and usefulness of the class by examining the question of whether sets hard for NP or E can be PSEL-close; several well-known results are obtained as immediate corollaries.
Keywords: computational complexity.
We prove that perceptrons separating Boolean matrices in which each row has a one from matrices in which many rows have no one must have large size or large order. This result partially strengthens the one-in-a-box theorem of Minsky and Papert [1967; 1988], which states that perceptrons of small order cannot decide whether each row of a given Boolean matrix has a one. As a consequence, we prove that AM \cap co-AM \not\subseteq PP under some oracle. This contrasts with the fact that MA \subseteq PP under any oracle.
Keywords: computational complexity; upward separation.
This paper studies the range of application of the upward separation technique that was introduced by Hartmanis to relate certain structural properties of polynomial-time complexity classes to their exponential-time analogs, and that was first applied to NP [Hartmanis 1983]. Later work revealed the limitations of the technique and identified classes defying upward separation. In particular, it is known that coNP as well as certain promise classes such as BPP, R, and ZPP do not possess upward separation in all relativized worlds [Hartmanis et al., 1985; Hemaspaandra and Jha 1993], and it had been suspected that this was also the case for other promise classes such as UP and FewP [Allender 1991]. In this paper, we refute this conjecture by proving that, in particular, FewP does display upward separation, thus providing the first upward separation result for a promise class. In fact, this follows from a more general result whose proof draws heavily on Buhrman, Longpr\'{e}, and Spaan's recently discovered tally encoding of sparse sets. As consequences of our main result, we obtain upward separations for various known counting classes such as \oplus P, SPP, and LWPP. Some applications and open problems are discussed.
Keywords: complexity theory; counting classes; promise problems.
In this paper, an open problem raised by Toda and Ogiwara is reformulated in the context of promise problems, making precise its nature---which, intuitively, is due to the promise in the definition of SPP---and thereby partially answering it. In particular, it is shown that the polynomial hierarchy is contained in a promise class that naturally corresponds to the class BP \cdot SPP (even though for this class itself, unfortunately, the original problem remains unsolved). Furthermore, some properties of several related classes defined via operators are studied.
Keywords: computational complexity; sparse complete sets; Boolean hierarchies; Karp-Lipton theorem.
This paper studies, for UP, two topics that have been intensely studied for NP: Boolean hierarchies and the consequences of the existence of sparse Turing-complete sets. Unfortunately, as is often the case, the results for NP draw on special properties of NP that do not seem to carry over straightforwardly to UP. For example, it is known for NP (and more generally for any class containing \Sigma^* and \emptyset and closed under union and intersection) that the symmetric difference hierarchy, the Boolean hierarchy, and the Boolean closure all are equal. We prove that closure under union is not needed for this claim: For any class \cal K that contains \Sigma^* and \emptyset and is closed under intersection (e.g., UP, US, and FewP), the symmetric difference hierarchy over \cal K, the Boolean hierarchy over \cal K, and the Boolean closure of \cal K all are equal. On the other hand, we show that two hierarchies---the Hausdorff hierarchy and the nested difference hierarchy---which in the NP case are equal to the Boolean closure fail to be equal for the UP case in some relativized worlds. Regarding sparse Turing-complete sets for UP, we prove that if UP has sparse Turing-complete sets, then the levels of the unambiguous polynomial hierarchy are simpler than one would otherwise expect: they collapse one level in terms of their location in the promise unambiguous polynomial hierarchy. We obtain related results under the weaker assumption that UP has sparse Turing-hard sets.
Keywords: complexity theory; theory of computation.
The P-selective sets are those sets for which there is a polynomial-time algorithm that, given any two strings, determines which is ``more likely'' to belong to the set: if either of the strings is in the set, the algorithm chooses one that is in the set. We prove that, for each k, the k-ary Boolean connectives under which the P-selective sets are closed are exactly those that are either completely degenerate or almost-completely degenerate. We determine the complexity of the index set of the r.e. P-selective sets: \Sigma_3^0-complete.
Keywords: density; spread; approximate lower bound; congestion control; sorting; density control; oblivious adversary; smoothness.
We extend the notion of density to individual points on a discrete distribution. We provide a linear-time algorithm to find points with a certain density, showing that our definition is computationally efficient. The Hot-Spot Lemma guarantees the existence of "congestion." This fact lets us find overall structure on density and enables density control. We prove an \Omega(n log n) lower bound for the List Labeling Problem with a polynomial number of labels, thereby solving a problem that has been open for over ten years. The lower bound proof is based on an adversary strategy. The adversary always inserts the new item at the "crowded" point, which can be located by maintaining a structure based on the Hot-Spot Lemma. When the adversary cannot see the actual labeling, i.e., the adversary is oblivious, we provide a probabilistic adversary that forces an expected \Omega(n log n / log log n) relabeling cost. When we restrict the algorithm to be smooth, which is satisfied by all the known labeling algorithms, a simple adversary strategy that always inserts at one end gives the following lower bounds: (1) when the number of labels is a polynomial in the number of items, the lower bound is \Omega(n log n); (2) when the number of labels is linear in the number of items, the lower bound is \Omega(n log^2 n); (3) when the number of labels is equal to the number of items, the lower bound is \Omega(n log^3 n).
Keywords: structural complexity theory; balanced immunity; bi-immunity; generic oracles; probability one separations.
Do self-reducible sets inherently lack immunity from deterministic polynomial time? Though this is unlikely to be true in general, in this paper we prove that sufficiently strong self-reducibility precludes sufficiently strong immunity from deterministic polynomial time. In particular, we prove that NT is not P balanced immune. However, we prove that NT, a class whose sets have very strong self-reducibility properties, is P bi-immune relative to a generic oracle. Thus the previous result cannot be relativizably extended to bi-immunity. We also prove that NP and \oplus P are both P balanced immune relative to a random oracle; the former provides the strongest known relativized separation of NP from P.
Keywords: monotonic list labeling; order maintenance; load balancing; density/congestion management and exploitation; bucketing; on line; lower bound; adversary argument.
Maintaining a monotonic labeling of an ordered list during the insertion of n items requires \Omega (n log n) individual relabelings in the worst case, if the number of usable labels is only polynomial in n. This follows from a lower bound for a new problem, prefix bucketing.
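To fix ideas, a toy sketch (ours; the paper's subject is the lower bound, not this algorithm) of the naive strategy whose relabelings the bound counts: labels are drawn from [0, U), and when an insertion finds no free label between its neighbors, the whole list is relabeled evenly:

import bisect

class LabeledList:
    # Naive monotonic list labeling over the label universe [0, U).
    # Assumes U is at least about twice the number of items ever inserted.
    def __init__(self, U):
        self.U = U
        self.items = []        # sorted list of (key, label) pairs
        self.relabelings = 0   # total individual relabelings charged

    def insert(self, key):
        pos = bisect.bisect_left(self.items, (key,))
        lo = self.items[pos - 1][1] if pos > 0 else -1
        hi = self.items[pos][1] if pos < len(self.items) else self.U
        if hi - lo < 2:        # no unused label strictly between the neighbors
            self._spread()
            return self.insert(key)
        self.items.insert(pos, (key, (lo + hi) // 2))

    def _spread(self):         # relabel everything evenly, charging per item
        step = self.U // (len(self.items) + 1)
        self.items = [(k, (i + 1) * step) for i, (k, _) in enumerate(self.items)]
        self.relabelings += len(self.items)

lst = LabeledList(U=64)
for key in range(32):          # insert at one end, the adversarial pattern
    lst.insert(key)
print(lst.relabelings)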
Keywords: multihead probabilistic finite automata; configuration transition matrix; log-space probabilistic Turing machines; heads hierarchy; matrix bandwidth; deterministic simulation.
We present properties of multihead two-way probabilistic finite automata that parallel those of their deterministic and nondeterministic counterparts. We define multihead probabilistic finite automata with log-space constructible transition probabilities and describe a technique to simulate these automata by standard log-space probabilistic Turing machines. Next we represent log-space probabilistic complexity classes as proper hierarchies based on corresponding multihead two-way probabilistic finite automata, and show their (deterministic log-space) reducibility to the second levels of these hierarchies. We relate the number of heads of a multihead probabilistic finite automaton to the bandwidth of its configuration transition matrix for an input string; partially based on this relation we find an apparently easier log-space complete problem for PL (the class of languages recognized by log-space unbounded-error probabilistic Turing machines), and explore possibilities for a space-efficient deterministic simulation of probabilistic automata.
Keywords: multihead nondeterministic finite automata; multihead probabilistic finite automata; probabilistic Turing machines; deterministic simulation.
We show that the heads of multihead unbounded-error or bounded-error or one-sided-error probabilistic finite automata are equivalent alternatives to the storage tapes of the corresponding probabilistic Turing machines (Theorem 1). These results parallel the classic ones concerning deterministic and nondeterministic automata. Several important properties of logarithmic-space (nondeterministic and probabilistic) Turing machines follow trivially (Observations 1--3) from the more refined versions that we prove in the setting of multihead finite automata (Theorems 2--4).
Keywords: complexity theory.
An oracle is constructed relative to which there exists an NP set that has no infinite sparse subset in co-DP.
Keywords: residue number system; Chinese remainder theorem; space complexity; sign detection; parity detection; binary representation.
For each k, let P_k be the product of the first k primes. By the Chinese remainder theorem, each integer in the interval [0, P_k) is determined by its residues modulo these k primes. We address the problems of space-efficiently computing the bits and the relative order of such numbers from their residues.
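For contrast with the space-efficient goal of the paper, here is the standard (but space-hungry) incremental Chinese-remainder reconstruction, a sketch of ours; once x is rebuilt, its bits, sign, or relative order can be read off directly:

def crt_reconstruct(residues, moduli):
    # Rebuild the unique x in [0, m_1 * ... * m_k) with x = r_i (mod m_i),
    # for pairwise coprime moduli such as the first k primes.
    x, M = 0, 1
    for r, m in zip(residues, moduli):
        # pick t so that (x + t*M) mod m == r; needs gcd(M, m) == 1
        t = ((r - x) * pow(M, -1, m)) % m   # modular inverse: Python 3.8+
        x += t * M
        M *= m
    return x

moduli = [2, 3, 5, 7]                      # P_4 = 210
print(crt_reconstruct([123 % m for m in moduli], moduli))   # 123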
Keywords: hypergraphs; spanning hypertrees; NP-completeness; approximation algorithms.
The notion of h-hypertree was defined in [Tomescu 1986] in order to improve the Bonferroni inequalities. We prove that the problem of finding the optimal spanning h-hypertree of a complete hypergraph is NP-complete for h \geq 3. Moreover, no polynomial-time approximation algorithm, even one with a poor approximation ratio, seems to exist for this problem: the existence of such an algorithm for the minimum version achieving an approximation ratio of poly(n) implies P = NP, and, for the maximum version, the existence of a quasipolynomial algorithm achieving the ratio 2^{\log^{\epsilon} n} implies NP \subseteq DTIME[n^{\log^{O(1)} n}].
Keywords: advice class; bounded truth-table reduction; nonadaptive queries; NP.
Consider the standard model of computation for deciding a language that is bounded truth-table reducible to an NP set: on a given input, a polynomial-time Turing machine, called a generator, produces a constant number of queries to the NP oracle; then a second polynomial-time Turing machine, called an evaluator, given the answers to the queries, determines the membership of the given input. In this paper, we investigate the classes of languages that are decided by bounded truth-table reductions to an NP set in which evaluators do not have full access to the answers to the queries but get only partial information, such as the number of queries that are in the oracle set, or even just this number modulo some constant. We also investigate the case in which evaluators are nondeterministic.
We locate all these classes within levels of the boolean hierarchy, which allows us to compare their complexity. The results show the varying degrees to which the power of P or NP evaluators is affected as the partial information that the evaluators receive about the answers to the generator's queries is changed.
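The model can be pictured as follows (a toy Python rendering; the oracle set, the queries, and the counting rule are all hypothetical stand-ins, whereas the real model uses an NP oracle and polynomial-time machines):

    ORACLE = {"aa", "abab", "bb"}          # hypothetical stand-in oracle set

    def generator(x):
        # Produce a constant number of queries, fixed before any answers
        # are seen (i.e., nonadaptively).
        return [x + x, x[::-1], x + "b"]

    def counting_evaluator(x, yes_count):
        # Restricted evaluator: it sees only HOW MANY queries were in the
        # oracle set, not which ones.  The rule here is arbitrary.
        return yes_count >= 2

    def decide(x):
        queries = generator(x)
        yes_count = sum(q in ORACLE for q in queries)
        return counting_evaluator(x, yes_count)

    print(decide("a"), decide("b"))        # False True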
Keywords: complexity theory; semi-decision algorithms; membership testing; selector functions; lowness; nonuniform complexity.
A set is P-selective if there is a polynomial-time semi-decision algorithm for the set---an algorithm that given any two strings decides which is ``more likely'' to be in the set. This paper studies two natural generalizations of P-selectivity: the NP-selective sets and the sets reducible or equivalent to P-selective sets via polynomial-time reductions. We establish a strict hierarchy among the various reductions and equivalences to P-selective sets. We show that the NP-selective sets are in (NP \cap coNP)/poly, are extended low, and (those in NP) are Low_2; we also show that NP-selective sets cannot be NP-complete unless NP = coNP. By studying more general notions of nondeterministic selectivity, we conclude that all multivalued NP functions have single-valued NP refinements only if the polynomial hierarchy collapses to its second level.
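A classical concrete example of a P-selective set is the standard left cut {x : val(x) \leq r} of a fixed real r, whose selector simply returns the input with the smaller value (Python sketch below; the threshold value is arbitrary):

    R = 0.7013                             # arbitrary fixed threshold

    def as_dyadic(x):
        # Read a binary string as a dyadic rational in [0, 1).
        return sum(int(b) / 2 ** (i + 1) for i, b in enumerate(x))

    def selector(x, y):
        # Of x and y, return one that is in {z : val(z) <= R} whenever
        # at least one of them is: the smaller value is the safe choice.
        return x if as_dyadic(x) <= as_dyadic(y) else y

    print(selector("101", "110"))          # "101" (0.625 <= 0.75)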
Keywords: satisfying assignments; refinement.
Is there a single-valued NP function that, when given a satisfiable formula as input, outputs a satisfying assignment? That is, can a nondeterministic function cull just one satisfying assignment from a possibly exponentially large collection of assignments? We show that if there is such a nondeterministic function, then the polynomial hierarchy collapses to its second level. As the existence of such a function is known to be equivalent to the statement ``every multivalued NP function has a single-valued NP refinement,'' our result provides the strongest evidence yet that multivalued NP functions cannot be refined. We prove our result via theorems of independent interest. We say that a set A is NPSV-selective (NPMV-selective) if there is a 2-ary partial function in NPSV (NPMV, respectively) that decides which of its inputs (if any) is ``more likely'' to belong to A; this is a nondeterministic analog of the recursion-theoretic notion of the semi-recursive sets and the extant complexity-theoretic notion of P-selectivity. Our hierarchy collapse follows by combining the easy observation that every set in NP is NPMV-selective with either of the following two theorems that we prove: (1) If A \in NP is NPSV-selective, then A \in (NP \cap coNP)/poly. (2) If A \in NP is NPSV-selective, then A is Low_2. To wit, either result implies that if every set in NP is NPSV-selective, then the polynomial hierarchy collapses to its second level, NP^{NP}.
Keywords: stochastic language; probabilistic Turing machine; residue representation; deterministic simulation; space-bounded complexity classes.
Given a description of a probabilistic automaton (a one-head probabilistic finite automaton or a probabilistic Turing machine) and an input string x of length n, we ask how much space a deterministic Turing machine needs in order to decide whether that automaton accepts x. The question is interesting even in the case of one-head one-way probabilistic finite automata (PFA). We call (rational) stochastic languages (S^>_{rat}) the class of languages recognized by PFAs whose transition probabilities and cutpoints (i.e., recognition thresholds) are rational numbers. The class S^>_{rat} contains context-sensitive languages that are not context free, but on the other hand there are context-free languages not included in S^>_{rat}. Our main results are as follows: (1) The (proper) inclusion of S^>_{rat} in Dspace(log n), which is optimal (i.e., S^>_{rat} \not \subset Dspace(o(log n))). The previous upper bounds were Dspace(n) [Dieu 1972; Wang 1992] and Dspace(log n log log n) [Jung 1984]. (2) Probabilistic Turing machines with space bound f(n) \in O(log n) can be deterministically simulated in space O(min (c^{f(n)} log n, log n (f(n) + log log n))), where c is a constant depending on the simulated probabilistic Turing machine. The best previously known simulation required space O(log n (f(n) + log log n)) [Jung 1984].
Of independent interest is our technique to compare numbers given in terms of their values modulo a sequence of primes, p_1 < p_2 < \cdots < p_n = O(n^a) (where a is some constant) in O(log n) deterministic space.
Keywords: threshold computation; security.
Threshold machines [Simon 1975] are Turing machines whose acceptance is determined by what portion of the machine's computation paths are accepting paths. Probabilistic machines [Gill 1977] are Turing machines whose acceptance is determined by the probability weight of the machine's accepting computation paths. Simon [1975] proved that for unbounded-error polynomial-time machines these two notions yield the same class, PP. Perhaps because Simon's result seemed to collapse the threshold and probabilistic modes of computation, the relationship between threshold and probabilistic computing for the case of bounded error has remained unexplored. In this paper, we compare the bounded-error probabilistic class BPP with the analogous threshold class, BPP_{path}, and, more generally, we study the structural properties of BPP_{path}. We prove that BPP_{path} contains both NP^{BPP} and P^{NP[log]}, and that BPP_{path} is contained in P^{\Sigma_2^p[log]}, BPP^{NP}, and PP. We conclude that, unless the polynomial hierarchy collapses, bounded-error threshold computation is strictly more powerful than bounded-error probabilistic computation.
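For concreteness, the two acceptance criteria for a nondeterministic polynomial-time machine M on input x can be written as follows (a standard rendering assuming fair binary branching, not taken verbatim from the paper):

    threshold:      acc_M(x) / tot_M(x) > 1/2
    probabilistic:  \sum_{accepting paths \rho} 2^{-|\rho|} > 1/2

where acc_M(x) and tot_M(x) count accepting and total computation paths. When all paths have the same length the two criteria coincide, which is the heart of Simon's PP result; when path lengths differ, a short accepting path carries extra probability weight but no extra threshold weight.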
We also consider the natural notion of secure access to a database: an adversary who watches the queries should gain no information about the input other than perhaps its length [Beaver and Feigenbaum 1990]. We show, for both BPP and BPP_{path}, that if there is any database for which this formalization of security differs from the security given by oblivious [Feigenbaum et al. 1992] database access, then P \neq PSPACE. It follows that if any set lacking small circuits can be securely accepted, then P \neq PSPACE.
Keywords: p-m-degree; Baire category; p-isomorphism problem; collapsing and noncollapsing degree.
All polynomial many-one degrees are shown to be of second Baire category in the superset topology when witness functions are allowed to run in 2^{\log^h n} time, for any h. Any improvement of this result for the complete p-m-degrees of RE, EXP, or NP implies P \neq NP.
Keywords: \beta hierarchy; \beta-complete problems; limited nondeterminism; greedy algorithms; nondeterminism-preserving reductions.
Kintala and Fischer [1980] defined the limited nondeterminism hierarchy within NP, the so-called \beta hierarchy. \beta_k is the class of languages recognized by polynomial-time-bounded Turing machines making at most O(\log^k n) nondeterministic moves, where n is the length of the input. It has been conjectured that ``By restricting the amount of nondeterminism in NP-complete problems, we do not seem to obtain complete problems for \beta_k'' [D\'{i}az and Tor\'{a}n, 1990]. We demonstrate that this statement is incorrect under what seems to us to be the natural interpretation of the term ``restricting the amount of nondeterminism.'' We develop the concept of limited-nondeterminism-preserving reductions, and we obtain complete problems for \beta_k by restricting the amount of nondeterminism in NP-complete problems. We also discuss the connections between \beta hierarchy completeness and greedy algorithms; we show that using greediness we can define many complete problems for \beta.
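The acceptance mode of \beta_k can be simulated deterministically by exhausting the quasipolynomially many guesses, as in this illustrative Python sketch (the verifier shown is hypothetical):

    import itertools, math

    def beta_k_accepts(x, verifier, k, c=1):
        # Only c * ceil(log n)^k nondeterministic bits are allowed, so a
        # deterministic simulation need try only quasipolynomially many
        # guesses.
        n = max(len(x), 2)
        guess_bits = c * math.ceil(math.log2(n)) ** k
        return any(verifier(x, bits)
                   for bits in itertools.product("01", repeat=guess_bits))

    def toy_verifier(x, bits):
        # Hypothetical verifier: use the guessed bits as an index into x.
        i = int("".join(bits), 2) % len(x)
        return x[i:i + 2] == "ab"

    print(beta_k_accepts("xxabyy", toy_verifier, k=1))   # True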
Robust computation---a radical approach to fault-tolerant database access---was explicitly defined one decade ago, and in the following year this notion was presented at ICALP in Antwerp. A decade's study of robust computation by many researchers has determined which problems can be fault-tolerantly solved via access to databases of many strengths. This paper surveys these results and mentions some interesting unresolved issues.
Keywords: fault tolerance; consistent versus inconsistent failure; adaptive versus nonadaptive queries; arbitrary failure model; robust computation; unambiguous computation; probabilistic computation.
This paper studies the power of access, especially fault-tolerant access, to probabilistic databases and to unambiguous databases. We study fault-tolerant access to probabilistic computation, and we completely characterize the complexity classes R and ZPP in terms of fault-tolerant database access. We also show that consistent and inconsistent failure are in general interchangeable. We study the power of three types of access to unambiguous computation: nonadaptive access, fault-tolerant access, and guarded access. (1) Though for NP it is known that nonadaptive access has exponentially terse adaptive simulations, we show that UP has no such relativizable simulations: there are worlds in which (k+1)-truth-table access to UP is not subsumed by k-Turing access to UP, or even to NP machines that are unambiguous on the questions actually asked. (2) Though fault-tolerant access (i.e., ``1-helping'' access) to NP is known to be no more powerful than NP itself, we give both structural and relativized evidence that fault-tolerant access to UP suffices to recognize even sets beyond UP. Furthermore, we completely characterize, in terms of locally positive reductions, the sets that fault-tolerantly reduce to UP. (3) In guarded access, Grollmann and Selman's natural notion of access to unambiguous computation, a deterministic polynomial-time Turing machine asks questions of a nondeterministic polynomial-time Turing machine in such a way that the nondeterministic machine never accepts ambiguously. In contrast to guarded access, the standard notion of access to unambiguous computation is that of access to a set that is uniformly unambiguous---even for queries that it never will be asked by its questioner, it must be unambiguous. We show that these notions, though the same for nonadaptive reductions, differ for Turing and strong nondeterministic reductions.
Keywords: stochastic language; probabilistic automaton; stochastic function; rational stochastic language.
The study of probabilistic one-way finite-state automata has been sparsely spread over at least thirty-three years, and across media that are not all readily accessible in the West at the present time. In a uniform setting, we present properties of the classes of languages specifiable by such automata. We present results found in a wide and nonhomogeneous literature together with original results. Along with probabilistic automata, we also study a clean generalized version. The classes of languages accepted by such automata are defined in terms of numerical cutpoints and the functions that these automata compute. Our main results are some closure properties for the classes and functions. We give a technique for proving stochasticity and we apply it in the case of some well-known languages.
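The acceptance mechanism in question is easy to state concretely: a one-way PFA applies one row-stochastic matrix per input symbol and compares the resulting acceptance probability with a cutpoint. A minimal Python sketch with rational transition probabilities and a rational cutpoint (the particular automaton is invented for illustration):

    from fractions import Fraction as F

    start = [F(1), F(0)]                   # initial distribution, 2 states
    accepting = [1]                        # state 1 is accepting
    step = {                               # one row-stochastic matrix/symbol
        "a": [[F(1, 2), F(1, 2)], [F(0), F(1)]],
        "b": [[F(1), F(0)], [F(1, 2), F(1, 2)]],
    }

    def accept_prob(word):
        dist = start[:]
        for ch in word:
            m = step[ch]
            dist = [sum(dist[i] * m[i][j] for i in range(2))
                    for j in range(2)]
        return sum(dist[q] for q in accepting)

    cutpoint = F(1, 2)                     # rational cutpoint
    for w in ["a", "aa", "ab"]:
        print(w, accept_prob(w), accept_prob(w) > cutpoint)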
Keywords: computational complexity.
For any operator \tau, we say that #P is closed under \tau in context PF \circ #P if for every f \in #P and every h \in PF, h \circ \tau[f] also belongs to PF \circ #P. For several operators \tau on #P functions, we show that the closure of #P under \tau in context PF \circ #P is closely related to the relationship between P^{#P[1]} and higher classes such as PH^{PP} and PP^{PP}.
Keywords: security; computational complexity; probabilistic computation.
Threshold machines are Turing machines whose acceptance is determined by what portion of the machine's computation paths are accepting paths. Probabilistic machines are Turing machines whose acceptance is determined by the probability weight of the machine's accepting computation paths. Simon proved that for unbounded-error polynomial-time machines these two notions yield the same class, PP. Perhaps because Simon's result seemed to collapse the threshold and probabilistic modes of computation, the relationship between threshold and probabilistic computing for the case of bounded error has remained unexplored. In this paper, we compare the bounded-error probabilistic class BPP with the analogous threshold class, BPP_{path}, and, more generally, we study the structural properties of BPP_{path}. We prove that BPP_{path} contains both NP^{BPP} and P^{NP[log]}, and that BPP_{path} is contained in P^{\Sigma_2^p[log]}, BPP^{NP}, and PP. We conclude that, unless the polynomial hierarchy collapses, bounded-error threshold computation is strictly more powerful than bounded-error probabilistic computation.
We also formalize the natural notion of secure access to a database: an adversary who watches the queries should gain no information about the input other than perhaps its length. We show, for both BPP and BPP_{path}, that if there is any database for which this formalization of security differs from the security given by oblivious database access, then P \neq PSPACE. It follows that if any set lacking small circuits can be securely accepted, then P \neq PSPACE.
Keywords: amortized analysis; combinatorial games; computational geometry; data structures; disjoint set union-find; persistent data structures; randomized algorithms; real-time computation.
An efficient amortized data structure is one that ensures that the average time per operation spent on processing any sequence of operations is small. Amortized data structures typically have very non-uniform response times: individual operations can be occasionally and unpredictably slow, although the average time over the sequence is kept small by completing most of the other operations quickly. This makes amortized data structures unsuitable in many important contexts, such as real-time systems, parallel programs, persistent data structures, and interactive software. On the other hand, an efficient (single-operation) worst-case data structure guarantees that every operation will be processed quickly. The construction of worst-case data structures from amortized ones is a fundamental problem that is also of pragmatic interest. Progress has been slow so far, both because the techniques used were of a limited nature and because the resulting data structures had much larger hidden constant factors. I try to address both these issues in this thesis.
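The non-uniformity is easy to observe even in the simplest amortized structure, the doubling array, as in this Python sketch (illustrative only; it is not one of the structures studied in the thesis):

    import time

    class DoublingArray:
        # Appends are O(1) amortized, but any single append may trigger
        # an O(n) copy when the buffer doubles.
        def __init__(self):
            self.buf, self.size = [None], 0

        def append(self, v):
            if self.size == len(self.buf):         # occasional slow step
                self.buf = self.buf + [None] * len(self.buf)
            self.buf[self.size] = v
            self.size += 1

    a, worst = DoublingArray(), 0.0
    for i in range(1 << 16):
        t = time.perf_counter()
        a.append(i)
        worst = max(worst, time.perf_counter() - t)
    print("worst single append (s):", worst)       # spikes at powers of two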
I consider several inter-related dynamic data structuring problems for which only efficient amortized solutions were known, and I obtain new worst-case algorithms for them using a unified framework, that of ``pebble'' games. These two-player combinatorial games are formulated so that winning strategies translate fairly readily into worst-case algorithms for data structures. I analyze these games and obtain tight bounds on the payoffs to the players. These results are then made to yield new worst-case algorithms for finger search trees, partially and fully persistent linked data structures, fully persistent union-find, dynamic fractional cascading, range trees, and segment trees. With my new algorithms I often get worst-case running times that match the old amortized bounds, in a few cases settling instead for considerable improvements over the best existing worst-case algorithms. Several of the new algorithms are comparable in simplicity to the amortized algorithms they are based on.
The data structures considered in this thesis have numerous uses. Persistent data structures support efficient access to old versions of the data structure; they have been used in geometric algorithms, implementations of object-oriented languages, optimistic discrete event simulation, and tree pattern matching. Fractional cascading, segment trees, and range trees are used extensively in geometric algorithms, and range trees in databases as well. In addition, pebble games appear to have applications unrelated to data structuring and may also be of independent interest, as they are similar in form to the "vector balancing" and "chip firing" games studied by combinatorial mathematicians.
Keywords: selection; sorting; routing; mesh-connected processor arrays; median; grids; randomized; deterministic; algorithms; lower bounds; off-line routing; multi-packet selection; optimal selection.
The mesh-connected processor array has been the object of a great deal of attention in recent years, and several parallel computers have configurations based on the mesh topology. This thesis addresses the fundamental problems of selection, sorting, and routing on mesh-like networks. Sorting and selection are prototype problems, due both to their practical applications and to their role in inter-processor communication. The routing problem isolates the issue of communication between processors in an interconnection network. We show efficient randomized algorithms for selection on mesh-like networks. In particular, we show that there is a 1.22n-step randomized algorithm that selects the element of rank k at the middle processor of the n \times n mesh and uses constant-size queues, with high probability. In the deterministic setting, we devise a 1.44n-step algorithm for selection at the middle processor. For the case when there are N elements distributed at the nodes of an n \times n mesh (N > n^2), we show a deterministic algorithm that works in O(\min\{n^2 \log(N/n^2), \max\{N/n^{4/3}, n\}\}) steps. We show optimal algorithms for selection in a variety of restricted settings: at specific locations in the mesh; when the inputs are chosen from a small domain; and for elements with specific ranks. We show that adding toroidal and/or diagonal connections to the mesh yields better algorithms for selection, and we exhibit improved randomized and deterministic algorithms for selection in higher-dimensional meshes.
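For reference, the selection problem itself is the familiar one: given N elements and a rank k, output the element of rank k. A sequential randomized baseline (standard quickselect, not one of the thesis's mesh algorithms) looks like this:

    import random

    def select(items, k):
        # Element of rank k (0-indexed), expected linear time.
        pivot = random.choice(items)
        lo = [v for v in items if v < pivot]
        eq = [v for v in items if v == pivot]
        if k < len(lo):
            return select(lo, k)
        if k < len(lo) + len(eq):
            return pivot
        return select([v for v in items if v > pivot],
                      k - len(lo) - len(eq))

    data = random.sample(range(10 ** 6), 1000)
    assert select(data, 500) == sorted(data)[500]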
The bounds for sorting and selection on the mesh seem to be very model-dependent. We define a general model of computation that has the queue size and the ability to replicate packets as parameters. We prove lower bounds for sorting and selection in some incarnations of this model. The architectures we consider are meshes and tori, with and without diagonal connections.
We examine the trade-off between time and queue size in algorithms for routing on the mesh. We show an elegant off-line algorithm to route permutations that works in 2.2n+5 steps, and requires queues of size 14.
Keywords: computational complexity; checking; self-reducibility; sparse sets.
This paper explores two generalizations, within NP, of self-reducibility: kernel constructibility and committability. Informally stated, kernel constructible sets have (generalized) self-reductions that are easy to check, though perhaps hard to compute, and committable sets are those sets for which the potential correctness of a partial proof of set membership can be checked via a query to the same set (that is, via a self-reduction). We study these two notions of generalized self-reducibility on non-dense sets. We show that sparse kernel constructible sets are of low complexity, we extend previous results showing that sparse committable sets are of low complexity, and we provide structural evidence of interest in its own right---namely that if all sparse disjunctively self-reducible sets are in P then FewP \cap coFewP is not P-bi-immune---that our extension is unlikely to be further extended. We obtain density-based sufficient conditions for kernel-constructibility: sets whose complements are captured by non-dense sets are perforce kernel constructible. Using sparse languages and Kolmogorov complexity theory as tools, we argue that kernel constructibility is orthogonal to standard notions of complexity.
Keywords: complexity theory; upward separation; downward separation; limited nondeterminism.
Upward and downward separation results link the collapse of small and large classes, and they are a standard tool in complexity theory. We study the limitations of upward and downward separation. We show that the exponential-time limited nondeterminism hierarchy does not robustly possess downward separation. We show that probabilistic classes do not robustly possess Hartmanis-Immerman-Sewelson upward separation. Though NP is known to robustly possess Hartmanis-Immerman-Sewelson upward separation, we show that NP does not robustly possess Hartmanis-Immerman-Sewelson upward separation with respect to strong (immunity) separation. On the other hand, we provide a structural sufficient condition for upward separation.
Keywords: heap; parallel algorithm; optimal time/processor product; min-max heap; parallel decision tree; randomized algorithm; Ackermann's function.
This paper shows how to put n values into heap order in O(log log n) time using n/log log n processors in the parallel comparison tree model of computation, and in \tilde{O}(\alpha(n)) time on n/\alpha(n) processors in the randomized parallel comparison tree model, where \alpha(n) is an inverse of Ackermann's function. Similar bounds are proven for the related problem of putting n values into a min-max heap.
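As a point of reference, the sequential version of the problem is solved in linear time by bottom-up heapify; the paper's contribution is dividing that work across processors in the comparison model. A Python check of the heap-order property:

    import heapq, random

    values = [random.random() for _ in range(1 << 10)]
    heapq.heapify(values)                  # bottom-up heapify, O(n) work
    # Verify heap order: each parent is <= both of its children.
    n = len(values)
    assert all(values[i] <= values[c]
               for i in range(n // 2)
               for c in (2 * i + 1, 2 * i + 2) if c < n)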
Keywords: probabilistic space complexity; probabilistic algorithms; lower bounds.
In this paper we investigate a well-known sequential model of computation: one-way LOG-SPACE Turing machines. We analyze a different, known method for constructing an effective probabilistic algorithm, and we prove a lower bound on probabilistic space complexity that suffices to understand this method's behavior in the one-way LOG-SPACE Turing machine model of computation.
Keywords: communication complexity; probabilistic communication complexity; lower bounds; probabilistic algorithms.
We prove three different types of complexity lower bounds for one-way unbounded-error and bounded-error probabilistic communication protocols for boolean functions. The lower bounds are proved for arbitrary boolean functions, both in the common way, in terms of the deterministic communication complexity of the function, and in terms of a notion we define, the ``probabilistic communication characteristic.'' It is shown that for almost all boolean functions either Yao's lower bound or our first or third lower bound is the most precise, depending on the error probability.
We present boolean functions with different probabilistic communication characteristics, demonstrating that each of these lower bounds can be more precise than the others depending on the probabilistic communication characteristic of the function. These examples show that the lower bounds of the paper are precise and mutually incomparable.
Our lower bounds are strong enough to establish proper hierarchies for various one-way probabilistic communication complexity classes (namely, for unbounded-error probabilistic communication, for bounded-error probabilistic communication, and for the errors of probabilistic communication).
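For contrast with these lower bounds, here is the textbook one-way bounded-error protocol for EQUALITY with shared randomness (standard fingerprinting, not a protocol from the paper), which beats the n-bit deterministic cost:

    import random

    def one_way_equality(x, y, t=20):
        # t independent 1-bit fingerprints; if x != y, each round catches
        # the difference with probability 1/2, so the error is <= 2^{-t}.
        for _ in range(t):
            r = [random.randint(0, 1) for _ in x]              # shared randomness
            bit_a = sum(xi * ri for xi, ri in zip(x, r)) % 2   # Alice sends
            bit_b = sum(yi * ri for yi, ri in zip(y, r)) % 2   # Bob compares
            if bit_a != bit_b:
                return False               # provably different
        return True                        # equal, up to error 2^{-t}

    x = [1, 0, 1, 1, 0, 0, 1, 0]
    print(one_way_equality(x, x), one_way_equality(x, x[::-1]))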
Keywords: fault-tolerant computing; robust computation.
This paper studies the power of three types of access to unambiguous computation: nonadaptive access, fault-tolerant access, and guarded access. (1) Though for NP it is known that nonadaptive access has exponentially terse adaptive simulations, we show that UP has no such relativizable simulations: there are worlds in which (k+1)-truth-table access to UP is not subsumed by k-Turing access to UP. (2) Though fault-tolerant access (i.e., "1-helping" access) to NP is known to be no more powerful than NP itself, we give both structural and relativized evidence that fault-tolerant access to UP suffices to recognize even sets beyond UP. Furthermore, we completely characterize, in terms of locally positive reductions, the sets that fault-tolerantly reduce to UP. (3) In guarded access, Grollmann and Selman's natural notion of access to unambiguous computation, a deterministic polynomial-time Turing machine asks questions of a nondeterministic polynomial-time Turing machine in such a way that the nondeterministic machine never accepts ambiguously. In contrast to guarded access, the standard notion of access to unambiguous computation is that of access to a set that is uniformly unambiguous---even for queries that it never will be asked by its questioner, it must be unambiguous. We show that these notions, though the same for nonadaptive reductions, differ for Turing and strong nondeterministic reductions.
Keywords: complexity theory; reductions; sparse sets.
This paper is concerned with three basic questions about sparse sets: (1) With respect to what types of reductions might NP have hard or complete sparse sets? (2) If a set A reduces to a sparse set, does it follow that A is reducible to some sparse set that is "simple" relative to A? (3) With respect to what types of reductions might NP have hard or complete sets of low instance complexity, and, relatedly, what is the structure of the class of sets with low instance complexity? With respect to the first and third questions, intuitively one would expect that even with respect to flexible reductions NP is unlikely to have complete sets whose information content is low. With respect to the second question, one might intuitively feel that the structure imposed on a set by the fact that it reduces to a sparse set makes it plausible that we can indeed find a simple sparse set that can masquerade as the original sparse set. These two intuitions are in many ways certified by the current literature and by the results of this paper.
Keywords: evasiveness; lower bounds; closest neighbors; Hamming distance.
We prove a lower bound on the number of distance queries necessary to solve the closest pair problem in a set of binary strings. We show that given a set of \ell^d binary strings of length 2 \ell d + 1, at least \Omega(\ell^{d+1}) pairwise distance queries have to be made by any decision-tree algorithm that finds the pair of closest strings. In the course of proving this lower bound, we examine a graph-theoretic problem related to lattice graphs. The nodes and edges of a lattice graph correspond to the points and links of a d-dimensional grid. We consider the problem of distinguishing a lattice graph {\cal L}_{d,\ell} of dimension d with \ell^d nodes from its subgraph {\cal L}'_{d,\ell}; the latter is induced by removing the edges of a single node across one dimension. We derive a lower bound of \Omega(\ell^{d+1}) on the number of adjacency matrix queries made by any decision-tree algorithm that solves the problem.
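The query model of the lower bound is easy to picture: the algorithm may learn only pairwise Hamming distances. The exhaustive strategy below makes all C(m,2) queries; the theorem says that \Omega(\ell^{d+1}) of them are unavoidable (a small illustrative Python sketch):

    from itertools import combinations

    def hamming(u, v):
        # Strings are assumed to have equal length.
        return sum(a != b for a, b in zip(u, v))

    def closest_pair(strings):
        best, best_pair, queries = float("inf"), None, 0
        for u, v in combinations(strings, 2):
            queries += 1                   # one pairwise distance query
            d = hamming(u, v)
            if d < best:
                best, best_pair = d, (u, v)
        return best_pair, best, queries

    print(closest_pair(["00110", "01110", "11001", "00011"]))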
Last Change: 4 Dec 2012 / marty@cs.rochester.edu