Chronological list of talks in the 2001-2002 academic year:
Speaker: Leen Torenvliet,
Department of Computer Science,
University of Amsterdam, The Netherlands
Topic: Enumerations of the Kolmogorov function.
The Kolmogorov function C is the function that, on input x, returns the length of the smallest program (in some fixed programming system) that prints x. There is an easy proof that shows that C is Turing hard for r.e., i.e., with C as an oracle the halting problem can be computed. What if a function f on input x returns several values, one of which is guaranteed to be C(x)? How hard is it to compute such an f? Evidently, the hardness of f may depend on the number of values it returns. If f is allowed to return about as many values as the length of x plus a constant, then for the most interesting, flexible type of programming systems defining C (namely, those having a certain universality-of-compression property), f is trivially computable, since for such systems C(x) will always be one of 1,...,|x|+c, where c is a constant depending on the programming system. (The fact that some strings have only longer names is a side effect of the tremendous compression achieved by these systems on other strings.)
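To make the triviality concrete, here is a minimal Python sketch of that enumerator (the constant c is a placeholder, not anything specific from the talk):

def trivial_enumerator(x: str, c: int = 1) -> list[int]:
    """Return a list of values guaranteed to contain C(x), for programming
    systems where C(x) always lies in {1, ..., |x| + c}."""
    return list(range(1, len(x) + c + 1))

# For a 5-character string the enumerator returns |x| + c values,
# one of which must be C(x).
print(trivial_enumerator("hello"))   # [1, 2, 3, 4, 5, 6]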
In the talk we will look at several properties of functions f as described above, which we will call enumerators.
We also consider time-bounded versions of enumerators. It turns out that time-bounded enumeration gives a new characterization of the class SymP defined by Russell and Sundaram. Also, enumerating log(n) values of the space-bounded Kolmogorov function is hard for PSPACE.
Speaker: Till Tantau,
Fachbereich Informatik,
Technische Universitaet Berlin, Germany
Topic: Comparing Verboseness for Finite Automata and
Turing Machines
A language is called \emph{$(m,n)$-verbose} if there exists a Turing machine that enumerates for any $n$~words at most $m$ possibilities for their characteristic string. We study \emph{$(m,n)$-fa-verbose} languages where instead of a Turing machine a finite automaton is used. Surprisingly, Turing machines and finite automata behave in exactly the same way with respect to inclusion of verboseness classes: all $(m,n)$-verbose languages are $(h,k)$-verbose, iff all $(m,n)$-fa-verbose languages are $(h,k)$-fa-verbose. A corollary of this structural equivalence is that the \emph{Nonspeedup Theorem} also holds for finite automata. We discuss consequences of this theorem for protocol testing using finite automata.
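A toy illustration of the definition (an example for orientation, not from the talk): any decidable language is trivially $(1,n)$-verbose, since a single candidate characteristic string can simply be computed. In Python:

def in_L(w: str) -> bool:
    """Membership in a sample language L: words of even length."""
    return len(w) % 2 == 0

def enumerate_candidates(words: list[str]) -> list[str]:
    """Enumerate candidate characteristic strings for the given words.
    Here membership is decidable, so one candidate suffices; the
    interesting cases have m < 2^n with membership not computable
    (or, for fa-verboseness, not regular)."""
    return ["".join("1" if in_L(w) else "0" for w in words)]

print(enumerate_candidates(["a", "ab", "abc"]))  # ['010']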
Speaker: Mohammed Zaki,
Department of Computer Science, Rensselaer Polytechnic Institute 
Topic:
Efficiently Mining Maximal Frequent Itemsets
I will present GenMax, a backtrack search based algorithm for mining maximal frequent itemsets. GenMax uses a number of optimizations to prune the search space. It uses a novel technique called progressive focusing to perform maximality checking, and diffset propagation to perform fast frequency computation. Systematic experimental comparison with previous work indicates that different methods have varying strengths and weaknesses based on dataset characteristics. We found GenMax to be a highly efficient method to mine the exact set of maximal patterns.
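For concreteness, the following is a much-simplified backtracking miner in the same spirit. This is not GenMax itself: it omits progressive focusing and diffsets, recomputes supports naively, and does subset-based maximality checking.

def mine_maximal(transactions: list, min_sup: int) -> list:
    items = sorted({i for t in transactions for i in t})

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t)

    maximal = []

    def backtrack(current, candidates):
        extended = False
        for idx, item in enumerate(candidates):
            ext = current | {item}
            if support(ext) >= min_sup:   # anti-monotone pruning
                extended = True
                backtrack(ext, candidates[idx + 1:])
        # No frequent extension remains; keep 'current' unless some
        # already-found maximal itemset strictly contains it.
        if not extended and current and not any(current < m for m in maximal):
            maximal.append(current)

    backtrack(frozenset(), items)
    return maximal

T = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
print(mine_maximal(T, 2))   # the maximal frequent itemsets {a,b}, {a,c}, {b,c}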
Speaker: Sue Whitesides, School of Computer Science,
McGill University  
Topic: Fixed Parameter Tractability Results for Crossing
Minimization Problems in Layered Graphs
A major initiative for dealing with (seemingly) intractable problems has been that of Fixed Parameter Tractability, proposed by Downey and Fellows. Roughly, the basic idea is to identify a parameter of the problem to be solved that is small in typical instances of interest, and then to design algorithms whose running time is bounded by a ``good'' function of the problem size, multiplied by a ``bad'' function of the parameter value. Thus for sufficiently small values of the parameter, one can hope to obtain useful algorithms for instances of problems that are NP-hard in general. This talk will explore how we have applied basic techniques from Fixed Parameter Tractability to obtain new results for Layered Graph Drawing, an area abounding with NP-complete problems.
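The textbook example (for orientation, not from the talk itself) is the bounded search tree for k-Vertex-Cover, which runs in time O(2^k * m): a ``bad'' function of the parameter k times a ``good'' function of the input size. A minimal sketch:

def has_vertex_cover(edges: list, k: int) -> bool:
    """Return True iff the graph has a vertex cover of size <= k."""
    if not edges:
        return True
    if k == 0:
        return False
    u, v = edges[0]
    # Any cover must contain u or v; branch on both choices,
    # giving a search tree of depth at most k.
    return (has_vertex_cover([e for e in edges if u not in e], k - 1) or
            has_vertex_cover([e for e in edges if v not in e], k - 1))

# A 4-cycle has a vertex cover of size 2 but not of size 1.
cycle = [(1, 2), (2, 3), (3, 4), (4, 1)]
print(has_vertex_cover(cycle, 2), has_vertex_cover(cycle, 1))  # True False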
Speaker: Alina Beygelzimer
Topic: On the treewidth of graphs and its relation to
the complexity of inference in Bayesian networks
This work is motivated by the desire to learn Bayesian networks that allow efficient inference, rather than learning Bayesian networks first, without any consideration of their inference complexity, and then dealing with this complexity by means of approximation. The complexity of inference is easily seen to be exponential in the treewidth of the underlying graph. We will show that cases in which two networks have similar description complexity (i.e., the number of independent parameters needed to specify the network) and represent close distributions, yet have drastically different treewidths, occur quite frequently. Moreover, we exhibit two large natural families of networks that have this property. This demonstrates the importance of using some measure of inference complexity as part of the model selection criteria. We also investigate relationships between the treewidth and other structural properties of graphs that suggest efficient probabilistic tests for quickly estimating the treewidth during learning. (Joint work with Irina Rish, IBM Watson.)
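To see where the exponential dependence comes from, one can simulate elimination on the graph: the largest neighborhood joined at any step bounds the size of the tables that variable elimination must build, and minimizing this width over elimination orders gives exactly the treewidth. A small sketch (toy code; the examples are hypothetical):

def induced_width(adj: dict, order: list) -> int:
    """Width of an elimination order; the minimum over all orders
    is the treewidth, and table sizes grow as 2^(width+1) for
    binary variables."""
    adj = {v: set(ns) for v, ns in adj.items()}
    width = 0
    for v in order:
        nbrs = adj.pop(v)
        width = max(width, len(nbrs))
        for a in nbrs:                 # join v's neighbors into a clique
            adj[a] |= nbrs - {a}
            adj[a].discard(v)
    return width

chain = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
star = {"hub": {"x", "y", "z"}, "x": {"hub"}, "y": {"hub"}, "z": {"hub"}}
print(induced_width(chain, ["a", "b", "c", "d"]))    # 1: chains are easy
print(induced_width(star, ["hub", "x", "y", "z"]))   # 3: a bad order joins everything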
Speaker: Yin-he Cheng
Topic: Two new approaches for Gene functionality discovery
With the study of gene expression patterns through microarray data, information about gene functionality can be discovered. Widely used methods are gene-clustering methods such as self-organizing maps and K-means. Disadvantages of these methods are that they force gene expression data into a fixed number of predetermined clustering structures, and that they cannot efficiently take advantage of heterogeneous data. Two new gene functionality discovery approaches aimed at these two disadvantages will be presented. One approach is "percolation clustering", introduced in [1]. This method is based on mutual connectivity of the patterns, and aims to reveal the natural tendency of the data to cluster, in analogy to the physical phenomenon of percolation. The other approach is introduced in [2]. Instead of clustering, this approach infers gene functional classifications by synthesizing information from disparate types of genomic data; a support vector machine (SVM) with a heterogeneous kernel function is used. References: [1] Sasik, R., Hwa, T., Iranfar, N., and Loomis, W.F. (2001). Percolation clustering: a novel approach to the clustering of gene expression patterns in Dictyostelium development. Pacific Symposium on Biocomputing, ed. R. B. Altman, A. K. Dunker, L. Hunter, K. Lauderdale, and T. E. Klein, World Scientific, Singapore, pp. 335-347. [2] Pavlidis, P., Weston, J., Cai, J., and Grundy, W.N. (2001). Gene functional classification from heterogeneous data. Proceedings of RECOMB 2001.
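A rough Python sketch in the spirit of the percolation idea (simplified; the similarity measure and threshold are placeholders rather than the exact method of [1]): connect two expression profiles whenever their correlation exceeds a threshold, and let the connected components, rather than a preset number of clusters, define the clustering.

import numpy as np

def percolation_clusters(profiles: np.ndarray, threshold: float) -> list:
    n = len(profiles)
    sim = np.corrcoef(profiles)        # pairwise Pearson correlation of rows
    parent = list(range(n))            # union-find over genes

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if sim[i, j] >= threshold:  # "percolation" edge
                parent[find(i)] = find(j)
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

genes = np.array([[1.0, 2.0, 3.0], [1.1, 2.1, 3.2], [3.0, 1.0, 2.0]])
print(percolation_clusters(genes, threshold=0.95))  # [[0, 1], [2]]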
Speaker: Jonathan Tomer 
Topic: A Survey of Nonuniform Complexity, Selectivity, and Output
Multiplicity of Multi-valued NP Functions
Nonuniform (or circuit) complexity is a measure of complexity of languages given a certain degree of "advice," which is limited in nature (in particular, the advice must be a function solely of the \emph{length} of the input). Selectivity is a different measure: a set is C-selective if there is a function in the class C that can select which of two strings is "more likely" to be in the set. Results relating these two concepts date back to 1983. More recently, this relationship has proven useful in proving that unless the polynomial-time hierarchy collapses to an extremely low level, no NP machine can compute a *unique* satisfying assignment to a satisfiable formula (even if it is allowed to reach its solution on multiple independent accepting paths). I will present a survey of these concepts and several major results in the area.
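A standard toy example of selectivity (for orientation, not from the talk): "left cuts". Fix a threshold; the set of all numbers at most that threshold has the polynomial-time selector min, since whenever either of two inputs is in the set, the smaller one certainly is.

T = 0b101101  # a hidden threshold; membership in A may itself be hard

def selector(x: int, y: int) -> int:
    """A selector for A = { x : x <= T }: whichever of x, y is smaller
    is at least as likely to be in A, and computing min needs no
    knowledge of T at all."""
    return min(x, y)

# If either argument belongs to A, the returned value does too.
assert selector(0b100, 0b111000) == 0b100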
Speaker: You 
Topic:
"Problem show and tell'' meeting
Following semi-tradition, the January 21 meeting will be a ``problem show and tell,'' which means you should (if you are a theory person) come in with an open problem to explain and discuss. We'll probably start with the graduate students. ;-}
Speaker: Louis A. Deaett 
Topic: Boolean Circuit Complexity: Fundamentals and Directions
The Boolean circuit family is a computational model that, in a certain sense, offers an alternative to the Turing machine: Boolean circuit families of bounded size are a model of non-uniform computation known to be equivalent to non-uniform Turing machines with bounded advice. Moreover, we can apply proof techniques unique to the circuit model in order to obtain some interesting complexity-theoretic results. This talk will present the basics of Boolean circuits along with some fundamental results and techniques, and will explore the relationship between circuit complexity theory and other areas of computational complexity theory. It will examine what we currently know about circuit complexity, as well as some open issues and their implications.
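To fix the model, here is a minimal Python evaluator for Boolean circuits over AND/OR/NOT (the gate encoding is an arbitrary choice for illustration; a circuit family has one such circuit per input length, and "size" counts the gates):

def eval_circuit(gates: list, inputs: list) -> bool:
    """gates: tuples (op, *argument indices); an index i < len(inputs)
    refers to input wire i, larger indices to earlier gates.
    Returns the value of the last gate."""
    vals = list(inputs)
    for op, *args in gates:
        a = [vals[i] for i in args]
        if op == "AND":
            vals.append(all(a))
        elif op == "OR":
            vals.append(any(a))
        elif op == "NOT":
            vals.append(not a[0])
    return vals[-1]

# XOR of two inputs: (x0 OR x1) AND NOT (x0 AND x1)
xor = [("OR", 0, 1), ("AND", 0, 1), ("NOT", 3), ("AND", 2, 4)]
print([eval_circuit(xor, [a, b]) for a in (0, 1) for b in (0, 1)])
# [False, True, True, False]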
Speaker: Mayur Thakur
Topic: The Hardness of Counting Properties: An Introduction and Overview
What can programs decide about other programs? Can a program decide whether an arbitrary program halts on a given input? Can a program decide whether an arbitrary program accepts a finite language? The answers to the above questions are "no" and "no." In fact, programs are so nontransparent that no program can decide anything interesting about them. This is an informal way of stating the well-known result known as Rice's Theorem [Ric53].
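For readers who have not seen it, the standard reduction behind Rice's Theorem can be sketched in a few lines of Python (illustrative only; it assumes 'witness' is a program having the semantic property in question and that the everywhere-undefined program lacks it):

def reduction(p, x, witness):
    """Build from (p, x) a program q that has the property
    iff p halts on x."""
    def q(y):
        p(x)               # runs forever if p does not halt on x
        return witness(y)  # otherwise q computes exactly what witness computes
    # A decider for the property applied to q would therefore
    # decide the halting problem, which is impossible.
    return q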
Borchert and Stephan [BS00] started the explicit search for complexity-theoretic analogs of Rice's Theorem. They proved that any nontrivial property of Boolean circuits that depends solely on the number of satisfying assignments of the circuits is hard for UP, the unambiguous version of NP. Hemaspaandra and Rothe [HR00] and, recently, Hemaspaandra and Thakur [HT02] each improved the previously known lower bound (respectively, from unambiguous nondeterminism to constant-ambiguity nondeterminism, and from constant-ambiguity nondeterminism to polynomial-ambiguity nondeterminism).
Furthermore, Hemaspaandra and Thakur [HT02] prove, via an oracle construction using the Party Lemma of Cai et al. [CGH+80], that the polynomial-ambiguity lower bound cannot, in terms of the exact type of hardness reduction used, be improved much using relativizable techniques. This talk will present that proof, after first giving an overview of the short yet interesting history of Rice-style theorems in complexity theory.
Speaker: Yin-he Cheng 
Topic: Two new approaches for gene functionality discovery:
Part II
With the study of gene expression patterns through microarray data, information about gene functionality can be discovered. Widely used methods are gene-clustering methods such as self-organizing maps and K-means. Disadvantages of these methods are that they force gene expression data into a fixed number of predetermined clustering structures, and that they cannot efficiently take advantage of heterogeneous data. Two new gene functionality discovery approaches aimed at these two disadvantages will be presented. One approach is "percolation clustering", introduced in [1]. This method is based on mutual connectivity of the patterns, and aims to reveal the natural tendency of the data to cluster, in analogy to the physical phenomenon of percolation. The other approach is introduced in [2]. Instead of clustering, this approach infers gene functional classifications by synthesizing information from disparate types of genomic data; a support vector machine (SVM) with a heterogeneous kernel function is used.
Speaker: Christopher Homan 
Topic: Interval Size Functions
We study functions that map their inputs to the sizes of intervals over partial and total orders having polynomial-time adjacency checks and/or polynomial-time order checks.
For some functions, such an interval size description is very natural. For example, the function #DIV on input n returns the number of nontrivial divisors of n. This function computes the size of the interval (1, n) in the order <_d, where x <_d y iff x < y and both x and y divide n.
Clearly, this order has polynomial-time order checks, but what about polynomial-time adjacency checks? We show that if this order has polynomial-time adjacency checks then primality testing is in P.
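As a concrete illustration (toy code, not from the talk), here is #DIV together with the polynomial-time order check; the adjacency check, by contrast, must decide whether some divisor of n lies strictly between two given ones, which is the step tied to primality above.

def less_d(x: int, y: int, n: int) -> bool:
    """Order check for <_d: x <_d y iff x < y and both divide n
    (clearly polynomial time)."""
    return x < y and n % x == 0 and n % y == 0

def num_div(n: int) -> int:
    """#DIV(n): the number of nontrivial divisors of n, i.e. the size
    of the interval (1, n) under <_d (brute force, for illustration)."""
    return sum(1 for d in range(2, n) if n % d == 0)

print(num_div(12))                          # 4: the divisors 2, 3, 4, 6
print(less_d(2, 6, 12), less_d(2, 5, 12))   # True False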
It turns out that using interval sizes to describe a function is surprisingly natural. We define complexity classes that are very close to important, previously studied complexity classes such as #P and FPSPACE(poly). We also show that separation hypotheses between such interval size function classes correspond exactly to separation hypotheses between other well-known complexity classes.
Speaker: Rahul Tripathi 
Topic: The Isolation Technique and its Applications
It would have been really nice if all languages in NP were recognized by NP machines with at most one accepting computation path, that is, if NP = UP. Unfortunately, it is known that relativizable proof techniques cannot settle the NP = UP question. However, using randomness we can relate all languages in NP to the problem of detecting unique solutions. Valiant and Vazirani [VV86] showed that a Boolean formula F can be reduced with high probability to another Boolean formula F' having a unique satisfying assignment, provided that the initial formula F is satisfiable. Such a reduction method is known as an isolation technique.
This talk will present one of the isolation techniques, often called the Isolation Lemma, established by Mulmuley, Vazirani, and Vazirani [MVV87]. I will show how the Isolation Lemma is used to "randomly reduce" NP to the problem of detecting unique solutions. Using this result, I will derive the celebrated theorem of S. Toda [Toda91] that PP is as hard as the polynomial-time hierarchy (PH). Another consequence of Toda's theorem is that all languages in the polynomial-time hierarchy are decidable in polynomial time with at most one query to a #P oracle.
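A quick empirical demo of the Isolation Lemma's statement (not a proof; the set family below is an arbitrary stress test): give each element of an m-element universe an independent uniform weight in {1, ..., 2m}; then, whatever the family of subsets, the minimum-weight set is unique with probability at least 1/2.

import random
from itertools import combinations

m = 8
universe = range(m)
family = [frozenset(s) for s in combinations(universe, 4)]  # all 4-subsets

def min_weight_is_unique() -> bool:
    w = {e: random.randint(1, 2 * m) for e in universe}
    weights = sorted(sum(w[e] for e in s) for s in family)
    return weights[0] < weights[1]   # strict gap means a unique minimum

trials = 10_000
hits = sum(min_weight_is_unique() for _ in range(trials))
print(f"unique minimum in {hits / trials:.2%} of trials (lemma: >= 50%)")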
Speaker: Junling Hu 
Topic: Stochastic Games and Learning Best Strategy
In this talk, I will discuss the framework of stochastic games, which models agent interaction in terms of dynamic games. Such games have stochastic state transitions and assume that time is discrete. Each agent in the game faces a Markov decision process, so the optimal policy of an agent is contingent on the other agents' policies. I will discuss the concept of Nash equilibrium, which requires that each agent's policy be a best response to the joint policy of the others, so that no agent can gain by unilaterally deviating from the equilibrium. Based on this solution concept, I will discuss the multiagent reinforcement learning method and its algorithm. In particular, I will focus on the convergence proof and its implications. Finally, I will distinguish stationary from non-stationary strategies, and discuss future research on learning non-stationary optimal strategies.
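For orientation, below is the standard single-agent Q-learning update in Python (a generic textbook sketch, not the multiagent algorithm of the talk, which replaces the max over own actions with an equilibrium value over joint actions):

import random

def q_learning(states, actions, step, episodes=500, alpha=0.1,
               gamma=0.9, eps=0.1):
    """Tabular Q-learning; 'step' maps (state, action) to (next state, reward)."""
    Q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s = random.choice(states)
        for _ in range(20):
            a = (random.choice(actions) if random.random() < eps
                 else max(actions, key=lambda b: Q[(s, b)]))
            s2, r = step(s, a)
            target = r + gamma * max(Q[(s2, b)] for b in actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q

# Toy usage: a two-state chain where taking action 1 in state 0 pays off.
def step(s, a):
    return (1 - s if a == 1 else s), (1.0 if (s, a) == (0, 1) else 0.0)

Q = q_learning([0, 1], [0, 1], step)
print(max(Q, key=Q.get))   # (0, 1): the rewarded state-action pair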
Speaker: Kai Salomaa, Queen's University, Kingston, Ontario, Canada  
Topic: Shuffle decompositions of regular languages
We say that a language L has a (non-trivial) shuffle decomposition if L can be expressed as the shuffle of two languages, neither of which is the singleton language consisting of the empty word.
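To make the operation concrete, here is a short Python sketch computing the shuffle of two words, i.e. all interleavings that preserve the letter order within each word; the shuffle of two languages is the union of these sets over all pairs of words.

def shuffle(u: str, v: str) -> set:
    """All interleavings of u and v, preserving the order within each."""
    if not u:
        return {v}
    if not v:
        return {u}
    return ({u[0] + w for w in shuffle(u[1:], v)} |
            {v[0] + w for w in shuffle(u, v[1:])})

print(sorted(shuffle("ab", "c")))  # ['abc', 'acb', 'cab']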
We consider the question of determining whether or not a given regular language admits a shuffle decomposition. In spite of the general consensus that ``everything'' should be decidable for regular languages (there are known exceptions, but these are rather artificial problems), even the decidability of this question remains open. We discuss algebraic techniques that give elegant decidability proofs for certain subclasses of regular languages. The problem remains open for general regular languages.
(This is joint work with C. Campeanu and S. Vagvolgyi.)