Invited Speakers and Abstracts

Tutorials (joint CiE/UCNC 2013)

Gilles Brassard (Université de Montréal)

TUTORIAL 1: Rise of the Quantum Age
Quantum mechanics is perhaps the most profound revolution ever to occur in our understanding of Nature. Although it was born with the last century, it appeared for many decades to be just a game, played by physicists strictly for the sake of advancing pure knowledge, without any impact on everyday life. And then, quantum mechanics gave birth to the transistor, and society was transformed forever. The physicists’ “game” ushered in the Information Age, which is the signature of the 20th century, just as
the 19th century was the Machine Age. But we are poised to experience a second quantum revolution, in which the full power of the quantum world will be unleashed in ways never before thought possible, transmitting and processing information with unconditionally secure communication and
computers powerful beyond imagination. Indeed, the 21st century will go down in history as the Quantum Age. No prior knowledge of quantum mechanics will be expected.

TUTORIAL 2: Cryptography in the Quantum Age
Quantum Cryptography is the first near-term practical application of the emerging field of quantum information science. Access to a quantum channel allows two parties who share only a short prior secret to exchange messages with provably perfect confidentiality under the nose of an eavesdropper whose computational power is unlimited and whose technology is restricted only by the accepted laws of physics. But what if the legitimate parties are purely classical whereas the eavesdropper
is endowed with a full-scale quantum computer? We shall see that a modest level of computationally secure confidentiality remains provably possible in the random oracle model. No prior knowledge of cryptography will be expected.
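As a rough illustration of the kind of quantum key distribution underlying this tutorial, here is a minimal classical sketch of BB84-style basis sifting (an illustrative toy added for this write-up, not part of the tutorial material; it ignores eavesdropping, noise, and error correction): the two parties keep only the positions where their randomly chosen bases happen to agree.

    import random

    def bb84_sift(n_qubits=32, seed=0):
        """Toy BB84-style sifting: keep only the positions where Alice's and
        Bob's randomly chosen bases coincide (no noise, no eavesdropper)."""
        rng = random.Random(seed)
        alice_bits  = [rng.randint(0, 1) for _ in range(n_qubits)]
        alice_bases = [rng.choice("+x") for _ in range(n_qubits)]   # encoding bases
        bob_bases   = [rng.choice("+x") for _ in range(n_qubits)]   # measurement bases
        # When the bases match, Bob's measurement reproduces Alice's bit;
        # when they differ, his outcome is a uniformly random bit (discarded anyway).
        bob_bits = [a if ab == bb else rng.randint(0, 1)
                    for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
        sifted = [(a, b) for a, ab, bb, b in zip(alice_bits, alice_bases, bob_bases, bob_bits)
                  if ab == bb]
        key = [a for a, _ in sifted]
        assert key == [b for _, b in sifted]   # identical in this noiseless toy model
        return key

    print(bb84_sift())   # roughly half of the positions survive sifting

The security of the real protocol rests on the fact that measuring the transmitted qubits in the wrong basis disturbs them detectably, which no classical toy model of this kind can capture.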

TUTORIAL 3: Quantum Communication Complexity and Pseudotelepathy
Communication complexity is an area of classical computer science that aims at quantifying the amount of communication necessary to solve distributed computational problems. Quantum communication complexity uses quantum mechanics to reduce the amount of communication that would be classically required. In extreme cases, the effect of transmitting quantum bits cannot be achieved classically short of transmitting an exponentially larger number of bits. Pseudotelepathy is a surprising application of quantum information science to communication complexity.
Thanks to entanglement, perhaps the most nonclassical manifestation of quantum mechanics, two or more quantum players can accomplish some distributed tasks with no need for communication whatsoever in contexts that would be provably impossible for classical players.
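A standard concrete instance of pseudotelepathy is the three-player GHZ game. The brute-force check below (a small sketch added for illustration, not drawn from the tutorial) confirms that no classical deterministic strategy wins the game on all allowed inputs, whereas players sharing the entangled state (|000> + |111>)/sqrt(2) can win every time without communicating.

    from itertools import product

    # GHZ game: the players receive bits (x, y, z) with x ^ y ^ z == 0 and must,
    # without communicating, output bits (a, b, c) such that a ^ b ^ c == (x or y or z).
    INPUTS = [(x, y, z) for x, y, z in product((0, 1), repeat=3) if x ^ y ^ z == 0]

    # A deterministic strategy for one player maps the input bit to an output bit,
    # so it is a pair (output on 0, output on 1); each player has 4 such strategies.
    strategies = list(product((0, 1), repeat=2))

    best = max(sum((fa[x] ^ fb[y] ^ fc[z]) == (x | y | z) for x, y, z in INPUTS)
               for fa, fb, fc in product(strategies, repeat=3))
    print(best, "out of", len(INPUTS))   # prints "3 out of 4": no classical strategy always wins

Shared randomness does not help, since it is just a mixture of deterministic strategies, so the gap between winning 3 out of 4 cases and winning with certainty is exactly the nonclassical effect described above.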

Grzegorz Rozenberg

TUTORIAL: Processes inspired by the functioning of living cells: a Natural Computing approach

Natural Computing is an interdisciplinary research field that investigates both human-designed computing inspired by nature and computation taking place in nature; that is, it investigates models, computational techniques, and computational technologies inspired by nature, and it also investigates phenomena and processes taking place in nature in terms of information processing.
One research area within the second strand is the computational understanding of the functioning of the living cell. We view this functioning in terms of formal processes resulting from interactions between (a huge number of) individual reactions. These interactions are driven by two mechanisms, facilitation and inhibition: reactions may (through their products) facilitate or inhibit each other.
We present a formal framework for the investigation of these interactions.  We motivate this framework by explicitly stating a number of assumptions that hold for processes resulting from these interactions, and we point out that these assumptions are very different from the ones underlying traditional models of computation. We discuss some basic properties of these processes, and demonstrate how to capture and analyse, in our formal framework, some notions related to cell biology and biochemistry.
Research topics in this framework are motivated by biological considerations as well as by the need to understand the underlying computations. The models we discuss turned out to be novel and attractive from the theory of computation point of view.
The lectures are of interest to mathematicians and computer scientists interested in formal models of computation, as well as to bioinformaticians, biochemists, and biologists interested in a foundational/formal understanding of biological processes. They are tutorial in style and self-contained. In particular, no prior knowledge of biochemistry or cell biology is required.
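To give a concrete flavour of the facilitation/inhibition framework sketched above, here is a minimal toy implementation in the spirit of reaction systems (the particular reactions and initial state are invented for illustration; the formalism presented in the tutorial is more general): a reaction fires in a state when all of its reactants are present and none of its inhibitors are, and the next state consists exactly of the products of the reactions that fire.

    # Minimal reaction-system-style sketch (illustrative only).
    # A reaction is a triple (reactants, inhibitors, products) over a finite set of entities.
    REACTIONS = [
        ({"a"},      {"c"},  {"b"}),   # a produces b, unless c is present (inhibition)
        ({"b"},      set(),  {"c"}),   # b produces c (facilitation via products)
        ({"a", "c"}, set(),  {"a"}),   # a is sustained only while c is around
    ]

    def enabled(reaction, state):
        reactants, inhibitors, _ = reaction
        return reactants <= state and not (inhibitors & state)

    def step(state, reactions=REACTIONS):
        """Next state = union of the products of all enabled reactions.
        Entities that are not produced simply vanish (no permanence)."""
        nxt = set()
        for reaction in reactions:
            if enabled(reaction, state):
                nxt |= reaction[2]
        return nxt

    state = {"a"}
    for t in range(5):
        print(t, sorted(state))
        state = step(state)

Running this prints the process a -> b -> c -> (empty set), illustrating how the products of one reaction enable or block others without any notion of persistent resources.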

Plenary Lectures

Ulle Endriss (University of Amsterdam) Collective decision making is the process of mapping the individual views of several agents into a joint decision. The need for collective decision making mechanisms is abundant, not just in the realm of politics, but also for a wide range of scientific and technological applications. These include, for instance, logistics, grid computing, and recommender systems. The alternatives to be decided upon often have a combinatorial structure: an alternative is characterised by a tuple of variables, each ranging over a finite domain. Classical approaches to collective decision making, developed in social choice theory, do not take the computational limitations induced by the combinatorial nature of the problem into account. For instance, if we are asked to elect a committee consisting of k representatives, choosing from a pool of n candidates, then we are in fact faced with a social choice problem with (n choose k) alternatives. Asking each voter for their full preferences over these (n choose k) alternatives may not be feasible in practice. Similarly, if we have to choose between accepting or rejecting each of n different propositions, then we are dealing with a social choice problem with 2^n possible outcomes.
In this talk I will report on a number of recent developments in the area of collective decision making in combinatorial domains. This includes the study of languages for the compact representation of preferences [2, 4], the design of voting rules for multi-issue elections [1], and the analysis of the computational complexity of decision problems arising in judgment aggregation [3].
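To make the combinatorial blow-up in the two examples above concrete, a small computation (added here purely for illustration) shows how quickly the number of alternatives grows.

    from math import comb

    # Committee election: choose k representatives from a pool of n candidates.
    for n, k in [(10, 3), (20, 5), (50, 7)]:
        print(f"{n} candidates, committee of {k}: {comb(n, k):,} alternatives")

    # Multi-issue decision: accept or reject each of n propositions.
    for n in [10, 20, 40]:
        print(f"{n} binary issues: {2**n:,} possible outcomes")

Already for 50 candidates and a committee of 7 there are nearly 100 million alternatives, so eliciting a full preference order from each voter is hopeless; this is what motivates the compact preference representation languages and specialised voting rules mentioned above.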

Lance Fortnow (Georgia Institute of Technology) EACSL lecture

I recently completed a general audience book on the P versus NP problem. Writing the book has forced me to step back and take a fresh look at the question from a non-technical point of view. There are really two different P versus NP problems. One is the formal mathematical question, first formulated by Steve Cook in 1971 and listed as one of the six unresolved millennium problems by the Clay Mathematics Institute. The other P versus NP problem is the one that interests physicists, biologists, economists and the mathematically-curious general public. This talk will explore both faces of the P versus NP problem and what it means for mathematics and computer science moving forward.

Anna Karlin (University of Washington) APAL lecture

Optimizing in a Strategic World: An Introduction to Algorithmic Game Theory

The goal of discrete optimization is to design systems with optimal or near-optimal performance. In the age of the Internet, however, we must take into account the fact that many of the users of our systems are driven by an economic goal, and interact with varying degrees of collaboration and
competition. Moreover, the strategic nature of interactions in online dynamic marketplaces means that the roll-out of a new algorithm designed with the expectation of improved performance can end up degrading performance due to unanticipated responses by strategic users.
The field of algorithmic game theory addresses this issue, as well as a wide variety of other problems at the intersection of game theory, economics and theoretical computer science. In this talk, we survey recent research and open problems in this field.
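A classical toy example of how selfish behaviour can degrade system performance is Pigou's two-road network. The short computation below (an illustrative sketch, not taken from the talk) compares the cost of the selfish equilibrium with the best coordinated traffic split.

    # Pigou's example: a unit mass of traffic chooses between two parallel roads.
    # Road 1 has constant latency 1; road 2 has latency x, the fraction of traffic on it.
    # Selfish users all take road 2 (its latency never exceeds 1), for an average cost of 1.

    def social_cost(x):
        """Average latency when a fraction x uses road 2 and 1 - x uses road 1."""
        return x * x + (1 - x) * 1

    equilibrium_cost = social_cost(1.0)                          # everyone on road 2
    optimum_cost = min(social_cost(i / 1000) for i in range(1001))
    print("equilibrium cost :", equilibrium_cost)                # 1.0
    print("optimal cost     :", round(optimum_cost, 4))          # 0.75, at a 50/50 split
    print("price of anarchy :", round(equilibrium_cost / optimum_cost, 4))   # 4/3

The ratio 4/3 between the equilibrium cost and the optimum, known as the price of anarchy, is one standard way of quantifying the performance loss caused by strategic users.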

Bernard Moret: With the enormous and growing variety of computational problems arising in the analysis of biological data, much of what gets published is more speculative than grounded in solid mathematics, statistics, machine learning, or algorithms. Of all areas of computational biology, phylogenetic inference, the reconstruction of the evolutionary history (typically represented as a tree) of a group of organisms, is perhaps the most daring, in the sense that its results may be the most challenging to evaluate.  However, because phylogenetic inference has become one of the most widespread research tools throughout the life sciences, assessing the accuracy, reproducibility,
and robustness of its predictions is an important, if often neglected, task.
In this talk, I will focus on the assessment of robustness for phylogenetic inference. Phylogenetic inference is a well-defined problem with a large variety of well-described algorithms, and one for which accepted methods of robustness assessment exist, namely likelihood ratio tests and the phylogenetic bootstrap. Bootstrapping is a resampling method first proposed in 1979 by Efron to assess the accuracy of statistical estimates; six years later, Felsenstein described a procedure based on bootstrapping to assess confidence in an inferred phylogeny. This phylogenetic bootstrap is now a required component of any phylogenetic analysis in the systematics community. However, it applies only to sequence data, at a time when many other forms of biological data are being produced, and it remains poorly understood even by most of its practitioners.
I will briefly review bootstrapping and the phylogenetic bootstrap and show how to develop a bootstrapping procedure for genomic rearrangement data (one of the new types of data now entering mainstream use), in the process gaining new insights into bootstrapping. I will then look at the effect of data selection on the bootstrap scores, focusing on so-called rogue taxa: taxa that are very difficult to place in the tree and, as we shall see, can cause poor bootstrap scores to be attributed to edges that are in fact perfectly correct. Thus even a well-accepted procedure that has been in use for over 20 years can be misleading in its assessment of robustness unless one considers very carefully its underlying assumptions and the precise meaning of its results.
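For readers unfamiliar with the procedure, the following sketch shows the core of the phylogenetic bootstrap in schematic Python (an illustration written for this summary; infer_tree is a placeholder for whatever inference method is used, not a real library call): resample alignment columns with replacement, re-infer a tree on each replicate, and report for each edge the fraction of replicate trees that contain it.

    import random
    from collections import Counter

    def bootstrap_support(alignment, infer_tree, replicates=100, seed=0):
        """Felsenstein-style bootstrap, sketched.

        alignment : dict mapping taxon -> sequence (all sequences of equal length)
        infer_tree: user-supplied function mapping an alignment to the set of
                    bipartitions (frozensets of taxa) of the tree it infers;
                    this is where the actual phylogenetic inference happens.
        Returns   : for each bipartition, the fraction of bootstrap replicates
                    whose inferred tree contains it (its bootstrap support).
        """
        rng = random.Random(seed)
        n_sites = len(next(iter(alignment.values())))
        counts = Counter()
        for _ in range(replicates):
            # Resample sites (columns) with replacement to build a replicate data set.
            cols = [rng.randrange(n_sites) for _ in range(n_sites)]
            replicate = {taxon: "".join(seq[c] for c in cols)
                         for taxon, seq in alignment.items()}
            counts.update(infer_tree(replicate))
        return {split: k / replicates for split, k in counts.items()}

In this picture, rogue taxa manifest themselves as edges whose support collapses not because the edge is wrong, but because a few hard-to-place taxa jump around between replicates.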

Mariya Soskova (Sofia University) A fundamental goal of computability theory is to understand the way that objects relate to each other in terms of their information content. We wish to understand the relative information content between sets of natural numbers: how one subset Y of the natural numbers can be used to specify another one, X. This specification can be computational, or arithmetic, or even by the application of a countable sequence of Borel operations. Each notion in this spectrum gives rise to a different model of relative computability. Which of these models best reflects real-world computation remains open to question.
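Two of the specification notions mentioned above can be written down precisely; the following lines (a standard reminder added for orientation, not part of the abstract) record the computational and the arithmetic case.

    % Turing reducibility ("computational" specification): X is computable relative to Y,
    % i.e. some oracle Turing machine M with oracle Y decides membership in X.
    X \le_T Y \iff \exists M \; (M^Y = \chi_X)

    % Arithmetic specification: X is definable from Y in first-order arithmetic;
    % by Post's theorem this is equivalent to computing X from some finite Turing jump of Y.
    X \le_a Y \iff X \le_T Y^{(n)} \text{ for some } n \in \mathbb{N}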

Endre Szemerédi: Various Regularity Lemmas in Graphs and Hypergraphs
We are going to formulate and discuss the philosophy of these lemmas and their applications in property testing and in the design of polynomial-time approximation schemes for certain graph problems.
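For reference, the classical graph regularity lemma that these variants refine can be stated as follows (standard formulation, recalled here for convenience).

    % Edge density between disjoint vertex sets A and B of a graph:
    d(A,B) = \frac{e(A,B)}{|A|\,|B|}

    % The pair (A,B) is \varepsilon-regular if |d(A',B') - d(A,B)| \le \varepsilon
    % for all A' \subseteq A, B' \subseteq B with |A'| \ge \varepsilon|A| and |B'| \ge \varepsilon|B|.

    % Szemerédi's Regularity Lemma: for every \varepsilon > 0 there is M(\varepsilon) such that
    % every graph admits a partition V = V_1 \cup \dots \cup V_k into parts of almost equal size,
    % with k \le M(\varepsilon), in which all but at most \varepsilon k^2 of the pairs
    % (V_i, V_j) are \varepsilon-regular.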