A general approach to incremental approximation and hierarchical clustering

David P. Williamson
Cornell University

I will present a general framework and algorithmic approach for incremental
approximation algorithms.
The framework handles cardinality-constrained minimization problems. Given
some notion of ordering on solutions of different cardinalities k, we give
solutions for all values of k such that the solutions respect the ordering
and such that, for every k, our solution is close in value to an optimal
solution of cardinality k.
For instance, for the k-median problem, the notion of ordering is set
inclusion, and our incremental algorithm produces solutions such that for any
k and k' with k < k', our solution of size k is a subset of our solution of
size k'. We show that the framework applies to this incremental version of the
k-median problem (introduced by Mettu and Plaxton), and incremental versions
of the k-MST problem, k-vertex cover problem, and k-set cover problem. For
these problems we either get new incremental algorithms, or improvements over
what was previously known.
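
To make the nesting requirement concrete, here is a small Python sketch (a
naive greedy ordering of facilities on a toy instance; the function names and
the instance are mine, and this is not the framework or algorithm of the
talk). Because the size-k solution is always a prefix of one fixed ordering,
any two solutions are nested by construction.

    def incremental_kmedian_order(points, dist):
        """Greedily order candidate facilities; prefixes give nested solutions of every size."""
        order = []
        remaining = list(points)
        # cost[p] = distance from client p to its nearest opened facility (infinite at the start)
        cost = {p: float("inf") for p in points}
        while remaining:
            # open the facility whose addition most reduces the total assignment cost
            best = min(remaining,
                       key=lambda f: sum(min(cost[p], dist(p, f)) for p in points))
            order.append(best)
            remaining.remove(best)
            for p in points:
                cost[p] = min(cost[p], dist(p, best))
        return order  # the solution of size k is order[:k], so k < k' implies nesting

    # Example: points on a line with the absolute-difference metric.
    pts = [0.0, 1.0, 2.0, 10.0, 11.0]
    order = incremental_kmedian_order(pts, lambda a, b: abs(a - b))
    print(order[:2])  # the size-2 solution is a prefix of every larger solution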

We also show that the framework applies to hierarchical clustering problems,
in which a k-clustering is ordered before a k'-clustering for k < k' if the
k'-clustering is a refinement of the k-clustering. A complete ordering then
corresponds to what is known in the clustering literature as a hierarchical
clustering. We show that our framework thus yields hierarchical clusterings
in which each k-clustering in our solution is close in value to an optimal
k-clustering. In particular, we give improved
algorithms for a hierarchical version of the k-median problem introduced by
Plaxton.
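
As an illustration of the refinement ordering (again only a toy sketch: plain
single-linkage agglomerative merging, not Plaxton's algorithm or ours; the
names are mine), each k-clustering below is obtained by merging two clusters
of the (k+1)-clustering, so every finer clustering refines every coarser one.

    def agglomerative_levels(points, dist):
        """Return a dict mapping k to a k-clustering; the clusterings are nested refinements."""
        clusters = [[p] for p in points]
        levels = {len(clusters): [list(c) for c in clusters]}
        while len(clusters) > 1:
            # merge the two clusters whose closest points are nearest (single linkage)
            i, j = min(
                ((a, b) for a in range(len(clusters)) for b in range(a + 1, len(clusters))),
                key=lambda ab: min(dist(x, y)
                                   for x in clusters[ab[0]] for y in clusters[ab[1]]))
            clusters[i] += clusters[j]
            del clusters[j]
            levels[len(clusters)] = [list(c) for c in clusters]
        return levels

    levels = agglomerative_levels([0.0, 1.0, 5.0, 6.0], lambda a, b: abs(a - b))
    print(levels[2])  # e.g. [[0.0, 1.0], [5.0, 6.0]]; levels[1] merges these two clusters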

This work is joint with Guolong Lin (Northeastern), Chandrashekhar Nagarajan
(Cornell), and Rajmohan Rajaraman (Northeastern).
 


Cuts, Metrics and Unique Games
Nisheeth Vishnoi


It has been a long-standing conjecture, due to Goemans and Linial, that
negative type metrics (metrics that are squares of Euclidean metrics)
embed into L_1 with O(1) (constant) distortion. Negative type metrics
arise naturally as solutions of SDP-relaxations for Sparsest Cut and other
Graph Partitioning problems. This conjecture implies that the "integrality
gap" for the SDP-relaxation of Sparsest Cut is bounded by a constant
(which in turn implies a constant factor approximation algorithm for
Sparsest Cut). The breakthrough algorithm of Arora, Rao and Vazirani (STOC
2004) gave an O(\sqrt{log n}) upper bound on the integrality gap. They
conjectured that the upper bound could in fact be O(1)!
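
For readers who want the objects spelled out, one standard way to write the
uniform-demand version (my notation, not taken from the paper) is

    Sparsest Cut:    min over nonempty S strictly contained in V of  |E(S, V \ S)| / (|S| |V \ S|)

    SDP relaxation:  min  \sum_{(i,j) \in E} ||v_i - v_j||^2
                     s.t. \sum_{i < j} ||v_i - v_j||^2 = 1,
                          ||v_i - v_j||^2 + ||v_j - v_k||^2 >= ||v_i - v_k||^2  for all i, j, k,

so the squared distances ||v_i - v_j||^2 form a negative type metric, and an
O(1)-distortion embedding of every such metric into L_1 would round the SDP
to within a constant factor.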

We disprove both of the above conjectures by constructing a
(log log n)^{\Omega(1)} integrality gap example for the SDP-relaxation of
Sparsest Cut. Surprisingly, our construction is inspired by
complexity-theoretic tools such as PCPs, Fourier Analysis, and the Unique
Games Conjecture due to Khot (STOC/CCC 2002). These techniques have no a
priori connection with metric embeddings! As a by-product, we obtain the
first hardness of approximation results for Sparsest Cut and related
problems. Further, our techniques allow us to construct optimal integrality
gap instances for the Goemans-Williamson SDP for MAX-CUT with the "triangle
inequality constraints", improving the results of Feige and Schechtman
(STOC 2001).

In this talk, I will give an overview of our approach. This is based on a
joint work with Subhash Khot of Georgia Tech.

 

On Approximation Algorithms for Minimum Linear Arrangement

                 Howard Karloff
              AT&T Labs--Research

We study Minimum Linear Arrangement (MLA), the problem of placing
the n vertices of a graph onto {1,2,3,...,n} so as to minimize
the sum of the edge lengths.  Until the appearance of Arora,
Rao, and Vazirani's groundbreaking work on Sparsest Cut,
the best known approximation ratio for MLA was O(log n),
achieved by Rao and Richa, who improved on an O((log n)(log log n))
approximation algorithm of Even, Naor, Rao, and Schieber.
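
To fix notation, here is a tiny Python sketch of the objective (purely
illustrative brute force; none of the algorithms above work this way, and the
function names are mine).

    from itertools import permutations

    def arrangement_cost(edges, position):
        # cost of a linear arrangement: sum of |position(u) - position(v)| over all edges
        return sum(abs(position[u] - position[v]) for u, v in edges)

    def brute_force_mla(vertices, edges):
        """Exact MLA by exhaustive search; only feasible for very small graphs."""
        best = None
        for perm in permutations(vertices):
            pos = {v: i + 1 for i, v in enumerate(perm)}
            cost = arrangement_cost(edges, pos)
            if best is None or cost < best[0]:
                best = (cost, pos)
        return best

    # A 4-cycle: the order a, b, c, d costs 1 + 1 + 1 + 3 = 6.
    print(brute_force_mla("abcd", [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]))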

We show how to use the main lemma of Arora, Rao,
and Vazirani, together with the decomposition techniques of Rao and Richa,
to improve the approximation ratio of MLA to O((sqrt(log n))log log n).
The best part is that no previous knowledge of [ARV] is necessary.

This is joint work with Moses Charikar of Princeton and Satish Rao of Berkeley.

 

Title: The Price of Routing Unsplittable Flow

 Yossi Azar, Tel-Aviv University

 Abstract:

The essence of the routing problem in real networks is that the
traffic demand from a source to a destination must be satisfied by
choosing a single path between them. In the splittable version of the
problem, the demand may be satisfied by many paths, namely by a flow from
the source to the destination. The unsplittable, or discrete, version of
the problem is more realistic, yet more complex from an algorithmic point
of view; in some settings, optimizing such unsplittable traffic flow is
computationally intractable.

In this paper, we assume this more realistic unsplittable model,
and investigate the "price of anarchy", that is, the deterioration of
network performance, measured in total traffic latency, under selfish user
behavior. We show that for linear edge latency functions the price of
anarchy is exactly $2.618$ for weighted demand and exactly $2.5$ for
unweighted demand. These results extend easily to (weighted or unweighted)
atomic "congestion games", where paths are replaced by general subsets. We
also show that for edge latency functions that are polynomials of degree
$d$, the price of anarchy is $d^{\Theta(d)}$. Our results also hold for
mixed strategies.
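
For concreteness, the quantity being bounded can be written as follows (my
notation, not the paper's): if f_e is the total traffic on edge e under a
routing f and l_e is its latency function, the social cost is
C(f) = \sum_e f_e l_e(f_e), and

    price of anarchy  =  max over Nash equilibria f of  C(f) / C(f*),

where f* is a socially optimal routing; the $2.618$ above is presumably the
exact constant $(3+\sqrt{5})/2 \approx 2.618$.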

Previous results of Roughgarden and Tardos showed that for linear
edge latency functions the price of anarchy is exactly
$\frac{4}{3}$ under the assumption that each user controls only a
negligible fraction of the overall traffic (this result also holds
for the splittable case). Note that under the assumption of negligible
traffic, pure and mixed strategies are equivalent, and the splittable and
unsplittable models are equivalent as well.

Joint work with B. Awerbuch and A. Epstein

Title: Berge’s Theorem for the Maximum Charge Problem

Abstract:
In 1957 Berge established that a matching is maximum if and only if there are no augmenting paths in the graph.
We will talk about Berge’s result for a generalization of the matching problem: the maximum charge problem with
capacity constraints. We show that a charge is maximum if and only if there is no alternating path, or lasso, along
which the charge can be augmented. This characterizes the optimal solution to the linear programming dual of
the vertex cover problem. Despite concerted efforts, it is not known whether vertex cover admits an approximation
ratio better than two.
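
As a reminder of the classical statement (ordinary matchings only, not
charges or lassos; this tiny Python sketch and its names are mine),
augmenting along an alternating path simply means flipping the matching along
the path's edges, which increases the matching size by one.

    def augment(matching, path):
        """matching: a set of frozenset edges; path: the vertices of an augmenting path
        (both endpoints unmatched, edges alternating non-matching / matching)."""
        path_edges = {frozenset((path[i], path[i + 1])) for i in range(len(path) - 1)}
        return matching ^ path_edges  # symmetric difference flips the edges along the path

    # Path a-b-c-d with only bc matched is augmenting; flipping yields the matching {ab, cd}.
    print(augment({frozenset("bc")}, ["a", "b", "c", "d"]))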

This is joint work with Subir Ghosh, Ramesh Krishnamurti and Horst Sachs.
 

Simultaneous Matchings
 

Meena Mahajan


Given a bipartite graph G = (V,E) where V has the bipartition X
disjoint-union D (hence E is a subset of X x D), an X-perfect
matching is a matching in G that saturates every node in X.

In this talk we discuss the following generalisation of the
X-perfect matching problem, which has applications in constraint
programming: Given a bipartite graph as above and a collection F of
k subsets of X, find a subset M of the edges such that for each C in
F, the restriction of M to C x D is a C-perfect matching in G (or
report that no such set exists).
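
To make the definition concrete, here is a small hypothetical checker in
Python (the function name and the toy instance are mine, not from the paper)
that verifies the condition for a candidate edge set M.

    def is_simultaneous_matching(M, F, D):
        """Check that for every C in F, the restriction of M to C x D is a C-perfect matching."""
        D = set(D)
        for C in F:
            C = set(C)
            restricted = [(x, d) for (x, d) in M if x in C and d in D]
            xs = [x for x, _ in restricted]
            ds = [d for _, d in restricted]
            # a matching: no vertex repeated; C-perfect: every node of C is saturated
            if len(set(xs)) != len(xs) or len(set(ds)) != len(ds) or set(xs) != C:
                return False
        return True

    M = {("x1", "d1"), ("x2", "d2")}
    print(is_simultaneous_matching(M, [{"x1"}, {"x1", "x2"}], {"d1", "d2"}))  # True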

We show that the decision problem is NP-complete and that the
corresponding optimisation problem is in APX when k=O(1), and is in fact
APX-complete already for k=2. On the positive side, we show that a
2/(k+1)-approximation can be found in O(2^k poly(k, |X union D|))
time.

This is joint work with Khaled Elbassioni, Irit Katriel and Martin
Kutz, and is due to appear in ISAAC 2005.