I focused on three major areas:
Algorithmic Game Theory,
Distributed Computing and
Networking Theory.
1.
Algorithmic Game Theory
This is an emerging discipline at the interface between Computer Science and Game Theory, which has recently attracted considerable talent and hard work. (See (RM2) for a research monograph on Algorithmic Game Theory, which P. Spirakis and I are currently preparing.) I became interested in this area back in 1999, after reading the (by now seminal) STACS 1999 paper of E. Koutsoupias and C. H. Papadimitriou, "Worst-Case Equilibria", which introduced the (now famous) KP model for selfish routing over the anarchic Internet. That work introduced the Price of Anarchy as a measure of the worst-case performance loss in a computer system due to the selfish behavior of its users: the Price of Anarchy is the worst-case ratio of the system's Social Cost at a Nash equilibrium to the system's Optimum, where a Nash equilibrium is a state in which no user can unilaterally deviate to improve its own Individual Cost.
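In symbols (a standard formalization in notation of my own choosing, not necessarily that of the original paper):

```latex
% Price of Anarchy of a game G with set of Nash equilibria \mathcal{NE}(G):
\mathrm{PoA}(G) \;=\; \max_{\sigma \in \mathcal{NE}(G)} \frac{\mathrm{SC}(\sigma)}{\mathrm{OPT}(G)},
% where \sigma is a Nash equilibrium when no user i can lower its Individual Cost
% by unilaterally switching to some other strategy s_i':
\mathrm{IC}_i(\sigma) \;\le\; \mathrm{IC}_i(s_i', \sigma_{-i})
\quad \text{for every user } i \text{ and every strategy } s_i'.
```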
I soon succeeded in sharing my excitement about the new area with my closest colleague Paul Spirakis, with whom I published my first paper in it, (C22), which now appears as (J25) (but see also extensions in (C23)); that was also the first published work on the KP model following the original STACS 1999 paper.
Paul and I introduced the fully mixed Nash equilibrium, a mixed equilibrium in which all probabilities are strictly positive, which seemed amenable to tractable analysis. We provided the first tight bounds on the Price of Anarchy for the special case of the fully mixed Nash equilibrium, as well as a compendium of techniques for analyzing Nash equilibria in the KP model that influenced much subsequent work; this turned out to be my most cited work.
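As a toy illustration (with made-up numbers, not taken from (C22)/(J25)): in the KP model with identical users and identical links, the fully mixed Nash equilibrium has every user choose each of the m links with probability 1/m, and its Social Cost is the expected maximum load. A minimal sketch:

```python
# Hedged illustration: Social Cost (expected maximum load) of the fully mixed
# Nash equilibrium in the KP model with m identical links.
from itertools import product

def fully_mixed_social_cost(weights, m):
    """Expected maximum link load when each user picks a link uniformly at random."""
    n = len(weights)
    sc = 0.0
    for assignment in product(range(m), repeat=n):   # enumerate all pure outcomes
        loads = [0.0] * m
        for user, link in enumerate(assignment):
            loads[link] += weights[user]
        sc += max(loads) / (m ** n)                   # each outcome has probability m^{-n}
    return sc

sc = fully_mixed_social_cost([1, 1], m=2)   # two identical users, two identical links
print(sc, sc / 1)                           # 1.5 1.5 -- the Optimum here is 1 (one user per link),
                                            # matching the tight Price of Anarchy of 3/2 for two links
```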
Paul and I went on with this promising endeavor, and together with D. Fotakis, S. Kontogiannis and E. Koutsoupias, we shortly thereafter
presented in
(C24) a thorough analysis of the structure
and complexity of Nash equilibria for the KP model. Our findings about structure
provided the first seed for studying the fully mixed Nash equilibrium as
a candidate worst-case
equilibrium. Our complexity results provided an efficient algorithm to compute a pure Nash equilibrium (for the KP model), based on the classical LPT scheduling algorithm of R. L. Graham; in contrast, we proved that finding a (pure) Nash equilibrium with certain optimality properties is an NP-complete problem.
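A minimal sketch of the LPT idea, restricted for simplicity to identical links (the algorithm in (C24) handles the KP model in general); the example weights are made up:

```python
# Hedged sketch: place users in non-increasing order of traffic, each on a currently
# least-loaded link.  For identical links, the resulting pure assignment is a Nash
# equilibrium: no user can lower its latency by unilaterally switching links.

def lpt_pure_nash(weights, m):
    """Return a pure assignment of the users (given by their weights) to m identical links."""
    loads = [0.0] * m
    assignment = {}
    for user in sorted(range(len(weights)), key=lambda u: -weights[u]):
        link = min(range(m), key=lambda l: loads[l])   # least-loaded link first
        assignment[user] = link
        loads[link] += weights[user]
    return assignment, loads

assignment, loads = lpt_pure_nash([7, 5, 4, 3, 1], m=2)
print(assignment, loads)   # {0: 0, 1: 1, 2: 1, 3: 0, 4: 1} [10.0, 10.0]
```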
Further on, Paul and I, together with E. Koutsoupias, presented in (C25) (which later appeared as (J16)) the first tight bounds on the Price of Anarchy that hold for all Nash equilibria. This work
introduced and analyzed a new variant of the classical balls-and-bins problem, together with a new probabilistic tool for the analysis of random processes, called ball fusion, which is probably of more general interest and applicability.
As interest in Algorithmic Game Theory grew stronger around 2001, a group of European researchers, including E. Koutsoupias, B. Monien, P. Spirakis and myself, formed a consortium and succeeded in attracting European funds through the IST project FLAGS.
One of the main topics of
FLAGS was Algorithmic Game Theory;
with support from
FLAGS,
we were able to push its frontier much further in the next three years.
In our very first trip under FLAGS, M. Gairing, T. Lucking, B. Monien, P. Spirakis and I officially formulated in
(C32) (which later
appeared as
(J24))
the Fully Mixed Nash
Equilibrium Conjecture,
stating that the fully mixed Nash equilibrium is the worst-case one - it maximizes Social Cost. We
came very close to the conjecture by proving that the Social Cost of any Nash equilibrium is within a factor of 2h(1+ε) of that of the fully mixed one, where h is the factor by which the largest user traffic deviates from the average user traffic, and ε > 0 is an arbitrarily small constant. Crucial to our proof was the first use of techniques from Majorization Theory and Stochastic Orders in the analysis of Nash equilibria.
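In symbols, and in notation of my own choosing (the precise statement is the one in (C32)/(J24)), the bound reads roughly as follows:

```latex
% For every traffic vector w, every Nash equilibrium P and the fully mixed Nash
% equilibrium F, and for any constant \varepsilon > 0:
\mathrm{SC}(\mathbf{w}, P) \;\le\; 2h\,(1+\varepsilon)\,\mathrm{SC}(\mathbf{w}, F),
\qquad \text{where} \quad
h \;=\; \frac{\max_{i} w_i}{\frac{1}{n}\sum_{j=1}^{n} w_j}.
```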
In the same work, we introduced Nashification - the process of transforming an arbitrary (pure) strategy profile into a (pure) Nash equilibrium without increasing the Social Cost. We presented the first Nashification algorithm
to approximate the Social Cost of the best pure Nash equilibrium to any arbitrary
accuracy. In sharp contrast, we proved that approximating well the Social
Cost of the worst
pure Nash equilibrium is an
NP-complete
problem. These complexity results established the first contrast between best and worst Nash
equilibria with respect to their approximability properties. Moreover,
Nashification has survived as a natural paradigm for the approximation
of optimal Nash equilibria.
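To convey the flavor of Nashification, here is a minimal sketch assuming identical links and using plain improvement moves; the actual algorithm in (C32)/(J24) is more refined (with a polynomial-time guarantee), and the weights and starting assignment below are made up:

```python
# Hedged sketch: repeatedly let some user perform an improving move (strictly lowering
# its own latency).  On identical links such a move never raises the maximum load, so
# the final pure Nash equilibrium has Social Cost no larger than the starting profile's.

def nashify(weights, assignment, m):
    """weights[i]: traffic of user i; assignment[i]: its current link in 0..m-1."""
    loads = [0.0] * m
    for user, link in enumerate(assignment):
        loads[link] += weights[user]
    improved = True
    while improved:
        improved = False
        for user, link in enumerate(assignment):
            best = min(range(m), key=lambda l: loads[l])
            if loads[best] + weights[user] < loads[link]:   # an improving move exists
                loads[link] -= weights[user]
                loads[best] += weights[user]
                assignment[user] = best
                improved = True
    return assignment, max(loads)

print(nashify([4, 3, 2, 1], [0, 0, 0, 0], m=2))   # ([1, 0, 0, 1], 5.0): max load drops from 10 to 5
```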
Our next
trip focused on the Fully Mixed Nash Equilibrium Conjecture. Together
with
T. Lucking,
B. Monien,
M. Rode,
P. Spirakis and
I. Vrto,
we used combinatorial
arguments to establish in
(C30) the conjecture for two important
special cases (two users or two links). These results still provide
state-of-the-art cases for which the Fully Mixed Nash Equilibrium
Conjecture holds. (Additional state-of-the-art cases were soon
thereafter identified in the M.Sc. Theses of I. Ippolytou and P. Pavlou, which I supervised at the University of Cyprus.)
Around 2003, the time was ripe for considering other theoretical models for selfish routing as alternatives to the KP model, and for investigating how the Price of Anarchy and the complexity of computing Nash equilibria depend on the model's assumptions. Together with my close colleague B. Monien and the members of his group M. Gairing, T. Lucking and M. Rode (whose Ph.D. Thesis at the University of Paderborn I co-supervised with Burkhard), we introduced and
investigated a series of such models in four papers ((C33)
(which now appears as
(IJS1)),
(C35) (a part of which
now appears as
(J26)),
(C36) and (C38)
(whose full version appears
as
(JS7))). These models were formed through varying the definition of either the
Social Cost or the Individual Cost, or through restricting each user to
a particular set of (allowed)
links. We obtained a comprehensive collection of bounds on the Price of Anarchy for them; some bounds identified subtle differences between ways of defining Social Cost - for example, they imply that the Price of Anarchy with Social Cost defined as maximum latency (as in the KP model) is radically different from when Social Cost is defined as average latency.
Interestingly, some of the bounds demonstrated the first (probably unforeseen) connections of
the Price of Anarchy to classical sequences of combinatorial numbers,
such as the Stirling numbers
of the second kind and the Bell numbers.
Some other bounds demonstrated the crucial role of the convexity assumption on the latency functions.
Furthermore, we presented in (C35) an algorithm to compute a
pure Nash equilibrium for the model of restricted parallel links.
This algorithm first extends the classical Preflow-Push algorithm of A. V. Goldberg and R. E. Tarjan to the setting of unsplittable flow; it then employs a novel
Nashification algorithm that still relies on techniques from network flows.
I believe that the connection to network flows may be the key to
identifying new tractable classes for the problem of computing
Nash equilibria.
Together with
R. Elsasser,
M. Gairing,
T. Lucking and
B. Monien in (C44),
we further investigated the model of restricted parallel links in its
simplest case where each user is restricted to only two machines. This assumption allows a
graph-theoretic representation, called the interaction graph, where users are edges and machines are
vertices. Interaction graphs allowed us to pose a new genre of
mathematical questions concerning the best-case or worst-case interaction graph (with respect to
Social Cost) for a given
number of users and
machines. We answered some of these questions for the special case of 3-regular interaction graphs; our analysis
yielded some deep properties of orientations in 3-regular graphs, which are of
independent interest. (Additional results for interaction graphs appear in the recent M.Sc. Thesis of S. Lazarides, which I supervised at the University of Cyprus.)
In a more recent trip, supported by the newly funded IST projects DELIS and AEOLUS, I embarked with P. Panagopoulou and P. Spirakis (in (C43) and (C46)) on studying the impact of pricing mechanisms, for charging users assigned to machines, on the quality of the achievable Nash equilibria. Congestion games had assumed that the number of users
(assigned to a machine) is the determining parameter; weighted congestion games extended these to users with weights.
Our idea was to combine the two and let each machine charge each user the total weight divided by the total number of users assigned to it. This scheme induces a new set of Nash equilibria,
which we studied. Within this new model, we also defined and
evaluated the Diffuse Price
of Anarchy, which extends
the Price of Anarchy to take into account the probability distribution
on inputs. We believe that the Diffuse Price of Anarchy offers a
more realistic assessment of the effect of the selfish behavior of users
than the (worst-case) Price of Anarchy. We expect that the Diffuse Price
of Anarchy will attract much further interest and study in
the years to come.
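To make the charging scheme and the new measure concrete, here is one plausible formalization, in notation of my own choosing and consistent with the description above (the precise definitions are those given in (C43) and (C46)):

```latex
% Charging scheme: a machine j hosting the set S_j of users, of total weight
% W_j = \sum_{i \in S_j} w_i, charges each of its |S_j| users the same amount:
\mathrm{IC}_i \;=\; \frac{W_{j(i)}}{|S_{j(i)}|}, \qquad j(i) = \text{the machine of user } i.
% Diffuse Price of Anarchy over a class \Delta of input distributions:
\mathrm{DPoA} \;=\; \sup_{\mathcal{D} \in \Delta}\;
\mathbb{E}_{\mathbf{w} \sim \mathcal{D}}\!\left[\,
\max_{P \in \mathcal{NE}(\mathbf{w})} \frac{\mathrm{SC}(\mathbf{w}, P)}{\mathrm{OPT}(\mathbf{w})}\right].
```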
In my most recent trip to Algorithmic Game Theory, together with my post-doc V. Papadopoulou, and with M. Gelastou, A. Philippou and P. Spirakis, we introduced and analyzed
(in
(C42),
(C45), (CS1) and (CS2))
a novel, simultaneously game-theoretic and graph-theoretic model for network security;
it applies to adversarial networks with harmful procedures, called attackers, and protector (or defender) entities that clean up the network against the harm. We have characterized
the Nash equilibria of the associated game. The obtained graph-theoretic
characterizations point out connections to milestones in Matching Theory, such as Hall's Theorem.
From the characterizations, we derived efficient algorithms to compute
(classes of) Nash equilibria for some special cases of the network graph.
In conclusion, my published work on Algorithmic Game Theory has spawned several innovative ideas
and brand-new concepts
and techniques, such as the fully mixed Nash
equilibrium, the Fully Mixed Nash Equilibrium
Conjecture, ball fusion, Nashification, the connection of Nash equilibria to network flows, (worst-case
and best-case) interaction graphs,
the Diffuse Price of Anarchy,
to name a few.
All these concepts and methodologies have found a place in contemporary Algorithmic Game Theory.
For future work,
I will keep studying the Price of Anarchy and the
problem of computing equilibria (including Nash,
correlated and market) for other
significant classes of games that model modern computer
systems.
2.
Distributed Computing
My interest in Distributed Computing traces back to my graduate years at Harvard (but see also (J12) for a survey I wrote later), where my Ph.D. Thesis focused on Timing-Based Distributed Computation. This subfield concerns the study of how timing assumptions in
a distributed system affect the complexity (and even the possibility) of
solving interesting problems. I presented in (C1) (which later appeared as (J2)), together with H. Attiya, and in (J1), the first complexity-theoretic separations between semi-synchronous and asynchronous models of distributed computation; we used the classical session problem as a vehicle for obtaining the separations. I also presented, together with
D. Roth in (C2) and
(C3)
(which were later combined
into
(J9)) time bounds on
implementing Distributed
Shared Memory over partially
synchronous message-passing;
in particular, we presented the first timing-based distributed algorithm for
implementing linearizable
read-write objects over
message-passing. I later generalized in (C20),
together with my Undergraduate Diploma student M. Eleftheriou, some of those bounds to models with drifting clocks and different delay assumptions.
Also, I obtained in
(C4) (almost matching) lower and upper bounds
on how closely clocks can be synchronized in a semi-synchronous
network.
Since 1993, I have been studying counting networks (invented by J. Aspnes, M. Herlihy and N. Shavit back in 1991) - distributed, low-latency and low-contention data structures for implementing shared counters and other primitives in a distributed system. (See (J5) for an early survey I wrote, and (RM1) for a research monograph which C. Busch and I are currently preparing.) My work on this topic can be partitioned as follows:
-
Combinatorial properties:
This reflects the interest that originally attracted me to work on counting
networks; it arose back in 1993 out of my supervision of a term project by
Costas Busch, then a Master's student at the University of Crete, for a graduate class on distributed computing I taught there. This started a fruitful, long-running collaboration with Costas; we published in (C6) (which later appeared as (J4)) an extensive mathematical study of the combinatorial properties of counting networks and their closest relatives, smoothing networks and (the much older) sorting networks. Most notably, our study yielded an analog for counting networks of the classical 0-1 principle for sorting networks. We further extended this
study in (J6) to encompass threshold and weak threshold networks, and in (J3) to produce methodologies for
the verification of counting networks.
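To make these objects concrete, here is a toy, purely sequential simulation of the width-4 bitonic counting network (a standard construction, not one taken from the papers above); in the actual data structure the balancers are shared objects traversed concurrently:

```python
# Hedged sketch: a balancer is a 2-wire switch that forwards successive tokens
# alternately to its top and bottom output wires.  A balancing network counts when,
# in every quiescent state, its output tallies satisfy the step property:
# y_0 >= y_1 >= ... >= y_{w-1} with y_i - y_j <= 1.

class Balancer:
    def __init__(self, top, bottom):
        self.wires = (top, bottom)
        self.toggle = 0                   # index of the output the next token takes

    def traverse(self):
        out = self.wires[self.toggle]
        self.toggle ^= 1
        return out

# Layer layout of the width-4 bitonic counting network (three layers of two balancers).
layers = [
    [Balancer(0, 1), Balancer(2, 3)],
    [Balancer(0, 3), Balancer(1, 2)],
    [Balancer(0, 1), Balancer(2, 3)],
]

def traverse(wire):
    """Route one token from input wire `wire` to an output wire."""
    for layer in layers:
        for b in layer:
            if wire in b.wires:
                wire = b.traverse()
                break
    return wire

counts = [0, 0, 0, 0]
for i in range(10):                       # 10 tokens entering on arbitrary input wires
    counts[traverse(i % 3)] += 1
print(counts)                             # [3, 3, 2, 2] -- a step
```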
-
Limitations:
Which other concurrent operations can counting networks support? Together with
W. Aiello,
C. Busch, M. Herlihy,
N. Shavit and
D. Touitou,
we discovered in
(BA6) and (C16)
(which later appeared as
(J11))
that counting networks can support, through the antitoken primitive invented earlier by
N. Shavit and
D. Touitou,
the decrement-by-one operation (concurrently with the increment-by-one).
Further on, with
C. Busch, N.
Demetriou and M. Herlihy,
we soon extended this result in
(C18) (which later appeared as
(J14))
to counting networks supporting threshold counting; eventually, we
provided in
(C21) (which later appeared as
(BC2) and (J10))
a combinatorial characterization of the capabilities and limitations of the antitoken.
In (C28)
(appearing later as
(J19)),
C. Busch,
P. Spirakis
and I solved a long-standing open problem about the possibility of extending counting networks, while keeping them finite-sized, highly concurrent and low-contention, to support operations more complex than increment-by-one yet still as simple as addition. We introduced a novel group-theoretic framework (specifically, a new class of algebraic groups called monotone groups) to model monotone Read&Modify&Write operations, within which we proved the long-sought impossibility
result: there is no such finite-sized network, even if the tokens realizing the operations can carry
and exchange information with the network, and even if the network
is made up of
arbitrary switches. Key to our proof is a Monotone Linearizability Lemma,
providing the
first instance where linearizability is
an inherent (as opposed to required)
property that builds up an efficiency bottleneck.
(For further extensions and generalizations, see (C41)
and
(J18).)
-
Consistency properties:
It had been known that there is no (non-trivial) linearizable, asynchronous counting network.
Together with
M.
Papatriantafilou and
Ph. Tsigas, we investigated in (C8) timing conditions for linearizable counting networks. Together with
M. Merritt and
G.
Taubenfeld, we later provided in (C17) the
first comparison of sequential consistency and linearizability for timing-based
counting networks.
Motivated by my constant
interest in the impact of timing assumptions in a distributed
system on time and space complexities, I looked at the
connection management problem from the
timing point of view. How do timing assumptions affect the delivery
and quiescence times for a long-lived connection between a
sender and a receiver?
(For the general question, see (J8)
for an
early survey I wrote.) Together with my Undergraduate Diploma
student N. Papadakis,
we presented in
(C11) (which later appeared as
(J15)) a
comprehensive collection of trade-offs between delivery and
quiescence times under various assumptions on the sender and
receiver clocks.
In a distinct direction with a
strong combinatorial flavor, I studied together with
P. Spirakis and my Undergraduate
Diploma student S.
Georgiades in (BA9) and (C19),
a distributed
decision-making model
introduced by
C. H.
Papadimitriou and
M. Yannakakis in
PODC 1992.
We focused on a system that suffers long periods without communication among the agents.
We provided a geometric analysis of high-dimensional
polytopes to derive a new formula for the volume of the intersection
between a simplex
and an orthogonal parallelepiped
(in any dimension). In
turn, we optimized the formula to derive optimal decision-making protocols for a
certain load balancing
problem. For 3 agents, our results settle the 0.545 Conjecture
of
C. H.
Papadimitriou and
M. Yannakakis.
The lattice agreement decision problem is a weakening of
traditional consensus.
In (JS1), I presented the first early-stopping algorithm for it in
the synchronous
message-passing model where
processors fail by crashing.
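For context, the standard specification of lattice agreement (stated here from general knowledge, not quoted from (JS1)) asks each processor i, starting with an input x_i from a lattice, to decide an output y_i such that:

```latex
x_i \;\le\; y_i \;\le\; \bigsqcup_{j} x_j
\qquad \text{and} \qquad
y_i \le y_j \ \text{ or } \ y_j \le y_i \quad \text{for all } i, j;
% outputs lie between one's own input and the join of all inputs, and form a chain --
% strictly weaker than requiring a single common decision value, as in consensus.
```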
For future work,
I am planning to continue
studying foundational problems of distributed computing. I am also interested in
distributed algorithms for modern settings like
ad-hoc wireless and mobile networks.
3.
Networking Theory
I worked on four major topics: Flow Control,
Packet-Switching/Routing, Queueing and Sensor Networks.
-
Flow Control:
I have worked on this topic together with
P. Spirakis and
P. Fatourou (then a Ph.D. student at the University of Patras, whom I co-supervised with Paul).
Specifically, we looked at
bottleneck flow control, which
converges to max-min fairness;
there, we investigated rate-based
flow control, where max-min fairness
is reached through a sequence of (local) adjustments to the rates of sessions.
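For concreteness, here is a hedged sketch of the classical centralized water-filling computation of the max-min fair rates that bottleneck flow control converges to; it is not one of the schedulers studied in (C10)/(J22), and the link capacities and session routes are made-up example data:

```python
# Hedged sketch: raise the rates of all unfrozen sessions uniformly until some link
# saturates, then freeze the sessions crossing that bottleneck link, and repeat.

def max_min_fair(capacities, routes):
    """capacities: {link: capacity}; routes: {session: set of links it traverses}."""
    rates = {s: 0.0 for s in routes}
    frozen = set()
    remaining = dict(capacities)
    while len(frozen) < len(routes):
        # Fair share of each link's remaining capacity among its unfrozen sessions.
        shares = {}
        for link, cap in remaining.items():
            users = [s for s in routes if link in routes[s] and s not in frozen]
            if users:
                shares[link] = cap / len(users)
        bottleneck = min(shares, key=shares.get)        # most constrained link
        inc = shares[bottleneck]
        for s in routes:
            if s not in frozen:
                rates[s] += inc
                for link in routes[s]:
                    remaining[link] -= inc
        for s in routes:                                # sessions on the bottleneck are done
            if s not in frozen and bottleneck in routes[s]:
                frozen.add(s)
    return rates

print(max_min_fair({'e1': 1.0, 'e2': 2.0},
                   {'A': {'e1'}, 'B': {'e1', 'e2'}, 'C': {'e2'}}))
# {'A': 0.5, 'B': 0.5, 'C': 1.5}
```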
We studied in
(C10) (which later appeared as (J22))
trade-offs between the speed at which the system converges to max-min fairness
and the locality of the schedulers (choosing the sequence of adjustments). Our
study has yielded the first (tight) lower bound on the convergence complexity of oblivious schedulers (ones that "see" neither the network nor the rates). With P. Spirakis and P. Fatourou, we went on to extend in (C15) (which later appeared as (J21)) the classical theory of max-min fair flow control to settings of heterogeneous networks where sessions get different priorities for bandwidth. We also investigated in
(BA8) and (C13) the effect of local computations on the total efficiency of distributed, rate-based flow control
algorithms. An invited survey of our work appears in (C9).
-
Queueing:
I have worked on this topic together with
P. Spirakis,
S. Nikoletseas and
D. Koukopoulos (then a Ph.D. student at the University of Patras, whom I co-supervised with Paul);
see (J13) and (BC1) for an early survey I wrote. Specifically, we
looked at the popular framework of
Adversarial Queueing Theory, which
replaces the stochastic assumptions made by traditional Queueing Theory on the arrival patterns of inputs to a communication network with worst-case (adversarial)
ones. The natural question then concerns stability:
how and when will all queues at the system’s servers remain bounded? We have addressed this
fundamental question in a variety of settings. For greedy queueing protocols (ones that always forward a
packet if they can), we discovered in (C27) (which appeared later as (J23))
that the network structure plays a crucial role for stability. For heterogeneous networks (ones that combine different queueing
protocols for queues at different servers), we found out in (C26) that stability is very sensitive to the precise combination of the different protocols. We also discovered
in (C29) (but see also (JS2) for a complete exposition) that stability may
be seriously harmed when different servers use different rates (also called capacities)
for forwarding packets out of their (out-going) buffers; in fact, an adversary against stability that uses just two different
levels of rates is as powerful as a general adversary. Furthermore, we proved in (TR1) that the popular FIFO
queueing protocol can be made unstable by such an adversary even if the rate of
injecting messages is arbitrarily low, and even if the network is restricted to
remain planar.
On the positive side, we provided in (C34) (but see also (JS3) for a complete exposition) tight bounds on delivery times for
certain specific combinations of networks and queueing protocols that are known
to guarantee stability.
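Throughout this work, the adversary and the notion of stability are the standard ones of Adversarial Queueing Theory; the following recap is generic and not specific to the papers above:

```latex
% A bounded adversary of rate \rho \le 1 and burstiness b \ge 0 must obey, for every
% edge e and every time interval I,
N_e(I) \;\le\; \rho\,|I| + b,
% where N_e(I) is the number of packets injected during I whose paths traverse e.
% The network is stable against such an adversary when the total queue size stays bounded:
\sup_{t \ge 0} \; Q(t) \;<\; \infty.
```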
-
Packet-Switching/Routing:
A long-standing open problem in packet-switching
concerns the
existence of a universal packet-switching algorithm with near-optimal
performance guarantees for the class of bufferless networks, where the buffer size for packets in
transit is zero. This problem has remained open since the seminal work of
T. Leighton,
B. Maggs and
S. Rao in 1988, which showed the existence of a
(near-optimal) universal, store-and-forward
packet-switching
algorithm. Together with C. Busch and M. Magdon-Ismail,
we gave in
(C40) a positive answer to this question. (See
(JS6) for a full exposition.) At the
heart of our proof is a new technique for hand-crafting a (universal)
bufferless algorithm via emulating
a
(universal) store-and-forward algorithm. The emulation builds on our novel idea
of replacing packet buffering by packet circulation; we then
prove that circulation converges fast. We expect that this idea will find further applications in the years to come.
To evaluate the price paid for universality by our universal, bufferless algorithm, we looked together with
C. Busch,
M. Magdon-Ismail and
R.
Wattenhofer in (C37)
(but see also
(JS5) for a complete
exposition) at bufferless algorithms for specific topologies, such
as trees and leveled networks.
We presented both centralized and distributed hot-potato algorithms for them, which improved
upon all previously known corresponding algorithms.
A more stringent (than hot-potato)
class of bufferless packet-switching algorithms is that of
direct routing, where each packet must, once
injected, be delivered to its destination without collisions with other packets.
Together with
C. Busch,
M. Magdon-Ismail and
P. Spirakis,
we presented in
(C39) (which later appears as
(J20))
a comprehensive treatment of direct routing. Our analysis includes algorithms to compute efficient direct schedules for the packets; hardness results (more precisely, NP-completeness and inapproximability results) for computing optimal direct schedules; and an analysis of the buffering requirements for such optimal schedules.
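As a toy illustration of what a direct schedule is, here is a hypothetical greedy sketch (not one of the algorithms analyzed in (C39)/(J20)); the paths are made-up example data:

```python
# Hedged sketch: choose one injection time per packet so that no two packets ever
# cross the same edge in the same time step.  Paths are fixed and given in advance.

def greedy_direct_schedule(paths):
    """paths: list of edge-lists, one per packet (edges given as hashable ids)."""
    used = set()                           # (edge, time) slots already reserved
    injection = []
    for path in paths:
        t = 0
        # Earliest injection time with no collision anywhere along the path.
        while any((e, t + k) in used for k, e in enumerate(path)):
            t += 1
        for k, e in enumerate(path):
            used.add((e, t + k))
        injection.append(t)
    return injection

# Three packets whose paths all start over edge e0 of a 4-node line.
paths = [['e0', 'e1', 'e2'], ['e0', 'e1'], ['e0']]
print(greedy_direct_schedule(paths))       # [0, 1, 2]
```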
-
Sensor Networks: Smart dust consists of a vast number of ultra-small, fully autonomous computing devices with restricted energy and computing capabilities, cooperating towards a large sensing
task. Together with
I. Chatzigiannakis,
T. Dimitriou, S. Nikoletseas and
P. Spirakis,
we implemented and experimentally evaluated in
(C31) (which later appeared as
(J17))
a number of protocols for local
detection and propagation in smart dust networks.
For future work,
I
will keep
studying efficiency and stability issues for fundamental
problems like routing, packet switching, flow control
and queueing in modern networks like
ad-hoc wireless and mobile networks.