Systems and Networks Research Group

What's going on

Our research

Path choice and Internet architecture

Two roads diverged in a wood, and I—
I took the one less traveled by,
And that has made all the difference.
In 1988 the Internet was saved from congestion collapse by Van Jacobson: he devised an algorithm by which users' machines reduce their transmission rate when they detect that the network is overloaded. What if users were also given a choice of paths, with incentives to take the less congested one? In other words, what if we think of routing as another way that end systems can control congestion?

This would remove significant strains from the Internet's current routing system (BGP). It would mean that traffic reroutes itself around congestion hotspots within milliseconds. It would also force network operators to declare honestly the level of congestion in their networks, and it would encourage competition between operators.
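As a sketch of the idea, the toy loop below shifts a sender's traffic split toward whichever of two paths reports less congestion. All names, the congestion values, and the 0.05 step size are invented for illustration; this is not any real multipath algorithm, just the shape of the feedback loop.

```python
def rebalance(split, congestion_a, congestion_b, step=0.05):
    """Nudge the fraction of traffic sent on path A (the 'split')
    toward the less congested of the two paths (toy model)."""
    if congestion_a < congestion_b:
        split = min(1.0, split + step)
    elif congestion_b < congestion_a:
        split = max(0.0, split - step)
    return split

# Start by sending half the traffic on each path; path A is less congested.
split = 0.5
for _ in range(20):
    split = rebalance(split, congestion_a=0.01, congestion_b=0.05)
# Repeated small adjustments move essentially all traffic onto path A.
```

In a real protocol the congestion signals would come from loss or delay measurements, and the adjustment would be coupled to the congestion-control loop rather than a fixed step.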

Researchers: Mark Handley
Find out more: The Resource Pooling Principle, Multipath congestion control and routing, MP-TCP

The problem with layering

Networking research has tended to split into three parts: engineers who deal with channel coding and physical hardware, network researchers who build algorithms for controlling network access, congestion, and routing, and application programmers who treat the Internet as a black box that 'just works'. This layering is a useful first step in understanding networks, but it is not the last step.

We are working on designs for networks which take fuller account of the underlying hardware, with particular emphasis on wireless. The illustration shows a software radio—a device that records a 'complete' wireless signal, and allows us to write software to do all the signal processing that would normally be left to a DSP chip. We can use these radios, for example, to experiment with answers to the question: if the network protocols were aware of the full characteristics of the wireless channel, then how could they adapt so as to get better performance?
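One simple form of channel awareness is bit-rate adaptation: pick the fastest modulation the current channel can support. The sketch below is purely illustrative; the SNR thresholds and rates are hypothetical, loosely 802.11-flavoured numbers, not taken from any project mentioned here.

```python
# Hypothetical SNR thresholds (dB) paired with bit rates (Mbit/s),
# fastest first. These numbers are made up for illustration.
RATE_TABLE = [(25, 54), (18, 36), (10, 18), (4, 6)]

def pick_rate(snr_db):
    """Channel-aware rate adaptation sketch: choose the fastest
    rate whose SNR threshold the current channel meets."""
    for threshold, rate in RATE_TABLE:
        if snr_db >= threshold:
            return rate
    return 1  # fall back to the most robust rate

# A channel at 20 dB supports 36 Mbit/s under this table;
# a very noisy 2 dB channel falls back to the base rate.
```

Protocols with richer channel information could go further than a threshold table—for example, exploiting per-subcarrier quality rather than a single SNR figure.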

Researchers: Kyle Jamieson and Brad Karp
Find out more: Wireless projects

Illustration: lockpicking tools on a keyboard

Building secure systems with unreliable parts

The most insecure parts of a computer system are the programmer and the operator. Can you trust a shopping site to have configured its servers securely so that your credit card details don't leak, and can you trust that there are no vulnerabilities in the web server and the operating system that support it?

We aim to find clean design principles and to provide software toolkits, so that programmers can more easily reason about security for complex computer systems.
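One such principle is to confine sensitive work to a separate compartment. The sketch below is not Wedge itself—just a minimal Unix-process illustration of the idea: run a function in a forked child, so that whatever secrets it handles never appear in the long-running parent's address space.

```python
import os

def in_compartment(fn):
    """Run fn in a forked child process. The child gets a copy of
    memory, so anything fn creates or modifies (e.g. a decrypted
    key) is discarded when the child exits and never becomes
    visible to the parent."""
    pid = os.fork()
    if pid == 0:          # child: do the sensitive work, then vanish
        fn()
        os._exit(0)
    os.waitpid(pid, 0)    # parent: wait, but share no memory

# Changes made inside the compartment do not leak back:
leaked = []
in_compartment(lambda: leaked.append("secret"))
# 'leaked' is still empty in the parent.
```

Real compartmentalisation systems add fine-grained policies over which memory and file descriptors each compartment may access; a bare fork only gives the coarsest isolation.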

Researchers: Brad Karp and Mark Handley
Find out more: Wedge, a technique for building secure bulkheads between components of a large program.

The Internet as a machine for calculating how to share resources

Millions of devices use the Internet, all of them sensing congestion and backing off when they detect overload, and no one device can see more than a tiny fraction of the links and routers in the Internet. And yet, it was discovered in 2000 that the net outcome is that the Internet's capacity is shared so as to maximize the sum total of each device's 'happiness'. The happiness function was not designed on purpose; it simply emerges from the way the algorithms work.
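A toy simulation makes the emergent fairness concrete. The sketch below is a deliberately simplified additive-increase/multiplicative-decrease (AIMD) model with made-up constants: two flows share one link, each sees only whether the link is overloaded, yet their rates converge to equal shares—the allocation that maximizes the sum of logarithmic 'happiness' functions on a single link.

```python
def aimd_share(capacity=100.0, rounds=2000):
    """Two AIMD flows on one shared link (toy model): each round,
    probe for more bandwidth; on overload, both halve their rate.
    Halving shrinks the gap between the flows, so rates converge
    toward an equal split even though neither flow sees the other."""
    x1, x2 = 80.0, 10.0              # start far from fair
    for _ in range(rounds):
        if x1 + x2 > capacity:       # link overloaded: back off
            x1 *= 0.5
            x2 *= 0.5
        else:                        # room to spare: probe upward
            x1 += 1.0
            x2 += 1.0
    return x1, x2
```

After many rounds the two rates are essentially identical, oscillating together in the usual AIMD sawtooth between roughly a quarter and a half of the link capacity each.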

The picture illustrates an input-queued switch, the device at the heart of high-speed Internet routers; switches are another case in which global optimization emerges from the scheduling algorithm.
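A well-studied scheduling policy for input-queued switches is maximum-weight matching: in each time slot, serve the input-to-output matching with the greatest total queue length. The brute-force sketch below is illustrative only—fine for tiny switches, though real routers use faster approximations.

```python
from itertools import permutations

def max_weight_schedule(q):
    """Pick the input-output matching with the largest total
    queue length. q[i][j] is the number of packets queued at
    input i destined for output j. Returns a permutation p,
    where input i is connected to output p[i]."""
    n = len(q)
    return max(permutations(range(n)),
               key=lambda p: sum(q[i][p[i]] for i in range(n)))

# 2x2 example: serving (0->0, 1->1) drains weight 3 + 4 = 7,
# whereas (0->1, 1->0) drains only 1 + 2 = 3.
q = [[3, 1],
     [2, 4]]
```

The remarkable fact is that repeatedly making this greedy local choice keeps every queue stable whenever stability is possible at all—another instance of global optimization emerging from a simple algorithm.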

We are developing mathematical techniques for analysing the behaviour of such networks, with emphasis on queueing theory and fluid models.

Researcher: Damon Wischik
Find out more: The teleology of scheduling algorithms

Network Structure

Structure fundamentally affects function. We characterise network structures, study the relation between structures and critical properties such as network resilience, efficiency and security, and model the evolution of networks.

We introduced the rich-club coefficient to quantify how tightly the well-connected nodes, the 'rich' nodes, are interconnected with each other. We reported that the Internet features a rich-club structure in which the largest ISPs are almost fully interconnected, whereas many social networks, surprisingly, do not exhibit such a structure. We introduced the Positive-Feedback Preference (PFP) model, which has been regarded as one of the most advanced Internet topology generators.
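The rich-club coefficient itself is straightforward to compute: for a degree threshold k, it is the fraction of possible links among the nodes of degree greater than k that are actually present. A plain-Python sketch (the example graph at the end is invented for illustration):

```python
from collections import Counter

def rich_club_coefficient(edges, k):
    """Rich-club coefficient: density of links among nodes of
    degree greater than k. edges is a list of undirected (u, v)
    pairs. Returns None if fewer than two nodes qualify."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    rich = {node for node, d in degree.items() if d > k}
    n = len(rich)
    if n < 2:
        return None
    links = sum(1 for u, v in edges if u in rich and v in rich)
    return 2 * links / (n * (n - 1))   # actual / possible links

# Toy graph: a triangle of hubs a, b, c, each with one pendant node.
edges = [('a', 'b'), ('b', 'c'), ('a', 'c'),
         ('a', 'd'), ('b', 'e'), ('c', 'f')]
# For k = 2 the rich nodes are a, b, c, and all 3 possible links
# among them exist, so the coefficient is 1.0: a perfect rich club.
```

In practice the coefficient is compared against randomised versions of the same network to decide whether the observed clubbiness is significant.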

Researcher: Shi Zhou
Find out more: Modelling complex networks

Deployment projects

The ancient Silk Road was not only a trade route but also an all-important road for the transfer of information and knowledge between major regions of the world. UCL is part of a consortium that is bringing cost-effective global Internet connectivity to Afghanistan, the Caucasus and Central Asia, through satellite and fibre technology. This is one of several large-scale deployment projects at UCL.

Researcher: Peter Kirstein
Find out more: 6CHOICE, 6DEPLOY, AVATS, GLOBAL, Silk