I thought about "how to draw a graph" quite a bit this last month, and I have things to say about it. They're probably not that interesting though, so I'm going to say them in this forlorn blog.

Among the stupid abstract concepts that mathematicians seem to love, graphs are some of the most drawable. Which is unexpected, because their definition is completely disconnected from any form of spatial embedding: they don't need a metric space, or a concept of distance. Yet it's fairly obvious that "how a graph is drawn" (in two dimensions, on a sheet of paper) matters a lot. Take for example Lombardi's artsy conspiracies. Would we care about the relations between the Bush and bin Laden families without these beautiful drawings?

One of Lombardi's conspirationist and wonderful graph layouts; they are hard to find in high resolution. The establishment, no doubt.

So graphs are inextricably related to graph layouts. And finding the "right" graph layout with a computer (or without, for that matter) is a complicated problem. In part because "right" is hard to define (both aesthetically and practically), but even with a well-defined objective function, graphs tend to be large and optimization becomes complicated fast.

Force-directed layouts

Graph drawing has a rich history that I'm not going to explore. Focusing on modern algorithms, and ignoring trivial or specialised ones, most layouts are more or less involved versions of what's called a force-directed layout (FDL).

A force-directed layout is a natural idea, for a physicist at least. Edges are replaced by some sort of attractive force (think springs) and nodes can repel each other. So, highly connected nodes cluster together, and isolated ones are pushed toward the periphery. For example, denoting $x_i$ the node positions, you'll be trying to minimize something like: $$ E = \sum_{\mathrm{i-j\ connected}} (|x_i - x_j| - l_{i, j})^2 + \sum_{i \neq j} \frac{e_{i, j}}{|x_i - x_j|} $$
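To make this concrete, here is a minimal sketch of what minimising that energy by plain gradient descent could look like, in Python with numpy. The rest length, repulsion constant and learning rate are arbitrary choices of mine; real implementations are far more careful about the optimisation.

```python
import numpy as np

def fdl_step(pos, edges, rest_len=1.0, repulsion=0.05, lr=0.01):
    """One gradient-descent step on the spring + repulsion energy."""
    grad = np.zeros_like(pos)
    # Attractive springs along the edges: (|x_i - x_j| - l)^2
    for i, j in edges:
        d = pos[i] - pos[j]
        dist = np.linalg.norm(d) + 1e-9
        g = 2 * (dist - rest_len) * d / dist
        grad[i] += g
        grad[j] -= g
    # Pairwise repulsion between all nodes: e / |x_i - x_j|
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[i] - pos[j]
            dist = np.linalg.norm(d) + 1e-9
            g = -repulsion * d / dist**3  # gradient of e/dist w.r.t. pos[i]
            grad[i] += g
            grad[j] -= g
    return pos - lr * grad

# A tiny example: 4 nodes, random start, a few hundred steps.
rng = np.random.default_rng(0)
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
pos = rng.standard_normal((4, 2))
for _ in range(500):
    pos = fdl_step(pos, edges)
```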

Obviously you could consider an infinity of small variations starting with this template:

  • Some kind of central gravity to avoid nodes flying far away
  • An "electric field" for directed graphs
  • Fixed nodes
  • Penalties to avoid edge crossings

Ultimately, by choosing custom potentials it's easy to place every node exactly where you want it, which kind of defeats the point. What's more, most of these options are not readily implemented (unless you like 40-year-old pseudo-code). igraph (the reference, at least in R/python1) only implements standard algorithms: fruchterman-reingold (repelling force between nodes), kamada-kawai (springs alone, but with non-zero lengths at rest) and drl (a variation on fruchterman-reingold). The minimisation step is often done with some sort of annealing process, which can become quite involved2.
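For what it's worth, calling these standard layouts in python-igraph looks something like this (the plotting call assumes the cairo backend is installed):

```python
import igraph as ig

g = ig.Graph.Famous("Zachary")  # a small classic test graph

fr = g.layout_fruchterman_reingold()  # repelling nodes
kk = g.layout_kamada_kawai()          # springs with non-zero rest lengths
drl = g.layout_drl()                  # the drl / OpenOrd variant

ig.plot(g, layout=fr)
```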

On that note, Wikipedia criticizes FDL layouts for rarely reaching their true optimum. I tend to disagree with Wikipedia here: metastable states are a good thing in this type of layout, as they allow you to nudge the graph in the right direction with smart initial conditions (something that's used quite a bit in the drl algorithm, for example). Suboptimal can be good.

A typical example to test these different algorithms is the so-called "Swiss roll" graph. To build it, start with $L \times N$ dots arranged on a two-dimensional rectangular grid, then roll this rectangle along one of its axes in three dimensions. The distances between the dots can then be used to make a complete graph with weighted edges.3
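Here is how one might build it with numpy and scipy; the grid size and the spiral parametrisation are arbitrary choices of mine.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# L x N dots on a grid; one coordinate gets rolled into a spiral in 3D.
L, N = 20, 30
theta = np.linspace(1.5 * np.pi, 4.5 * np.pi, L)  # rolled axis
height = np.linspace(0.0, 10.0, N)                # flat axis
tt, hh = np.meshgrid(theta, height)
points = np.column_stack([
    tt.ravel() * np.cos(tt.ravel()),
    tt.ravel() * np.sin(tt.ravel()),
    hh.ravel(),
])

# Pairwise 3D distances define a complete weighted graph,
# with weights taken as inverse distances (see footnote 3).
dist = squareform(pdist(points))
weights = np.zeros_like(dist)
mask = dist > 0
weights[mask] = 1.0 / dist[mask]
```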

Projection of the 3-dimensional Swiss roll on the left and the result of the fruchterman-reingold algorithm on the right.

As visible above, the FDL algorithm does a good job with this dataset: even if some of the clusters are not as separated as they could be, the structure is well preserved.

Even then, this example hints at another option, which is less used (and arguably less good in many cases) but, at least to me, feels more interesting conceptually. It consists of two stages:

  • First, embed the graph into a high dimensional space where its internal tensions are at least somewhat resolved
  • Then, reduce back to two dimensions, while keeping the structure of the high dimensional space.

Multidimensional scaling

To resolve the tensions created by the edges in the high dimensional space, we would like the euclidean distances between pairs of nodes to be inversely proportional to the weights of the edges.

Let's start simple and assume that:

  1. The graph is complete (all pairs of nodes are linked)
  2. The inverses of the weights between pairs of nodes are already equal to the euclidean distances between points in some high dimensional space of (unknown) dimension

Assuming this to be true, we just need to recover the node positions given the distances. This is a well-known problem, called multidimensional scaling. And in the case of euclidean distances it admits an elegant solution. This is well known, but I can't resist re-deriving it here.

Let's call $D$ the matrix verifying $D_{i, j} = w_{i, j}^{-1}$. If $D$ represents euclidean distances, $D_{i, j}^2 = || \mathbf{x}_i - \mathbf{x}_j ||^2$, then, picking an arbitrary reference node $a$:

$$S_{i, j} = \frac{1}{2}\left(D_{a, i}^2 + D_{a, j}^2 - D_{i, j}^2\right) = (\mathbf{x}_i - \mathbf{x}_a)\cdot(\mathbf{x}_j - \mathbf{x}_a)$$

So $S = \mathbf{x}^T \mathbf{x}$, where $\mathbf{x}$ is the matrix whose columns are the $\mathbf{x}_i - \mathbf{x}_a$, and $S$ is positive semi-definite4. If the eigenvalues and eigenvectors of $S$ are defined as $\lambda_i$ and $\mathbf{u_i}$ respectively, then5:

$$ \mathbf{x} = \sum_j \mathbf{u_j} \sqrt{\lambda_j} $$

So in practice, recovering the positions of the dots is as easy as diagonalising the matrix $S$.
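A minimal numpy sketch of this classical MDS step, assuming `D` is a valid euclidean distance matrix (the function name and the reference-node convention are mine):

```python
import numpy as np

def classical_mds(D, a=0):
    """Recover node positions from a euclidean distance matrix D,
    using node `a` as the reference point."""
    D2 = D ** 2
    # S_ij = (D_ai^2 + D_aj^2 - D_ij^2) / 2 = (x_i - x_a).(x_j - x_a)
    S = 0.5 * (D2[a][:, None] + D2[a][None, :] - D2)
    lam, U = np.linalg.eigh(S)  # S is symmetric
    keep = lam > 1e-10          # drop numerically-zero directions
    return U[:, keep] * np.sqrt(lam[keep])  # one row per node
```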

However, in most cases hypothesis 2 is not verified, i.e. the matrix $D$ is not a distance matrix. The triangle inequality, for example, is unlikely to be respected for every graph. We can still apply the algorithm I sketched out above, in an arbitrary dimension, and hope for the best. $S$ is diagonalisable (because it's symmetric), so we can always find eigenvectors and eigenvalues. However, some of these eigenvalues may be negative. What happens if we simply replace the negative eigenvalues by zero (without modifying the associated eigenvectors)? Then, writing:

$$ \mathbf{x} = \sum_{j | \lambda_j > 0} \mathbf{u_j} \sqrt{\lambda_j} $$

The matrix $\mathbf{x}^T \mathbf{x}$ is then the closest positive semi-definite matrix to $S$, in terms of the Frobenius norm6. Explicitly, this algorithm gives the solution to the following minimisation problem (for an arbitrary reference point $a$):

$$ \sum_{i, j} \left( \frac{1}{2}\left(d_{a, i}^2 + d_{a, j}^2 - d_{i, j}^2\right) - (\mathbf{x}_i - \mathbf{x}_a)\cdot(\mathbf{x}_j - \mathbf{x}_a) \right)^2 $$

Assuming $S$ is close to positive semi-definite7, this becomes a spring model, with spring resting lengths equal to the $d_{i, j}$. Not so far from an FDL layout, in the end.
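In code, the only change to the earlier sketch is clipping the spectrum, something like:

```python
lam, U = np.linalg.eigh(S)
lam = np.clip(lam, 0.0, None)  # zero out the negative eigenvalues
x = U * np.sqrt(lam)           # x @ x.T is the closest PSD matrix to S
```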

Dimensionality reduction

Now we have a potentially very good graph representation, but it's in an arbitrary dimension. The return to the two-dimensional world can usually be done with one of the standard dimensionality reduction algorithms. PCA is the traditional one8, t-SNE or UMAP are the cool and trendy ones.
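With scikit-learn and the umap-learn package, the reduction step is a couple of lines; here `x_high` stands for the high-dimensional positions produced by the MDS sketch above.

```python
from sklearn.decomposition import PCA
import umap  # the umap-learn package

x_high = classical_mds(D)  # positions from the MDS step above
layout_pca = PCA(n_components=2).fit_transform(x_high)
layout_umap = umap.UMAP(n_components=2).fit_transform(x_high)
```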

Let's test this. With the Swiss roll example, this method works very well (obviously, it was made for it):

Multidimensional scaling followed by PCA, applied to the Swiss roll graph.

But how does it fare with a real-life example, where the edge weights may not represent actual distances?

Butterflies

Let's play with a cute dataset. In this graph, nodes represent photos of butterflies and edges the visual similarities between them. There are different (sub?)species of butterflies that we're going to use as ground-truth labels.

In that case the graph is not complete (hypothesis 1 is not verified), so a preprocessing step is needed. Because the graph is connected, it can be made complete by measuring the shortest (weighted) distances between unlinked points (typically using Dijkstra's algorithm).
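With scipy this preprocessing fits in a couple of lines; here I assume `adj` holds the weighted adjacency matrix, with zeros where there is no edge.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

# Convert weights to edge lengths (inverse weights, per the convention
# above); scipy treats zero entries of a dense matrix as "no edge".
lengths = np.zeros_like(adj)
lengths[adj > 0] = 1.0 / adj[adj > 0]

# Dijkstra from every node fills in the missing pairwise distances.
D = shortest_path(lengths, method="D", directed=False)
```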

The results are shown below. The MDS projection via PCA does badly, but the MDS-UMAP one performs very well, and the result is qualitatively better than the layout based on a force-directed algorithm.

From left to right: the Fruchterman-Reingold algorithm, the PCA-based MDS-projection algorithm, and the UMAP-based MDS-projection algorithm.

It's packaged. This is not a very original idea (while researching it I found one paper that implemented MDS-PCA), and the fact that it's not used much is probably proof enough that in general it's not that great. So I don't encourage you to use it before the tried-and-true force-directed layout methods. Standards exist for a reason. But if these methods don't work for you, well, why not. It's a relatively fast method (though a larger dataset may kill your memory) and the high-dimensional space can be useful in itself.


1

Following the time-honored tradition of "if it isn't implemented in my favourite language it may as well not exist".

2

drl, or OpenOrd as it is called now, is particularly complicated but works impressively well. To my knowledge nobody has tried to recode it from scratch... probably for good reasons.

3

Hereafter, I'll take the convention that weights represent the inverse of a distance. This is of course arbitrary.

4

The converse is also true: if a matrix is positive semi-definite, it admits a Gram representation, i.e. there's some dimension in which its entries can be written as scalar products of vectors.

5

The $\lambda_i$ are non-negative, due to the positive semi-definiteness of $S$.

6

See Cheng and Higham, 1998 for a more general result. Also this blog post.

7

All its negative eigenvalues are close to $0$.