Mercator: uncovering faithful hyperbolic embeddings of complex networks




Guillermo García-Pérez, Antoine Allard, M Ángeles Serrano, Marián Boguñá

Publisher: Institute of Physics Publishing

Publication year: 2019

Journal: New Journal of Physics

Article number: 123033

Volume: 21

Issue: 12

Number of pages: 17

ISSN: 1367-2630

DOI: https://doi.org/10.1088/1367-2630/ab57d2

https://iopscience.iop.org/article/10.1088/1367-2630/ab57d2

https://research.utu.fi/converis/portal/detail/Publication/45454239



We introduce Mercator, a reliable embedding method to map real complex
networks into their hyperbolic latent geometry. The method assumes that
the structure of networks is well described by the popularity × similarity ${{\mathbb{S}}}^{1}/{{\mathbb{H}}}^{2}$
static geometric network model, which can accommodate arbitrary degree
distributions and reproduces many pivotal properties of real networks,
including self-similarity patterns. The algorithm mixes machine learning
and maximum likelihood (ML) approaches to infer the coordinates of the
nodes in the underlying hyperbolic disk that best match the observed
network topology to the geometric model. In its fast mode, Mercator uses
a model-adjusted machine learning technique that performs dimensional
reduction to produce a fast and accurate map whose quality already
surpasses that of other embedding algorithms in the literature. In the
refined Mercator mode, the fast-mode embedding is taken as the initial
condition of an ML estimation, which significantly improves the quality
of the final embedding. Beyond its accuracy as an embedding tool,
Mercator has the clear advantage of systematically inferring not only
node orderings, or angular positions, but also the hidden degrees and
global model parameters, and it can embed networks with arbitrary degree
distributions. Overall, our results suggest that mixing machine learning
and ML techniques in a model-dependent framework can boost the
meaningful mapping of complex networks.
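In the ${{\mathbb{S}}}^{1}$ model the abstract refers to, each node carries an angular (similarity) coordinate θ and a hidden (popularity) degree κ, and a pair of nodes is connected with a probability that decays with their rescaled effective distance. The sketch below illustrates that standard connection probability; the function and parameter names are illustrative choices, not Mercator's actual code or API.

```python
import math

# Illustrative sketch of the S^1 model's connection probability.
# theta: angular positions on a circle of radius n / (2*pi), so that
# the node density on the circle equals one; kappa: hidden degrees;
# beta: inverse-temperature (clustering) parameter; mu: parameter
# controlling the average degree. Names are assumptions for this sketch.
def connection_probability(theta_i, theta_j, kappa_i, kappa_j, beta, mu, n):
    """Probability that two nodes are linked in the S^1 model."""
    radius = n / (2.0 * math.pi)  # circle radius fixing node density to 1
    # shortest angular separation between the two nodes on the circle
    dtheta = math.pi - abs(math.pi - abs(theta_i - theta_j) % (2.0 * math.pi))
    chi = (radius * dtheta) / (mu * kappa_i * kappa_j)  # rescaled distance
    return 1.0 / (1.0 + chi ** beta)
```

An ML embedding step of the kind described above would then seek node coordinates maximizing the likelihood of the observed network, i.e. the product of these probabilities over observed links and of their complements over absent links.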

Last updated on 2024-11-26 at 21:29