9. Where do we go from here?

Congratulations on making it to the final section of this book. Hopefully, you've learned a lot on your journey, and have seen how, armed with some statistical intuition and insight, you can bring new approaches for visualizing, representing, and analyzing network-valued data to your work, research, personal musings, or any other domain where such data arises. If you enjoyed this book, we have great news: the field is both in its infancy and rapidly expanding, so there are many directions you can take from here to deepen your knowledge and keep up with network machine learning. We focused much of this book on simple networks for simplicity, but most of the techniques described here (the statistical models, spectral embeddings, and applications in particular) extend readily to more complicated networks.

Finally, there are many techniques that, in the interest of keeping the book cohesive and concise, we haven't covered! To finish the book off, we want to give you a brief introduction to some other flavors of machine learning approaches for networks; the first two of these are closely related to graph neural networks (or GNNs for short). In this section, we'll cover the following three topics:

  1. Graph Neural Networks

  2. Random walk and diffusion-based methods

  3. Network Sparsity

These topics might feel slightly tangential to the overall flavor of the book, as they cannot quite be described as statistical learning techniques. However, we will present them alongside related statistical learning techniques, with some intuition as to how they differ, so that the intuition you've already developed gives you insight into how they work under the hood.

The first two techniques bridge machine learning methods together: under the hood, they construct latent representations of the network using assorted strategies. They learn these representations implicitly, in that the representation itself is never the explicit goal for most use-cases. These implicit representations are then transformed for downstream applications using neural networks and other strategies. In this sense, you can think of these methods as extensions of the approaches developed in the preceding sections, where many of the intuitive steps taken in our earlier models can help you conceptualize what is going on internally in these representations.
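To make the idea of an implicitly learned latent representation a bit more concrete, here is a minimal sketch of a single graph-convolution-style propagation step in plain numpy: each node's features are averaged with its neighbors' (after a symmetric degree normalization) and passed through a weight matrix and a nonlinearity. The adjacency matrix, features, and weights below are made-up toy values we chose for illustration; in a real GNN the weights would be learned end-to-end over several stacked layers, and you would use a dedicated library rather than writing the step by hand.

```python
import numpy as np

# A toy adjacency matrix for a 4-node undirected network (illustrative values).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 2))  # node features: 4 nodes, 2 features each

# One GCN-style propagation step: add self-loops, symmetrically normalize,
# then mix each node's features with its neighbors' and apply a weight
# matrix followed by a nonlinearity.
A_tilde = A + np.eye(A.shape[0])                  # adjacency with self-loops
d = A_tilde.sum(axis=1)                           # degrees (including self-loops)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))            # D^{-1/2}
A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt         # normalized adjacency

W = np.random.default_rng(1).normal(size=(2, 3))  # stand-in for a trained weight matrix
H = np.maximum(A_hat @ X @ W, 0)                  # ReLU(A_hat X W): latent node representations

print(H.shape)  # (4, 3): one 3-dimensional latent representation per node
```

Stacking a few of these steps and training the weights against a downstream task (node classification, link prediction, and so on) is, at a high level, what a graph neural network does; random walk and diffusion-based methods arrive at related latent representations by treating walks over the network like sequences and embedding them.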

Next, we provide some background on an advanced network concept known as sparsity. If you have gotten through this book, you are probably ready to conceptualize sparsity in network data, which can be a major consideration when deciding how to store and analyze your real networks. We'll introduce the basics here, and leave you with some references to check out if you want to learn more.
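As a quick preview of why sparsity matters for storage, the sketch below contrasts a dense adjacency matrix with a compressed sparse row (CSR) representation using scipy. The network size and edge count are made-up values chosen purely for illustration.

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)

n = 100_000   # hypothetical number of nodes
m = 500_000   # hypothetical number of edges (far fewer than the ~n^2/2 possible pairs)

# Sample random edge endpoints as a stand-in for a real sparse edge list.
rows = rng.integers(0, n, size=m)
cols = rng.integers(0, n, size=m)
data = np.ones(m)

# Store the adjacency structure in CSR format: only the nonzero entries and
# two index arrays are kept, instead of all n*n entries.
A = sparse.csr_matrix((data, (rows, cols)), shape=(n, n))

csr_bytes = A.data.nbytes + A.indices.nbytes + A.indptr.nbytes
dense_bytes = n * n * np.dtype(float).itemsize   # a dense float matrix of the same shape

print(f"nonzero entries stored: {A.nnz}")
print(f"CSR storage:   {csr_bytes / 1e6:.1f} MB")
print(f"dense storage: {dense_bytes / 1e9:.1f} GB")
```

For a network like this, the dense matrix would need tens of gigabytes while the sparse representation fits in a few megabytes, which is why sparsity shapes both how you store and how you analyze large real-world networks.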