## 11.6. RDPGs and more general network models

### 11.6.1. Random Dot Product Graph (RDPG)

#### 11.6.1.1. A Priori RDPG
The a priori Random Dot Product Graph is an RDPG in which we know a priori the latent position matrix $X$. The a priori RDPG has a single parameter:

| Parameter | Space | Description |
| --- | --- | --- |
| $X$ | $\mathbb{R}^{n \times d}$ | The matrix of latent positions for each node |
Noting that the probability matrix is $P = XX^\top$, each entry is the inner product of two latent positions: $p_{ij} = \vec x_i^\top \vec x_j$. For the model to be well-defined, each of these inner products must be a valid probability; that is, each $p_{ij}$ must lie in $[0, 1]$.
What is the generative model for the a priori RDPG? As we discussed above, given the latent position matrix $X$, each edge is sampled independently as $\mathbf a_{ij} \sim Bern(\vec x_i^\top \vec x_j)$, for every pair of nodes $i < j$.
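To make the generative model concrete, here is a minimal numpy sketch. The latent position matrix below is a hypothetical example chosen for illustration, with rows in the intersection of the unit ball and the non-negative orthant so that every inner product is a valid probability:

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical latent position matrix X: n = 4 nodes, d = 2 dimensions.
X = np.array([[0.6, 0.2],
              [0.5, 0.5],
              [0.1, 0.7],
              [0.4, 0.4]])

P = X @ X.T  # probability matrix: p_ij = <x_i, x_j>

# Sample each edge independently: a_ij ~ Bern(p_ij), then symmetrize
# and remove self-loops for an undirected simple network.
A = rng.binomial(1, P)
A = np.triu(A, k=1)
A = A + A.T
```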
##### 11.6.1.1.1. Probability
Given the latent position matrix $X$, the probability of observing a network $A$ follows directly from the independence of the edges:

$$\Pr_\theta(A) = \prod_{j > i} (\vec x_i^\top \vec x_j)^{a_{ij}} (1 - \vec x_i^\top \vec x_j)^{1 - a_{ij}}$$
Unfortunately, the probability equivalence classes are a bit harder to understand intuitively here than in the ER and SBM examples, so we won't write them down, but they still exist!
#### 11.6.1.2. A Posteriori RDPG
Like for the a posteriori SBM, the a posteriori RDPG introduces another strange set: the intersection of the unit ball and the non-negative orthant. Huh? This sounds like a real mouthful, but it turns out to be rather straightforward. You are probably already very familiar with a particular orthant: in two dimensions, an orthant is called a quadrant. Basically, an orthant just extends the concept of a quadrant to spaces which might have more than two dimensions. The orthant we care about is the non-negative orthant: the set of points whose coordinates are all non-negative.

In two dimensions, this is the traditional upper-right portion of the standard coordinate axis. To give you a picture, the non-negative orthant in two dimensions is shaded below:
```python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axisartist import SubplotZero
import matplotlib.patches as patch

class myAxes:
    def __init__(self, xlim=(-5, 5), ylim=(-5, 5), figsize=(6, 6)):
        self.xlim = xlim
        self.ylim = ylim
        self.figsize = figsize
        self.__scale_arrows()

    def __drawArrow(self, x, y, dx, dy):
        plt.arrow(
            x, y, dx, dy,
            color='k',
            clip_on=False,
            head_width=self.head_width,
            head_length=self.head_length
        )

    def __scale_arrows(self):
        """Make the arrows look good regardless of the axis limits."""
        xrange = self.xlim[1] - self.xlim[0]
        yrange = self.ylim[1] - self.ylim[0]
        self.head_width = min(xrange / 30, 0.25)
        self.head_length = min(yrange / 30, 0.3)

    def __drawAxis(self):
        """Draws the 2D cartesian axis."""
        # A subplot with two additional axes, "xzero" and "yzero",
        # corresponding to the cartesian axis lines through the origin
        ax = SubplotZero(self.fig, 1, 1, 1)
        self.fig.add_subplot(ax)
        # make the xzero and yzero axis lines visible
        for axis in ["xzero", "yzero"]:
            ax.axis[axis].set_visible(True)
        # make the other axes (left, right, bottom, top) invisible
        for n in ["left", "right", "bottom", "top"]:
            ax.axis[n].set_visible(False)
        # Plot limits
        plt.xlim(self.xlim)
        plt.ylim(self.ylim)
        ax.set_yticks([-1, 1])
        ax.set_xticks([-2, -1, 0, 1, 2])
        # Draw the arrows at the ends of the axis lines
        self.__drawArrow(self.xlim[1], 0, 0.01, 0)  # x-axis arrow
        self.__drawArrow(0, self.ylim[1], 0, 0.01)  # y-axis arrow
        self.ax = ax

    def draw(self):
        # First draw the axis
        self.fig = plt.figure(figsize=self.figsize)
        self.__drawAxis()

axes = myAxes(xlim=(-2.5, 2.5), ylim=(-2, 2), figsize=(9, 7))
axes.draw()
rectangle = patch.Rectangle((0, 0), 3, 3, fc='blue', ec='blue', alpha=.2)
axes.ax.add_patch(rectangle)
plt.show()
```

Now, what is the unit ball? You are probably familiar with the idea of the unit ball, even if you haven't heard it called that specifically. Remember that the Euclidean norm for a point $\vec x$ with $d$ dimensions is defined as:

$$\|\vec x\|_2 = \sqrt{\sum_{i = 1}^d x_i^2}$$

The Euclidean unit ball is just the set of points whose Euclidean norm is at most $1$: $\left\{\vec x : \|\vec x\|_2 \leq 1\right\}$.
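For concreteness, here is a tiny sketch that checks unit-ball membership numerically (the two test points are arbitrary):

```python
import numpy as np

def in_unit_ball(x):
    # A point is in the Euclidean unit ball when its norm is at most 1.
    return np.linalg.norm(x) <= 1

print(in_unit_ball(np.array([0.6, 0.6])))  # norm is about 0.849, so True
print(in_unit_ball(np.array([1.0, 1.0])))  # norm is about 1.414, so False
```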
We draw the unit ball in two dimensions below in red:
```python
axes = myAxes(xlim=(-2.5, 2.5), ylim=(-2, 2), figsize=(9, 7))
axes.draw()
circle = patch.Circle((0, 0), 1, fc='red', ec='red', alpha=.3)
axes.ax.add_patch(circle)
plt.show()
```

Now what is their intersection? Remember that the intersection of two sets $A$ and $B$ is the set:

$$A \cap B = \{x : x \in A \text{ and } x \in B\}$$

That is, each element must be in both sets to be in the intersection. The intersection of the unit ball and the non-negative orthant will be the set:

$$\left\{\vec x : \|\vec x\|_2 \leq 1 \text{ and } x_i \geq 0 \text{ for all } i\right\}$$

Visually, this is the set of points in the overlap of the unit ball and the non-negative orthant, which we show below in purple:
```python
axes = myAxes(xlim=(-2.5, 2.5), ylim=(-2, 2), figsize=(9, 7))
axes.draw()
circle = patch.Circle((0, 0), 1, fc='red', ec='red', alpha=.3)
axes.ax.add_patch(circle)
rectangle = patch.Rectangle((0, 0), 3, 3, fc='blue', ec='blue', alpha=.2)
axes.ax.add_patch(rectangle)
plt.show()
```

This space has an incredibly important corollary. It turns out that if $\vec x$ and $\vec y$ are both points in the intersection of the unit ball and the non-negative orthant, then their inner product $\vec x^\top \vec y$ is always between $0$ and $1$: the non-negative coordinates make the inner product at least $0$, and the Cauchy-Schwarz inequality gives $\vec x^\top \vec y \leq \|\vec x\|_2 \|\vec y\|_2 \leq 1$. In other words, inner products of points in this space are always valid probabilities.
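We can sanity-check this corollary numerically. The sampling scheme below (non-negative random directions scaled by a random radius) is just one arbitrary way to generate points in the intersection; the point is that every pairwise inner product lands in $[0, 1]$:

```python
import numpy as np

rng = np.random.default_rng(42)

# Sample points in the intersection of the unit ball and the
# non-negative orthant: take a non-negative unit direction, then
# scale it by a radius in [0, 1].
n, d = 100, 3
directions = np.abs(rng.standard_normal((n, d)))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
radii = rng.uniform(0, 1, size=(n, 1))
X = radii * directions

# Every pairwise inner product is a valid probability.
inner = X @ X.T
print(inner.min() >= 0 and inner.max() <= 1)  # True
```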
The a posteriori RDPG is to the a priori RDPG what the a posteriori SBM was to the a priori SBM. We instead suppose that we do not know the latent position matrix $X$ ahead of time; rather, the latent positions are themselves random variables with a distribution of their own. The a posteriori RDPG has a single parameter:

| Parameter | Space | Description |
| --- | --- | --- |
| $F$ | inner-product distributions | A distribution which governs each latent position. |
The parameter $F$ is called an inner-product distribution: a distribution whose support is the intersection of the unit ball and the non-negative orthant. As we saw above, this guarantees that the inner product of any two latent positions sampled from $F$ is a valid probability. The latent positions $\vec x_1, \dots, \vec x_n$ are sampled independently and identically from $F$.
The model for edges of the a posteriori RDPG can be described by conditioning on this unobserved latent-position matrix. We write down that, conditioned on $X$, the edges are sampled independently as $\mathbf a_{ij} \sim Bern(\vec x_i^\top \vec x_j)$, for every pair $i < j$.
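A sketch of the two-stage sampling procedure follows. The particular inner-product distribution used here (non-negative directions scaled into the unit ball) is a stand-in chosen for illustration; the model only requires that $F$ be supported on the intersection described above:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_latent_positions(n, d, rng):
    # One illustrative inner-product distribution: non-negative unit
    # directions scaled by a uniform radius, so samples stay inside
    # the intersection of the unit ball and the non-negative orthant.
    directions = np.abs(rng.standard_normal((n, d)))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    return rng.uniform(0, 1, size=(n, 1)) * directions

n, d = 50, 2
X = sample_latent_positions(n, d, rng)  # latent variables: unobserved
P = X @ X.T

# Conditioned on X, edges are independent Bern(p_ij); symmetrize and
# remove self-loops for an undirected simple network.
A = rng.binomial(1, P)
A = np.triu(A, k=1)
A = A + A.T
```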
##### 11.6.1.2.1. Probability
The probability for the a posteriori RDPG is fairly complicated. This is because, like the a posteriori SBM, we do not actually get to see the latent position matrix $X$; to compute the probability of a network $A$, we must integrate the latent positions out.

We will let $\mathcal X$ denote the space of all possible latent position matrices, and marginalize over it:

$$\Pr_\theta(A) = \int_{\mathcal X} \Pr_\theta(A, X) \, dX$$
Next, we will simplify this expression a little bit more, using the definition of a conditional probability like we did before for the SBM:

$$\Pr_\theta(A, X) = \Pr_\theta(A | X) \Pr_\theta(X)$$
Further, remember that the latent positions $\vec x_1, \dots, \vec x_n$ (the rows of $X$) are sampled independently and identically from $F$, which means that:

$$\Pr_\theta(X) = \prod_{i = 1}^n \Pr_\theta(\vec x_i)$$
Finally, we note that, conditional on the latent position matrix $X$, the edges are independent $Bern(\vec x_i^\top \vec x_j)$ random variables. This implies that:

$$\Pr_\theta(A | X) = \prod_{j > i} (\vec x_i^\top \vec x_j)^{a_{ij}} (1 - \vec x_i^\top \vec x_j)^{1 - a_{ij}}$$
So our complete expression for the probability is:

$$\Pr_\theta(A) = \int_{\mathcal X} \prod_{j > i} (\vec x_i^\top \vec x_j)^{a_{ij}} (1 - \vec x_i^\top \vec x_j)^{1 - a_{ij}} \prod_{i = 1}^n \Pr_\theta(\vec x_i) \, dX$$
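This integral rarely has a closed form, but it can be approximated by simple Monte Carlo: sample many latent position matrices from $F$ and average the conditional probability $\Pr_\theta(A | X)$. The sketch below uses an illustrative inner-product distribution that is not part of the model, and a small hypothetical network:

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_X(n, d, rng):
    # An illustrative inner-product distribution (a stand-in for F):
    # non-negative unit directions scaled into the unit ball.
    dirs = np.abs(rng.standard_normal((n, d)))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return rng.uniform(0, 1, size=(n, 1)) * dirs

def log_prob_given_X(A, X):
    # log Pr(A | X) = sum_{i<j} a_ij log p_ij + (1 - a_ij) log(1 - p_ij)
    P = np.clip(X @ X.T, 1e-12, 1 - 1e-12)
    iu = np.triu_indices_from(A, k=1)
    a, p = A[iu], P[iu]
    return np.sum(a * np.log(p) + (1 - a) * np.log1p(-p))

# A small hypothetical observed network.
n, d = 10, 2
A_obs = rng.binomial(1, 0.3, size=(n, n))
A_obs = np.triu(A_obs, 1)
A_obs = A_obs + A_obs.T

# Monte Carlo: Pr(A) is approximately the average of Pr(A | X^(k))
# over samples X^(k) drawn from F.
m = 2000
probs = [np.exp(log_prob_given_X(A_obs, sample_X(n, d, rng))) for _ in range(m)]
estimate = np.mean(probs)
```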
### 11.6.2. Generalized Random Dot Product Graph (GRDPG)
The Generalized Random Dot Product Graph, or GRDPG, is the most general random network model we will consider in this book. Note that for the RDPG, the probability matrix $P = XX^\top$ is necessarily symmetric: the probability of an edge from node $i$ to node $j$ is always the same as the probability of an edge from node $j$ to node $i$. The GRDPG removes this restriction by giving each node two latent positions, a left latent position and a right latent position, so the two probabilities can differ.
#### 11.6.2.1. A Priori GRDPG

The a priori GRDPG is a GRDPG in which we know a priori the latent position matrices $X$ and $Y$. It has two parameters:

| Parameter | Space | Description |
| --- | --- | --- |
| $X$ | $\mathbb{R}^{n \times d}$ | The matrix of left latent positions for each node |
| $Y$ | $\mathbb{R}^{n \times d}$ | The matrix of right latent positions for each node |
What is the generative model for the a priori GRDPG? As we discussed above, given $X$ and $Y$, each edge is sampled independently as $\mathbf a_{ij} \sim Bern(\vec x_i^\top \vec y_j)$, for every pair of nodes $i$ and $j$.
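A minimal numpy sketch follows. The left and right latent position matrices are hypothetical, chosen so that every entry of $P = XY^\top$ is a valid probability; note that $P$ need not be symmetric, which is exactly what the GRDPG buys us:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical left (X) and right (Y) latent position matrices.
X = np.array([[0.7, 0.1],
              [0.2, 0.6],
              [0.5, 0.4]])
Y = np.array([[0.3, 0.5],
              [0.6, 0.2],
              [0.1, 0.8]])

P = X @ Y.T             # p_ij = <x_i, y_j>; P is not symmetric here
A = rng.binomial(1, P)  # directed network: each a_ij sampled independently
```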
#### 11.6.2.2. A Posteriori GRDPG

The a posteriori GRDPG is very similar to the a posteriori RDPG. We have two parameters:

| Parameter | Space | Description |
| --- | --- | --- |
| $F$ | inner-product distributions | A distribution which governs the left latent positions. |
| $G$ | inner-product distributions | A distribution which governs the right latent positions. |
Here, we treat the left and right latent position matrices as latent variable matrices, like we did for the a posteriori RDPG. That is, the left latent positions $\vec x_1, \dots, \vec x_n$ are sampled independently and identically from $F$, and the right latent positions $\vec y_1, \dots, \vec y_n$ are sampled independently and identically from $G$.

The model for edges of the a posteriori GRDPG can be described by conditioning on the unobserved left and right latent-position matrices. We write down that, conditioned on $X$ and $Y$, the edges are sampled independently as $\mathbf a_{ij} \sim Bern(\vec x_i^\top \vec y_j)$.
### 11.6.3. Inhomogeneous Erdös-Rényi (IER)
In the preceding models, we typically made assumptions about how we could characterize the edge-existence probabilities using far fewer than $\binom{n}{2}$ parameters, one for each potential edge.
Nonetheless, the most general model for an independent-edge random network is known as the Inhomogeneous Erdös-Rényi (IER) Random Network. An IER Random Network is characterized by the following parameter:

| Parameter | Space | Description |
| --- | --- | --- |
| $P$ | $[0, 1]^{n \times n}$ | The edge probability matrix. |
The probability matrix $P$ can be any matrix whose entries are valid probabilities: each entry $p_{ij}$ gives the probability of an edge between nodes $i$ and $j$, and each edge is sampled independently as $\mathbf a_{ij} \sim Bern(p_{ij})$.
It is worth noting that all of the preceding models we have discussed so far are special cases of the IER model. This means that, for instance, if we were to consider only the probability matrices where all of the entries are the same, we could represent the ER models. Similarly, if we were to consider only the probability matrices whose entries depend only on the community assignments of the incident nodes, we could represent the SBMs.
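To illustrate these two special cases with hypothetical numbers, each amounts to a particular choice of the probability matrix $P$:

```python
import numpy as np

n = 6

# ER as a special case of IER: every entry of P is the same value p.
p = 0.3
P_er = np.full((n, n), p)

# An SBM as a special case: entries of P depend only on the community
# assignments of the incident nodes. Here, a hypothetical block matrix B
# and two communities of three nodes each.
B = np.array([[0.8, 0.1],
              [0.1, 0.6]])
z = np.array([0, 0, 0, 1, 1, 1])  # community assignment of each node
P_sbm = B[z][:, z]                # P_sbm[i, j] = B[z[i], z[j]]
```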
The IER Random Network can be thought of as the limit of Stochastic Block Models, as the number of communities equals the number of nodes in the network. Stated another way, an SBM Random Network where each node is in its own community is equivalent to an IER Random Network. Under this formulation, note that the block matrix for such an SBM, $B$, is an $n \times n$ matrix, since each of the $n$ nodes is in its own community; in this case, the block matrix $B$ is exactly the probability matrix $P$.
#### 11.6.3.1. Probability
The probability for a network which is IER is very straightforward. We use the independence assumption and the p.m.f. of a Bernoulli-distributed random variable:

$$\Pr_\theta(A) = \prod_{j > i} p_{ij}^{a_{ij}} (1 - p_{ij})^{1 - a_{ij}}$$
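A short sketch, with an arbitrary symmetric probability matrix chosen for illustration, samples an IER network and then evaluates this probability directly:

```python
import numpy as np

rng = np.random.default_rng(5)

n = 5
# An arbitrary symmetric probability matrix with entries in [0, 1].
P = rng.uniform(0, 1, size=(n, n))
P = (P + P.T) / 2

# Sample an IER network: a_ij ~ Bern(p_ij), symmetrized, no self-loops.
A = rng.binomial(1, P)
A = np.triu(A, 1)
A = A + A.T

# Evaluate Pr(A; P) = prod_{i<j} p_ij^{a_ij} (1 - p_ij)^{1 - a_ij}
iu = np.triu_indices(n, k=1)
a, p = A[iu], P[iu]
prob = np.prod(p**a * (1 - p)**(1 - a))
```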