The problem
Quite often, we have too little data to perform valid inference. Consider a multivariate Gaussian distribution where the number of observations is small compared to the number of variables. This is, for example, often the case for graphical models used in biology or medicine. In such a setting, the usual way of estimating the covariance matrix (the maximum likelihood method) is not statistically sound: the sample covariance matrix is singular when there are fewer observations than variables. What now?
Invariance by permutation
In some cases, interchanging two variables in the random vector does not change its distribution. In the multivariate Gaussian case, this means the two variables have the same variance and the same covariances with the other respective variables. For instance, in the following covariance matrix, variables X1 and X3 are interchangeable, meaning that the vectors (X1, X2, X3) and (X3, X2, X1) have the same distribution.
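Here X1 and X3 share the same variance and the same covariance with X2; the specific values are chosen purely for illustration:
\[
\begin{pmatrix}
1 & 0.5 & 0.3 \\
0.5 & 2 & 0.5 \\
0.3 & 0.5 & 1
\end{pmatrix}
\]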
Now, we can state this interchangeability property in terms of permutations. In our case, the distribution of (X1, X2, X3) is invariant under the permutation (\(1\mapsto3\), \(3\mapsto1\)), written in cyclic form as \((1,3)(2)\). Equivalently, swapping the first and third rows and then swapping the first and third columns of the covariance matrix yields the same matrix. We then say that this covariance matrix is invariant by permutation.
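This row-and-column swap can be checked directly in R; a minimal sketch using the illustrative matrix above (the names Sigma and perm are ours, not part of the gips API):
Sigma <- matrix(c(
  1.0, 0.5, 0.3,
  0.5, 2.0, 0.5,
  0.3, 0.5, 1.0
), nrow = 3, byrow = TRUE)
perm <- c(3, 2, 1) # the permutation (1,3)(2): indices 1 and 3 swapped
all(Sigma[perm, perm] == Sigma) # TRUE, so Sigma is invariant by (1,3)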
Of course, in samples collected in the real world, such perfect equalities will never be observed. Still, if the respective entries of the (poorly) estimated covariance matrix are close, adopting a particular assumption of invariance by permutation is a reasonable step.
Package gips
We propose imposing constraints on the covariance matrix so that the maximum likelihood method can be used. The constraint we consider is invariance under permutation symmetry.
This package provides a way to find a reasonable permutation to be used as a constraint in covariance matrix estimation. In this case, reasonable means maximizing the Bayesian posterior distribution when using a Wishart-like distribution on symmetric, positive definite matrices as a prior. The idea, exact formulas, and algorithm sketch are explored in another vignette, which can be accessed with vignette("Theory", package="gips") or on its pkgdown page.
Example
library(gips)
toy_example_data
#> [,1] [,2] [,3] [,4] [,5]
#> [1,] -6.720747 8.560522 -1.723573 -1.695641 -0.7969862
#> [2,] -12.717667 3.743516 -1.615323 -1.975209 -0.3726433
#> [3,] -11.681954 5.690620 -1.964964 -3.076413 -1.9625259
#> [4,] -13.621191 6.949620 -2.859141 -3.890607 -2.7605052
dim(toy_example_data)
#> [1] 4 5
number_of_observations <- nrow(toy_example_data) # 4
perm_size <- ncol(toy_example_data) # 5
S <- cov(toy_example_data)
sum(eigen(S)$values > 0.00000001)
#> [1] 3
Note that the rank of the S matrix is 3, even though number_of_observations is 4. This is because cov() estimates the mean of every column when computing S, which uses up one degree of freedom.
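To see this degree-of-freedom argument directly, one can center the columns by hand and inspect the rank of the centered data matrix (a small illustrative check; X_centered is our own name, not part of the package):
X_centered <- scale(toy_example_data, center = TRUE, scale = FALSE) # subtract column means
qr(X_centered)$rank # 3: the centered rows sum to zero, so the rank is at most 4 - 1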
We want to find reasonable additional assumptions on S to make it easier to estimate.
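The plot discussed below comes from a gips object built on S; presumably something along these lines was run first (shown here with the default delta and D_matrix, which is an assumption):
g <- gips(S, number_of_observations)
plot(g, type = "heatmap")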
Looking at the plot, one can see similarities between columns 3, 4, and 5: they have similar variances and similar covariances with each other. Columns 3 and 5 in particular have similar covariances with columns 1 and 2, though column 4 is not far off either. Let's see whether gips finds this relationship:
g_map <- find_MAP(g, optimizer = "brute_force",
return_probabilities = TRUE, save_all_perms = TRUE)
#> ================================================================================
#> ================================================================================
plot(g_map, type = "heatmap")
gips decided that \((3,4,5)\) was the most reasonable assumption. Let's see how much better it is:
g_map
#> The permutation (3,4,5)
#> - was found after 120 log_posteriori calculations
#> - is 19.055 times more likely than the starting, () permutation.
This assumption is about nineteen times more believable than making no assumption at all. Let's examine how reasonable the other possible assumptions are:
get_probabilities_from_gips(g_map)
#> () (4,5) (3,4) (3,4,5) (3,5)
#> 0.01061282927 0.06171956464 0.04211999314 0.20223220920 0.04567419862
#> (2,3) (2,3)(4,5) (2,3,4) (2,3,4,5) (2,3,5,4)
#> 0.00581023111 0.01783654869 0.01551801497 0.05682939187 0.07903711778
#> (2,3,5) (2,4) (2,4,5) (2,4)(3,5) (2,4,3,5)
#> 0.04483174508 0.00632844959 0.09831655373 0.01384671335 0.05703725463
#> (2,5) (2,5)(3,4) (1,2) (1,2)(4,5) (1,2)(3,4)
#> 0.01796890911 0.01890647384 0.00445029902 0.01010029703 0.01112908332
#> (1,2)(3,4,5) (1,2)(3,5) (1,2,3) (1,2,3)(4,5) (1,2,3,4)
#> 0.05643114895 0.01048091275 0.00125289503 0.00577963435 0.00081399784
#> (1,2,3,4,5) (1,2,3,5,4) (1,2,3,5) (1,2,4,3) (1,2,4,5,3)
#> 0.00425749682 0.00735807992 0.00240359629 0.00028403009 0.00341026916
#> (1,2,4) (1,2,4,5) (1,2,4)(3,5) (1,2,4,3,5) (1,2,5,4,3)
#> 0.00233411313 0.00522571301 0.00758134816 0.00367149151 0.00257878934
#> (1,2,5,3) (1,2,5,4) (1,2,5) (1,2,5,3,4) (1,2,5)(3,4)
#> 0.00081794919 0.00530911969 0.00575507966 0.00370416241 0.01455544007
#> (1,3) (1,3)(4,5) (1,3,4) (1,3,4,5) (1,3,5,4)
#> 0.00021825685 0.00061927155 0.00016548479 0.00005041919 0.00005399096
#> (1,3,5) (1,3)(2,4) (1,3)(2,4,5) (1,3,2,4) (1,3,5)(2,4)
#> 0.00024122684 0.00003053547 0.00438962100 0.00049700802 0.00023070395
#> (1,3)(2,5) (1,3,2,5) (1,3,4)(2,5) (1,4) (1,4,5)
#> 0.00011037285 0.00159335260 0.00052851925 0.00055295545 0.00042223214
#> (1,4)(3,5) (1,4,3,5) (1,4)(2,3) (1,4,5)(2,3) (1,4)(2,3,5)
#> 0.00089431399 0.00005700363 0.00022150996 0.00020077919 0.00934379767
#> (1,4)(2,5) (1,4,2,5) (1,5) (1,5)(3,4) (1,5)(2,3)
#> 0.00119644677 0.00672864070 0.00082845248 0.00135409444 0.00047108119
#> (1,5)(2,3,4) (1,5)(2,4)
#> 0.00413882960 0.00054995409
We see that the assumption \((3,4,5)\) is the most likely one, with a posterior probability of \(20.2\%\). All other permutations are far less likely.
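As a quick consistency check (assuming, as the printout above suggests, that get_probabilities_from_gips() returns a vector named by permutation), the ratio of the posterior probabilities of \((3,4,5)\) and \(()\) reproduces the "19.055 times more likely" figure reported earlier:
probs <- get_probabilities_from_gips(g_map)
unname(probs["(3,4,5)"] / probs["()"]) # approximately 19.055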
Remember that the n0 of the chosen permutation could still be too big for your data. In this example, the assumptions with transpositions (like \((3,5)\)) have n0 \(= 4\), which would be insufficient for estimating the covariance matrix correctly, since estimating the mean leaves us with effectively 3 observations. The assumption \((3,4,5)\) is just right:
S_projected <- project_matrix(S, g_map[[1]])
S_projected
#> [,1] [,2] [,3] [,4] [,5]
#> [1,] 9.486870 4.2433055 1.5435370 1.5435370 1.5435370
#> [2,] 4.243306 4.1408570 -0.3208554 -0.3208554 -0.3208554
#> [3,] 1.543537 -0.3208554 0.8454335 0.7183971 0.7183971
#> [4,] 1.543537 -0.3208554 0.7183971 0.8454335 0.7183971
#> [5,] 1.543537 -0.3208554 0.7183971 0.7183971 0.8454335
sum(eigen(S_projected)$values > 0.00000001)
#> [1] 5
Now, the estimated covariance matrix is of full rank (5).
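As with the toy matrix at the beginning, one can verify that the projected matrix is indeed invariant under the chosen permutation; a small sketch (perm below encodes the cycle \((3,4,5)\), i.e. \(3\mapsto4\), \(4\mapsto5\), \(5\mapsto3\)):
perm <- c(1, 2, 4, 5, 3) # image of each index under (3,4,5)
all.equal(S_projected[perm, perm], S_projected) # TRUE (up to numerical tolerance)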
Practical example
Let’s examine the thickness (thick), height, and breadth measurements of 12 books:
library(gips)
Z <- DAAG::oddbooks[,c(1,2,3)]
number_of_observations <- nrow(Z) # 12
p <- ncol(Z) # 3
S <- cov(Z)
S
#> thick height breadth
#> thick 72.69697 -40.33485 -31.74242
#> height -40.33485 25.36992 20.58576
#> breadth -31.74242 20.58576 17.18424
g <- gips(S, number_of_observations, D_matrix=diag(p)) # the default D_matrix
plot(g, type = "heatmap")
We can see similarities between columns 2 and 3, which represent the books’ height and breadth. In particular, the covariance of columns 1 and 2 is very similar to that of columns 1 and 3, and the variance of column 2 is similar to the variance of column 3. This is not surprising, given the interpretation of the data.
g_map <- find_MAP(g, optimizer = "brute_force",
return_probabilities = TRUE, save_all_perms = TRUE)
#> ================================================================================
#> ================================================================================
g_map
#> The permutation ()
#> - was found after 6 log_posteriori calculations
#> - is 1 times more likely than the starting, () permutation.
get_probabilities_from_gips(g_map)
#> () (2,3) (1,2)
#> 0.917699644399123216 0.082300333638115772 0.000000000064309861
#> (1,2,3) (1,3)
#> 0.000000021892918704 0.000000000005532453
We see that the search was too restrictive and did not find the expected permutation. We will weaken the restriction by changing the D_matrix parameter.
D_coef <- 0.05
g <- gips(S, number_of_observations, D_matrix = D_coef*diag(p))
g_map <- find_MAP(g, optimizer = "brute_force",
return_probabilities = TRUE, save_all_perms = TRUE)
#> ================================================================================
#> ================================================================================
g_map
#> The permutation (2,3)
#> - was found after 6 log_posteriori calculations
#> - is 3.58 times more likely than the starting, () permutation.
get_probabilities_from_gips(g_map)
#> () (2,3) (1,2) (1,2,3)
#> 0.21834865211409399 0.78161461578574387 0.00000000027813589 0.00003673179839076
#> (1,3)
#> 0.00000000002363545
find_MAP() found the symmetry represented by the permutation (2,3). The result depends on two input parameters, delta and D_matrix. By default, they are set to 3 and diag(p), respectively.
The method is not scale-invariant, and therefore we recommend running gips for different values of D_matrix (typically of the form D_coef * diag(p), where D_coef \(\in \mathbb{R}^+\)).
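For example, a sensitivity check along these lines (a sketch reusing only the calls shown above; the particular D_coef values are arbitrary) compares the permutations found for several prior scales:
for (D_coef in c(0.01, 0.05, 0.1, 1)) {
  g_tmp <- gips(S, number_of_observations, D_matrix = D_coef * diag(p))
  print(find_MAP(g_tmp, optimizer = "brute_force"))
}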
Further reading
- To learn more about the available optimizers in find_MAP() and how to use them, see vignette("Optimizers", package="gips") or its pkgdown page.
- To learn more about the math and theory behind the gips package, see vignette("Theory", package="gips") or its pkgdown page.