---
title: "An Introduction to mvnfast"
author: "Matteo Fasiolo"
date: "2017-12-23"
vignette: >
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteIndexEntry{mvnfast_vignette}
---

The `mvnfast` R package provides computationally efficient tools related to the multivariate normal and Student's t distributions. The tools are generally faster than those provided by other packages, thanks to the use of C++ code through the `Rcpp` and `RcppArmadillo` packages and to parallelization through the `OpenMP` API. The most important functions are:

`rmvn()`
:   simulates multivariate normal random vectors.

`rmvt()`
:   simulates multivariate Student's t random vectors.

`dmvn()`
:   evaluates the probability density function of a multivariate normal distribution.

`dmvt()`
:   evaluates the probability density function of a multivariate Student's t distribution.

`maha()`
:   evaluates squared Mahalanobis distances.

In the following sections we will benchmark each function against equivalent functions provided by other packages, while in the final section we provide an example application.
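Before turning to the benchmarks, here is a minimal usage sketch of the main functions on a small bivariate example (this snippet is illustrative and is not part of the original benchmarks):

```
library("mvnfast")
# Bivariate mean vector and covariance matrix
mu <- c(0, 1)
sigma <- matrix(c(2, 1, 1, 2), 2, 2)
X <- rmvn(100, mu = mu, sigma = sigma)  # 100 draws, one per row
head(dmvn(X, mu = mu, sigma = sigma))   # normal density at each draw
head(maha(X, mu = mu, sigma = sigma))   # squared Mahalanobis distances
```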

Simulating multivariate normal random variables is an essential step in many Monte Carlo algorithms (such as MCMC or particle filters), hence this operation has to be as fast as possible. Here we compare the `rmvn` function with the equivalent functions `rmvnorm` (from the `mvtnorm` package) and `mvrnorm` (from the `MASS` package). In particular, we simulate \(10^4\) twenty-dimensional random vectors:

```
library("microbenchmark")
library("mvtnorm")
library("mvnfast")
library("MASS")
# Turn off BLAS parallelism, so that it does not interfere with the timings
library("RhpcBLASctl")
blas_set_num_threads(1)
```

```
N <- 10000
d <- 20
# Creating mean and covariance matrix
mu <- 1:d
tmp <- matrix(rnorm(d^2), d, d)
mcov <- tcrossprod(tmp, tmp)

microbenchmark(rmvn(N, mu, mcov, ncores = 2),
               rmvn(N, mu, mcov),
               rmvnorm(N, mu, mcov),
               mvrnorm(N, mu, mcov))
```

```
## Unit: milliseconds
##                           expr       min        lq      mean    median        uq      max neval cld
##  rmvn(N, mu, mcov, ncores = 2)  2.869343  2.951836  3.964180  3.009329  3.842818 38.13823   100  a
##              rmvn(N, mu, mcov)  5.116576  5.198669  7.404826  5.304450  6.097371 50.07633   100  a
##           rmvnorm(N, mu, mcov) 15.083198 16.123729 22.379661 16.657447 17.366874 51.74506   100   b
##           mvrnorm(N, mu, mcov) 14.724016 15.617562 19.670027 15.859236 16.521721 51.22644   100   b
```

In this example `rmvn` cuts the computational time, relative to the alternatives, even when a single core is used. This gain is attributable to the use of C++ code and of efficient numerical algorithms to simulate the random variables. Parallelizing the computation on two cores gives a further appreciable speed-up. To be fair, it is necessary to point out that `rmvnorm` and `mvrnorm` perform many more safety checks on the user's input than `rmvn`. This is true also for the functions described in the next sections.

Notice that this function does not use one of the Random Number Generators (RNGs) provided by R, but one of the parallel cryptographic RNGs described by Salmon et al. (2011). It is important to point out that this RNG can safely be used in parallel, without risk of collisions between parallel sequences of random numbers, as detailed in the above reference.
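A quick check, not in the original benchmarks: the parallel RNG appears to be seeded from R's RNG state (the mean-shift example below relies on `set.seed()` before calling `rmvn`), so, under that assumption, draws remain reproducible in the usual way:

```
# Reproducibility sketch: assumes the parallel RNG is seeded from R's RNG
set.seed(414)
A1 <- rmvn(N, mu, mcov, ncores = 2)
set.seed(414)
A2 <- rmvn(N, mu, mcov, ncores = 2)
all.equal(A1, A2)  # should be TRUE
```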

We get similar performance gains when we simulate multivariate Student's t random variables:

```
# Here we have a conflict between namespaces
microbenchmark(mvnfast::rmvt(N, mu, mcov, df = 3, ncores = 2),
               mvnfast::rmvt(N, mu, mcov, df = 3),
               mvtnorm::rmvt(N, delta = mu, sigma = mcov, df = 3))
```

```
## Unit: milliseconds
##                                                 expr       min        lq      mean    median        uq       max neval cld
##      mvnfast::rmvt(N, mu, mcov, df = 3, ncores = 2)  5.754565  5.839437  9.171869  5.914170  6.932162  85.33542   100  a
##                  mvnfast::rmvt(N, mu, mcov, df = 3)  8.020242  8.101024  9.369743  8.214005  9.250888  79.09919   100  a
##  mvtnorm::rmvt(N, delta = mu, sigma = mcov, df = 3) 18.270408 19.520069 28.326863 19.867925 20.914040 101.13255   100   b
```

When `d` and `N` are large, and `rmvn` or `rmvt` are called several times with the same arguments, it makes sense to create the matrix used to store the simulated random variables upfront. This can be done as follows:

```
A <- matrix(nrow = N, ncol = d)
class(A) <- "numeric" # This is important. We need the elements of A to be of class "numeric".
rmvn(N, mu, mcov, A = A)
```

Notice that here `rmvn` returns `NULL`, not the simulated random vectors! These can be found in the matrix provided by the user:

```
A[1:2, 1:5]
```

```
## [,1] [,2] [,3] [,4] [,5]
## [1,] -2.513917 1.155618 4.9239162 8.161547 -4.154014
## [2,] 3.508160 -2.127151 0.9363504 5.214487 15.911238
```

Pre-creating the matrix of random variables saves some more time:

```
microbenchmark(rmvn(N, mu, mcov, ncores = 2, A = A),
               rmvn(N, mu, mcov, ncores = 2),
               times = 200)
```

```
## Unit: milliseconds
##                                  expr      min       lq     mean   median       uq       max neval cld
##  rmvn(N, mu, mcov, ncores = 2, A = A) 2.613869 2.628497 2.648726 2.644245 2.660891  2.751751   200  a
##         rmvn(N, mu, mcov, ncores = 2) 2.856497 2.886709 3.927323 2.905606 3.181075 75.669056   200   b
```

Don't look at the median time here; the mean is much more affected by memory re-allocation, which is exactly what pre-allocating `A` avoids.

Here we compare the `dmvn` function, which evaluates the multivariate normal density, with the equivalent function `dmvnorm` (from the `mvtnorm` package). In particular, we evaluate the log-density of \(10^4\) twenty-dimensional random vectors:

```
# Generating random vectors
N <- 10000
d <- 20
mu <- 1:d
tmp <- matrix(rnorm(d^2), d, d)
mcov <- tcrossprod(tmp, tmp)
X <- rmvn(N, mu, mcov)

microbenchmark(dmvn(X, mu, mcov, ncores = 2, log = T),
               dmvn(X, mu, mcov, log = T),
               dmvnorm(X, mu, mcov, log = T), times = 500)
```

```
## Unit: milliseconds
##                                    expr      min       lq     mean   median       uq       max neval cld
##  dmvn(X, mu, mcov, ncores = 2, log = T) 1.553744 1.602327 1.681106 1.628550 1.778469  2.952421   500 a
##              dmvn(X, mu, mcov, log = T) 2.664310 2.765360 2.847991 2.822249 2.917674  3.900135   500  b
##           dmvnorm(X, mu, mcov, log = T) 2.400358 2.450243 4.896150 2.557461 3.602018 75.144144   500   c
```
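As a quick sanity check, not part of the original benchmarks, we can verify that the two implementations agree up to numerical error:

```
# The log-densities should match up to floating-point tolerance
all.equal(dmvn(X, mu, mcov, log = TRUE),
          dmvnorm(X, mu, mcov, log = TRUE))
```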

Again, we get some speed-up using C++ code and some more from the parallelization. We get similar results if we use a multivariate Student's t density:

```
# We have a namespace conflict
microbenchmark(mvnfast::dmvt(X, mu, mcov, df = 4, ncores = 2, log = T),
               mvnfast::dmvt(X, mu, mcov, df = 4, log = T),
               mvtnorm::dmvt(X, delta = mu, sigma = mcov, df = 4, log = T), times = 500)
```

```
## Unit: milliseconds
##                                                          expr      min       lq     mean   median       uq       max neval cld
##      mvnfast::dmvt(X, mu, mcov, df = 4, ncores = 2, log = T) 1.735921 1.805730 1.891945 1.855004 2.007636  3.115382   500 a
##                   mvnfast::dmvt(X, mu, mcov, df = 4, log = T) 2.880508 3.028681 3.119221 3.091241 3.203544  4.911419   500  b
##  mvtnorm::dmvt(X, delta = mu, sigma = mcov, df = 4, log = T) 2.641131 2.683805 4.375662 2.764473 3.974187 76.237174   500   c
```

Finally, we compare the `maha` function, which evaluates squared Mahalanobis distances, with the equivalent function `mahalanobis` (from the `stats` package). Also in this case we use \(10^4\) twenty-dimensional random vectors:

```
# Generating random vectors
N <- 10000
d <- 20
mu <- 1:d
tmp <- matrix(rnorm(d^2), d, d)
mcov <- tcrossprod(tmp, tmp)
X <- rmvn(N, mu, mcov)

microbenchmark(maha(X, mu, mcov, ncores = 2),
               maha(X, mu, mcov),
               mahalanobis(X, mu, mcov))
```

```
## Unit: milliseconds
##                           expr      min       lq     mean   median       uq       max neval cld
##  maha(X, mu, mcov, ncores = 2) 1.445606 1.480913 1.546898 1.500379 1.657132  1.726759   100  a
##              maha(X, mu, mcov) 2.564499 2.646450 2.740035 2.701891 2.819453  3.673934   100  a
##       mahalanobis(X, mu, mcov) 2.691467 2.808553 4.817008 3.406626 3.953476 75.355109   100   b
```

The acceleration is similar to that obtained in the previous sections.
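Both functions return squared distances, so their outputs can be compared directly. A quick check, not in the original benchmarks:

```
# maha() and stats::mahalanobis() should agree up to numerical error
all.equal(maha(X, mu, mcov), mahalanobis(X, mu, mcov))
```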

As an example application of the `dmvn` function, we implemented the mean-shift mode-seeking algorithm. This procedure can be used to find the modes (maxima) of a kernel density estimate, and it can be used to set up clustering algorithms. Here we simulate \(10^4\) `d`-dimensional random vectors (here \(d = 2\)) from a mixture of two normal distributions:

```
set.seed(5135)
N <- 10000
d <- 2
mu1 <- c(0, 0); mu2 <- c(2, 3)
Cov1 <- matrix(c(1, 0, 0, 2), 2, 2)
Cov2 <- matrix(c(1, -0.9, -0.9, 1), 2, 2)
bin <- rbinom(N, 1, 0.5)
X <- bin * rmvn(N, mu1, Cov1) + (!bin) * rmvn(N, mu2, Cov2)
```
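Before using the package's `ms()` function, it may help to see what a single mean-shift update looks like. The following `msIter` helper is a purely illustrative sketch (it is not part of `mvnfast`): it moves a point towards the kernel-weighted average of the data, with Gaussian kernel weights computed via `dmvn`:

```
# One mean-shift update: shift x to the kernel-weighted mean of the data,
# with weights given by a Gaussian kernel centred at the current position
msIter <- function(x, X, H) {
  w <- dmvn(X, mu = x, sigma = H)  # kernel weight of each observation
  colSums(X * w) / sum(w)          # weighted average of the data
}

x <- c(-2, 2)                      # an arbitrary starting point
for (ii in 1:100) x <- msIter(x, X, H = 0.05 * cov(X))
x                                  # should lie near one of the two modes
```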

Finally, we plot the resulting probability density and, starting from 10 initial points, we use mean-shift to converge to the nearest mode:

```
# Plotting
np <- 100
xvals <- seq(min(X[ , 1]), max(X[ , 1]), length.out = np)
yvals <- seq(min(X[ , 2]), max(X[ , 2]), length.out = np)
theGrid <- expand.grid(xvals, yvals)
theGrid <- as.matrix(theGrid)
dens <- dmixn(theGrid,
              mu = rbind(mu1, mu2),
              sigma = list(Cov1, Cov2),
              w = rep(1, 2)/2)
plot(X[ , 1], X[ , 2], pch = '.', lwd = 0.01, col = 3)
contour(x = xvals, y = yvals, z = matrix(dens, np, np),
        levels = c(0.002, 0.01, 0.02, 0.04, 0.08, 0.15), add = TRUE, lwd = 2)

# Mean-shift
library(plyr)
inits <- matrix(c(-2, 2, 0, 3, 4, 3, 2, 5, 2, -3, 2, 2, 0, 2, 3, 0, 0, -4, -2, 6),
                10, 2, byrow = TRUE)
traj <- alply(inits,
              1,
              function(input)
                ms(X = X,
                   init = input,
                   H = 0.05 * cov(X),
                   ncores = 2,
                   store = TRUE)$traj
              )
invisible( lapply(traj,
                  function(input){
                    lines(input[ , 1], input[ , 2], col = 2, lwd = 1.5)
                    # Mark the final position of each trajectory
                    points(tail(input[ , 1], 1), tail(input[ , 2], 1))
                  }))
```

As we can see from the plot, each initial point leads to one of two points that are very close to the true modes. Notice that the bandwidth for the kernel density estimator was chosen by trial and error, and less arbitrary choices are certainly possible in real applications.

Dirk Eddelbuettel and Romain Francois (2011). Rcpp: Seamless R and C++ Integration. Journal of Statistical Software, 40(8), 1-18. URL http://www.jstatsoft.org/v40/i08/.

Dirk Eddelbuettel (2013). Seamless R and C++ Integration with Rcpp. Springer, New York. ISBN 978-1-4614-6867-7.

Dirk Eddelbuettel, Conrad Sanderson (2014). RcppArmadillo: Accelerating R with high-performance C++ linear algebra. Computational Statistics and Data Analysis, Volume 71, March 2014, pages 1054-1063. URL http://dx.doi.org/10.1016/j.csda.2013.02.005

John K. Salmon, Mark A. Moraes, Ron O. Dror, and David E. Shaw (2011). Parallel Random Numbers: As Easy as 1, 2, 3. D. E. Shaw Research, New York, NY 10036, USA.