Getting Started with NNS: Correlation and Dependence

Fred Viole

require(NNS)
require(knitr)
require(rgl)
require(data.table)

Correlation and Dependence

The limitations of linear correlation are well known. Correlation is often used when dependence is the intended measure for defining the relationship between variables. NNS dependence (NNS.dep) is a signal-to-noise measure robust to nonlinear signals.

Below are some examples comparing NNS correlation (NNS.cor) and NNS.dep with the standard Pearson correlation coefficient cor.

Linear Equivalence

Note the fact that all observations occupy the co-partial moment quadrants.
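The code chunk that produced the output below was not preserved in this rendering. A minimal sketch that reproduces a perfect linear relationship follows; the exact vectors used by the author are assumptions here, but any noiseless linear pair behaves the same way.

```r
library(NNS)

## Hypothetical linear pair (the original chunk's vectors are unknown)
x <- seq(-5, 5, .1)
y <- 2 * x

cor(x, y)                        # Pearson correlation: exactly 1 for a linear pair
NNS.dep(x, y, print.map = TRUE)  # plots the partial moment quadrants
```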

## [1] 1

## $Correlation
## [1] 1
## 
## $Dependence
## [1] 1

Nonlinear Relationship

Note the fact that all observations occupy the co-partial moment quadrants.
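Again the chunk itself was omitted from this rendering. A sketch with a monotonic nonlinear pair is shown below; the specific cubic form is an assumption, chosen because a monotone transform keeps all observations in the co-partial moment quadrants while degrading Pearson's linear measure.

```r
library(NNS)

## Hypothetical monotonic nonlinear pair (the original chunk's vectors are unknown)
x <- seq(0, 3, .01)
y <- x^3

cor(x, y)                        # Pearson understates the monotonic association
NNS.dep(x, y, print.map = TRUE)  # NNS recovers the full degree of dependence
```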

## [1] 0.6610183

## $Correlation
## [1] 0.9192362
## 
## $Dependence
## [1] 0.9192362

Dependence

Note the fact that all observations occupy only co- or divergent partial moment quadrants for a given subquadrant.
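The generating chunk is missing here as well. One hedged way to produce a relationship with near-zero correlation but high dependence is a noisy circle, sketched below; the circle construction is an assumption, not necessarily the author's original example.

```r
library(NNS)

## Hypothetical non-monotonic relationship: points on a noisy circle,
## where Pearson correlation is ~0 but dependence is high
set.seed(123)
theta <- runif(5000, 0, 2 * pi)
x <- cos(theta) + rnorm(5000, sd = .05)
y <- sin(theta) + rnorm(5000, sd = .05)

NNS.dep(x, y, print.map = TRUE)
```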

## $Correlation
## [1] 0.0006796968
## 
## $Dependence
## [1] 0.9401103

Multi-Dimensional Dependence

These partial moment insights permit us to extend the analysis to multivariate instances. This level of analysis is simply impossible with Pearson or other rank-based correlation methods, which are restricted to pairwise cases.

set.seed(123)
x <- rnorm(1000); y <- rnorm(1000); z <- rnorm(1000)
NNS.dep.hd(cbind(x, y, z), plot = TRUE, independence.overlay = TRUE)
## $actual.observations
## [1] 267
## 
## $independent.null
## [1] 250
## 
## $Dependence
## [1] 0.02266667

p-values for NNS.dep()

p-values and confidence intervals can be obtained by sampling random permutations of \(y \rightarrow y_p\) and running NNS.dep(x, \(y_p\)) on each, comparing the observed measures against a null hypothesis of zero correlation, i.e., independence between (x, y).

## p-values for [NNS.dep]
x <- seq(-5, 5, .1); y <- x^2 + rnorm(length(x))

nns_cor_dep <- NNS.dep(x, y, print.map = TRUE)

nns_cor_dep
## $Correlation
## [1] -0.009602208
## 
## $Dependence
## [1] 0.9934588
## Create permutations of y
y_p <- replicate(100, sample.int(length(y)))

## Generate new correlation and dependence measures on each new permutation of y
nns.mc <- apply(y_p, 2, function(g) NNS.dep(x, y[g]))

## Store results
cors <- unlist(lapply(nns.mc, "[[", 1))
deps <- unlist(lapply(nns.mc, "[[", 2))

## View results
hist(cors)
abline(v = LPM.VaR(.975, 0, cors), col = 'red')
abline(v = UPM.VaR(.975, 0, cors), col = 'red')

hist(deps)
abline(v = LPM.VaR(.975, 0, deps), col = 'red')
abline(v = UPM.VaR(.975, 0, deps), col = 'red')

## Left tailed correlation p-value
cor_p_value <- LPM(0, nns_cor_dep$Correlation, cors)
cor_p_value
## [1] 0.43
## Right tailed correlation p-value
cor_p_value <- UPM(0, nns_cor_dep$Correlation, cors)
cor_p_value
## [1] 0.57
## Confidence Intervals
## For the 95th percentile VaR (both tails), see [LPM.VaR] and [UPM.VaR]
## Lower CI
LPM.VaR(.975, 0, cors)
## [1] -0.2369068
## Upper CI
UPM.VaR(.975, 0, cors)
## [1] 0.2137402
## Left tailed dependence p-value
dep_p_value <- LPM(0, nns_cor_dep$Dependence, deps)
dep_p_value
## [1] 1
## Right tailed dependence p-value
dep_p_value <- UPM(0, nns_cor_dep$Dependence, deps)
dep_p_value
## [1] 0
## Confidence Intervals
## For the 95th percentile VaR (both tails), see [LPM.VaR] and [UPM.VaR]
## Lower CI
LPM.VaR(.975, 0, deps)
## [1] 0.07386584
## Upper CI
UPM.VaR(.975, 0, deps)
## [1] 0.39517

References

For the motivated reader, detailed arguments and proofs are provided within the following: