# Partial Moments

Why is it necessary to parse the variance with partial moments? The additional information generated from partial moments permits a level of analysis simply not possible with traditional summary statistics.

Below are some basic equivalences demonstrating partial moments' role as the elements of variance.

## Mean

set.seed(123) ; x = rnorm(100) ; y = rnorm(100)

mean(x)
## [1] 0.09040591
UPM(1, 0, x) - LPM(1, 0, x)
## [1] 0.09040591
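The identity above can be checked from the definitions alone. Below is a minimal NumPy sketch; the `lpm`/`upm` helpers are illustrative stand-ins for the NNS R functions, assuming the standard definitions $LPM(n,t,x)=\frac{1}{N}\sum_i \max(t-x_i,0)^n$ and $UPM(n,t,x)=\frac{1}{N}\sum_i \max(x_i-t,0)^n$. Python's RNG differs from R's `set.seed(123)`, so the numbers will not match the R output, but the identity holds regardless.

```python
import numpy as np

def lpm(degree, target, x):
    # lower partial moment: mean of (target - x)^degree over x below target
    return np.sum(np.maximum(target - x, 0.0) ** degree) / x.size

def upm(degree, target, x):
    # upper partial moment: mean of (x - target)^degree over x above target
    return np.sum(np.maximum(x - target, 0.0) ** degree) / x.size

rng = np.random.default_rng(123)
x = rng.standard_normal(100)

# Degree-1 partial moments about 0 split the mean into its positive
# and negative contributions, so their difference recovers it exactly.
print(np.isclose(upm(1, 0.0, x) - lpm(1, 0.0, x), x.mean()))
```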

## Variance

var(x)
## [1] 0.8332328
# Population variance (n denominator):
UPM(2, mean(x), x) + LPM(2, mean(x), x)
## [1] 0.8249005
# Sample variance (n - 1 denominator), matching var():
(UPM(2, mean(x), x) + LPM(2, mean(x), x)) * (length(x) / (length(x) - 1))
## [1] 0.8332328
# Variance is also the covariance of a variable with itself:
(Co.LPM(1, 1, x, x, mean(x), mean(x)) + Co.UPM(1, 1, x, x, mean(x), mean(x)) - D.LPM(1, 1, x, x, mean(x), mean(x)) - D.UPM(1, 1, x, x, mean(x), mean(x))) * (length(x) / (length(x) - 1))
## [1] 0.8332328
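The same decomposition can be verified outside of R. A NumPy sketch of the arithmetic above, assuming the standard definitions of LPM/UPM as mean powered shortfall/excess about the target (numbers will differ from the seeded R output):

```python
import numpy as np

def lpm(degree, target, x):
    # mean of (target - x)^degree over observations below the target
    return np.sum(np.maximum(target - x, 0.0) ** degree) / x.size

def upm(degree, target, x):
    # mean of (x - target)^degree over observations above the target
    return np.sum(np.maximum(x - target, 0.0) ** degree) / x.size

rng = np.random.default_rng(123)
x = rng.standard_normal(100)
mu, n = x.mean(), x.size

pop_var = upm(2, mu, x) + lpm(2, mu, x)   # n-denominator variance
sample_var = pop_var * n / (n - 1)        # Bessel adjustment, as in the text
print(np.isclose(sample_var, x.var(ddof=1)))
```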

## Standard Deviation

sd(x)
## [1] 0.9128159
((UPM(2, mean(x), x) + LPM(2, mean(x), x)) * (length(x) / (length(x) - 1))) ^ .5
## [1] 0.9128159

## Covariance

cov(x, y)
## [1] -0.04372107
(Co.LPM(1, 1, x, y, mean(x), mean(y)) + Co.UPM(1, 1, x, y, mean(x), mean(y)) - D.LPM(1, 1, x, y, mean(x), mean(y)) - D.UPM(1, 1, x, y, mean(x), mean(y))) * (length(x) / (length(x) - 1))
## [1] -0.04372107
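The four-quadrant covariance identity can be reproduced from first principles. In this NumPy sketch the quadrant assignments (`dlpm` as x above / y below, `dupm` as x below / y above) are chosen to satisfy the identity; NNS's exact argument conventions for the divergent moments may differ:

```python
import numpy as np

def quadrant(x, y, below_x, below_y, tx, ty):
    # degree-1 co-partial moment restricted to one quadrant about (tx, ty)
    dx = np.maximum(tx - x, 0.0) if below_x else np.maximum(x - tx, 0.0)
    dy = np.maximum(ty - y, 0.0) if below_y else np.maximum(y - ty, 0.0)
    return np.sum(dx * dy) / x.size

rng = np.random.default_rng(123)
x, y = rng.standard_normal(100), rng.standard_normal(100)
tx, ty = x.mean(), y.mean()
n = x.size

clpm = quadrant(x, y, True, True, tx, ty)    # both below their means
cupm = quadrant(x, y, False, False, tx, ty)  # both above
dlpm = quadrant(x, y, False, True, tx, ty)   # x above, y below
dupm = quadrant(x, y, True, False, tx, ty)   # x below, y above

# Concordant quadrants add, discordant quadrants subtract:
print(np.isclose((clpm + cupm - dlpm - dupm) * n / (n - 1), np.cov(x, y)[0, 1]))
```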

## Covariance Elements and Covariance Matrix

PM.matrix(LPM.degree = 1, UPM.degree = 1, target = 'mean', variable = cbind(x, y), pop.adj = TRUE)
## $cupm
##           x         y
## x 0.4299250 0.1033601
## y 0.1033601 0.5411626
##
## $dupm
##           x         y
## x 0.0000000 0.1560924
## y 0.1469182 0.0000000
##
## $dlpm
##           x         y
## x 0.0000000 0.1469182
## y 0.1560924 0.0000000
##
## $clpm
##           x         y
## x 0.4033078 0.1559295
## y 0.1559295 0.3939005
##
## $cov.matrix
##             x           y
## x  0.83323283 -0.04372107
## y -0.04372107  0.93506310
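The `cov.matrix` element is assembled column-pair by column-pair from the four quadrant matrices, exactly as in the covariance identity above. A NumPy sketch of that assembly for a two-column matrix (the quadrant naming again follows the identity, not necessarily NNS's internals):

```python
import numpy as np

def quadrant(x, y, below_x, below_y, tx, ty):
    # degree-1 co-partial moment restricted to one quadrant about (tx, ty)
    dx = np.maximum(tx - x, 0.0) if below_x else np.maximum(x - tx, 0.0)
    dy = np.maximum(ty - y, 0.0) if below_y else np.maximum(y - ty, 0.0)
    return np.sum(dx * dy) / x.size

rng = np.random.default_rng(123)
data = rng.standard_normal((100, 2))
n, k = data.shape
mu = data.mean(axis=0)

cov = np.empty((k, k))
for i in range(k):
    for j in range(k):
        a, b, ta, tb = data[:, i], data[:, j], mu[i], mu[j]
        cov[i, j] = (quadrant(a, b, True, True, ta, tb)      # clpm
                     + quadrant(a, b, False, False, ta, tb)  # cupm
                     - quadrant(a, b, False, True, ta, tb)   # dlpm
                     - quadrant(a, b, True, False, ta, tb)   # dupm
                     ) * n / (n - 1)

print(np.allclose(cov, np.cov(data, rowvar=False)))
```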

## Pearson Correlation

cor(x, y)
## [1] -0.04953215
cov.xy = (Co.LPM(1, 1, x, y, mean(x), mean(y)) + Co.UPM(1, 1, x, y, mean(x), mean(y)) - D.LPM(1, 1, x, y, mean(x), mean(y)) - D.UPM(1, 1, x, y, mean(x), mean(y))) * (length(x) / (length(x) - 1))
sd.x = ((UPM(2, mean(x), x) + LPM(2, mean(x), x)) * (length(x) / (length(x) - 1))) ^ .5
sd.y = ((UPM(2, mean(y), y) + LPM(2, mean(y), y)) * (length(y) / (length(y) - 1))) ^ .5
cov.xy / (sd.x * sd.y)
## [1] -0.04953215

## CDFs (Discrete and Continuous)

P = ecdf(x)
P(0) ; P(1)
## [1] 0.48
## [1] 0.83
LPM(0, 0, x) ; LPM(0, 1, x)
## [1] 0.48
## [1] 0.83
# Vectorized targets:
LPM(0, c(0, 1), x)
## [1] 0.48 0.83
plot(ecdf(x))
points(sort(x), LPM(0, sort(x), x), col = "red")
legend("left", legend = c("ecdf", "LPM.CDF"), fill = c("black", "red"), border = NA, bty = "n")
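The degree-0 equivalence is easy to see: a zero-degree LPM reduces to the proportion of observations at or below the target, which is exactly the empirical CDF. A NumPy sketch (an explicit indicator is used rather than a 0th power, since `0 ** 0` evaluates to 1 and would miscount; the Python RNG differs from R's seed, so the proportions will not equal 0.48 and 0.83):

```python
import numpy as np

def lpm0(target, x):
    # degree-0 LPM: proportion of observations at or below the target
    return np.mean(x <= target)

rng = np.random.default_rng(123)
x = rng.standard_normal(100)
xs = np.sort(x)

# The empirical CDF counts the same thing by rank:
for t in (0.0, 1.0):
    ecdf_t = np.searchsorted(xs, t, side="right") / x.size
    print(np.isclose(lpm0(t, x), ecdf_t))
```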

# Joint CDF:
Co.LPM(0, 0, x, y, 0, 0)
## [1] 0.28
# Vectorized targets:
Co.LPM(0, 0, x, y, c(0, 1), c(0, 1))
## [1] 0.28 0.73
# Continuous CDF:
plot(sort(x), LPM.ratio(1, sort(x), x), type = "l", col = "blue", lwd = 3, xlab = "x")
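Assuming `LPM.ratio(1, t, x)` is the normalized ratio $LPM_1 / (LPM_1 + UPM_1)$, the continuous CDF behaves as a CDF should: 0 at the sample minimum, 1 at the maximum, nondecreasing in between. A NumPy sketch of both the degree-0 joint CDF and this ratio:

```python
import numpy as np

def lpm(degree, target, x):
    return np.sum(np.maximum(target - x, 0.0) ** degree) / x.size

def upm(degree, target, x):
    return np.sum(np.maximum(x - target, 0.0) ** degree) / x.size

rng = np.random.default_rng(123)
x, y = rng.standard_normal(100), rng.standard_normal(100)

# Degree-0 co-LPM is the joint empirical CDF: P(x <= 0, y <= 0)
joint = np.mean((x <= 0) & (y <= 0))

# Degree-1 ratio LPM1 / (LPM1 + UPM1) traces a smooth curve from 0 to 1
ts = np.linspace(x.min(), x.max(), 9)
vals = np.array([lpm(1, t, x) / (lpm(1, t, x) + upm(1, t, x)) for t in ts])
print(joint, vals.round(3))
```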

## PDFs

NNS.PDF(degree = 1, x)

##        Intervals          PDF
##  1: -2.286686366 0.0003890644
##  2: -2.241721348 0.0004040456
##  3: -2.196756329 0.0004199092
##  4: -2.151791310 0.0004367259
##  5: -2.106826292 0.0004545735
##  6: -2.061861273 0.0004735381
##  7: -2.016896254 0.0005772753
##  8: -1.971931235 0.0008239690
##  9: -1.926966217 0.0009981972
## 10: -1.882001198 0.0010426466
## 11: -1.837036179 0.0010901329
## 12: -1.792071161 0.0011409394
## 13: -1.747106142 0.0012343120
## 14: -1.702141123 0.0015481614
## 15: -1.657176105 0.0018394307
## 16: -1.612211086 0.0019554939
## 17: -1.567246067 0.0023281082
## 18: -1.522281049 0.0026980593
## 19: -1.477316030 0.0028413740
## 20: -1.432351011 0.0029964212
## 21: -1.387385993 0.0031645178
## 22: -1.342420974 0.0033471707
## 23: -1.297455955 0.0040395453
## 24: -1.252490937 0.0051709046
## 25: -1.207525918 0.0059441701
## 26: -1.162560899 0.0069053105
## 27: -1.117595880 0.0083045104
## 28: -1.072630862 0.0100784620
## 29: -1.027665843 0.0121288608
## 30: -0.982700824 0.0134592742
## 31: -0.937735806 0.0142463818
## 32: -0.892770787 0.0151046310
## 33: -0.847805768 0.0160428638
## 34: -0.802840750 0.0170713400
## 35: -0.757875731 0.0187201632
## 36: -0.712910712 0.0218207201
## 37: -0.667945694 0.0250308509
## 38: -0.622980675 0.0277794175
## 39: -0.578015656 0.0308552001
## 40: -0.533050638 0.0338661243
## 41: -0.488085619 0.0373091886
## 42: -0.443120600 0.0406145432
## 43: -0.398155581 0.0436071170
## 44: -0.353190563 0.0469758898
## 45: -0.308225544 0.0502014118
## 46: -0.263260525 0.0533737022
## 47: -0.218295507 0.0561135147
## 48: -0.173330488 0.0578116164
## 49: -0.128365469 0.0589675388
## 50: -0.083400451 0.0601306644
## 51: -0.038435432 0.0610643465
## 52:  0.006529587 0.0615105251
## 53:  0.051494605 0.0616324186
## 54:  0.096459624 0.0613707470
## 55:  0.141424643 0.0606553320
## 56:  0.186389661 0.0595789134
## 57:  0.231354680 0.0581541226
## 58:  0.276319699 0.0565093857
## 59:  0.321284718 0.0546160165
## 60:  0.366249736 0.0520835468
## 61:  0.411214755 0.0489036070
## 62:  0.456179774 0.0458203745
## 63:  0.501144792 0.0431387143
## 64:  0.546109811 0.0403293419
## 65:  0.591074830 0.0377013651
## 66:  0.636039848 0.0353697312
## 67:  0.681004867 0.0327864978
## 68:  0.725969886 0.0305357162
## 69:  0.770934904 0.0285494583
## 70:  0.815899923 0.0261478803
## 71:  0.860864942 0.0235917898
## 72:  0.905829960 0.0213413772
## 73:  0.950794979 0.0195198159
## 74:  0.995759998 0.0174875526
## 75:  1.040725017 0.0156999908
## 76:  1.085690035 0.0144169658
## 77:  1.130655054 0.0130499470
## 78:  1.175620073 0.0116479213
## 79:  1.220585091 0.0101170497
## 80:  1.265550110 0.0089872997
## 81:  1.310515129 0.0082425271
## 82:  1.355480147 0.0071881190
## 83:  1.400445166 0.0063770598
## 84:  1.445410185 0.0060450778
## 85:  1.490375203 0.0053094942
## 86:  1.535340222 0.0041889701
## 87:  1.580305241 0.0035465574
## 88:  1.625270259 0.0033628737
## 89:  1.670235278 0.0030646939
## 90:  1.715200297 0.0026551018
## 91:  1.760165315 0.0021707062
## 92:  1.805130334 0.0018155726
## 93:  1.850095353 0.0017077244
## 94:  1.895060372 0.0016301362
## 95:  1.940025390 0.0015577189
## 96:  1.984990409 0.0014780900
## 97:  2.029955428 0.0011915263
## 98:  2.074920446 0.0009296535
## 99:  2.119885465 0.0008033076
##        Intervals          PDF

## Numerical Integration

Partial moments serve as asymptotic area approximations of $$f(x)$$, akin to the familiar trapezoidal and Simpson’s rules: the more observations, the greater the accuracy.

$[UPM(1,0,f(x))-LPM(1,0,f(x))]\asymp\frac{[F(b)-F(a)]}{[b-a]}$

x = seq(0, 1, .001) ; y = x ^ 2
UPM(1, 0, y) - LPM(1, 0, y)
## [1] 0.3335

$0.3333=\frac{\int_{0}^{1} x^2 dx}{1-0}$

For the total area, not just the definite integral, simply sum the partial moments:

$[UPM(1,0,f(x))+LPM(1,0,f(x))]\asymp\left\lvert{\int_{a}^{b} f(x)dx}\right\rvert$
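The integration example can be mirrored in NumPy on the same grid; the degree-1 partial-moment difference is just the grid mean of $f(x)$, which converges to $\frac{1}{b-a}\int_a^b f(x)\,dx$:

```python
import numpy as np

# Same grid as the R example: x in [0, 1] with step 0.001
x = np.linspace(0.0, 1.0, 1001)
y = x ** 2

upm1 = np.sum(np.maximum(y, 0.0)) / y.size    # UPM(1, 0, y)
lpm1 = np.sum(np.maximum(-y, 0.0)) / y.size   # LPM(1, 0, y)

# Matches the R output above; the exact value of the integral is 1/3.
print(round(upm1 - lpm1, 4))  # -> 0.3335
```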

## Bayes’ Theorem

For example, to ascertain the probability of an increase in $$A$$ given an increase in $$B$$, set the Co.UPM(degree.x, degree.y, x, y, target.x, target.y) target parameters to target.x = 0 and target.y = 0, and set the UPM(degree, target, variable) target parameter to target = 0.

$P(A|B)=\frac{Co.UPM(0,0,A,B,0,0)}{UPM(0,0,B)}$
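At degree 0 this is simply a conditional relative frequency: the numerator counts joint occurrences and the denominator counts occurrences of the conditioning event. A NumPy sketch (strict inequalities are used for "an increase"; with continuous data the boundary choice is immaterial):

```python
import numpy as np

rng = np.random.default_rng(123)
A, B = rng.standard_normal(1000), rng.standard_normal(1000)

co_upm = np.mean((A > 0) & (B > 0))   # Co.UPM(0, 0, A, B, 0, 0)
upm_b = np.mean(B > 0)                # UPM(0, 0, B)

# Ratio of joint to marginal frequency is the conditional frequency:
print(np.isclose(co_upm / upm_b, np.mean(A[B > 0] > 0)))
```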

# References

If the user is so motivated, detailed arguments and proofs are provided within the following: