expected_value_and_variance_properties
| $Var(c)$ | $0$ |
| $Var(aX + b)$ | $a^{2}Var(X) \;\;\; $ see $\ref{var.theorem.2}$ and $\ref{var.theorem.3}$ |
| $Var(aX - bY)$ | $a^{2}Var(X) + b^{2}Var(Y)$ (for independent $X$ and $Y$) $\;\;\;$ see $\ref{var.theorem.2}$ and $\ref{var.theorem.52}$ |
| $Var(X1 + X2 + X3)$ | $Var(X) + Var(X) + Var(X) = 3 Var(X) \;\;\; $ ((X1, X2, and X3 are three independent sets that share the same statistics (e.g., Xbar = 0, sd = 1). Each set therefore has variance 1, so the variance of their sum is 1 + 1 + 1 = 3, i.e., 3 Var(X).)) |
| $Var(X1 + X1 + X1)$ | $Var(3X) = 3^2 Var(X) = 9 Var(X) $ |

For independent $X$ and $Y$:
\begin{align*}
Var(aX - bY) & = Var(aX + (-b)Y) \\
& = Var(aX) + Var(-bY) \\
& = a^{2}Var(X) + b^{2}Var(Y)
\end{align*}
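A minimal numerical sketch of this rule (the sample names x and y, the coefficients, and the seed below are only illustrative; the two samples are drawn independently):

<code r>
# Var(aX - bY) vs a^2 Var(X) + b^2 Var(Y) for independent samples
set.seed(7)
n <- 100000
x <- rnorm(n, mean = 0, sd = 1)    # Var(X) is about 1
y <- rnorm(n, mean = 0, sd = 2)    # Var(Y) is about 4
a <- 3
b <- 2
var(a*x - b*y)                 # close to ...
a^2 * var(x) + b^2 * var(y)    # ... this value (equal up to sampling noise)
</code>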
see also [[:why n-1]]
\begin{align*}
Var[aX] & = E[(aX)^2] - (E[aX])^2 \\
& = a^2 E[X^2] - (a E[X])^2 \\
& = a^2 E[X^2] - a^2 (E[X])^2 \\
& = a^2 (E[X^2] - (E[X])^2) \\
& = a^2 Var[X] \label{var.theorem.2} \tag{variance theorem 2}
\end{align*}
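A quick check of $\ref{var.theorem.2}$ with R's sample variance (a sketch; the names x and a are illustrative):

<code r>
# var(a*x) equals a^2 * var(x); exact for the sample variance,
# up to floating-point rounding
set.seed(2)
x <- rnorm(1000)
a <- 3
var(a * x)
a^2 * var(x)
</code>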
====== Theorem 3: Why Var[X+c] = Var[X] ======
Usually the sets X1 and X2 denote two independent sets that share the same statistics (the same mean and variance as X), so
\begin{align*}
Var(X1 + X2) & = Var(X1) + Var(X2) \\
& \;\;\;\;\; \text{because X1 and X2 have} \\
& \;\;\;\;\; \text{X's characteristics (mean of X} \\
& \;\;\;\;\; \text{and variance of X)} \\
& = Var(X) + Var(X) \\
& = 2 Var(X)
\end{align*}
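A minimal sketch of this case (s1 and s2 below are illustrative names for two independent samples from the same distribution):

<code r>
# var(s1 + s2) is close to var(s1) + var(s2), i.e., about 2 * Var(X)
set.seed(3)
s1 <- rnorm(100000, mean = 0, sd = 1)
s2 <- rnorm(100000, mean = 0, sd = 1)
var(s1 + s2)         # close to ...
var(s1) + var(s2)    # ... this value (about 2)
</code>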
In contrast, when the same set is added to itself, the covariance term does not vanish:
\begin{align*}
Var(X1 + X1) & = Var(X1) + 2 Cov(X1, X1) + Var(X1) \\
& \;\;\;\;\; \text{according to the below } \ref{cov.xx}, \\
& \;\;\;\;\; Cov(X,X) = Var(X) \\
& = Var(X) + 2 Var(X) + Var(X) \\
& = 4 Var(X)
\end{align*}
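And a matching sketch for this dependent case (z is an illustrative sample name):

<code r>
# adding a sample to itself doubles every deviation,
# so var(z + z) equals 4 * var(z), since Cov(X,X) = Var(X)
set.seed(4)
z <- rnorm(1000)
var(z + z)
4 * var(z)
</code>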
rnorm2 <- function(n, mean, sd) {
  # helper (reconstructed from the truncated line): rescale a normal sample
  # so that it has exactly the requested mean and sd
  mean + sd * scale(rnorm(n))
}
m <- 50
v <- 4
n <- 100000
set.seed(1)
x1 <- rnorm2(n, m, sqrt(v))
x2 <- rnorm2(n, m, sqrt(v))
x3 <- rnorm2(n, m, sqrt(v))

m.x1 <- round(mean(x1), 3)
m.x2 <- round(mean(x2), 3)
m.x3 <- round(mean(x3), 3)
m.x1
m.x2
m.x3

# E(aX + b) = a E(X) + b: check with a = 3, b = 5
y1 <- 3 * x1 + 5
exp.y1 <- mean(y1)
exp.3xplus5 <- 3 * mean(x1) + 5
exp.y1
exp.3xplus5

v.x1 <- var(x1)
v.x2 <- var(x2)
v.x3 <- var(x3)
v.x1
v.x2
v.x3

# Var(aX + b) = a^2 Var(X): the constant b drops out
var(x1)
var((3 * x1) + 5)
3^2 * var(x1)
v.12 <- var(x1 + x2)   # Var(X1 + X2) with X1, X2 independent