====== e.g. ======
Data set again.
<code>
datavar <- read.csv("...")
</code>
^ DATA for regression analysis ^
The table below shows how the variance (or MS) is obtained. The error column is the error made when each individual score is predicted with the mean ($\overline{Y}=8$); these errors are then squared (error<sup>2</sup>).
^ prediction for y values with $\overline{Y}$ ^
| bankaccount | ... |
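As a minimal illustration of this computation, here is an R sketch using a small made-up set of scores (not the bankaccount data above) whose mean happens to be 8:

<code>
# hypothetical scores (for illustration only); their mean is 8
y <- c(6, 7, 8, 9, 10, 8)
mean(y)                      # Y-bar = 8
error    <- y - mean(y)      # deviation of each score from the mean
error.sq <- error^2          # squared errors
SS <- sum(error.sq)          # sum of squares
MS <- SS / (length(y) - 1)   # variance (mean square), same as var(y)
MS
var(y)
</code>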
===== in R =====
<code>
mod <- lm(api00 ~ ell + acs_k3 + avg_ed + meals, data=dvar)
summary(mod)
</code>
$$ \hat{Y} = 709.6388 - 0.8434 \,\text{ell} + 3.3884 \,\text{acs\_k3} + 29.0724 \,\text{avg\_ed} - 2.9374 \,\text{meals} $$
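As a quick check of this equation, one can compare a hand computation with predict() on the fitted model. This is only a sketch: it assumes the model object mod from the code above is still in the workspace, and the values in newschool are made up for illustration.

<code>
# hypothetical predictor values for one school (illustration only)
newschool <- data.frame(ell = 30, acs_k3 = 20, avg_ed = 2.5, meals = 60)

# prediction from the fitted model
predict(mod, newdata = newschool)

# the same value computed directly from the coefficients
coef(mod)["(Intercept)"] +
  coef(mod)["ell"]    * 30 +
  coef(mod)["acs_k3"] * 20 +
  coef(mod)["avg_ed"] * 2.5 +
  coef(mod)["meals"]  * 60
</code>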
====== Why is the overall model significant while the IVs are not? ======
see https://...

<code>
RSS = 3:10                  # right shoe size
LSS = rnorm(RSS, RSS, 0.1)  # left shoe size - almost identical to RSS
cor(LSS, RSS)               # correlation is very close to 1

weights = 120 + rnorm(RSS, 10*RSS, 10)

## Fit a joint model
m = lm(weights ~ LSS + RSS)

## The overall F test has a very small p-value,
## but neither LSS nor RSS is significant on its own
summary(m)

## Fitting RSS or LSS separately gives a significant result.
summary(lm(weights ~ LSS))
</code>

In one such run (there is no set.seed call, so the exact numbers change from run to run), cor(LSS, RSS) came out as 0.9994836. In the joint model neither LSS nor RSS was significant, and the LSS coefficient was even negative (-14.162), with a residual standard error of 7.296 on 5 degrees of freedom; the model with LSS alone, by contrast, was clearly significant, with a residual standard error of 7.026 on 6 degrees of freedom. Because the two predictors are almost perfectly collinear, each one adds almost nothing beyond the other, so the model as a whole is significant while no individual IV is.

So then, how much of the explained variance is uniquely attributable to each independent variable?
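One simple way to answer this is to compare the full model's R<sup>2</sup> with the R<sup>2</sup> of a model that omits one predictor; the drop is that predictor's squared semi-partial correlation, i.e., its unique contribution. The sketch below assumes the dvar data used above and fits every model on the same complete cases.

<code>
# fit every model on the same complete cases
d <- na.omit(dvar[, c("api00", "ell", "acs_k3", "avg_ed", "meals")])

r2.full <- summary(lm(api00 ~ ell + acs_k3 + avg_ed + meals, data = d))$r.squared

# drop one predictor at a time; the fall in R-squared is that
# predictor's unique contribution (squared semi-partial correlation)
sr2 <- c(
  ell    = r2.full - summary(lm(api00 ~ acs_k3 + avg_ed + meals, data = d))$r.squared,
  acs_k3 = r2.full - summary(lm(api00 ~ ell + avg_ed + meals,    data = d))$r.squared,
  avg_ed = r2.full - summary(lm(api00 ~ ell + acs_k3 + meals,    data = d))$r.squared,
  meals  = r2.full - summary(lm(api00 ~ ell + acs_k3 + avg_ed,   data = d))$r.squared
)
sr2
</code>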
  * Enter method (all at once, as if the IVs are not related to each other)
  * Selection methods (see the R sketch after this list)
    * [[: ... ]]
    * Forward selection: among the predictors (X variables), the one most highly correlated with the dependent variable Y is entered first and the regression is computed. Because it is entered first (it has the highest correlation), it is treated as the theoretically most important factor explaining the dependent variable. Each subsequent variable is then entered with the previously entered variables already taken into account.
    * Backward elimination: all predictors are entered at once, and the variable that contributes least to explaining Y is removed one step at a time.
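A minimal sketch of these selection methods in R, assuming the same dvar data as above. Note that R's step() selects on AIC rather than on the significance tests that SPSS-style selection uses, so the chosen models can differ.

<code>
# complete cases so that models with different predictors use the same rows
d <- na.omit(dvar[, c("api00", "ell", "acs_k3", "avg_ed", "meals")])

full  <- lm(api00 ~ ell + acs_k3 + avg_ed + meals, data = d)
empty <- lm(api00 ~ 1, data = d)

# forward selection: start with no predictors, add one at a time
step(empty, scope = formula(full), direction = "forward")

# backward elimination: start with all predictors, drop one at a time
step(full, direction = "backward")

# stepwise: predictors may be added and later removed again
step(empty, scope = formula(full), direction = "both")
</code>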
| Standard Multiple | r<sup>2</sup> | variance in Y explained by all the IVs together (shared and unique) |
| ::: | sr<sup>2</sup> | variance in Y uniquely explained by IV<sub>i</sub> (squared semi-partial correlation) |
| ::: | pr<sup>2</sup> | of the variance in Y left unexplained by the other IVs, the proportion explained by IV<sub>i</sub> (squared partial correlation) |
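The squared semi-partial (sr<sup>2</sup>) and partial (pr<sup>2</sup>) correlations can also be computed directly from residualized variables. A sketch for the predictor ell, again assuming the dvar data used above:

<code>
# sr^2 and pr^2 for ell, computed from residualized variables
d <- na.omit(dvar[, c("api00", "ell", "acs_k3", "avg_ed", "meals")])

# part of ell not explained by the other IVs
ell.res <- resid(lm(ell ~ acs_k3 + avg_ed + meals, data = d))
# part of api00 not explained by the other IVs
y.res   <- resid(lm(api00 ~ acs_k3 + avg_ed + meals, data = d))

sr2.ell <- cor(d$api00, ell.res)^2  # unique share of the total variance in Y
pr2.ell <- cor(y.res, ell.res)^2    # share of the variance the other IVs leave unexplained
sr2.ell
pr2.ell
</code>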
[[:Multiple Regression Exercise]]
====== Resources ======