
Model Assessment & Selection
Dept. Computer Science & Engineering, Shanghai Jiaotong University
2024/3/17

Outline

- Bias, Variance and Model Complexity
- The Bias-Variance Decomposition
- Optimism of the Training Error Rate
- Estimates of In-Sample Prediction Error
- The Effective Number of Parameters
- The Bayesian Approach and BIC
- Minimum Description Length
- Vapnik-Chervonenkis Dimension
- Cross-Validation
- Bootstrap Methods

Bias, Variance & Model Complexity

The standard for model assessment is the generalization performance of a learning method: how well its predictions hold up on independent test data.

- Model: Y = f(X) + ε, with E(ε) = 0 and Var(ε) = σ_ε²
- Prediction model: f̂(X), estimated from a training set
- Loss function: L(Y, f̂(X)), measuring the error of a prediction

Errors to distinguish: the training error (the average loss over the training sample) and the generalization (test) error (the expected loss on new data).

Typical loss functions:
- Squared error: L(Y, f̂(X)) = (Y − f̂(X))²
- Absolute error: L(Y, f̂(X)) = |Y − f̂(X)|
- For categorical responses: 0–1 loss, or −2 × log-likelihood (the deviance)

Model selection: estimating the performance of different models in order to choose the best one.
Model assessment: having chosen a final model, estimating its prediction error on new data.
A standard approach when data are plentiful: randomly divide the dataset into a training set (to fit the models), a validation set (to estimate prediction error for model selection), and a test set (to assess the generalization error of the final chosen model).
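The three-way split can be sketched as follows. Everything here is an illustrative assumption, not from the slides: the synthetic dataset, the 50/25/25 split, and the two fixed candidate prediction rules.

```python
import random

random.seed(0)

# Hypothetical dataset: (x, y) pairs from a noisy linear relationship.
data = [(x, 2.0 * x + random.gauss(0, 0.5))
        for x in [random.uniform(0, 1) for _ in range(100)]]

# Shuffle, then split 50% / 25% / 25% into train / validation / test.
random.shuffle(data)
train, val, test = data[:50], data[50:75], data[75:]

def mse(model, pairs):
    """Average squared-error loss L(y, f(x)) = (y - f(x))^2 over a set."""
    return sum((y - model(x)) ** 2 for x, y in pairs) / len(pairs)

# Two fixed candidate prediction rules (illustrative; not fit to the data).
candidates = {"slope_1": lambda x: 1.0 * x, "slope_2": lambda x: 2.0 * x}

# Model selection: pick the candidate with the lowest validation error.
best = min(candidates, key=lambda name: mse(candidates[name], val))

# Model assessment: estimate the chosen model's error on the test set.
test_err = mse(candidates[best], test)
```

Keeping the test set untouched until the very end is the point of the split: the validation error of the selected model is itself optimistically biased, because the selection step adapted to the validation data.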

Bias-Variance Decomposition

Basic model: Y = f(X) + ε, with E(ε) = 0 and Var(ε) = σ_ε².

The expected prediction error of a regression fit f̂(x) at an input point x0, under squared-error loss, decomposes as

  Err(x0) = E[(Y − f̂(x0))² | X = x0]
          = σ_ε² + [E f̂(x0) − f(x0)]² + E[f̂(x0) − E f̂(x0)]²
          = irreducible error + bias² + variance.

The more complex the model, the lower the (squared) bias but the higher the variance.

For the k-nearest-neighbor regression fit the prediction error is

  Err(x0) = σ_ε² + [f(x0) − (1/k) Σ_{ℓ=1}^{k} f(x_(ℓ))]² + σ_ε²/k,

so the variance falls, and the squared bias typically rises, as k grows. For a linear model fit f̂_p(x) = xᵀβ̂ with p components,

  Err(x0) = σ_ε² + [f(x0) − E f̂_p(x0)]² + ‖h(x0)‖² σ_ε²,

where h(x0) = X(XᵀX)⁻¹x0 is the vector of weights producing the fit f̂_p(x0) = h(x0)ᵀy. Averaging over the training inputs gives the in-sample error of the linear model:

  (1/N) Σ_i Err(x_i) = σ_ε² + (1/N) Σ_i [f(x_i) − E f̂(x_i)]² + (p/N) σ_ε².

The model complexity is thus directly related to the number of parameters p. For ridge regression the variance has the same form (with a different h(x0)), while the squared bias splits into an unavoidable model bias and an estimation bias introduced by the shrinkage.

[Figure: schematic of the behavior of bias and variance as model complexity increases.]

Optimism of the Training Error Rate

The training error err̄ is typically less than the true error Err, because the fit is evaluated on the same data it was adapted to. Err is an extra-sample error: the test inputs need not coincide with the training inputs. The in-sample error Err_in instead keeps the training inputs fixed and draws new responses at them. The optimism is defined as

  op ≡ Err_in − err̄.
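The bias-variance behavior described above can be checked by simulation. A minimal sketch, with the target function f(x) = x², the noise level, the query point x0 = 0.5, and the Monte Carlo sizes all being illustrative assumptions: for k-NN regression at x0, the variance term should shrink roughly like σ_ε²/k.

```python
import random

random.seed(1)

def f(x):                # assumed "true" regression function
    return x * x

SIGMA = 0.3              # assumed noise standard deviation
N, X0, REPS = 50, 0.5, 2000

def knn_predict(train, x0, k):
    """k-NN regression: average the y-values of the k nearest x's."""
    nearest = sorted(train, key=lambda p: abs(p[0] - x0))[:k]
    return sum(y for _, y in nearest) / k

def bias_var(k):
    """Monte Carlo estimate of bias^2 and variance of the k-NN fit at X0,
    averaging over many freshly drawn training sets."""
    preds = []
    for _ in range(REPS):
        train = [(x, f(x) + random.gauss(0, SIGMA))
                 for x in [random.uniform(0, 1) for _ in range(N)]]
        preds.append(knn_predict(train, X0, k))
    mean = sum(preds) / REPS
    var = sum((p - mean) ** 2 for p in preds) / REPS
    return (mean - f(X0)) ** 2, var

b1, v1 = bias_var(1)      # high variance, near-zero bias
b20, v20 = bias_var(20)   # much lower variance, some bias
```

With k = 1 the variance is close to σ_ε² ≈ 0.09; with k = 20 it drops by roughly a factor of k (plus a small contribution from the random neighbor locations), matching the σ_ε²/k term in the decomposition.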

For squared error, 0–1, and other loss functions, the average optimism is

  ω ≡ E_y(op) = (2/N) Σ_i Cov(ŷ_i, y_i).

If ŷ is obtained by a linear fit with d inputs or basis functions, then Σ_i Cov(ŷ_i, y_i) = d σ_ε², and this simplifies to

  ω = 2 (d/N) σ_ε².

As the number of inputs or basis functions d increases, the optimism increases; as the number of training samples N increases, the optimism decreases.

Estimates of In-sample Prediction Error

The general form of the in-sample estimates is

  Êrr_in = err̄ + ω̂.

When d parameters are fit under squared-error loss, this gives the C_p statistic

  C_p = err̄ + 2 (d/N) σ̂_ε².

Using a log-likelihood in place of squared error leads to a similar but more generally applicable estimate: the Akaike Information Criterion.
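A minimal sketch of the C_p correction for a simple least-squares fit. The data-generating model and the assumption that σ_ε² is known are illustrative; with d = 2 fitted parameters (intercept and slope), the estimated optimism added to the training error is exactly 2(d/N)σ_ε².

```python
import random

random.seed(2)

# Illustrative data from y = 1 + 2x + noise, with noise variance assumed known.
SIGMA2 = 0.25
N = 40
xs = [random.uniform(0, 1) for _ in range(N)]
ys = [1.0 + 2.0 * x + random.gauss(0, SIGMA2 ** 0.5) for x in xs]

# Least-squares fit of intercept + slope (d = 2 fitted parameters).
xbar, ybar = sum(xs) / N, sum(ys) / N
slope = (sum(x * y for x, y in zip(xs, ys)) - N * xbar * ybar) / \
        (sum(x * x for x in xs) - N * xbar * xbar)
intercept = ybar - slope * xbar

train_err = sum((y - (intercept + slope * x)) ** 2
                for x, y in zip(xs, ys)) / N

# C_p: training error plus the estimated optimism 2 * (d/N) * sigma^2.
d = 2
cp = train_err + 2 * (d / N) * SIGMA2
```

The correction term here is 2 · (2/40) · 0.25 = 0.025: small because N is moderate and d tiny, but it grows linearly in d and shrinks as 1/N, exactly as the slide's remark states.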

Akaike Information Criterion

AIC is a similar but more generally applicable estimate of Err_in, based on the log-likelihood:

  AIC = −(2/N) · loglik + 2 · d/N.

Given a set of models f_α(x) indexed by a tuning parameter α, with training error err̄(α) and d(α) effective parameters,

  AIC(α) = err̄(α) + 2 (d(α)/N) σ̂_ε²

provides an estimate of the test error curve, and we choose the tuning parameter α̂ that minimizes it.

For the logistic regression model, AIC uses the binomial log-likelihood. For the Gaussian model (with variance σ_ε² assumed known), the AIC statistic equals the C_p statistic.
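Tracing out AIC(α) over a tuning parameter can be sketched with k-NN regression, where the tuning parameter is k and the effective number of parameters is N/k. The data and noise variance below are illustrative assumptions; note that the training error of 1-NN is exactly zero (each point is its own nearest neighbor), so AIC(1) consists purely of the complexity penalty.

```python
import random

random.seed(3)

SIGMA2 = 0.09     # assumed known noise variance
N = 50
xs = sorted(random.uniform(0, 1) for _ in range(N))
ys = [x * x + random.gauss(0, SIGMA2 ** 0.5) for x in xs]

def knn_fit(x0, k):
    """k-NN regression fit at x0 (a training point is its own neighbor)."""
    idx = sorted(range(N), key=lambda i: abs(xs[i] - x0))[:k]
    return sum(ys[i] for i in idx) / k

def aic(k):
    """AIC(k) = training error + 2 * (d(k)/N) * sigma^2,
    with d(k) = N/k the effective number of parameters of k-NN."""
    train_err = sum((ys[i] - knn_fit(xs[i], k)) ** 2 for i in range(N)) / N
    return train_err + 2 * ((N / k) / N) * SIGMA2

aics = {k: aic(k) for k in (1, 2, 5, 10, 20)}
best_k = min(aics, key=aics.get)   # minimize the estimated test-error curve
```

Minimizing the AIC curve rejects k = 1 despite its zero training error: the optimism estimate 2σ_ε²/k is largest exactly where the training error is most misleading.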

[Figure: phoneme-recognition example, AIC and test-error curves.]

Effective Number of Parameters

For a linear fitting method the vector of fitted values is

  ŷ = S y,

where S is an N × N matrix that depends on the inputs x_i but not on y. The effective number of parameters is defined as

  df(S) = trace(S).

If S is an orthogonal projection matrix onto a basis set spanned by M features, then trace(S) = M. In general, trace(S) is the correct quantity to replace d in the C_p statistic.
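A minimal sketch of df(S) = trace(S), using single-feature ridge regression (no intercept) so the trace reduces to a scalar formula; the feature values are an illustrative assumption. Here S = x(xᵀx + λ)⁻¹xᵀ, so trace(S) = Σ_i x_i²/(xᵀx + λ).

```python
# Single-feature ridge regression: y_hat = S y with
# S = x (x^T x + lambda)^(-1) x^T, an N x N smoother matrix.
xs = [0.5, 1.0, 1.5, 2.0, 2.5]        # illustrative feature values
sxx = sum(x * x for x in xs)          # x^T x

def df(lam):
    """Effective number of parameters: trace(S) = x^T x / (x^T x + lam)."""
    return sxx / (sxx + lam)

df0, df1, df10 = df(0.0), df(1.0), df(10.0)
```

At λ = 0, S is the orthogonal projection onto the single feature and df = M = 1 exactly; as λ grows the shrinkage reduces the effective number of parameters continuously toward 0, which is what makes trace(S) the right replacement for the integer count d.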

Bayesian Approach & BIC

The Bayesian Information Criterion:

  BIC = −2 · loglik + (log N) · d.

For the Gaussian model with known variance σ_ε², −2 · loglik equals Σ_i (y_i − f̂(x_i))²/σ_ε² (up to a constant), so

  BIC = (N/σ_ε²) [ err̄ + (log N) (d/N) σ_ε² ].

BIC is therefore proportional to AIC, with the factor 2 replaced by log N. Since log N > 2 once N > e² ≈ 7.4, BIC penalizes complex models more heavily and tends to select simpler ones.

Bayesian Model Selection

BIC can be derived from Bayesian model selection. Suppose we have candidate models M_m, m = 1, …, M, each with model parameters θ_m and a prior distribution Pr(θ_m | M_m). The posterior probability of a model given the data Z is

  Pr(M_m | Z) ∝ Pr(M_m) · Pr(Z | M_m).

To compare two models M_m and M_ℓ, form the posterior odds

  Pr(M_m | Z) / Pr(M_ℓ | Z) = [Pr(M_m) / Pr(M_ℓ)] · [Pr(Z | M_m) / Pr(Z | M_ℓ)].

If the odds are greater than 1, model m is chosen; otherwise model ℓ. The Bayes factor

  BF(Z) = Pr(Z | M_m) / Pr(Z | M_ℓ)

is the contribution of the data to the posterior odds.
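With a uniform prior over models, BIC gives approximate posterior model probabilities via Pr(M_m | Z) ≈ exp(−BIC_m/2) / Σ_ℓ exp(−BIC_ℓ/2). A minimal sketch; the three BIC values are made-up illustrative numbers.

```python
import math

# Illustrative BIC values for three candidate models (assumed numbers).
bic = {"M1": 210.4, "M2": 204.9, "M3": 208.1}

# Under a uniform model prior, BIC approximates the posterior:
# Pr(Mm | Z) ~ exp(-BIC_m / 2) / sum_l exp(-BIC_l / 2).
lo = min(bic.values())   # subtract the minimum for numerical stability
weights = {m: math.exp(-(b - lo) / 2) for m, b in bic.items()}
total = sum(weights.values())
posterior = {m: w / total for m, w in weights.items()}
```

The model with the smallest BIC gets the largest approximate posterior probability, which is the sense in which minimizing BIC and maximizing posterior probability coincide.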

If the prior over the models is uniform, Pr(M_m) is constant, and minimizing BIC is equivalent to maximizing the (approximate) posterior probability of the model. Advantage: if the set of candidates contains the true model, then as the sample size tends to infinity, the probability that BIC selects the correct model tends to one.

Minimum Description Length (MDL)

Origin: optimal coding. Suppose we must transmit one of four messages:

  message:  z1   z2   z3   z4
  code 1:   0    10   110  111
  code 2:   110  10   111  0

Principle: use the shortest codewords for the most frequently sent messages. If message z_i is sent with probability Pr(z_i), Shannon's theorem says to use codeword lengths

  l_i = −log2 Pr(z_i),

and the resulting expected message length attains the lower bound E(length) ≥ −Σ_i Pr(z_i) log2 Pr(z_i).
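The Shannon code lengths can be computed directly. Assuming (as an illustration consistent with code 1) that the four messages are sent with probabilities 1/2, 1/4, 1/8, 1/8:

```python
import math

# Assumed send probabilities for the four-message example above.
probs = [0.5, 0.25, 0.125, 0.125]

# Shannon: use codeword lengths l_i = -log2 Pr(z_i).
lengths = [round(-math.log2(p)) for p in probs]   # -> [1, 2, 3, 3]

# These lengths match code 1 (0, 10, 110, 111); the expected message
# length attains the entropy lower bound -sum p_i log2 p_i.
entropy = -math.fsum(p * math.log2(p) for p in probs)
avg_len = sum(p * l for p, l in zip(probs, lengths))   # -> 1.75 bits
```

Under these probabilities code 1 is optimal (1.75 bits per message on average), while code 2 assigns its shortest codeword to an infrequent message and is therefore wasteful.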

MDL and Model Selection

The MDL principle: we should choose the model that minimizes the combined description length of the data (encoded with the model's help) and of the model itself,

  length = −log Pr(y | θ, M, X) − log Pr(θ | M).

Minimizing this length is equivalent to maximizing the posterior probability, so in this setting the MDL and BIC criteria coincide.

Vapnik-Chervonenkis Dimension

Question: how should we choose the number of parameters d of a model? This quantity is meant to represent the model's complexity, and the VC dimension is an important general-purpose measure of complexity.

The VC dimension of a class {f(x, α)} is defined as the largest number of points (in some configuration) that can be shattered by members of {f(x, α)}. Lines in the plane have VC dimension 3. The class sign(sin(αx)) has infinite VC dimension.

The VC dimension of a real-valued function class {g(x, α)} is defined as the VC dimension of the indicator class {I(g(x, α) − β > 0)}. The VC dimension provides an estimate of the generalization error: suppose the class has VC dimension h and the sample size is N. Then, for a binary classifier, a typical bound states that with probability at least 1 − η over the training set,

  Err ≤ err̄ + √( [h (log(2N/h) + 1) − log(η/4)] / N ),

so the gap between test and training error grows with h and shrinks with N.
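The claim that sin(αx) has infinite VC dimension can be verified computationally with a standard construction: the points x_i = 10^(-i) can be given any labeling y ∈ {−1, +1}^n by choosing the frequency α = π(1 + Σ_i (1 − y_i)/2 · 10^i). The sketch below brute-forces every labeling of n = 4 points.

```python
import math

def shatter_alpha(labels):
    """Frequency alpha = pi * (1 + sum_i (1 - y_i)/2 * 10^i) that makes
    sign(sin(alpha * 10^-i)) equal to y_i for every point (a standard
    construction for showing sin(alpha x) has infinite VC dimension)."""
    return math.pi * (1 + sum((1 - y) // 2 * 10 ** (i + 1)
                              for i, y in enumerate(labels)))

n = 4
points = [10.0 ** -(i + 1) for i in range(n)]   # 0.1, 0.01, 0.001, 0.0001

ok = True
for mask in range(2 ** n):                      # every possible labeling
    labels = [1 if mask & (1 << i) else -1 for i in range(n)]
    a = shatter_alpha(labels)
    realized = [1 if math.sin(a * x) > 0 else -1 for x in points]
    ok = ok and realized == labels
```

All 2^n labelings are realized, so these n points are shattered; since the same construction works for any n, no finite h bounds the class, even though sin(αx) has only a single parameter. This is why parameter count alone is a poor complexity measure.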

Cross-Validation

The simplest and most widely used approach to estimating prediction error directly: K-fold cross-validation splits the data into K roughly equal parts; for each part k, the model is fit on the other K − 1 parts and evaluated on the held-out part, and the K estimates are averaged.

Bootstrap Methods

Basic idea: randomly sample datasets with replacement from the training data, each dataset the same size as the original training set. This produces B bootstrap datasets, and the model is refit to each. How can these datasets be used for prediction?
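The basic bootstrap resampling step can be sketched as follows, using it for the simplest possible target: the standard error of a sample mean, where the classical formula s/√N is available for comparison. The dataset and the number of replications B are illustrative assumptions.

```python
import random
import statistics

random.seed(4)

# Illustrative sample; in practice this is the observed training data.
y = [random.gauss(10.0, 2.0) for _ in range(100)]

B = 500
boot_means = []
for _ in range(B):
    # One bootstrap dataset: len(y) draws from y, with replacement.
    z = [random.choice(y) for _ in y]
    boot_means.append(statistics.fmean(z))

# Bootstrap estimate of the standard error of the sample mean,
# i.e. the spread of the statistic across the B bootstrap refits.
se_boot = statistics.stdev(boot_means)
se_formula = statistics.stdev(y) / len(y) ** 0.5   # classical formula
```

The two estimates agree closely here; the value of the bootstrap is that the same recipe works for statistics with no closed-form standard error. For estimating prediction error, naively evaluating each bootstrap fit on the full training set is optimistic, since bootstrap samples overlap the original data; that observation motivates the leave-one-out bootstrap refinements.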
