In statistics, Bayesian linear regression is an approach to linear regression in which the statistical analysis is undertaken within the context of Bayesian inference. When the regression model has errors that follow a normal distribution, and a particular form of prior distribution is assumed, explicit results are available for the posterior probability distributions of the model's parameters.
Model setup
Consider a standard linear regression problem, in which for $i = 1, \ldots, n$ we specify the mean of the conditional distribution of $y_i$ given a $k \times 1$ predictor vector $\mathbf{x}_i$:
$$y_i = \mathbf{x}_i^{\mathrm{T}} \boldsymbol\beta + \varepsilon_i,$$
where $\boldsymbol\beta$ is a $k \times 1$ vector, and the $\varepsilon_i$ are independent and identically normally distributed random variables:
$$\varepsilon_i \sim N(0, \sigma^2).$$
This corresponds to the following likelihood function:
$$\rho(\mathbf{y} \mid \mathbf{X}, \boldsymbol\beta, \sigma^2) \propto (\sigma^2)^{-n/2} \exp\left(-\frac{1}{2\sigma^2}(\mathbf{y} - \mathbf{X}\boldsymbol\beta)^{\mathrm{T}}(\mathbf{y} - \mathbf{X}\boldsymbol\beta)\right).$$
The ordinary least squares solution estimates the coefficient vector using the Moore–Penrose pseudoinverse:
$$\hat{\boldsymbol\beta} = (\mathbf{X}^{\mathrm{T}}\mathbf{X})^{-1}\mathbf{X}^{\mathrm{T}}\mathbf{y},$$
where $\mathbf{X}$ is the $n \times k$ design matrix, each row of which is a predictor vector $\mathbf{x}_i^{\mathrm{T}}$, and $\mathbf{y}$ is the column $n$-vector $[y_1 \; \cdots \; y_n]^{\mathrm{T}}$.
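As a concrete illustration, the least-squares estimate can be computed directly. The following minimal NumPy sketch uses simulated data; the variable names and simulated values are illustrative assumptions, not part of the model above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: n observations, k predictors (first column is an intercept)
n, k = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=0.3, size=n)

# OLS estimate of the coefficient vector via the Moore-Penrose pseudoinverse
beta_hat = np.linalg.pinv(X) @ y
print(beta_hat)
```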
This is a frequentist approach, and it assumes that there are enough measurements to say something meaningful about $\boldsymbol\beta$. In the Bayesian approach, the data are supplemented with additional information in the form of a prior probability distribution. The prior belief about the parameters is combined with the data's likelihood function according to Bayes' theorem to yield the posterior belief about the parameters $\boldsymbol\beta$ and $\sigma$. The prior can take different functional forms depending on the domain and the information that is available a priori.
With conjugate priors
Conjugate prior distribution
For an arbitrary prior distribution, there may be no analytical solution for the posterior distribution. In this section, we consider a so-called conjugate prior for which the posterior distribution can be derived analytically.
A prior $\rho(\boldsymbol\beta, \sigma^2)$ is conjugate to this likelihood function if it has the same functional form with respect to $\boldsymbol\beta$ and $\sigma$. Since the log-likelihood is quadratic in $\boldsymbol\beta$, the log-likelihood is re-written such that the likelihood becomes normal in $(\boldsymbol\beta - \hat{\boldsymbol\beta})$. Write
$$(\mathbf{y} - \mathbf{X}\boldsymbol\beta)^{\mathrm{T}}(\mathbf{y} - \mathbf{X}\boldsymbol\beta) = (\mathbf{y} - \mathbf{X}\hat{\boldsymbol\beta})^{\mathrm{T}}(\mathbf{y} - \mathbf{X}\hat{\boldsymbol\beta}) + (\boldsymbol\beta - \hat{\boldsymbol\beta})^{\mathrm{T}}(\mathbf{X}^{\mathrm{T}}\mathbf{X})(\boldsymbol\beta - \hat{\boldsymbol\beta})$$
(the cross term vanishes because $\mathbf{X}^{\mathrm{T}}(\mathbf{y} - \mathbf{X}\hat{\boldsymbol\beta}) = \mathbf{0}$ by the normal equations). The likelihood is now re-written as
$$\rho(\mathbf{y} \mid \mathbf{X}, \boldsymbol\beta, \sigma^2) \propto (\sigma^2)^{-\frac{v}{2}} \exp\left(-\frac{v s^2}{2\sigma^2}\right) (\sigma^2)^{-\frac{n - v}{2}} \exp\left(-\frac{1}{2\sigma^2}(\boldsymbol\beta - \hat{\boldsymbol\beta})^{\mathrm{T}}(\mathbf{X}^{\mathrm{T}}\mathbf{X})(\boldsymbol\beta - \hat{\boldsymbol\beta})\right),$$
where
$$v s^2 = (\mathbf{y} - \mathbf{X}\hat{\boldsymbol\beta})^{\mathrm{T}}(\mathbf{y} - \mathbf{X}\hat{\boldsymbol\beta}) \quad \text{and} \quad v = n - k,$$
where $k$ is the number of regression coefficients.
This suggests a form for the prior:
$$\rho(\boldsymbol\beta, \sigma^2) = \rho(\sigma^2)\,\rho(\boldsymbol\beta \mid \sigma^2),$$
where $\rho(\sigma^2)$ is an inverse-gamma distribution
$$\rho(\sigma^2) \propto (\sigma^2)^{-\frac{v_0}{2} - 1} \exp\left(-\frac{v_0 s_0^2}{2\sigma^2}\right).$$
In the notation introduced in the inverse-gamma distribution article, this is the density of an $\text{Inv-Gamma}(a_0, b_0)$ distribution with $a_0 = \tfrac{v_0}{2}$ and $b_0 = \tfrac{1}{2} v_0 s_0^2$, where $v_0$ and $s_0^2$ are the prior values of $v$ and $s^2$, respectively. Equivalently, it can also be described as a scaled inverse chi-squared distribution, $\text{Scale-inv-}\chi^2(v_0, s_0^2)$.
Further, the conditional prior density $\rho(\boldsymbol\beta \mid \sigma^2)$ is a normal distribution,
$$\rho(\boldsymbol\beta \mid \sigma^2) \propto (\sigma^2)^{-k/2} \exp\left(-\frac{1}{2\sigma^2}(\boldsymbol\beta - \boldsymbol\mu_0)^{\mathrm{T}} \boldsymbol\Lambda_0 (\boldsymbol\beta - \boldsymbol\mu_0)\right).$$
In the notation of the normal distribution, the conditional prior distribution is $\mathcal{N}\left(\boldsymbol\mu_0, \sigma^2 \boldsymbol\Lambda_0^{-1}\right)$.
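To make this prior concrete, one can sample from it in two stages, drawing $\sigma^2$ first and then $\boldsymbol\beta$ given $\sigma^2$. The following is a minimal NumPy/SciPy sketch; the hyperparameter values are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

k = 3
a0, b0 = 2.0, 1.0              # hyperparameters of the Inv-Gamma(a0, b0) prior on sigma^2
mu0 = np.zeros(k)              # prior mean of beta
Lambda0 = 0.5 * np.eye(k)      # prior precision matrix of beta (scaled by 1/sigma^2)

# Draw sigma^2 ~ Inv-Gamma(a0, b0); SciPy's invgamma with shape a0 and scale b0 has this density
sigma2 = stats.invgamma.rvs(a0, scale=b0, random_state=1)

# Draw beta | sigma^2 ~ N(mu0, sigma^2 * Lambda0^{-1})
beta = rng.multivariate_normal(mu0, sigma2 * np.linalg.inv(Lambda0))
print(sigma2, beta)
```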
Posterior distribution
With the prior now specified, the posterior distribution can be expressed as
$$\begin{aligned}
\rho(\boldsymbol\beta, \sigma^2 \mid \mathbf{y}, \mathbf{X}) &\propto \rho(\mathbf{y} \mid \mathbf{X}, \boldsymbol\beta, \sigma^2)\,\rho(\boldsymbol\beta \mid \sigma^2)\,\rho(\sigma^2) \\
&\propto (\sigma^2)^{-n/2} \exp\left(-\frac{1}{2\sigma^2}(\mathbf{y} - \mathbf{X}\boldsymbol\beta)^{\mathrm{T}}(\mathbf{y} - \mathbf{X}\boldsymbol\beta)\right) (\sigma^2)^{-k/2} \exp\left(-\frac{1}{2\sigma^2}(\boldsymbol\beta - \boldsymbol\mu_0)^{\mathrm{T}} \boldsymbol\Lambda_0 (\boldsymbol\beta - \boldsymbol\mu_0)\right) (\sigma^2)^{-(a_0 + 1)} \exp\left(-\frac{b_0}{\sigma^2}\right).
\end{aligned}$$
With some re-arrangement,[1] the posterior can be re-written so that the posterior mean $\boldsymbol\mu_n$ of the parameter vector $\boldsymbol\beta$ can be expressed in terms of the least squares estimator $\hat{\boldsymbol\beta}$ and the prior mean $\boldsymbol\mu_0$, with the strength of the prior indicated by the prior precision matrix $\boldsymbol\Lambda_0$:
$$\boldsymbol\mu_n = (\mathbf{X}^{\mathrm{T}}\mathbf{X} + \boldsymbol\Lambda_0)^{-1}(\mathbf{X}^{\mathrm{T}}\mathbf{X}\hat{\boldsymbol\beta} + \boldsymbol\Lambda_0 \boldsymbol\mu_0).$$
To justify that $\boldsymbol\mu_n$ is indeed the posterior mean, the quadratic terms in the exponential can be re-arranged as a quadratic form in $\boldsymbol\beta - \boldsymbol\mu_n$:[2]
$$(\mathbf{y} - \mathbf{X}\boldsymbol\beta)^{\mathrm{T}}(\mathbf{y} - \mathbf{X}\boldsymbol\beta) + (\boldsymbol\beta - \boldsymbol\mu_0)^{\mathrm{T}}\boldsymbol\Lambda_0(\boldsymbol\beta - \boldsymbol\mu_0) = (\boldsymbol\beta - \boldsymbol\mu_n)^{\mathrm{T}}(\mathbf{X}^{\mathrm{T}}\mathbf{X} + \boldsymbol\Lambda_0)(\boldsymbol\beta - \boldsymbol\mu_n) + \mathbf{y}^{\mathrm{T}}\mathbf{y} - \boldsymbol\mu_n^{\mathrm{T}}(\mathbf{X}^{\mathrm{T}}\mathbf{X} + \boldsymbol\Lambda_0)\boldsymbol\mu_n + \boldsymbol\mu_0^{\mathrm{T}}\boldsymbol\Lambda_0\boldsymbol\mu_0.$$
(This identity can be verified by expanding both sides and using $\mathbf{X}^{\mathrm{T}}\mathbf{X}\hat{\boldsymbol\beta} = \mathbf{X}^{\mathrm{T}}\mathbf{y}$ together with the definition of $\boldsymbol\mu_n$.) Now the posterior can be expressed as a normal distribution times an inverse-gamma distribution:
$$\rho(\boldsymbol\beta, \sigma^2 \mid \mathbf{y}, \mathbf{X}) \propto (\sigma^2)^{-k/2} \exp\left(-\frac{1}{2\sigma^2}(\boldsymbol\beta - \boldsymbol\mu_n)^{\mathrm{T}}(\mathbf{X}^{\mathrm{T}}\mathbf{X} + \boldsymbol\Lambda_0)(\boldsymbol\beta - \boldsymbol\mu_n)\right) (\sigma^2)^{-\frac{n + 2a_0}{2} - 1} \exp\left(-\frac{2b_0 + \mathbf{y}^{\mathrm{T}}\mathbf{y} - \boldsymbol\mu_n^{\mathrm{T}}(\mathbf{X}^{\mathrm{T}}\mathbf{X} + \boldsymbol\Lambda_0)\boldsymbol\mu_n + \boldsymbol\mu_0^{\mathrm{T}}\boldsymbol\Lambda_0\boldsymbol\mu_0}{2\sigma^2}\right).$$
Therefore, the posterior distribution can be parametrized as follows:
$$\rho(\boldsymbol\beta, \sigma^2 \mid \mathbf{y}, \mathbf{X}) \propto \rho(\boldsymbol\beta \mid \sigma^2, \mathbf{y}, \mathbf{X})\,\rho(\sigma^2 \mid \mathbf{y}, \mathbf{X}),$$
where the two factors correspond to the densities of $\mathcal{N}\left(\boldsymbol\mu_n, \sigma^2 \boldsymbol\Lambda_n^{-1}\right)$ and $\text{Inv-Gamma}(a_n, b_n)$ distributions, with the parameters of these given by
$$\boldsymbol\Lambda_n = \mathbf{X}^{\mathrm{T}}\mathbf{X} + \boldsymbol\Lambda_0, \qquad \boldsymbol\mu_n = \boldsymbol\Lambda_n^{-1}(\mathbf{X}^{\mathrm{T}}\mathbf{X}\hat{\boldsymbol\beta} + \boldsymbol\Lambda_0 \boldsymbol\mu_0),$$
$$a_n = a_0 + \frac{n}{2}, \qquad b_n = b_0 + \frac{1}{2}\left(\mathbf{y}^{\mathrm{T}}\mathbf{y} + \boldsymbol\mu_0^{\mathrm{T}}\boldsymbol\Lambda_0\boldsymbol\mu_0 - \boldsymbol\mu_n^{\mathrm{T}}\boldsymbol\Lambda_n\boldsymbol\mu_n\right).$$
This can be interpreted as Bayesian learning in which the parameters are updated according to the following equations.
$$\boldsymbol\mu_n = (\mathbf{X}^{\mathrm{T}}\mathbf{X} + \boldsymbol\Lambda_0)^{-1}(\boldsymbol\Lambda_0\boldsymbol\mu_0 + \mathbf{X}^{\mathrm{T}}\mathbf{X}\hat{\boldsymbol\beta}),$$
$$\boldsymbol\Lambda_n = \mathbf{X}^{\mathrm{T}}\mathbf{X} + \boldsymbol\Lambda_0,$$
$$a_n = a_0 + \frac{n}{2},$$
$$b_n = b_0 + \frac{1}{2}\left(\mathbf{y}^{\mathrm{T}}\mathbf{y} + \boldsymbol\mu_0^{\mathrm{T}}\boldsymbol\Lambda_0\boldsymbol\mu_0 - \boldsymbol\mu_n^{\mathrm{T}}\boldsymbol\Lambda_n\boldsymbol\mu_n\right).$$
Model evidence
The model evidence $p(\mathbf{y} \mid m)$ is the probability of the data given the model $m$. It is also known as the marginal likelihood, and as the prior predictive density. Here the model is defined by the likelihood function $p(\mathbf{y} \mid \mathbf{X}, \boldsymbol\beta, \sigma)$ and the prior distribution on the parameters, i.e. $p(\boldsymbol\beta, \sigma)$. The model evidence captures in a single number how well such a model explains the observations. The model evidence of the Bayesian linear regression model presented in this section can be used to compare competing linear models by Bayesian model comparison. These models may differ in the number and values of the predictor variables as well as in their priors on the model parameters. Model complexity is already taken into account by the model evidence, because it marginalizes out the parameters by integrating $p(\mathbf{y}, \boldsymbol\beta, \sigma \mid \mathbf{X})$ over all possible values of $\boldsymbol\beta$ and $\sigma$:
$$p(\mathbf{y} \mid m) = \int p(\mathbf{y} \mid \mathbf{X}, \boldsymbol\beta, \sigma)\,p(\boldsymbol\beta, \sigma)\,d\boldsymbol\beta\,d\sigma.$$
This integral can be computed analytically, and the solution is given in the following equation.[3]
$$p(\mathbf{y} \mid m) = \frac{1}{(2\pi)^{n/2}}\sqrt{\frac{\det(\boldsymbol\Lambda_0)}{\det(\boldsymbol\Lambda_n)}}\cdot\frac{b_0^{a_0}}{b_n^{a_n}}\cdot\frac{\Gamma(a_n)}{\Gamma(a_0)}.$$
Here $\Gamma$ denotes the gamma function. Because we have chosen a conjugate prior, the marginal likelihood can also be computed easily by evaluating the following equality for arbitrary values of $\boldsymbol\beta$ and $\sigma$:
$$p(\mathbf{y} \mid m) = \frac{p(\boldsymbol\beta, \sigma \mid m)\,p(\mathbf{y} \mid \mathbf{X}, \boldsymbol\beta, \sigma, m)}{p(\boldsymbol\beta, \sigma \mid \mathbf{y}, \mathbf{X}, m)}.$$
Note that this equation is nothing more than a re-arrangement of Bayes' theorem. Inserting the formulas for the prior, the likelihood, and the posterior, and simplifying the resulting expression, leads to the analytic expression given above.
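The conjugate updates and the closed-form evidence above translate directly into code. The following is a minimal NumPy/SciPy sketch; the function name is hypothetical, and X, y and the prior hyperparameters are assumed to be defined as in the earlier snippets:

```python
import numpy as np
from scipy.special import gammaln

def posterior_and_log_evidence(X, y, mu0, Lambda0, a0, b0):
    """Conjugate normal-inverse-gamma update and log marginal likelihood log p(y | m)."""
    n = X.shape[0]
    XtX = X.T @ X
    beta_hat = np.linalg.solve(XtX, X.T @ y)   # OLS estimate (assumes X has full column rank)

    # Posterior parameters (Lambda_n, mu_n, a_n, b_n)
    Lambda_n = XtX + Lambda0
    mu_n = np.linalg.solve(Lambda_n, XtX @ beta_hat + Lambda0 @ mu0)
    a_n = a0 + n / 2
    b_n = b0 + 0.5 * (y @ y + mu0 @ Lambda0 @ mu0 - mu_n @ Lambda_n @ mu_n)

    # Log model evidence, evaluated on the log scale for numerical stability
    _, logdet0 = np.linalg.slogdet(Lambda0)
    _, logdet_n = np.linalg.slogdet(Lambda_n)
    log_evidence = (-0.5 * n * np.log(2.0 * np.pi)
                    + 0.5 * (logdet0 - logdet_n)
                    + a0 * np.log(b0) - a_n * np.log(b_n)
                    + gammaln(a_n) - gammaln(a0))
    return mu_n, Lambda_n, a_n, b_n, log_evidence
```

Competing linear models, for example models with different predictor sets or different priors, can then be ranked by the log evidence returned by such a function.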
Other cases
In general, it may be impossible or impractical to derive the posterior distribution analytically. However, it is possible to approximate the posterior by an approximate Bayesian inference method such as Monte Carlo sampling[4] or variational Bayes.
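As a sketch of the Monte Carlo route, the following random-walk Metropolis sampler targets the unnormalized log posterior of the model above. It is a minimal illustration rather than a tuned implementation, and it assumes X, y and the prior hyperparameters from the earlier snippets:

```python
import numpy as np

def log_posterior(beta, log_sigma2, X, y, mu0, Lambda0, a0, b0):
    """Unnormalized log posterior in (beta, log sigma^2); the log parameterization keeps sigma^2 > 0."""
    sigma2 = np.exp(log_sigma2)
    resid = y - X @ beta
    diff = beta - mu0
    log_lik = -0.5 * len(y) * np.log(sigma2) - 0.5 * resid @ resid / sigma2
    log_prior_beta = -0.5 * len(beta) * np.log(sigma2) - 0.5 * diff @ Lambda0 @ diff / sigma2
    log_prior_sigma2 = -(a0 + 1.0) * np.log(sigma2) - b0 / sigma2
    return log_lik + log_prior_beta + log_prior_sigma2 + log_sigma2  # last term: Jacobian of the log transform

def random_walk_metropolis(X, y, mu0, Lambda0, a0, b0, n_iter=20000, step=0.05, seed=0):
    rng = np.random.default_rng(seed)
    k = X.shape[1]
    theta = np.zeros(k + 1)                                   # state: (beta, log sigma^2)
    cur = log_posterior(theta[:k], theta[k], X, y, mu0, Lambda0, a0, b0)
    samples = np.empty((n_iter, k + 1))
    for i in range(n_iter):
        prop = theta + step * rng.normal(size=k + 1)          # symmetric random-walk proposal
        new = log_posterior(prop[:k], prop[k], X, y, mu0, Lambda0, a0, b0)
        if np.log(rng.uniform()) < new - cur:                 # Metropolis acceptance step
            theta, cur = prop, new
        samples[i] = theta
    return samples
```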
The special case $\boldsymbol\mu_0 = 0$, $\boldsymbol\Lambda_0 = c\mathbf{I}$ is called ridge regression.
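Indeed, substituting $\boldsymbol\mu_0 = 0$ and $\boldsymbol\Lambda_0 = c\mathbf{I}$ into the expression for the posterior mean, and using $\mathbf{X}^{\mathrm{T}}\mathbf{X}\hat{\boldsymbol\beta} = \mathbf{X}^{\mathrm{T}}\mathbf{y}$, recovers the familiar ridge estimator:
$$\boldsymbol\mu_n = (\mathbf{X}^{\mathrm{T}}\mathbf{X} + c\mathbf{I})^{-1}\,\mathbf{X}^{\mathrm{T}}\mathbf{X}\hat{\boldsymbol\beta} = (\mathbf{X}^{\mathrm{T}}\mathbf{X} + c\mathbf{I})^{-1}\mathbf{X}^{\mathrm{T}}\mathbf{y}.$$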
A similar analysis can be performed for the general case of multivariate regression; part of this provides for Bayesian estimation of covariance matrices: see Bayesian multivariate linear regression.
See also
Notes
^ The intermediate steps of this computation can be found in O'Hagan (1994) at the beginning of the chapter on linear models.
^ The intermediate steps are in Fahrmeir et al. (2009) on page 188.
^ The intermediate steps of this computation can be found in O'Hagan (1994) on page 257.
^ Carlin and Louis (2008) and Gelman et al. (2003) explain how to use sampling methods for Bayesian linear regression.
References
Box, G. E. P.; Tiao, G. C. (1973). Bayesian Inference in Statistical Analysis. Wiley. ISBN 0-471-57428-7.
Carlin, Bradley P.; Louis, Thomas A. (2008). Bayesian Methods for Data Analysis (Third ed.). Boca Raton, FL: Chapman and Hall/CRC. ISBN 1-58488-697-8.
Fahrmeir, L.; Kneib, T.; Lang, S. (2009). Regression. Modelle, Methoden und Anwendungen (Second ed.). Heidelberg: Springer. doi:10.1007/978-3-642-01837-4. ISBN 978-3-642-01836-7.
Fornalski, K. W.; Parzych, G.; Pylak, M.; Satuła, D.; Dobrzyński, L. (2010). "Application of Bayesian reasoning and the Maximum Entropy Method to some reconstruction problems". Acta Physica Polonica A. 117 (6): 892–899. doi:10.12693/APhysPolA.117.892.
Fornalski, Krzysztof W. (2015). "Applications of the robust Bayesian regression analysis". International Journal of Society Systems Science. 7 (4): 314–333. doi:10.1504/IJSSS.2015.073223.
Gelman, Andrew; Carlin, John B.; Stern, Hal S.; Rubin, Donald B. (2003). Bayesian Data Analysis (Second ed.). Boca Raton, FL: Chapman and Hall/CRC. ISBN 1-58488-388-X.
Goldstein, Michael; Wooff, David (2007). Bayes Linear Statistics, Theory & Methods. Wiley. ISBN 978-0-470-01562-9.
Minka, Thomas P. (2001). Bayesian Linear Regression. Microsoft Research web page.
Rossi, Peter E.; Allenby, Greg M.; McCulloch, Robert (2006). Bayesian Statistics and Marketing. John Wiley & Sons. ISBN 0470863676.
O'Hagan, Anthony (1994). Bayesian Inference. Kendall's Advanced Theory of Statistics. Vol. 2B (First ed.). Halsted. ISBN 0-340-52922-9.
Sivia, D. S.; Skilling, J. (2006). Data Analysis – A Bayesian Tutorial (Second ed.). Oxford University Press.
Walter, Gero; Augustin, Thomas (2009). "Bayesian Linear Regression – Different Conjugate Models and Their (In)Sensitivity to Prior-Data Conflict" (PDF). Technical Report Number 069, Department of Statistics, University of Munich.
External links