In mathematics, divided differences is an algorithm, historically used for computing tables of logarithms and trigonometric functions.[citation needed] Charles Babbage's difference engine, an early mechanical calculator, was designed to use this algorithm in its operation.[1]

Divided differences is a recursive division process. The method can be used to calculate the coefficients of the interpolation polynomial in Newton form.
Definition
Given k + 1 data points

$$(x_0, y_0), \ldots, (x_k, y_k)$$

the forward divided differences are defined as:

$$[y_\nu] := y_\nu, \qquad \nu \in \{0, \ldots, k\}$$
$$[y_\nu, \ldots, y_{\nu+j}] := \frac{[y_{\nu+1}, \ldots, y_{\nu+j}] - [y_\nu, \ldots, y_{\nu+j-1}]}{x_{\nu+j} - x_\nu}, \qquad \nu \in \{0, \ldots, k-j\},\ j \in \{1, \ldots, k\}.$$

The backward divided differences are defined as:
$$[y_\nu] := y_\nu, \qquad \nu \in \{0, \ldots, k\}$$
$$[y_\nu, \ldots, y_{\nu-j}] := \frac{[y_\nu, \ldots, y_{\nu-j+1}] - [y_{\nu-1}, \ldots, y_{\nu-j}]}{x_\nu - x_{\nu-j}}, \qquad \nu \in \{j, \ldots, k\},\ j \in \{1, \ldots, k\}.$$
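The recursion translates directly into code. The following Python sketch (the function name `divided_difference` and the sample data are illustrative, not from any standard library) computes a forward divided difference straight from the definition, written for clarity rather than efficiency:

```python
def divided_difference(x, y):
    """Forward divided difference [y_0, ..., y_k] over nodes x_0, ..., x_k,
    computed directly from the recursive definition."""
    assert len(x) == len(y)
    if len(y) == 1:                      # base case: [y_nu] = y_nu
        return y[0]
    # [y_0,...,y_k] = ([y_1,...,y_k] - [y_0,...,y_{k-1}]) / (x_k - x_0)
    return (divided_difference(x[1:], y[1:]) -
            divided_difference(x[:-1], y[:-1])) / (x[-1] - x[0])

# Example: values of f(x) = x**2, so [y_0, y_1, y_2] should equal 1.
xs = [1.0, 2.0, 4.0]
ys = [v**2 for v in xs]
print(divided_difference(xs, ys))        # -> 1.0
```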
Notation

If the data points are given as a function f,
$$(x_0, f(x_0)), \ldots, (x_k, f(x_k))$$

one sometimes writes
$$f[x_\nu] := f(x_\nu), \qquad \nu \in \{0, \ldots, k\}$$
$$f[x_\nu, \ldots, x_{\nu+j}] := \frac{f[x_{\nu+1}, \ldots, x_{\nu+j}] - f[x_\nu, \ldots, x_{\nu+j-1}]}{x_{\nu+j} - x_\nu}, \qquad \nu \in \{0, \ldots, k-j\},\ j \in \{1, \ldots, k\}.$$

Several notations for the divided difference of the function f on the nodes $x_0, \ldots, x_n$ are in use:
$$[x_0, \ldots, x_n]f, \qquad [x_0, \ldots, x_n; f], \qquad D[x_0, \ldots, x_n]f$$

and so on.
Example
The divided differences for $\nu = 0$ and the first few values of $j$:
$$\begin{aligned}
[y_0] &= y_0 \\
[y_0, y_1] &= \frac{y_1 - y_0}{x_1 - x_0} \\
[y_0, y_1, y_2] &= \frac{[y_1, y_2] - [y_0, y_1]}{x_2 - x_0} = \frac{\frac{y_2 - y_1}{x_2 - x_1} - \frac{y_1 - y_0}{x_1 - x_0}}{x_2 - x_0} = \frac{y_2 - y_1}{(x_2 - x_1)(x_2 - x_0)} - \frac{y_1 - y_0}{(x_1 - x_0)(x_2 - x_0)} \\
[y_0, y_1, y_2, y_3] &= \frac{[y_1, y_2, y_3] - [y_0, y_1, y_2]}{x_3 - x_0}
\end{aligned}$$

To make the recursive process clearer, the divided differences can be arranged in tabular form:
$$\begin{matrix}
x_0 & y_0 = [y_0] & & & \\
 & & [y_0, y_1] & & \\
x_1 & y_1 = [y_1] & & [y_0, y_1, y_2] & \\
 & & [y_1, y_2] & & [y_0, y_1, y_2, y_3] \\
x_2 & y_2 = [y_2] & & [y_1, y_2, y_3] & \\
 & & [y_2, y_3] & & \\
x_3 & y_3 = [y_3] & & &
\end{matrix}$$
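The table can be built column by column. Below is a minimal Python sketch (the function name `divided_difference_table` and the example data are illustrative) that returns every column; the top entries $[y_0], [y_0,y_1], \ldots$ are exactly the coefficients of the Newton form of the interpolation polynomial.

```python
def divided_difference_table(x, y):
    """Return the columns of the divided-difference table.
    columns[j][i] holds [y_i, ..., y_{i+j}]."""
    columns = [list(y)]                          # column 0: [y_i] = y_i
    for j in range(1, len(y)):
        prev = columns[-1]
        col = [(prev[i + 1] - prev[i]) / (x[i + j] - x[i])
               for i in range(len(prev) - 1)]
        columns.append(col)
    return columns

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 2.0, 5.0]
table = divided_difference_table(xs, ys)
newton_coefficients = [col[0] for col in table]  # [y_0], [y_0,y_1], ...
print(newton_coefficients)
```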
Properties

Linearity:
$$(f + g)[x_0, \ldots, x_n] = f[x_0, \ldots, x_n] + g[x_0, \ldots, x_n]$$
$$(\lambda \cdot f)[x_0, \ldots, x_n] = \lambda \cdot f[x_0, \ldots, x_n]$$

Leibniz rule:
$$(f \cdot g)[x_0, \ldots, x_n] = f[x_0] \cdot g[x_0, \ldots, x_n] + f[x_0, x_1] \cdot g[x_1, \ldots, x_n] + \cdots + f[x_0, \ldots, x_n] \cdot g[x_n] = \sum_{r=0}^{n} f[x_0, \ldots, x_r] \cdot g[x_r, \ldots, x_n]$$

Divided differences are symmetric: if $\sigma : \{0, \ldots, n\} \to \{0, \ldots, n\}$ is a permutation, then
$$f[x_0, \ldots, x_n] = f[x_{\sigma(0)}, \ldots, x_{\sigma(n)}].$$

Mean value theorem for divided differences: if $f$ is $n$ times differentiable, then
$$f[x_0, \ldots, x_n] = \frac{f^{(n)}(\xi)}{n!}$$
where $\xi$ lies in the open interval determined by the smallest and largest of the $x_k$.

Matrix form

The divided difference scheme can be put into an upper triangular matrix. Let
$$T_f(x_0, \ldots, x_n) = \begin{pmatrix} f[x_0] & f[x_0, x_1] & f[x_0, x_1, x_2] & \ldots & f[x_0, \ldots, x_n] \\ 0 & f[x_1] & f[x_1, x_2] & \ldots & f[x_1, \ldots, x_n] \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \ldots & f[x_n] \end{pmatrix}.$$
Then it holds that
$$T_{f+g}\,x = T_f\,x + T_g\,x$$
$$T_{f \cdot g}\,x = T_f\,x \cdot T_g\,x$$

This follows from the Leibniz rule. It means that multiplication of such matrices is commutative. Summarising, the matrices of divided difference schemes with respect to the same set of nodes form a commutative ring. Since $T_f x$ is a triangular matrix, its eigenvalues are obviously $f(x_0), \ldots, f(x_n)$.

Let $\delta_\xi$ be a Kronecker delta-like function, that is
$$\delta_\xi(t) = \begin{cases} 1 &: t = \xi, \\ 0 &: \text{else.} \end{cases}$$
Obviously $f \cdot \delta_\xi = f(\xi) \cdot \delta_\xi$, so $\delta_\xi$ is an eigenfunction of pointwise function multiplication. That is, $T_{\delta_{x_i}}\,x$ is, in a sense, an "eigenmatrix" of $T_f\,x$:
$$T_f\,x \cdot T_{\delta_{x_i}}\,x = f(x_i) \cdot T_{\delta_{x_i}}\,x.$$
However, all columns of $T_{\delta_{x_i}}\,x$ are multiples of each other; the matrix rank of $T_{\delta_{x_i}}\,x$ is 1. So you can compose the matrix of all eigenvectors of $T_f\,x$ from the $i$-th column of each $T_{\delta_{x_i}}\,x$. Denote this matrix of eigenvectors by $Ux$. For example,
$$U(x_0, x_1, x_2, x_3) = \begin{pmatrix} 1 & \frac{1}{x_1 - x_0} & \frac{1}{(x_2 - x_0)(x_2 - x_1)} & \frac{1}{(x_3 - x_0)(x_3 - x_1)(x_3 - x_2)} \\ 0 & 1 & \frac{1}{x_2 - x_1} & \frac{1}{(x_3 - x_1)(x_3 - x_2)} \\ 0 & 0 & 1 & \frac{1}{x_3 - x_2} \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
The diagonalization of $T_f\,x$ can then be written as
$$Ux \cdot \operatorname{diag}(f(x_0), \ldots, f(x_n)) = T_f\,x \cdot Ux.$$
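The matrix form is easy to check numerically. The sketch below (assuming numpy; names such as `scheme_matrix` are illustrative) builds $T_f\,x$ from divided differences and verifies $T_{f\cdot g}\,x = T_f\,x \cdot T_g\,x$ on a small example:

```python
import numpy as np

def divided_difference(x, y):
    if len(y) == 1:
        return y[0]
    return (divided_difference(x[1:], y[1:]) -
            divided_difference(x[:-1], y[:-1])) / (x[-1] - x[0])

def scheme_matrix(f, x):
    """Upper triangular matrix T_f x with entries f[x_i, ..., x_j] for i <= j."""
    n = len(x)
    T = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            nodes = x[i:j + 1]
            T[i, j] = divided_difference(nodes, [f(t) for t in nodes])
    return T

x = [0.0, 1.0, 3.0, 4.0]
f = lambda t: t**2 + 1.0
g = lambda t: 2.0 * t - 3.0
Tf, Tg = scheme_matrix(f, x), scheme_matrix(g, x)
Tfg = scheme_matrix(lambda t: f(t) * g(t), x)
print(np.allclose(Tfg, Tf @ Tg))   # True: the Leibniz rule in matrix form
```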
Alternative definitions

Expanded form

$$\begin{aligned}
f[x_0] &= f(x_0) \\
f[x_0, x_1] &= \frac{f(x_0)}{x_0 - x_1} + \frac{f(x_1)}{x_1 - x_0} \\
f[x_0, x_1, x_2] &= \frac{f(x_0)}{(x_0 - x_1)(x_0 - x_2)} + \frac{f(x_1)}{(x_1 - x_0)(x_1 - x_2)} + \frac{f(x_2)}{(x_2 - x_0)(x_2 - x_1)} \\
f[x_0, x_1, x_2, x_3] &= \frac{f(x_0)}{(x_0 - x_1)(x_0 - x_2)(x_0 - x_3)} + \frac{f(x_1)}{(x_1 - x_0)(x_1 - x_2)(x_1 - x_3)} + \frac{f(x_2)}{(x_2 - x_0)(x_2 - x_1)(x_2 - x_3)} + \frac{f(x_3)}{(x_3 - x_0)(x_3 - x_1)(x_3 - x_2)} \\
f[x_0, \ldots, x_n] &= \sum_{j=0}^{n} \frac{f(x_j)}{\prod_{k \in \{0, \ldots, n\} \setminus \{j\}} (x_j - x_k)}
\end{aligned}$$
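For distinct nodes the expanded form can be evaluated directly, without recursion. A minimal Python sketch (the function name `expanded_divided_difference` is illustrative, and `math.sin` is just a test function):

```python
import math

def expanded_divided_difference(f, x):
    """f[x_0, ..., x_n] via the expanded form; the nodes must be distinct."""
    total = 0.0
    for j, xj in enumerate(x):
        denom = math.prod(xj - xk for k, xk in enumerate(x) if k != j)
        total += f(xj) / denom
    return total

f = math.sin
x = [0.0, 0.5, 1.0, 1.5]
print(expanded_divided_difference(f, x))
# By the mean value theorem this is close to f'''(xi)/3! = -cos(xi)/6
# for some xi in (0, 1.5).
```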
Using a polynomial function $q$ with $q(\xi) = (\xi - x_0) \cdots (\xi - x_n)$, the expanded form can be written as

$$f[x_0, \ldots, x_n] = \sum_{j=0}^{n} \frac{f(x_j)}{q'(x_j)}.$$

Alternatively, we may allow counting backwards from the start of the sequence by defining $x_k = x_{k+n+1} = x_{k-(n+1)}$ whenever $k < 0$ or $n < k$. This definition lets $x_{-1}$ be interpreted as $x_n$, $x_{-2}$ as $x_{n-1}$, $x_{-n}$ as $x_0$, and so on. The expanded form of the divided difference then becomes
$$f[x_0, \ldots, x_n] = \sum_{j=0}^{n} \frac{f(x_j)}{\prod\limits_{k=j-n}^{j-1} (x_j - x_k)} = \sum_{j=0}^{n} \frac{f(x_j)}{\prod\limits_{k=j+1}^{j+n} (x_j - x_k)}$$
Yet another characterization uses limits:
$$f[x_0, \ldots, x_n] = \sum_{j=0}^{n} \lim_{x \to x_j} \left[ \frac{f(x_j)(x - x_j)}{\prod\limits_{k=0}^{n} (x - x_k)} \right]$$
Partial fractions

You can represent partial fractions using the expanded form of divided differences. (This does not simplify computation, but is interesting in its own right.) If $p$ and $q$ are polynomial functions, where $\deg p < \deg q$ and $q$ is given in terms of linear factors by $q(\xi) = (\xi - x_1) \cdots (\xi - x_n)$, then it follows from partial fraction decomposition that
$$\frac{p(\xi)}{q(\xi)} = \left( t \mapsto \frac{p(t)}{\xi - t} \right)[x_1, \ldots, x_n].$$

If limits of the divided differences are admitted, this connection also holds when some of the $x_j$ coincide.
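For distinct roots this is easy to check numerically. The sketch below (helper names like `dd_of_function` are illustrative) evaluates the divided difference of $t \mapsto p(t)/(\xi - t)$ over the roots of $q$ and compares it with $p(\xi)/q(\xi)$:

```python
import math

def dd_of_function(g, nodes):
    """Divided difference g[x_1, ..., x_n] via the expanded form (distinct nodes)."""
    return sum(g(xj) / math.prod(xj - xk for k, xk in enumerate(nodes) if k != j)
               for j, xj in enumerate(nodes))

p = lambda t: 3.0 * t + 2.0              # deg p < deg q
roots = [1.0, 2.0, 4.0]                  # q(xi) = (xi-1)(xi-2)(xi-4)
q = lambda t: math.prod(t - r for r in roots)

xi = 7.5                                 # any point that is not a root of q
lhs = p(xi) / q(xi)
rhs = dd_of_function(lambda t: p(t) / (xi - t), roots)
print(abs(lhs - rhs) < 1e-12)            # True
```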
If $f$ is a polynomial function of arbitrary degree and is decomposed as $f(x) = p(x) + q(x) \cdot d(x)$ using polynomial division of $f$ by $q$, then
$$\frac{p(\xi)}{q(\xi)} = \left( t \mapsto \frac{f(t)}{\xi - t} \right)[x_1, \ldots, x_n].$$

Peano form

The divided differences can be expressed as
$$f[x_0, \ldots, x_n] = \frac{1}{n!} \int_{x_0}^{x_n} f^{(n)}(t)\, B_{n-1}(t)\, dt$$

where $B_{n-1}$ is a B-spline of degree $n-1$ on the data points $x_0, \ldots, x_n$ and $f^{(n)}$ is the $n$-th derivative of the function $f$.
This is called the Peano form of the divided differences, and $B_{n-1}$ is called the Peano kernel for the divided differences; both are named after Giuseppe Peano.
Taylor form

First order

If the nodes lie close together, numerical computation of the divided difference is inaccurate, because you divide two quantities that are both nearly zero, each carrying a high relative error from subtracting nearly equal values. However, we know that difference quotients approximate derivatives, and vice versa:
$$\frac{f(y) - f(x)}{y - x} \approx f'(x) \qquad \text{for } x \approx y$$

This approximation can be turned into an identity whenever Taylor's theorem applies:
$$f(y) = f(x) + f'(x)\cdot(y - x) + f''(x)\cdot\frac{(y - x)^2}{2!} + f'''(x)\cdot\frac{(y - x)^3}{3!} + \ldots$$
$$\Rightarrow \frac{f(y) - f(x)}{y - x} = f'(x) + f''(x)\cdot\frac{y - x}{2!} + f'''(x)\cdot\frac{(y - x)^2}{3!} + \ldots$$

You can eliminate the odd powers of $y - x$ by expanding the Taylor series at the midpoint between $x$ and $y$:
$$x = m - h, \quad y = m + h, \qquad \text{that is} \quad m = \frac{x + y}{2}, \quad h = \frac{y - x}{2}$$
$$f(m + h) = f(m) + f'(m)\cdot h + f''(m)\cdot\frac{h^2}{2!} + f'''(m)\cdot\frac{h^3}{3!} + \ldots$$
$$f(m - h) = f(m) - f'(m)\cdot h + f''(m)\cdot\frac{h^2}{2!} - f'''(m)\cdot\frac{h^3}{3!} + \ldots$$
$$\frac{f(y) - f(x)}{y - x} = \frac{f(m + h) - f(m - h)}{2h} = f'(m) + f'''(m)\cdot\frac{h^2}{3!} + \ldots$$
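A quick numerical illustration of why the centered form is preferable (the names below are illustrative, assuming `exp` as a test function): the one-sided quotient has an error of order $h$, while the centered quotient's error is of order $h^2$.

```python
import math

f, df = math.exp, math.exp        # test function and its exact derivative
m, h = 1.0, 1e-3

one_sided = (f(m + h) - f(m)) / h             # ~ f'(m) + f''(m)*h/2 + ...
centered = (f(m + h) - f(m - h)) / (2 * h)    # ~ f'(m) + f'''(m)*h**2/6 + ...

print(abs(one_sided - df(m)))     # error on the order of 1e-3
print(abs(centered - df(m)))      # error on the order of 1e-7
```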
Higher order

The Taylor series, or any other representation as a series of functions, can in principle be used to approximate divided differences. A Taylor series is an infinite sum of power functions. The mapping from a function $f$ to a divided difference $f[x_0, \ldots, x_n]$ is a linear functional, so we may apply this functional to each summand.

Express power notation with an ordinary function: $p_n(x) = x^n$.
A regular Taylor series is a weighted sum of power functions:
$$f = f(0)\cdot p_0 + f'(0)\cdot p_1 + \frac{f''(0)}{2!}\cdot p_2 + \frac{f'''(0)}{3!}\cdot p_3 + \ldots$$
The Taylor series for divided differences:
$$f[x_0, \ldots, x_n] = f(0)\cdot p_0[x_0, \ldots, x_n] + f'(0)\cdot p_1[x_0, \ldots, x_n] + \frac{f''(0)}{2!}\cdot p_2[x_0, \ldots, x_n] + \frac{f'''(0)}{3!}\cdot p_3[x_0, \ldots, x_n] + \ldots$$
We know that the first $n$ terms vanish, because the order of the divided difference is higher than the polynomial order, and in the following term the divided difference is one:
$$\begin{aligned}
\forall j < n:\quad & p_j[x_0, \ldots, x_n] = 0 \\
& p_n[x_0, \ldots, x_n] = 1 \\
& p_{n+1}[x_0, \ldots, x_n] = x_0 + \cdots + x_n \\
& p_{n+m}[x_0, \ldots, x_n] = \sum_{\substack{a \in \{0, \ldots, n\}^m \\ a_1 \le a_2 \le \cdots \le a_m}} \; \prod_{j \in a} x_j.
\end{aligned}$$

It follows that the Taylor series for the divided difference essentially starts with $\frac{f^{(n)}(0)}{n!}$, which is also a simple approximation of the divided difference, according to the mean value theorem for divided differences.
If we had to compute the divided differences of the power functions in the usual way, we would run into the same numerical problems that we had when computing the divided difference of $f$. Fortunately, there is a simpler way. It holds that
$$t^n = (1 - x_0 \cdot t) \cdots (1 - x_n \cdot t) \cdot \left( p_0[x_0, \ldots, x_n] + p_1[x_0, \ldots, x_n]\cdot t + p_2[x_0, \ldots, x_n]\cdot t^2 + \ldots \right).$$

Consequently, we can compute the divided differences of $p_n$ by a division of formal power series. Note how this reduces to the successive computation of powers when we compute $p_n[h]$ for several $n$.
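A minimal sketch of this power series division (the function name `power_divided_differences` is illustrative): dividing $t^n$ by each factor $(1 - x_i t)$ in turn yields the coefficients $p_0[x_0,\ldots,x_n], p_1[x_0,\ldots,x_n], \ldots$ without any cancellation-prone subtractions.

```python
def power_divided_differences(x, num_terms):
    """Coefficients p_0[x], p_1[x], ..., p_{num_terms-1}[x] of the divided
    differences of the power functions p_k(t) = t**k over the nodes x,
    obtained by dividing the formal power series t**n by prod(1 - x_i*t)."""
    n = len(x) - 1
    coeffs = [0.0] * num_terms
    if n < num_terms:
        coeffs[n] = 1.0                      # start from the series t**n
    for xi in x:                             # divide by (1 - xi*t): a_k = b_k + xi*a_{k-1}
        for k in range(1, num_terms):
            coeffs[k] += xi * coeffs[k - 1]
    return coeffs

x = [1.0, 2.0, 3.0]
print(power_divided_differences(x, 6))
# -> [0.0, 0.0, 1.0, 6.0, 25.0, 90.0]
# p_2[x] = 1, p_3[x] = x_0 + x_1 + x_2 = 6, p_4[x] = 25, p_5[x] = 90, ...
```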
If you need to compute a whole divided difference scheme with respect to a Taylor series, see the section on divided differences of power series.
Polynomials and power series
Divided differences of polynomials are particularly interesting because they can benefit from the Leibniz rule. The matrix $J$ with
$$J = \begin{pmatrix}
x_0 & 1 & 0 & 0 & \cdots & 0 \\
0 & x_1 & 1 & 0 & \cdots & 0 \\
0 & 0 & x_2 & 1 & & 0 \\
\vdots & \vdots & & \ddots & \ddots & \\
0 & 0 & 0 & 0 & & x_n
\end{pmatrix}$$

contains the divided difference scheme of the identity function with respect to the nodes $x_0, \ldots, x_n$, so $J^n$ contains the divided differences of the power function with exponent $n$. Consequently, you can obtain the divided differences of a polynomial function $\varphi(p)$ with respect to the polynomial $p$ by applying $p$ (more precisely: its corresponding matrix polynomial function $\varphi_{\mathrm{M}}(p)$) to the matrix $J$:
$$\varphi(p)(\xi) = a_0 + a_1 \cdot \xi + \cdots + a_n \cdot \xi^n$$
$$\varphi_{\mathrm{M}}(p)(J) = a_0 + a_1 \cdot J + \cdots + a_n \cdot J^n
= \begin{pmatrix}
\varphi(p)[x_0] & \varphi(p)[x_0, x_1] & \varphi(p)[x_0, x_1, x_2] & \ldots & \varphi(p)[x_0, \ldots, x_n] \\
0 & \varphi(p)[x_1] & \varphi(p)[x_1, x_2] & \ldots & \varphi(p)[x_1, \ldots, x_n] \\
\vdots & \ddots & \ddots & \ddots & \vdots \\
0 & \ldots & 0 & 0 & \varphi(p)[x_n]
\end{pmatrix}$$

This is known as Opitz' formula.[2][3]
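The following sketch (assuming numpy; names such as `opitz_scheme` are illustrative) evaluates a polynomial at the matrix $J$ with Horner's method and reads off the divided difference table from the result:

```python
import numpy as np

def opitz_scheme(coeffs, x):
    """Evaluate p(J) by Horner's method, where p has ascending coefficients
    coeffs = [a_0, a_1, ..., a_n] and J carries the nodes x.
    The result is the divided difference scheme of p on the nodes x."""
    m = len(x)
    J = np.diag(x) + np.diag(np.ones(m - 1), k=1)   # nodes on the diagonal, ones above
    P = np.zeros((m, m))
    for a in reversed(coeffs):                       # Horner: P <- P*J + a_k*I
        P = P @ J + a * np.eye(m)
    return P

x = np.array([0.0, 1.0, 3.0, 4.0])
coeffs = [2.0, -1.0, 0.0, 1.0]                       # p(t) = 2 - t + t**3
P = opitz_scheme(coeffs, x)
print(P[0])   # first row: p[x_0], p[x_0,x_1], p[x_0,x_1,x_2], p[x_0,...,x_3]
              # -> [2. 0. 4. 1.]
```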
Now consider increasing the degree of $p$ to infinity, i.e. turning the Taylor polynomial into a Taylor series. Let $f$ be a function that corresponds to a power series. You can compute the divided difference scheme by computing the corresponding matrix series applied to $J$. If the nodes $x_0, \ldots, x_n$ are all equal, then $J$ is a Jordan block, and the computation boils down to generalizing a scalar function to a matrix function using the Jordan decomposition.
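For the confluent case this can be checked on a polynomial directly: with all nodes equal to $a$, the scheme $p(J)$ of the Jordan block reproduces the confluent divided differences $p[a, \ldots, a] = p^{(k)}(a)/k!$ along its first row. A sketch (assuming numpy; the names are illustrative):

```python
import math
import numpy as np
from numpy.polynomial import polynomial as Poly

a, m = 2.0, 4                                   # node a repeated m times
J = a * np.eye(m) + np.diag(np.ones(m - 1), 1)  # Jordan block for eigenvalue a

coeffs = [1.0, -2.0, 0.0, 3.0, 1.0]             # p(t) = 1 - 2t + 3t**3 + t**4
P = sum(c * np.linalg.matrix_power(J, k) for k, c in enumerate(coeffs))

# Confluent divided differences: p[a,...,a] (k+1 copies) = p^(k)(a) / k!
expected = [Poly.polyval(a, Poly.polyder(coeffs, k)) / math.factorial(k)
            for k in range(m)]
print(np.allclose(P[0], expected))              # True
```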
Forward differences
When the data points are equidistantly distributed we get the special case called forward differences. They are easier to calculate than the more general divided differences.

Note that the "divided portion" of the forward divided difference must still be computed in order to recover the forward divided difference from the forward difference.
Definition

Given n data points
$$(x_0, y_0), \ldots, (x_{n-1}, y_{n-1})$$

with
$$x_\nu = x_0 + \nu h, \qquad h > 0, \qquad \nu = 0, \ldots, n-1,$$

the divided differences can be calculated via forward differences, defined as
$$\Delta^{(0)} y_i := y_i$$
$$\Delta^{(k)} y_i := \Delta^{(k-1)} y_{i+1} - \Delta^{(k-1)} y_i, \qquad k \ge 1.$$

The relationship between divided differences and forward differences is[4]
$$f[x_0, x_1, \ldots, x_k] = \frac{1}{k!\, h^k} \Delta^{(k)} f(x_0).$$

Example

$$\begin{matrix}
y_0 & & & \\
 & \Delta y_0 & & \\
y_1 & & \Delta^2 y_0 & \\
 & \Delta y_1 & & \Delta^3 y_0 \\
y_2 & & \Delta^2 y_1 & \\
 & \Delta y_2 & & \\
y_3 & & &
\end{matrix}$$
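The sketch below (illustrative names, assuming equally spaced nodes) computes forward differences by repeated subtraction and confirms the relation $f[x_0,\ldots,x_k] = \Delta^{(k)} f(x_0) / (k!\,h^k)$ against the recursive divided difference:

```python
import math

def forward_differences(y):
    """Return [Delta^(0) y_0, Delta^(1) y_0, ..., Delta^(n-1) y_0]."""
    out, row = [y[0]], list(y)
    while len(row) > 1:
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
        out.append(row[0])
    return out

def divided_difference(x, y):
    if len(y) == 1:
        return y[0]
    return (divided_difference(x[1:], y[1:]) -
            divided_difference(x[:-1], y[:-1])) / (x[-1] - x[0])

h, x0 = 0.5, 1.0
xs = [x0 + i * h for i in range(5)]
ys = [math.sin(t) for t in xs]

deltas = forward_differences(ys)
for k in range(len(xs)):
    lhs = divided_difference(xs[:k + 1], ys[:k + 1])
    rhs = deltas[k] / (math.factorial(k) * h**k)
    print(k, abs(lhs - rhs) < 1e-12)
```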
References
1. Isaacson, Walter (2014). The Innovators. Simon & Schuster. p. 20. ISBN 978-1-4767-0869-0.
2. de Boor, Carl, "Divided differences", Surveys in Approximation Theory 1 (2005), 46-69, [1]
3. Opitz, G., "Steigungsmatrizen", Z. Angew. Math. Mech. (1964), 44, T52-T54
4. Burden, Richard L.; Faires, J. Douglas (2011). Numerical Analysis (9th ed.). p. 129.

Louis Melville Milne-Thomson (2000) [1933]. The Calculus of Finite Differences. American Mathematical Soc. Chapter 1: Divided Differences. ISBN 978-0-8218-2107-7.
Myron B. Allen; Eli L. Isaacson (1998). Numerical Analysis for Applied Science. John Wiley & Sons. Appendix A. ISBN 978-1-118-03027-1.
Ron Goldman (2002). Pyramid Algorithms: A Dynamic Programming Approach to Curves and Surfaces for Geometric Modeling. Morgan Kaufmann. Chapter 4: Newton Interpolation and Difference Triangles. ISBN 978-0-08-051547-2.