Simple Kriging Regression Task


A Simple Kriging Regression Task is a [[]] that ...



References

2017

  • (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/Kriging#Simple_kriging Retrieved: 2017-09-03.
    • Simple kriging is mathematically the simplest, but the least general. It assumes that the expectation of the random field is known, and it relies on a covariance function. However, in most applications neither the expectation nor the covariance is known beforehand.

      The practical assumptions for the application of simple kriging are:

      • Wide-sense stationarity of the field.
      • The expectation is zero everywhere: [math]\displaystyle{ \mu(x)=0 }[/math].
      • Known covariance function [math]\displaystyle{ c(x,y)=\operatorname{Cov}(Z(x),Z(y)) }[/math].

      ;System of equations
    • The kriging weights of simple kriging have no unbiasedness condition and are given by the simple kriging equation system:

      [math]\displaystyle{ \begin{pmatrix}w_1 \\ \vdots \\ w_n \end{pmatrix}= \begin{pmatrix}c(x_1,x_1) & \cdots & c(x_1,x_n) \\ \vdots & \ddots & \vdots \\ c(x_n,x_1) & \cdots & c(x_n,x_n) \end{pmatrix}^{-1} \begin{pmatrix}c(x_1,x_0) \\ \vdots \\ c(x_n,x_0) \end{pmatrix} }[/math]

      This is analogous to a linear regression of [math]\displaystyle{ Z(x_0) }[/math] on the other [math]\displaystyle{ z_1 , \ldots, z_n }[/math].
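      As a concrete illustration, here is a minimal NumPy sketch of this weight computation. It assumes a squared-exponential covariance function; the kernel choice, the sample locations, and all parameter values below are hypothetical, not taken from the source.

        import numpy as np

        def sq_exp_cov(a, b, sill=1.0, length=1.0):
            # Hypothetical squared-exponential covariance c(x, y).
            return sill * np.exp(-((a - b) ** 2) / (2.0 * length ** 2))

        # Made-up sample locations x_1..x_n and prediction location x_0.
        x = np.array([0.0, 1.0, 2.5, 4.0])
        x0 = 1.7

        # Covariance matrix C[i, j] = c(x_i, x_j) and vector c0[i] = c(x_i, x_0).
        C = sq_exp_cov(x[:, None], x[None, :])
        c0 = sq_exp_cov(x, x0)

        # Simple kriging weights w = C^{-1} c0; solving the linear system is
        # preferable to forming the explicit inverse.
        w = np.linalg.solve(C, c0)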

      ;Estimation

      The interpolation by simple kriging is given by:

      [math]\displaystyle{ \hat{Z}(x_0)=\begin{pmatrix}z_1 \\ \vdots \\ z_n \end{pmatrix}' \begin{pmatrix}c(x_1,x_1) & \cdots & c(x_1,x_n) \\ \vdots & \ddots & \vdots \\ c(x_n,x_1) & \cdots & c(x_n,x_n) \end{pmatrix}^{-1} \begin{pmatrix}c(x_1,x_0) \\ \vdots \\ c(x_n,x_0)\end{pmatrix} }[/math]

      The kriging error is given by:

      [math]\displaystyle{ \operatorname{Var}\left(\hat{Z}(x_0)-Z(x_0)\right)=\underbrace{c(x_0,x_0)}_{\operatorname{Var}(Z(x_0))}- \underbrace{\begin{pmatrix}c(x_1,x_0) \\ \vdots \\ c(x_n,x_0)\end{pmatrix}' \begin{pmatrix} c(x_1,x_1) & \cdots & c(x_1,x_n) \\ \vdots & \ddots & \vdots \\ c(x_n,x_1) & \cdots & c(x_n,x_n) \end{pmatrix}^{-1} \begin{pmatrix}c(x_1,x_0) \\ \vdots \\ c(x_n,x_0) \end{pmatrix}}_{\operatorname{Var}(\hat{Z}(x_0))} }[/math]

      which leads to the generalised least squares version of the Gauss–Markov theorem (Chiles & Delfiner 1999, p. 159):

      [math]\displaystyle{ \operatorname{Var}(Z(x_0))=\operatorname{Var}(\hat{Z}(x_0)) + \operatorname{Var}\left(\hat{Z}(x_0)-Z(x_0)\right). }[/math]
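      Continuing the sketch above (same hypothetical locations x, x0, covariance function, and weights w), the estimate and the kriging error variance follow directly; the observation values z are again made up and assumed zero-mean, matching [math]\displaystyle{ \mu(x)=0 }[/math]:

        # Hypothetical zero-mean observations z_1..z_n at the locations x.
        z = np.array([0.3, -0.1, 0.4, 0.2])

        # Estimate: Z_hat(x_0) = z' C^{-1} c0 = z . w
        z_hat = z @ w

        # Kriging error variance: c(x_0, x_0) - c0' C^{-1} c0. The subtracted
        # term c0 @ w equals Var(Z_hat(x_0)), so the decomposition
        # Var(Z(x_0)) = Var(Z_hat(x_0)) + Var(Z_hat(x_0) - Z(x_0)) can be
        # checked numerically.
        kriging_var = sq_exp_cov(x0, x0) - c0 @ w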