Linear Transformation Operation

From GM-RKB

A Linear Transformation Operation is a transformation operation that preserves the operations of vector addition and scalar multiplication.

  • AKA: Linear Mapping.
  • Context:
    • It can be represented as follows: let [math]\displaystyle{ V }[/math] and [math]\displaystyle{ W }[/math] be vector spaces over the field [math]\displaystyle{ F }[/math]; a linear transformation from [math]\displaystyle{ V }[/math] into [math]\displaystyle{ W }[/math] is a function [math]\displaystyle{ T }[/math] from [math]\displaystyle{ V }[/math] into [math]\displaystyle{ W }[/math] such that [math]\displaystyle{ T(c\alpha + \beta)=c(T \alpha)+T \beta }[/math] for all [math]\displaystyle{ \alpha }[/math] and [math]\displaystyle{ \beta }[/math] in [math]\displaystyle{ V }[/math] and all scalars [math]\displaystyle{ c }[/math] in [math]\displaystyle{ F }[/math].
    • It can be a Surjective Linear Transformation.
    • It can be an Injective Linear Transformation.
    • It can be produced by a Linear Mapping Task.
    • It can range from being a Linear Function to a Linear Matrix Transformation.
    • For two linear transformations [math]\displaystyle{ T_1:V \to V }[/math] and [math]\displaystyle{ T_2:V \to V }[/math], the composite transformations [math]\displaystyle{ T_1T_2 }[/math] and [math]\displaystyle{ T_2T_1 }[/math] are also linear transformations from [math]\displaystyle{ V \to V }[/math]. In general, however, [math]\displaystyle{ T_1T_2 \neq T_2T_1 }[/math].
    • For a linear transformation [math]\displaystyle{ T:V \to W }[/math], the collection of all elements [math]\displaystyle{ w \in W }[/math] such that [math]\displaystyle{ w=T(v) }[/math] for some [math]\displaystyle{ v \in V }[/math] is called the range of [math]\displaystyle{ T }[/math] and is denoted [math]\displaystyle{ ran(T) }[/math]. That is

      [math]\displaystyle{ ran(T)=\{T(v)|v \in V\} }[/math].

    • For a linear transformation [math]\displaystyle{ T:V \to W }[/math], the set of all elements of [math]\displaystyle{ V }[/math] that are mapped to the zero element by the linear transformation [math]\displaystyle{ T }[/math] is called the kernel (or null space) of [math]\displaystyle{ T }[/math] and is denoted [math]\displaystyle{ ker(T) }[/math]. That is

      [math]\displaystyle{ ker(T)=\{v|T(v)=0\} }[/math].
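The defining property [math]\displaystyle{ T(c\alpha + \beta)=c(T \alpha)+T \beta }[/math] can be checked numerically on sample vectors. The following minimal Python sketch (the helper names `T` and `is_linear_on` are illustrative, not from this article) uses the map [math]\displaystyle{ (x,y,z) \mapsto (y+z,\ y-z) }[/math] that appears as a worked example below:

```python
# A numerical spot-check of the linearity property
# T(c*alpha + beta) == c*T(alpha) + T(beta) on sample vectors.
# (Illustrative helper names; a passing check is evidence, not a proof.)

def T(v):
    # The example map R^3 -> R^2 from this article: (x, y, z) -> (y+z, y-z)
    x, y, z = v
    return (y + z, y - z)

def is_linear_on(T, alpha, beta, c, tol=1e-9):
    lhs = T([c * a + b for a, b in zip(alpha, beta)])
    rhs = tuple(c * ta + tb for ta, tb in zip(T(alpha), T(beta)))
    return all(abs(l - r) < tol for l, r in zip(lhs, rhs))

print(is_linear_on(T, [1.0, 2.0, 3.0], [4.0, 5.0, 6.0], 2.5))  # → True
```

A check like this can only refute linearity (one failing triple suffices); the algebraic proof below establishes it for all inputs.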

  • Example(s):
    • a Zero Map.
    • an Identity Map.
    • a Linear Transformation Addition Operation ([math]\displaystyle{ + }[/math]), where for [math]\displaystyle{ T_1:V \to W }[/math] and [math]\displaystyle{ T_2:V \to W }[/math], [math]\displaystyle{ T_1 + T_2 }[/math] is also a linear transformation (from [math]\displaystyle{ V \to W }[/math]).
    • a Constant Multiplication Function, [math]\displaystyle{ x \mapsto cx }[/math], where [math]\displaystyle{ c }[/math] is a constant.
    • a Fourier Transform.
    • a Haar Transform.
    • a Bilinear Function.
    • a Linear Projection, such as an orthogonal projection.
    • [math]\displaystyle{ T(x)=-x/2 }[/math], which combines a scale compression (by a factor of 1/2) with a reflection.
    • a transformation [math]\displaystyle{ T:\R^3 \to \mathbb{R}^2 }[/math] defined by [math]\displaystyle{ T\left(\substack{ x \\y \\z}\right)=\left( \substack{ y+z \\y-z} \right) }[/math] is a linear transformation since [math]\displaystyle{ T }[/math] satisfies the property [math]\displaystyle{ T(c\alpha + \beta)=c(T \alpha)+T \beta }[/math] for all [math]\displaystyle{ \alpha, \beta \in V }[/math] and all scalars [math]\displaystyle{ c \in F }[/math].

      Let [math]\displaystyle{ \alpha=\left(\substack{ x_1 \\y_1 \\z_1}\right) \in V, \beta=\left(\substack{ x_2 \\y_2 \\z_2}\right) \in V }[/math] and scalar [math]\displaystyle{ c \in F }[/math].

      So [math]\displaystyle{ T\alpha=T(\alpha)=T\left(\substack{ x_1 \\y_1 \\z_1}\right)=\left( \substack{ y_1+z_1 \\y_1-z_1} \right) \in \mathbb{R}^2=W, T \beta=T(\beta)=T\left(\substack{ x_2 \\y_2 \\z_2}\right)=\left( \substack{y_2+z_2 \\y_2-z_2} \right) \in \mathbb{R}^2=W }[/math]

      Then [math]\displaystyle{ T(c\alpha+\beta)=T\left(\substack{ cx_1+x_2 \\cy_1+y_2 \\cz_1+z_2}\right)=\left( \substack{ cy_1+y_2+cz_1+z_2 \\cy_1+y_2-cz_1-z_2} \right)=\left( \substack{ c(y_1+z_1)+y_2+z_2 \\c(y_1-z_1)+y_2-z_2} \right)=c\left( \substack{ y_1+z_1 \\y_1-z_1} \right)+ \left( \substack{ y_2+z_2 \\y_2-z_2} \right)=c(T\alpha) + T \beta }[/math], which proves that [math]\displaystyle{ T }[/math] is a linear transformation.

      With a little computation it can also be found that the transformation matrix is [math]\displaystyle{ T=\begin{pmatrix}0 & 1 & 1 \\ 0 & 1 & -1 \end{pmatrix} }[/math].

      It can be verified that [math]\displaystyle{ T\left(\substack{ x \\y \\z}\right)=\begin{pmatrix}0 & 1 & 1 \\ 0 & 1 & -1 \end{pmatrix} \left(\substack{ x \\y \\z}\right)=\left( \substack{ y+z \\y-z} \right) }[/math]
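This matrix form can also be verified numerically. A minimal sketch (using numpy, which is an assumption of this sketch and not part of the article):

```python
# Verify that the standard matrix A reproduces T(x, y, z) = (y+z, y-z)
# on a sample vector.
import numpy as np

A = np.array([[0, 1, 1],
              [0, 1, -1]])  # standard matrix of T

def T(v):
    x, y, z = v
    return np.array([y + z, y - z])

v = np.array([7, 2, 5])
print(A @ v)  # [ 7 -3]
print(T(v))   # [ 7 -3]
```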

  • Counter-Example(s):
    • a Non-Linear Transformation, such as [math]\displaystyle{ x\mapsto x^2 }[/math].
    • a Cosine Function, [math]\displaystyle{ x \mapsto \cos(x) }[/math].
    • a transformation [math]\displaystyle{ T:\R^3 \to \mathbb{R}^1 }[/math] defined by [math]\displaystyle{ T\begin{pmatrix}x \\y \\z \end{pmatrix}=x^2+y^2+z^2 }[/math], which is not a linear transformation since [math]\displaystyle{ T }[/math] does not satisfy the property [math]\displaystyle{ T(c\alpha + \beta)=c(T \alpha)+T \beta }[/math] for all [math]\displaystyle{ \alpha, \beta \in V }[/math] and all scalars [math]\displaystyle{ c \in F }[/math].

      That the property fails can be verified as follows.

      Let [math]\displaystyle{ \alpha=\begin{pmatrix}x_1 \\y_1 \\z_1 \end{pmatrix} \in \mathbb{R}^3=V, \beta=\begin{pmatrix}x_2 \\y_2 \\z_2 \end{pmatrix} \in \mathbb{R}^3=V }[/math] and scalar [math]\displaystyle{ c \in F }[/math].

      So [math]\displaystyle{ T\alpha=T(\alpha)=T\begin{pmatrix}x_1 \\y_1 \\z_1 \end{pmatrix}={x_1}^2+{y_1}^2+{z_1}^2 \in \mathbb{R}^1=W }[/math] and [math]\displaystyle{ T\beta=T(\beta)=T\begin{pmatrix}x_2 \\y_2 \\z_2 \end{pmatrix}={x_2}^2+{y_2}^2+{z_2}^2 \in \mathbb{R}^1=W }[/math]

      [math]\displaystyle{ T(c \alpha+\beta)=T\begin{pmatrix}cx_1+x_2 \\cy_1+y_2 \\cz_1+z_2 \end{pmatrix}={(cx_1+x_2)}^2+{(cy_1+y_2)}^2+{(cz_1+z_2)}^2 \neq cT(\alpha)+T(\beta) }[/math].
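The failure can also be seen numerically: the squared-norm map scales quadratically, [math]\displaystyle{ T(cv)=c^2\,T(v) }[/math], rather than linearly. A short Python sketch (the helper `T` is illustrative):

```python
# The squared-norm map fails homogeneity: T(c*v) = c^2 * T(v), not c * T(v),
# so it cannot be a linear transformation.

def T(v):
    return sum(x * x for x in v)

v = [1.0, 2.0, 3.0]   # T(v) = 14.0
c = 2.0
print(T([c * x for x in v]))  # 56.0, i.e. c^2 * T(v)
print(c * T(v))               # 28.0 — the two disagree, so T is not linear
```

A single counterexample like this is enough to disprove linearity, whereas proving linearity requires the property to hold for all inputs.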

  • See: Linear Model Training, Homomorphism, Independent Component Analysis, Vector Space, Category Theory, Linear Independence, Linear Algebra Concept.


References

2015


  • (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/Linear_map#Definition_and_first_consequences Retrieved:2015-1-30.
    • Let V and W be vector spaces over the same field K. A function f: V → W is said to be a linear map if for any two vectors x and y in V and any scalar α in K, the following two conditions are satisfied:
      • [math]\displaystyle{ f(\mathbf{x}+\mathbf{y}) = f(\mathbf{x})+f(\mathbf{y}) \! }[/math] additivity.
      • [math]\displaystyle{ f(\alpha \mathbf{x}) = \alpha f(\mathbf{x}) \! }[/math] homogeneity of degree 1
    • This is equivalent to requiring the same for any linear combination of vectors, i.e. that for any vectors x1, ..., xm ∈ V and scalars a1, ..., am ∈ K, the following equality holds: [math]\displaystyle{ f(a_1 \mathbf{x}_1+\cdots+a_m \mathbf{x}_m) = a_1 f(\mathbf{x}_1)+\cdots+a_m f(\mathbf{x}_m). \! }[/math] Denoting the zero elements of the vector spaces V and W by 0V and 0W respectively, it follows that f(0V) = 0W because letting α = 0 in the equation for homogeneity of degree 1, [math]\displaystyle{ f(\mathbf{0}_{V}) = f(0 \cdot \mathbf{0}_{V}) = 0 \cdot f(\mathbf{0}_{V}) = \mathbf{0}_{W} . }[/math] Occasionally, V and W can be considered to be vector spaces over different fields. It is then necessary to specify which of these ground fields is being used in the definition of "linear". If V and W are considered as spaces over the field K as above, we talk about K-linear maps. For example, the conjugation of complex numbers is an R-linear map C → C, but it is not C-linear.

      A linear map from V to K (with K viewed as a vector space over itself) is called a linear functional.

      These statements generalize to any left-module RM over a ring R without modification.
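The closing example of the quote — conjugation being R-linear but not C-linear — can be checked directly with Python's built-in complex numbers (a sketch; the wrapper `conj` is illustrative):

```python
# Complex conjugation is additive and commutes with real scalars (R-linear),
# but homogeneity fails for the complex scalar i, so it is not C-linear.

def conj(z):
    return z.conjugate()

z, w = 1 + 2j, 3 - 1j
# Additivity holds:
assert conj(z + w) == conj(z) + conj(w)
# Homogeneity holds for real scalars:
assert conj(3.0 * z) == 3.0 * conj(z)
# ... but fails for complex scalars: conj(i*z) = -i*conj(z), not i*conj(z)
print(conj(1j * z) == 1j * conj(z))  # → False
```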

2012

  • Mark V. Sapir. http://www.math.vanderbilt.edu/~msapir/msapir/feb19.html
    • QUOTE: A function from [math]\displaystyle{ \R^n }[/math] to [math]\displaystyle{ \R^m }[/math] which takes every [math]\displaystyle{ n }[/math]-vector [math]\displaystyle{ v }[/math] to the [math]\displaystyle{ m }[/math]-vector [math]\displaystyle{ Av }[/math], where [math]\displaystyle{ A }[/math] is an [math]\displaystyle{ m }[/math] by [math]\displaystyle{ n }[/math] matrix, is called a linear transformation. The matrix [math]\displaystyle{ A }[/math] is called the standard matrix of this transformation. If [math]\displaystyle{ n=m }[/math] then the transformation is called a linear operator of the vector space [math]\displaystyle{ \R^n }[/math]. Notice that by the definition the linear transformation with a standard matrix [math]\displaystyle{ A }[/math] takes every vector [math]\displaystyle{ (x_1,...,x_n) }[/math] from [math]\displaystyle{ \mathbb{R}^n }[/math] to the vector [math]\displaystyle{ (A(1,1)x_1+...+A(1,n)x_n, A(2,1)x_1+...+A(2,n)x_n,...,A(m,1)x_1+...+A(m,n)x_n) }[/math]

      from [math]\displaystyle{ \mathbb{R}^m }[/math], where [math]\displaystyle{ A(i,j) }[/math] are the entries of [math]\displaystyle{ A }[/math]. Conversely, every transformation from [math]\displaystyle{ \mathbb{R}^n }[/math] to [math]\displaystyle{ \mathbb{R}^m }[/math] given by a formula of this kind is a linear transformation and the coefficients [math]\displaystyle{ A(i,j) }[/math] form the standard matrix of this transformation.

      Examples. 1. Consider the transformation of [math]\displaystyle{ \R^2 }[/math] which takes each vector [math]\displaystyle{ (a,b) }[/math] to the opposite vector [math]\displaystyle{ (-a,-b) }[/math]. This is a linear operator with standard matrix [math]\displaystyle{ \begin{pmatrix}-1 & 0 \\ 0 & -1 \end{pmatrix} }[/math]
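Sapir's entry-wise formula can be sketched in pure Python (the function name `apply_standard_matrix` is illustrative, not from the quote):

```python
# Apply a standard matrix A to a vector v entry-wise, following the formula
# (A(1,1)x_1+...+A(1,n)x_n, ..., A(m,1)x_1+...+A(m,n)x_n), here 0-indexed.

def apply_standard_matrix(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

# Example 1 from the quote: the opposite-vector operator on R^2.
A = [[-1, 0],
     [0, -1]]
print(apply_standard_matrix(A, [3, 4]))  # → [-3, -4]
```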
