# Random Vector

A Random Vector is a random element that is a vector composed of random variables.

**AKA:** Random Subvector, Multivariate Random Variable.

**Context:**
- It can range from being an Abstract Random Vector to being a Random Vector Structure.
- It can range from being a Bivariate Random Vector to being an n-Variate Random Vector.
- It can range from being a Discrete Random Vector to being a Continuous Random Vector.
- It can range from being a Finite Random Vector to being an Infinite Random Vector.

**Example(s):**
- a Bivariate Random Vector, such as [math](X, Y)[/math] representing the height and weight of a randomly selected person.

**Counter-Example(s):**
- a Random Tuple.
- a Random Matrix.
- a Random Tree.
- a Random Graph.

**See:** Random Variable Sequence, Joint Probability Function, Conditional Probability Function, Bayesian Network, Stochastic Process, Multivariate Data, Multivariate Normal Distribution.

## References

### 2015

- (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/Random_element#Random_vector Retrieved:2015-2-16.
- QUOTE: A **random vector** is a column vector [math]\mathbf{X}=(X_1,...,X_n)^T [/math] (or its transpose, which is a row vector) whose components are scalar-valued random variables on the same probability space [math](\Omega, \mathcal{F}, P)[/math], where [math]\Omega[/math] is the sample space, [math]\mathcal{F}[/math] is the sigma-algebra (the collection of all events), and [math]P[/math] is the probability measure (a function returning each event's probability). Random vectors are often used as the underlying implementation of various types of aggregate random variables, e.g. a random matrix, random tree, random sequence, random process, etc.
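The definition above can be sketched numerically: each draw of the underlying outcome [math]\omega[/math] yields one value for every component of the vector simultaneously, which is what it means for the components to live on the same probability space. The sketch below uses NumPy with an illustrative three-component construction; the function name and parameters are assumptions for the example, not part of any source.

```python
import numpy as np

# Illustrative sketch: a 3-component random vector X = (X1, X2, X3)^T.
# One draw of the shared randomness (the "outcome") determines all
# three components at once; X1 and X2 are built to be correlated.
rng = np.random.default_rng(seed=0)

def sample_random_vector(n_samples):
    """Return an (n_samples, 3) array; each row is one realization of X."""
    omega = rng.standard_normal((n_samples, 3))   # shared source of randomness
    x1 = omega[:, 0]
    x2 = 0.5 * omega[:, 0] + omega[:, 1]          # correlated with x1
    x3 = omega[:, 2]                              # independent of x1 and x2
    return np.column_stack([x1, x2, x3])

X = sample_random_vector(10_000)
print(X.shape)   # (10000, 3): 10,000 realizations of the random vector
```

Stacking the realizations as rows makes the empirical covariance of the vector directly computable with `np.cov(X, rowvar=False)`.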


- (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/multivariate_random_variable Retrieved:2015-2-16.
- QUOTE: In mathematics, probability, and statistics, a **multivariate random variable** or random vector is a list of mathematical variables each of whose value is unknown, either because the value has not yet occurred or because there is imperfect knowledge of its value. The individual variables in a random vector are grouped together because there may be correlations among them — often they represent different properties of an individual statistical unit (e.g. a particular person, event, etc.). Normally each element of a random vector is a real number. Random vectors are often used as the underlying implementation of various types of aggregate random variables, e.g. a random matrix, random tree, random sequence, random process, etc.

More formally, a multivariate random variable is a column vector [math]\mathbf{X}=(X_1,...,X_n)^T [/math] (or its transpose, which is a row vector) whose components are scalar-valued random variables on the same probability space [math](\Omega, \mathcal{F}, P)[/math], where [math]\Omega[/math] is the sample space, [math]\mathcal{F}[/math] is the sigma-algebra (the collection of all events), and [math]P[/math] is the probability measure (a function returning each event's probability).


### 2006

- (Dubnicka, 2006e) ⇒ Suzanne R. Dubnicka. (2006). “Random Vectors and Multivariate Distributions - Handout 5." Kansas State University, Introduction to Probability and Statistics I, STAT 510 - Fall 2006.
- TERMINOLOGY: If X and Y are random variables, then (X, Y) is called a **bivariate random vector**. In general, if X1, X2, ..., Xn denote n random variables, then X = (X1, X2, ..., Xn) is called an n-variate random vector. For much of this chapter, we will consider the n = 2 bivariate case. However, all ideas discussed herein extend naturally to higher dimensional settings.
- TERMINOLOGY: Let X and Y be discrete random variables. Then, (X, Y) is called a **discrete random vector**, and the joint probability mass function (pmf) of X and Y is given by [math]p_{X,Y}(x, y) = P(X = x, Y = y)[/math].
- TERMINOLOGY: Suppose that (X, Y) is a **discrete random vector** with joint pmf [math]p_{X,Y}(x, y)[/math]. We define the conditional probability mass function (pmf) of X, given Y = y, as [math]p_{X|Y}(x|y) = \frac{p_{X,Y}(x, y)}{p_Y(y)}[/math], whenever [math]p_Y(y) > 0[/math]. Similarly, the conditional probability mass function of Y, given X = x, is [math]p_{Y|X}(y|x) = \frac{p_{X,Y}(x, y)}{p_X(x)}[/math], whenever [math]p_X(x) > 0[/math].
- TERMINOLOGY: Suppose that (X, Y) is a **continuous random vector** with joint pdf [math]f_{X,Y}(x, y)[/math]. We define the conditional probability density function (pdf) of X, given Y = y, as [math]f_{X|Y}(x|y) = \frac{f_{X,Y}(x, y)}{f_Y(y)}[/math].
- TERMINOLOGY: Suppose that (X, Y) is a random vector (discrete or continuous) with joint cdf [math]F_{X,Y}(x, y)[/math], and denote the marginal cdfs of X and Y by [math]F_X(x)[/math] and [math]F_Y(y)[/math], respectively. We say that the random variables X and Y are independent if and only if [math]F_{X,Y}(x, y) = F_X(x)F_Y(y)[/math] for all values of x and y. Otherwise, we say that X and Y are dependent.
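The discrete definitions above (joint pmf, marginals, conditional pmf, and the independence factorization) can be checked directly on a small table of probabilities. The sketch below uses an illustrative 2×2 joint pmf chosen for the example; only the formulas come from the definitions above.

```python
import numpy as np

# Illustrative joint pmf p_{X,Y}(x, y) for a discrete bivariate random
# vector (X, Y): rows index values of X, columns index values of Y.
p_xy = np.array([[0.10, 0.20],
                 [0.30, 0.40]])
assert np.isclose(p_xy.sum(), 1.0)   # a pmf must sum to 1

# Marginal pmfs, obtained by summing out the other variable.
p_x = p_xy.sum(axis=1)   # p_X(x)
p_y = p_xy.sum(axis=0)   # p_Y(y)

# Conditional pmf p_{X|Y}(x|y) = p_{X,Y}(x, y) / p_Y(y), where p_Y(y) > 0.
# Broadcasting divides each column by the corresponding p_Y(y).
p_x_given_y = p_xy / p_y

# Each conditional distribution (each column) sums to 1.
print(p_x_given_y.sum(axis=0))   # [1. 1.]

# Independence: X and Y are independent iff the joint pmf factorizes
# as p_{X,Y}(x, y) = p_X(x) * p_Y(y) for all x, y.
independent = np.allclose(p_xy, np.outer(p_x, p_y))
print(independent)   # False for this joint pmf
```

The same factorization test, applied to densities or cdfs instead of pmf tables, gives the continuous-case independence check quoted above.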


### 2005

- (Rue & Held, 2005) ⇒ Havard Rue, and Leonhard Held. (2005). “Gaussian Markov Random Fields: Theory and Applications." CRC Press. ISBN:1584884320
- QUOTE: Let [math]\mathbf{x} = (x_1,x_2,x_3)^T[/math] be a random vector; then [math]x_1[/math] and [math]x_2[/math] are conditionally independent given [math]x_3[/math] if, for a known value of [math]x_3[/math], discovering [math]x_2[/math] tells you nothing new about the distribution of [math]x_1[/math].
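For a Gaussian random vector, this notion of conditional independence has a concrete algebraic signature: [math]x_1[/math] and [math]x_2[/math] are conditionally independent given [math]x_3[/math] exactly when the (1, 2) entry of the precision matrix [math]Q = \Sigma^{-1}[/math] is zero, which is the property Gaussian Markov random fields build on. The sketch below uses an illustrative precision matrix chosen to have this structure.

```python
import numpy as np

# Illustrative precision matrix Q for a Gaussian random vector
# x = (x1, x2, x3)^T. Q[0, 1] == 0 encodes that x1 and x2 are
# conditionally independent given x3 (a standard Gaussian fact).
Q = np.array([[ 2.0,  0.0, -1.0],
              [ 0.0,  2.0, -1.0],
              [-1.0, -1.0,  2.0]])
Sigma = np.linalg.inv(Q)   # covariance matrix of the vector

# x1 and x2 are still marginally correlated (both depend on x3)...
print(round(Sigma[0, 1], 3))   # 0.25

# ...yet the zero in the precision matrix makes them conditionally
# independent given x3.
print(bool(np.isclose(Q[0, 1], 0.0)))   # True
```

Marginal dependence with conditional independence is exactly the situation the quote describes: once [math]x_3[/math] is known, observing [math]x_2[/math] adds no information about [math]x_1[/math].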

### 2000

- (Hyvärinen & Oja, 2000) ⇒ Aapo Hyvärinen, and Erkki Oja. (2000). “Independent Component Analysis: Algorithms and Applications.” In: Neural Networks, 13(4-5). doi:10.1016/S0893-6080(00)00026-5.
- QUOTE: A fundamental problem in neural network research, as well as in many other disciplines, is finding a suitable representation of multivariate data, i.e. random vectors.