# Matrix Multiplication Operation

A Matrix Multiplication Operation is a binary matrix operation that is a multiplication operation.

**Context:**
- It can range from being a Matrix-Matrix Multiplication Operation to being a Matrix-Vector Multiplication Operation to being a Matrix-Scalar Multiplication Operation.
- It can be performed by a Matrix Multiplication System (that implements a matrix multiplication algorithm).

**Counter-Example(s):**
- Hadamard Product (Matrices).

**See:** Dot Product, Multiplication.

## References

### 2015

- (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/Multiplication#Notation_and_terminology Retrieved:2015-1-17.
- In matrix multiplication, there is actually a distinction between the cross and the dot symbols. The cross symbol generally denotes a vector multiplication, while the dot denotes a scalar multiplication. A similar convention distinguishes between the cross product and the dot product of two vectors. …
… However, matrix multiplication is not commutative ...


- (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/Matrix_multiplication#Matrix_product_.28two_matrices.29 Retrieved:2015-1-17.
- In mathematics, **matrix multiplication** is a binary operation that takes a pair of matrices, and produces another matrix. Numbers such as the real or complex numbers can be multiplied according to elementary arithmetic. On the other hand, matrices are *arrays of numbers*, so there is no unique way to define "the" multiplication of matrices. As such, in general the term "matrix multiplication" refers to a number of different ways to multiply matrices. The key features of any matrix multiplication include: the number of rows and columns the original matrices have (called the "size", "order" or "dimension"), and the specification of how the entries of the matrices generate the new matrix.

Like vectors, matrices of any size can be multiplied by scalars, which amounts to multiplying every entry of the matrix by the same number. Similar to the entrywise definition of adding or subtracting matrices, multiplication of two matrices of the same size can be defined by multiplying the corresponding entries; this is known as the Hadamard product. Another definition is the Kronecker product of two matrices, which yields a block matrix.
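The two entrywise operations mentioned above (scalar multiplication and the Hadamard product) can be sketched in a few lines of pure Python; the helper names here are illustrative, not from the quoted article:

```python
# Illustrative sketch: scalar multiplication and the Hadamard (entrywise)
# product for matrices represented as lists of rows.

def scalar_mul(c, A):
    """Multiply every entry of matrix A by the scalar c."""
    return [[c * a for a in row] for row in A]

def hadamard(A, B):
    """Entrywise product of two matrices of the same size."""
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "sizes must match"
    return [[a * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

print(scalar_mul(2, A))   # [[2, 4], [6, 8]]
print(hadamard(A, B))     # [[5, 12], [21, 32]]
```

Note that, unlike the standard matrix product defined below, the Hadamard product requires both matrices to have exactly the same shape.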

One can form many other definitions. However, the most useful definition can be motivated by linear equations and linear transformations on vectors, which have numerous applications in applied mathematics, physics, and engineering. This definition is often called *the* matrix product.^{[1]}^{[2]} In words, if **A** is an *n* × *m* matrix and **B** is an *m* × *p* matrix, their matrix product **AB** is an *n* × *p* matrix, in which the *m* entries across the rows of **A** are multiplied with the *m* entries down the columns of **B** (the precise definition is below).

This definition is not commutative, although it still retains the associative property and is distributive over entrywise addition of matrices. The identity element of the matrix product is the identity matrix (analogous to multiplying numbers by 1), and a square matrix may have an inverse matrix (analogous to the multiplicative inverse of a number). A consequence of the matrix product is determinant multiplicativity. The matrix product is an important operation in linear transformations, matrix groups, and the theory of group representations and irreps.
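The definition above (an *n* × *m* matrix times an *m* × *p* matrix yields an *n* × *p* matrix) and the failure of commutativity can be checked with a minimal pure-Python sketch; the function name is illustrative:

```python
# Illustrative sketch of the standard matrix product: the (i, j) entry of
# AB is the dot product of row i of A with column j of B.

def matmul(A, B):
    n, m = len(A), len(A[0])
    assert len(B) == m, "inner dimensions must agree"
    p = len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]  # permutation matrix: swaps columns / rows

print(matmul(A, B))  # [[2, 1], [4, 3]]
print(matmul(B, A))  # [[3, 4], [1, 2]]  -- AB != BA in general
```

Multiplying by the 2 × 2 identity matrix `[[1, 0], [0, 1]]` on either side returns the original matrix, matching the identity-element claim in the quoted passage.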

Computing matrix products is both a central operation in many numerical algorithms and potentially time consuming, making it one of the most well-studied problems in numerical computing. Various algorithms have been devised for computing **C** = **AB**, especially for large matrices.

This article will use the following notational conventions: matrices are represented by capital letters in bold, e.g. **A**; vectors in lowercase bold, e.g. **a**; and entries of vectors and matrices are italic (since they are scalars), e.g. *A* and *a*. Index notation is often the clearest way to express definitions, and is used as standard in the literature. The *i, j* entry of matrix **A** is indicated by (**A**)_{ij} or *A*_{ij}, whereas a numerical label (not matrix entries) on a collection of matrices is subscripted only, e.g. **A**_{1}, **A**_{2}, etc.


- ↑ R.G. Lerner, G.L. Trigg (1991). *Encyclopaedia of Physics* (2nd ed.). VHC Publishers. ISBN 3-527-26954-1.
- ↑ C.B. Parker (1994). *McGraw Hill Encyclopaedia of Physics* (2nd ed.). ISBN 0-07-051400-3.

### 1999

- (Cohen & Lewis, 1999) ⇒ Edith Cohen, and David D. Lewis. (1999). “Approximating Matrix Multiplication for Pattern Recognition Tasks.” In: Journal of Algorithms, 30(2).
- ABSTRACT: Many pattern recognition tasks, including estimation, classification, and the finding of similar objects, make use of linear models. The fundamental operation in such tasks is the computation of the dot product between a query vector and a large database of instance vectors. Often we are interested primarily in those instance vectors which have high dot products with the query. We present a random sampling based algorithm that enables us to identify, for any given query vector, those instance vectors which have large dot products, while avoiding explicit computation of all dot products. We provide experimental results that demonstrate considerable speedups for text retrieval tasks. Our approximate matrix multiplication algorithm is applicable to products of k ≥ 2 matrices and is of independent interest. Our theoretical and experimental analysis demonstrates that in many scenarios, our method dominates standard matrix multiplication.
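The sampling idea in the abstract can be illustrated with a simple importance-sampling estimator; this is my own hedged sketch, not Cohen & Lewis's actual algorithm: coordinates of the query are sampled with probability proportional to their magnitude, so the estimate of each dot product spends its samples where the query carries most of its weight while remaining unbiased.

```python
import random

# Illustrative sketch (not the paper's exact method): estimate q . v by
# sampling coordinate i with probability p_i proportional to |q_i| and
# averaging the importance-weighted terms q_i * v_i / p_i, which is an
# unbiased estimator of the exact dot product.

def approx_dot(q, v, num_samples=1000, rng=random):
    total = sum(abs(x) for x in q)
    if total == 0:
        return 0.0
    probs = [abs(x) / total for x in q]
    indices = range(len(q))
    est = 0.0
    for _ in range(num_samples):
        i = rng.choices(indices, weights=probs, k=1)[0]
        est += q[i] * v[i] / probs[i]  # importance-weighted term
    return est / num_samples

q = [3.0, 0.0, 1.0]
v = [2.0, 5.0, 4.0]
# exact dot product is 10.0; the estimate concentrates around it
print(approx_dot(q, v, num_samples=2000))
```

Coordinates where the query is zero are never sampled, which mirrors the paper's goal of avoiding explicit computation over the full vectors when only the large dot products matter.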