In linear algebra, a column vector or column matrix is an m × 1 matrix, that is, a matrix consisting of a single column of m elements,

\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix} \,.
Similarly, a row vector or row matrix is a 1 × m matrix, that is, a matrix consisting of a single row of m elements^{[1]}

\mathbf x = \begin{bmatrix} x_1 & x_2 & \dots & x_m \end{bmatrix} \,.
Throughout, boldface is used for the row and column vectors. The transpose (indicated by T) of a row vector is a column vector

\begin{bmatrix} x_1 \; x_2 \; \dots \; x_m \end{bmatrix}^{\rm T} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix} \,,
and the transpose of a column vector is a row vector

\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix}^{\rm T} = \begin{bmatrix} x_1 \; x_2 \; \dots \; x_m \end{bmatrix} \,.
The set of all row vectors forms a vector space called row space; similarly, the set of all column vectors forms a vector space called column space. The dimension of the row or column space equals the number of entries in the row or column vector.
The column space can be viewed as the dual space to the row space, since any linear functional on the space of column vectors can be represented uniquely as an inner product with a specific row vector.
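These shape conventions can be illustrated numerically. A brief sketch using NumPy (not part of the original text), which represents a column vector as a 2-D array of shape (m, 1) and a row vector as shape (1, m):

```python
import numpy as np

# A column vector: an m x 1 matrix (here m = 3).
col = np.array([[1], [2], [3]])

# A row vector: a 1 x m matrix.
row = np.array([[1, 2, 3]])

# Transposing a row vector yields a column vector, and vice versa.
assert np.array_equal(row.T, col)
assert np.array_equal(col.T, row)

print(col.shape, row.shape)  # (3, 1) (1, 3)
```

Note that a plain 1-D NumPy array such as `np.array([1, 2, 3])` has shape `(3,)` and is neither a row nor a column matrix; the nested brackets above are what make the distinction explicit.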
Notation
To simplify writing column vectors inline with other text, sometimes they are written as row vectors with the transpose operation applied to them.

\mathbf{x} = \begin{bmatrix} x_1 \; x_2 \; \dots \; x_m \end{bmatrix}^{\rm T}
or

\mathbf{x} = \begin{bmatrix} x_1, x_2, \dots, x_m \end{bmatrix}^{\rm T}
Some authors also use the convention of writing both column vectors and row vectors as rows, but separating row vector elements with commas and column vector elements with semicolons (see alternative notation 2 in the table below).

Standard matrix notation (array spaces, no commas, transpose signs):

Row vector: \begin{bmatrix} x_1 \; x_2 \; \dots \; x_m \end{bmatrix}

Column vector: \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix} \text{ or } \begin{bmatrix} x_1 \; x_2 \; \dots \; x_m \end{bmatrix}^{\rm T}

Alternative notation 1 (commas, transpose signs):

Row vector: \begin{bmatrix} x_1, x_2, \dots, x_m \end{bmatrix}

Column vector: \begin{bmatrix} x_1, x_2, \dots, x_m \end{bmatrix}^{\rm T}

Alternative notation 2 (commas and semicolons, no transpose signs):

Row vector: \begin{bmatrix} x_1, x_2, \dots, x_m \end{bmatrix}

Column vector: \begin{bmatrix} x_1; x_2; \dots; x_m \end{bmatrix}

Operations
Matrix multiplication involves multiplying each row vector of the first matrix by each column vector of the second matrix.
The dot product of two vectors a and b is equivalent to the matrix product of the row vector representation of a and the column vector representation of b,

\mathbf{a} \cdot \mathbf{b} = \mathbf{a}^\mathrm{T} \mathbf{b} = \begin{bmatrix} a_1 & a_2 & a_3 \end{bmatrix}\begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix} = a_1 b_1 + a_2 b_2 + a_3 b_3 \,,
which is also equivalent to the matrix product of the row vector representation of b and the column vector representation of a,

\mathbf{b} \cdot \mathbf{a} = \mathbf{b}^\mathrm{T} \mathbf{a} = \begin{bmatrix} b_1 & b_2 & b_3 \end{bmatrix}\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix}\,.
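The equivalence of the dot product and the row-times-column matrix product can be checked directly. A short NumPy sketch (illustrative, not from the original article):

```python
import numpy as np

a = np.array([[1], [2], [3]])  # column vector representation of a
b = np.array([[4], [5], [6]])  # column vector representation of b

# a . b as the matrix product a^T b: a 1x1 matrix whose single entry
# is the dot product a_1*b_1 + a_2*b_2 + a_3*b_3.
dot_ab = (a.T @ b)[0, 0]
dot_ba = (b.T @ a)[0, 0]

assert dot_ab == dot_ba == 1*4 + 2*5 + 3*6  # both equal 32
```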
The matrix product of a column vector and a row vector gives the dyadic product of two vectors a and b, an example of the more general tensor product. The matrix product of the column vector representation of a and the row vector representation of b gives the components of their dyadic product,

\mathbf{a} \otimes \mathbf{b} = \mathbf{a} \mathbf{b}^\mathrm{T} = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix}\begin{bmatrix} b_1 & b_2 & b_3 \end{bmatrix} = \begin{bmatrix} a_1b_1 & a_1b_2 & a_1b_3 \\ a_2b_1 & a_2b_2 & a_2b_3 \\ a_3b_1 & a_3b_2 & a_3b_3 \\ \end{bmatrix} \,,
which is not equivalent to the matrix product of the column vector representation of b and the row vector representation of a,

\mathbf{b} \otimes \mathbf{a} = \mathbf{b} \mathbf{a}^\mathrm{T} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}\begin{bmatrix} a_1 & a_2 & a_3 \end{bmatrix} = \begin{bmatrix} b_1a_1 & b_1a_2 & b_1a_3 \\ b_2a_1 & b_2a_2 & b_2a_3 \\ b_3a_1 & b_3a_2 & b_3a_3 \\ \end{bmatrix} \,.
In this case the two matrices are different: each is the transpose of the other.
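The column-times-row product and the transpose relationship between the two dyadic products can be verified numerically. A NumPy sketch (illustrative only):

```python
import numpy as np

a = np.array([[1], [2], [3]])  # column vector a
b = np.array([[4], [5], [6]])  # column vector b

# Dyadic (outer) product: a column vector times a row vector gives a 3x3 matrix.
ab = a @ b.T
ba = b @ a.T

# The two products differ, but each is the transpose of the other.
assert not np.array_equal(ab, ba)
assert np.array_equal(ab.T, ba)

# NumPy's outer-product helper computes the same components.
assert np.array_equal(ab, np.outer(a, b))
```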
Preferred input vectors for matrix transformations
Frequently a row vector presents itself for an operation within n-space expressed by an n × n matrix M,

v M = p \,.
Then p is also a row vector and may present to another n × n matrix Q,

p Q = t \,.
Conveniently, one can write t = p Q = v MQ, telling us that the matrix product transformation MQ can take v directly to t. Continuing with row vectors, matrix transformations further reconfiguring n-space can be applied to the right of previous outputs.
In contrast, when a column vector is transformed to become another column under an n × n matrix action, the operation occurs to the left,

p = M v \,,\quad t = Q p ,
leading to the algebraic expression QM v for the composed output from the input v. The matrix transformations mount up to the left in this use of a column vector for input to matrix transformation. The natural bias to read left-to-right, as subsequent transformations are applied in linear algebra, stands against column vector inputs.
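The two conventions, matrices acting on the right of a row vector versus on the left of a column vector, can be compared side by side. A NumPy sketch (illustrative; the matrices here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.integers(-5, 5, (3, 3))
Q = rng.integers(-5, 5, (3, 3))

v_row = np.array([[1, 2, 3]])  # row-vector input: matrices act on the right
v_col = v_row.T                # column-vector input: matrices act on the left

# Row convention: t = v M Q, with transformations reading left to right.
t_row = v_row @ M @ Q

# Column convention: t = Q M v, with transformations mounting up on the left.
t_col = Q @ M @ v_col

# Transposing bridges the two conventions: (v M Q)^T = Q^T M^T v^T.
assert np.array_equal(t_row.T, Q.T @ M.T @ v_col)
```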
Nevertheless, using the transpose operation, these differences between inputs of a row or column nature are resolved by an anti-homomorphism between the groups arising on the two sides. The technical construction uses the dual space associated with a vector space to develop the transpose of a linear map.
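The anti-homomorphism in question is the product-reversing identity (AB)^T = B^T A^T, which can be checked on any pair of square matrices. A brief NumPy sketch (illustrative only):

```python
import numpy as np

A = np.arange(9).reshape(3, 3)
B = np.arange(9, 18).reshape(3, 3)

# Transposition reverses products: (AB)^T = B^T A^T.
# This is the anti-homomorphism that reconciles the row-input and
# column-input conventions.
assert np.array_equal((A @ B).T, B.T @ A.T)
```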
For an instance where this row vector input convention has been used to good effect, see Raiz Usmani,^{[2]} where on page 106 the convention allows the statement "The product mapping ST of U into W [is given] by:

\alpha (ST) = (\alpha S) T = \beta T = \gamma."
(The Greek letters represent row vectors).
Ludwik Silberstein used row vectors for spacetime events; he applied Lorentz transformation matrices on the right in his Theory of Relativity in 1914 (see page 143). In 1963, when McGraw-Hill published Differential Geometry by Heinrich Guggenheimer of the University of Minnesota, he used the row vector convention in chapter 5, "Introduction to transformation groups" (eqs. 7a, 9b, and 12 to 15). When H. S. M. Coxeter reviewed^{[3]} Linear Geometry by Rafael Artzy, he wrote, "[Artzy] is to be congratulated on his choice of the 'left-to-right' convention, which enables him to regard a point as a row matrix instead of the clumsy column that many authors prefer."
Notes

^ Meyer (2000), p. 8

^ Raiz A. Usmani (1987) Applied Linear Algebra Marcel Dekker ISBN 0824776224. See Chapter 4: "Linear Transformations"

^ Coxeter, review of Linear Geometry, from Mathematical Reviews
References

Meyer, Carl D. (2000). Matrix Analysis and Applied Linear Algebra. SIAM.