Complete Guide to NumPy Linear Algebra: Functions, Eigenvalues & Matrix Operations (2026)

  Linear Algebra with NumPy

Basic Matrix & Vector Products

   a. How to Use the NumPy dot() Function
   b. NumPy vdot() Function Explained with Examples
   c. Understanding NumPy inner() Function
   d. NumPy outer() Function: Step-by-Step Guide
   e. How to Work with NumPy matmul() Function

Understanding Eigenvalues and Eigenvectors in NumPy

   f. Master NumPy einsum(): Flexible Array and Matrix Computations
   g. NumPy linalg.eig(): Eigenvalues and Eigenvectors
   h. Using NumPy linalg.eigvals() for Eigenvalues
   i. NumPy linalg.eigh(): Hermitian Matrices Explained
   j. Eigenvalues of Symmetric Matrices (eigvalsh): Fast and Stable Computation

Matrix Properties & Diagnostics

   k. How to Compute Determinant with NumPy linalg.det()
   l. Matrix Trace (linalg.trace): Summing Diagonal Elements in NumPy
   m. Matrix Norm (numpy.linalg.norm): Understanding Vector and Matrix Magnitudes
   n. Matrix Rank (numpy.linalg.matrix_rank): Determining Linear Independence
   o. Condition Number (numpy.linalg.cond): Measuring Matrix Stability and Sensitivity

Matrix Utilities

   p. Matrix Diagonal (linalg.diagonal): Extracting Diagonal Elements Efficiently
   q. NumPy Matrix Transpose: Swap Rows and Columns Easily
   r. Solve Linear Systems with NumPy linalg.solve()
   s. Inverting Matrices Using NumPy linalg.inv()
   t. Pseudo-Inverse (linalg.pinv): Computing the Moore-Penrose Inverse in NumPy

Matrix Factorizations (Decompositions)


   u. Singular Value Decomposition (SVD): Breaking Matrices into Core Components
   v. QR Decomposition: Orthogonal-Triangular Factorization Explained
   w. Cholesky Decomposition: Efficient Factorization for Positive Definite Matrices

Practical / Applied Linear Algebra


   x. Least Squares Solutions - numpy.linalg.lstsq
   y. Covariance Matrices - numpy.cov

Linear Algebra with NumPy (numpy.linalg)

Linear algebra with NumPy is an essential skill for anyone working in data science, machine learning, or scientific computing in 2026. This updated 2026 article explores the core concepts of linear algebra—such as vectors, matrices, and linear equations—combined with the power of Python's NumPy library. By following this guide, beginners and professionals alike can efficiently perform complex mathematical operations on large datasets, solve systems of equations, and build a strong foundation for advanced data analysis and programming tasks using the latest tools and practices.

This tutorial uses NumPy version 2.0.2 for all examples and implementations.

Basic Matrix & Vector Products

How to Use the NumPy dot() Function

The dot() function returns the dot product of two arrays; for 2-D arrays, this is the matrix product:

import numpy as np

a = np.array([[ 0, 1],
              [-2, -3]])
b = np.array([[ 2, 4],
              [5, 2]])

x = np.dot(a,b)
print(x)

[[  5   2]
 [-19 -14]]

To find the result above:
[[ 0*2 + 1*5      0*4 + 1*2]
 [-2*2 + -3*5    -2*4 + -3*2]]

NumPy vdot() Function Explained with Examples

The vdot() returns the dot product of two vectors:

import numpy as np

a = np.array([ 0, 1])
b = np.array([2,3])
x = np.vdot(a, b)

print(x)

The result is 0*2 + 1*3 = 3.
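One detail worth knowing: for complex vectors, vdot() conjugates its first argument, while dot() does not. A small sketch (not part of the original example) illustrating the difference:

```python
import numpy as np

a = np.array([1 + 2j, 3 + 4j])
b = np.array([5 + 6j, 7 + 8j])

# vdot conjugates the first argument: conj(a) . b
print(np.vdot(a, b))   # (70-8j)

# dot multiplies the elements as-is: a . b
print(np.dot(a, b))    # (-18+68j)
```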

Understanding NumPy inner() Function

The inner() calculates the inner product of two arrays.

import numpy as np

a = np.array([ 0, 1, 2, 3])
b = np.array([4, 5, 6, 7])
x = np.inner(a, b)

print(x)
  

The inner product is a[0]*b[0] + a[1]*b[1] + a[2]*b[2] + a[3]*b[3] = 0*4 + 1*5 + 2*6 + 3*7 = 38.

NumPy outer() Function: Step-by-Step Guide

The outer() function computes the outer product of two vectors:

import numpy as np

a = np.array([ 0, 1, 2])
b = np.array([4, 5, 6])
x = np.outer(a, b)

print(x)

[[ 0  0  0]
 [ 4  5  6]
 [ 8 10 12]]

You need to make the following multiplications for the result above:
[[0*4  0*5  0*6]
 [1*4  1*5  1*6]
 [2*4  2*5  2*6]]

How to Work with NumPy matmul() Function

The matmul() calculates the matrix product of two arrays:

import numpy as np

a = np.array([[ 0, 1, 3],
              [-2, -3, 2],
              [1, 2, 1]])
b = np.array([[ 2, 4],
              [5, 2],
              [1,2]])

x = np.matmul(a,b)
print(x)

[[  8   8]
 [-17 -10]
 [ 13  10]]
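Unlike dot(), matmul() also broadcasts over stacks of matrices, treating the last two dimensions as the matrices to multiply. A minimal sketch (the arrays here are illustrative, not from the example above):

```python
import numpy as np

# Two stacks of 2x2 matrices
a = np.arange(8).reshape(2, 2, 2)
b = np.arange(8).reshape(2, 2, 2)

# matmul multiplies the stacks pairwise: c[i] = a[i] @ b[i]
c = np.matmul(a, b)

print(c.shape)                          # (2, 2, 2)
print(np.allclose(c[0], a[0] @ b[0]))   # True
```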

Understanding Eigenvalues and Eigenvectors in NumPy

Master NumPy einsum(): Flexible Array and Matrix Computations

The np.einsum() evaluates the Einstein summation convention on the operands:

import numpy as np

x = np.array([1,2,3])
y = np.array([2,4,6])
z = np.einsum("a, b", x, y)

print(z)

This is the outer product: every element of x is multiplied by every element of y:
[[1*2  1*4  1*6]
 [2*2  2*4  2*6]
 [3*2  3*4  3*6]]

[[ 2  4  6]
 [ 4  8 12]
 [ 6 12 18]]

When the operands have more than one dimension, np.einsum can reproduce matmul by using the subscripts 'ij,jk->ik':

import numpy as np

x = np.array([[1,2],
              [3,4]])
y = np.array([[2,4],
              [1,3]])

z = np.einsum('ij,jk->ik', x, y)
k = np.matmul(x, y)

print(z)
print(k)

[[ 4 10]
 [10 24]]
[[ 4 10]
 [10 24]]
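einsum() is not limited to products; the same subscript notation can also express traces, transposes, and sums. A few quick illustrations using the matrix x above:

```python
import numpy as np

x = np.array([[1, 2],
              [3, 4]])

print(np.einsum('ii', x))      # trace: 1 + 4 = 5
print(np.einsum('ij->ji', x))  # transpose of x
print(np.einsum('ij->', x))    # sum of all elements: 10
print(np.einsum('ij->i', x))   # row sums: [3 7]
```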

NumPy linalg.eig(): Eigenvalues and Eigenvectors

The np.linalg.eig computes the eigenvalues and right eigenvectors of a square matrix.

import numpy as np

a = np.array([[ 0, 1],
              [ 2, 3]])

eigenvalues, eigenvectors = np.linalg.eig(a)

print("eigenvalues are: ", eigenvalues)
print("eigenvectors are: ", eigenvectors)

eigenvalues are:  [-0.56155281  3.56155281]
eigenvectors are:  [[-0.87192821 -0.27032301]
 [ 0.48963374 -0.96276969]]
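A useful sanity check (not part of the original example): each column i of the eigenvector matrix should satisfy A v = λ v for the matching eigenvalue:

```python
import numpy as np

a = np.array([[0, 1],
              [2, 3]])
eigenvalues, eigenvectors = np.linalg.eig(a)

# Column i of eigenvectors pairs with eigenvalues[i]
for i in range(len(eigenvalues)):
    v = eigenvectors[:, i]
    print(np.allclose(a @ v, eigenvalues[i] * v))  # True
```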

Using NumPy linalg.eigvals() for Eigenvalues

np.linalg.eigvals computes the eigenvalues of a general matrix:

import numpy as np

a = np.array([[ 0, 1],
              [ 2, 3]])
eigenvalues= np.linalg.eigvals(a)

print("eigenvalues are: ", eigenvalues)

eigenvalues are: [-0.56155281 3.56155281]

NumPy linalg.eigh(): Hermitian Matrices Explained

The np.linalg.eigh returns the eigenvalues and eigenvectors of a complex Hermitian (conjugate symmetric) or a real symmetric matrix.

import numpy as np

f = np.array([[ 0, 1+2j],
              [1-2j, 3]])
x, y = np.linalg.eigh(f)

print("eigenvalues are ", x)
print("eigenvectors are ", y) 

eigenvalues are [-1.1925824 4.1925824]
eigenvectors are [[-0.88235084-0.j -0.47059217+0.j ]
[ 0.21045522-0.42091043j -0.39459929+0.78919858j]]

Eigenvalues of Symmetric Matrices (eigvalsh): Fast and Stable Computation

The np.linalg.eigvalsh computes the eigenvalues of a complex Hermitian or real symmetric matrix. It's similar to np.linalg.eigh but the eigenvectors are not computed.

import numpy as np

f = np.array([[ 0, 1+2j],
              [1-2j, 3]])
eigenvalues= np.linalg.eigvalsh(f)

print("eigenvalues are: ", eigenvalues)

eigenvalues are: [-1.1925824 4.1925824]

Matrix Properties & Diagnostics

How to Compute Determinant with NumPy linalg.det()

The np.linalg.det() computes the determinant of an array.

import numpy as np

a = np.array([[6, 5],
              [4, 3]])
det = np.linalg.det(a)

print(det) 

The determinant is calculated as (6*3) - (5*4) = -2. Due to floating-point rounding, NumPy prints -1.999999999999999 rather than exactly -2.

Matrix Trace (linalg.trace): Summing Diagonal Elements in NumPy

The np.linalg.trace method returns the sum along the diagonals of the array.

import numpy as np

a = np.array([[1, 5, 3], [4, 2, 1], [3, 6, 2]])
trace = np.linalg.trace(a)

print(trace)

The result is 1 + 2 + 2 = 5.

The np.linalg.trace() method was only added in NumPy 2.0; in older versions you get AttributeError: module 'numpy.linalg' has no attribute 'trace'. You can alternatively use the np.trace() method for the same result:

import numpy as np

a = np.array([[1, 2, 3], [3, 5, 6], [4, 3,2]])
trace = np.trace(a)
print(trace)

The result is 8.

Matrix Norm (numpy.linalg.norm): Understanding Vector and Matrix Magnitudes

numpy.linalg.norm is a function in NumPy that measures the magnitude (length) of vectors or matrices. Think of it as a generalization of the “distance formula” from geometry.

import numpy as np

x = np.array([3, 4]) 

print(np.linalg.norm(x))

The result is 5.0.
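By default, norm() returns the Euclidean (L2) norm for vectors and the Frobenius norm for matrices, but the ord parameter selects other norms. A brief sketch:

```python
import numpy as np

x = np.array([3, 4])
A = np.array([[1, 2],
              [3, 4]])

print(np.linalg.norm(x, ord=1))       # L1 norm: |3| + |4| = 7.0
print(np.linalg.norm(x, ord=np.inf))  # max norm: 4.0
print(np.linalg.norm(A))              # Frobenius norm: sqrt(1+4+9+16) ~ 5.477
```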

Matrix Rank (numpy.linalg.matrix_rank): Determining Linear Independence

The rank of a matrix is a fundamental concept in linear algebra that measures the number of linearly independent rows or columns in the matrix. In practical terms, it tells us the “effective dimensionality” of the data the matrix represents. A full-rank matrix has all rows or columns independent and can fully span its vector space, whereas a rank-deficient matrix contains redundant rows or columns that are linear combinations of others, reducing the space it spans. In NumPy, the function numpy.linalg.matrix_rank computes this value efficiently using singular value decomposition (SVD), making it numerically stable. Understanding a matrix's rank is essential for solving linear systems, detecting linear dependence, performing dimensionality reduction, and ensuring numerical stability in computations. For example:

import numpy as np

B = np.array([[1, 2],
              [2, 4]])
C = np.linalg.matrix_rank(B)

print(C)

The result is 1.
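To see the contrast with a full-rank matrix, compare the rank-deficient example above with a matrix whose rows are linearly independent:

```python
import numpy as np

full = np.array([[1, 2],
                 [3, 4]])       # rows are linearly independent
deficient = np.array([[1, 2],
                      [2, 4]])  # second row = 2 * first row

print(np.linalg.matrix_rank(full))       # 2
print(np.linalg.matrix_rank(deficient))  # 1
```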

Condition Number (numpy.linalg.cond): Measuring Matrix Stability and Sensitivity

The condition number of a matrix is a measure of how sensitive the solution of a linear system is to small changes in the input. When solving a system Ax=b, tiny errors in b or small inaccuracies in the elements of A can get amplified in the solution x if the matrix is ill-conditioned. A well-conditioned matrix has a small condition number (close to 1), meaning small changes in input produce only small changes in the solution, while an ill-conditioned matrix has a very large condition number, where even tiny changes can cause huge differences in the result. In NumPy, you can compute this using numpy.linalg.cond:

import numpy as np

A_well = np.array([[2, 0],
                   [0, 3]])
A_ill = np.array([[1, 1],
                  [1, 1.000001]])

print("Well-conditioned matrix:", np.linalg.cond(A_well))
print("Ill-conditioned matrix:", np.linalg.cond(A_ill))

Well-conditioned matrix: 1.4999999999999996
Ill-conditioned matrix: 4000002.0003309543

This function helps identify whether a matrix is stable to work with, which is especially important in numerical computations, optimization, and linear regression. Understanding the condition number allows you to detect potential instability and choose more reliable methods when solving systems of equations.

Matrix Utilities

Matrix Diagonal (linalg.diagonal): Extracting Diagonal Elements Efficiently

The np.linalg.diagonal returns specified diagonals of a matrix:

import numpy as np

a = np.array([[1, 5], [4, 2]])
diag = np.linalg.diagonal(a)

print(diag)

[1 2]

The np.linalg.diagonal() method was only added in NumPy 2.0; in older versions you get an AttributeError: module 'numpy.linalg' has no attribute 'diagonal'. You can alternatively use the np.diagonal() method for the same result:

import numpy as np

a = np.array([[1, 5], [4, 2]])
diag = np.diagonal(a)

print(diag)

[1 2]

NumPy Matrix Transpose: Swap Rows and Columns Easily

The np.linalg.matrix_transpose() function returns the transpose of a matrix or a stack of matrices passed as input.

import numpy as np

a = np.array([[1, 5], [4, 2]])
transpose = np.linalg.matrix_transpose(a)

print(transpose)

[[1 4]
[5 2]]

The np.linalg.matrix_transpose() method was only added in NumPy 2.0; in older versions you get an AttributeError: module 'numpy.linalg' has no attribute 'matrix_transpose'. You can alternatively use the np.transpose() method for the same result:

import numpy as np

a = np.array([[1, 5], [4, 2]])
transpose = np.transpose(a)

print(transpose)

[[1 4]
 [5 2]]

Solve Linear Systems with NumPy linalg.solve()

The np.linalg.solve() solves a linear matrix equation, or system of linear scalar equations.

import numpy as np

a = np.array([[2, 2], [4, 2]])
b = np.array([6,12])
solution = np.linalg.solve(a,b)

print(solution)

[3. 0.]

This solves the system 2x + 2y = 6 and 4x + 2y = 12, giving x = 3 and y = 0.
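It is good practice to verify a solution by substituting it back into the system; a quick check with allclose():

```python
import numpy as np

a = np.array([[2, 2], [4, 2]])
b = np.array([6, 12])
solution = np.linalg.solve(a, b)

# The solution should satisfy a @ x = b
print(np.allclose(a @ solution, b))  # True
```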

Inverting Matrices Using NumPy linalg.inv()

The np.linalg.inv() computes the inverse of a matrix:

import numpy as np

a = np.array([[2, 2], [4, 2]])
inverse = np.linalg.inv(a)

print(inverse)    

[[-0.5  0.5]
 [ 1.  -0.5]]

If the determinant of the matrix is 0, you will get a "Singular matrix" error, which means the matrix is non-invertible. In that case, you can use linalg.pinv instead.
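One way to guard against this (a defensive sketch, not required by the API) is to catch np.linalg.LinAlgError and fall back to the pseudo-inverse:

```python
import numpy as np

singular = np.array([[2., 2.],
                     [2., 2.]])  # determinant is 0

try:
    inverse = np.linalg.inv(singular)
except np.linalg.LinAlgError as err:
    print("Matrix is not invertible:", err)
    # Fall back to the Moore-Penrose pseudo-inverse
    print(np.linalg.pinv(singular))
```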

Pseudo-Inverse (linalg.pinv): Computing the Moore-Penrose Inverse in NumPy

If your matrix is not invertible (see linalg.inv), you can use numpy.linalg.pinv. linalg.pinv computes the (Moore-Penrose) pseudo-inverse of a matrix:

import numpy as np

x = [[2,2], [2,2]]
print(np.linalg.pinv(x))
    

[[0.125 0.125]
 [0.125 0.125]]

It calculates the pseudo-inverse of the matrix using its singular value decomposition (SVD), including all large singular values.

Matrix Factorizations (Decompositions)

Singular Value Decomposition (SVD): Breaking Matrices into Core Components

Singular Value Decomposition (SVD) is a powerful linear algebra tool that factors any matrix into three components: A = U Σ V^T. Here, U and V^T are orthogonal matrices representing rotations, and Σ is a diagonal matrix containing the singular values, which indicate how much the matrix stretches space along different directions. In NumPy, you can compute it using numpy.linalg.svd, which is widely used in dimensionality reduction (like PCA), low-rank approximations, and solving ill-conditioned systems. For example:

import numpy as np

A = np.array([[3, 1],
              [1, 3]])

U, S, Vt = np.linalg.svd(A)

print("U: ", U)
print("Singular values:", S)
print("Vt: ", Vt)

U:  [[-0.70710678 -0.70710678]
 [-0.70710678  0.70710678]]
Singular values: [4. 2.]
Vt:  [[-0.70710678 -0.70710678]
 [-0.70710678  0.70710678]]
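You can confirm the factorization by rebuilding A from the three factors; note that S is returned as a 1-D array of singular values, so it must be expanded with np.diag() first:

```python
import numpy as np

A = np.array([[3, 1],
              [1, 3]])
U, S, Vt = np.linalg.svd(A)

# Rebuild A: U @ diag(S) @ Vt should reproduce the original matrix
reconstructed = U @ np.diag(S) @ Vt
print(np.allclose(reconstructed, A))  # True
```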

QR Decomposition: Orthogonal-Triangular Factorization Explained

QR decomposition is a method that factors a matrix into two simpler matrices: an orthogonal matrix (Q) and an upper triangular matrix (R). In simple terms, it reorganizes the information in a matrix into a set of perpendicular directions (Q) and the weights needed to combine those directions (R). This decomposition is especially useful for solving systems of linear equations and performing numerical computations in a stable way, since working with orthogonal matrices helps reduce rounding errors. For example:

import numpy as np
A = np.array([[1, 2],
              [3, 4]])

Q, R = np.linalg.qr(A)
print("Q: ", Q)
print("R: ", R)

Q:  [[-0.31622777 -0.9486833 ]
 [-0.9486833   0.31622777]]
R:  [[-3.16227766 -4.42718872]
 [ 0.         -0.63245553]]

Cholesky Decomposition: Efficient Factorization for Positive Definite Matrices

Cholesky decomposition is a factorization specifically for symmetric, positive-definite matrices, splitting A into a lower triangular matrix L such that A = L L^T. It is particularly efficient for solving linear systems and appears frequently in statistics and optimization, for example when working with covariance matrices or positive-definite Hessians. In NumPy, it's computed using numpy.linalg.cholesky:

import numpy as np

A = np.array([[4, 2],
              [2, 3]])

L = np.linalg.cholesky(A)

print("L: ", L)
print("Check: L @ L.T = A", L @ L.T)

L:  [[2.         0.        ]
 [1.         1.41421356]]
Check: L @ L.T = A [[4. 2.]
 [2. 3.]]

Practical / Applied Linear Algebra

Least Squares Solutions - numpy.linalg.lstsq

lstsq() finds the best-fit solution to a system of linear equations that may not have an exact answer. This happens when there are more equations than unknowns (over-determined systems). Instead of solving every equation perfectly, it finds the solution that minimizes the total error, making it widely used in linear regression, curve fitting, and data modeling.

import numpy as np

# 3 equations, 2 unknowns -> over-determined
A = np.array([[1, 1],
              [1, 2],
              [1, 3]])
b = np.array([1, 2, 2])

# Least squares solution
x, residuals, rank, s = np.linalg.lstsq(A, b, rcond=None)
print("Best-fit solution:", x)

Best-fit solution: [0.66666667 0.5 ]

In lstsq(), the rcond parameter sets a threshold for ignoring very small singular values when solving a linear system. Small singular values can make the solution unstable, especially if the matrix is nearly singular or ill-conditioned. By treating values below rcond * largest_singular_value as zero, NumPy stabilizes the computation and avoids huge errors, ensuring a more reliable least-squares solution.
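To make the fit concrete, the solution above can be read as the line y = 0.667 + 0.5t. The sketch below (an extension of the example, not part of the original) computes the fitted values and shows that the residuals output matches the total squared error:

```python
import numpy as np

A = np.array([[1, 1],
              [1, 2],
              [1, 3]])
b = np.array([1, 2, 2])

x, residuals, rank, s = np.linalg.lstsq(A, b, rcond=None)

# Fitted values of the best-fit line y = intercept + slope * t
fitted = A @ x
print(fitted)                     # close to b, but not exact

# The residuals output equals the sum of squared errors
print(np.sum((b - fitted) ** 2))  # same value as residuals[0]
```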

Covariance Matrices - numpy.cov

Compute covariance matrices with cov() to analyze relationships between variables, a key step in PCA and multivariate data analysis.

import numpy as np

# Heights and weights of 5 people
heights = np.array([150, 160, 170, 180, 190])
weights = np.array([50, 60, 65, 75, 85])

# Stack into 2D array (rows = variables)
data = np.array([heights, weights])

# Covariance matrix
cov_matrix = np.cov(data)
print(cov_matrix)

[[250. 212.5]
[212.5 182.5]]

The result shows how two variables vary individually and together. The diagonal values (250 and 182.5) represent the variance of each variable, meaning how spread out their values are. The off-diagonal values (212.5) represent the covariance between the two variables. Since it's positive, it indicates that the variables tend to increase together. Also, notice that the matrix is symmetric, so Cov(X,Y) equals Cov(Y,X). Overall, this matrix tells us both the individual variability of each variable and how strongly they move together.

In NumPy, np.cov treats rows as variables and columns as samples by default. Covariance measures how two variables move together: a positive covariance means they tend to increase together, a negative covariance means one increases while the other decreases, and a covariance near zero indicates they are mostly independent. In simple terms, covariance shows whether two variables follow the same trend or move in opposite directions.
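Because covariance depends on the units of the variables, it is often easier to interpret the normalized version, the correlation matrix, computed with np.corrcoef (values scaled to the range [-1, 1]):

```python
import numpy as np

heights = np.array([150, 160, 170, 180, 190])
weights = np.array([50, 60, 65, 75, 85])

# Correlation: covariance rescaled so every entry lies in [-1, 1]
corr = np.corrcoef(heights, weights)
print(corr)
```

The off-diagonal entry (about 0.995) confirms the strong positive relationship already visible in the covariance matrix, but on a unit-free scale.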

Conclusion

NumPy's linear algebra capabilities provide a powerful and efficient toolkit for handling everything from basic matrix operations to advanced decompositions and eigenvalue computations. By mastering functions like dot, eig, svd, lstsq, and cov, you can tackle real-world problems in data science, machine learning, physics, and engineering. Whether you are checking matrix properties, performing dimensionality reduction, or solving over-determined systems, NumPy makes it fast, reliable, and intuitive. Combining these tools with practical examples ensures that your code is not only correct but also optimized for performance, making NumPy a must-know library for anyone working with numerical computations in Python.

Next Steps

You can now explore the full NumPy course and Advanced NumPy Articles to continue your learning journey.