Linear Algebra and Vector Math - Basics and Dot Product - by Looking Glass Universe

April 8, 2021

Linear Algebra

Vectors and Dot Product


Title: Vector addition and basis vectors | Linear algebra makes sense
Youtube - Link #1
Description: Introduction to this series and the basics of linear algebra and vectors.


Title: The meaning of the dot product | Linear algebra makes sense
Youtube - Link #2
Description: Deep dive into the dot product and what it represents and how to determine it.


Overview

I wanted to brush up on my vector math fundamentals, particularly my understanding of the dot product and its geometric implications, since it comes up often in my game development work. While I can understand it when reading it and coding it for various projects, I wanted to build a more solid foundational understanding so that I could apply it more confidently on my own. This video series has been very nice for refreshing my learning on these topics, and it also gave me a new way of looking at vector math that I think will really further my understanding in the future.

Video #1 - Vector addition and basis vectors

This introductory video to the series starts with vector addition, then moves on to linear combinations as an extension of basic vector addition. Next, they show that in 2D, as long as you have two independent vectors, you can build any other vector as some linear combination of those two. This relates to how vectors are normally written out: the usual notation is simply a linear combination of the standard orthonormal basis, something like x and y in 2D, or x, y, and z in 3D space.

This means a vector is simply the sum of the unit vectors in the x, y, and z directions, each multiplied by some scalar. This was actually a new way for me to look at vectors. It is the intuitive view when you are building a set of basis vectors different from the standard x, y, z, but I never really thought to apply it in the standard case as well. The x, y, z (or even i, j, k) notation had become so standardized to me that I generally ignored it, but I think looking at it this way will make much more of linear algebra consistent in my thinking.
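
To make that concrete for myself, here is a tiny sketch (my own, not from the video, assuming numpy is available) that builds a vector as scaled unit basis vectors summed together:

Example (Python):
import numpy as np

e_x = np.array([1.0, 0.0])  # unit vector in the x direction
e_y = np.array([0.0, 1.0])  # unit vector in the y direction

v = 3 * e_x + 2 * e_y       # linear combination of the basis vectors
print(v)                    # [3. 2.] -- the same thing as just writing (3, 2)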

They then continue on to explain spans, spaces, and the term basis a bit more. The set of all vectors you can reach with linear combinations of a set of vectors is called its span. If the vectors in that set are all independent, then it is the smallest set of vectors that can fully describe that space, and such a set is known as a basis. The number of basis vectors is fixed, and that number is the dimension of the space (like 2D or 3D). And for a given basis, any vector can be written as a linear combination of the basis vectors in exactly one way.
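
To convince myself of that uniqueness, here is a small sketch of my own (not from the video, assuming numpy) that solves for the one set of coefficients expressing a vector in a non-standard 2D basis:

Example (Python):
import numpy as np

b1 = np.array([1.0, 1.0])   # first basis vector
b2 = np.array([1.0, -1.0])  # second, independent basis vector
target = np.array([3.0, 2.0])

# Put the basis vectors in the columns of a matrix and solve for the coefficients.
coeffs = np.linalg.solve(np.column_stack((b1, b2)), target)
print(coeffs)                           # [2.5 0.5] -- the only combination that works
print(coeffs[0] * b1 + coeffs[1] * b2)  # [3. 2.] -- recovers the target vector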

Video #2 - The meaning of the dot product

Dot Product

A really simple way of describing the dot product is that it shows "how much one vector is pointing in the same direction as another vector". If the two vectors are unit vectors, the dot product is 1 when they point in the same direction, 0 when they are perpendicular, and -1 when they point in exactly opposite directions. In that unit-vector case, the dot product is exactly the cosine of the angle between the two vectors.

However, the dot product also factors in the magnitudes of the two vectors. This is important because it is what makes the dot product a linear function. It also ends up being especially useful when dealing with an orthonormal basis: basis vectors that are all unit vectors (vectors of length 1) and all orthogonal to each other.
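
Here is a quick sketch of my own (not from the video, assuming numpy) checking those claims: unit vectors give dot products of 1, 0, and -1 for the same, perpendicular, and opposite directions, and for general vectors the angle can be recovered from ||a|| ||b|| cos(theta):

Example (Python):
import numpy as np

u = np.array([1.0, 0.0])                 # a unit vector along x

print(np.dot(u, np.array([1.0, 0.0])))   # 1.0  (same direction)
print(np.dot(u, np.array([0.0, 1.0])))   # 0.0  (perpendicular)
print(np.dot(u, np.array([-1.0, 0.0])))  # -1.0 (opposite direction)

# For general (non-unit) vectors, a . b = ||a|| ||b|| cos(theta),
# so the angle between them falls out of the dot product.
a = np.array([3.0, 0.0])
b = np.array([1.0, 1.0])
cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(np.degrees(np.arccos(cos_theta)))  # 45.0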

They cover a question where a vector u is given in the space of the orthonormal vectors v1 (horizontal) and v2 (vertical), and they ask you to show, using the dot product of u and v1, what the x value of u is (the scalar on v1 in the linear combination that makes up u). Since v1 is a unit vector, this is given directly by the dot product (u . v1). They then show that the y component is likewise just (u . v2). This demonstrates how convenient the dot product is together with an orthonormal basis: it directly gives the amount of each basis vector used in the linear combination for any vector, or in other words, "how much of u is pointing in each of the basis directions".
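
To see that in code, here is a small sketch of my own (not from the video, assuming numpy), using a rotated but still orthonormal basis so the result is less trivial than the standard axes:

Example (Python):
import numpy as np

v1 = np.array([1.0, 1.0]) / np.sqrt(2)   # first orthonormal basis vector (45 degrees from x)
v2 = np.array([-1.0, 1.0]) / np.sqrt(2)  # second orthonormal basis vector, perpendicular to v1

u = 4 * v1 + 3 * v2   # build u from a known linear combination

print(np.dot(u, v1))  # 4.0 (up to floating point) -- recovers the v1 coefficient
print(np.dot(u, v2))  # 3.0 (up to floating point) -- recovers the v2 coefficient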

Since the dot product is linear, you get the same result whether you take the dot product of the two vectors directly, or first break one of them up into a linear combination of other vectors and distribute the dot product across the terms.

Example:
a . b = (x*v1 + y*v2) . b = x*(v1 . b) + y*(v2 . b)
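
A quick numerical check of that distribution (my own sketch, not from the video, assuming numpy):

Example (Python):
import numpy as np

v1 = np.array([1.0, 0.0])
v2 = np.array([0.0, 1.0])
b = np.array([2.0, 5.0])

x, y = 3.0, -1.0
a = x * v1 + y * v2

print(np.dot(a, b))                           # 1.0 -- dot product taken directly
print(x * np.dot(v1, b) + y * np.dot(v2, b))  # 1.0 -- same result after distributing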

Projecting a Vector onto Another Vector

They then cover the example I was most interested in: in a general sense, what is the length of the vector that results from projecting vector A onto vector B? The length, or magnitude, of this vector is the dot product (A . B) divided by the magnitude of B. The logic is similar to the earlier example of projecting onto an orthonormal basis, except that there the magnitudes were 1, so the division effectively canceled out.

This helped me see that to actually generate the vector that is the projection of A onto B, you have to take one step more: multiply that result (which is a scalar) by the unit vector of B, so the result is a vector with the proper direction. The final result is the dot product of A and B, divided by the magnitude of B, then multiplied by the unit vector of B.

Example:
Projection vector C (where B̂ is the unit vector of B, i.e. B / ||B||)
C = ((A . B) / ||B||) * B̂ = ((A . B) / ||B||^2) * B
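
Putting that into code, here is a sketch of my own (not from the video, assuming numpy): the scalar length is (A . B) / ||B||, and scaling the unit vector of B by it gives the projection vector.

Example (Python):
import numpy as np

def project(a, b):
    # ((a . b) / ||b||^2) * b -- the projection of a onto b
    return (np.dot(a, b) / np.dot(b, b)) * b

a = np.array([3.0, 4.0])
b = np.array([5.0, 0.0])

print(np.dot(a, b) / np.linalg.norm(b))  # 3.0 -- length of the projection
print(project(a, b))                     # [3. 0.] -- the projection vector itself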

Dot Product Equations

They have generally stuck with the geometric form of the dot product equation:
a . b = ||a|| ||b|| cos (theta)

They finally show the other equation, which is:
a . b = a1b1 + a2b2 + a3b3 + ...
But they explain that this is a special case which is only true when the basis you are writing the components in is orthonormal. That covers most standard cases, but it is important to note that the condition has to be met. The reason it works is that with an orthonormal basis the cross terms all cancel out (different basis vectors have dot product 0 with each other) and each basis vector dotted with itself is 1, leaving this clean component-wise result.
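
As a final sanity check of my own (not from the video, assuming numpy and components written in the standard orthonormal basis), both forms give the same number:

Example (Python):
import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([3.0, 0.0, 4.0])

component_form = a[0] * b[0] + a[1] * b[1] + a[2] * b[2]  # a1*b1 + a2*b2 + a3*b3

theta = np.arccos(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
geometric_form = np.linalg.norm(a) * np.linalg.norm(b) * np.cos(theta)

print(component_form)  # 11.0
print(geometric_form)  # 11.0 (up to floating point)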
