Saturday, July 12, 2014

Eigen Vectors and Basis!


Looks like I'm on a roll with Dr. Math. He's explained eigenvectors this time, and trust me, this is the best explanation of them I have come across so far.

Source: http://mathforum.org/library/drmath/view/51971.html
I will give you a physical description, i.e. using 2 or 3 dimensions
only, though the ideas can be extended to n dimensions where n is as
big as you please.

You will be aware that if say a (2x2) matrix M operates on a two-
dimensional column vector v, then that vector is transformed in
magnitude or direction or both to the vector v'.

    So   M.v = v'

Now for the general (2x2) matrix M there are 2 eigenvalues k1 and k2
with associated eigenvectors u1 and u2 with the property that:

        M.u1 = k1.u1

        M.u2 = k2.u2

So any point on the vector u1 is transformed to k1.u1 when operated
upon by M, and similarly any point on u2 will move to k2.u2 after
transformation by M.  In some problems where M is to transform a
complicated figure or we wish to describe the transformation clearly,
it is convenient to use u1 and u2 as the base vectors - i.e. give
coordinates of all points in terms of u1 and u2 rather than the usual
(x,y) coordinates, and the transformation matrix then becomes

        |k1    0|
        |0    k2|
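
(My aside: here is a tiny numpy sketch I tried, with an arbitrary 2x2 matrix of my own choosing, to check that the matrix really does become diagonal when expressed in the basis of its eigenvectors:)

import numpy as np

# An arbitrary 2x2 matrix M (my own example, not from the answer above).
M = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Eigenvalues k1, k2 and the matrix P whose columns are the eigenvectors u1, u2.
k, P = np.linalg.eig(M)

# In the eigenvector basis the transformation is P^(-1) . M . P,
# which should be the diagonal matrix diag(k1, k2).
D = np.linalg.inv(P) @ M @ P
print(np.round(D, 10))   # off-diagonal entries are numerically zero
print(k)                 # the diagonal entries k1 and k2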

We can find powers of matrices very conveniently using eigenvalues and
eigenvectors.  It is easy to show that

  M = (u1 u2)|k1   0|(u1 u2)^(-1)
             |0   k2|

where (u1 u2) is the 2x2 matrix P whose columns are u1 and u2.

 Then M^n = P|k1   0|^n P^(-1)
             |0   k2|

      M^n = P|k1^n    0|P^(-1)
             |0    k2^n|

In probability theory, powers of matrices are frequently required,
sometimes infinite powers, so some device for handling such a problem
is clearly very important.
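
(Again my aside: a small numpy sketch, with the same arbitrary matrix and an arbitrary power n = 5, comparing P . diag(k1^n, k2^n) . P^(-1) with direct repeated multiplication:)

import numpy as np

M = np.array([[4.0, 1.0],
              [2.0, 3.0]])        # arbitrary example matrix

k, P = np.linalg.eig(M)           # eigenvalues k and eigenvector matrix P
n = 5

# M^n = P . diag(k1^n, k2^n) . P^(-1)
Mn_via_eigen = P @ np.diag(k**n) @ np.linalg.inv(P)

# Direct computation, for comparison.
Mn_direct = np.linalg.matrix_power(M, n)

print(np.allclose(Mn_via_eigen, Mn_direct))   # True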

Again, a mind-blowing view of something we hear about daily. In short, if you choose the eigenvectors of the transformation (or matrix) as the basis for representing your vectors, then working with the transformed matrix and vectors becomes simple and straightforward!

What actually is a Determinant of a Matrix? Two views!

From the way(s) it is calculated to the uses one finds for it after doing such mystical calculations, the determinant of a matrix has always been a mystery to me (except for a few things, like finding the rank of a matrix). Today, I stumbled upon this link on Dr. Math:

http://mathforum.org/library/drmath/view/51440.html

He has given two very different-sounding yet equivalent definitions of the determinant. Here I reproduce them in his own words:

The first is geometric. I assume you've plotted things in an x-y
coordinate system, right? I assume you can imagine doing the same
thing in three dimensions with an x-y-z coordinate system as well.

In 2-D, when you talk about the point (2, 4), you can think of the
"2" and "4" as directions to get from the origin to the point -
"move 2 units in the x direction and 4 in the y direction."  In
a 3-D system, the same idea holds - (1, 3, 7) means start at the
origin (0,0,0), go 1 unit in the x direction, 3 in the y direction,
and 7 in the z direction.

Similarly, you could have coordinates in one dimension, but there's
just one number.

The determinant of a 1x1 matrix is the signed length of the line from
the origin to the point. It's positive if the point is in the positive
x direction, negative if in the other direction.

In 2-D, look at the matrix as two 2-dimensional points on the plane,
and complete the parallelogram that includes those two points and the
origin. The (signed) area of this parallelogram is the determinant.
If you sweep clockwise from the first to the second, the determinant
is negative; otherwise, positive.

In 3-D, look at the matrix as 3 3-dimensional points in space.
Complete the parallelepiped that includes these points and the origin,
and the determinant is the (signed) volume of the parallelepiped.

The same idea works in any number of dimensions.  The determinant
is just the (signed) volume of the n-dimensional parallelepiped.

Notice that length, area, volume are the "volumes" in 1-, 2-, and
3-dimensional spaces.  A similar concept of volume exists for
Euclidean space of any dimensionality.
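
(My aside: a quick numpy sketch of this geometric view, with arbitrary points of my own - the 2x2 determinant comes out as the signed parallelogram area, and swapping the two points flips the sign:)

import numpy as np

# Two arbitrary 2-D points, taken as the rows of a 2x2 matrix.
p1 = np.array([3.0, 1.0])
p2 = np.array([1.0, 2.0])

print(np.linalg.det(np.array([p1, p2])))   # signed area:    5.0
print(np.linalg.det(np.array([p2, p1])))   # swapped order: -5.0

# Same idea in 3-D: the signed volume of the parallelepiped
# spanned by three points and the origin.
B = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
print(np.linalg.det(B))                    # volume: 6.0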

Okay. That's the geometric definition. I like it because I can make a
mental picture of it. Here's the algebraic definition:

I'll do it in 3 dimensions, but exactly the same idea works in any
number of dimensions.  Let's look at the determinant of this matrix:

  | a11 a12 a13 |
  | a21 a22 a23 |
  | a31 a32 a33 |

The numbers after the "a" are the row and column numbers.

A permutation of a set of numbers is a re-arrangement.

For example, there are 6 permutations of the list (1 2 3), including
the "re-arrangement" that leaves everything unchanged. Ignore for the
moment the "+1" and "-1" after each one:

(1 2 3) -> (1 2 3)   +1
(1 2 3) -> (1 3 2)   -1
(1 2 3) -> (2 1 3)   -1
(1 2 3) -> (2 3 1)   +1
(1 2 3) -> (3 1 2)   +1
(1 2 3) -> (3 2 1)   -1

Now imagine that you start with three objects labelled 1, 2, and 3
arranged as they are on the left, and need to convert them to the
order on the right, but you're only allowed to swap one pair at a
time. To get to the final arrangement, you'll find that there are lots
of ways to do it, but every way (for a particular rearrangement)
always requires an even number of swaps or always requires an odd
number of swaps. I've labelled those that always need an even number
of swaps with +1 and those needing an odd number as -1 above.
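
(My aside: here is a little Python helper I wrote - not part of the original answer - that counts swaps to recover the +1/-1 labels above:)

def permutation_sign(perm):
    """+1 if perm needs an even number of pair swaps to sort, -1 if odd."""
    perm = list(perm)
    target = sorted(perm)
    sign = 1
    for i in range(len(perm)):
        if perm[i] != target[i]:
            j = perm.index(target[i], i + 1)     # where the right item sits
            perm[i], perm[j] = perm[j], perm[i]  # one swap
            sign = -sign
    return sign

# Reproduces the +1/-1 labels listed above.
for p in [(1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2), (3, 2, 1)]:
    print(p, permutation_sign(p))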

Now write down 6 products of the "a" terms, where the first numbers in
each product run through 1, 2, 3 in order and the second numbers follow
the rearrangement above - one product for each of the six rearrangements.

Here's what they are, in the same order as above.  Be sure you
understand this step:

a11*a22*a33
a11*a23*a32
a12*a21*a33
a12*a23*a31
a13*a21*a32
a13*a22*a31

The determinant is just the sum of all 6 terms, but put a "+" in
front if the rearrangement is even, and a "-" in front if the
rearrangement required an odd number of swaps.

Here's the answer:

+a11*a22*a33 -a11*a23*a32 -a12*a21*a33
+a12*a23*a31 +a13*a21*a32 -a13*a22*a31
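
(My aside: a quick numerical check of this six-term expansion - the 3x3 entries are arbitrary numbers I picked - against numpy's built-in determinant:)

import numpy as np

a = np.array([[2.0, 1.0, 3.0],
              [0.0, 4.0, 1.0],
              [5.0, 2.0, 1.0]])   # arbitrary 3x3 example

# The six signed terms, written with 0-based indices a[row, col].
expansion = ( a[0,0]*a[1,1]*a[2,2] - a[0,0]*a[1,2]*a[2,1]
            - a[0,1]*a[1,0]*a[2,2] + a[0,1]*a[1,2]*a[2,0]
            + a[0,2]*a[1,0]*a[2,1] - a[0,2]*a[1,1]*a[2,0] )

print(expansion)            # -51.0
print(np.linalg.det(a))     # agrees (up to rounding)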

For a 4x4 matrix, there will be 24 rearrangements, like this:

(1 2 3 4) -> (3 2 4 1) +1
...

so there will be 24 terms in the expression of the determinant.

For a 5x5 matrix there are 120 rearrangements, so there will be 120
terms in the determinant, and so on.

For an NxN matrix, there will be N! (N factorial) terms, where
factorial means you multiply together all the terms from N down to 1.
For example, 5! = "5 factorial" = 5x4x3x2x1 = 120.
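
(One last aside from me: the full N!-term definition written out in Python, using itertools to enumerate the permutations and an inversion count for the +1/-1 sign - obviously far slower than numpy's det, but it matches:)

import itertools
import numpy as np

def det_by_permutations(A):
    """Sum of N! signed products A[0][p0] * A[1][p1] * ... * A[N-1][p(N-1)]."""
    n = len(A)
    total = 0.0
    for perm in itertools.permutations(range(n)):
        # Sign from the number of inversions (pairs that are out of order),
        # which has the same parity as the number of swaps discussed above.
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if perm[i] > perm[j])
        sign = -1.0 if inversions % 2 else 1.0
        product = 1.0
        for row, col in enumerate(perm):
            product *= A[row][col]
        total += sign * product
    return total

A = np.random.rand(4, 4)          # 4! = 24 terms for a 4x4 matrix
print(det_by_permutations(A))
print(np.linalg.det(A))           # should agree up to rounding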
Seriously, this gyaan is totally refreshing and an eye-opener!