Friday, August 15, 2014

if...and only if


I always get confused by these things: I read them, understand them, and then forget them. That's why I'm writing this down.

p -> q
This reads "if p then q". There are two views of this implication:

1. Here p is the sufficient condition: knowing that p happened is sufficient to conclude that q happened. This also means there may be other ways of making q happen, and p is just one of them.

2. Here q is the necessary condition: q must happen for p to happen. But p may or may not happen even when q does, so q is not a sufficient condition for p.

Eg: Amit will eat the fruit -> fruit is an apple
1. To conclude that the fruit is an apple, it is sufficient to know that Amit ate it. So "Amit will eat the fruit" is a sufficient condition for the fruit being an apple. Note that it is not a necessary condition: the fruit could be an apple that Amit did not eat.
2. The fruit being an apple is a necessary condition for Amit to eat it, since the statement means Amit eats only apples. Note that it is not a sufficient condition: even if the given fruit is an apple, Amit might choose not to eat it.

In short, for q to happen, it is sufficient that p happens (and there could be other ways of making q happen). And for p to happen, it is necessary that q happens (though q happening alone may not be enough to make p happen).

p <-> q
Here q is both a necessary and a sufficient condition for p. Consider:
Eg: Amit will eat fruit <-> fruit is an apple
It means that Amit will eat all and only those fruits that are apples: he will not leave any apple uneaten, and he will not eat any other fruit. Hence, the given fruit being an apple is both a necessary and a sufficient condition for Amit eating it.



P.S. p -> q means (a) if p then q or (b) q if p or (c) p only if q.
P.P.S. The 'if' in the above statements can be replaced by 'when' or 'whenever'.
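All of the readings above can be checked mechanically with a truth table. A minimal Python sketch (the function name `implies` is mine):

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

# p -> q captures "q is necessary for p": whenever p holds, q must hold.
# p <-> q holds exactly when each side implies the other.
for p, q in product([False, True], repeat=2):
    iff = implies(p, q) and implies(q, p)
    print(f"p={p!s:5} q={q!s:5}  p->q={implies(p, q)!s:5}  p<->q={iff!s:5}")
```

Running it shows the one row where p -> q fails: p true, q false — exactly the "Amit ate the fruit but it wasn't an apple" case.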


Source: wikipedia.org




Saturday, July 12, 2014

Eigen Vectors and Basis!


Looks like I'm on a roll with Dr. Math. This time he explains eigenvectors, and trust me, this is the best explanation I have come across so far.

Source: http://mathforum.org/library/drmath/view/51971.html
I will give you a physical description, i.e. using 2 or 3 dimensions
only, though the ideas can be extended to n dimensions where n is as
big as you please.

You will be aware that if say a (2x2) matrix M operates on a two-
dimensional column vector v, then that vector is transformed in
magnitude or direction or both to the vector v'.

    So   M.v = v'

Now for the general (2x2) matrix M there are 2 eigenvalues k1 and k2
with associated eigenvectors u1 and u2 with the property that:

        M.u1 = k1.u1

        M.u2 = k2.u2

So any point on the vector u1 is transformed to k1.u1 when operated
upon by M, and similarly any point on u2 will move to k2.u2 after
transformation by M.  In some problems where M is to transform a
complicated figure or we wish to describe the transformation clearly,
it is convenient to use u1 and u2 as the base vectors - i.e. give
coordinates of all points in terms of u1 and u2 rather than the usual
(x,y) coordinates, and the transformation matrix then becomes

        |k1    0|
        |0    k2|

We can find powers of matrices very conveniently using eigenvalues and
eigenvectors.  It is easy to show that

  M = (u1 u2)|k1   0|(u1 u2)^(-1)
             |0   k2|

where (u1 u2) is the 2x2 matrix P formed by the columns of u1 and u2.

 Then M^n = P|k1   0|^n P^(-1)
             |0   k2|

      M^n = P|k1^n    0|P^(-1)
             |0    k2^n|

In probability theory, powers of matrices are frequently required,
sometimes infinite powers, so some device for handling such a problem
is clearly very important.

Again, a mind-blowing view of something we hear about daily. In short, if you choose the eigenvectors of the transformation (or matrix) as your basis for representing vectors, then working with the transformed matrix and vectors becomes simple and straightforward!
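The M^n trick is easy to verify numerically. A small sketch with numpy (the matrix here is just an arbitrary diagonalizable example I picked):

```python
import numpy as np

# An arbitrary diagonalizable 2x2 matrix, chosen for illustration.
M = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Columns of P are the eigenvectors u1, u2; k holds the eigenvalues k1, k2.
k, P = np.linalg.eig(M)

# M = P diag(k) P^(-1), so M^n = P diag(k^n) P^(-1): only the scalar
# eigenvalues get raised to the power n.
n = 5
Mn_via_eig = P @ np.diag(k**n) @ np.linalg.inv(P)

# Compare with direct repeated multiplication.
Mn_direct = np.linalg.matrix_power(M, n)
print(np.allclose(Mn_via_eig, Mn_direct))  # True
```

For large n (as in the probability applications mentioned above), raising two scalars to the n-th power is far cheaper than n matrix multiplications.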

What actually is a Determinant of a Matrix? Two views!

From the way(s) it is calculated to the uses it is put to after such mystical calculations, the determinant of a matrix has always been a mystery to me (except for a few things, like finding the rank of a matrix). Today, I stumbled upon this link on Dr. Math:

http://mathforum.org/library/drmath/view/51440.html

He gives two very different-sounding yet equivalent definitions of the determinant. Here I reproduce them in his own words:

The first is geometric. I assume you've plotted things in an x-y
coordinate system, right? I assume you can imagine doing the same
thing in three dimensions with an x-y-z coordinate system as well.

In 2-D, when you talk about the point (2, 4), you can think of the
"2" and "4" as directions to get from the origin to the point -
"move 2 units in the x direction and 4 in the y direction."  In
a 3-D system, the same idea holds - (1, 3, 7) means start at the
origin (0,0,0), go 1 unit in the x direction, 3 in the y direction,
and 7 in the z direction.

Similarly, you could have coordinates in one dimension, but there's
just one number.

The determinant of a 1x1 matrix is the signed length of the line from
the origin to the point. It's positive if the point is in the positive
x direction, negative if in the other direction.

In 2-D, look at the matrix as two 2-dimensional points on the plane,
and complete the parallelogram that includes those two points and the
origin. The (signed) area of this parallelogram is the determinant.
If you sweep clockwise from the first to the second, the determinant
is negative; otherwise, positive.

In 3-D, look at the matrix as 3 3-dimensional points in space.
Complete the parallepiped that includes these points and the origin,
and the determinant is the (signed) volume of the parallelepiped.

The same idea works in any number of dimensions.  The determinant
is just the (signed) volume of the n-dimensional parallelepiped.

Notice that length, area, volume are the "volumes" in 1-, 2-, and
3-dimensional spaces.  A similar concept of volume exists for
Euclidean space of any dimensionality.
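(An aside from me: the signed-area picture is easy to check in the 2-D case. Rows of the matrix are the two points; the example values below are my own.)

```python
import numpy as np

# Rows are two points in the plane; the parallelogram they span with
# the origin has signed area equal to the determinant.
A = np.array([[3.0, 0.0],
              [1.0, 2.0]])
print(np.linalg.det(A))        # ~  6.0: base 3 along x, height 2

# Swapping the rows sweeps from one point to the other in the opposite
# sense, which flips the sign of the area.
print(np.linalg.det(A[::-1]))  # ~ -6.0
```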

Okay. That's the geometric definition. I like it because I can make a
mental picture of it. Here's the algebraic definition:

I'll do it in 3 dimensions, but exactly the same idea works in any
number of dimensions.  Let's look at the determinant of this matrix:

  | a11 a12 a13 |
  | a21 a22 a23 |
  | a31 a32 a33 |

The numbers after the "a" are the row and column numbers.

A permutation of a set of numbers is a re-arrangement.

For example, there are 6 permutations of the list (1 2 3), including
the "re-arrangement" that leaves everything unchanged. Ignore for the
moment the "+1" and "-1" after each one:

(1 2 3) -> (1 2 3)   +1
(1 2 3) -> (1 3 2)   -1
(1 2 3) -> (2 1 3)   -1
(1 2 3) -> (2 3 1)   +1
(1 2 3) -> (3 1 2)   +1
(1 2 3) -> (3 2 1)   -1

Now imagine that you start with three objects labelled 1, 2, and 3
arranged as they are on the left, and need to convert them to the
order on the right, but you're only allowed to swap one pair at a
time. To get to the final arrangement, you'll find that there are lots
of ways to do it, but every way (for a particular rearrangement)
always requires an even number of swaps or always requires an odd
number of swaps. I've labelled those that always need an even number
of swaps with +1 and those needing an odd number as -1 above.
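(Another aside: the +1/-1 labels can be computed without enumerating swap sequences, by counting inversions — pairs that appear out of order — which has the same parity as the number of swaps. A short check in Python:)

```python
def sign(perm):
    """+1 if perm is reachable by an even number of swaps, -1 if odd.

    Counting inversions (pairs out of order) gives the same parity
    as counting swaps.
    """
    inversions = sum(
        1
        for i in range(len(perm))
        for j in range(i + 1, len(perm))
        if perm[i] > perm[j]
    )
    return 1 if inversions % 2 == 0 else -1

# Reproduces the +1/-1 column of the table above.
for p in [(1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2), (3, 2, 1)]:
    print(p, sign(p))
```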

Now write down 6 products of the "a" terms, where the first number
for each term is 1, 2, 3 and the second number is the rearrangement
above for each of the six rearrangements.

Here's what they are, in the same order as above.  Be sure you
understand this step:

a11*a22*a33
a11*a23*a32
a12*a21*a33
a12*a23*a31
a13*a21*a32
a13*a22*a31

The determinant is just the sum of all 6 terms, but put a "+" in
front if the rearrangement is even, and a "-" in front if the
rearrangement required an odd number of swaps.

Here's the answer:

+a11*a22*a33 -a11*a23*a32 -a12*a21*a33
+a12*a23*a31 +a13*a21*a32 -a13*a22*a31

For a 4x4 matrix, there will be 24 rearrangements, like this:

(1 2 3 4) -> (3 2 4 1) +1
...

so there will be 24 terms in the expression of the determinant.

For a 5x5 matrix there are 120 rearrangements, so there will be 120
terms in the determinant, and so on.

For an NxN matrix, there will be N! (N factorial) terms, where
factorial means you multiply together all the terms from N down to 1.
For example, 5! = "5 factorial" = 5x4x3x2x1 = 120.
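The whole algebraic definition fits in a few lines of code. A direct (and, at N! terms, deliberately naive) implementation of the permutation formula, with example matrices of my own choosing:

```python
from itertools import permutations

def det(a):
    """Determinant via the permutation formula:
    sum over all n! permutations of sign(perm) * a[0][perm[0]] * a[1][perm[1]] * ...
    (0-based row/column indices)."""
    n = len(a)
    total = 0
    for perm in permutations(range(n)):
        # Parity via inversion count: same parity as the number of swaps.
        inversions = sum(
            1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j]
        )
        sign = 1 if inversions % 2 == 0 else -1
        prod = 1
        for row, col in enumerate(perm):
            prod *= a[row][col]
        total += sign * prod
    return total

print(det([[1, 2], [3, 4]]))   # 1*4 - 2*3 = -2
print(det([[2, 0, 0],
           [0, 3, 0],
           [0, 0, 4]]))        # 24: for a diagonal matrix, only the
                               # identity permutation survives
```

This is O(N * N!) and only useful for small matrices, but it is the definition made executable.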

Seriously, this wisdom is totally refreshing and an eye-opener!

Sunday, June 15, 2014

Unary Subset Sum: myth or reality?

Subset Sum problem [SUBSET SUM ]:
Given a list of n numbers and a number k, is there a subset of the numbers that
adds to exactly k? For example, the answer is yes for  <(3, 4, 12, 7, 4), 20> and no
for <(3, 4, 12, 7, 4), 6>.

When the input is expressed in binary (or any other base except unary), no polynomial-time algorithm is known for this problem — it is NP-complete. But hang on!

Unary Subset Sum problem [UNARY SSUM]:
Same problem but the input is expressed in unary!

Result: UNARY SSUM is in P!

Here are the details, taken from http://www.cs.virginia.edu/~evans/cs3102-s10/ps/ps6/ps6-comments.pdf

  • Why does the NP-completeness proof for SUBSET SUM fail to show UNARY SSUM is NP-complete?
    Answer. The crux of the question is that we measure complexity as a function from the size of the input to the number of steps required to run the decider on that input. When the input is represented in unary, the size of the input is equal to the value (that is, it takes x squares to represent the input value x). When the input is represented in binary (or any higher base), it takes log x squares to represent the input value x. Hence, if the size of the input in binary is N, the size of the same input in unary is 2^N. Thus, the size of the input increases exponentially, even though the problem is of the same difficulty as it would be for the corresponding binary input. The reduction proof fails because the reduction from 3SAT to UNARY SSUM requires more than polynomial time: the input size increases exponentially, so the transformation to generate the corresponding input for UNARY SSUM requires a number of steps that is exponential in the size of the input.
  • Show that UNARY SSUM ∈ P.
    Answer. To show a language is in P, we need to show there is a polynomial-time algorithm that decides it. Note that we can simulate a nondeterministic TM in exponential time using a deterministic TM. Hence, since SUBSET SUM is in NP, we know UNARY SSUM (the same problem, but with exponentially larger input size) is in P: we can solve it by first converting the input to binary representation (which can be done in polynomial time and shrinks the input size to log N), and then simulating the nondeterministic polynomial-time TM on this input (which has size log N) in time that is exponential in its input size log N, i.e. polynomial in N.
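The quoted answer argues via TM simulation; a more concrete route to the same conclusion is the standard dynamic program for subset sum, which runs in O(n*k) time — pseudo-polynomial in binary, but genuinely polynomial in the unary input size (the unary input already has about k squares). A sketch of my own, using the example instances from the post:

```python
def subset_sum(nums, k):
    """Decide whether some subset of nums adds to exactly k.

    reachable[s] is True if some subset of the numbers seen so far
    sums to s. O(n*k) time: polynomial in the UNARY input size.
    """
    reachable = [False] * (k + 1)
    reachable[0] = True  # the empty subset sums to 0
    for x in nums:
        # Sweep downward so each number is used at most once.
        for s in range(k, x - 1, -1):
            if reachable[s - x]:
                reachable[s] = True
    return reachable[k]

print(subset_sum([3, 4, 12, 7, 4], 20))  # True: 4 + 4 + 12 = 20
print(subset_sum([3, 4, 12, 7, 4], 6))   # False: no subset sums to 6
```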