Tuesday, December 2, 2014

Week 11 and 12!

The final weeks! All about induction, or should I say proof by induction.

So what is induction?
When you want to prove a property P for every natural number there is, you could go through the long process of testing the said property for every natural number, one by one. But since there is an infinite number of them, it could take several lifetimes to do so.

This is why mathematicians came up with proof by induction. What are its principles and how does it work?

Suppose you have a property P you want to prove for a contiguous range of natural numbers (much like a sequence). Then you have to prove the property P in two cases.
-The "base case", which is testing the property P for a certain starting number n. It's not always zero; we will explain why.
-The "inductive case": for every number that comes after the base case, we prove that the property carries over. It sums up to proving that for all natural numbers k, P(k) true => P(k+1) true.

Why prove two cases and not just stick to the inductive case? The base case has to be proven because zero (or whatever starting point we choose) isn't always a case where the property is true; in fact, the inductive case alone only tells us that a true case is followed by another true case, never that any case is actually true to begin with. For instance, P(0) can be false depending on the property, even when P(k) => P(k+1) holds. The base case and the inductive case have to chain together: if the chain never gets started, or breaks somewhere in the middle or at the end, we cannot claim the property for everything beyond that point.

We saw in mathematics and calculus that generally, when you prove a certain property P by induction, you start with the base case and then go into the inductive case. However, we've seen that it's smarter and safer to start with the inductive case: while working through it, we can find an n from which we are sure the inductive step actually works, and then use that n as the base case before claiming our statement true.

Here is the general procedure to prove a statement by induction...

Suppose P(n) is a property defined for all natural numbers n.

Prove that for all natural numbers k, P(k) => P(k+1), as you normally would.
Conclude it.
Prove that P(n) is true for the starting n you found during the inductive case.
Conclude it.
Conclude the whole statement, for all n from the starting point onward.
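
To make this concrete, here is a small worked example of my own (not one from lecture): the claim P(n) that 0 + 1 + 2 + ... + n = n(n+1)/2 for every natural number n.
Inductive case: assume P(k) holds for a generic natural number k.
            Then 0 + 1 + ... + k + (k+1) = k(k+1)/2 + (k+1) = (k+1)(k+2)/2, which is exactly P(k+1).
            Conclude that P(k) => P(k+1) for all natural numbers k.
Base case: P(0) says 0 = 0(0+1)/2, which is true.
Conclude that P(n) holds for all natural numbers n.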

Monday, November 24, 2014

Week 6 to 10 at CSC165!

I know I haven't posted for a while, but I have reasons for that: other courses (including tests and assignments) as well as this course's tests and assignments. But as you may already know (or not), I keep notes specially made for my SLOG. So here goes some heavy content, summed up as much as possible.

Week 6:
This week furthered the concepts of proof by cases, multiple quantifiers and limit proofs.

----------
For cases, as we know, proofs can't always be made for the general case in one go; there is sometimes a limited number of cases that have to be proven individually, and their combination (not a mathematical sum) tells us whether the statement is true or not.

Say a statement is true for all reals, but you can't just prove it for the general case.
So... For all reals x, P(x) => Q(x)
As we've seen before, proofs aren't always simple and sometimes have to go through an intermediate step R(x); but in some situations we can't link everything in one straight line, so we have to go through multiple cases.
Now what if P(x) => Q(x) has to be proven separately for negative reals, positive reals and zero itself? If you're trying to prove the statement true, every one of the cases must be proven true. Once done, conclude the three cases together and check that together they actually cover the whole set you're working on.
Proof by cases is, in other words, a decomposition into possibilities.
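
As a small example of my own (not the one from lecture): proving that for every integer n, n^2 + n is even.
Case 1: n is even, so n = 2m for some integer m. Then n^2 + n = 4m^2 + 2m = 2(2m^2 + m), which is even.
Case 2: n is odd, so n = 2m + 1 for some integer m. Then n^2 + n = (2m+1)^2 + (2m+1) = 4m^2 + 6m + 2 = 2(2m^2 + 3m + 1), which is even.
The two cases cover every integer, so the statement holds for all integers n.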

---------
For multiple quantifiers, as we may or may not have seen, identical quantifiers can be swapped or compressed when they are the same and next to each other in a statement. Otherwise, they may not be swapped.

In addition, each quantifier opens a new level of indentation in the proof, which means each quantifier delimits a different box/universe or environment that has to be handled on its own before we de-indent and conclude at each and every level of indentation.

Example:
For all X, There exists a Y such that P(X, Y) => Q(X, Y)
Assume X is a generic element of a certain set.
            Let Y = ____
                                 Then... (and the proof goes on...)
                                 Conclude that P(X, Y) => Q(X, Y)
            Conclude that there exists a Y such that P(X, Y) => Q(X, Y)
Conclude that For all X, There exists a Y such that P(X, Y) => Q(X, Y)

(If X and Y were preceded by the same quantifier you could indent them both at once, compress them into the same indent, and even swap them around.)
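
To make that skeleton concrete, here is a small instance of my own (not one taken from lecture):
For all x in R, there exists a y in R such that x + y = 0.
Assume x is a generic real number.
            Let y = -x. Then y is a real number.
                                 Then x + y = x + (-x) = 0.
            Conclude that there exists a y in R such that x + y = 0.
Conclude that for all x in R, there exists a y in R such that x + y = 0.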

--------
We have contextualized the example of multiple quantifiers with limit proofs, using the epsilon-delta proof seen in Maths.
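
To recap the connection (my own quick summary, with a made-up instance): the limit definition is itself a statement with three alternating quantifiers.
lim f(x) as x -> a equals L means:
For all epsilon > 0, there exists a delta > 0, such that for all x, 0 < |x - a| < delta => |f(x) - L| < epsilon
For example, for f(x) = 2x, a = 3 and L = 6, choosing delta = epsilon/2 works: |x - 3| < epsilon/2 => |2x - 6| = 2|x - 3| < epsilon.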

Week 7-8:
These two weeks we have explored sorting algorithms: algorithms that sort lists of integers or other values; we saw several of them. This context introduced us to the notions of running time and of the number of steps a program takes to run.

First, we know that programs do not all run the same; it depends on how they treat data and on the environment they run on (computer, processor, hard drive). These constraints affect the speed at which data is treated, and as programmers, we want data to be treated as fast as possible while staying efficient.

In order to reason about this, we can overestimate the run-time of our programs. Big-O is used for that: it bounds the number of steps a program can take, often expressed through a function.
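
For instance, here is a small sketch of my own (the step-counting convention is mine, not the official one from lecture): a linear search over a list of length n takes a number of steps roughly proportional to n in the worst case.

def linear_search(lst, value):
    """Return the index of value in lst, or -1 if it is absent."""
    i = 0                        # 1 step
    while i < len(lst):          # checked n + 1 times when value is absent
        if lst[i] == value:      # 1 step, n times in the worst case
            return i
        i = i + 1                # 1 step, n times in the worst case
    return -1                    # 1 step
# Worst case (value absent, list of length n): 1 + (n + 1) + n + n + 1 = 3n + 3 steps,
# so the worst-case running time is in O(n).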

The proof works as follows.
Imagine the number of steps expressed as a function that takes natural numbers (the input size) and spits out positive reals. Say you have a function f and an overestimate function g. Saying that f belongs to O(g) amounts to proving the following.

Then the proof for overestimates would be the following:

There exists a positive real c and a natural number B such that for all natural numbers n, n >= B => f(n) <= c*g(n)

We have made this proof with both polynomials and non-polynomials.
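
For example, here is a small worked instance of my own (not one of the lecture examples), with f(n) = 3n^2 + 5 and g(n) = n^2:
Pick c = 4 and B = 3.
Assume n is a natural number and n >= B.
            Then n^2 >= 9 >= 5, so 3n^2 + 5 <= 3n^2 + n^2 = 4n^2 = c*g(n).
Conclude that n >= B => f(n) <= c*g(n) for all natural numbers n, so f belongs to O(n^2).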

Week 9-10:
On top of overestimates, we also need lower bounds, and sometimes both at once. For those we respectively use Omega and Theta.

Let us start with Omega.

Omega is the counterpart of big-O: a lower bound instead of an upper bound.

The proof would be the following: instead of having to prove that for a certain c, f(n) <= c*g(n), we prove that f(n) >= c*g(n).

Which means the whole proof or statement would be the following:
There exists a positive real c and a natural number B such that for all natural numbers n, n >= B => f(n) >= c*g(n)
In this case, we are sure that f is no smaller than c*g from B onward.
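
Reusing my made-up example with f(n) = 3n^2 + 5 and g(n) = n^2:
Pick c = 3 and B = 0.
Assume n is a natural number and n >= B.
            Then 3n^2 + 5 >= 3n^2 = c*g(n).
Conclude that f belongs to Omega(n^2).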

Theta, on the other hand, is the combination of both Omega and big-O, which means that if we take a bounding function g and our function f, we have to prove that f is bounded in between: no bigger than one constant multiple of g and no smaller than another.
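
Continuing the same made-up example: since 3n^2 + 5 <= 4n^2 for all n >= 3 (the big-O part) and 3n^2 + 5 >= 3n^2 for all n >= 0 (the Omega part), we can take c1 = 3, c2 = 4 and B = 3, and conclude that 3n^2 + 5 belongs to Theta(n^2).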



Tuesday, October 14, 2014

Week 3, 4 and 5 at CSC165

Now for those who wonder how I can write (or dare to write) several weeks compressed into single posts, it's because I keep notes specific to what I write on this SLOG. Moving on. There may be some repeats of last weeks' content.
 
Week 3
*As a continuation of last week's content we've learned that:
"unless" is equivalent to "if not": "P unless Q" means "if not Q, then P".
 
*We were also introduced to conjunctions and disjunctions. To make it simple, what are they?
 
1)Conjunction is basically linking two arguments together such that the whole statement in which they appear is only true if both of them are true. Any other possibility, such as only one of them being true or both being false, leads to false.
If we were to sum this up with a truth table, it would give the following:
-True AND True is True
-True AND False is False
-False AND True is False
-False AND False is False
Conjunction's symbol is an upside down V (^)
( P ^ Q )
 
2)Disjunction is linking two arguments together as well, such that the whole statement is true if at least one of the arguments is true. The only instance of this leading to false is when both arguments are false at the same time.
If we were to sum this up with a truth table, it would give the following:
-True OR True is True
-True OR False is True
-False OR True is True
-False OR False is False
Disjunction's symbol is a V
(P V Q)
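
Here is a tiny sanity check of my own (in Python) for both truth tables, using the and/or operators:

# Print the truth tables for conjunction (and) and disjunction (or).
for p in (True, False):
    for q in (True, False):
        print(p, "AND", q, "is", p and q)
        print(p, "OR", q, "is", p or q)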
 
We've also been warned about English trickeries when it comes to translating symbolic statements into English or the other way around. Figuring out what the antecedent, the arguments and the consequent are is the first job to do; there's no point rushing.
 
*We've also been introduced to negations, which is to say, negating a statement as a whole. It's not as easy as it may sound, but here are a few important notes about it.
 
-P => Q negated would give P ^ ~Q (P implies Q, negated, gives P AND NOT Q)
-When a universal statement is negated, it becomes an existential one and vice versa.
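
Here is a small Python check of my own (not from the course notes) of both points, using booleans for the implication rule and all/any for the quantifier rule:

def implies(p, q):
    """P => Q is false only when P is true and Q is false."""
    return (not p) or q

# Negating an implication: not (P => Q) is equivalent to P and not Q.
for p in (True, False):
    for q in (True, False):
        assert (not implies(p, q)) == (p and not q)

# Negating a universal gives an existential, and vice versa:
# "not all elements are true" means "some element is not true".
cases = [True, True, False]
assert (not all(cases)) == any(not x for x in cases)
assert (not any(cases)) == all(not x for x in cases)
print("All checks passed.")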
 
*Another important point was the scope of our statements and of the arguments within them, i.e. how far they extend; symbolically, we can use parentheses to define the scope.
 
*Of course, with this course we have to know that, depending on the values involved, some statements may be always satisfied (a tautology), satisfied only in some cases, or never satisfied (a contradiction).
 
*The distributive, commutative and associative laws that apply in maths also apply here, to an even greater extent.
 
 
 
*DeMorgan's Law is as follows: ~(P ^ Q) is equivalent to ~P V ~Q, and ~(P V Q) is equivalent to ~P ^ ~Q. In other words, pushing a negation inside turns an AND into an OR and an OR into an AND.
 

 
*Transitivity is about taking a statement and adjusting it so we can fit it into a Venn diagram, or showing its equivalence to another statement. It's helpful to know that implication is transitive: if P => Q and Q => R, then P => R.
 
 
 
If I had remembered these, maybe I would have done better on the tutorial quiz. But overall, this week's material was quite heavy, not to learn, but to apply and to know how to apply it.
 
 
Week 4
This week was a quick introduction to mixed quantifiers and proofs.
 
-Several quantifiers can be placed in a statement; however, their placement order matters when the quantifiers are not the same.
For instance, two existential quantifiers can be swapped, and the same goes for two universal quantifiers.
However, a universal quantifier followed by an existential one is not equivalent to an existential followed by a universal. To understand this, I had to do some deep thinking.
Take "For every x in S1, there exists a y in S2 such that x + y = 5" and the swapped version "There exists a y in S2 such that for every x in S1, x + y = 5".
In the first statement, the y is allowed to depend on the x: for each x we may pick a different y.
In the second, a single y has to work for every x at once.
Put that way, both do not mean the same. For instance, with S1 = S2 = the reals, the first is true (take y = 5 - x) while the second is false.
Several identical quantifiers over the same set can also be compressed into a single quantifier over couples (or longer tuples) taken from the set raised to the power of the number of elements in the tuple.

-Proofs: In order to accept a statement as true, going through a truth table isn't enough. We have to go through the process of writing a proof.
Existential and Universal proofs have different procedures:

1) Universal: When you can't prove a statement just by going from P to Q, you have to pass through one intermediate step R, or several (which is or are already proven). Because, as we've seen through the previous weeks:
If P(x) => R(x) and R(x) => Q(x) then P(x) => Q(x)
Of course, we have to proceed through several steps and indent at each one.
-We assume that the antecedent is true.
-For every variable, if it's universally quantified, we assume it is a generic element of its set.
-If it is existentially quantified, we pick a specific value and say which set we took it from.
-After chaining all the intermediate steps, we link them together.
-We conclude and accept the statement as true.

2) Existential: For an existential statement to be proven true, all we need is to find one case (a witness) for which the statement is true. (A quick sketch of both kinds follows below.)
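
Here is a quick sketch of my own (not a lecture example) of each kind:
Universal: claim that for all real x, x > 3 => x^2 > 9.
Assume x is a generic real number.
            Assume x > 3.
                                 Then x^2 = x*x > 3*x > 3*3 = 9, since x > 3 > 0.
            Conclude that x > 3 => x^2 > 9.
Conclude that for all real x, x > 3 => x^2 > 9.
Existential: claim that there exists a natural number n such that n^2 = n. Let n = 1. Then n^2 = 1 = n, so such an n exists.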
 
Although proofs may seem easy, they do require a bit of understanding of the previous weeks' content. I have personally already dealt with them, since maths in French high school is based on doing proofs more than on just getting results.
 
 
Week 5
This week extended what we've learned about proofs; although I could write a lot, I will sum it up.
 
-Sometimes, to prove a universal statement, passing through the contrapositive may lead to a proof that is shorter and easier to do and to understand.
-Sometimes, when we want to prove something false, we can go through a contradiction: assume it is true and show the absurd result that follows. For an existential statement, showing it false means showing there is no case at all where it holds.
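
A quick example of the contrapositive trick (my own, not one from lecture): to prove that for all natural numbers n, if n^2 is even then n is even, it is much easier to prove the contrapositive, i.e. if n is odd then n^2 is odd. Assume n is odd, so n = 2k + 1 for some natural number k; then n^2 = 4k^2 + 4k + 1 = 2(2k^2 + 2k) + 1, which is odd. Since the contrapositive holds, the original implication holds too.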
 
Conclusion: The past three weeks have been loaded with new content and expansions of previously seen content. I failed two of the quizzes so far because I could not retain some of the formulas, even if I had the idea in mind. And the mid-term was terrible, not because I did not know how to apply what I had already learned; I applied it correctly, but to a question that I had misunderstood. I still think that difficulty number one in any test or assignment is to figure out what exactly they are asking you. Number two is what they give you and how to use it (and understanding what they give you!).

Friday, September 19, 2014

Week 1 and 2 at CSC165

Well, I do have to admit that this post is certainly late, but better late than never they say.
Well, needless to say, Week 1 of class at CSC165 was an introduction to the content that we were going to face during the semester and, most importantly, to its goal as well as our goal(s) over the length of the course. Week 2 had a bit more content and less introduction.

Week 1
Let's get started with what this course is about, what should be done in it, why and how.
From what I understood (hopefully I understood right), the course is about the usage of logic (or/and mathematical logic) in programming. The thing about it is that it has to be precise: the terms used, the order in which they are put, and that for several reasons. One of them being that, for a certain problem, if we do not understand exactly what the problem is, then we won't find an adequate solution or answer. The amount of detail given for a problem or solution is just as important: the more information, the more we know about what to do or not to do. (The streetcar drama example given in lecture felt a bit awkward but helped me understand the whole principle of precision.) We've also learned that symbols are pretty useful if they can be understood, but that does not mean saturating a proof with symbols will make it any better. Always verifying our work is a good step to make sure no error was made, or to improve and simplify the solution or problem; rigor is a must.

Another important part of the first week was quantifiers. In programming, they are vital when it comes to determining whether a statement is true or false, or what "quantity" of objects we are dealing with. To make it short, for a certain set F full of objects:
-"All" refers to every single object. So if all the objects have a value of true, and my statement is "All objects in this set are true", then my statement is true. To prove it wrong, we'd need a counter-example, which means at LEAST ONE false object is needed to prove my statement false.
-"Some" or "there exists" refers to one or more objects in the said set. For instance, if I state that "Some of the objects in F are true" while the set contains two false and one true, the statement is true, because at least one of the elements of the set is true.
In examples:
 
F = { true, true, true }
"All objects are true" is true. All elements are true.
F = { true, true, false } 
"All objects are true" is false. One element is false.
F = { true, false, false } 
"Some objects are true" is true. At least one element is true.
F = { false, false, false } 
"Some objects are true is false. No element is true.
 
THING TO REMEMBER: The empty set ({}) is a trickster. If you consider an "All" statement over the empty set, it will always be true, since the set is empty and contains no counter-example. If you consider a "some" (there exists) statement over the empty set, it will always be false, as no object is present in the set to verify the statement.
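
Python's built-in all and any behave exactly like these quantifiers, including on the empty list; this little demo is my own illustration, not something from the course handouts:

print(all([True, True, True]))     # True: every element is true
print(all([True, True, False]))    # False: one counter-example is enough
print(any([True, False, False]))   # True: at least one element is true
print(any([False, False, False]))  # False: no element is true
print(all([]))                     # True: the empty list has no counter-example
print(any([]))                     # False: the empty list has no witness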
 
The "Not" negates any statement it is applied to or part of statement. "Not"s can stack.
 
Most of the examples were illustrated with Venn diagrams, which are as helpful as they can be.
 
Week 2
Not only have I learned a bit more about quantifiers, but also how to falsify or verify some statements. The previous paragraph also helped us (me) learn how to falsify and/or verify a statement.
Aside from what was previously stated, I've learned a bit more about antecedents and consequents.
 
In a nutshell, when you imply something, you link two objects, events or states.
Explained, it sounds like this "If P, then Q". (P being the antecedent and Q the consequent).
This part was the hardest to assimilate because it went beyond what I expected of logic.
 
Let's say P implies Q or P => Q
If P is true, then Q is true as well. Little did I know that the only way to prove this statement wrong is actually to have P true and Q false.
Consider the following:
 
P (true) => Q (true) (Statement is true, because P led to Q)
P (false) =>  Q (true) (Statement is true. You would think it's false, but no: since P is already false, there is no way to show that the implication is broken.)
P (false) => Q (false) (Same as the above)
P (true) => Q (false) (Since P is true, and Q false, the statement is wrong. Because we implied that P led to Q, in this case, it does not, thus proving the whole statement wrong)
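
Here is a tiny Python sketch of mine (not from the course) that reproduces this table by treating P => Q as (not P) or Q:

def implies(p, q):
    """P => Q is only false when P is true and Q is false."""
    return (not p) or q

for p in (True, False):
    for q in (True, False):
        print("P", p, "=> Q", q, "is", implies(p, q))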
 
This is how I understood it. Now for the other part. Equivalence means a two-sided implication.
In other words: P leads to Q and Q leads to P.
P <=> Q
In other words: P => Q and Q => P
 
Even though that covers most of it, it took a bit of thinking to really grasp the concept; but once I got the sentences that explained why these rules don't seem natural by intuition, it sorted the ideas in my mind.

In the end, Week 1 and Week 2 changed my views of logic, even if I already had a small grasp of it because of previous C++ experience.