Proof - Greedy Algo
0.0.1 Greedy Swap Proof
0.0.1.1 Example problem:
\[ A = [a_1, a_2, a_3, \dots]\]
\[ O = [o_1, o_2, o_3, \dots]\]
Let \(A\) be the greedy solution and \(O\) an optimal solution; each element represents a state (a choice made by the algorithm).
Assume \(A \neq O\). (Note: this is not a proof by contradiction, as there may be multiple optimal solutions that are not equal.)
Therefore either
\[\exists x \in O,\ x \notin A \]
or
\[ \exists x \in A,\ x \notin O.\]
Pick indices where the solutions differ and swap the corresponding elements of \(O\): \(a_p = o_q,\ a_q = o_p\).
Show that this swap cannot make \(O\) worse: if it strictly improved \(O\), that would contradict \(O\)'s optimality. Repeating the swap transforms \(O\) into \(A\), so \(A\) is optimal.
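The abstract swap can be exercised on a concrete problem. Below is a minimal sketch (my own substitution, since the notes keep the problem abstract) using interval scheduling, where the greedy \(A\) picks intervals by earliest finish time: for every optimal \(O\) that differs from \(A\), swapping in the greedy's choice at the first point of disagreement keeps \(O\) feasible and the same size. All function names are hypothetical.

```python
# Exchange-argument sketch on interval scheduling (a substituted
# concrete example). Greedy A picks by earliest finish time; for any
# optimal O != A, swapping A's interval in at the first point of
# disagreement keeps O compatible and the same size, so repeated
# swaps turn O into A without losing optimality.
from itertools import combinations

def greedy_schedule(intervals):
    chosen, free_at = [], float("-inf")
    for s, f in sorted(intervals, key=lambda iv: iv[1]):
        if s >= free_at:             # greedy: earliest finish first
            chosen.append((s, f))
            free_at = f
    return chosen

def compatible(sel):
    sel = sorted(sel, key=lambda iv: iv[1])
    return all(sel[i][1] <= sel[i + 1][0] for i in range(len(sel) - 1))

def exchange_check(intervals):
    A = greedy_schedule(intervals)
    k = len(A)
    for O in combinations(intervals, k):
        if not compatible(O):
            continue                 # size-k compatible sets are exactly the optima
        O = sorted(O, key=lambda iv: iv[1])
        for i, (a, o) in enumerate(zip(A, O)):
            if a != o:               # first point of disagreement
                swapped = O[:i] + [a] + O[i + 1:]
                assert compatible(swapped)  # the swap never hurts
                break
    return True

print(exchange_check([(0, 3), (2, 5), (4, 7), (6, 9), (1, 2), (3, 4)]))  # → True
```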
0.0.2 Greedy Inductive Lookahead Proof
0.0.2.1 Example problem: A frog can take 1..k steps each turn; find the least number of turns to reach step n.
State = turn.
Greedy solution: take the maximum jump of \(k\) steps at each state.
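A minimal sketch of this greedy strategy (hypothetical helper name; the last jump is capped so the frog lands exactly on step \(n\)):

```python
def min_turns(n, k):
    """Greedy: jump the maximum distance each turn, capped so we land
    exactly on step n. Since any turn covers at most k steps, this
    achieves the fewest turns: ceil(n / k)."""
    turns, pos = 0, 0
    while pos < n:
        pos += min(k, n - pos)   # greedy: take the largest useful jump
        turns += 1
    return turns

print(min_turns(10, 3))  # → 4 (jumps of 3, 3, 3, 1)
```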
Proof of correctness: show by induction that at every state, the greedy frog's position is at least as far as any other strategy's.
Intuitively, we believe that jumping farther in position correlates with an optimal (least) number of jumps.
Denote \(P(i,J)\) as the position after \(i\) jumps under strategy \(J\). \[P(i,J) \propto \text{Optimality}\]
We prove that our naive greedy algorithm \(J\) jumps at least as far as any theoretical optimal algorithm \(J^*\).
Prove: \(P(i+1,J^*) \leq P(i+1,J)\)
IH: \(P(i,J^*) \leq P(i,J)\)
LEM (law of excluded middle): \(P(i,J) \lt P(i+1,J^*) \lor P(i,J) \geq P(i+1,J^*)\)
The solid arrow labeled “T” denotes the obvious truth \(P(i,J) \leq P(i+1,J)\):
going from \(i\) to \(i+1\) means a jump was made, so the position is at least as far as before.
The dashed arrows labeled “lt” and “gte” come from the LEM, which drives the case analysis.
A filled arrowhead denotes a non-strict inequality \(\geq\).
An empty arrowhead denotes a strict inequality \(\gt\).
We want to prove the dotted arrow labeled “?”.
PROOF:
The LEM lets us split the goal into two cases.
- LEM Case “gte” \(P(i,J) \geq P(i+1,J^*)\)
Immediate from transitivity of inequalities:
Given: \(P(i+1,J^*) \leq P(i,J)\) by “gte” case of LEM
Given: \(P(i,J) \leq P(i+1,J)\) by obvious truth
Conclusion: \(P(i+1,J^*) \leq P(i+1,J)\)
Goal proved for one case of the LEM.
- LEM Case “lt” \(P(i,J) \lt P(i+1,J^*)\) :
Observation: consider the maximum distance one can move in a single transition, namely \(k\) steps.
Our greedy algorithm \(J\) takes \(k\) steps every turn, by definition of greediness.
There is no guarantee that the theoretical optimal algorithm \(J^*\) takes \(k\) steps.
The purple double arrow denotes equality: it records that our greedy algorithm will take exactly \(k\) steps.
\(\color{purple}{Greed: P(i+1,J)=P(i,J)+k}\)
Since \(J^*\) advances at most \(k\) per turn, the IH gives \(P(i+1,J^*) \leq P(i,J^*) + k \leq P(i,J) + k = P(i+1,J)\), which closes the goal.
Notice how \(P(i,J)+k\) behaves like a sink or a terminal object: every position reachable in one transition is bounded by it.
In general, finding terminal objects is a hint at optimality.
- Both cases of the LEM are proved.
By induction we have shown that our greedy algorithm always stays at least as far ahead in position, which correlates with optimality (the fewest turns).
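The “stays ahead” claim can also be checked exhaustively for small cases (a sketch with hypothetical names): enumerate every jump sequence \(J^*\) of length \(i\) and confirm \(P(i,J^*) \leq P(i,J)\).

```python
from itertools import product

def greedy_stays_ahead(k, max_turns):
    """Exhaustively check P(i, J*) <= P(i, J) for every strategy J*
    taking jumps in 1..k, for every turn count i up to max_turns."""
    for i in range(1, max_turns + 1):
        greedy_pos = i * k  # P(i, J): greedy adds k each turn
        for seq in product(range(1, k + 1), repeat=i):
            assert sum(seq) <= greedy_pos  # no strategy pulls ahead
    return True

print(greedy_stays_ahead(3, 5))  # → True
```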
Remarks: in hindsight, the LEM may not have been necessary, but the proof as given is still correct.