# CS5330 - Randomized Algorithm Review/Summary

So I took CS5330, a randomized algorithms course, this semester at NUS, taught by Prof. Seth Gilbert. The course content is nothing short of beautiful.

I will be describing the broad takeaways I had from the course. All the material + my notes for the course can be found here

Note - I was one of the average students in the course, so don't treat these as the only possible learnings from it. In case you are already acquainted with this stuff, just read this blog post recommended by our prof; it is the tl;dr version.

- Inequalities:
- We are taught a LOT of inequalities; this image lists all of those that were taught and are useful.

- We have probability inequalities in this course like the Union Bound, Markov, Chebyshev and Chernoff. These are taught and applied aggressively throughout the course. One important thing to note is that if you did badly in a probability course like MA2216 because you aren't good with pdfs/joint distributions/proofs for continuous distributions like Gaussian, Poisson, etc., it shouldn't affect your performance in this course, since here the R.V.s are generally Bernoulli or binomial in most cases, AND often we are not trying to get a precise answer for a probabilistic event; instead we are always trying to bound it. Getting a hang of where to add/drop terms when trying to bound things algebraically is a skill that one picks up during this course, and it is quite hard to become good at.
- We also exploit this kind of algebraic structure a lot in the course: $\left(1 - \frac{c}{n}\right)^{n} \leq e^{-c}$, where $c$ is a small positive integer.

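As a quick sanity check on how loose each of these bounds is in practice, here is a small Python experiment of my own (not from the course notes), bounding the same binomial tail with Markov, Chebyshev and a standard Chernoff form:

```python
import math

# Toy comparison: bound Pr[X >= 3n/4] for X ~ Binomial(n, 1/2) in three ways.
n = 100
mu = n / 2                        # E[X]
t = 3 * n / 4                     # tail threshold

# exact tail probability, by summing the binomial pmf
exact = sum(math.comb(n, k) for k in range(int(t), n + 1)) / 2 ** n

markov = mu / t                   # Markov: Pr[X >= t] <= E[X] / t
var = n / 4                       # Var[X] for Binomial(n, 1/2)
chebyshev = var / (t - mu) ** 2   # Chebyshev: Pr[|X - mu| >= a] <= Var[X] / a^2
delta = (t - mu) / mu             # Chernoff: Pr[X >= (1+d)mu] <= exp(-d^2 mu / 3), 0 < d <= 1
chernoff = math.exp(-delta ** 2 * mu / 3)

print(exact, chernoff, chebyshev, markov)
```

Each successive bound uses more information about the random variable (mean, then variance, then independence of the summands), and correspondingly gets tighter.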
- Karger's Min-Cut: An elegant algorithm for min-cut. Key insights were:
- If a problem of size $n$ (in this case finding the min-cut of a graph on $n$ vertices) can be shrunk to a smaller problem (in this case to size $\frac{n}{\sqrt{2}}$) with a decent success probability (here it is $\geq \frac{1}{2}$), then instead of decomposing the problem like a straight chain, i.e. going $n \to n-1 \to \dots \to 2$ and letting the success probability fall from $1$ all the way down to $\frac{2}{n(n-1)}$ (almost 0), we can instead do branching :)
- Branching here refers to this: let us define $\mathrm{Solve}(n)$ to be the solve function for the problem of size $n$ which returns the min-cut. Then instead of the chain method where we go from $\mathrm{Solve}(n) \to \mathrm{Solve}(n-1) \to \dots$, now we will do something like this
- $\mathrm{Solve}(n) = \min\left(\mathrm{Solve}_1\!\left(\frac{n}{\sqrt{2}}\right), \mathrm{Solve}_2\!\left(\frac{n}{\sqrt{2}}\right)\right)$, i.e. call two instances of smaller size and take the better answer. (Note - both of them initially work on the same graph of size $n$, but because of randomness they will be contracting edges randomly, i.e. the two instances of size $\frac{n}{\sqrt{2}}$ being called won't be clones of one another.)
- Now, analyzing this branching process, you can realize that it has $O(\log n)$ layers in the recursion tree and each layer doubles the nodes of the previous layer. The analysis is similar to a merge-sort algo, and the running time is slightly slower than the chaining method. BUT the benefit of this approach comes from the fact that the probability of correctness is amplified hugely. Effectively, the success probability of the algorithm = the probability that there is at least one root-to-leaf path in the recursion tree consisting of all success edges, where you can traverse an edge downwards successfully with probability $\frac{1}{2}$. This gets lower bounded by $\Omega\!\left(\frac{1}{\log n}\right)$ (using a non-trivial tree analysis argument), which is MUCH better than the earlier success odds of $\frac{2}{n(n-1)}$.
- A nice argument is also shown that there can be at most $\binom{n}{2}$ distinct min-cuts: the success probability of Karger's algo for a specific min-cut is at least $\frac{1}{\binom{n}{2}}$, so if you add it up over all distinct min-cuts it shouldn't exceed $1$, therefore #distinct min-cuts $\leq \binom{n}{2}$.
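The branching idea above can be sketched in Python. This is my own simplified rendition (edge-list multigraph, union-find contraction, a small brute-force base case), not the lecture's exact pseudocode:

```python
import math
import random

def contract(edges, target, rng):
    # Karger contraction: repeatedly merge the endpoints of a uniformly random
    # edge (via union-find) until only `target` super-vertices remain, then
    # return the edges that still cross between distinct super-vertices.
    parent = {v: v for e in edges for v in e}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    live = len(parent)
    while live > target:
        u, v = edges[rng.randrange(len(edges))]
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            live -= 1
    return [(u, v) for u, v in edges if find(u) != find(v)]

def karger_stein(edges, rng):
    # Branching amplification: contract to ~n/sqrt(2) (which preserves a fixed
    # min-cut with probability >= 1/2), recurse twice independently on the two
    # contracted copies, and keep the smaller cut found.
    n = len({v for e in edges for v in e})
    if n <= 6:
        # small base case: several full contractions down to 2 super-vertices
        return min(len(contract(edges, 2, rng)) for _ in range(20))
    t = math.ceil(1 + n / math.sqrt(2))
    return min(karger_stein(contract(edges, t, rng), rng),
               karger_stein(contract(edges, t, rng), rng))

# example: two 5-cliques joined by a single bridge edge, so the min cut is 1
edges = [(i, j) for i in range(5) for j in range(i + 1, 5)]
edges += [(i, j) for i in range(5, 10) for j in range(i + 1, 10)]
edges += [(0, 5)]
rng = random.Random(1)
best = min(karger_stein(list(edges), rng) for _ in range(5))
```

Repeating the whole branched run a few times and taking the minimum is exactly the probability-amplification trick the analysis justifies.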

- QuickSort Analysis:
- Two ways to analyze the expected complexity. An interesting thing to learn was that JUST commenting on the Expected Time Complexity of an algorithm is NOT enough to say it is a good/fast algorithm. Think about plotting probability (y-axis) vs time taken (x-axis): it can happen that the tail doesn't fall rapidly in this graph, so although the mean is low, there is enough variance that your algo often runs super slowly.
- I tried to think of an algorithm with this kind of slowness, but I think it is hard to formalize such a case.

- Because if this happens then $\mathbb{E}[T]$ no longer remains in $O(n \log n)$ and instead goes up, since the heavy tail also contributes to $\mathbb{E}[T]$.
- Therefore we also analyze with what probability the time complexity is far away from the mean, and try to show that this probability is very low. Aka we show $\Pr[T > c \cdot n \log n] \leq \frac{1}{\mathrm{poly}(n)}$ for a suitable constant $c$.
- The insight was that, just like in statistics, the mean of a distribution is NOT an ideal way to boil down all the information about the distribution; similarly, here, boiling all the information down to $\mathbb{E}[T]$ and commenting on it is NOT enough to be confident about the algorithm.
- Additional References: Must read, about a unifying way to view mean, median, mode of a given statistic
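To make the "mean plus concentration" picture concrete, here is a small Python sketch of my own (not course material): a randomized quicksort instrumented to count comparisons, so you can see the counts cluster tightly around roughly $2 n \ln n$ rather than just quoting the expectation.

```python
import math
import random

def quicksort(a, rng):
    # randomized quicksort; returns (sorted list, number of comparisons) so we
    # can inspect how the running time concentrates around its mean
    if len(a) <= 1:
        return list(a), 0
    pivot = a[rng.randrange(len(a))]      # uniformly random pivot
    left = [x for x in a if x < pivot]
    mid = [x for x in a if x == pivot]
    right = [x for x in a if x > pivot]
    ls, lc = quicksort(left, rng)
    rs, rc = quicksort(right, rng)
    # len(a) - 1 comparisons against the pivot at this level
    return ls + mid + rs, lc + rc + len(a) - 1

rng = random.Random(0)
n = 2000
# 50 independent runs on the same input; expectation is ~ 2 n ln n comparisons
counts = [quicksort(list(range(n)), rng)[1] for _ in range(50)]
```

The interesting observation is the spread of `counts`: for quicksort the standard deviation is only $\Theta(n)$, so the empirical counts hug the $O(n \log n)$ mean, which is exactly the "tail falls fast" behaviour the lecture wants you to verify.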

- Stable Matching:
- Not a big topic. We were primarily introduced to the principle of deferred decisions and stochastic domination. The problem itself was put forward as a balls-and-bins problem.
- Stochastic domination, although a simple concept, turns out to be very powerful when analyzing something. It basically says that instead of analyzing a complicated probabilistic event, we upper bound that event by a simpler one and analyze the simpler one.
- Example: Algorithm $A$ is successful with probability $p(n)$, where $p(n)$ is some complicated function of $n$. But you know $p(n) \geq \frac{1}{2}$. Then just say: let us be pessimistic and assume it is successful with probability exactly $\frac{1}{2}$. Then if for this simpler process we realize that it runs in time $T(n)$ with high probability, we CAN SAY that Algorithm $A$ definitely runs in at most $T(n)$ with the same high probability.
- Additional References: Wiki link for stochastic domination
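A tiny simulation of my own (not from the lecture) illustrating the pessimistic-coupling idea: a trial succeeds with some messy probability $p$ that we only know satisfies $p \geq \frac{1}{2}$, so the number of trials until the first success is stochastically dominated by a Geometric(1/2) variable.

```python
import math
import random

rng = random.Random(42)

def trials_until_success(p):
    # count Bernoulli(p) trials until the first success
    t = 1
    while rng.random() >= p:
        t += 1
    return t

# stand-in for a "complicated" success probability that we only know is >= 1/2
messy_p = 0.5 + 0.4 * abs(math.sin(7))

real = sorted(trials_until_success(messy_p) for _ in range(10_000))
pessimistic = sorted(trials_until_success(0.5) for _ in range(10_000))
# stochastic domination in action: every quantile of the real process should
# sit at or below the corresponding quantile of the pessimistic Geometric(1/2)
```

Analyzing the pessimistic process (a plain geometric random variable) is easy, and the domination transfers any high-probability upper bound back to the real process.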

- Hashing:
- Chaining (open hashing) is reduced to a balls-and-bins problem and analyzed using that.
- Linear Probing has a somewhat hard analysis to grasp. The intuition of making a binary tree to define clusters is still not super clear to me.
- I guess one of the key points is:
- We show that the probability that a contiguous segment of length $\ell$ around a slot is "crowded" is small, i.e. exponentially decreasing with $\ell$. To show this we need 4th-moment inequalities and a non-trivial/magical idea involving a "binary tree" over the table and a "crowded contiguous segment" definition. I am still unclear as to why we need all these components for the proof to go through, and I am still in the process of trying to understand this portion.
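The balls-and-bins reduction for chaining is easy to play with directly. A quick simulation of my own (not from the notes): hash $n$ keys into $n$ buckets and look at the longest chain, which should sit around $\frac{\ln n}{\ln \ln n}$, far below any naive linear bound.

```python
import random

rng = random.Random(0)
n = 100_000
load = [0] * n                     # load[b] = length of the chain in bucket b
for _ in range(n):
    load[rng.randrange(n)] += 1    # each key hashes to a uniformly random bucket

max_load = max(load)               # longest chain, ~ ln n / ln ln n w.h.p.
avg_load = sum(load) / n           # exactly 1 key per bucket on average
```

The gap between `avg_load` (exactly 1) and `max_load` (a small but super-constant number) is exactly what the balls-and-bins analysis of chaining quantifies.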

- Flajolet Martin:
- Perhaps the MOST insane algorithm I have ever seen. The algorithm is super short, with just 3-4 steps mainly. But as a competitive programmer, the magical thing is that it enables you to "COUNT THE NUMBER OF DISTINCT ELEMENTS IN A STREAM/ARRAY USING A MIN FUNCTION AGGREGATION". It has two parameters for controlling the algorithm: the first to improve its closeness to the optimal answer in terms of delta, and the second for the error probability with which it does not fall into the delta range. From a practical competitive programming standpoint, the algorithm is slightly redundant since it requires a lot of runs to reduce both these errors enough to pass on online judges. But still, its idea is super fascinating.
- We discuss the FM algorithm, then FM+ which takes the average of a lot of instances of the FM algorithm.
- This averaging of a lot of results is USEFUL in reducing the variance of the algorithm and thus making delta smaller. (This concept is somewhat more general in CS, and is equivalent to why people use random forests over decision trees in ML to reduce the variance of their results.)
- Then we make the FM++ algorithm, which runs a lot of instances of FM+, sorts all the answers, and returns the median. This is done because for the median to go bad (i.e. lie in the error region), MORE than half of the FM+ runs need to go bad, and in FM+ we control the bad probability by some constant (e.g. in lecture we used $\frac{1}{4}$). Now effectively FM++ will fail only if more than half of the runs give a bad result, which, we can see intuitively, decreases exponentially with the number of runs of FM+ we do. It is kind of like saying you have a coin which with probability $\frac{1}{4}$ gives HEADS and with the remaining probability gives TAILS: what is the probability that more than 50% of $k$ tosses give HEADS? This can be seen to decrease exponentially with $k$ using the Chernoff Bound.
- The prof also told us that this "FIRST MEAN, THEN MEDIAN" technique is a more general technique in randomized algos, and he also tested it in the midterm examination.
- Additional References: Increment counter algorithm, with a similar idea, tested for midterm
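The FM → FM+ → FM++ pipeline fits in a few lines of Python. This is my own simplified sketch, not the lecture's exact construction: salted SHA-256 stands in for the independent random hash functions, and I use the plain minimum-hash estimator (min of $D$ distinct uniform hashes concentrates near $\frac{1}{D+1}$).

```python
import hashlib
import statistics

def h(x, salt):
    # deterministic pseudo-random hash of x into (0, 1]; different salts play
    # the role of independent hash functions (an assumption of this sketch)
    d = hashlib.sha256(f"{salt}:{x}".encode()).digest()
    return (int.from_bytes(d[:8], "big") + 1) / 2 ** 64

def fm_min(stream, salt):
    # core FM idea: the minimum hash depends only on the DISTINCT elements,
    # and for D distinct elements it concentrates near 1/(D+1)
    return min(h(x, salt) for x in stream)

def fm_plus(stream, salts):
    # FM+: average many independent minima to shrink the variance, then invert
    avg = sum(fm_min(stream, s) for s in salts) / len(salts)
    return 1 / avg - 1

def fm_pp(stream, k_avg=30, k_med=9):
    # FM++: median of several FM+ estimates; the median goes bad only if more
    # than half of the FM+ runs go bad, which decays exponentially (Chernoff)
    ests = [fm_plus(stream, range(j * k_avg, (j + 1) * k_avg))
            for j in range(k_med)]
    return statistics.median(ests)
```

For example, `fm_pp([i % 100 for i in range(1000)])` should land reasonably close to the true distinct count of 100, and it returns exactly the same value on any stream with the same set of distinct elements.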

- Min Set Cover:
- Model the problem as an ILP (Integer Linear Program). Then hand-wave and say ILP is ALMOST like LP. Use an LP solver as a black box. But wait...now the solutions are real numbers and not integers, so you use ROUNDING to get the results. Rounding naively might not be good, and the way you round your results is problem specific. In the case of Set Cover the prof showed us a specific randomized rounding method which worked. Using that rounding method, he showed that the algorithm gives a valid answer with constant probability, which is reasonably high but still only a constant. NOTE - this is NOT the probability of being OPTIMAL, but just of giving a VALID SET COVER.
- Now the interesting issue: if you think we might be able to do something like Karger branching / FM++ and combine multiple runs of this algo to increase this probability, you are correct. HOWEVER, there is no method to merge the answers which keeps the answer small. So what you do is RUN this instance $O(\log n)$ times and take the union of all the sets found, which increases the VALID-SET-COVER probability exponentially, thus making the algorithm work w.h.p. (with high probability); however, the answer NO LONGER REMAINS CLOSE TO OPTIMAL. Instead, the union of the $O(\log n)$ instances effectively makes the size of the set cover $O(\log n) \cdot$ (size of the optimal set cover).
- An interesting thing to note here is that this shows you don't get a constant-factor approximation of min set cover (whose optimization version is NP-Hard) by using randomized algorithms this way; you get a logarithmic-factor approximation of the problem with very high probability.
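Here is a toy Python sketch of the randomized-rounding step, under my own assumptions: the instance and the fractional solution `x` are made up (each element sits in exactly two sets, so $x_S = 0.5$ is LP-feasible), and the LP-solver call is skipped entirely.

```python
import math
import random

# Made-up instance: 10 elements, 6 candidate sets; every element is in exactly
# two sets, so x_S = 0.5 for all S is a feasible fractional LP solution.
universe = set(range(10))
sets = [{0, 1, 2, 3}, {3, 4, 5}, {5, 6, 7}, {7, 8, 9}, {0, 4, 8}, {1, 2, 6, 9}]
x = [0.5] * len(sets)   # pretend output of a black-box LP solver

def union_of(chosen):
    out = set()
    for i in chosen:
        out |= sets[i]
    return out

def randomized_rounding(rng, rounds):
    # one round: pick each set independently with probability equal to its LP
    # value; union O(log n) rounds so that every element is covered w.h.p.
    chosen = set()
    for _ in range(rounds):
        chosen |= {i for i, xi in enumerate(x) if rng.random() < xi}
        if union_of(chosen) >= universe:
            break
    return chosen

rng = random.Random(0)
cover = randomized_rounding(rng, rounds=math.ceil(math.log2(len(universe))) + 3)
```

Each extra round multiplies every element's miss probability by another constant factor, which is the exponential boost described above; the price is the $O(\log n)$ blow-up in the cover's size.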

- Random Walks and Expert Learning:
- All these techniques are fairly advanced compared to the topics discussed above, and the fact that we did not have problem sets on them (all these topics are post week 7-8) highlights that some technicalities in these lectures were hand-waved OR not meant to be understood completely by an average Joe like me. So I don't think I am in a spot to give any insights on these topics.

- Probabilistic Methods:
- This semester the prof did not go through this topic; however, if you search the internet it is a fairly standard topic in many randomized algorithms courses. The problem that the probabilistic method tries to tackle is to comment on the existence / a bound of a certain deterministic object using a probability argument.
- Example: just above, at point 1, we saw that Karger's algorithm gives us a side result that the number of distinct min-cuts is bounded by $\binom{n}{2}$; this is an example of a probabilistic-method use case.
- Another example that one can try out is to show that any 3-SAT instance with $m$ clauses has at least one assignment which satisfies $\geq \frac{7m}{8}$ of the clauses.
- Additional References: MAX-3SAT Notes from another uni
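The MAX-3SAT argument can be checked numerically on a small instance. The instance below is made up for illustration (4 variables, $m = 8$ clauses, three distinct variables per clause): a uniformly random assignment falsifies a clause with probability $(\frac{1}{2})^3$, so the expected number of satisfied clauses is exactly $\frac{7m}{8}$, and some assignment must do at least that well.

```python
from itertools import product

# Hypothetical 3-SAT instance; a literal is (var_index, is_positive).
clauses = [
    [(0, True), (1, True), (2, True)],
    [(0, False), (1, True), (3, True)],
    [(1, False), (2, True), (3, False)],
    [(0, True), (2, False), (3, True)],
    [(0, False), (2, True), (1, True)],
    [(1, False), (3, True), (0, True)],
    [(2, False), (3, False), (1, True)],
    [(0, False), (3, True), (2, True)],
]
m = len(clauses)

def count_satisfied(assignment):
    # a clause is satisfied if at least one of its literals evaluates to True
    return sum(any(assignment[v] == pos for v, pos in c) for c in clauses)

all_assignments = list(product([False, True], repeat=4))
# expectation over a uniform assignment is 7m/8; the max is at least the mean
average = sum(count_satisfied(a) for a in all_assignments) / len(all_assignments)
best = max(count_satisfied(a) for a in all_assignments)
```

Since the average over all $2^4$ assignments equals $\frac{7m}{8}$, the maximum is at least $\frac{7m}{8}$, which is the whole probabilistic-method argument in one line.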

Conclusion: The course structure is amazing and they teach a lot of good stuff. Prof. Seth Gilbert explains these algorithms very intuitively, and after a few of his lectures you get a feel for how the inequalities and the math should work out. As for the grading this semester, the module primarily consisted of Problem Sets, a Midterm, an Experimental Project and an Explanatory Paper on a research topic related to randomized algorithms. The module does not have a final exam, so…the Random Walks and Expert Learning portion is NOT really graded anywhere.