Safekipedia

Randomized algorithm

Adapted from Wikipedia · Adventurer experience

A simple diagram showing how two points (vertices) in a graph can be connected or combined.

A randomized algorithm is a step-by-step plan that uses randomness as part of how it works. Instead of doing exactly the same thing each time, it makes random choices to decide what to do next. This helps it work well on average, even though its behavior (and sometimes its answer) can change from run to run.

There are two main types of these algorithms. One type, called a Las Vegas algorithm, always gives the right answer but may take different amounts of time because of its random choices. An example is Quicksort, a way to sort lists quickly. The other type, called a Monte Carlo algorithm, might sometimes give the wrong answer or not finish, depending on its random choices. These are useful for problems where finding an exact answer quickly is very hard.

In real use, computers often stand in for true randomness with something called a pseudorandom number generator: a fixed rule that produces numbers which merely look random. This works well most of the time, but it is not a perfect substitute for a truly random source.
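
For example, Python's built-in random module is a pseudorandom generator: give it the same starting seed and it produces the same "random" numbers every time. A short illustration (the seed value 42 is just an arbitrary choice):

```python
import random
import secrets

# A pseudorandom generator is completely determined by its seed:
# seeding it the same way gives the same sequence every run.
rng = random.Random(42)
print([rng.randint(1, 6) for _ in range(5)])  # same five dice rolls each run

# When unpredictability really matters (e.g. cryptography), programs
# use randomness gathered by the operating system instead.
print(secrets.randbelow(100))  # unpredictable, different each run
```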

Randomized algorithms are important because they can be the best way to solve some difficult problems, even if they don't always give the exact answer every time.

Motivation

Imagine you have a list of items where half are labeled "a" and half are labeled "b". Your task is to find an "a" in this list.

One way to do this is a Las Vegas algorithm. This method keeps picking a random item from the list until it finds an "a". It will always find an "a" eventually, but it might take several tries. Because half the items are "a", it needs only about two tries on average.
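
Here is a minimal Python sketch of that Las Vegas idea, using the toy list of "a" and "b" items described above:

```python
import random

def find_a_las_vegas(items):
    # Keep picking random positions until we hit an 'a'.
    # The answer is always correct; only the number of tries varies.
    while True:
        i = random.randrange(len(items))
        if items[i] == 'a':
            return i
```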

Another way is a Monte Carlo algorithm. This method also picks random items, but it stops after a set number of tries, called k. If it finds an "a" within those tries, it succeeds; if not, it fails. This method has a guaranteed maximum number of steps, but there is a small chance (about 1 in 2^k here) that it never picks an "a".
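
And a matching sketch of the Monte Carlo version, which gives up after k tries:

```python
import random

def find_a_monte_carlo(items, k):
    # Try at most k random positions, then give up. Fast and
    # predictable, but it can fail: with half the items labeled 'a',
    # each try misses with probability 1/2, so the chance of total
    # failure is about (1/2)**k.
    for _ in range(k):
        i = random.randrange(len(items))
        if items[i] == 'a':
            return i
    return None  # failure: no 'a' found within k tries
```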

Randomized algorithms are helpful when someone might try to trick the algorithm with bad inputs. They are often used in areas like cryptography and quantum computing, where true randomness is very important.

Computational complexity

Computational complexity theory helps us understand algorithms that use randomness. We think of these algorithms as special kinds of machines called probabilistic Turing machines.

There are two main types of randomized algorithms: Las Vegas algorithms, which always give the right answer but can take different amounts of time, and Monte Carlo algorithms, which usually give the right answer but can sometimes make mistakes.

We group problems based on how well randomized algorithms handle them. One group is RP: problems where a quick randomized algorithm never wrongly says "yes", and when the true answer is "yes", it says so at least half the time. Another group is BPP: problems a quick randomized algorithm can solve with only a small chance of error either way. BPP is like the randomized version of P, the class of problems we can solve quickly without randomness.
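
The exact error chances in these definitions are not too important, because a test with a bounded chance of error can be repeated to make mistakes much rarer. A minimal sketch of that standard trick, where test stands for any hypothetical yes/no randomized procedure:

```python
def amplify(test, x, repeats=21):
    # Run an error-prone yes/no test many times and take a majority
    # vote. As long as each run errs with probability below 1/2,
    # independent repetitions drive the overall error down rapidly.
    yes_votes = sum(1 for _ in range(repeats) if test(x))
    return yes_votes > repeats // 2
```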

Early history

Sorting

Quicksort was invented by Tony Hoare in 1959 and published in 1961. That same year, Hoare also published a way to quickly find the middle value (median) of a list, called the quickselect algorithm. For many years, people wondered whether the median could be found just as fast without randomness; that question was not settled until 1973, when a guaranteed fast deterministic method was found.

Number theory

In 1917, Henry Cabourn Pocklington found a clever randomized way to compute square roots in modular arithmetic for certain numbers. Later, in 1970, Elwyn Berlekamp found a fast randomized way to factor polynomials. Then in 1977, Robert M. Solovay and Volker Strassen discovered a fast randomized test to check whether a number is prime (a number that can only be divided by 1 and itself). Shortly after, Michael O. Rabin showed that Gary Miller's primality test from 1976 could also be turned into a fast randomized test.

Data structures

One of the first data structures to make use of randomness was the hash table, introduced by Hans Peter Luhn at IBM in 1953. In the following years, other clever ways to organize data were found, like linear probing in 1954. Later, ideas like universal hash functions made these tools even better.

Other helpful tools came along too. In 1970, Burton Howard Bloom invented the Bloom filter, a compact way to check whether an item might be in a collection. Then in 1989, Raimund Seidel and Cecilia R. Aragon created a randomized kind of search tree called a treap, and William Pugh invented another randomized structure called a skip list.
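
As one concrete example, here is a minimal Bloom filter sketch in Python. The table size and number of hash functions are arbitrary choices, and SHA-256 simply stands in for the filter's hash functions:

```python
import hashlib

class BloomFilter:
    # A bit array plus several hash functions. "No" answers are
    # always correct; "maybe" answers are occasionally wrong.
    def __init__(self, size=1024, num_hashes=3):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = [False] * size

    def _positions(self, item):
        # Derive num_hashes bit positions from the item.
        for seed in range(self.num_hashes):
            digest = hashlib.sha256(f"{seed}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        # True means "maybe in the set"; False means "definitely not".
        return all(self.bits[pos] for pos in self._positions(item))
```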

Implicit uses in combinatorics

Before randomness was widely used in computing, the mathematician Paul Erdős used random choices to prove that certain mathematical objects must exist. This clever technique became known as the probabilistic method. Erdős first used it in 1947 to show the existence of certain graphs studied in Ramsey theory. Later, in 1959, he used a more elaborate random argument to prove that there are graphs with no short cycles that still need many colors to paint them.

Examples

Quicksort

Quicksort is a common way to sort numbers. A simple version can take a long time if the numbers are already sorted. But if we choose the pivot (the number used to split the list) at random, it usually finishes quickly, no matter what order the numbers start in.
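
A minimal sketch of randomized quicksort in Python (this simple version copies sublists for clarity rather than sorting in place):

```python
import random

def quicksort(items):
    # Picking the pivot at random means no particular input
    # (not even an already-sorted list) is reliably slow.
    if len(items) <= 1:
        return items
    pivot = random.choice(items)
    less = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```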

Randomized incremental constructions in geometry

In computational geometry, one way to build structures like a convex hull or a Delaunay triangulation is to shuffle the input points and then add them one at a time. The random order spreads the work out evenly and keeps the expected running time low.
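
The general pattern can be sketched like this, where empty_structure and insert are hypothetical placeholders for whatever geometric structure is being built:

```python
import random

def randomized_incremental(points, empty_structure, insert):
    # Shuffle first, then insert one point at a time. The random
    # order evens out the work, keeping the expected total cost low.
    order = list(points)
    random.shuffle(order)
    structure = empty_structure()
    for p in order:
        insert(structure, p)
    return structure
```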

Min cut

Figure 1: Contraction of vertices A and B

Main article: Karger's algorithm

Karger’s algorithm finds a minimum cut: the smallest set of connections (edges) that, if removed, would split the graph into two pieces. It works by repeatedly merging randomly chosen connected points until only two groups remain, then counting the connections left between those two groups. A single run can miss the true minimum, so the process is repeated several times and the best answer is kept.
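
A minimal Python sketch of one contraction run. It assumes a connected graph given as a list of (u, v) edge pairs, and uses a simple union-find to track which original points have been merged:

```python
import random

def karger_min_cut_once(edges, num_vertices):
    # One run of Karger's contraction: merge random edges until only
    # two groups remain, then count the edges crossing between them.
    parent = list(range(num_vertices))

    def find(v):
        # Follow merge pointers to the group's representative.
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    groups = num_vertices
    while groups > 2:
        u, v = random.choice(edges)
        ru, rv = find(u), find(v)
        if ru != rv:          # skip edges inside an already-merged group
            parent[ru] = rv   # contract: merge the two groups
            groups -= 1
    return sum(1 for u, v in edges if find(u) != find(v))
```

A single run can overshoot the true minimum, so in practice the answer is the smallest cut seen over many independent runs.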

Derandomization

Randomness is a resource that algorithms use, much like time and memory. Derandomization is the process of reducing or removing this randomness. It is still unknown whether every randomized algorithm can be derandomized without making it much slower.

There are, however, special techniques for derandomizing particular algorithms, such as the method of conditional probabilities.

Where randomness helps

Sometimes, using randomness provably helps. For example, if you have an extremely long string of letters, half labeled one way and half the other, and you need to find a letter of a specific kind, a computer that makes random choices finds one quickly on average, while a computer following fixed steps can be forced to examine a huge part of the string.

Randomness is also useful in systems that only need answers that are close to correct most of the time. It can help two computers check whether they hold the same piece of information while exchanging far less data than any method without randomness. There are even ways to estimate the volume of certain complicated shapes using randomness that cannot be matched by any method without it.
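
The equality check just mentioned can be sketched with random fingerprints: instead of comparing two long strings piece by piece, compare short randomized summaries of them. A minimal Python sketch, where the prime and the polynomial hashing scheme are just one common choice:

```python
import random

P = (1 << 61) - 1  # a large prime for the fingerprint arithmetic

def fingerprint(data: bytes, base: int) -> int:
    # Treat the string as a polynomial and evaluate it at a random
    # base, modulo a large prime.
    h = 0
    for byte in data:
        h = (h * base + byte) % P
    return h

def probably_equal(x: bytes, y: bytes) -> bool:
    # Equal strings always match. Different strings of length n match
    # only if the random base hits a root of their difference
    # polynomial: probability at most about n / P, which is tiny.
    base = random.randrange(2, P)
    return fingerprint(x, base) == fingerprint(y, base)
```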

This article is a child-friendly adaptation of the Wikipedia article on Randomized algorithm, available under CC BY-SA 4.0.

Images from Wikimedia Commons.