# Greedy Algorithm

A greedy algorithm for optimization problems tries to iteratively improve a solution by maximizing the objective function. Starting from a current solution, it chooses the next candidate solution by taking a, normally small, step according to some step-selecting function; if the step can be applied at all, the objective function reveals whether the solution has improved. If so, the trial solution is accepted as the currently best solution; if not, a new step is chosen (e.g. by simply reducing the step width) until an improvement can be achieved or the algorithm has to stop. Normally, a convergence criterion also allows the search to end before too many steps are taken that do not substantially improve the objective function further. A straightforward step-selecting function is the local gradient of the objective function.
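The accept-or-shrink loop described above can be sketched as follows. This is a minimal one-dimensional illustration, not a definitive implementation: the function name, the fixed candidate steps in both directions, and the halving of the step width are all choices made here for the sketch.

```python
def greedy_maximize(f, x0, step=1.0, min_step=1e-6, max_iter=1000):
    """Greedy 1-D maximization sketch: accept a step if it improves the
    objective, otherwise halve the step width; stop once the step width
    falls below min_step (the convergence criterion)."""
    x, fx = x0, f(x0)
    for _ in range(max_iter):
        improved = False
        for candidate in (x + step, x - step):
            fc = f(candidate)
            if fc > fx:            # the objective reveals an improvement
                x, fx = candidate, fc
                improved = True
                break
        if not improved:
            step /= 2.0            # reduce the step width and retry
            if step < min_step:    # no substantial improvement left: stop
                break
    return x, fx

# Concave (hence single-maximum) objective with its maximum at x = 3
best_x, best_f = greedy_maximize(lambda x: -(x - 3.0) ** 2, x0=0.0)
```

On this concave example the loop walks straight to the maximum; the interesting failure mode, getting stuck on a multimodal objective, is discussed below.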

Greedy algorithms are normally fast, and if the problem is known to be convex (more precisely, if it can be reduced to a problem with optimal substructure), they converge to the globally optimal solution. However, the threat of being trapped in a local maximum is ever-present and can be shown in simple examples:

As you can see above, even in the one-dimensional case greedy algorithms can be trapped in a local maximum. Sometimes restarting from different points can improve the situation, as in the hill-climbing scenario: if the starting points are sampled densely enough, the algorithm will eventually converge to the global maximum. The situation in a multi-dimensional setting is normally even worse, with different length scales in different dimensions making such restart sampling cumbersome. Note that convex problems can always be solved with greedy algorithms.
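The restart strategy can be sketched in the same greedy framework. The piecewise objective below is a hypothetical example constructed for illustration: it has a local maximum near x = -1 and the global maximum at x = 2, so a single greedy run started on the wrong side gets trapped, while densely sampled restarts recover the global maximum.

```python
import random

def hill_climb(f, x0, step=0.1, min_step=1e-5):
    """One greedy run: accept improving steps, halve the step otherwise."""
    x, fx = x0, f(x0)
    while step >= min_step:
        moved = False
        for candidate in (x + step, x - step):
            if f(candidate) > fx:
                x, fx, moved = candidate, f(candidate), True
                break
        if not moved:
            step /= 2.0
    return x, fx

# Hypothetical multimodal objective: local maximum 0 at x = -1,
# global maximum 1 at x = 2
def f(x):
    return -0.1 * (x + 1) ** 2 if x < 0.5 else 1.0 - (x - 2) ** 2

random.seed(0)
starts = [random.uniform(-4, 4) for _ in range(20)]   # dense-enough sampling
best_x, best_f = max((hill_climb(f, s) for s in starts), key=lambda r: r[1])
```

A run started at, say, x = -3 stalls at the local maximum with objective value 0; with 20 restarts at least one start lands in the basin of x = 2 and the best result over all runs is the global maximum.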

Last update: 2 September 2021