Is there a systematic classification of the most common computer science algorithms?

Metronome

In combinatorics, the twentyfold way (and its better-known predecessor, the twelvefold way) classifies counting problems within a single mathematical framework: counting functions with certain properties from domains with certain properties to codomains with certain properties. While not exhaustive of all combinatorics problems, this gives some structure to the space instead of treating one problem as fundamentally about ice cream, another about table arrangements, another about circular table arrangements, another about books on a shelf, etc. Granted, this structure may only be helpful for mentally organizing the problems and for modeling real-world problems, and not necessarily suggestive of any similarity in their solution formulas.
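
To make that concrete: one cell of the twelvefold way counts all functions from an n-element set to a k-element set, giving k^n; requiring injectivity gives k(k-1)···(k-n+1); requiring surjectivity gives k!·S(n,k), where S(n,k) is a Stirling number of the second kind. Each classical word problem (ice cream flavors, books on a shelf, and so on) just picks out one of these cells.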

Is there some similar structure for computer science algorithms? It is clear that many common algorithms are just tweaked versions of one another. Is there an abstract framework which ignores the physical interpretations such as knapsacks, fluid flow, bin packing, etc., and aims to arrange as many algorithms as possible into a systematic classification?
 
I think complexity classes are the closest you can get, simply because algorithms depend on their purpose. You can hardly compare a sorting algorithm with a factorization algorithm, but you can compare two sorting algorithms or two factorization algorithms, and in general compare runtime, space, and whether or not an oracle is available.
 
Is there some similar structure for computer science algorithms?
If you can make a list (or lists) of the algorithms you have in mind (e.g. sorting algorithms, factorization algorithms, knapsack, fluid flow, bin packing, etc.) that are different yet (possibly) similar "in your mind", we can start to ferret those out.
 
I don't think complexity classes are the partition I'm after.
If you can make a list (or lists) of the algorithms you have in mind (e.g. sorting algorithms, factorization algorithms, knapsack, fluid flow, bin packing, etc.) that are different yet (possibly) similar "in your mind", we can start to ferret those out.
The context in which this arose is that I'm writing a very dumb, limited-functionality Magic: The Gathering simulator to estimate the early-game impact of different mana bases. Given a hand, I want an algorithm that plays lands and nonlands over a fixed number of turns (say 5) in an order that minimizes unused mana, perhaps penalizing unused mana on earlier turns more heavily.
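
To be concrete, here is a rough brute-force version of what I mean; the penalty weights and the assumption of a land drop every turn are placeholders I made up, not part of the real simulator:

from itertools import product

# Assign each nonland card in hand to one of turns 1..5, or leave it unplayed.
# Optimistic simplification: a land drop every turn, so turn t has t mana.
# Unused mana on earlier turns is penalized more heavily.

TURNS = 5
PENALTY = [5, 4, 3, 2, 1]   # made-up weights: unused mana on turn 1 hurts most

def wasted_mana(costs, assignment):
    """costs: mana costs of nonland cards; assignment[i] in 0..TURNS (0 = not played)."""
    spent = [0] * (TURNS + 1)
    for cost, turn in zip(costs, assignment):
        spent[turn] += cost
    total = 0
    for t in range(1, TURNS + 1):
        if spent[t] > t:         # overspent a turn: this assignment is infeasible
            return float("inf")
        total += PENALTY[t - 1] * (t - spent[t])
    return total

def best_plan(costs):
    plans = product(range(TURNS + 1), repeat=len(costs))   # 6^len(costs) assignments
    return min(plans, key=lambda a: wasted_mana(costs, a))

print(best_plan([1, 2, 2, 3]))   # e.g. a hand with spells costing 1, 2, 2, 3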

My first idea was to formulate it as a multiple knapsack problem, with 1-slot, 2-slot, 3-slot, 4-slot, and 5-slot knapsacks corresponding to the 5 turns. Then I considered that playing cards from the hand feels a lot like a max flow or min cost flow problem, in which the source is the hand, the sink is the stack, and the capacity constraints are the mana requirements. The discreteness of the card problem shouldn't matter due to the integral flow theorem (I'm not sure if this still holds for min cost flow).
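
Here is roughly the flow graph I'm picturing, sketched with networkx's max_flow_min_cost. The hand, the per-turn mana supply, and the edge weights are hypothetical, and as the comments note, this relaxation can split a single card's cost across two turns, so it is really a lower bound on achievable mana usage rather than a faithful model:

import networkx as nx

# hand -> card nodes -> turn nodes -> sink, with flow measured in mana.
# Turn t supplies t mana (again assuming a land per turn); lower weights on
# earlier turns make the min-cost solver prefer spending mana early.
# Caveat: integer flow does not force a card's cost to land on a single turn.

costs = {"bolt": 1, "bear": 2, "knight": 2, "golem": 3}   # hypothetical hand
TURNS = 5

G = nx.DiGraph()
for card, c in costs.items():
    G.add_edge("hand", card, capacity=c, weight=0)         # each card supplies c mana once
    for t in range(c, TURNS + 1):                          # castable from turn c onward
        G.add_edge(card, f"turn{t}", capacity=c, weight=0)
for t in range(1, TURNS + 1):
    G.add_edge(f"turn{t}", "sink", capacity=t, weight=t)   # earlier turns are cheaper

flow = nx.max_flow_min_cost(G, "hand", "sink")
for t in range(1, TURNS + 1):
    used = sum(flow[card].get(f"turn{t}", 0) for card in costs)
    print(f"turn {t}: used {used} of {t} mana")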

I feel like if someone asked me to describe the difference between problems in the general family of knapsack and the general family of flow networks, my answer would be heavily tied to the physical analogy of each problem. In combinatorics (which I've only encountered as a cursory excursus while studying discrete math), the presentation of each problem as living in a distinct physical setting always felt unhelpful for the purpose of modeling, whereas the twelvefold/twentyfold way puts many problems into the same framework, foregrounding instead the essential questions that classify them, such as "Are the functions being counted right-total (surjective) or not?" Within that framework there is a sense of how many hops away one combinatorics problem is from another.

I thought something similar might be helpful for algorithms. Of course, the more algorithms that could be fit into such a framework, the more useful it would be.
 