In this tutorial, I will talk a little about the classification, stability, and complexity of each algorithm; you can read more on these topics further down in this article. Then the advantages and disadvantages of each algorithm are presented, which should give you the big picture of each one. At the end there is a conclusion to sum things up and to emphasize certain points.
Sorting the elements is an operation that is encountered very often in the solving of problems. For this reason, it is important for a programmer to learn how sorting algorithms work.
A sorting algorithm puts the elements of a data set in a certain order; this order can be from greatest to smallest or the opposite, and the programmer determines it. In practice, the most common orders are numerical and, for strings (chars), lexicographical.
Given an array of elements a1, a2, …, an, it is required to arrange the elements so that the following condition holds:
a1 <= a2 <= a3 <= … <= ai <= aj <= … <= an.
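This condition can be checked with a short helper; here is a minimal sketch in Python (the function name `is_sorted` is my own, not from the article):

```python
def is_sorted(a):
    """Return True if a[0] <= a[1] <= ... <= a[n-1]."""
    return all(a[i] <= a[i + 1] for i in range(len(a) - 1))

print(is_sorted([1, 2, 2, 5]))  # True
print(is_sorted([3, 1, 2]))     # False
```

Every algorithm in the lessons below aims to transform an arbitrary array into one for which this check succeeds.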
The sorting algorithms can be classified by:
- The class of complexity (more on this later);
- Use of recursive function calls;
- Stability (more on this later);
- How the elements are sorted: by comparison, by distribution, or by another method.
A classification of the types of sorting that you can learn in the following lessons:
- Exchange sorts: Bubble sort, Comb sort, Gnome sort, Quick sort
- Selection sorts: Selection sort, Heap sort
- Insertion sorts: Insertion sort, Shell sort
- Distribution sorts: Bucket sort, Counting sort, Radix sort
Stability refers to keeping items with the same sorting key in their original relative order. Consider the following example:
34, 56, 29, 51
If we were to sort the list only by using the first digit, the list would show up like:
29, 34, 56, 51
Remember, we only sorted the list by the first digit. An unstable algorithm could leave 56 and 51 in reverse order, but a stable algorithm maintains their initial relative order.
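Python's built-in `sorted` happens to be stable, so the example above can be reproduced directly (the key function here is my own way of expressing "sort only by the first digit"):

```python
nums = [34, 56, 29, 51]

# Sort only by the tens digit; 56 and 51 compare equal under
# this key, so a stable sort keeps 56 before 51.
by_first_digit = sorted(nums, key=lambda x: x // 10)
print(by_first_digit)  # [29, 34, 56, 51]
```

With an unstable sort, `[29, 34, 51, 56]` would be an equally valid result under this key, which is exactly the ambiguity stability removes.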
Some sorting algorithms perform better in certain cases, and others can sort only certain data types, but all sorting algorithms have at least one thing in common: complexity.
Complexity of Algorithms
The complexity of a given algorithm is expressed by its worst-case execution time, using big O notation.
The "O" is read "order" as in "order of magnitude" or "on the order of".
The big O notation expresses the complexity of an algorithm without restricting the statement to a particular computer architecture or time period; even if computers get faster, or if the algorithm is ported to another software platform, the worst case execution time of an algorithm does not change. Additionally, when comparing two algorithms, it is readily apparent which will execute faster in the extreme case.
Most commonly, you will encounter:
O(1) – Constant time: the time necessary to perform the algorithm does not change in response to the size of the problem.
O(n) – Linear time: the time grows linearly with the size (n) of the problem.
O(n^2) – Quadratic time: the time grows quadratically with the size (n) of the problem. In big O notation, constant factors and lower-order terms are dropped, so O(3*n^2 + 3*n + 6) = O(n^2).
O(log(n)) – Logarithmic time: the time grows logarithmically with the size (n) of the problem.
O(2^n) – Exponential time: the time required grows exponentially with the size of the problem.
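To get a feel for how these classes differ, you can count the basic operations a loop performs as n grows. This toy Python sketch (the function names are my own) compares the work done by a single loop, the shape behind O(n) algorithms, with a doubly nested loop, the shape behind O(n^2) algorithms such as bubble sort:

```python
def linear_ops(n):
    """Count iterations of a single loop: O(n) work."""
    return sum(1 for _ in range(n))

def quadratic_ops(n):
    """Count iterations of a doubly nested loop: O(n^2) work."""
    return sum(1 for _ in range(n) for _ in range(n))

for n in (10, 100):
    print(n, linear_ops(n), quadratic_ops(n))
# 10  -> 10 and 100 operations
# 100 -> 100 and 10,000 operations
```

Growing n by a factor of 10 multiplies the linear count by 10 but the quadratic count by 100, which is why the complexity class matters far more than constant factors for large inputs.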
Some algorithms have the same complexity in the average or best case, yet in the worst case some of them perform better, and faster, than others.
Also, depending on the data set, some algorithms might not behave as expected; for example, some algorithms might have problems when sorting strings.
A few words before we start discussing the first algorithm. There is no perfect sorting algorithm: some are faster but take up more memory, while others are slower but require less memory. The programmer must weigh speed against memory usage and choose the appropriate sorting algorithm.
In the end, I would recommend taking a piece of paper and a pen and working through an example of each algorithm you want to learn; that way you will understand it better and will not forget how it works anytime soon.
Also, do not jump straight to the code sample of an algorithm unless you understand what is going on there; instead, try to implement it yourself and, if you get stuck, take a peek at the code. I will say it again: do not forget the DEBUGGER. It is a crucial tool when coding things like this, because it lets you see exactly what happens in the computer's memory, and it will almost certainly help you get past a problem you are having, unless the problem is a logical one.