# Merge Sort Java Example Essay

## 5.11. The Merge Sort

We now turn our attention to using a divide and conquer strategy as a way to improve the performance of sorting algorithms. The first algorithm we will study is the **merge sort**. Merge sort is a recursive algorithm that continually splits a list in half. If the list is empty or has one item, it is sorted by definition (the base case). If the list has more than one item, we split the list and recursively invoke a merge sort on both halves. Once the two halves are sorted, the fundamental operation, called a **merge**, is performed. Merging is the process of taking two smaller sorted lists and combining them into a single, sorted, new list. Figure 10 shows our familiar example list as it is being split by the merge sort function. Figure 11 shows the simple lists, now sorted, as they are merged back together.

Figure 10: Splitting the List in a Merge Sort

Figure 11: Lists as They Are Merged Together

The function shown in ActiveCode 1 begins by asking the base case question. If the length of the list is less than or equal to one, then we already have a sorted list and no more processing is necessary. If, on the other hand, the length is greater than one, then we use the Python slice operation to extract the left and right halves. It is important to note that the list may not have an even number of items. That does not matter, as the lengths will differ by at most one.

Once the function has been invoked on the left half and the right half (lines 8–9), it is assumed that they are sorted. The rest of the function (lines 11–31) is responsible for merging the two smaller sorted lists into a larger sorted list. Notice that the merge operation places the items back into the original list one at a time by repeatedly taking the smallest item from the sorted lists.

The function has been augmented with a print statement (line 2) to show the contents of the list being sorted at the start of each invocation. There is also a print statement (line 32) to show the merging process. The transcript shows the result of executing the function on our example list. Note that the list with 44, 55, and 20 will not divide evenly. The first split gives [44] and the second gives [55, 20]. It is easy to see how the splitting process eventually yields a list that can be immediately merged with other sorted lists.
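The ActiveCode listing itself is not reproduced on this page, so here is a minimal Python sketch consistent with the description above. The names merge_sort, alist, lefthalf, and righthalf are assumptions; the two print calls correspond to the trace statements described at lines 2 and 32 of the original listing.

```python
def merge_sort(alist):
    print("Splitting ", alist)              # show the list at the start of each invocation
    if len(alist) > 1:                       # base case: zero or one item is already sorted
        mid = len(alist) // 2
        lefthalf = alist[:mid]               # the Python slice operation
        righthalf = alist[mid:]

        merge_sort(lefthalf)                 # recursively sort both halves
        merge_sort(righthalf)

        i = j = k = 0
        # merge: repeatedly take the smallest remaining item from the two halves
        while i < len(lefthalf) and j < len(righthalf):
            if lefthalf[i] <= righthalf[j]:
                alist[k] = lefthalf[i]
                i += 1
            else:
                alist[k] = righthalf[j]
                j += 1
            k += 1
        while i < len(lefthalf):             # copy any leftovers from the left half
            alist[k] = lefthalf[i]
            i += 1
            k += 1
        while j < len(righthalf):            # copy any leftovers from the right half
            alist[k] = righthalf[j]
            j += 1
            k += 1
    print("Merging ", alist)                 # show the merging process

alist = [54, 26, 93, 17, 77, 31, 44, 55, 20]
merge_sort(alist)
print(alist)
```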

In order to analyze the function, we need to consider the two distinct processes that make up its implementation. First, the list is split into halves. We already computed (in the analysis of binary search) that we can divide a list in half \(\log n\) times, where *n* is the length of the list. The second process is the merge. Each item in the list will eventually be processed and placed on the sorted list, so the merge operation which results in a list of size *n* requires *n* operations. The result of this analysis is that there are \(\log n\) splits, each of which costs \(n\), for a total of \(n\log n\) operations. A merge sort is an \(O(n\log n)\) algorithm.

Recall that the slicing operator is \(O(k)\) where *k* is the size of the slice. In order to guarantee that merge sort will be \(O(n\log n)\), we will need to remove the slice operator. Again, this is possible if we simply pass the starting and ending indices along with the list when we make the recursive call. We leave this as an exercise, but a sketch of one possible approach follows.
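One possible shape for that exercise (a sketch, not the book's solution): pass index bounds instead of slices and merge through a single preallocated buffer. This still uses \(O(n)\) auxiliary space for merging, but it avoids the \(O(k)\) slice extraction on every recursive call. All names here are illustrative.

```python
def merge_sort_indexed(alist, left=0, right=None, buf=None):
    """Sort alist[left:right] in place, passing indices instead of slices."""
    if right is None:                        # top-level call: set up bounds and buffer
        right = len(alist)
        buf = [None] * len(alist)
    if right - left <= 1:                    # base case: zero or one item
        return
    mid = (left + right) // 2
    merge_sort_indexed(alist, left, mid, buf)
    merge_sort_indexed(alist, mid, right, buf)
    i, j, k = left, mid, left                # merge alist[left:mid] with alist[mid:right]
    while i < mid and j < right:
        if alist[i] <= alist[j]:
            buf[k] = alist[i]; i += 1
        else:
            buf[k] = alist[j]; j += 1
        k += 1
    while i < mid:
        buf[k] = alist[i]; i += 1; k += 1
    while j < right:
        buf[k] = alist[j]; j += 1; k += 1
    for k in range(left, right):             # copy the merged run back
        alist[k] = buf[k]
```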

It is important to notice that the function requires extra space to hold the two halves as they are extracted with the slicing operations. This additional space can be a critical factor if the list is large and can make this sort problematic when working on large data sets.

Self Check

- Q-44: Given the following list of numbers: <br> [21, 1, 26, 45, 29, 28, 2, 9, 16, 49, 39, 27, 43, 34, 46, 40] <br> which answer illustrates the list to be sorted after 3 recursive calls to mergesort?
- (A) [16, 49, 39, 27, 43, 34, 46, 40]
- This is the second half of the list.
- (B) [21, 1]
- Yes, mergesort will continue to recursively move toward the beginning of the list until it hits a base case.
- (C) [21, 1, 26, 45]
- Remember mergesort doesn't work on the right half of the list until the left half is completely sorted.
- (D) [21]
- This is the list after 4 recursive calls.

- Q-45: Given the following list of numbers: <br> [21, 1, 26, 45, 29, 28, 2, 9, 16, 49, 39, 27, 43, 34, 46, 40] <br> which answer illustrates the first two lists to be merged?
- (A) [21, 1] and [26, 45]
- The first two lists merged will be base case lists; we have not yet reached a base case.
- (B) [1, 2, 9, 21, 26, 28, 29, 45] and [16, 27, 34, 39, 40, 43, 46, 49]
- These will be the last two lists merged.
- (C) [21] and [1]
- The lists [21] and [1] are the first two base cases encountered by mergesort and will therefore be the first two lists merged.
- (D) [9] and [16]
- Although 9 and 16 are next to each other they are in different halves of the list starting with the first split.

In computer science, **merge sort** (also commonly spelled **mergesort**) is an efficient, general-purpose, comparison-based sorting algorithm. Most implementations produce a stable sort, which means that the implementation preserves the input order of equal elements in the sorted output. Mergesort is a divide and conquer algorithm that was invented by John von Neumann in 1945.^{[2]} A detailed description and analysis of bottom-up mergesort appeared in a report by Goldstine and von Neumann as early as 1948.^{[3]}

## Algorithm

Conceptually, a merge sort works as follows:

- Divide the unsorted list into *n* sublists, each containing one element (a list of one element is considered sorted).
- Repeatedly merge sublists to produce new sorted sublists until there is only one sublist remaining. This will be the sorted list.

### Top-down implementation

Example C-like code using indices for a top-down merge sort algorithm recursively splits the list (called *runs* in this example) into sublists until sublist size is 1, then merges those sublists to produce a sorted list. The copy-back step is avoided by alternating the direction of the merge with each level of recursion.
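The original C-like listing is not reproduced here, so the following is a Python sketch of the same idea, under the assumption that the two arrays simply swap the roles of source and destination at each level of recursion (which is what removes the copy-back step):

```python
def merge_sort_topdown(a):
    """Top-down merge sort over index ranges; the source and work arrays
    swap roles at each level of recursion, avoiding a copy-back step."""
    b = a[:]                                 # one-time copy into the work array
    _split_merge(b, 0, len(a), a)            # the sorted result ends up in a

def _split_merge(b, begin, end, a):
    # sort the run b[begin:end], leaving the result in a[begin:end]
    if end - begin <= 1:                     # runs of size 1 are sorted
        return
    mid = (begin + end) // 2
    _split_merge(a, begin, mid, b)           # note the swapped roles: sort
    _split_merge(a, mid, end, b)             # from a into b ...
    _merge(b, begin, mid, end, a)            # ... then merge b back into a

def _merge(b, begin, mid, end, a):
    i, j = begin, mid
    for k in range(begin, end):              # always take the smaller head element
        if i < mid and (j >= end or b[i] <= b[j]):
            a[k] = b[i]; i += 1
        else:
            a[k] = b[j]; j += 1
```

Because `a` and `b` start out identical, the size-1 base cases are already present in whichever array plays the destination role, so no extra copying is needed on the way back up.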

### Bottom-up implementation

Example C-like code using indices for a bottom-up merge sort algorithm treats the list as an array of *n* sublists (called *runs* in this example) of size 1 and iteratively merges sublists back and forth between two buffers:
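Again as an illustrative Python sketch rather than the original listing (the run width doubles each pass, and the two buffers swap roles between passes):

```python
def merge_sort_bottomup(a):
    """Bottom-up merge sort: treat a as n runs of size 1 and merge runs
    of doubling width back and forth between two buffers."""
    n = len(a)
    src, dst = a, [None] * n
    width = 1
    while width < n:
        for lo in range(0, n, 2 * width):    # merge each adjacent pair of runs
            mid = min(lo + width, n)
            hi = min(lo + 2 * width, n)
            i, j = lo, mid
            for k in range(lo, hi):
                if i < mid and (j >= hi or src[i] <= src[j]):
                    dst[k] = src[i]; i += 1
                else:
                    dst[k] = src[j]; j += 1
        src, dst = dst, src                  # swap buffers between passes
        width *= 2
    if src is not a:                         # make sure the result is back in a
        a[:] = src
```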

### Top-down implementation using lists

Pseudocode for the top-down merge sort algorithm, which recursively divides the input list into smaller sublists until the sublists are trivially sorted, and then merges the sublists while returning up the call chain.

    function merge_sort(list m)
        // Base case. A list of zero or one elements is sorted, by definition.
        if length of m ≤ 1 then
            return m

        // Recursive case. First, divide the list into equal-sized sublists
        // consisting of the first half and second half of the list.
        // This assumes lists start at index 0.
        var left := empty list
        var right := empty list
        for each x with index i in m do
            if i < (length of m)/2 then
                add x to left
            else
                add x to right

        // Recursively sort both sublists.
        left := merge_sort(left)
        right := merge_sort(right)

        // Then merge the now-sorted sublists.
        return merge(left, right)

In this example, the merge function merges the left and right sublists.

    function merge(left, right)
        var result := empty list

        while left is not empty and right is not empty do
            if first(left) ≤ first(right) then
                append first(left) to result
                left := rest(left)
            else
                append first(right) to result
                right := rest(right)

        // Either left or right may have elements left; consume them.
        // (Only one of the following loops will actually be entered.)
        while left is not empty do
            append first(left) to result
            left := rest(left)
        while right is not empty do
            append first(right) to result
            right := rest(right)

        return result

### Bottom-up implementation using lists

Pseudocode for the bottom-up merge sort algorithm which uses a small fixed-size array of references to nodes, where array[i] is either a reference to a list of size 2^{i} or nil. *node* is a reference or pointer to a node. The merge() function would be similar to the one shown in the top-down merge lists example; it merges two already sorted lists and handles empty lists. In this case, merge() would use *node* for its input parameters and return value.

    function merge_sort(node head)
        // return if empty list
        if (head == nil)
            return nil
        var node array[32]; initially all nil
        var node result
        var node next
        var int  i
        result = head
        // merge nodes into array
        while (result != nil)
            next = result.next
            result.next = nil
            for (i = 0; (i < 32) && (array[i] != nil); i += 1)
                result = merge(array[i], result)
                array[i] = nil
            // do not go past end of array
            if (i == 32)
                i -= 1
            array[i] = result
            result = next
        // merge array into single list
        result = nil
        for (i = 0; i < 32; i += 1)
            result = merge(array[i], result)
        return result

## Natural merge sort

A natural merge sort is similar to a bottom-up merge sort except that any naturally occurring runs (sorted sequences) in the input are exploited. Both monotonic and bitonic (alternating up/down) runs may be exploited, with lists (or equivalently tapes or files) being convenient data structures (used as FIFO queues or LIFO stacks).^{[4]} In the bottom-up merge sort, the starting point assumes each run is one item long. In practice, random input data will have many short runs that just happen to be sorted. In the typical case, the natural merge sort may not need as many passes because there are fewer runs to merge. In the best case, the input is already sorted (i.e., is one run), so the natural merge sort need only make one pass through the data. In many practical cases, long natural runs are present, and for that reason natural merge sort is exploited as the key component of Timsort.
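As an illustrative sketch (not the original example), the following Python code collects maximal non-decreasing runs and then merges them pairwise; exploiting bitonic runs as well, as the text mentions, would be a further refinement. Note that already-sorted input forms a single run, so the merge loop is skipped entirely.

```python
def natural_merge_sort(a):
    """Natural merge sort: split the input into its naturally occurring
    ascending runs, then merge runs pairwise until one remains."""
    if not a:
        return []
    runs, run = [], [a[0]]                   # collect maximal non-decreasing runs
    for x in a[1:]:
        if x >= run[-1]:
            run.append(x)
        else:
            runs.append(run)
            run = [x]
    runs.append(run)
    while len(runs) > 1:                     # merge runs pairwise, pass by pass
        merged = []
        for i in range(0, len(runs) - 1, 2):
            merged.append(_merge_lists(runs[i], runs[i + 1]))
        if len(runs) % 2:                    # odd run carries over to the next pass
            merged.append(runs[-1])
        runs = merged
    return runs[0]

def _merge_lists(left, right):
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])                     # only one of these has leftovers
    out.extend(right[j:])
    return out
```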

Tournament replacement selection sorts are used to gather the initial runs for external sorting algorithms.

## Analysis

In sorting *n* objects, merge sort has an average and worst-case performance of O(*n* log *n*). If the running time of merge sort for a list of length *n* is *T*(*n*), then the recurrence *T*(*n*) = 2*T*(*n*/2) + *n* follows from the definition of the algorithm (apply the algorithm to two lists of half the size of the original list, and add the *n* steps taken to merge the resulting two lists). The closed form follows from the master theorem for divide-and-conquer recurrences.
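For readers who want the intermediate step, unrolling the recurrence directly (assuming \(n\) is a power of two) shows where the closed form comes from without invoking the master theorem:

\[
T(n) = 2T\!\left(\tfrac{n}{2}\right) + n
     = 4T\!\left(\tfrac{n}{4}\right) + 2n
     = \cdots
     = 2^k\,T\!\left(\tfrac{n}{2^k}\right) + kn.
\]

Setting \(k = \log_2 n\) gives \(T(n) = n\,T(1) + n\log_2 n = \Theta(n\log n)\).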

In the worst case, the number of comparisons merge sort makes is equal to or slightly smaller than (*n* ⌈lg *n*⌉ - 2^{⌈lg n⌉} + 1), which is between (*n* lg *n* - *n* + 1) and (*n* lg *n* + *n* + O(lg *n*)).^{[5]}

For large *n* and a randomly ordered input list, merge sort's expected (average) number of comparisons approaches *α*·*n* fewer than the worst case, where *α* = −1 + Σ_{*k*=0}^{∞} 1/(2^{k} + 1), which is approximately 0.2645.

In the *worst* case, merge sort does about 39% fewer comparisons than quicksort does in the *average* case. In terms of moves, merge sort's worst case complexity is O(*n* log *n*)—the same complexity as quicksort's best case, and merge sort's best case takes about half as many iterations as the worst case.^{[citation needed]}

Merge sort is more efficient than quicksort for some types of lists if the data to be sorted can only be efficiently accessed sequentially, and is thus popular in languages such as Lisp, where sequentially accessed data structures are very common. Unlike some (efficient) implementations of quicksort, merge sort is a stable sort.

Merge sort's most common implementation does not sort in place;^{[6]} therefore, the memory size of the input must be allocated for the sorted output to be stored in (see below for versions that need only *n*/2 extra spaces).

## Variants

Variants of merge sort are primarily concerned with reducing the space complexity and the cost of copying.

A simple alternative for reducing the space overhead to *n*/2 is to maintain *left* and *right* as a combined structure, copy only the *left* part of *m* into temporary space, and to direct the *merge* routine to place the merged output into *m*. With this version it is better to allocate the temporary space outside the *merge* routine, so that only one allocation is needed. The excessive copying mentioned previously is also mitigated, since the last pair of lines before the *return result* statement (function *merge* in the pseudo code above) become superfluous.
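A minimal Python sketch of this *n*/2-space variant, assuming the scratch buffer is allocated once at the top level (all names here are illustrative):

```python
def merge_sort_halfspace(m, lo=0, hi=None, tmp=None):
    """Merge sort using only n/2 extra space: copy just the left half
    into tmp, then merge tmp and the right half directly into m."""
    if hi is None:
        hi = len(m)
        tmp = [None] * ((len(m) + 1) // 2)   # one allocation, outside the merge
    if hi - lo <= 1:
        return
    mid = (lo + hi) // 2
    merge_sort_halfspace(m, lo, mid, tmp)
    merge_sort_halfspace(m, mid, hi, tmp)
    left_len = mid - lo
    tmp[:left_len] = m[lo:mid]               # copy only the left half out of the way
    i, j, k = 0, mid, lo
    while i < left_len and j < hi:
        if tmp[i] <= m[j]:
            m[k] = tmp[i]; i += 1
        else:
            m[k] = m[j]; j += 1
        k += 1
    while i < left_len:                      # drain any leftover left-half items
        m[k] = tmp[i]; i += 1; k += 1
```

When the left run is exhausted first, the remaining right-half elements are already in their final positions (at that point k equals j), which is exactly why the trailing copy loop of the plain merge becomes superfluous.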

One drawback of merge sort, when implemented on arrays, is its *O*(*n*) working memory requirement. Several in-place variants have been suggested:

- Katajainen *et al.* present an algorithm that requires a constant amount of working memory: enough storage space to hold one element of the input array, and additional space to hold *O*(1) pointers into the input array. They achieve an *O*(*n* log *n*) time bound with small constants, but their algorithm is not stable.^{[7]}
- Several attempts have been made at producing an *in-place merge* algorithm that can be combined with a standard (top-down or bottom-up) merge sort to produce an in-place merge sort. In this case, the notion of "in-place" can be relaxed to mean "taking logarithmic stack space", because standard merge sort requires that amount of space for its own stack usage. It was shown by Geffert *et al.* that in-place, stable merging is possible in *O*(*n* log *n*) time using a constant amount of scratch space, but their algorithm is complicated and has high constant factors: merging arrays of length *n* and *m* can take 5*n* + 12*m* + *o*(*m*) moves.^{[8]} Bing-Chao Huang and Michael A. Langston^{[9]} later presented a simpler, straightforward linear-time *practical in-place merge* algorithm that merges two sorted lists using a fixed amount of additional space, building on the work of Kronrod and others. It merges in linear time and constant extra space, and takes somewhat more time on average than standard merge sort algorithms that are free to exploit *O*(*n*) temporary extra memory cells, but by less than a factor of two. Although the algorithm is fast in practice, it is unstable for some lists, though a similar approach can resolve this. Other in-place algorithms include SymMerge, which takes *O*((*n* + *m*) log (*n* + *m*)) time in total and is stable.^{[10]} Plugging such an algorithm into merge sort increases its complexity to the non-linearithmic, but still quasilinear, *O*(*n* (log *n*)^{2}).
- A modern, stable, linear, and in-place merging variant is block merge sort.

An alternative to reduce the copying into multiple lists is to associate a new field of information with each key (the elements in *m* are called keys). This field will be used to link the keys and any associated information together in a sorted list (a key and its related information is called a record). Then the merging of the sorted lists proceeds by changing the link values; no records need to be moved at all. A field which contains only a link will generally be smaller than an entire record so less space will also be used. This is a standard sorting technique, not restricted to merge sort.
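A minimal sketch of the link-based idea in Python, using an explicit link field (the names are illustrative; a full sort would repeatedly apply merge_linked to runs of nodes, as in the bottom-up lists pseudocode above):

```python
class Node:
    """A record plus a link field; sorting rearranges links, not records."""
    def __init__(self, key, payload=None):
        self.key = key
        self.payload = payload               # the rest of the record stays put
        self.next = None

def merge_linked(left, right):
    """Merge two sorted linked lists by relinking nodes; no record is moved."""
    dummy = Node(None)                       # temporary head to simplify appends
    tail = dummy
    while left and right:
        if left.key <= right.key:
            tail.next, left = left, left.next
        else:
            tail.next, right = right, right.next
        tail = tail.next
    tail.next = left or right                # splice in whichever list remains
    return dummy.next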

## Use with tape drives

An external merge sort is practical to run using disk or tape drives when the data to be sorted is too large to fit into memory. External sorting explains how merge sort is implemented with disk drives. A typical tape drive sort uses four tape drives. All I/O is sequential (except for rewinds at the end of each pass). A minimal implementation can get by with just 2 record buffers and a few program variables.

Naming the four tape drives as A, B, C, D, with the original data on A, and using only 2 record buffers, the algorithm is similar to the bottom-up implementation, using pairs of tape drives instead of arrays in memory. The basic algorithm can be described as follows (a sketch simulating the tapes with in-memory queues appears after the list):

- Merge pairs of records from A; writing two-record sublists alternately to C and D.
- Merge two-record sublists from C and D into four-record sublists; writing these alternately to A and B.
- Merge four-record sublists from A and B into eight-record sublists; writing these alternately to C and D.
- Repeat until you have one list containing all the data, sorted, in log_{2}(*n*) passes.
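The following Python sketch simulates this scheme with deques standing in for tapes A, B, C, and D (all reads and writes are sequential, mirroring tape I/O). The initial distribution of one-record runs is made explicit here; everything else follows the steps above.

```python
from collections import deque

def merge_run(t1, t2, n1, n2, out):
    """Merge a run of n1 records from tape t1 with n2 from t2 onto out."""
    i = j = 0
    while i < n1 and j < n2:
        if t1[0] <= t2[0]:
            out.append(t1.popleft()); i += 1
        else:
            out.append(t2.popleft()); j += 1
    while i < n1:
        out.append(t1.popleft()); i += 1
    while j < n2:
        out.append(t2.popleft()); j += 1

def tape_sort(records):
    a, b, c, d = deque(records), deque(), deque(), deque()
    for i, r in enumerate(a):                # distribute one-record runs onto C and D
        (c if i % 2 == 0 else d).append(r)
    a.clear()
    src, dst, width = (c, d), (a, b), 1
    while True:
        out = 0
        while src[0] or src[1]:              # merge run pairs, alternating output tapes
            n1 = min(width, len(src[0]))
            n2 = min(width, len(src[1]))
            merge_run(src[0], src[1], n1, n2, dst[out % 2])
            out += 1
        if out <= 1:                         # a single run: the data is sorted
            return list(dst[0])
        src, dst, width = dst, src, width * 2
```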

Instead of starting with very short runs, usually a hybrid algorithm is used, where the initial pass will read many records into memory, do an internal sort to create a long run, and then distribute those long runs onto the output set. This step avoids many early passes. For example, an internal sort of 1024 records will save 9 passes. Because the benefit is so large, the internal sort is usually made as large as memory allows. In fact, there are techniques that can make the initial runs longer than the available internal memory.^{[11]}

A more sophisticated merge sort that optimizes tape (and disk) drive usage is the polyphase merge sort.

## Optimizing merge sort

On modern computers, locality of reference can be of paramount importance in software optimization, because multilevel memory hierarchies are used. Cache-aware versions of the merge sort algorithm, whose operations have been specifically chosen to minimize the movement of pages in and out of a machine's memory cache, have been proposed. For example, the **tiled merge sort** algorithm stops partitioning subarrays when subarrays of size S are reached, where S is the number of data items fitting into a CPU's cache. Each of these subarrays is sorted with an in-place sorting algorithm such as insertion sort, to discourage memory swaps, and normal merge sort is then completed in the standard recursive fashion. This algorithm has demonstrated better performance ^{[example needed]} on machines that benefit from cache optimization. (LaMarca & Ladner 1997)
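A sketch of the tiled idea in Python, assuming a cutoff S chosen to approximate the cache size (the value 64 below is an arbitrary placeholder, not a measured constant):

```python
S = 64   # assumed cache-sized cutoff; tune per machine

def tiled_merge_sort(a, lo=0, hi=None, buf=None):
    """Stop recursing at subarrays of size S and finish them with
    insertion sort; merge as usual above the cutoff."""
    if hi is None:
        hi, buf = len(a), [None] * len(a)
    if hi - lo <= S:
        insertion_sort(a, lo, hi)            # in-place, cache-friendly base case
        return
    mid = (lo + hi) // 2
    tiled_merge_sort(a, lo, mid, buf)
    tiled_merge_sort(a, mid, hi, buf)
    i, j = lo, mid                           # standard merge through buf
    for k in range(lo, hi):
        if i < mid and (j >= hi or a[i] <= a[j]):
            buf[k] = a[i]; i += 1
        else:
            buf[k] = a[j]; j += 1
    a[lo:hi] = buf[lo:hi]

def insertion_sort(a, lo, hi):
    for i in range(lo + 1, hi):
        x = a[i]
        j = i - 1
        while j >= lo and a[j] > x:          # shift larger items right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x
```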

Kronrod (1969) suggested an alternative version of merge sort that uses constant additional space. This algorithm was later refined. (Katajainen, Pasanen & Teuhola 1996)

Also, many applications of external sorting use a form of merge sorting where the input is split into a larger number of sublists, ideally to a number for which merging them still makes the currently processed set of pages fit into main memory.

## Parallel merge sort

Merge sort parallelizes well due to use of the divide-and-conquer method. Several parallel variants are discussed in the third edition of Cormen, Leiserson, Rivest, and Stein's *Introduction to Algorithms*.^{[12]} The first of these can be very easily expressed in a pseudocode with fork and join keywords:

    // Sort elements lo through hi (exclusive) of array A.
    algorithm mergesort(A, lo, hi) is
        if lo+1 < hi then    // Two or more elements.
            mid = ⌊(lo + hi) / 2⌋
            fork mergesort(A, lo, mid)
            mergesort(A, mid, hi)
            join
            merge(A, lo, mid, hi)

This algorithm is a trivial modification of the serial version, and its speedup is not impressive: when executed on an infinite number of processors, it runs in Θ(*n*) time, which is only a Θ(log *n*) improvement on the serial version. A better result can be obtained by using a parallelized merge algorithm, which gives parallelism Θ(*n* / (log *n*)^{2}), meaning that this type of parallel merge sort runs in Θ((log *n*)^{3}) time if enough processors are available.^{[12]} Such a sort can perform well in practice when combined with a fast stable sequential sort, such as insertion sort, and a fast sequential merge as a base case for merging small arrays.^{[13]}

Merge sort was one of the first sorting algorithms where optimal speedup was achieved, with Richard Cole using a clever subsampling algorithm to ensure *O*(1) merge.^{[14]} Other sophisticated parallel sorting algorithms can achieve the same or better time bounds with a lower constant. For example, in 1991 David Powers described a parallelized quicksort (and a related radix sort) that can operate in *O*(log *n*) time on a CRCW parallel random-access machine (PRAM) with *n* processors by performing partitioning implicitly.^{[15]} Powers^{[16]} further shows that a pipelined version of Batcher's Bitonic Mergesort at *O*((log *n*)^{2}) time on a butterfly sorting network is in practice actually faster than his *O*(log *n*) sorts on a PRAM, and he provides detailed discussion of the hidden overheads in comparison, radix and parallel sorting.

## Comparison with other sort algorithms

Although heapsort has the same time bounds as merge sort, it requires only Θ(1) auxiliary space instead of merge sort's Θ(*n*). On typical modern architectures, efficient quicksort implementations generally outperform mergesort for sorting RAM-based arrays.^{[citation needed]} On the other hand, merge sort is a stable sort and is more efficient at handling slow-to-access sequential media. Merge sort is often the best choice for sorting a linked list: in this situation it is relatively easy to implement a merge sort in such a way that it requires only Θ(1) extra space, and the slow random-access performance of a linked list makes some other algorithms (such as quicksort) perform poorly, and others (such as heapsort) completely impossible.

As of Perl 5.8, merge sort is its default sorting algorithm (it was quicksort in previous versions of Perl). In Java, the Arrays.sort() methods use merge sort or a tuned quicksort depending on the datatypes, and for implementation efficiency switch to insertion sort when fewer than seven array elements are being sorted.^{[17]} The Linux kernel uses merge sort for its linked lists.^{[18]} Python uses Timsort, another tuned hybrid of merge sort and insertion sort, that has become the standard sort algorithm in Java SE 7 (for arrays of non-primitive types),^{[19]} on the Android platform,^{[20]} and in GNU Octave.^{[21]}

## Notes

1. ^ Skiena (2008, p. 122)
2. ^ Knuth (1998, p. 158)
3. ^ Jyrki Katajainen and Jesper Larsson Träff (1997). "A meticulous analysis of mergesort programs".
4. ^ Powers, David M. W. and McMahon, Graham B. (1983). "A compendium of interesting prolog programs". DCS Technical Report 8313, Department of Computer Science, University of New South Wales.
5. ^ The worst case number given here does not agree with that given in Knuth's *Art of Computer Programming, Vol 3*. The discrepancy is due to Knuth analyzing a variant implementation of merge sort that is slightly sub-optimal.
6. ^ Cormen; Leiserson; Rivest; Stein. *Introduction to Algorithms*. p. 151. ISBN 978-0-262-03384-8.
7. ^ Katajainen, Jyrki; Pasanen, Tomi; Teuhola, Jukka (1996). "Practical in-place mergesort". *Nordic J. Computing*. **3** (1): 27–40. CiteSeerX 10.1.1.22.8523.
8. ^ Geffert, Viliam; Katajainen, Jyrki; Pasanen, Tomi (2000). "Asymptotically efficient in-place merging". *Theoretical Computer Science*. **237**: 159–181. doi:10.1016/S0304-3975(98)00162-5.
9. ^ Huang, Bing-Chao; Langston, Michael A. (March 1988). "Practical In-Place Merging". *Communications of the ACM*. **31** (3): 348–352. doi:10.1145/42392.42403.
10. ^ Kim, Pok-Son; Kutzner, Arne (2004). *Stable Minimum Storage Merging by Symmetric Comparisons*. European Symp. Algorithms. Lecture Notes in Computer Science. **3221**. pp. 714–723. CiteSeerX 10.1.1.102.4612. doi:10.1007/978-3-540-30140-0_63. ISBN 978-3-540-23025-0.
11. ^ Selection sort. Knuth's snowplow. Natural merge.
12. ^ a b Cormen et al. 2009, pp. 797–805
13. ^ Victor J. Duvanenko. "Parallel Merge Sort". Dr. Dobb's Journal & blog [1] and GitHub repo C++ implementation [2]
14. ^ Cole, Richard (August 1988). "Parallel merge sort". *SIAM J. Comput*. **17** (4): 770–785. CiteSeerX 10.1.1.464.7118. doi:10.1137/0217049.
15. ^ Powers, David M. W. (1991). "Parallelized Quicksort and Radixsort with Optimal Speedup". *Proceedings of International Conference on Parallel Computing Technologies*. Novosibirsk.
16. ^ Powers, David M. W. (January 1995). "Parallel Unification: Practical Complexity". Australasian Computer Architecture Workshop, Flinders University.
17. ^ OpenJDK Subversion^{[dead link]}
18. ^ linux kernel /lib/list_sort.c
19. ^ jjb. "Commit 6804124: Replace "modified mergesort" in java.util.Arrays.sort with timsort". *Java Development Kit 7 Hg repo*. Archived from the original on 2018-01-26. Retrieved 24 Feb 2011.
20. ^ "Class: java.util.TimSort<T>". *Android JDK Documentation*. Archived from the original on January 20, 2015. Retrieved 19 Jan 2015.
21. ^ "liboctave/util/oct-sort.cc". *Mercurial repository of Octave source code*. Lines 23–25 of the initial comment block. Retrieved 18 Feb 2013.

## References

- Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2009) [1990]. *Introduction to Algorithms* (3rd ed.). MIT Press and McGraw-Hill. ISBN 0-262-03384-4.
- Katajainen, Jyrki; Pasanen, Tomi; Teuhola, Jukka (1996). "Practical in-place mergesort". *Nordic Journal of Computing*. **3**. pp. 27–40. ISSN 1236-6064. Retrieved 2009-04-04.
- Knuth, Donald (1998). "Section 5.2.4: Sorting by Merging". *Sorting and Searching*. The Art of Computer Programming. **3** (2nd ed.). Addison-Wesley. pp. 158–168. ISBN 0-201-89685-0.
- Kronrod, M. A. (1969). "Optimal ordering algorithm without operational field". *Soviet Mathematics - Doklady*. **10**. p. 744.
- LaMarca, A.; Ladner, R. E. (1997). "The influence of caches on the performance of sorting". *Proc. 8th Ann. ACM-SIAM Symp. on Discrete Algorithms (SODA97)*: 370–379. CiteSeerX 10.1.1.31.1153.
- Skiena, Steven S. (2008). "4.5: Mergesort: Sorting by Divide-and-Conquer". *The Algorithm Design Manual* (2nd ed.). Springer. pp. 120–125. ISBN 978-1-84800-069-8.
- Sun Microsystems. "Arrays API". Retrieved 2007-11-19.
- Sun Microsystems. "java.util.Arrays.java". Retrieved 2007-11-19.
