CLASSES OF ALGORITHM


ALGORITHM

An algorithm is any special method for solving a certain kind of problem, or the idea behind a computer program. More specifically, an algorithm is a precise method, usable by a computer, for the solution of a particular computational problem.

CLASSES OF ALGORITHM 

Algorithms can be classified based on a number of criteria and factors.

  1. CLASSIFICATION BY IMPLEMENTATION

A. RECURSION OR ITERATION

A recursive algorithm calls itself repeatedly until a certain condition is met.

Iterative algorithms, by contrast, use repetitive constructs such as loops, sometimes with additional data structures such as queues, stacks, and lists, to solve the given computational problem. The Fibonacci series and the Towers of Hanoi are well-known problems with natural recursive implementations.
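To make the contrast concrete, here is a minimal Python sketch of the Fibonacci series computed both ways (function names are illustrative):

```python
def fib_recursive(n):
    # The base case stops the self-calls; each call reduces n.
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    # The same series computed with a loop instead of self-calls.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

Both functions return the same values; the recursive version mirrors the mathematical definition, while the iterative version avoids the overhead of repeated function calls.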

B. LOGICAL

The logic part expresses the idea that may be used in the computation, and the control part determines the way in which statements are executed under given conditions.

C. SERIAL

Serial algorithms are analyzed under the assumption that the machine executes one instruction of the algorithm at a time. Such computers are called serial computers.

D. PARALLEL 

Parallel algorithms assume that several processors can work on a problem at the same time, while distributed algorithms make use of multiple machines connected by a network.

E. DETERMINISTIC OR NON-DETERMINISTIC

A deterministic algorithm solves the problem by making an exact decision at every step, while a non-deterministic algorithm solves the problem by guessing, although the guesses are typically made more accurate through the use of heuristics.

F. EXACT OR APPROXIMATE

While many algorithms converge to an exact solution, approximation algorithms provide a solution close to the actual or exact one. An approximation algorithm can use either a deterministic or a random strategy.
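As an illustration of the random strategy, here is a minimal Python sketch of a Monte Carlo approximation of pi (the function name and sample count are illustrative):

```python
import random

def approx_pi(samples=100_000, seed=0):
    # Randomly sample points in the unit square; the fraction that
    # lands inside the quarter circle approximates pi / 4.
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples
```

The result is not exact, but it gets close to pi, and the error shrinks as the number of samples grows.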

 

2. CLASSIFICATION BY DESIGN TECHNIQUES

A) BRUTE FORCE

  • It is the most direct and straightforward approach.
  • It is based directly on the problem statement and the definitions of the concepts involved.
  • It is usually not very efficient but is easy to implement.
  • It is useful when the problem size is small.
  • For example: Linear search, Matrix multiplication.
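Linear search is the canonical brute-force algorithm; a minimal Python sketch:

```python
def linear_search(items, target):
    # Check every element in turn -- the most direct approach,
    # straight from the problem definition.
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1  # target not present
```

It makes no assumptions about the input (the list need not be sorted), which is exactly why it is simple but inefficient on large inputs.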

B) DIVIDE AND CONQUER

  • Partition the problem into several subproblems.
  • Solve the subproblems recursively.
  • Combine the solutions of the subproblems to obtain the solution to the original problem.
  • For example: Mergesort, Quicksort, multiplication of two large integers, Strassen's matrix multiplication.
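Mergesort follows the three steps above directly; a minimal Python sketch:

```python
def merge_sort(items):
    # Partition: split the list in half until single elements remain.
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # solve each half recursively
    right = merge_sort(items[mid:])
    # Combine: merge the two sorted halves into one sorted list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```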

C) DECREASE AND CONQUER

  • Solve a problem by reducing it to a smaller one.
  • Solve the smaller problem recursively.
  • Extend the solution of the smaller problem to the original problem.
  • For example: Insertion sort, Euclid's algorithm, Binary search.
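Two of the examples above reduce a problem to a strictly smaller instance of itself at each step; minimal Python sketches:

```python
def binary_search(sorted_items, target):
    # Each comparison discards half of the remaining range,
    # reducing the problem to a smaller instance of itself.
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def gcd(a, b):
    # Euclid's algorithm: reduce gcd(a, b) to the smaller
    # problem gcd(b, a mod b) until the remainder is zero.
    while b:
        a, b = b, a % b
    return a
```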

D) TRANSFORM AND CONQUER

  • Transform the input so that the problem can be solved more easily.
  • Transform the instance into another instance of the same problem with some special property, e.g. heapsort.
  • Make use of a different internal representation, e.g. hashing.
  • Preprocess the input to obtain auxiliary information, e.g. a preorder traversal for determining ancestry in a tree.
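Heapsort illustrates the first idea: the input is first transformed into a heap, an arrangement with a special property that makes extracting the minimum cheap. A minimal Python sketch using the standard library's heapq module:

```python
import heapq

def heap_sort(items):
    # Transform: turn the input into a heap in O(n) time.
    heap = list(items)
    heapq.heapify(heap)
    # Conquer: repeatedly extract the minimum from the heap.
    return [heapq.heappop(heap) for _ in range(len(heap))]
```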

E) DYNAMIC PROGRAMMING

  • Dynamic programming solves optimization problems by combining the solutions to subproblems.
  • It is similar to divide and conquer except that the subproblems overlap.
  • First formulate the problem as a recurrence relation.
  • Observe the dependencies among the subproblems and use either loops or memoization to implement the algorithm, e.g. optimal matrix multiplication, longest common subsequence.

Dynamic programming uses optimal substructure from the bottom up:

  1. First find optimal solutions to the subproblems.
  2. Then choose which of them to use in an optimal solution to the whole problem.

Dynamic programming is a stage-wise method suitable for optimization problems whose solutions can be viewed as the result of a sequence of decisions. It avoids full enumeration by pruning early those partial decision sequences that cannot possibly lead to an optimal solution.

Whereas divide and conquer solves problems top down, dynamic programming is a bottom-up technique.

Bottom up means:

  1. Start with the smallest subproblems.
  2. By combining their solutions, obtain the solutions to subproblems of increasing size,
  3. until the solution to the original problem is reached.
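The longest common subsequence problem mentioned above shows the bottom-up pattern; a minimal Python sketch:

```python
def lcs_length(a, b):
    # Bottom-up dynamic programming: table[i][j] holds the LCS length
    # of a[:i] and b[:j]. The subproblems overlap, so each is solved
    # exactly once and its solution is reused from the table.
    table = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    return table[len(a)][len(b)]
```

The loops start from the smallest subproblems (empty prefixes) and build solutions to ever larger prefixes until the full strings are reached.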

F) GREEDY METHOD

A greedy algorithm is a method for finding an optimal solution to a problem involving a large, homogeneous data structure such as an array, a tree, or a graph. It starts from an optimal solution to some component or small part of the data structure and extends it to an optimal global solution by considering additional components of the data structure one by one.

The principal advantage of greedy algorithms is that they are usually straightforward, easy to understand, and easy to code. Their principal disadvantage is that for many problems there is no greedy algorithm.

A greedy algorithm is similar to a dynamic programming algorithm, but the difference is that solutions to the subproblems do not have to be known at each stage; instead, a “greedy” choice is made of whatever looks best at the moment.

  • Greedy algorithms work in phases.
  • In each phase, a decision is made that appears to be good, without regard for future consequences. For example: Kruskal's MST algorithm, Dijkstra's algorithm.
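Kruskal's MST algorithm is a classic greedy method: in each phase it picks the cheapest remaining edge that does not create a cycle. A minimal Python sketch using a simple union-find structure (edges are given as (weight, u, v) tuples; the representation is illustrative):

```python
def kruskal(num_vertices, edges):
    # Greedy choice per phase: the cheapest edge joining two
    # components that are not yet connected.
    parent = list(range(num_vertices))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path compression
            v = parent[v]
        return v

    total, chosen = 0, []
    for weight, u, v in sorted(edges):
        root_u, root_v = find(u), find(v)
        if root_u != root_v:        # edge joins two components
            parent[root_u] = root_v
            total += weight
            chosen.append((u, v))
    return total, chosen
```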

G) ITERATIVE IMPROVEMENT

  • Start with a solution to the problem
  • Iteratively make some changes to the solution to improve it.
  • Stop when the solution cannot be improved anymore
  • For example: Bipartite matching, the stable marriage algorithm, max flow.
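Bipartite matching by augmenting paths is one standard instance of this scheme: start with the empty matching and repeatedly improve it by one edge until no improvement is possible. A minimal Python sketch (adj[u] lists the right-side vertices adjacent to left vertex u; names are illustrative):

```python
def max_bipartite_matching(adj, num_right):
    # Start with an empty matching; each round either improves the
    # current solution by one edge (via an augmenting path) or proves
    # that no further improvement exists.
    match = [-1] * num_right   # match[v] = left vertex paired with v

    def augment(u, seen):
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                if match[v] == -1 or augment(match[v], seen):
                    match[v] = u
                    return True
        return False

    size = 0
    for u in range(len(adj)):
        if augment(u, [False] * num_right):
            size += 1
    return size
```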

H) STATE SPACE SEARCH

  • Formulate the problem as a search through a space of solutions.
  • Backtracking corresponds to a depth-first traversal of the search tree. For example: the N-queens problem.
  • Branch and bound (minimization) is used in optimization problems and relies on an admissible heuristic: h(n) <= c(n) and h(n) <= d(n, m) + h(m).
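The N-queens problem shows backtracking in its pure form: search the space of partial placements depth first, abandoning a branch as soon as a conflict appears. A minimal Python sketch that counts the solutions:

```python
def count_n_queens(n):
    # Depth-first traversal of the search tree of partial placements;
    # backtrack as soon as a queen conflicts with one already placed.
    def place(row, cols, diag1, diag2):
        if row == n:
            return 1  # all rows filled: one complete solution
        count = 0
        for col in range(n):
            if (col not in cols and (row - col) not in diag1
                    and (row + col) not in diag2):
                count += place(row + 1, cols | {col},
                               diag1 | {row - col}, diag2 | {row + col})
        return count

    return place(0, set(), set(), set())
```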

I) LINEAR PROGRAMMING

Linear programming involves the maximization or minimization of an objective function subject to some constraints. Many problems can be stated in a linear programming form and then solved by a generic algorithm such as the simplex algorithm. A more complex variant of linear programming is called integer programming, where the solution space is restricted to the integers.

For example:

Maximize Z = x1 + 3x2 + 9x3

Linear programming problems can be solved using various techniques, including:

  1. Graphical method
  2. Transportation method
  3. Integer programming
  4. Dynamic programming
  5. Game theory

J) REDUCTION

This technique solves a difficult problem by transforming it into a better-known problem for which we have asymptotically optimal algorithms. The goal is to find a reducing algorithm whose complexity is not dominated by that of the resulting reduced algorithm.

K) SEARCH AND ENUMERATION

Many problems can be modeled as problems on graphs. A graph exploration algorithm specifies rules for moving around a graph and is useful for such problems. This category also includes search algorithms, branch-and-bound enumeration, and backtracking.
