Theory of Computation: Areas, History & Importance


What is the Theory of Computation?

‘Theory of Computation’, or ‘Theory of Automata’, is a core area of computer science and engineering. It is the branch that aims to develop a deep understanding of computational processes by solving problems with mathematical models, tools, and techniques. This understanding matters for applications that range from models of computation used in algorithms, compilers, and VLSI design to the creation of intelligent technology, cognitive psychology, and philosophy. This broad area is divided into three major branches:

  1. Computational complexity theory,
  2. Computability theory, and
  3. Automata theory and formal languages

Complexity theory:

When solving problems with computers, the first question that arises in everyone’s mind is: “What makes some problems computationally hard and other problems computationally easy?”.

Informally, a problem is called “computationally easy” if it can be solved efficiently. Examples of “easy” problems are as follows:

  1. Sorting a sequence of, say, 1,000,000 numbers,
  2. Searching for a name in a telephone directory, and
  3. Computing the fastest way to drive from Ottawa to Miami.
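As a rough sketch (not part of the original article), the first example above can be checked directly in Python: sorting 1,000,000 numbers with the built-in O(n log n) sort finishes in moments, which is what “computationally easy” means in practice.

```python
import random
import time

# Sorting 1,000,000 numbers is "computationally easy":
# Python's built-in sort runs in O(n log n) time.
numbers = [random.random() for _ in range(1_000_000)]

start = time.perf_counter()
numbers.sort()
elapsed = time.perf_counter() - start

# Even on modest hardware this finishes in seconds, not years.
print(f"sorted 1,000,000 numbers in {elapsed:.2f} s")
```

The contrast with the “hard” problems below is that no comparably fast procedure is known for them, no matter how clever the code.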

On the other hand, a problem is called “computationally hard” if it cannot be solved efficiently, or if we do not know whether it can be solved efficiently. Examples of “computationally hard” problems are as follows:

  1. Time table scheduling for all courses at Carleton,
  2. Factoring a 300-digit integer into its prime factors, and
  3. Computing a layout for chips in VLSI.
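To see why factoring is on the “hard” list, consider a minimal trial-division factorizer (an illustrative sketch, not from the article). It works fine for small inputs, but its worst-case running time grows roughly with the square root of the input, so a 300-digit integer is hopelessly out of reach for this approach.

```python
def trial_division(n):
    """Factor n into primes by trial division.

    Takes on the order of sqrt(n) steps in the worst case, so it is
    only practical for small n; a 300-digit integer would need
    roughly 10**150 steps, far beyond any computer.
    """
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # remaining n is prime
    return factors

print(trial_division(8051))  # 8051 = 83 * 97
```

Better algorithms than trial division exist, but none known runs efficiently on 300-digit inputs, which is precisely why such factoring underpins modern cryptography.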

Computability theory:

In the 1930s, Kurt Gödel, Alonzo Church, Alan Turing, Stephen Kleene, and Emil Post introduced theoretical models of computation. These models, proposed in order to understand which mathematical problems are solvable and which are unsolvable, led to the development of real computers. Computability theory is also known as recursion theory, since it grew out of the study of computable functions and Turing degrees.

Automata theory:

It is the study of abstract mathematical machines, and it deals with the definitions and properties of different types of “computation models”. Examples of such computational models are:

  • Finite Automata: These are used in text processing, compilers, and hardware design.
  • Context-Free Grammars: These are used to define programming languages and in Artificial Intelligence.
  • Context-Sensitive Grammars: These are less general than unrestricted grammars and are used in compiler design and in Artificial Intelligence.
  • Turing Machines: These form a simple abstract model of a “real” computer, such as your PC at home.
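As a small illustration of the first model above, here is a sketch of a deterministic finite automaton (DFA) simulator in Python. The machine and its transition table are invented for this example; it accepts binary strings containing an even number of 1s.

```python
# A DFA is specified by its states, alphabet, transition function,
# start state, and set of accepting states.
# This example machine accepts binary strings with an even number of 1s.
TRANSITIONS = {
    ("even", "0"): "even",
    ("even", "1"): "odd",
    ("odd", "0"): "odd",
    ("odd", "1"): "even",
}
START = "even"
ACCEPTING = {"even"}

def accepts(word):
    """Run the DFA on `word`; report whether it ends in an accepting state."""
    state = START
    for symbol in word:
        state = TRANSITIONS[(state, symbol)]
    return state in ACCEPTING

print(accepts("1001"))  # two 1s   -> True
print(accepts("1101"))  # three 1s -> False
```

The same state-table idea, scaled up, is what drives the text-processing and hardware applications mentioned in the bullet list.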

The word ‘automata’ comes from the Greek Αυτόματα, meaning something that acts by itself.

Studying these major branches of computation gives a deep understanding of the fundamental capabilities and limitations of computers. Although ‘Theory of Automata’ began as the study of abstract computing devices, sometimes simply called machines, today’s real machines are the successful results of those abstractions.

History of the Theory of Automata:

The 1930’s: Alan Turing studied an abstract machine that had all the capabilities of today’s computers to solve problems. Turing’s goal was to describe precisely the boundary between what a computing machine could do and what it could not do.

The 1940’s and 1950’s: Simpler kinds of machines, which we today call ‘finite automata’, were studied. These automata, originally proposed to model brain function, turned out to be extremely useful for a variety of other purposes, such as designing software and checking the behavior of the digital circuits used in computers.

Late 1950’s to 1960’s: N. Chomsky began the study of formal ‘grammars’. These grammars are not machines in the strict sense, but they have close relationships to abstract automata. Today these grammars serve as the basis of some important software components, including parts of compilers.

After 1960’s: Stephen Cook extended Turing’s study of what could and could not be computed. In 1971, Cook succeeded in separating the problems that can be solved efficiently by computer from those that can in principle be solved, but in practice take so much time that computers are useless for all but very small instances of the problem. The latter class of problems is called ‘intractable’, better known as the ‘NP-hard’ problems.

Important reasons to study the Theory of Computation:

The major reasons why the theory of computation is important to study are listed below:

  1. To better understand the development of formal mathematical models of computation that reflect real-world computers.
  2. To achieve a deep understanding of the mathematical properties of computer hardware and software.
  3. To give precise mathematical definitions of computation and of the algorithm.
  4. To identify the limitations of computers and answer the question: what kinds of problems can be computed?

