What is the Theory of Computation?
The ‘Theory of Computation’, or ‘Theory of Automata’, is a core area of computer science and engineering. It is the branch that aims at a deep understanding of computational processes by solving problems effectively with mathematical models, tools, and techniques. This understanding matters for many applications, from models of computation used in algorithm, compiler, and VLSI design to the creation of intelligent technology, cognitive psychology, and philosophy. This broad area of computation is divided into three major branches:
Complexity theory:
When solving problems with computers, the first question that arises in everyone’s mind is: “What makes some problems computationally hard and other problems computationally easy?”
Informally, a problem is called “computationally easy” if it can be solved efficiently. Examples of “easy” problems are:
- Sorting a sequence of, say, 1,000,000 numbers,
- Searching for a name in a telephone directory, and
- Computing the fastest way to drive from Ottawa to Miami.
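To make the first two bullets concrete, here is a small Python sketch (illustrative only, using only the standard library): sorting a million numbers takes O(n log n) comparisons, and looking up an entry in a sorted list, like a name in a telephone directory, takes only O(log n) steps with binary search.

```python
import bisect
import random

# "Easy" problems have efficient algorithms.
# Sorting n numbers: O(n log n). Searching a sorted list: O(log n).
numbers = random.sample(range(10_000_000), 1_000_000)
numbers.sort()  # fast even for a million items

target = numbers[500_000]
index = bisect.bisect_left(numbers, target)  # binary search
assert numbers[index] == target
```

Even on modest hardware this runs in about a second, which is what “efficiently solvable” means in practice.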
On the other hand, a problem is called “computationally hard” if it cannot be solved efficiently, or if we are unable to determine whether it can be solved efficiently. Examples of “computationally hard” problems are:
- Time table scheduling for all courses at Carleton,
- Factoring a 300-digit integer into its prime factors, and
- Computing a layout for chips in VLSI.
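The factoring bullet can be illustrated with a sketch (an assumption-free brute-force method, not how cryptographers actually attack factoring): trial division works, but its running time grows exponentially in the number of digits, which is why a 300-digit integer is out of reach.

```python
def trial_division(n):
    """Factor n by trial division.

    The loop runs up to sqrt(n) times, roughly 10**(d/2) steps for a
    d-digit number: fine for small n, hopeless for 300-digit integers,
    even though each individual step is trivial.
    """
    factors = []
    p = 2
    while p * p <= n:
        while n % p == 0:
            factors.append(p)
            n //= p
        p += 1
    if n > 1:
        factors.append(n)
    return factors

print(trial_division(2 ** 32 + 1))  # [641, 6700417] -- still quick at 10 digits
```

No polynomial-time factoring algorithm for classical computers is known, which is exactly the sense in which the problem is “computationally hard”.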
Computability theory:
In the 1930s, Kurt Gödel, Alonzo Church, Alan Turing, Stephen Kleene, and Emil Post introduced theoretical models of computation in order to understand which mathematical problems are solvable and which are unsolvable; this work led to the development of real computers. Computability theory is also known as recursion theory, a name that comes from its origins in the study of computable (recursive) functions and Turing degrees.
Automata theory:
It is the study of abstract mathematical machines and deals with the definitions and properties of different types of “computation models”. Examples of such computational models are:
- Finite Automata: These are used in text processing, compilers, and hardware design.
- Context-Free Grammars: These are used to define programming languages and in Artificial Intelligence.
- Context-Sensitive Grammars: These are less general than unrestricted grammars; they are used in compiler design and in Artificial Intelligence.
- Turing Machines: These form a simple abstract model of a “real” computer, such as your PC at home.
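The simplest of these models, the finite automaton, is easy to simulate directly. The following sketch (names like `run_dfa` are illustrative, not from any standard library) runs a deterministic finite automaton that accepts exactly the binary strings containing an even number of 0s:

```python
def run_dfa(transitions, start, accepting, inp):
    """Run a DFA and report whether it accepts the input string."""
    state = start
    for symbol in inp:
        state = transitions[(state, symbol)]
    return state in accepting

# Two states track the parity of 0s seen so far.
dfa = {
    ("even", "0"): "odd",  ("even", "1"): "even",
    ("odd", "0"): "even",  ("odd", "1"): "odd",
}

print(run_dfa(dfa, "even", {"even"}, "1001"))  # True: two 0s
print(run_dfa(dfa, "even", {"even"}, "10"))    # False: one 0
```

Real uses of this idea, such as lexical analyzers in compilers or pattern matchers in text processing, are essentially this same loop with larger transition tables.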
The word ‘Automata’ comes from the Greek (Αυτόματα) and means something that acts or works by itself.
Studying these major branches of computation gives a deep understanding of the fundamental capabilities and limitations of computers. Although the ‘Theory of Automata’ began as the study of abstract computing devices, sometimes simply called machines, today’s real machines are the successful result of those abstractions.
History of the Theory of Automata:
1930s: Alan Turing studied an abstract machine that had all the capabilities of today’s computers to solve problems. Turing’s goal was to describe precisely the boundary between what a computing machine could do and what it could not do.
1940s to 1950s: Simpler kinds of machines, which we now call ‘finite automata’, were studied. These automata, originally proposed to model brain function, turned out to be extremely useful for a variety of other purposes, such as designing software and checking the behavior of the digital circuits used in computers.
Late 1950s to 1960s: N. Chomsky began the study of formal ‘grammars’. Grammars are not machines in the strict sense, but they have close relationships to abstract automata. Today these grammars serve as the basis of important software components, including parts of compilers.
After the 1960s: Stephen Cook extended Turing’s study of what could and could not be computed. In 1971, Cook succeeded in separating those problems that can be solved efficiently by computer from those that can in principle be solved, but in practice take so much time that computers are useless for all but very small instances. The latter class of problems is called ‘intractable’, better known as the ‘NP-hard’ problems.
Important reasons to study the Theory of Computation:
The major reasons why the theory of computation is important to study are listed below:
- To better understand the development of formal mathematical models of computation that reflect real-world computers.
- To achieve a deep understanding of the mathematical properties of computer hardware and software.
- To obtain precise mathematical definitions of ‘computation’ and ‘algorithm’.
- To identify the limitations of computers and answer the question: what kinds of problems can be computed?
In summary, the Theory of Computation (or Theory of Automata) is a fundamental part of computer science and engineering. It focuses on understanding computational processes deeply, using mathematical models and algorithms to solve problems efficiently, whether approached formally or informally. It teaches mathematical concepts and logical arguments, supplies precise definitions and properties for the different models of computation, and, through computability (recursion) theory, tells us which mathematical problems are solvable and which are not.