Branches of AI
Q. What are the branches of AI?
A. Here's a list, but some branches are surely missing, because no-one has identified them yet. Some of these may be regarded as concepts or topics rather than full branches.
logical AI
What a program knows about the world in general, the facts of the specific situation in which it must act, and its goals are all represented by sentences of some mathematical logical language. The program decides what to do by inferring that certain actions are appropriate for achieving its goals. The first article proposing this was [McC59]. [McC89] is a more recent summary. [McC96b] lists some of the concepts involved in logical AI. [Sha97] is an important text.
search
AI programs often examine large numbers of possibilities, e.g. moves in a chess game or inferences by a theorem proving program. Discoveries are continually made about how to do this more efficiently in various domains.
pattern recognition
When a program makes observations of some kind, it is often programmed to compare what it sees with a pattern. For example, a vision program may try to match a pattern of eyes and a nose in a scene in order to find a face. More complex patterns, e.g. in a natural language text, in a chess position, or in the history of some event are also studied. These more complex patterns require quite different methods than do the simple patterns that have been studied the most.
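The simplest kind of matching described above can be sketched in a few lines: slide a small template over a binary "image" and report where it matches exactly. The image, the template, and the "two eyes above a nose" reading are all made up for illustration; real vision systems use far more robust methods.

```python
# Sketch of exact template matching over a tiny binary image.
# IMAGE and TEMPLATE are invented for this illustration.

IMAGE = [
    [0, 0, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
]

TEMPLATE = [          # a toy "two eyes above a nose" pattern
    [1, 0, 1],
    [0, 1, 0],
]

def find_pattern(image, template):
    """Return all (row, col) offsets where template matches image exactly."""
    th, tw = len(template), len(template[0])
    hits = []
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            if all(image[r + i][c + j] == template[i][j]
                   for i in range(th) for j in range(tw)):
                hits.append((r, c))
    return hits

print(find_pattern(IMAGE, TEMPLATE))   # [(1, 1)]
```

More complex patterns, as the text notes, cannot be handled this way; exact matching is the baseline the more sophisticated methods improve on.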
representation
Facts about the world have to be represented in some way. Usually languages of mathematical logic are used.
inference
From some facts, others can be inferred. Mathematical logical deduction is adequate for some purposes, but new methods of non-monotonic inference have been added to logic since the 1970s. The simplest kind of non-monotonic reasoning is default reasoning, in which a conclusion is inferred by default but can be withdrawn if there is evidence to the contrary. For example, when we hear of a bird, we may infer that it can fly, but this conclusion can be reversed when we hear that it is a penguin. It is the possibility that a conclusion may have to be withdrawn that constitutes the non-monotonic character of the reasoning. Ordinary logical reasoning is monotonic in that the set of conclusions that can be drawn from a set of premises is a monotonically increasing function of the premises. Circumscription is another form of non-monotonic reasoning.
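The bird/penguin default can be sketched in code. This is a minimal toy, not a real non-monotonic logic engine; the `DefaultReasoner` class and its rule are inventions for this sketch.

```python
# Minimal sketch of default (non-monotonic) reasoning: a conclusion
# holds by default unless contrary evidence is later added, at which
# point it is withdrawn.

class DefaultReasoner:
    def __init__(self):
        self.facts = set()

    def tell(self, fact):
        self.facts.add(fact)

    def can_fly(self, name):
        # Default rule: birds fly, unless we know of an exception.
        if (name, "penguin") in self.facts:
            return False          # contrary evidence withdraws the default
        return (name, "bird") in self.facts

kb = DefaultReasoner()
kb.tell(("tweety", "bird"))
print(kb.can_fly("tweety"))       # True, inferred by default
kb.tell(("tweety", "penguin"))    # adding a premise reverses the conclusion
print(kb.can_fly("tweety"))       # False: the reasoning is non-monotonic
```

Note the monotonicity contrast: in ordinary deduction, adding the penguin fact could never remove a conclusion; here it does.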
common sense knowledge and reasoning
This is the area in which AI is farthest from human-level, in spite of the fact that it has been an active research area since the 1950s. There has been considerable progress, e.g. in developing systems of non-monotonic reasoning and theories of action, but more new ideas are needed. The Cyc system contains a large but spotty collection of common sense facts.
learning from experience
Programs do that. The approaches to AI based on connectionism and neural nets specialize in that. There is also learning of laws expressed in logic. [Mit97] is a comprehensive undergraduate text on machine learning. Programs can only learn what facts or behaviors their formalisms can represent, and unfortunately learning systems are almost all based on very limited abilities to represent information.
planning
Planning programs start with general facts about the world (especially facts about the effects of actions), facts about the particular situation and a statement of a goal. From these, they generate a strategy for achieving the goal. In the most common cases, the strategy is just a sequence of actions.
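The recipe above — general facts about action effects, a particular initial situation, and a goal, yielding a sequence of actions — can be sketched as a forward state-space search. The three actions and their (precondition, add, delete) triples below are invented for illustration.

```python
from collections import deque

# Sketch of a forward-search planner. Each action is a hypothetical
# (preconditions, add-list, delete-list) triple; the strategy found is
# a sequence of actions, the most common case described above.

ACTIONS = {
    "pick_up_key": ({"at_door"}, {"has_key"}, set()),
    "unlock_door": ({"at_door", "has_key"}, {"door_open"}, set()),
    "enter_room":  ({"door_open"}, {"in_room"}, {"at_door"}),
}

def plan(initial, goal):
    """Breadth-first search from the initial state to any state
    satisfying the goal; returns the action sequence, or None."""
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, actions = frontier.popleft()
        if goal <= state:
            return actions
        for name, (pre, add, delete) in ACTIONS.items():
            if pre <= state:
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, actions + [name]))
    return None

print(plan({"at_door"}, {"in_room"}))
# ['pick_up_key', 'unlock_door', 'enter_room']
```

Real planners use far cleverer search and richer action representations, but the inputs and outputs are exactly as described.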
epistemology
This is a study of the kinds of knowledge that are required for solving problems in the world.
ontology
Ontology is the study of the kinds of things that exist. In AI, the programs and sentences deal with various kinds of objects, and we study what these kinds are and what their basic properties are. Emphasis on ontology began in the 1990s.
heuristics
A heuristic is a way of trying to discover something, or an idea embedded in a program. The term is used variously in AI. Heuristic functions are used in some approaches to search to measure how far a node in a search tree seems to be from a goal. Heuristic predicates that compare two nodes in a search tree to see if one is better than the other, i.e. constitutes an advance toward the goal, may be more useful. [My opinion].
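A heuristic function of the kind mentioned can be seen in a greedy best-first search: the node with the smallest estimated distance to the goal is expanded first. The graph and the heuristic values below are made up for illustration.

```python
import heapq

# Sketch of a heuristic function guiding search: greedy best-first
# search on a small invented graph. H[n] estimates how far node n
# seems to be from the goal, as described above.

GRAPH = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": ["Goal"],
    "Goal": [],
}

H = {"A": 4, "B": 2, "C": 3, "D": 1, "Goal": 0}   # estimated distance to Goal

def best_first(start, goal):
    """Expand the node the heuristic rates closest to the goal first."""
    frontier = [(H[start], start, [start])]
    seen = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nbr in GRAPH[node]:
            heapq.heappush(frontier, (H[nbr], nbr, path + [nbr]))
    return None

print(best_first("A", "Goal"))   # ['A', 'B', 'D', 'Goal']
```

Because H rates B (2) better than C (3), the search never bothers expanding C: the heuristic prunes the tree.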
genetic programming
Genetic programming is a technique for getting programs to solve a task by mating random Lisp programs and selecting the fittest over millions of generations. It is being developed by John Koza's group.
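Full genetic programming mates program trees; the evolutionary core it builds on (select the fittest, mate by crossover, mutate, repeat for many generations) can be sketched on plain bit strings. The target string and all parameters below are invented for illustration, and this is a genetic algorithm rather than genetic programming proper.

```python
import random

# Sketch of the evolutionary loop underlying genetic programming,
# applied to bit strings instead of Lisp programs: keep the fittest
# half, mate pairs by one-point crossover, and occasionally mutate.

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

def fitness(ind):
    """Number of positions matching the target."""
    return sum(a == b for a, b in zip(ind, TARGET))

def evolve(pop_size=40, generations=200, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break                               # perfect individual found
        parents = pop[: pop_size // 2]          # survival of the fittest
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(TARGET)) # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:              # occasional mutation
                i = rng.randrange(len(TARGET))
                child[i] ^= 1
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

In genetic programming the individuals would be expression trees and crossover would swap random subtrees, but the selection pressure works the same way.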
Sunday, August 31, 2008
Tuesday, August 19, 2008
AI definition c/o Britannica
the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience. Since the development of the digital computer in the 1940s, it has been demonstrated that computers can be programmed to carry out very complex tasks—as, for example, discovering proofs for mathematical theorems or playing chess—with great proficiency. Still, despite continuing advances in computer processing speed and memory capacity, there are as yet no programs that can match human flexibility over wider domains or in tasks requiring much everyday knowledge. On the other hand, some programs have attained the performance levels of human experts and professionals in performing certain specific tasks, so that artificial intelligence in this limited sense is found in applications as diverse as medical diagnosis, computer search engines, and voice or handwriting recognition.
Why I think PayDotCom is the Best!
Hi
Sean DeHoney here...
If you are familiar with Clickbank.com (R), or even if you are not but you want to make profits online, then you will want to check this out ASAP ...
While I like Clickbank, and they are a great marketplace... they impose many restrictions on selling products or earning affiliate commissions...
Well, there is a GREAT NEW SERVICE now...
It is a new FREE marketplace where you can sell any product you want.
Your OWN product...
- OR - (the best part)
You can become an INSTANT Affiliate for ANY item in their HUGE marketplace.
It is called PayDotCom.com!
Did I mention it is 100% FREE to Join!
This site is going to KILL all other marketplaces, and by now almost EVERY SINGLE SERIOUS online marketer has an account with PayDotCom.com
So get yours now and see how much they offer...
OH! - Also, they have their own affiliate program now that pays you COLD HARD cash just for sharing the site with people like I am doing with you...
They give you cool tools like BLOG WIDGETS, and they even have an advertising program to help you get traffic to your site.
If you want an ARMY of affiliates to sell your products for you, they also allow you to have Free placement in their marketplace!
Even better... If your product becomes one of the Top 25 products in its category in the marketplace (not that hard to do)...
...then you will get Free advertising on the Blog Widget which is syndicated on THOUSANDS of sites World Wide and get Millions of impressions per month.
So, what are you waiting for...
PayDotCom.com ROCKS!
Get your FREE account now...
http://paydotcom.net/?affiliate=431095
Thanks,
Sean DeHoney
P.S. - Make sure to get your Account NOW while it is Free to join.
Wednesday, August 13, 2008
Artificial Intelligence.
There is so much out there and I hope to be adding more soon; if you would like to comment to make suggestions for sites with articles, it would be greatly appreciated. I am a low-level programmer but a brilliant web developer, and I know how to promote and market websites. So if you have an idea that you want to get out there, I can help you. I like standing up for the little guys. I would be willing to consider doing a website for a small project or a project with limited funds. Of course, if you want and need a site, you can pay me for that. ;-) Contact me with the info below...
sean@farpoint-systems.com
History of AI
History of AI research
Main articles: history of artificial intelligence and timeline of artificial intelligence
In the middle of the 20th century, a handful of scientists began a new approach to building intelligent machines, based on recent discoveries in neurology, a new mathematical theory of information, an understanding of control and stability called cybernetics, and above all, the invention of the digital computer, a machine based on the abstract essence of mathematical reasoning.[28]
The field of modern AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956.[29] Those who attended would become the leaders of AI research for many decades, especially John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon, who founded AI laboratories at MIT, CMU and Stanford. They and their students wrote programs that were, to most people, simply astonishing:[30] computers were solving word problems in algebra, proving logical theorems and speaking English.[31] By the middle 60s their research was heavily funded by the U.S. Department of Defense[32] and they were optimistic about the future of the new field:
* 1965, H. A. Simon: "[M]achines will be capable, within twenty years, of doing any work a man can do"[33]
* 1967, Marvin Minsky: "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved."[34]
These predictions, and many like them, would not come true. The field's founders had failed to recognize the difficulty of some of the problems they faced.[35] In 1974, in response to the criticism of England's Sir James Lighthill and ongoing pressure from Congress to fund more productive projects, the U.S. and British governments cut off all undirected, exploratory research in AI. This was the first AI Winter.[36]
In the early 80s, AI research was revived by the commercial success of expert systems (a form of AI program that simulated the knowledge and analytical skills of one or more human experts) and by 1985 the market for AI had reached more than a billion dollars.[37] Minsky and others warned the community that enthusiasm for AI had spiraled out of control and that disappointment was sure to follow.[38] Beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, more lasting AI Winter began.[39]
In the 90s and early 21st century AI achieved its greatest successes, albeit somewhat behind the scenes. Artificial intelligence was adopted throughout the technology industry, providing the heavy lifting for logistics, data mining, medical diagnosis and many other areas.[40] The success was due to several factors: the incredible power of computers today (see Moore's law), a greater emphasis on solving specific subproblems, the creation of new ties between AI and other fields working on similar problems, and above all a new commitment by researchers to solid mathematical methods and rigorous scientific standards.[41]
In the Beginning
Artificial intelligence (AI) is both the intelligence of machines and the branch of computer science which aims to create it.
Major AI textbooks define artificial intelligence as "the study and design of intelligent agents,"[1] where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success.[2] John McCarthy, who coined the term in 1956,[3] defines it as "the science and engineering of making intelligent machines."[4]
Among the traits that researchers hope machines will exhibit are reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects.[5] General intelligence (or "strong AI") has not yet been achieved and is a long-term goal of some AI research.[6]
AI research uses tools and insights from many fields, including computer science, psychology, philosophy, neuroscience, cognitive science, linguistics, ontology, operations research, economics, control theory, probability, optimization and logic.[7] AI research also overlaps with tasks such as robotics, control systems, scheduling, data mining, logistics, speech recognition, facial recognition and many others.[8]
Other names for the field have been proposed, such as computational intelligence,[9] synthetic intelligence,[9] intelligent systems,[10] or computational rationality.[11] These alternative names are sometimes used to set oneself apart from the part of AI dealing with symbols (considered outdated by many, see GOFAI) which is often associated with the term “AI” itself.