The major focus of this essay is the future of Artificial Intelligence (AI). To better understand how AI is likely to grow, I intend first to explore its history and current state. By showing how its role in our lives has changed and expanded so far, I will be better able to predict its future trends. Artificial intelligence is the term commonly used for computers that can think, and it is an apt term when you consider it. A computer is a completely artificial machine, made up of parts developed for a specific purpose. If the machine is given any kind of intelligence, that intelligence must come from man himself, because the computer lacks the capability to develop it on its own. With this in mind, researchers in artificial intelligence are working on ways to make the computers of the future more human-like in nature. This is done by way of intelligence chips built into the computer system that teach the machine how to learn on its own from outside sources, without having to be prompted to do so by man.

John McCarthy first coined the term artificial intelligence in 1956 at Dartmouth College. At that point electronic computers, the obvious platform for such a technology, were still less than thirty years old, the size of lecture halls, and saddled with storage and processing systems far too slow to do the concept justice. It wasn't until the computing boom of the '80s and '90s that the hardware began to catch up with the ambitions of the AI theorists and the field really started to take off. If artificial intelligence can match in the coming decade the advances made in the last one, it is set to become as common a part of our daily lives as computers have been in our lifetimes.

Artificial intelligence has had many different definitions attached to it since its birth, and the most important shift in its history to date is in how it defines what it is designed to replicate. When AI was young, its aims were limited to replicating the function of the human mind; as the research developed, new intelligent things to replicate, such as insects or genetic material, became apparent. The limitations of the field were also becoming clear, and out of this AI as we understand it today emerged.

The first AI systems followed a purely symbolic approach. Classical AI's method was to build intelligences on a set of symbolic representations and rules for manipulating them. One of the most important problems with such a system is that of symbol grounding. If every piece of knowledge in the system is represented by a set of symbols, and a particular set of symbols ("dog", for example) has a definition made up of other symbols ("canine mammal"), then that definition needs a definition ("mammal: a creature with four legs and a constant internal temperature"), and that definition needs a definition, and so on. At what point does this symbolically defined knowledge get described in a way that needs no further definition to be complete? The sketch below makes the regress concrete.
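To illustrate the regress, here is a minimal Python sketch. The symbols, the definitions, and the ground function are all invented for this essay rather than taken from any real AI system; the point is only that a purely symbolic knowledge base defines its symbols solely in terms of other symbols.

# A toy symbolic knowledge base: every symbol is "defined" only
# in terms of other symbols, never in terms of the outside world.
knowledge = {
    "dog": ["canine", "mammal"],
    "canine": ["carnivorous", "mammal"],
    "mammal": ["creature", "four-legged", "warm-blooded"],
    "creature": ["living", "thing"],
}

def ground(symbol, depth=0, seen=None):
    # Expand a symbol's definition recursively, printing each step.
    # Every branch either loops back on a symbol already being
    # expanded or dead-ends at a symbol with no definition at all.
    seen = seen if seen is not None else set()
    indent = "  " * depth
    if symbol in seen:
        print(indent + symbol + "  <- circular: already being defined")
        return
    seen.add(symbol)
    parts = knowledge.get(symbol)
    if parts is None:
        print(indent + symbol + "  <- ungrounded: no definition left")
        return
    print(indent + symbol + " := " + " + ".join(parts))
    for part in parts:
        ground(part, depth + 1, seen)

ground("dog")

Every path through ground("dog") ends in "circular" or "ungrounded": nothing in the system ever reaches a definition that needs no further definition, which is exactly the problem the question above describes.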

A well-known test of this purely logical approach came when IBM set out to build a chess computer, Deep Blue, that could defeat a grand master champion. The company believed it had created the perfect chess-playing machine and put it to the test. In the first match, in 1996, the grand master Garry Kasparov won, largely because he could think outside the box while the machine could only play from the statistics it had been given. Deep Blue did win the rematch a year later, but it won by brute-force calculation of millions of positions per second, not by anything resembling human thought. The episode showed the world that we are still light years away from actual thinking machines that could replace the human race. The computer will have to come a long way from its current state before it can show that it thinks with emotion and not just logic.