

Updated: Aug 23, 2021 In the first half of the 20th century, science fiction familiarized the world with the concept of artificially intelligent robots. By the 1950s, there was a generation of scientists, mathematicians, and philosophers for whom the concept of artificial intelligence had become culturally ingrained. One such person was Alan Turing, a young British polymath who explored the mathematical possibility of artificial intelligence. Turing suggested that humans use available information as well as reason to solve problems and make decisions, so why can't machines do the same? This was the logical framework of his 1950 paper, Computing Machinery and Intelligence, in which he discussed how to build intelligent machines and how to test their intelligence.


Unfortunately, talk is cheap. What stopped Turing from getting to work right then and there? First, computers needed to change fundamentally. Before 1949, computers lacked a key prerequisite for intelligence: they could execute commands, but they could not store them. In other words, computers could be told what to do, but they couldn't remember what they did. Second, computing was extremely expensive. In the early 1950s, renting a computer cost up to $200,000 a month. Only prestigious universities and big technology companies could afford to venture into these uncharted waters. A proof of concept, as well as advocacy from high-profile people, was needed to persuade funding sources that machine intelligence was worth pursuing.


Five years later, the proof of concept arrived in the form of Allen Newell, Cliff Shaw, and Herbert Simon's Logic Theorist, a program designed to mimic the problem-solving skills of a human, funded by the Research and Development (RAND) Corporation. It is considered by many to be the first artificial intelligence program, and it was presented at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), hosted by John McCarthy and Marvin Minsky in 1956. At this historic conference, McCarthy, imagining a great collaborative effort, brought together top researchers from various fields for an open-ended discussion on artificial intelligence. Unfortunately, the conference fell short of McCarthy's expectations; people came and went as they pleased, and no agreement was reached on standard methods for the field. Despite this, everyone wholeheartedly shared the sentiment that AI was achievable. The significance of this event cannot be overstated, as it catalyzed the next two decades of artificial intelligence research.


From 1957 to 1974, AI flourished. Computers could store more information and became faster, cheaper, and more accessible. Machine learning algorithms also improved, and people got better at knowing which algorithm to apply to their problem. Early demonstrations such as Newell and Simon's General Problem Solver and Joseph Weizenbaum's ELIZA showed promise toward the goals of problem solving and the interpretation of spoken language, respectively. These successes, along with the advocacy of leading researchers (namely the DSRPAI attendees), convinced government agencies such as the Defense Advanced Research Projects Agency (DARPA) to fund AI research at several institutions. The government was particularly interested in machines that could transcribe and translate spoken language, as well as in high-throughput data processing. Optimism was high and expectations were even higher. In 1970, Marvin Minsky told Life magazine, "from three to eight years we will have a machine with the general intelligence of an average human being." However, while the basic proof of principle was there, there was still a long way to go before the end goals of natural language processing, abstract thinking, and self-recognition could be achieved.

Breaching the initial fog of AI revealed a mountain of obstacles. The biggest was the lack of computational power to do anything substantial: computers simply could not store enough information or process it fast enough. In order to communicate, for example, one needs to know the meanings of many words and understand them in many combinations. Hans Moravec, a doctoral student of McCarthy at the time, noted that "computers were still millions of times too weak to exhibit intelligence." As patience dwindled, so did the funding, and research came to a slow roll for a decade.

In the 1980s, AI was reignited by two sources: an expansion of the algorithmic toolkit and a boost in funding. John Hopfield and David Rumelhart popularized "deep learning" techniques that allowed computers to learn from experience. Edward Feigenbaum, meanwhile, introduced expert systems, which mimic the decision-making process of a human expert. Such a program would ask an expert in a field how to respond in a given situation, and once this was learned for virtually every situation, non-experts could receive advice from the program. Expert systems came to be widely used in industry. The Japanese government heavily funded expert systems and other AI-related endeavors as part of its Fifth Generation Computer Project (FGCP). From 1982 to 1990, it invested $400 million with the goals of revolutionizing computer processing, implementing logic programming, and advancing artificial intelligence. Unfortunately, most of the ambitious goals were not met. However, it could be argued that the indirect effects of the FGCP inspired a talented young generation of engineers and scientists. Regardless, funding for the FGCP ceased, and AI fell out of the spotlight.
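The core mechanism of an expert system described above — situation-to-advice rules elicited from a specialist, then queried by non-experts — can be sketched in a few lines. This is a minimal illustration, not any historical system; the rules and domain below are invented:

```python
# Minimal rule-based "expert system" sketch: knowledge captured from a
# specialist is stored as condition -> advice rules, then queried.
# The medical-style rules below are invented purely for illustration.

def matches(rule_conditions, facts):
    """A rule fires when all of its conditions appear in the observed facts."""
    return rule_conditions.issubset(facts)

# Knowledge base: each rule pairs a set of conditions with the expert's advice.
KNOWLEDGE_BASE = [
    ({"fever", "cough"}, "Suspect flu; recommend rest and fluids."),
    ({"fever", "rash"}, "Suspect measles; refer to a physician."),
    ({"cough"}, "Suspect common cold; monitor symptoms."),
]

def consult(facts):
    """Return advice from the most specific rule that matches the facts."""
    fired = [(conds, advice) for conds, advice in KNOWLEDGE_BASE
             if matches(conds, facts)]
    if not fired:
        return "No matching rule; consult a human expert."
    # Prefer the rule with the most conditions (the most specific match).
    return max(fired, key=lambda r: len(r[0]))[1]

print(consult({"fever", "cough"}))  # the two-condition flu rule wins
print(consult({"headache"}))        # no rule fires
```

Real expert systems of the era (such as MYCIN or systems built in languages like CLIPS) used far richer rule formats and inference engines, but the basic pattern — a knowledge base separate from the inference procedure — is the same.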

Ironically, in the absence of government funding and public hype, AI thrived. During the 1990s and 2000s, many of the landmark goals of artificial intelligence were achieved. In 1997, reigning world chess champion and grandmaster Garry Kasparov was defeated by Deep Blue, a chess-playing computer program from IBM. This highly publicized match was the first time a reigning world chess champion had lost to a computer, and it served as a huge step towards artificially intelligent decision-making programs. In the same year, speech recognition software developed by Dragon Systems was implemented on Windows. This was another great step forward, this time in the direction of spoken language interpretation. It seemed there wasn't a problem machines couldn't handle. Even human emotion was fair game, as evidenced by Kismet, a robot developed by Cynthia Breazeal that could recognize and display emotions.


We haven't gotten smarter about how we code artificial intelligence, so what changed? It turns out that the fundamental limit of computer storage that was holding us back 30 years ago is no longer a problem. Moore's Law, which estimates that the memory and speed of computers doubles every year, had finally caught up with and, in many cases, surpassed our needs. This is precisely how Deep Blue was able to defeat Garry Kasparov in 1997, and how Google's AlphaGo was able to defeat Chinese Go champion Ke Jie only a few months ago. It offers a bit of an explanation for the roller coaster of AI research: we saturate the capabilities of AI to the level of our current computational power (computer storage and processing speed), and then wait for Moore's Law to catch up again.
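The compounding behind this catch-up effect is easy to quantify. Assuming the article's yearly doubling (Moore's 1965 observation; the commonly cited period is closer to every two years), 30 years of steady doubling multiplies capacity roughly a billion-fold:

```python
# Compounding behind Moore's Law: capacity grows as 2**n over n doubling periods.
def growth_factor(years, doubling_period_years=1.0):
    """Total capacity multiplier after `years` of steady doubling."""
    return 2 ** (years / doubling_period_years)

# Doubling every year for 30 years (the article's framing): 2**30
print(f"{growth_factor(30):,.0f}")      # 1,073,741,824 — about a billion-fold
# The more commonly cited two-year doubling period: 2**15
print(f"{growth_factor(30, 2.0):,.0f}") # 32,768-fold
```

Either way, the gap between a 1960s machine and a 1990s one is measured in many orders of magnitude, which is why problems that were hopeless for Turing's contemporaries became tractable by brute force.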


We now live in the age of "big data," an age in which we have the capacity to collect huge sums of information too cumbersome for a person to process. The application of artificial intelligence in this regard has already been quite fruitful in several industries such as technology, banking, marketing, and entertainment. We've seen that even if algorithms don't improve much, big data and massive computing simply allow artificial intelligence to learn through brute force. There may be evidence that Moore's Law is slowing down a tad, but the increase in data certainly hasn't lost any momentum. Breakthroughs in computer science, mathematics, or neuroscience all serve as potential outs through the ceiling of Moore's Law.


So, what is in store for the future? In the immediate future, AI language looks like the next big thing. In fact, it's already underway. Can you remember the last time you called a company and spoke directly with a human? These days, machines are even calling us! One could imagine interacting with an expert system in a fluid conversation, or having a conversation in two different languages translated in real time. We can also expect to see driverless cars on the road. In the long term, the goal is general intelligence: a machine that surpasses human cognitive abilities in all tasks. Regardless, we can expect AI to become steadily woven into society and everyday work.
