WHAT ARE THE MILESTONES IN ARTIFICIAL INTELLIGENCE DEVELOPMENT?

Updated: Aug 23, 2021

While modern narrow AI is limited to performing specific tasks within its area of expertise, these systems can sometimes achieve superhuman performance, and in some cases even exhibit superior creativity, a trait often considered inherently human.

There has been too much development to compile a definitive list, but some key milestones include:

· In 2009, Google showed that its driverless Toyota Prius could put society on a path toward driverless vehicles by completing more than ten journeys of 100 miles each.

· In 2011, IBM's Watson computer system won the US quiz show Jeopardy!, beating the two best players the show had produced. To win, Watson used natural language processing and analytics over vast pools of data, answering questions posed by humans usually in less than a second.

· Another breakthrough, in 2012, heralded the potential of artificial intelligence to tackle a host of tasks previously thought too complex for any machine. That year, the AlexNet system triumphed decisively in the ImageNet Large Scale Visual Recognition Challenge, roughly halving the image-recognition error rate compared with competing systems.


AlexNet's performance demonstrated the power of learning systems based on neural networks, a machine learning model that had existed for decades but finally realized its potential thanks to refinements in architecture and leaps in parallel processing power made possible by Moore's Law. Machine learning's knack for computer vision also made headlines that year, when Google trained a system to recognize an internet favorite: pictures of cats.

The next demonstration of the effectiveness of machine learning systems to capture public attention was the 2016 victory of Google DeepMind's AlphaGo over a human grandmaster in Go, an ancient Chinese game whose complexity had baffled computers for decades. Go has roughly 200 possible moves per turn, compared with about 20 in chess. Over the course of a Go game, there are so many possible moves that searching ahead through each of them to find the best play is computationally prohibitive. Instead, AlphaGo was trained to play in part by taking the moves played by human experts across 30 million Go games and feeding them into deep neural networks.
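The supervised idea described above, learning to predict an expert's move from a board position, can be sketched in heavily simplified form. Everything below is invented for illustration: a toy 3x3 board, synthetic data in place of the 30 million human games, and a single softmax layer trained with gradient descent in place of AlphaGo's deep convolutional policy network.

```python
# Heavily simplified sketch of supervised move prediction: a "policy"
# learns to imitate an expert's move given a board position. A toy 3x3
# board and synthetic data stand in for real games and deep networks.
import numpy as np

rng = np.random.default_rng(0)
BOARD_CELLS = 9      # flattened 3x3 board
N_POSITIONS = 500    # synthetic (position, expert move) training pairs

# Each position marks which cells are empty (1) or occupied (0); the
# stand-in "expert move" is simply the first empty cell.
X = (rng.integers(-1, 2, size=(N_POSITIONS, BOARD_CELLS)) == 0).astype(float)
expert_moves = X.argmax(axis=1)

# One linear layer: logits = X @ W + b
W = np.zeros((BOARD_CELLS, BOARD_CELLS))
b = np.zeros(BOARD_CELLS)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

lr = 0.5
for _ in range(300):
    probs = softmax(X @ W + b)                       # predicted move distribution
    grad = probs
    grad[np.arange(N_POSITIONS), expert_moves] -= 1  # d cross-entropy / d logits
    grad /= N_POSITIONS
    W -= lr * X.T @ grad                             # gradient-descent update
    b -= lr * grad.sum(axis=0)

accuracy = (softmax(X @ W + b).argmax(axis=1) == expert_moves).mean()
print(f"move-prediction accuracy: {accuracy:.2f}")
```

The real AlphaGo followed this imitation step with reinforcement learning through self-play and a separate value network estimating who is ahead; the sketch shows only the supervised imitation idea.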

Training these deep learning networks can take a very long time and requires vast amounts of data to be ingested and iterated over as the system gradually refines its model to achieve the best results.


However, Google has more recently improved the training process with AlphaGo Zero, a system that plays "completely random" games against itself and then learns from the results. Demis Hassabis, CEO of Google DeepMind, has also unveiled a new version, AlphaZero, which mastered the games of chess and shogi as well.

And artificial intelligence continues to reach new milestones: a system trained by OpenAI has defeated the world's best players in one-on-one matches of the online multiplayer game Dota 2.


That same year, OpenAI created artificial intelligence bots that invented their own language to collaborate and achieve their goals more effectively, followed by Facebook training bots to negotiate and to lie.


2020 was the year in which an artificial intelligence system gained the ability to write and speak like a human on almost every subject you can think of.


The system in question, known as Generative Pre-trained Transformer 3, or GPT-3 for short, is a neural network trained on billions of English-language articles found on the open web.
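The core idea, predicting the next word from the text that came before it, can be illustrated with something far simpler than GPT-3's billions of parameters. The sketch below (toy corpus and bigram counts, both invented here) shows how next-word statistics learned from text can be used to generate text one word at a time:

```python
# Toy illustration of the idea behind language models like GPT-3: predict
# the next word from what came before. GPT-3 learns this from web-scale
# text with a huge neural network; this sketch uses simple bigram counts
# over a few made-up sentences instead.
from collections import Counter, defaultdict

corpus = (
    "the model reads text . "
    "the model predicts the next word . "
    "the next word completes the text ."
).split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word seen in training."""
    return following[word].most_common(1)[0][0]

def generate(start, length=5):
    """Greedily extend a prompt one word at a time."""
    words = [start]
    for _ in range(length):
        words.append(predict_next(words[-1]))
    return " ".join(words)

print(generate("the"))
```

The gap between this sketch and GPT-3 is conditioning on a single previous word versus thousands of words of context, which is what lets the real system stay on topic across whole articles.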

Soon after it was made available for testing by OpenAI, the internet was flooded with examples of GPT-3 producing articles on almost any subject it was prompted with, articles that at first glance were often difficult to distinguish from those written by a person. Similarly impressive results were reported in other areas, from convincingly answering questions on a wide range of topics to passing for a novice JavaScript coder.


But while many of the articles created by GPT-3 had an air of truth, further testing found that the sentences it generated often offered superficially plausible but incoherent statements, and were sometimes complete nonsense.


There is still great interest in using the model's natural-language understanding as the basis for future services. So much so that OpenAI offers beta access to its API to selected developers who want to build the model into their software, and the model is also set to be offered through Microsoft's Azure cloud platform.


Perhaps the most striking example of artificial intelligence's potential came in late 2020, when Google DeepMind's attention-based neural network AlphaFold 2 achieved a result that some have deemed worthy of a Nobel Prize in Chemistry.

The system's ability to look at a protein's building blocks, known as amino acids, and derive its 3D structure could profoundly accelerate the pace at which diseases are understood and drugs are developed. In the Critical Assessment of protein Structure Prediction (CASP) competition, AlphaFold 2 determined the 3D structure of proteins with an accuracy rivaling crystallography, the gold standard for modeling proteins.


Unlike crystallography, which takes months to produce results, AlphaFold 2 can model proteins within hours. With proteins playing such an important role in human biology and disease, and enzymes having potential applications in biotechnology and other fields, such an acceleration has been hailed as a turning point for medical science.

