The History of AI
A comprehensive history of Artificial Intelligence, from its lesser-known days to the age of Generative AI
"Artificial Intelligence is the science and engineering of making intelligent machines, especially intelligent computer programs." - John McCarthy
From the dawn of time, human beings have been fascinated by the idea of building machines that display intelligence. The Ancient Egyptians and Romans, for instance, were awe-struck by religious statues, manipulated by priests, that gestured and gave prophecies.
Medieval lore is packed with similar tales of objects that could move and talk like their human masters, and stories from the Middle Ages tell of sages who had access to a homunculus, a small artificial man that was a living, sentient being. The 16th-century Swiss alchemist Paracelsus (Theophrastus Bombastus von Hohenheim) was even quoted as saying: "We shall be like gods. We shall duplicate God's greatest miracle: the creation of man." Our species' latest attempt at creating synthetic intelligence is now known as AI.
In this article, I hope to provide a comprehensive history of Artificial Intelligence, right from its lesser-known days (when it wasn't even called AI) to the current age of Generative AI.
How I hope to approach this
This article will break down the history of AI into nine (9) milestones. Each milestone will be expanded upon, and it should be noted that the milestones will not be treated as disparate and unrelated; rather, their links to the overall history of Artificial Intelligence, and their progression from the immediately preceding milestones, will be discussed as well. Below are the milestones to be covered:
The Dartmouth Conference
The Perceptron
The AI boom of the 1960s
The AI winter of the 1980s
Expert Systems
The Emergence of Natural Language Processing and Computer Vision
The Rise of Big Data
Deep Learning
Generative AI
The Dartmouth Conference
The Dartmouth Conference of 1956 is a seminal event in the history of AI. It was a summer research project that took place at Dartmouth College in New Hampshire, USA. The conference was the first of its kind in the sense that it brought together researchers from seemingly disparate fields of study, such as computer science, mathematics, and physics, with the sole aim of exploring the potential of synthetic intelligence (the term AI had not yet been coined). The participants included John McCarthy, Marvin Minsky, and other prominent scientists and researchers.
During the conference, the participants discussed a wide range of topics related to AI, such as natural language processing, problem-solving, and machine learning. They also laid out a roadmap for AI research, including the development of programming languages and algorithms for creating intelligent machines. This conference is considered a seminal moment in the history of AI as it marked the birth of the field and also the moment the name “Artificial Intelligence” was coined.
The Dartmouth Conference had a significant impact on the overall history of AI. It helped to establish AI as a field of study and encouraged the development of new technologies and techniques. The participants set out a vision for AI, which included the creation of intelligent machines that could reason, learn, and communicate like human beings. This vision sparked a wave of research and innovation in the field.
Following the conference, John McCarthy and his colleagues went on to develop LISP, the first AI programming language. It became foundational to AI research and remains in use today.
The conference also led to the establishment of AI research labs at several universities and research institutions, including MIT, Carnegie Mellon, and Stanford. Closely associated with the new field was the Turing test: Alan Turing, a British mathematician, had proposed in 1950 that a machine could be judged intelligent if it exhibited behaviour indistinguishable from a human's. Although the test predated the conference, it became a central idea in the field of AI research and remains an important benchmark for measuring the progress of AI today.
The Dartmouth Conference was a pivotal event in the history of AI. It established AI as a field of study, set out a roadmap for research, and sparked a wave of innovation in the field. The conference’s legacy can be seen in the development of AI programming languages, research labs, and the Turing test.
The Perceptron
The Perceptron is an artificial neural network architecture designed by the psychologist Frank Rosenblatt in 1958. It gave traction to what is famously known as the brain-inspired approach to AI, in which researchers build AI systems that mimic the human brain.
In technical terms, the Perceptron is a binary classifier that can learn to classify input patterns into two categories. It works by taking a set of input values and computing a weighted sum of those values, followed by a threshold function that determines whether the output is 1 or 0. The weights are adjusted during the training process to optimize the performance of the classifier.
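That training loop can be sketched in a few lines of Python. The AND dataset, learning rate, and epoch count below are illustrative choices for the sketch, not details from Rosenblatt's original work:

```python
# A minimal sketch of the perceptron: a weighted sum of the inputs,
# a 0/1 threshold function, and a rule that nudges the weights
# whenever a prediction is wrong.

def predict(weights, bias, x):
    """Weighted sum followed by a 0/1 threshold (step) function."""
    total = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if total >= 0 else 0

def train(samples, labels, epochs=20, lr=0.1):
    """Adjust the weights whenever the prediction is wrong."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            error = y - predict(weights, bias, x)
            if error:
                weights = [w + lr * error * xi for w, xi in zip(weights, x)]
                bias += lr * error
    return weights, bias

# Learn the linearly separable AND function.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train(X, y)
print([predict(w, b, x) for x in X])  # → [0, 0, 0, 1]
```

Because AND is linearly separable, the learning rule is guaranteed to converge; the XOR function, by contrast, has no such separating line, which is exactly the limitation discussed below.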
The Perceptron was seen as a major milestone in AI because it demonstrated the potential of machine learning algorithms to mimic human intelligence. It showed that machines could learn from experience and improve their performance over time, much like humans do. The Perceptron was also significant because it was the next major milestone after the Dartmouth conference. The conference generated a lot of excitement about the potential of AI, but it was still largely a theoretical concept. The Perceptron, on the other hand, was a practical implementation of AI that showed that the concept could be turned into a working system.
The Perceptron was initially touted as a breakthrough in AI and received a lot of attention from the media. However, it was later shown to have limitations, particularly when it came to classifying data that is not linearly separable; Minsky and Papert's 1969 book Perceptrons famously proved that a single-layer perceptron cannot learn the XOR function. This led to a decline in interest in the Perceptron, and in AI research in general, in the late 1960s and 1970s.
However, the Perceptron was later revived and incorporated into more complex neural networks, leading to the development of deep learning and other forms of modern machine learning. Today, the Perceptron is seen as an important milestone in the history of AI and continues to be studied and used in the research and development of new AI technologies.
The AI Boom of the 1960s
As we discussed earlier, the 1950s were a momentous decade for the AI community, thanks to the creation and popularization of the Perceptron artificial neural network. The Perceptron was seen as a breakthrough in AI research and sparked a great deal of interest in the field, and that interest was a stimulant for what became known as the AI boom.
The AI boom of the 1960s was a period of significant progress and interest in the development of artificial intelligence (AI). It was a time when computer scientists and researchers explored new methods for creating intelligent machines and programming them to perform tasks traditionally thought to require human intelligence. As the flaws of the Perceptron became apparent during the decade, researchers began to explore other AI approaches, focusing on areas such as symbolic reasoning, natural language processing, and machine learning.
This research led to the development of new programming languages and tools, such as LISP and Prolog, that were specifically designed for AI applications. These new tools made it easier for researchers to experiment with new AI techniques and to develop more sophisticated AI systems. During this time, the US government also became interested in AI and began funding research projects through agencies such as the Defense Advanced Research Projects Agency (DARPA). This funding helped accelerate the development of AI and provided researchers with the resources they needed to tackle increasingly complex problems.
The AI boom of the 1960s culminated in the development of several landmark AI systems. One example is the General Problem Solver (GPS), which was created by Herbert Simon, J.C. Shaw, and Allen Newell. GPS was an early AI system that could solve problems by searching through a space of possible solutions. Another example is the ELIZA program, created by Joseph Weizenbaum, which was a natural language processing program that simulated a psychotherapist.
In summary, the AI boom of the 1960s was a period of significant progress in AI research and development. It was a time when researchers explored new AI approaches and developed new programming languages and tools specifically designed for AI applications. This research led to the development of several landmark AI systems that paved the way for future AI development.
The AI Winter of the 1980s
The AI Winter of the 1980s refers to a period when research and development in the field of Artificial Intelligence (AI) experienced a significant slowdown. Historians in fact identify two such winters, roughly 1974 to 1980 and 1987 to 1993; the one discussed here followed the decade of rapid progress described above.
As discussed in the previous section, the AI boom of the 1960s was characterised by an explosion in AI research and applications. Then came the AI winter of the 1980s. Many of the AI projects developed during the boom were failing to deliver on their promises, and the AI research community was becoming increasingly disillusioned with the lack of progress in the field. This led to funding cuts, and many AI researchers were forced to abandon their projects and leave the field altogether.
According to the Lighthill report, commissioned by the UK Science Research Council:
"AI has failed to achieve its grandiose objectives and in no part of the field have the discoveries made so far produced the major impact that was then promised."
The AI Winter of the 1980s was characterized by a significant decline in funding for AI research and a general lack of interest in the field among investors and the public. This led to a significant decline in the number of AI projects being developed, and many of the research projects that were still active were unable to make significant progress due to a lack of resources.
Despite the challenges of the AI Winter, the field of AI did not disappear entirely. Some researchers continued to work on AI projects and make important advancements during this time, including the development of neural networks and the beginnings of machine learning. However, progress in the field was slow, and it was not until the 1990s that interest in AI began to pick up again (we are coming to that).
Overall, the AI Winter of the 1980s was a significant milestone in the history of AI, as it demonstrated the challenges and limitations of AI research and development. It also served as a cautionary tale for investors and policymakers, who realized that the hype surrounding AI could sometimes be overblown and that progress in the field would require sustained investment and commitment.
Expert Systems
Expert systems are a type of artificial intelligence (AI) technology that was developed in the 1980s. Expert systems are designed to mimic the decision-making abilities of a human expert in a specific domain or field, such as medicine, finance, or engineering. During the 1960s and early 1970s, there was a lot of optimism and excitement around AI and its potential to revolutionize various industries. However, as we discussed in the past section, this enthusiasm was dampened by a period known as the AI winter, which was characterized by a lack of progress and funding for AI research.
The development of expert systems marked a turning point in the history of AI. Pressure was mounting on the AI community to deliver practical, scalable, robust, and quantifiable applications of Artificial Intelligence, and expert systems served as proof that AI could be used in real-life systems, with the potential to provide significant benefits to businesses and industries. Expert systems were used to automate decision-making processes in various domains, from diagnosing medical conditions to predicting stock prices.
In technical terms, expert systems are typically composed of a knowledge base, which contains information about a particular domain, and an inference engine, which uses this information to reason about new inputs and make decisions. Expert systems also incorporate various forms of reasoning, such as deduction, induction, and abduction, to simulate the decision-making processes of human experts.
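As a minimal illustration of that architecture, the following Python sketch pairs a tiny knowledge base of if-then rules with a forward-chaining inference engine; the medical-sounding facts and rules are made up for the example, not real diagnostic knowledge:

```python
# A toy expert system: the knowledge base is a list of
# (conditions, conclusion) rules, and the inference engine repeatedly
# fires any rule whose conditions are all known facts until no new
# facts can be derived (forward chaining).

RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "fatigue"}, "recommend_rest"),
]

def infer(facts, rules):
    """Derive every conclusion reachable from the starting facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "cough", "fatigue"}, RULES))
```

Note how the second rule only fires after the first has added "flu_suspected" to the facts; chaining intermediate conclusions like this is what lets rule-based systems mimic multi-step expert reasoning.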
Overall, expert systems were a significant milestone in the history of AI, as they demonstrated the practical applications of AI technologies and paved the way for further advancements in the field. Today, expert systems continue to be used in various industries, and their development helped pave the way for later advances in areas such as machine learning and natural language processing.
The Emergence of NLP and Computer Vision in the 1990s
This period is when AI research and globalization began to pick up momentum, and it marks the entry into the modern era of Artificial Intelligence. As discussed in the previous section, expert systems came into play around the late 1980s and early 1990s. However, they were limited by their reliance on structured data and rule-based logic, and they struggled to handle unstructured data, such as natural language text or images, which are inherently ambiguous and context-dependent.
To address this limitation, researchers began to develop techniques for processing natural language and visual information. In the 1970s and 1980s, significant progress was made in the development of rule-based systems for NLP and Computer Vision. However, these systems were still limited by the fact that they relied on pre-defined rules and were not capable of learning from data.
In the 1990s, advances in machine learning algorithms and computing power led to the development of more sophisticated NLP and Computer Vision systems. Researchers began to use statistical methods to learn patterns and features directly from data, rather than relying on pre-defined rules. This approach, known as machine learning, allowed for more accurate and flexible models for processing natural language and visual information.
One of the most significant milestones of this era was the widespread adoption of the Hidden Markov Model (HMM), which allowed for the probabilistic modelling of natural language text. This led to significant advances in speech recognition, language translation, and text classification. Similarly, in the field of Computer Vision, the emergence of Convolutional Neural Networks (CNNs) allowed for more accurate object recognition and image classification. These techniques are now used in a wide range of applications, from self-driving cars to medical imaging.
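To make the HMM idea concrete, here is a toy sketch of the forward algorithm in Python, computing the probability of a short word sequence by summing over all hidden part-of-speech paths. The states, vocabulary, and probabilities are made-up illustrative numbers, not from any real model:

```python
# A two-state HMM over a tiny vocabulary. "start" gives initial state
# probabilities, "trans" the state-to-state transition probabilities,
# and "emit" the probability of each word given a state.

states = ["Noun", "Verb"]
start = {"Noun": 0.6, "Verb": 0.4}
trans = {"Noun": {"Noun": 0.3, "Verb": 0.7},
         "Verb": {"Noun": 0.8, "Verb": 0.2}}
emit = {"Noun": {"dog": 0.5, "runs": 0.1, "barks": 0.4},
        "Verb": {"dog": 0.1, "runs": 0.5, "barks": 0.4}}

def likelihood(words):
    """Forward algorithm: P(word sequence), summed over hidden paths."""
    # alpha[s] = probability of the words so far AND ending in state s.
    alpha = {s: start[s] * emit[s][words[0]] for s in states}
    for w in words[1:]:
        alpha = {s: emit[s][w] * sum(alpha[p] * trans[p][s] for p in states)
                 for s in states}
    return sum(alpha.values())

print(round(likelihood(["dog", "runs"]), 4))  # → 0.1212
```

The same recursion, with argmax in place of the sum (the Viterbi algorithm), recovers the most likely hidden tag sequence, which is how HMM part-of-speech taggers and early speech recognizers worked.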
Overall, the emergence of NLP and Computer Vision in the 1990s represented a major milestone in the history of AI, as it allowed for more sophisticated and flexible processing of unstructured data. These techniques continue to be a focus of research and development in AI today, as they have significant implications for a wide range of industries and applications.
The Rise of Big Data
The concept of big data has been around for decades, but its rise to prominence in the context of artificial intelligence (AI) can be traced back to the early 2000s. For the sake of completeness, let's briefly discuss the term big data.
For data to be termed "big", it needs to fulfil three core attributes: Volume, Velocity, and Variety. Volume refers to the sheer size of the data set, which can range from terabytes to petabytes or even larger. Velocity refers to the speed at which the data is generated and needs to be processed; for example, data from social media or IoT devices can be generated in real time and needs to be processed quickly. Variety refers to the diverse types of data that are generated, including structured, unstructured, and semi-structured data.
Before the emergence of big data, AI was limited by the amount and quality of data that was available for training and testing machine learning algorithms. Natural language processing (NLP) and computer vision were two areas of AI that saw significant progress in the 1990s, but they were still limited by the amount of data that was available. For example, early NLP systems were based on hand-crafted rules, which were limited in their ability to handle the complexity and variability of natural language. The rise of big data changed this by providing access to massive amounts of data from a wide variety of sources, including social media, sensors, and other connected devices. This allowed machine learning algorithms to be trained on much larger datasets, which in turn enabled them to learn more complex patterns and make more accurate predictions.
At the same time, advances in data storage and processing technologies, such as Hadoop and Spark, made it possible to process and analyze these large datasets quickly and efficiently. This led to the development of new machine learning algorithms, such as deep learning, which are capable of learning from massive amounts of data and making highly accurate predictions.
Today, big data continues to be a driving force behind many of the latest advances in AI, from autonomous vehicles and personalized medicine to natural language understanding and recommendation systems. As the amount of data being generated continues to grow exponentially, the role of big data in AI will only become more important in the years to come.
Deep Learning
The emergence of Deep Learning is a major milestone in the globalization of modern Artificial Intelligence. Ever since the Dartmouth Conference of the 1950s, AI has been recognized as a legitimate field of study, and the early years of AI research focused on symbolic logic and rule-based systems, which involved manually programming machines to make decisions based on a set of predetermined rules. While these systems were useful in certain applications, they were limited in their ability to learn and adapt to new data.
It wasn’t until after the rise of big data that deep learning became a major milestone in the history of AI. With the exponential growth of data, researchers needed new ways to process and extract insights from vast amounts of information. Deep learning algorithms provided a solution to this problem by enabling machines to automatically learn from large datasets and make predictions or decisions based on that learning.
Deep learning is a type of machine learning that uses artificial neural networks, which are modelled after the structure and function of the human brain. These networks are made up of layers of interconnected nodes, each of which performs a specific mathematical function on the input data. The output of one layer serves as the input to the next, allowing the network to extract increasingly complex features from the data.
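The layered computation described above can be sketched in a few lines of NumPy. The layer sizes and random weights below are illustrative only; a real network would learn its weights from data (for example, by backpropagation):

```python
# A minimal forward pass through a feed-forward network: each layer
# computes a weighted sum of its inputs plus a bias, applies a
# nonlinearity, and feeds its output to the next layer.

import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    """Elementwise nonlinearity: negative values become zero."""
    return np.maximum(0, z)

def forward(x, layers):
    """Pass the input through each (weights, bias) layer in turn."""
    for W, b in layers:
        x = relu(W @ x + b)
    return x

# A 4 -> 8 -> 3 network: 4 input features, one hidden layer of 8
# nodes, and 3 outputs. Weights are random for illustration.
layers = [
    (rng.standard_normal((8, 4)), np.zeros(8)),
    (rng.standard_normal((3, 8)), np.zeros(3)),
]
out = forward(np.ones(4), layers)
print(out.shape)  # → (3,)
```

Stacking more such layers is what makes the network "deep": each additional layer can combine the previous layer's features into more abstract ones, which is the hierarchical learning described next.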
One of the key advantages of deep learning is its ability to learn hierarchical representations of data. This means that the network can automatically learn to recognize patterns and features at different levels of abstraction. For example, a deep learning network might learn to recognize the shapes of individual letters, then the structure of words, and finally the meaning of sentences. The development of deep learning has led to significant breakthroughs in fields such as computer vision, speech recognition, and natural language processing. For example, deep learning algorithms are now able to accurately classify images, recognize speech, and even generate realistic human-like language.
In conclusion, deep learning represents a major milestone in the history of AI, made possible by the rise of big data. Its ability to automatically learn from vast amounts of information has led to significant advances in a wide range of applications, and it is likely to continue to be a key area of research and development in the years to come.
Generative AI
This is the point in the AI timeline where we currently dwell as a species. Generative AI is a subfield of artificial intelligence (AI) that involves creating AI systems capable of generating new data or content that is similar to the data it was trained on. This can include generating images, text, music, and even videos.
In the context of the history of AI, generative AI can be seen as a major milestone that came after the rise of deep learning. Deep learning is a subset of machine learning that involves using neural networks with multiple layers to analyze and learn from large amounts of data. It has been incredibly successful in tasks such as image and speech recognition, natural language processing, and even playing complex games such as Go.
Transformers, a type of neural network architecture, have revolutionized generative AI. They were introduced in the paper "Attention Is All You Need" by Vaswani et al. in 2017 and have since been used in various tasks, including natural language processing, image recognition, and speech synthesis. Transformers use self-attention mechanisms to analyze the relationships between different elements in a sequence, allowing them to generate more coherent and nuanced output. This has led to the development of large language models such as GPT-4 (the model behind ChatGPT), which can generate human-like text on a wide range of topics.
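As a rough sketch of the self-attention idea, the following NumPy snippet computes scaled dot-product attention for a toy sequence, using the input itself as queries, keys, and values; real Transformers add learned projection matrices, multiple attention heads, and many stacked layers:

```python
# Simplified self-attention: every position in the sequence compares
# itself to every other position, turns the similarities into softmax
# weights, and outputs a weighted mix of the sequence.

import numpy as np

def self_attention(X):
    """Scaled dot-product attention with Q = K = V = X."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                 # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ X                             # weighted mix of values

X = np.random.default_rng(0).standard_normal((5, 16))  # 5 tokens, 16 dims
print(self_attention(X).shape)  # → (5, 16)
```

The key property is that every output position depends on the whole sequence at once, which is what lets Transformers model long-range relationships that earlier sequential models struggled with.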
AI art is another area where generative AI has had a significant impact. By training deep learning models on large datasets of artwork, generative AI can create new and unique pieces of art. The use of generative AI in art has sparked debate about the nature of creativity and authorship, as well as the ethics of using AI to create art. Some argue that AI-generated art is not truly creative because it lacks the intentionality and emotional resonance of human-made art. Others argue that AI art has its own value and can be used to explore new forms of creativity.
Large language models such as GPT-4 have also been used in the field of creative writing, with some authors using them to generate new text or as a tool for inspiration. This has raised questions about the future of writing and the role of AI in the creative process. While some argue that AI-generated text lacks the depth and nuance of human writing, others see it as a tool that can enhance human creativity by providing new ideas and perspectives.
In summary, generative AI, especially with the help of Transformers and large language models, has the potential to revolutionize many areas, from art to writing to simulation. While there are still debates about the nature of creativity and the ethics of using AI in these areas, it is clear that generative AI is a powerful tool that will continue to shape the future of technology and the arts.
Summary
As we have covered, the history of Artificial Intelligence has been a fascinating one, wrought with potential, anticlimaxes, and phenomenal breakthroughs. In a sense, with applications like ChatGPT, DALL·E, and others, we have only just scratched the surface of the possible applications of AI, and of course its challenges. There is definitely more to come, and I implore all of us to keep an open mind: be definitely optimistic while remaining indefinitely pessimistic.