The Best Type of Problems for Current AI
A framework for structuring AI solutions to complex problems
Artificial Intelligence is, at its core, a tool. Just as a hammer is suited to driving nails but not to tightening screws, AI is suited to some problems and not others. The purpose of this essay is to discuss the characteristics a problem must possess to make it a good fit for AI systems.
Let’s get right into it.
Massive Combinatorial Space
Problems with a large hypothesis space are well suited to AI. In other words, problems that involve a huge number of potential configurations, permutations, choices, and arrangements that must be explored before an optimal choice is picked are excellent problem sets for AI. A good example of this is chess. Chess has an estimated 10^44 legal board positions, and a game tree estimated at around 10^120 lines of play, so it is practically impossible for humans to enumerate them. On the other hand, search algorithms like minimax (with alpha-beta pruning and a learned evaluation function) can explore a large part of each position's game tree and find a strong move.
Problems like chip design, protein folding, scheduling, logistics, and other board games like Go also have huge combinatorial spaces. Essentially, the more nodes, factors, and constraints there are, the larger the solution space.
Many problems of this sort are NP-hard: problems for which no known algorithm finds the exact optimum efficiently as the input grows. AI systems thrive on such problems because they can search these enormous spaces heuristically and find solutions that are optimal or near-optimal. In fact, the larger the solution space, the greater AI's advantage over exhaustive search.
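A quick back-of-the-envelope sketch of how fast these spaces explode, using a toy logistics example: the number of distinct round trips through n cities.

```python
import math

# Number of distinct round trips through n cities, fixing the
# start city: (n - 1)! / 2 for a symmetric tour.
for n in (5, 10, 20):
    tours = math.factorial(n - 1) // 2
    print(f"{n} cities -> {tours:,} possible tours")
```

Five cities give 12 tours; twenty cities already give over 10^16, which is why brute-force enumeration stops being an option almost immediately.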
Clear Objective Function
AI, by nature, needs clear goals to optimize against. It is fair to say that one reason the creation of AGI is proving extremely difficult is the multi-faceted nature of general intelligence. Humans achieve general intelligence because we have a plethora of skills, like pattern recognition, logical reasoning, and mental simulation (imagination), at our disposal, but reducing all of these skills to a single optimizable metric oversimplifies them.
A few examples of metrics that can be optimized: reducing error rate, maximizing accuracy in predicting fouls in real-time gameplay, reducing fuel consumption, and maximizing user engagement with content recommendations tailored to their preferences.
This attribute matters because AI needs measurable objectives against which to judge its performance. In other words, AI systems require quantitative feedback on their performance to optimize their predictions.
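As a minimal sketch of what "optimizing against a measurable objective" means in practice, the toy example below fits a one-parameter model by gradient descent on mean squared error. The data and learning rate are invented for illustration.

```python
# A clear objective function: mean squared error between
# predictions and targets, minimized by gradient descent.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # generated by y = 2x, so the best slope is 2

w = 0.0    # model: y_hat = w * x
lr = 0.01  # learning rate
for _ in range(1000):
    # gradient of MSE with respect to w: (2/n) * sum((w*x - y) * x)
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # step downhill on the objective

print(round(w, 3))  # converges to 2.0
```

The quantitative feedback is the gradient of the error: at every step the system knows exactly how far off it is and in which direction to adjust. Without such a measurable signal there is nothing to descend.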
Lots of Data and/or an Accurate Simulator
To perform tasks proficiently, AI systems need experience of those tasks being performed. A good rule of thumb: the more experience available to an AI model, the better it will be at performing a given task.
Experience here can come in two forms: real-world data or simulation. A good example of real-world data in use is language models. At their core, language models are trained on a large corpus of human text (tweets, essays, articles, books, etc.) to learn the syntax, structure, and patterns of human language well enough to generate convincing human-like conversation.
A relatable example of simulation as a form of experience is self-driving cars, which are trained extensively on simulated roads before being tested on real ones. Simulation is commonly used as an alternative to real-world data, that is, when real-world data is scarce or absent.
Experience is what empowers AI to learn the process, nuances, patterns, and edge cases involved in performing tasks. Without it, they would essentially be blind. Taking the time to build diverse datasets and detail-rich simulations results in resilient and powerful AI systems.
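A toy sketch of using a simulator as a stand-in for scarce real-world data. The trip-time and traffic-delay model below is entirely invented for illustration; the point is that a system can draw unlimited "experience" from a simulator and learn from the statistics of those rollouts.

```python
import random

def simulate_trip(speed):
    """A toy road simulator: trip time over a fixed distance with a
    random congestion delay. The delay model is purely illustrative."""
    distance = 100.0
    delay = random.uniform(0, 0.5)  # random congestion factor
    return distance / speed * (1 + delay)

random.seed(0)  # reproducible rollouts

# Draw many simulated trips per candidate speed and compare averages,
# exactly as one would with logged real-world trips if they existed.
estimates = {}
for speed in (50, 80):
    estimates[speed] = sum(simulate_trip(speed) for _ in range(10_000)) / 10_000
    print(f"speed {speed}: avg trip time {estimates[speed]:.2f}h")
```

Ten thousand simulated trips cost a fraction of a second; ten thousand real ones would take years. That cost gap is why simulators are so valuable when real-world data is scarce.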
Examples of Problems Well Suited to and Solved by AI
Here I will discuss two problems solved by AI and how the attributes discussed above made them perfect problem cases for AI.
AlphaGo
AlphaGo is an AI system developed by Google DeepMind to play the game of Go. It famously beat the top professional Go player Lee Sedol.
The game of Go is played on a 19-by-19 board, and its huge space of possible moves provides a Massive Combinatorial Space that can't be conquered by brute-force search. AlphaGo's deep neural networks made it capable of assessing far more gameplay permutations than humans can. It had a Clear Objective Function, winning the game, and it was trained on Lots of Data: records of human games plus extensive self-play.
AlphaFold
AlphaFold is another AI system designed by Google DeepMind, built to predict a protein's folded structure from its amino acid sequence. Proteins fold into complex shapes drawn from an astronomically large space of possible conformations, which provides a Massive Combinatorial Space. Its Clear Objective Function was to minimize the error between predicted protein structures and experimentally determined ones. And it had Lots of Data: it was trained on the large corpus of known protein structures, along with the chemical principles guiding folding.
End Note
It is important to note that the characteristics listed above apply to problems solvable by current-generation AI. In other words, future AI systems will not necessarily be limited to problems with these characteristics.
I wonder when people will realize that, once freed from grinding through NP-hard problems, humans can truly unlock the creativity that makes human cognition so powerful.
We are slowly but surely closing those gaps. Only time will tell how long it takes before AGI is attained (assuming it is), but the areas where humans still outperform machines will continue falling by the wayside, one at a time.