Artificial Intelligence (AI) startups are among the most susceptible to hype and magical thinking. As a result, the reality of what modern AI can actually achieve gets obscured.
To be fair, many of the things that we now consider routine—like asking Alexa or Siri to tell us the weather or being auto-tagged on Facebook and iPhone photos—could have been considered “magical” just a few decades ago.
Advances in data (especially from the web), algorithms (like convolutional neural networks), and representations (advanced and automated feature engineering) have radically improved our ability to do what was previously impossible with AI. However, there are still fundamental limits to what our foreseeable technology can do.
What is modern AI?
Modern AI/ML is statistics. This is sometimes controversial.
Fancy machine learning techniques are much more complicated, but they are often in the same family as the linear regression students learn in high school. This isn't universally true: unsupervised techniques, generative algorithms, and the like are reasonably different, but even those are still statistical in nature.
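To make the "it's statistics" point concrete, here is a minimal sketch of the math underneath: ordinary least squares, the same fitting procedure as high-school linear regression, on toy data with made-up numbers. The specific dataset and coefficients are illustrative assumptions, not anything from a real system.

```python
import numpy as np

# Toy data: y depends (noisily) on x. The true slope (3.0) and
# intercept (2.0) are arbitrary numbers chosen for illustration.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 3.0 * x + 2.0 + rng.normal(0, 0.5, size=x.shape)

# Design matrix with an intercept column, then the least-squares
# solution that minimizes ||X @ beta - y||^2.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, slope = beta  # recovers roughly 2.0 and 3.0
```

Much of supervised ML is, at heart, a more elaborate version of this: choose a model family, then fit parameters to minimize error on data.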
What does this mean? It means that it isn’t magic, and it’s certainly not artificial general intelligence (AGI) from science fiction.
You can’t put in random, junk data (or no data) and expect the AI/ML to do much. You also shouldn’t expect robot workers to come for everyone’s jobs all of a sudden.
Why don’t AIs do more things?
AI-driven robotics have no instincts. Humans have natural dexterity and a built-in understanding of how to move around. In fact, a human child has more advanced and flexible manual control than robots despite the billions of dollars in programming and R&D sunk into them.
AI-controlled robots in manual labor jobs are driven more by desperation over secular labor shortages than by a natural fit between tool and task. As Clayton Wood, CEO of Picnic Robotics (maker of a pizza-making robot), has said, the company doesn't really take jobs; it takes job openings.
The issue with AI programmers (a goal that has left graveyards full of failed startups) is that even somewhat “rote” programming still requires creativity and ineffable leaps of intuition to do well. There have not been significant advances here, even though natural language models like GPT-3 can now produce passable approximations of a coder's output. It's an area of “knowledge work” that doesn't play to AI's strengths.
So where is AI strong?
AI inherits the strengths of the silicon that it runs on. Computers are fast (in calculations) and have vast memory/recall that can be quickly retrieved. You would expect AI to excel in fields that take advantage of these strengths.
One such field is finance and investing, where certain tasks come down to exactly that. If you're trading options and trying to exploit esoteric arbitrages, being able to calculate quickly, and to know which calculations to use, is a distinct advantage. We see a lot of algorithmic trading in areas like this. However, “fundamentals” investing and other styles still require some of that intuition, even if analysts do crunch companies' numbers and financials.
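As a toy sketch of the kind of fast, mechanical check involved (hypothetical quotes, frictionless-market assumptions, no transaction costs), consider European put-call parity, C − P = S − K·e^(−rT). A machine can scan for violations of this identity across thousands of quotes far faster than any human:

```python
import math

def parity_gap(call, put, spot, strike, rate, t_years):
    """Deviation from European put-call parity: C - P - (S - K*e^{-rT}).

    A gap meaningfully larger than transaction costs signals a
    textbook arbitrage opportunity.
    """
    return (call - put) - (spot - strike * math.exp(-rate * t_years))

# Hypothetical numbers for illustration only.
spot, strike, rate, t = 100.0, 100.0, 0.05, 1.0

# A put priced consistently with a 7.0 call leaves no gap...
fair_put = 7.0 - (spot - strike * math.exp(-rate * t))
fair_gap = parity_gap(7.0, fair_put, spot, strike, rate, t)

# ...while an underpriced put leaves a positive gap to trade against.
mispriced_gap = parity_gap(7.0, 2.0, spot, strike, rate, t)
```

This is the sweet spot the section describes: a well-defined calculation, executed at machine speed and scale, with no intuition required.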
In healthcare, the ideal GP is actually one that sees all patients (and so can spot trends), remembers all possible ailments and treatments, and instantly incorporates every new finding and study. This sounds a lot more like an AI than any realistic human doctor. However, much of the “data” in healthcare practice is still hard to capture. Some of what doctors use to diagnose is never entered into electronic medical records (EMRs) or recorded anywhere else (doctors might read body language or pick up subtle cues from patients).
Thus, we will likely see AI work side-by-side with humans in all of these areas, augmenting human capabilities like other technical tools. As we get more advanced in data collection and sensing—for example, if in healthcare we have easy ambient sensing, self-data collection/entry (like in telemedicine), and just a model where more information is digitized—we could very well see more “AI doctors.” We could expect that some of them would exceed human capabilities and do the job better.
There are certainly things to be careful of. AI algorithms have a well-documented, nasty tendency to internalize human biases; after all, biases exhibited by humans are often reflected in the data. If we suddenly come to rely heavily on AI, we will make these biases and injustices explicit, “scale them up,” and unintentionally universalize them across society. Still, there is great promise in what AI can do, and many opportunities to improve human quality of life by leaning on its strengths.
We shouldn’t expect magic, or androids like Data from Star Trek walking around, any time soon. Ideally, startups should focus on the areas where AI has an advantage, and help move the world forward in ways that are actually possible rather than promised science fiction.