Top 5 AI and Machine Learning Trends of 2023
Artificial intelligence (AI) and machine learning (ML) have the potential to change the way we operate.
In fact, PwC estimates that AI's impact on the global economy could be north of $15 trillion by 2030.
There are very few technologies that could have this kind of effect on the world in the near future.
Read on if you want to understand the biggest trends in the AI and ML ecosystem.
1. Deep Learning Supports In-Depth Data Analysis
Probably the biggest revolution in the field of artificial intelligence has been Deep Learning.
Search interest in "Deep Learning" has grown by 2,933% over the last decade.
Deep Learning is a subset of machine learning. And it is basically a way for machines to mimic the human brain.
Deep Learning as a subset of the entire AI ecosystem.
Deep learning models do this by stacking layers of artificial neurons (loosely modeled on our brains) and using those layers to process large amounts of unstructured data.
This allows a machine to learn how to classify or analyze inputs without being programmed to handle specific tasks.
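To make the idea of stacked neuron layers concrete, here is a minimal sketch in plain Python. The weights and inputs are arbitrary illustrative values, not a trained model; real deep learning frameworks learn these numbers from data.

```python
import math

def dense(inputs, weights, biases, activation):
    """One layer of artificial 'neurons': weighted sums plus a nonlinearity."""
    return [activation(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

relu = lambda z: max(0.0, z)                 # hidden-layer nonlinearity
sigmoid = lambda z: 1 / (1 + math.exp(-z))   # squashes output into (0, 1)

# A tiny network: 3 inputs -> 2 hidden neurons -> 1 output "score".
x = [0.5, -1.0, 2.0]
hidden = dense(x,
               weights=[[0.2, 0.4, 0.1], [0.7, -0.3, 0.5]],
               biases=[0.0, 0.1],
               activation=relu)
output = dense(hidden, weights=[[1.0, -1.0]], biases=[0.0],
               activation=sigmoid)
print(round(output[0], 3))  # ≈ 0.148
```

Deep networks are simply many such layers chained together, with the weights adjusted automatically during training rather than written by hand.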
While the concept has been around for a while, the deep learning “revolution” really started in 2012.
A team competing in a Kaggle challenge used Deep Learning to detect the negative effects of chemicals in household items.
Since this breakthrough, ML usage has exploded.
IFI Claims Patent Services reports that the number of machine learning patents is growing at a 46.01% CAGR.
Number of ML patents filed by year.
The use cases of deep learning are enormous, stretching across all industries and ranging from predictive maintenance to product strategy.
Back in 2018, McKinsey found that deep learning could be used to improve outcomes in 69% of use cases.
But adoption is more difficult than it seems. In 2020, McKinsey still reported that only 16% of organizations had advanced deep learning programs past initial piloting stages.
This may be about to change.
Almost every industry in the world could benefit from deep learning. In fact, many modern products are starting to rely on it.
Take for instance, self-driving cars.
Search interest in “autonomous driving” has grown by 950% in 10 years.
A major problem facing the autonomous vehicle industry is unprecedented scenarios.
Many companies use 3D maps or past driving data to train their vehicles. When an unfamiliar situation occurs on the road, however, the vehicle doesn't know how to react.
But deep learning could allow autonomous cars to make connections and learn in ways that don’t involve past scenarios.
Some of the most advanced autonomous driving companies are already implementing deep learning into their products.
You can also gauge the future potential of deep learning by looking at who is working on it.
And you don’t have to look very far to see that many of the largest tech companies in the world are investing heavily in deep learning.
Take chatbots, for example. For years, chatbots have been malfunctioning, giving repetitive responses, and just not sounding human.
But now, Google claims it has created the best chatbot in the world.
Searches for "Google Meena" have grown by 3,700% since the chatbot's debut in early 2020.
And apparently, Google Meena communicates in a way that is impressively close to human standards.
Google Meena scored higher than every other chatbot on the Sensibleness and Specificity Average (SSA) test.
As we’ll discuss below, these same deep learning programs are also driving innovation in things like natural language processing (NLP) and healthcare.
For instance, OpenAI’s GPT-3 language model contains 175 billion parameters, more than the human brain's roughly 86 billion neurons.
2. Natural Language Processing Drives New Use Cases for AI
Few things in the AI/ML industry have more promising business uses than natural language processing (NLP).
Searches for "Natural Language Processing" have increased by 144% over the last half-decade.
Our world is wrapped in text. Analyzing, formatting, translating, and using documents and texts is essential to all types of business around the world.
And it’s not just words. NLP is being used, and will increasingly be used, to analyze data in ways that go far beyond the statistical methods we’ve relied on before.
So, what is NLP?
Well, it’s basically a way for computers to understand our language.
In the past, human language had to be translated into code before computers could work with it. But with NLP, machines can gain intelligence from our data as it sits in its natural state.
Humans created 2.5 quintillion bytes of data every day in 2020. And a lot of this is made up of human-readable text.
NLP is used by businesses all around the world to determine the sentiment of text, classify text, extract meaning and keywords from text, and analyze text.
Example of NLP sentiment analysis.
Example of NLP keyword extraction from a block of text.
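The two tasks above can be illustrated with a toy sketch. Real systems use learned models; the tiny hand-written lexicon and frequency-based "keywords" below are purely for demonstration.

```python
import re
from collections import Counter

# Toy word-score lexicon; real sentiment models learn these weights from data.
LEXICON = {"great": 1, "love": 1, "excellent": 1,
           "bad": -1, "slow": -1, "terrible": -1}
STOPWORDS = {"the", "a", "is", "was", "and", "but", "it", "i", "very"}

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def sentiment(text):
    """Sum word scores: a positive total suggests positive sentiment."""
    return sum(LEXICON.get(tok, 0) for tok in tokenize(text))

def keywords(text, n=3):
    """Most frequent non-stopword tokens stand in for extracted keywords."""
    counts = Counter(t for t in tokenize(text) if t not in STOPWORDS)
    return [word for word, _ in counts.most_common(n)]

review = "The camera is great and I love the screen, but the battery is terrible."
print(sentiment(review))  # 1 (+1 great, +1 love, -1 terrible)
print(keywords(review))
```

Production NLP replaces both hand-built pieces with models trained on large text corpora, which is what lets it handle negation, context, and meaning rather than just word counts.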
Back in 2019, Facebook developed an NLP program that came in first place in a language translation competition.
And tools like Grammarly have long been using NLP to improve grammar across a range of digital environments.
Searches for "Grammarly" have grown by 259% over the last 5 years.
In the legal and commercial space, dozens of companies have begun using NLP to analyze dense legal contracts, as well as generate new ones.
And now, NLP is even being used to create completely new documents and articles.
OpenAI’s GPT-3 shocked the world with its debut in 2020.
The NLP software can create texts that read almost exactly like human writers.
Searches for "GPT-3" are up by 4,200% over the last 2 years.
Today, OpenAI claims GPT-3 is producing roughly 4.5 billion words a day.
More than 300 apps now run on the software, and thousands of developers use it to generate content.
Just one year after its debut, GPT-3 was estimated to be producing at least twice as much content as all WordPress blogs on the internet.
This is especially impressive considering WordPress powers roughly 40% of the internet.
Advances in NLP are still in the early innings, and innovations like GPT-3 will only improve with time.
3. AI and ML Spark a Healthcare Revolution
Over the last year, in particular, AI and ML have been incredibly transformative in the healthcare industry.
AI advancements were a key component in dealing with the global pandemic. And with the rise of telemedicine and other health technology innovations, AI will only become more important.
2020 also exposed problems in the healthcare industry as it currently operates.
For instance, COVID-19 compounded problems associated with the nurse and healthcare worker shortage.
Certain AI initiatives, however, are looking to solve this.
According to Accenture, AI can allow nurses to handle 20% more pent-up patient demand over the next few years.
This is probably why 90% of hospitals have some kind of AI initiative planned for the near future, a massive increase from under half of all hospitals in 2019.
2020 was also a record-breaking year for healthcare investment. And the first quarter of 2021 showed massive growth as well.
In Q1 2021, $2.5 billion (more than double the first quarter of 2020) was invested in healthcare startups focused on AI.
And, according to CB Insights, AI and ML in healthcare were mentioned on more than 2,000 first-quarter earnings calls.
One of the most innovative companies merging machine learning and healthcare is Insitro.
The company uses machine learning to analyze large biological data sets.
Searches for "Insitro" have risen by 1,400% over the last 5 years.
Insitro raised $400 million in a Series C in 2021.
Strive Health is another AI-driven company that is making waves in the healthcare industry.
Searches for "Strive Health" have grown by 105% over the last 5 years.
The company focuses on preventative kidney care.
Kidney disease affects roughly 37 million people in the US. And only 10% of those with chronic kidney disease are aware that they have it.
Strive Health is trying to prevent chronic kidney disease by using AI technology that can predict kidney disease progression with more than 95% accuracy.
Strive recently raised $140 million from Alphabet’s venture arm, among other investors.
Perhaps one of the most impressive innovations has come not from a startup, but a large tech company that’s not even associated with the healthcare industry.
Google’s DeepMind venture created its AlphaFold program in 2018.
AlphaFold uses DeepMind’s deep learning technology to predict protein structures.
Searches for "AlphaFold" have grown by 8,600% over the last 5 years.
Put simply, amino acid chains in our bodies fold in different ways to form different protein structures.
There are roughly 200 million known proteins, each folding into its own structure, making it all but impossible to classify them through observation alone.
This is where AlphaFold and deep learning come in.
Google’s newest AlphaFold 2 has outperformed every competitor at predicting a protein’s structure from its amino acid sequence.
AlphaFold 2 has changed the way scientists predict protein structures.
Over time, AI initiatives like AlphaFold and Insitro hope to change how we treat and manage illnesses.
4. Augmented Intelligence Improves Human Decision-Making
Ever since the advent of AI, people have been worried about being replaced.
And while some still disagree, many commentators believe that AI will mostly augment human intelligence in the future.
This is what is known as augmented intelligence.
Search interest in “augmented intelligence” has risen by 3,200% over the last 10 years.
And Gartner predicted that AI augmentation would add more than $2 trillion to the US economy in 2021.
How augmented intelligence will create value in the future.
The demand for human augmentation of AI is growing as well.
In 2020, Amazon Web Services released what it calls Amazon Augmented AI (A2I), a service that lets developers route their models’ predictions to human reviewers.
As Amazon explains, sometimes you need a human to review certain outcomes and tasks to catch mistakes or problems that a machine just can’t.
Augmented AI’s role in the AI value chain.
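The core pattern behind human review services like A2I can be sketched in a few lines: predictions the model is confident about flow through automatically, while uncertain ones are escalated to a person. The threshold and task labels below are illustrative assumptions, not part of any real SDK.

```python
# Human-in-the-loop routing sketch: confident predictions pass through,
# low-confidence ones go to a human reviewer who can catch mistakes
# the machine can't.
CONFIDENCE_THRESHOLD = 0.90  # illustrative cutoff, tuned per use case

def route(prediction, confidence):
    """Return where a prediction should go: automated or human review."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return ("auto", prediction)
    return ("human_review", prediction)

# Hypothetical document-classification outputs with model confidences.
results = [route(p, c) for p, c in [("invoice", 0.98), ("receipt", 0.62)]]
print(results)  # [('auto', 'invoice'), ('human_review', 'receipt')]
```

The human's corrections can then be fed back as training data, so the model gradually needs less review, which is the "augmentation" loop in practice.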
5. TinyML Attempts to Integrate ML and IoT
Over the last several years, the compute and memory demands of ML models have grown too large for many local devices.
But the edge devices that collect data have limited computing power. Because of this, it has become necessary to develop ML algorithms that can operate with very little local memory or compute.
TinyML hopes to solve this.
Searches for "TinyML" have grown by 2,300% over the last 5 years.
Basically, TinyML defines any machine learning algorithm that can operate on some kind of embedded device or edge device in an IoT system.
Traditionally, devices had to collect large amounts of (mostly useless) data and send it all to a cloud provider to be analyzed by ML algorithms.
TinyML could change this by solving two problems at once.
First, it allows IoT devices to analyze data using limited energy and computing power. And second, it allows these devices to only collect useful data.
You can see this in everyday devices already. For instance, most smartphones and smart speakers listen for “wake” words.
By spotting these keywords (or the device’s name) locally, the device collects and transmits data only when it actually needs to, i.e., useful data.
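This wake-word gating pattern can be sketched simply. The detector below is a stand-in for a real TinyML keyword-spotting model (which would be a quantized neural net running on raw audio in a few kilobytes of RAM); the wake words and function names are illustrative assumptions.

```python
# TinyML pattern sketch: a lightweight on-device check gates what gets
# sent to the cloud, solving both problems at once (local analysis,
# and only useful data leaves the device).
WAKE_WORDS = {"hey device", "ok device"}  # hypothetical keywords

def on_device_detector(transcript: str) -> bool:
    """Stand-in for a tiny keyword-spotting model running locally."""
    return any(w in transcript.lower() for w in WAKE_WORDS)

def handle_audio(transcript, uploaded):
    # Problem 1: the check runs locally, using minimal compute.
    if on_device_detector(transcript):
        # Problem 2: only useful data is collected and transmitted.
        uploaded.append(transcript)

uploaded = []
for clip in ["background chatter", "hey device, play music", "more chatter"]:
    handle_audio(clip, uploaded)
print(uploaded)  # ['hey device, play music']
```

Everything that fails the local check is discarded on-device, which is exactly how wake words keep bandwidth, energy, and cloud costs down.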
Any way you look at it, the edge computing market is going to be massive (predictions range from around $40-60 billion by the late 2020s).
And TinyML will likely power much of that market.
A typical microcontroller (a small computer that powers local devices like printers and factory sensors) costs around 60 cents per unit.
If more and more of these cheap devices are able to power machine learning models at the edge, then a lot of data can be collected and analyzed for very cheap.
Fortunately for the IoT economy, ABI Research predicts that there will be roughly 2.5 billion devices shipped by 2030 that will have TinyML capabilities.
And while the algorithms are still typically trained on larger servers, many experts predict that within five years models will be trained on edge devices as well.
Those are the top AI and ML trends to look for over the next few years.
Artificial intelligence and machine learning will undoubtedly change how we operate in the world.
And many of those changes are related to new forms of data collection and analysis.
As more novel data is collected (largely via IoT), new AI and ML initiatives will change how we utilize that data.