Wiz Considers $500M Dazz Acquisition as AI Cybersecurity Booms
Cybersecurity giant Wiz is reportedly lining up a potential acquisition of Israeli startup Dazz. The deal would enhance the US firm’s AI credentials.
Dazz, which specializes in cloud security remediation, only completed a $50 million funding round back in July. That came at a valuation of $400 million.
“Dazz” searches are up 133% in the last 5 years.
According to CTech, Wiz is now actively exploring the acquisition of Dazz. The report speculates that, based on the valuation earlier in the year, any purchase price could be set at $500 million or higher.
The move comes as Dazz reports that it has grown its annual recurring revenue by 400% between 2023 and 2024, tripled its workforce, and expanded operations throughout the US and Europe.
Its technology, which uses AI to help find and fix critical issues within cloud infrastructures, claims to speed up remediation roughly ninefold (an 810% improvement in mean time to remediation), cutting risk windows from weeks to hours.
Dazz tracks down security issues within cloud infrastructures.
Both Wiz and Dazz were present alongside Amazon Web Services last July at an event organized by the Boston chapter of the Cloud Security Alliance.
VC firm Cyberstarts, whose ranks include a number of successful founders from the cybersecurity industry, has previously backed both businesses.
This potential deal is far from the only one happening in a booming AI cybersecurity space.
Branding itself “the data-first security company”, Normalyze is expected to become part of Proofpoint within the next month, according to Techzine.
Search volume for “Normalyze” is still relatively low and spiky, but it’s up 4200% in the last 2 years.
Normalyze uses AI to “classify valuable and sensitive data at scale”. After that, the platform assesses and prioritizes potential risks and vulnerabilities, before providing insights into possible remedial and preventive steps.
A deal would “close AI security gaps” for Proofpoint. Up to now, the company’s main area of operation has been the human element of data protection.
That’s a smart focus, with almost three-quarters of data breaches attributable to human error. But across the industry, increasingly sophisticated AI is helping to target the significant — and growing — risk of cybercrime.
The problems AI cybersecurity is trying to solve
Working to cut down on human error can drastically reduce the chances of costly data breaches. But the threat of malicious third parties certainly cannot be ignored.
According to the ITRC Annual Data Breach Report, there are as many as 11 victims of a malware attack per second. That’s more than 340 million victims per year.
And this year, North America has seen a 15% increase in the number of ransomware attacks.
(It’s little wonder that cybersecurity is an increasingly tough search keyword to target for companies within the industry.)
Rising AI cybercrime
AI can be part of the solution. But it is also a growing part of the problem.
Searches for “AI cyber attacks” have increased by 8400% in the last 5 years.
AI is transforming entire industries. Unfortunately, cybercrime is no exception.
Bad actors can reap all the familiar benefits of AI: automation, efficient data collection, and continual evolution and improvement of methodologies, to name a few.
Malicious GPTs can generate malware. AI can adapt ransomware files over time, reducing their detectability and improving their efficacy.
The technology can even help to target humans, the greatest vulnerability in most security networks. Generative AI can greatly enhance phishing attacks.
Use of AI in phishing attacks
Phishing is the most common kind of cyberattack, at least when grouped together with “pretexting”, a similar but more targeted attempt to extract details from a system user.
In 50% of cases, phishing targets user credentials. That often means going after passwords to gain entry.
You might not imagine AI having a particularly sizable effect in this relatively low-tech area of cybercrime. But the technology can help hackers to devise more believable personas in order to convince victims to part with sensitive information.
A study published earlier this year found that 60% of participants were convinced by AI-created phishing attacks. That was comparable to the success rate of messages devised by human experts.
One of the phishing emails generated by AI for the purposes of a recent study.
Moreover, follow-up research found that AI can automate the entire phishing process, meaning these broadly equivalent success rates can be achieved at 95% lower cost.
Voice phishing — or “vishing” — brings another layer of sophistication to phishing attacks.
“Vishing” searches are up 167% in the last 5 years.
Vishing is the process of impersonating a trusted person’s voice in order to access information or money. AI has made that task far easier.
There are any number of legitimate uses for voice cloning technology. Podcastle, for instance, offers it for digital touch-ups in podcasts.
“Podcastle” searches have risen steeply, no doubt helped by its AI voice cloning technology.
But this AI-powered technological development has been a gold mine for vishing schemes too. Phishing attacks have increased by 60% in the last year due to AI voice cloning.
There have been some high-profile examples. Last year, MGM Resorts was the victim of a cyberattack that ended up costing $100 million.
It all started with a voice phishing scam, where artificial intelligence was used to replicate an employee’s voice and secure system access.
Even more recently, a finance worker in Hong Kong was tricked into wiring $25 million to a scammer following a faked Zoom call with the CFO.
How AI cybersecurity is fighting back
Increasingly sophisticated threats require increasingly sophisticated cybersecurity. AI is being used in highly innovative ways in order to keep data safe.
The benefits of AI for cybersecurity experts are not too dissimilar to the benefits for cyber criminals: the ability to quickly analyze large amounts of data, automate repetitive processes, and spot vulnerabilities.
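To make that concrete, here is a deliberately tiny sketch of what “AI analyzes data and scores threats” looks like in practice: a toy phishing-email classifier built with scikit-learn. Real products train far richer models on millions of labeled messages; this only illustrates the general shape of the technique.

```python
# pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny toy dataset -- a real system would train on millions of labeled emails.
emails = [
    "Your account has been suspended, verify your password immediately",
    "Urgent: wire transfer needed, reply with banking details",
    "Lunch on Thursday? The usual place works for me",
    "Here are the slides from today's project review",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

# Turn raw text into TF-IDF features, then fit a simple linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

# Score a new, unseen message.
test = "Please verify your password to avoid account suspension"
prob = model.predict_proba([test])[0][1]
print(f"Phishing probability: {prob:.2f}")
```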
As a result, 61% of Chief Information Security Officers believe they are likely to use generative AI as part of their cybersecurity setup within the next 12 months. More than a third have already done so.
And according to IBM, the average saving in data breach costs for organizations that already use security AI and automation extensively is $2.22 million.
It’s little wonder that the AI cybersecurity market was valued at $22.1 billion last year. By 2033, that figure is forecast to reach $147.5 billion (a 20.8% CAGR).
AI cybersecurity tackles AI cybercrime directly
Some of the uses for AI in cybersecurity have arisen directly from a need to counter new AI threats. For example: AI voice detectors to combat vishing.
“AI voice detector” searches are up 99X+ in the last two years.
The early market leader is simply named AI Voice Detector. Users can upload audio files or download a browser extension to check for AI voices online (for instance, in a Zoom or Google Meet call).
Detection is itself powered by AI. The tool has already identified 30,000 AI voices and serves more than 20,000 clients.
AI Voice Detector returns probabilities of a voice being natural or AI-generated.
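For teams wanting to fold this kind of check into their own tooling, the workflow is usually just “upload a clip, get back a probability”. The sketch below assumes a hypothetical REST endpoint and response format (AI Voice Detector is primarily a web app and browser extension, and its real API, if it offers one, may look quite different).

```python
import requests  # pip install requests

# Hypothetical endpoint, auth scheme, and response fields -- illustrative only.
API_URL = "https://api.example-voice-detector.com/v1/analyze"
API_KEY = "YOUR_API_KEY"

def check_audio(path: str) -> dict:
    """Upload an audio clip and return the detector's verdict."""
    with open(path, "rb") as f:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"audio": f},
            timeout=30,
        )
    resp.raise_for_status()
    # Assumed shape: {"ai_probability": 0.97, "natural_probability": 0.03}
    return resp.json()

if __name__ == "__main__":
    verdict = check_audio("suspicious_call.wav")
    if verdict["ai_probability"] > 0.8:
        print("Likely AI-generated voice. Treat the call as suspect.")
    else:
        print("Probably a natural voice, but stay cautious.")
```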
Meanwhile, some of the cybersecurity responsibility for preventing voice phishing falls on the makers of the AI voice cloning technology. A process known as AI watermarking is rising to prominence.
ElevenLabs is one of the leading AI voice cloning providers. It has taken steps to ensure listeners can find out whether a clip originates from its own AI generator.
“ElevenLabs” searches are up 7800% in the last 2 years.
Its “speech classifier” tool analyzes the first minute of an uploaded clip and estimates the likelihood that it was created using ElevenLabs in the first place.
ElevenLabs can check for the involvement of its own AI in the creation of voice clips.
Away from vishing, Google recently created an invisible watermark, known as SynthID, capable of labeling text that has been generated by Gemini, its AI software.
Unlike the leading AI voice detection tools, this is not an after-the-fact guess about provenance. It is a true watermark: a signal embedded into the text as it is generated, which a detector can later check for directly.
Wider adoption of this technology by generative AI providers would be warmly received by educational institutions. But it would also be huge for cybersecurity, with users able to easily determine whether potentially suspicious emails were crafted using artificial intelligence.
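For the curious, the core idea behind generation-time text watermarking can be shown in a few lines. The toy below is not Google's actual SynthID scheme; it is a minimal sketch of the well-known “green list” approach, where a keyed hash nudges the generator toward a secret subset of tokens, and a detector later measures how often that subset appears.

```python
import hashlib
import random

VOCAB = ["the", "of", "and", "to", "in", "is", "it", "we", "on", "for",
         "this", "that", "email", "your", "account", "please", "verify",
         "security", "update", "now"]
KEY = b"watermark-demo-key"  # secret shared by generator and detector

def green_list(prev_token: str) -> set:
    """Deterministically pick a 'green' half of the vocabulary,
    keyed on the secret and the previous token."""
    greens = set()
    for tok in VOCAB:
        digest = hashlib.sha256(KEY + prev_token.encode() + tok.encode()).digest()
        if digest[0] % 2 == 0:
            greens.add(tok)
    return greens

def generate(n_tokens: int, bias: float = 0.9) -> list:
    """Stand-in for an LLM: sample tokens, but prefer the green list."""
    out = ["<s>"]
    for _ in range(n_tokens):
        greens = green_list(out[-1])
        pool = list(greens) if greens and random.random() < bias else VOCAB
        out.append(random.choice(pool))
    return out[1:]

def green_fraction(tokens: list) -> float:
    """Detector: fraction of tokens in the keyed green list.
    Roughly 0.5 for unwatermarked text, much higher for watermarked text."""
    hits, prev = 0, "<s>"
    for tok in tokens:
        if tok in green_list(prev):
            hits += 1
        prev = tok
    return hits / len(tokens)

if __name__ == "__main__":
    watermarked = generate(200)
    unmarked = [random.choice(VOCAB) for _ in range(200)]
    print(f"watermarked green fraction: {green_fraction(watermarked):.2f}")
    print(f"unmarked    green fraction: {green_fraction(unmarked):.2f}")
```

In a real system, the bias is applied to the model's token probabilities rather than a uniform choice, and detection uses a proper statistical test rather than a raw fraction.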
AI cybersecurity for the cloud
Cybersecurity is having to keep pace with more than just AI threats. There’s also been a mass migration to cloud services over the past few years.
By last year, 70% of organizations reported that more than half of their infrastructure had moved to the cloud. 65% operate a multi-cloud system.
There has been a corresponding surge in Cloud-Native Application Protection Platforms (CNAPPs) within the cybersecurity space.
“CNAPP” searches are up 99X+ in the last 5 years.
CNAPPs are all about building cybersecurity solutions specifically for the cloud, rather than playing catch-up by tacking on ad-hoc fixes and adjusting existing measures that may no longer be fit for purpose.
Dazz is not a CNAPP in its own right, although it is built specifically for the cloud, and is designed to work in conjunction with CNAPPs. And it shares the same overarching goal of consolidating cloud cybersecurity measures.
Prisma Cloud does brand itself as a CNAPP. And like Dazz, it has integrated AI into its cybersecurity solutions.
“Prisma Cloud” searches are up 378% in the last 5 years.
Prisma uses AI as a “force multiplier” when it comes to Attack Surface Management (ASM). By improving the speed, quality and reliability of data collection, AI can make ASM more efficient and effective.
Prisma integrates AI into its holistic cybersecurity platform.
The platform also works to counter cybersecurity risks associated with the legitimate use of AI in a business setting. Prisma secures vulnerabilities relating to potential data exposure or unsafe/unauthorized model usage.
AI takes aim at “zero-day” vulnerabilities
Cybersecurity is inherently on the defensive against cyber threats. AI can strengthen that defense.
But AI could have the potential to go even further. It could strike against “zero-day” vulnerabilities — weak points in systems that have not previously been exposed, and for which no known fixes or patches are readily available.
“Zero-day” vulnerabilities can be particularly hard for cybersecurity to mitigate.
This is a more proactive form of cybersecurity, and it’s one being pioneered by Google. Its Project Zero team has long been taking aim at zero-day threats, and has now joined forces with AI team DeepMind.
The result is Big Sleep. And the technology has already discovered its first real-world zero-day vulnerability: an “exploitable stack buffer underflow” (a bug where code writes to memory before the start of a stack buffer) in SQLite.
The open-source database is widely used. Google’s Big Sleep team reported the vulnerability to the developers in early October, and it was fixed on the same day.
The Big Sleep team believes this is the first time an AI agent has found a “previously unknown memory-safety issue in widely-used real-world software.”
This is still a nascent area of AI cybersecurity. But it is certainly an interesting one to watch.
Stay ahead of AI cybersecurity trends
AI is having a transformative effect everywhere. But few industries have been affected as profoundly as cybersecurity.
The threats posed by cyber criminals will never be the same again. They have gotten smarter, more efficient and harder to contain.
Yet at the same time, the tools in the cybersecurity arsenal have become far more sophisticated as well. AI can help to counter the arising novel threats, and to provide improved defense against the threats that already existed.
AI cybersecurity solutions are in high demand, as seen by the slew of big-money investments and acquisitions. Dazz could become the latest if Wiz moves ahead with a deal — but whatever happens, this is certainly a space to watch closely.