African organisations face an average of more than 3,000 cyberattacks per week each, the highest volume globally, as the rapid deployment of artificial intelligence (AI) across business operations outpaces the security measures needed to protect it, according to new research from Check Point Software Technologies.
The findings, drawn from the firm’s AI Threat Landscape Report covering January and February 2026, come as enterprises across the continent accelerate their adoption of generative and agentic AI, often without adequate visibility, governance, or protection in place.
The Agentic Era and Machine-Speed Threats
Check Point researchers describe the current moment as the beginning of what they call the Agentic Era, a shift from AI as a productivity aid to AI as an autonomous operational system capable of executing tasks across enterprise environments without human instruction at every step.
The implications for cybersecurity are significant. In one documented case, a single developer used an AI-powered integrated development environment to author a sophisticated Linux-based malware framework featuring modular command-and-control architecture, rootkits, cloud and container enumeration tools, and more than 30 post-exploitation plugins. Check Point Research (CPR) initially assessed the framework as the likely product of a coordinated, multi-person development effort conducted over months; only operational security failures by the developer revealed that it had been built by one person.
The case illustrates a core concern from the report: AI is compressing the time and expertise required to build sophisticated cyberattacks, enabling lone actors to operate at a scale previously associated only with well-resourced criminal groups or state actors.
Shadow AI Creates New Blind Spots
Analysis of generative AI activity across enterprise networks in January and February 2026 found that one in every 31 prompts (approximately 3.2 percent) posed a high risk of sensitive data leakage, including the potential sharing of confidential business information, regulated data, source code, or other sensitive corporate content with external AI services. High-risk prompt activity affected 90 percent of organisations that regularly use generative AI tools.
Employees use 10 or more AI tools on average, creating what the report describes as Shadow AI environments: AI deployments that are invisible to traditional security controls and increasingly attractive to attackers seeking to exploit unmonitored access points.
Ian van Rensburg, Head of Security Engineering for Africa at Check Point Software Technologies, said the speed asymmetry between attackers and defenders is widening. “Attackers are operating at machine speed, while many organisations are still defending at human speed,” he said. “AI must be secured as a system, not as a tool. That means protecting models, data, prompts, application programming interfaces, and autonomous agents, not just the infrastructure around them.”
Governance Gap Widens
With private-sector AI adoption outpacing national AI strategies in key African markets, a governance gap has emerged that requires urgent attention. The convergence of European Union data laws and African data protection frameworks has also made cyber resilience essential to trade, as African exporters increasingly need to demonstrate compliance to maintain market access.
Hendrik de Bruin, Head of Security Consulting for SADC at Check Point, stressed the institutional stakes. “Without clear risk classification, visibility, and accountability, AI systems can quickly become a blind spot rather than a competitive advantage,” he said. “AI adoption at scale requires trust.”
Africa also bears a critical share of the global cybersecurity talent shortage, with more than 200,000 unfilled roles across the continent. As a result, cyber sovereignty increasingly depends on building local expertise rather than importing it.
Check Point is calling for security-by-design and risk-based governance to be embedded into national AI strategies from the outset, warning that organisations that treat security as an afterthought will struggle to realise AI’s full economic value.