AI innovation is outpacing security, and the industry is not ready for this yet – David Maidment, Arm.
The accelerating pace of Artificial Intelligence (AI) innovation is leaving security measures in its dust, creating an alarming vulnerability across the technology landscape, according to the latest PSA Certified 2024 Security Report. While AI offers immense potential, its rapid proliferation, especially at the edge, significantly expands the attack surface for cyber threats. A concerning 71% of respondents flagged AI-driven cyber threats as a major worry, highlighting the urgent need for security measures that keep pace with AI's advancements.
Last year’s report revealed a growing emphasis on security, with businesses increasing their spending in this area. However, the advent of new regulations and the expanding deployment of AI models at the edge have intensified the need for robust security practices. David Maidment, responsible for the secure devices ecosystem at Arm, warned that “AI innovation is outpacing security, and the industry is not ready for this yet.”
A survey of 1,260 global technology decision-makers revealed that security has become a top priority for 73% of respondents over the past year, with 69% specifically citing AI advancements as the reason. Despite these concerns, 67% of respondents believe that the benefits of AI still outweigh the risks. A gap nevertheless remains in AI-security integration, hindering organisations from fully realising AI's potential.
The report also highlights the growing trend of edge computing, driven by security concerns. A significant 85% of respondents believe that these concerns will push more AI use cases to the edge. Processing, analysing, and storing data at the edge offers improved efficiency, stronger security, and lower latency, but it also requires securing edge-based devices and their connections to cloud-based services.
A major challenge lies in the fact that organisations are not adopting a security-by-design approach, missing out on the benefits of embedding security from the outset. This issue is exacerbated by the fragmented nature of the supply chain, with multiple companies involved in device creation. Erik Wood, senior director for IoT secure MCU products at Infineon, emphasised that protecting valuable machine learning models hosted locally on devices demands new device security concepts to address a widened attack surface.
The report concludes on a positive note, with 70% of respondents believing that the electronics industry is developing trusted hardware suitable for the AI era. However, it underscores the critical need for the entire value chain to take collective responsibility and accelerate security investment and best practices to maintain consumer trust in AI-driven services. The rapid proliferation of AI cannot happen in isolation; it must be matched with trust, which can only be achieved by investing in robust security measures.