Balancing innovation and control in 2025

Making predictions is always fraught with difficulty. Futurologists are prone to sprinkle their predictions with liberal doses of caveats! However, what I am certain about is that the perennial battle between innovation and control will continue.

This debate is not restricted to the field of cybersecurity. Throughout human history there have been people who experiment and innovate. Equally, there have been people who prefer to protect and control what they have, and who are comfortable continuing with established practices in well-trodden domains.

In cybersecurity, I see this debate continuing in three areas:

  • Artificial Intelligence (AI)
  • Government legislation and regulation
  • Quantum computing

Artificial Intelligence: Risks and opportunities

Shadow IT

For cybersecurity professionals, the fundamental question is always “What are we protecting?” Without a comprehensive answer to this question, any efforts by the cybersecurity staff are likely to be off target to some degree or other. This problem is compounded by the fact that the battle between hackers and cybersecurity staff is an asymmetrical one. The cybersecurity staff must protect their entire attack surface, whereas the hacker simply has to find one entry point.

As AI becomes more accessible, every organization will have people wanting to experiment with this powerful technology. Indeed, in most organizations it is imperative to do so, otherwise there is a major risk of being overtaken by the competition. Without strong governance, users will access whichever online generative AI tool is most accessible and easiest to use. These tools may be outside the control of the IT department, leading to a proliferation of shadow IT. This means cybersecurity staff will not know which IT and data assets need to be protected, as they will have no visibility of them.

I predict that organizations will continue to implement and expand mechanisms to mitigate this risk in 2025. These could include more stringent governance processes, more training for staff and the introduction of tooling.

Attackers

For attackers, the advent of AI has made their job easier. I predict that the frequency and sophistication of attacks will continue to grow in 2025.

Usually, attackers perform some sort of reconnaissance of the target organization. This can take many days, perhaps even weeks. With the aid of AI, however, this reconnaissance can be done much more quickly.

Phishing is often the preferred method of entry for many hackers, and its sophistication has increased. The days of looking out for spelling or grammar errors to spot phishing emails are fast disappearing! Phishing emails will become even more convincing as their appearance and content are fine-tuned to the receiver’s context and local language. Furthermore, we are already seeing many instances of attackers using AI to generate fake images, audio clips and even videos of people. This trend is likely to continue into 2025.

Defenders

The use of AI in cybersecurity defense is still at an early phase of maturity. Most of the progress, which will continue into 2025, is in three key areas: end-user assistance, incident management and security tooling.

End-users are increasingly using chatbots to get advice on cybersecurity matters. Large organizations often have many security policy documents that provide a lot of detail on what is or is not allowed. It is simply not feasible for each end-user to know every cybersecurity policy by heart. Thus, many organizations are using AI chatbots to make these policy documents more easily accessible for the end-users.
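At its core, such a policy chatbot is retrieval over policy text: find the snippet most relevant to the user's question and surface it. The sketch below illustrates the idea with hypothetical policy snippets and a deliberately simple keyword-overlap score; a real deployment would use document embeddings and a large language model rather than this toy matcher.

```python
# Minimal sketch of retrieval-backed policy lookup.
# The policy snippets and scoring method are illustrative assumptions,
# not a real product's behavior.

POLICY_SNIPPETS = {
    "passwords": "Passwords must be at least 14 characters and unique per system.",
    "usb_media": "Removable USB media may only be used on approved, scanned devices.",
    "ai_tools": "Generative AI tools must be approved by IT before company data is entered.",
}

def answer(question: str) -> str:
    """Return the policy snippet sharing the most words with the question."""
    q_words = set(question.lower().split())

    def overlap(key: str) -> int:
        return len(q_words & set(POLICY_SNIPPETS[key].lower().split()))

    return POLICY_SNIPPETS[max(POLICY_SNIPPETS, key=overlap)]
```

Even this crude matcher shows why the approach helps: the end-user asks a question in their own words and gets back the relevant policy text, instead of searching through long documents.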

Incident management can generate a lot of text. In ransomware or state-sponsored attacks, incidents may stretch over long periods with many twists and turns. AI’s ability to summarize these records helps both operational cybersecurity staff and management.

There is always an arms race between attackers and defenders in terms of tooling. Virtually all security tooling suppliers are incorporating AI into their products. I expect to see more announcements by these security tooling suppliers throughout 2025 on how AI is making their product better.

I should advise caution at this point. It is well known that AI systems can produce hallucinations. Although these will become less common in years to come, right now we need to be careful about how much AI automation we rely on. Another disturbing recent discovery is that AI systems can scheme and lie when given contradictory instructions. So, the use of AI in cybersecurity defense must be configured with care, using appropriate models and data.

Handling divergent government legislation and regulations

As cyber threat levels increase across the globe, governments are either considering cybersecurity legislation or already enforcing it. However, different countries take different approaches. So, an organization that operates across several countries must comply with multiple sets of legislation and regulation.

This is where difficulties arise.

Sometimes, it is a question of definition. For example, how are product cybersecurity requirements defined? Or what is the legal definition of a vulnerability? Another area of divergence is establishing when an organization should report cyber incidents. The requirements for this vary from country to country.

All this fragmentation of cybersecurity regulation means organizations have difficulty ensuring they are compliant in every country where they operate. And this has repercussions: costs go up, and those costs are passed along the value chain, making organizations less competitive. So, this ends up being a classic case of the innovation vs. control debate.

Will the situation change in 2025? The answer is probably not.

Governments in many parts of the globe may feel that introducing cybersecurity legislation could be a major lever to pull in the fight against state-sponsored attackers and criminals. In other parts of the world, the government philosophy may be to deregulate rather than regulate, to stimulate innovation. However, I do predict that there will be more dialogue across countries. Organizations like the Charter of Trust (of which Atos is a partner) will facilitate and push for such cooperation. I hope that this will lead to an optimization of regulations across the globe that balances innovation with control.

Preparing for Quantum Computing

Various countries and global organizations are trying to build a workable quantum computer.

The technology is still immature and mass-produced quantum computers are probably some years away. As of December 2022, over half of the top international quantum computing experts estimated there is at least a 50% chance that a quantum computer capable of breaking RSA-2048 encryption within 24 hours will be developed by 2037.

Of course, once quantum computers become a working reality, the biggest impact on cybersecurity will be the obsolescence of current encryption algorithms, particularly asymmetric ones. However, if working quantum computers are still some years away, why do we need to do anything in 2025?

The risks are twofold. Firstly, when quantum computers arrive, the change in computing power will be felt very quickly across the IT industry. Organizations that do not take mitigating actions would be wide open to an avalanche of cyberattacks in a short space of time; we could witness the wide-scale demise of organizations. The second risk is that attackers will continue to harvest data while we wait for workable quantum computers: breaking into IT systems, stealing large volumes of encrypted data, and storing it to feed into quantum computers once they are mature enough. It should be emphasized that these risks are highest where asymmetric encryption is used. Examples include web-based financial transaction services.

In line with this, I would recommend that all major organizations focus their attention on how to mitigate quantum computing risks. One practical step is to create an inventory of the encryption algorithms used across the organization’s asset base; this enables quantification of the new attack surface. Another mitigation is to explore whether post-quantum cryptographic algorithms can replace existing ones. This is a developing area, and it will be interesting to see how much of this algorithm replacement proves feasible in 2025.
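The inventory step can be sketched very simply: tag each asset with the algorithm it relies on, then separate quantum-vulnerable asymmetric algorithms (RSA, elliptic-curve, Diffie-Hellman, all breakable by Shor's algorithm on a large enough quantum computer) from symmetric ones such as AES, which are far less affected. The asset names and data below are hypothetical, for illustration only.

```python
# Sketch of a cryptographic inventory tally (hypothetical asset data).
# Asymmetric algorithms are the ones exposed to Shor's algorithm;
# symmetric ciphers like AES are comparatively resilient.

QUANTUM_VULNERABLE = {"RSA-2048", "ECDSA-P256", "DH-2048"}

assets = [
    {"name": "web-frontend", "algorithm": "RSA-2048"},
    {"name": "vpn-gateway", "algorithm": "ECDSA-P256"},
    {"name": "backup-store", "algorithm": "AES-256"},
]

def quantum_exposed(assets):
    """Return names of assets using quantum-vulnerable asymmetric crypto."""
    return [a["name"] for a in assets if a["algorithm"] in QUANTUM_VULNERABLE]
```

In practice the asset list would come from certificate scans and configuration management databases rather than a hard-coded table, but the output is the same: a concrete list of where post-quantum replacements are needed first.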

Striking the right balance in 2025

Whether it is AI, legislation or quantum computing, we shall certainly see the twin forces of innovation and control battle it out in 2025. However, the question is not which of the two will win, but how the world will optimize the balance between them.