The pandemic has accelerated the use of technology as businesses find new ways to remain resilient, but it has also inevitably led to a rise in cyber-attacks – a trend that is expected to intensify with the advent of more emerging technologies such as 5G and the Internet of Things (IoT).
What this means is a greater need for cyber security professionals to counter the attacks, yet the reality is that they are in short supply globally. A key solution to this manpower gap is Artificial Intelligence (AI), which has the potential to take on some of the work of cyber professionals.
It is especially useful for laborious tasks such as curating intelligence from threat reports, blogs and news articles. Such AI-powered technologies can also sift through massive volumes of data and surface insights more quickly.
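To make the curation task concrete, here is a toy, rule-based sketch of pulling indicator-of-compromise (IOC) candidates out of unstructured report text. Real cyber AI systems use trained NLP models rather than hand-written patterns; the function name and regexes below are illustrative assumptions, not any production tool.

```python
import re

# Toy patterns for two common indicator types. Defanged domains
# (e.g. evil[.]com) are the convention in published threat reports.
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
DOMAIN = re.compile(r"\b[a-z0-9-]+\[\.\][a-z]{2,}\b")

def extract_iocs(report: str) -> dict:
    """Collect unique, sorted indicator candidates from free text."""
    return {
        "ips": sorted(set(IPV4.findall(report))),
        "domains": sorted(set(DOMAIN.findall(report))),
    }

report = "The campaign used evil[.]com and beaconed to 203.0.113.7 and 203.0.113.9."
print(extract_iocs(report))
# {'ips': ['203.0.113.7', '203.0.113.9'], 'domains': ['evil[.]com']}
```

Even this crude version shows why the task suits automation: the same extraction must run across thousands of reports, far beyond what an analyst can read manually.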
While it is clear that a partnership between AI and cyber security experts offers a winning solution to thwart cyber threats, many organisations face challenges when applying AI to cybersecurity.
Roadblocks for cyber AI implementation
The concept of AI-enabled cybersecurity is not new, but there are four main issues that prevent a wider adoption: low accuracy, a lack of trust, few purposeful use cases and poor readiness of an AI infrastructure.
Due to the sensitive nature of cyber security, operational data is difficult to come by. As a result, many cyber AI models are developed on only a small subset of data, preventing organisations from achieving comprehensive cyber protection.
In other situations where cyber AI is deployed, cyber analysts usually have no idea what AI algorithms were used. Nor do they have control over them, or the ability to improve the AI model on their own. Not surprisingly, cyber analysts often deem such AI untrustworthy for operations.
The lack of purposeful use cases stems from a disconnect between cyber experts and AI practitioners, who have vastly different skill sets: cyber experts may not know how to wield AI in their day-to-day operations, while AI practitioners lack the domain expertise to produce good cybersecurity use cases. This disconnect diminishes the opportunities for cyber AI.
More importantly, finding a suitable AI infrastructure for security is not easy. Most AI infrastructures today are cloud-based, which clashes with cybersecurity needs that typically call for an on-premise solution.
Building a platform for cyber AI
So how do we address these limitations? At ST Engineering, we drive and operationalise cyber AI across three areas: infrastructure, data science modelling and managing AI operations.
We built an on-premise platform that ingests various data sources and supports the full workflow, from AI modelling to deployment for operations. The platform allows us to easily create AI algorithms in security environments and operationalise them for real-world operations.
It is also designed to work fully in line with our security operations centre, complementing rather than replacing existing security appliances. Utilising data pipelines specially crafted for each use case, we automate the sifting of data and the extraction of relevant information for AI prediction.
We understand that AI can make mistakes. The question is how to create an ecosystem that allows AI to learn from these mistakes and perform better over time. We answer this by designing feedback loops that let cyber analysts correct the AI when it makes a wrong prediction.
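A feedback loop of this kind can be sketched in a few lines. The class and method names below are illustrative assumptions: the essential idea is that every prediction is logged, analysts can overwrite wrong labels, and corrected labels flow back as training data.

```python
from dataclasses import dataclass, field

@dataclass
class Prediction:
    sample_id: str
    features: dict
    label: str              # the model's predicted label
    corrected: str = None   # analyst's correction, if any

@dataclass
class FeedbackLoop:
    """Collects analyst corrections so mispredictions become new training data."""
    log: list = field(default_factory=list)

    def record(self, pred: Prediction):
        self.log.append(pred)

    def correct(self, sample_id: str, true_label: str):
        # Analyst overrides the model's verdict for one logged sample.
        for p in self.log:
            if p.sample_id == sample_id:
                p.corrected = true_label

    def training_examples(self):
        # Corrections take precedence over the model's own predictions.
        return [(p.features, p.corrected or p.label) for p in self.log]

loop = FeedbackLoop()
loop.record(Prediction("evt-1", {"domain_len": 34}, label="benign"))
loop.correct("evt-1", "malicious")   # analyst flags a missed detection
print(loop.training_examples())      # [({'domain_len': 34}, 'malicious')]
```

The resulting examples can then feed the next retraining cycle, so each analyst correction makes the model a little harder to fool.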
Our platform also has a feature called “algorithms as a service”: specialised algorithms are built only once but reused many times. This addresses the prevailing need for data privacy, which does not allow an AI model trained on one organisation’s data to be shared. The feature enables partners, customers or cyber analysts with the same use case to easily reuse the algorithm with their own data.
Here’s how it works: When an algorithm such as a malicious domain name detector is developed, it will be deployed onto our cybersecurity platform. When a partner wishes to use this algorithm, all that needs to be done is to prepare their data, upload it onto the platform and trigger the automated modelling process.
Throughout this process, the analyst can evaluate the AI model’s training and refine the model as required before deploying it. With such capabilities, our customised algorithms can scale up to various users while respecting data privacy, and AI algorithms developed in-house can be translated easily into operational environments.
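The flow described above can be sketched as a registry that stores algorithm recipes rather than trained models. Everything here is a minimal assumption, not ST Engineering's actual API: the point is that each partner retrains the shared recipe on their own uploaded data, so no trained model ever crosses an organisational boundary.

```python
class AlgorithmRegistry:
    """Shares model-building recipes, never trained models."""

    def __init__(self):
        self._blueprints = {}

    def register(self, name, build_model):
        # Store a factory that creates a fresh, untrained model.
        self._blueprints[name] = build_model

    def fit_for_tenant(self, name, X, y):
        # Each tenant gets a model trained only on its own data.
        model = self._blueprints[name]()
        model.fit(X, y)
        return model

# A trivial majority-class "model" stands in for a real detector here.
class MajorityClassifier:
    def fit(self, X, y):
        self.label = max(set(y), key=y.count)
    def predict(self, X):
        return [self.label] * len(X)

registry = AlgorithmRegistry()
registry.register("malicious-domain-detector", MajorityClassifier)

tenant_model = registry.fit_for_tenant(
    "malicious-domain-detector",
    X=[[12], [48], [51]],
    y=["benign", "malicious", "malicious"],
)
print(tenant_model.predict([[30]]))  # ['malicious']
```

Separating the recipe from the trained weights is what lets one algorithm serve many users while each organisation's data stays on its own side of the platform.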
Effective AI-enabled cyber defence is not built on data science alone. Cyber tradecraft must be encapsulated with AI to build better solutions for our cyber analysts.
To achieve that, we collaborate closely with our cybersecurity experts to identify essential and relevant use cases for their daily operations. These use cases can be formed across a broad spectrum from cyber threat detection to automated report analysis.
What is critical for a successful collaboration is the sharing of knowledge – cyber analysts discuss their pain points and needs while AI practitioners talk about how AI can tackle a particular problem.
Through greater collaboration, AI practitioners collect high-quality data and uncover new patterns that allow us to design customised AI algorithms to solve the problem. With this working relationship, our cyber experts not only understand the relevance of AI but also acquire the ability to tweak the models.
In a fast-paced cyber environment, changes to the data are common. After deployment, we continuously monitor the AI model for performance degradation and data drift. When drift occurs, we refine the model: we retrain, review and redeploy it.
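A minimal sketch of such drift monitoring follows. The metric here, how far a live window's mean has shifted in baseline standard deviations, is a deliberately crude stand-in for production drift measures such as the population stability index or Kolmogorov-Smirnov tests; the threshold is an illustrative assumption.

```python
import statistics

def drift_score(baseline, current):
    """Shift of the current window's mean, in baseline standard deviations."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline) or 1.0  # guard against zero variance
    return abs(statistics.mean(current) - mu) / sigma

# Feature values seen during training vs. in live traffic.
baseline = [0.9, 1.1, 1.0, 0.95, 1.05]
live = [2.8, 3.1, 2.9, 3.0, 3.2]

THRESHOLD = 3.0  # alert when the mean drifts by > 3 baseline std devs
if drift_score(baseline, live) > THRESHOLD:
    print("drift detected: flag model for retrain, review and redeploy")
```

Running a check like this per feature on a schedule is what turns "monitor for drift" from a slogan into an automated trigger for the retrain-review-redeploy cycle.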
AI is an ever-evolving field. To keep innovating, we work closely with Institutes of Higher Learning, such as the Singapore University of Technology and Design, to tap into their cutting-edge R&D and keep pace with the latest AI algorithms.
With proper infrastructure, we can access operational data and better understand the data to create purposeful use cases. The infrastructure also allows for rapid testing of the algorithms until they are suitable for deployment.
With these multiplier effects, I am confident that we can better harness the power of AI in cybersecurity.