AI is the new buzzword in today’s cybersecurity marketplace. However, there is a lot of hype from the vendor community and heightened expectations from customers about what AI can do for them. Lost in all this buzz are the deeper questions to ask of any AI-based technology, questions that help you judge:
- Does the technology add true incremental value?
- How much time and cost are needed to manage and maintain it?
- Does it put your security risks on a secular downward trend? Isn’t that the promise of AI?
Let’s play out a simple scenario: an AI-powered technology reports a 70% chance of malware. What exactly does an L1 analyst do with that information? Is it incrementally better than what traditional tools already give the analyst? Or does it add yet another mixed signal to the ones humans already struggle to process?
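To make the ambiguity concrete, here is a minimal sketch of the policy decision that a raw probability forces on the SOC. The thresholds and action names are illustrative assumptions, not recommendations from any vendor:

```python
# Hypothetical triage policy mapping a model's malware probability to an
# action. The cutoffs (0.9, 0.5) and action names are assumptions -- in
# practice they depend on false-positive costs and SOC capacity, which the
# probability alone tells you nothing about.
def triage(p_malware: float) -> str:
    if p_malware >= 0.9:
        return "auto-quarantine"
    if p_malware >= 0.5:
        return "analyst-review"
    return "log-only"

# The 70% alert from the scenario lands in the analyst's queue: the model
# has not removed human work, only re-ranked it.
print(triage(0.70))  # analyst-review
```

Whatever values a vendor suggests, the point is that someone on your team still has to own this mapping from score to action.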
Whatever the solution, here are AI-specific questions that will help you evaluate a technology, alongside any other questions you have about problem-solution fit.
- Statistical analysis vs. AI models: Ask whether the technology performs simple statistical calculations to detect outlier behaviors or builds machine learning or deep learning models. Many prior-generation behavioral analytics solutions analyzed log data in a log store such as Splunk and performed statistical regression to identify outliers or abnormal incidents. Simple statistical calculations provide no capability beyond what tools like Splunk offer natively, where well-designed queries would yield similar results.
- How much training data is needed before value is derived? AI models need training data before they can be deployed in your environment. Providing this data and tuning the model to perform in your environment can take considerable time and effort. To overcome this challenge, some technologies ship with pre-trained models, promising immediate value from the start. In this scenario, ask how much data the model was pre-trained on, and ask specifically where that data came from, so you can judge whether the corpus is representative of what you might see in your environment. In general, more data, and especially more relevant data, beats a better model. The nature of AI, particularly with newer techniques such as deep learning, is that the data largely dictates the results; the model itself has relatively less impact.
- Supervised vs. unsupervised learning: If the solution relies on supervised learning models, it requires labeled data sets to predict patterns and behaviors going forward. Labeling is one of the most cost-intensive aspects of using AI and machine learning. Some technologies auto-apply labels from existing data sources; here you need to take a hard look at the quality of those labels, because the quality of the labels dictates the quality of the results. Unsupervised learning, on the other hand, does not require labels: it builds a baseline of normal behavior and flags any abnormal behavior for further investigation.
- Are network effects applicable? Network effects in the security context mean that data or learning from many customer environments is applied to your environment, bringing more volume and variety to the data set. This yields better results in the long run by avoiding problems such as overfitting and by enabling predictions for situations that have yet to occur in your own data. Network effects are highly valuable in domains where learning transfers from one customer environment to another; ransomware detection is an excellent example. It also means that as a customer you have to contribute data to the network, so ask what kind of data is shared and in what form: does raw data need to be shared, or is some form of masked transfer learning in place? Network effects also provide ongoing value, because the learning of the network is automatically passed on to every customer.
- Time to achieve steady state: Most AI systems initially produce noisy results, because (a) not enough data is available at first, (b) improper or inaccurate labels from existing sources must be corrected over time, and (c) tuning model parameters to your environment requires experimentation and time.
- Human-in-the-loop requirement: Soon after implementing an AI-based security solution, any security insights you receive will require humans to investigate, validate, and perhaps act, for example by disabling a compromised user or cutting a compromised device’s access to applications, network, or data. Ask who is needed for this activity, i.e. what expertise levels are required, how much time it takes, and how easy it is for humans to provide that expert input.
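To illustrate the first question above, statistical analysis vs. AI models: much of what older behavioral analytics products did amounts to a z-score outlier check, which a well-designed SIEM query can replicate. A minimal sketch with made-up login counts:

```python
from statistics import mean, stdev

# Hypothetical daily login counts for one user; "today" is scored
# against the historical baseline.
history = [12, 15, 11, 14, 13, 12]
today = 95

mu, sigma = mean(history), stdev(history)
z = (today - mu) / sigma

# Flagging anything beyond 3 standard deviations is plain statistics,
# not machine learning -- a Splunk query can do the same thing.
print(abs(z) > 3)  # True
```

If a product's "AI" reduces to this kind of calculation, it adds little beyond what your existing log platform already offers.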
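The supervised vs. unsupervised distinction can be sketched on a single toy feature (say, megabytes transferred per session). The numbers and class names here are illustrative assumptions:

```python
from statistics import mean, stdev

# --- Supervised: needs labeled examples up front (the costly part) ---
labeled = [(5, "benign"), (7, "benign"), (6, "benign"),
           (120, "malicious"), (140, "malicious")]
centroid = {cls: mean(x for x, y in labeled if y == cls)
            for cls in ("benign", "malicious")}

def classify(x):
    # Nearest-centroid classifier: only as good as the labels it was given.
    return min(centroid, key=lambda c: abs(x - centroid[c]))

# --- Unsupervised: no labels, just a baseline of "normal" behavior ---
baseline = [5, 7, 6, 8, 5, 6]
mu, sigma = mean(baseline), stdev(baseline)

def flag(x, k=3):
    # Anything far from the baseline is surfaced for investigation.
    return abs(x - mu) > k * sigma

print(classify(130))  # malicious
print(flag(130))      # True
```

The supervised path names the threat but inherits every labeling error; the unsupervised path needs no labels but can only say "this is abnormal," leaving the interpretation to an analyst.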
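On the data-sharing question raised under network effects: one common way to participate in a shared network without exposing raw values is to transmit keyed hashes of indicators rather than the indicators themselves. This is a sketch only; the key handling and the choice of what to mask are assumptions, and real products vary widely in what they actually transmit:

```python
import hashlib
import hmac

# Assumption: the vendor distributes a network-wide key so that masked
# indicators from different customers can still be matched to each other.
# A per-customer key would be more private but would break cross-tenant
# correlation -- exactly the trade-off to ask the vendor about.
NETWORK_KEY = b"example-network-key"

def mask(indicator: str) -> str:
    # Keyed hash: consistent enough for matching, not reversible to raw data.
    return hmac.new(NETWORK_KEY, indicator.encode(), hashlib.sha256).hexdigest()

# Two customers masking the same indicator produce the same token, so the
# network can learn from shared sightings without seeing raw values.
print(mask("evil.exe") == mask("evil.exe"))  # True
```

Asking a vendor where their design sits on this spectrum, raw data, masked tokens, or model updates only, tells you a lot about the privacy cost of the network effect.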
These questions will help you better understand the true value provided by an AI-based security product.