Behavioural analytics and artificial intelligence demand a relook at identity

Mark Thomas: Group CTO Cybersecurity, Dimension Data

Tim “TK” Keanini: Distinguished Engineer, Security Business Group, Cisco

The resurgence of interest in artificial intelligence (AI), and the acceleration of its capabilities, are providing security professionals with an expanded toolbox with which to ply their trade. Among those tools is machine learning, a subset of AI, applied to the field of behavioural analytics, which identifies patterns in the way that people and objects interact on the network. Those patterns can play a valuable role in bolstering identity management and threat detection.

Machine learning underpins behavioural analytics because it can constantly monitor and evaluate millions of interactions, establishing a baseline of ‘normal’ behaviour and associating those baselines with individual actors. It can then learn to seek out and identify unusual deviations and potentially suspicious activity that might signal malicious intent. These are among the findings of Dimension Data’s Cybersecurity 2018 IT Trends.
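
To make the baselining idea concrete, here is a minimal sketch that assumes nothing more than a per-user count of hourly events; the data, the z-score test and the threshold are illustrative stand-ins for what a production behavioural analytics engine would do at far greater scale.

```python
import numpy as np

# Hypothetical hourly event counts for one user over 30 days: the
# 'normal' baseline against which new activity is judged.
rng = np.random.default_rng(seed=42)
baseline = rng.poisson(lam=3.0, size=30 * 24)

mean, std = baseline.mean(), baseline.std()

def is_anomalous(observed_count: float, threshold: float = 3.0) -> bool:
    """Flag activity more than `threshold` standard deviations away
    from this user's established baseline."""
    z_score = (observed_count - mean) / std
    return abs(z_score) > threshold

print(is_anomalous(4))    # a typical hour -> False
print(is_anomalous(40))   # a sudden burst of activity -> True
```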

An expanded challenge

While identity management has always presented a challenge, it has largely revolved around managing the identity of people: employees, visitors, partners, suppliers and service providers. While user identity itself is constantly evolving, there is now a new set of problems relating to the identity of objects in the cloud. Those objects could be anything: databases, applications, widgets, IP addresses, workloads, clusters.

The nature of cloud computing, and the availability of multiple service types, means many of these objects are ephemeral. They could exist for a matter of seconds.

They could be legitimate, or they could be malicious.

Added to this Sisyphean identity management challenge is the prevailing business reality: people work around the clock, as do automated systems.

Conceptually, behavioural analytics offers one answer to this challenge. With this technique, the behaviour of an object (or person) is identified and recognised by the system. For human users, a new authentication factor joins the familiar three: a password (something you know), a one-time PIN (something you have), fingerprints (something you are), and now the way in which you interact with the systems and data you access to do your job (something you do) – measured continuously rather than at a point in time.

For applications and other objects, the same applies. The way in which an application behaves provides clues to its intentions and legitimacy. Behaviour becomes an additional layer, one which continuously monitors how users and objects access data, applications and other objects, where they connect from, and how and what they do. The number of mouse clicks, how the mouse moves, and how the keys on a keyboard are pressed (how long keys are held down is known as ‘dwell time’; the time between key presses is ‘flight time’) all leave tiny identifying clues.
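
As an illustration of how those keystroke signals are derived, the sketch below computes dwell and flight times from a list of hypothetical key press/release timestamps; the event format and values are invented for the example.

```python
# Hypothetical key events: (key, press_ms, release_ms).
events = [
    ("p", 0, 95), ("a", 160, 240), ("s", 310, 400), ("s", 470, 545),
]

# Dwell time: how long each key is held down.
dwell_times = [release - press for _, press, release in events]

# Flight time: gap between releasing one key and pressing the next.
flight_times = [events[i + 1][1] - events[i][2]
                for i in range(len(events) - 1)]

print("dwell times (ms): ", dwell_times)   # [95, 80, 90, 75]
print("flight times (ms):", flight_times)  # [65, 70, 70]
```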

(Note: advanced malware is built to circumvent these techniques. It tries to work out whether it’s in a sandbox or a production environment, ‘keeping its cool’ if it decides it’s contained.)

Choose the right tools

Implementing behavioural analytics doesn’t necessarily require reaching into the AI or machine learning toolbox. Instead, it starts with the organisation ensuring it can express its business logic. In other words, it begins with a conversation between technologists and business users – and that conversation must uncover a business problem. From that flow business rules and logic.

In some instances, straightforward statistics deliver results in behavioural analytics: if an object looks like a printer but behaves like a developer, it’s cause for alarm. For the security analyst, that anomaly is the needle in the haystack, as the sketch below illustrates.
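
Here is a hedged sketch of that statistical ‘printer vs developer’ check, assuming a hand-maintained profile of the ports a printer class is expected to use; the port list and the rule are illustrative, not a product feature.

```python
# Hypothetical profile: ports a printer is normally expected to use
# (raw printing, IPP, LPD, DNS).
PRINTER_PORTS = {9100, 631, 515, 53}

def behaves_like_printer(observed_ports: set) -> bool:
    """A simple device-class check: all observed traffic should stay
    on printer-typical ports."""
    return observed_ports <= PRINTER_PORTS

# A 'printer' opening SSH and Git connections is the needle.
print(behaves_like_printer({9100, 631}))       # True: looks normal
print(behaves_like_printer({9100, 22, 9418}))  # False: cause for alarm
```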

But the difficulty is that attackers no longer hack into your networks; they simply log in. Their behaviour is rarely obvious, or even anomalous. They count on swimming through the noise and avoiding becoming the signal. And they constantly seek to outmanoeuvre network defences. We have reached the point where traditional static defences are no longer adequate in a world where breaches are carried out using compromised credentials.

This is where machine learning can add value. It enables security systems to learn to identify threats without being explicitly programmed to do so. Supervised machine learning can be trained to examine ‘labelled’ data sets over which classifiers have run, examining behavioural patterns and contextualising them with existing information to identify the wisps of a trail left by unauthorised users.
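
As a rough illustration of the supervised approach, the sketch below trains a classifier on invented, labelled session features; the feature set, the scikit-learn model and the synthetic data are assumptions made for the example, not the method the report describes.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(seed=0)

# Invented labelled sessions: [logins_per_hour, MB_sent_out,
# distinct_hosts_contacted]; 0 = benign, 1 = known-malicious.
benign = rng.normal([3, 5, 4], [1, 2, 2], size=(200, 3))
malicious = rng.normal([3, 80, 40], [1, 20, 10], size=(20, 3))
X = np.vstack([benign, malicious])
y = np.array([0] * 200 + [1] * 20)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# Score a quiet session and one pushing data to many hosts.
print(clf.predict([[3, 6, 5], [3, 95, 38]]))  # expected: [0 1]
```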

Unsupervised machine learning can be applied to everything left over (‘unlabelled data’) after the classifiers have run on it. Acting on this ‘gravy bowl’ of messy data, unsupervised machine learning can create clusters from what at first glance appears to be nothing more than nonsense, but which could be telling behavioural identifiers: time of day, user role, location from which access is made, the presence or absence of erratic inputs, spikes in activity.
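
A corresponding sketch of the unsupervised side, using density-based clustering (DBSCAN here, one of several possible choices) on invented, unlabelled session features; points that fit no cluster are labelled -1 and become candidates for investigation.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(seed=1)

# Unlabelled session features: [hour_of_day, logins, MB_sent_out].
day_shift = rng.normal([10, 4, 6], [2, 1, 2], size=(150, 3))
night_shift = rng.normal([2, 3, 5], [1, 1, 2], size=(50, 3))
oddball = np.array([[3, 40, 300]])  # a 3 a.m. burst of activity
X = np.vstack([day_shift, night_shift, oddball])

# Standardise so no single feature dominates the distance metric.
X = (X - X.mean(axis=0)) / X.std(axis=0)

labels = DBSCAN(eps=0.9, min_samples=5).fit_predict(X)
print(sorted(set(labels)))  # e.g. [-1, 0, 1]: two clusters plus noise
print(labels[-1])           # the 3 a.m. burst should land in -1 (noise)
```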

Boosting security operations, business continuity

A fundamental challenge for security operations is the shortage of skilled people; the broader discipline of AI holds the promise of augmenting intelligence, making decisions which discriminate between benign and malicious activity around the clock on behalf of users, and orchestrating responses.

Machine learning can have a tremendous impact on security operations centres, equipping them to act with greater speed and precision. And for those who view security as a business continuity issue, a more effective operations centre delivers an obvious advantage.

However, achieving that impact depends on correct implementation of the tools, and on the attention of skilled people who understand their outputs. No tool is a silver bullet, nor does any tool act in isolation. What the increased application of AI to the security field does is add yet another facet to the multidimensional modern threat environment.

The challenges don’t end there. Machine learning is being used to make ever more critical decisions, well beyond suggesting the next movie to watch or song to listen to. Along with that power comes responsibility. A machine doesn’t explain the decisions it makes, and when those decisions accidentally lock a key executive out of their systems, or pilot a self-driving car into a pedestrian, there are consequences. It’s therefore becoming necessary to expose the logic and decision-making processes in a way that people can understand.
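
One simple, if partial, step in that direction is to surface which behavioural signals most influenced a model. The sketch below does this with scikit-learn’s built-in feature importances on invented data, standing in for the richer explainability tooling the problem ultimately demands.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(seed=2)
feature_names = ["hour_of_day", "MB_sent_out", "distinct_hosts"]

# Invented data in which the label mostly depends on MB_sent_out, so
# the explanation has a real signal to surface.
X = rng.normal(size=(300, 3))
y = (X[:, 1] > 1.0).astype(int)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Rank the signals that drove the model's decisions.
for name, weight in sorted(zip(feature_names, clf.feature_importances_),
                           key=lambda pair: -pair[1]):
    print(f"{name}: {weight:.2f}")
```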

Finally, before any tool is introduced, it must be assessed with regard to the benefit it brings to the business. AI holds much promise. But unless it’s lowering operational cost, improving business continuity performance or driving competitive advantage, it has no place in the organisation.