Behavioural analytics and artificial intelligence demand a relook at identity

By Mark Thomas, Group CTO Cybersecurity, Dimension Data, and Tim “TK” Keanini, Distinguished Engineer, Security Business Group, Cisco

Renewed interest in artificial intelligence (AI), and the acceleration of its capabilities, are providing security professionals with an expanded toolbox with which to ply their trade. Among those tools is the application of machine learning, a subset of AI, to the field of behavioural analytics, which identifies patterns in the way that people and objects interact on the network. Those patterns can play a valuable role in bolstering identity management and threat detection.

Machine learning underpins behavioural analytics because it can constantly monitor and evaluate millions of interactions, establishing a baseline of ‘normal’ user behaviour and associating those baselines with individual actors. It can then learn to seek out and identify unusual deviations and potentially suspicious activity that might signal malicious intent. These are the findings of Dimension Data’s Cybersecurity 2018 IT Trends.
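
To make the baseline idea concrete, here is a minimal sketch (not drawn from the report) of the kind of check a behavioural analytics engine performs: keep a rolling history of one per-user metric, in this case hourly event counts, and flag observations that deviate sharply from that user’s own history. The metric, window size and threshold are illustrative assumptions.

```python
from collections import defaultdict
from statistics import mean, stdev

history = defaultdict(list)   # user -> recent hourly event counts (the 'baseline')
WINDOW = 168                  # keep roughly a week of hourly samples (assumed)
THRESHOLD = 3.0               # z-score above which an observation is flagged (assumed)

def observe(user, events_this_hour):
    """Record one observation and return True if it deviates from the user's baseline."""
    samples = history[user]
    anomalous = False
    if len(samples) >= 24:    # wait for some history before judging
        mu, sigma = mean(samples), stdev(samples)
        if sigma > 0 and abs(events_this_hour - mu) / sigma > THRESHOLD:
            anomalous = True
    samples.append(events_this_hour)
    if len(samples) > WINDOW:
        samples.pop(0)        # slide the window forward
    return anomalous
```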

An expanded challenge

While identity management has always presented a challenge, it has largely revolved around managing the identity of people: employees, visitors, partners, suppliers and service providers. While user identity itself is constantly evolving, there is now a new set of problems relating to the identity of objects in the cloud. Those objects could be anything: databases, applications, widgets, IP addresses, workloads, clusters.

The nature of cloud computing, and the availability of multiple service types, means many of these objects are ephemeral. They could exist for a matter of seconds.

They could be legitimate, or they could be malicious.

Added to this Sisyphean identity management challenge is the prevailing business reality: people work around the clock, as do automated systems.

Conceptually, behavioural analytics offers one of the answers to this challenge. With this technique, the behaviour of an object (or person) is identified and recognised by the system. For human users, a new factor is introduced for authentication alongside a password (something you know), a one-time PIN (something you have) and fingerprints (something you are): the way in which you interact with and use the systems and data needed to do your job (something you do), measured continuously rather than at a point in time.
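
As a purely illustrative sketch (not from the article), the ‘something you do’ factor can be treated as a continuously updated score sitting alongside the traditional factors, with a falling score pushing the session to step-up verification. The threshold and function names here are assumptions.

```python
STEP_UP_THRESHOLD = 0.6   # assumed cut-off for the behaviour score, not from the article

def session_decision(knows_password: bool, has_valid_otp: bool,
                     behaviour_score: float) -> str:
    """behaviour_score in [0, 1] would come from a behavioural analytics engine."""
    if not (knows_password and has_valid_otp):
        return "deny"                 # traditional factors still gate entry
    if behaviour_score < STEP_UP_THRESHOLD:
        return "step-up"              # e.g. re-prompt for a second factor mid-session
    return "allow"
```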

For applications and other objects, the same applies. The way in which an application behaves provides clues as to its intentions and legitimacy. Behaviour becomes an additional layer, which continuously monitors how users and objects access data, applications and other objects, where they connect from, and how and what they do. The number of mouse clicks, how the mouse moves, and how the keys on a keyboard are pressed (how long a key is held down is known as ‘dwell time’; the time between key presses is ‘flight time’) all leave tiny identification clues.
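
The dwell-time and flight-time signals mentioned above are straightforward to derive from raw key events. The sketch below is a hypothetical illustration; the (key, event type, timestamp) format is an assumption, not any particular product’s API.

```python
def keystroke_features(events):
    """Return (dwell_times, flight_times) from (key, 'press'|'release', seconds) events."""
    dwell_times, flight_times = [], []
    press_time = {}
    last_press = None
    for key, kind, t in sorted(events, key=lambda e: e[2]):
        if kind == "press":
            if last_press is not None:
                flight_times.append(t - last_press)      # time between successive key presses
            last_press = t
            press_time[key] = t
        elif kind == "release" and key in press_time:
            dwell_times.append(t - press_time.pop(key))  # how long the key was held down
    return dwell_times, flight_times

# Example: typing "hi"
sample = [("h", "press", 0.00), ("h", "release", 0.09),
          ("i", "press", 0.21), ("i", "release", 0.28)]
print(keystroke_features(sample))   # approximately ([0.09, 0.07], [0.21])
```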

(Note: advanced malware is built to circumvent these techniques. It tries to work out whether it’s in a sandbox or a production environment, ‘keeping its cool’ if it decides it’s contained.)

Choose the right tools

Implementing behavioural analytics doesn’t necessarily require reaching into the AI or machine learning toolbox. Instead, it starts with approaching the organisation and ensuring it can express its business logic. In other words, it begins with a conversation between technologists and business users – and that conversation must uncover a business problem. From that flows business rules and logic.

In some instances, straightforward statistics deliver results in behavioural analytics: if an object looks like a printer but behaves like a developer, it’s cause for alarm. That anomaly is, for the security analyst, the needle in the haystack.
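
A hypothetical, purely rule-based version of that printer-versus-developer check might compare a device’s declared type against what it is actually observed doing; the profiles and field names below are invented for illustration.

```python
EXPECTED = {
    "printer":   {"ports": {9100, 631, 515}, "max_unique_destinations": 5},
    "developer": {"ports": {22, 443, 3389},  "max_unique_destinations": 200},
}

def looks_suspicious(declared_type, observed_ports, destinations):
    """Flag a device whose observed behaviour doesn't match its declared role."""
    profile = EXPECTED.get(declared_type)
    if profile is None:
        return True                                        # unknown type: investigate
    unexpected_ports = set(observed_ports) - profile["ports"]
    too_chatty = len(set(destinations)) > profile["max_unique_destinations"]
    return bool(unexpected_ports) or too_chatty

# A 'printer' opening SSH sessions to dozens of hosts would be flagged:
print(looks_suspicious("printer", [9100, 22],
                       ["10.0.0.%d" % i for i in range(40)]))   # True
```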

But the difficulty is that attackers no longer hack into your networks; they simply log in. Their behaviour is rarely obvious, or even anomalous. They count on swimming through the noise and avoiding becoming the signal. And they constantly seek to outmanoeuvre network defences. We have reached the point where traditional static defences are no longer adequate in a world where breaches are carried out using compromised credentials.

This is where machine learning can add value. It enables security systems to learn to identify threats without being explicitly programmed to do so. Supervised machine learning can be trained to examine ‘labelled’ data sets over which classifiers have run, examining behavioural patterns and contextualising them with existing information to identify the wisps of a trail left by unauthorised users.
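
A hedged illustration of the supervised approach, using scikit-learn and an invented feature set (hour of day, outbound megabytes, distinct hosts contacted, failed logins): train on a handful of labelled records, then score new sessions. The features, labels and model choice are assumptions, not the specific method described by the authors.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Columns: [hour_of_day, bytes_out_mb, distinct_hosts, failed_logins]
X = np.array([
    [9,  12.0,  3, 0], [10,  8.5,  2, 0], [14, 15.2,  4, 1],   # labelled benign (0)
    [3, 250.0, 40, 6], [2, 310.5, 55, 9], [4, 180.0, 35, 4],   # labelled malicious (1)
])
y = np.array([0, 0, 0, 1, 1, 1])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

new_session = np.array([[3, 275.0, 48, 7]])   # unseen behaviour to score
print(clf.predict(new_session))               # likely [1]: flagged as suspicious
print(clf.predict_proba(new_session))         # class probabilities for the analyst
```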

Unsupervised machine learning can be applied to everything left over (‘unlabelled data’) after the classifiers have run on it. Acting on this ‘gravy bowl’ of messy data, unsupervised machine learning can create clusters from what at first glance appears to be nothing more than nonsense, but which could be telling behavioural identifiers: time of day, user role, location from which access is made, the presence or absence of erratic inputs, spikes in activity.
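
And a matching unsupervised sketch: cluster the unlabelled session records and treat whatever the algorithm cannot place in a cluster as the behavioural oddities worth an analyst’s attention. DBSCAN, the feature set and the parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# Columns: [hour_of_day, location_code, requests_per_minute] -- invented features
sessions = np.array([
    [9, 1, 30], [10, 1, 28], [9, 1, 35], [11, 1, 32],   # a daytime office cluster
    [22, 2, 5], [23, 2, 6], [22, 2, 4],                  # a small evening cluster
    [3, 7, 400],                                         # odd hour, odd location, burst of requests
])

labels = DBSCAN(eps=0.9, min_samples=2).fit_predict(
    StandardScaler().fit_transform(sessions))
print(labels)   # points labelled -1 (noise) are the candidates for human review
```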

Boosting security operations, business continuity

A fundamental challenge for security operations is the shortage of skilled people; the broader discipline of AI holds the promise of augmenting intelligence, making decisions which discriminate between benign and malicious activity around the clock on behalf of users, and orchestrating responses.

Machine learning can have a tremendous impact on security operations centres, equipping them to act with greater speed and precision. And for those who view security as a business continuity issue, a more effective operations centre delivers an obvious advantage.

However, achieving that impact depends on correct implementation of the tools, and on the attention of skilled people who understand the outputs from those tools. No tool is a silver bullet, nor does it act in isolation. What the increased application of AI to the security field does is provide yet another facet to addressing the multidimensional modern threat environment.

The challenges don’t end there. Machine learning is being used to make far more critical decisions than, for example, suggesting the next movie to watch or song to listen to. But along with that power comes responsibility. A machine doesn’t explain the decisions it makes. When those decisions accidentally lock a key executive out of their systems, or pilot a self-driving car into a pedestrian, there are consequences. It’s therefore becoming necessary to expose the logic and decision-making processes in a way that people can understand.

Finally, before any tool is introduced, it must be assessed with regard to the benefit it brings to the business. AI holds much promise. But unless it’s lowering operational cost, improving business continuity performance or driving a competitive advantage, it has no place in the organisation.
