
How multi-factor authentication AI could transform security

AI report

Imagine a video surveillance system monitoring the perimeter fence of a weapons facility. It can easily detect that there is movement in an area where there should be none. But how can Artificial Intelligence (AI) decide with 99.9% certainty that the moving object is a human and not an animal?

Further verification comes from a super-sensitive infrared scanner mounted on the fence, which can detect whether the heat signature and frequencies coming from the object are those of a human.

Meanwhile, a targeted microwave human heartbeat detector confirms whether the object has a human heartbeat. Simultaneously, a radio frequency detector can pick up whether the object is carrying an electronic device, such as a mobile phone, and capture the unique ID of that device for future comparison.

A millimeter-wave radar also scans the object to check whether its radio emission falls within the human range, and an E-nose detects whether the object smells like a human.

Based on the results of this multi-factor authentication process, the AI decides whether to notify the on-site security team that they should investigate further.
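To make that decision step concrete, here is a minimal sketch in Python of one way such multi-sensor fusion could work. Everything in it is an assumption for illustration: the sensor names, the per-sensor confidences and the naive Bayes-style independence model are invented rather than taken from the report; only the 99.9% certainty target comes from the scenario above.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    name: str
    p_human: float  # the sensor's estimated probability that the object is human

def fuse(readings: list[SensorReading], prior: float = 0.5) -> float:
    """Naive Bayes-style fusion: treat each sensor as independent evidence."""
    odds = prior / (1 - prior)
    for r in readings:
        # Clamp to avoid division by zero from an overconfident sensor.
        p = min(max(r.p_human, 1e-6), 1 - 1e-6)
        odds *= p / (1 - p)
    return odds / (1 + odds)

readings = [
    SensorReading("video_motion", 0.90),
    SensorReading("infrared_signature", 0.95),
    SensorReading("heartbeat_detector", 0.97),
    SensorReading("rf_device_detector", 0.80),
    SensorReading("mmwave_radar", 0.92),
    SensorReading("e_nose", 0.85),
]

confidence = fuse(readings)
if confidence >= 0.999:  # the 99.9% certainty target from the scenario
    print(f"Notify security team (confidence {confidence:.6f})")
```

The point of the sketch is that no single sensor reaches 99.9% on its own; it is the agreement of several independent factors that pushes the combined confidence over the threshold.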

Improving sensing technologies

While all these sensing technologies are available today, they are not necessarily able to sense at range for remote analysis. Similarly, AI capable of making a split-second decision like this one does not yet exist.

But a new report published today, titled Artificial intelligence and its applications in security and written by David Quinn, Product Manager at G4S in the UK, says it could be a reality in the future.

Given that humans possess an impressive range of senses (we can smell and distinguish one trillion scents, for example) - and rely on multiple senses, both consciously and subconsciously, to understand our environment - improving sensing technologies is an obvious development area.

AI capable of deciding whether an object is a human or an animal, as outlined in the earlier scenario, is likely to be the first step in this process. This in itself could prove incredibly useful for the security industry, given the number of false alarms caused by animals, which take up both time and resources.


Differentiating between individuals

Once this becomes possible, the next target could be to develop AI that can distinguish between different people using the same process of multi-factor authentication. Facial, voice and gait recognition are already in use, but AI might, in the future, be able to tell one individual from another by their heartbeat, smell and radiation as well.

If such technologies were possible, they would likely be installed at transport hubs - making it practically impossible for criminals on the run to evade the authorities - or at other buildings and locations where criminals intend to do harm. AI with this level of capability could even inform the police directly about a person of interest.

Of course, in order to identify specific individuals you would need samples from them. Even without samples, however, it might still be possible for AI to recognise whether it is ‘sensing’ the same person as yesterday at the weapons facility - just as a ‘snapshot’ is used for facial recognition today.
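As a hedged illustration of that ‘snapshot’ idea, the sketch below compares yesterday's stored multi-sensor signature with today's reading. The feature layout, the example values and the 0.98 threshold are all invented; a real system would need calibrated, per-feature normalised measurements rather than raw readings.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

# Hypothetical multi-sensor signature: [heart rate (bpm), skin temperature (C),
# gait period (s), RF device fingerprint (scaled), e-nose response (scaled)].
# In practice each feature would be normalised so no single one dominates.
yesterday = [72.0, 36.5, 1.10, 0.43, 0.81]
today = [74.0, 36.7, 1.08, 0.43, 0.79]

if cosine_similarity(yesterday, today) > 0.98:  # threshold is an assumption
    print("Likely the same person sensed yesterday")
```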

Risks

If technologies with this kind of capability prove possible, it would be a significant development for the security sector, which to date has relied mainly on sight through its emphasis on video monitoring. But developments like this do not come without risks.

One well-known issue with machine learning is that, as the name suggests, the machine needs to learn about new targets by “reading” them a number of times. This creates a vulnerability: anomalies detected early on, before the model has seen enough examples, may be ignored or handled erroneously.
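The toy sketch below illustrates that learning-period weakness. The class, thresholds and warm-up length are all hypothetical: a detector that builds a statistical baseline from its first observations will, by construction, absorb an anomaly that arrives during the warm-up instead of flagging it.

```python
class OnlineDetector:
    """Toy anomaly detector that only trusts its model after a warm-up period."""

    def __init__(self, min_observations: int = 30, tolerance: float = 3.0):
        self.values: list[float] = []
        self.min_observations = min_observations
        self.tolerance = tolerance

    def observe(self, value: float) -> str:
        self.values.append(value)
        if len(self.values) < self.min_observations:
            # The vulnerability: during warm-up, an anomaly is silently
            # absorbed into the baseline instead of being flagged.
            return "learning"
        mean = sum(self.values) / len(self.values)
        std = (sum((v - mean) ** 2 for v in self.values) / len(self.values)) ** 0.5
        if std and abs(value - mean) > self.tolerance * std:
            return "anomaly"
        return "normal"

detector = OnlineDetector()
for reading in [1.0] * 29 + [9.0]:
    status = detector.observe(reading)
print(status)  # "anomaly" - but the same 9.0 during warm-up would return "learning"
```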

And as the level, complexity and accuracy of AI increase, regulators will need to ensure they keep pace to avoid misuse or untoward harm. Non-democratic governments and police states could also abuse AI with this capability to keep ever-closer tabs on people.

David comments: “Societies need to consider the application of these technologies and apply them for the force of good. However, they have the potential to change how we protect public and private spaces - having very reliable AI which only informs humans of events requiring intervention allows them to concentrate on critical tasks.”

The full report is available to read at https://www.g4s.com/en-gb/what-we-do/security-solutions/commercial-security-systems/tech-talks
