San Francisco: Threat actors could use artificial intelligence (AI) tools to steal user passwords with near-perfect accuracy simply by “listening” to an unsuspecting person’s keystrokes, a new study has found.
According to the results, published by the US-based Cornell University, the AI programme reproduced the typed password with 95 per cent accuracy when the keystrokes were recorded on a nearby smartphone.
A group of computer scientists from the UK trained an AI model to recognise keystroke sounds on a 2021 MacBook Pro, described as a “popular off-the-shelf laptop”, the New York Post reports.
The AI tool also proved extremely accurate when “listening” to typing over a Zoom video conference through the laptop’s microphone.
According to the researchers, it reproduced the keystrokes with 93 per cent accuracy, a record for the medium.
Moreover, the researchers cautioned that many users are unaware that bad actors could monitor their typing in this way to breach their accounts, a type of cyberattack known as an “acoustic side-channel attack”.
“The ubiquity of keyboard acoustic emanations makes them not only a readily available attack vector but also prompts victims to underestimate (and therefore not try to hide) their output,” the study said.
“For example, when typing a password, people will regularly hide their screen but will do little to obfuscate their keyboard’s sound,” it added.
To train and test the programme, the researchers pressed each of 36 keys on the laptop 25 times, with each press “varying in pressure and finger”.
The programme could “listen” for distinguishing elements of each key press, such as sound wavelengths.
The smartphone, an iPhone 13 mini, was positioned 17 centimetres away from the keyboard.
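For readers curious about the mechanics, the pipeline the article describes, isolating individual key presses from a recording, turning each into a spectrogram-style “fingerprint” and training a model to label them, can be sketched in a few lines of Python. The sketch below is illustrative only: it substitutes a simple k-nearest-neighbours classifier for the deep learning model used in the study, and the sample rate, snippet length, file names and library choices (librosa, scikit-learn) are assumptions, not the researchers’ actual code.

```python
import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

SR = 44_100          # assumed sample rate of the phone recording
SNIPPET_SEC = 0.33   # assumed length of audio kept around each key press


def extract_keystroke_features(wav_path):
    """Return one feature vector per key press detected in the recording."""
    audio, sr = librosa.load(wav_path, sr=SR)
    # Rough keystroke isolation: treat energy onsets as the start of a press.
    onsets = librosa.onset.onset_detect(y=audio, sr=sr, units="samples")
    window = int(SNIPPET_SEC * sr)
    features = []
    for start in onsets:
        snippet = audio[start:start + window]
        if len(snippet) < window:
            snippet = np.pad(snippet, (0, window - len(snippet)))
        # A mel-spectrogram records how the press's energy is spread across
        # frequencies over time -- the per-key "signature" the model learns.
        mel = librosa.feature.melspectrogram(y=snippet, sr=sr, n_mels=64)
        features.append(librosa.power_to_db(mel).flatten())
    return np.array(features)


# Hypothetical training data: one labelled recording per key, mirroring the
# 36-keys-pressed-25-times-each collection described in the article.
X, y = [], []
for key in "abcdefghijklmnopqrstuvwxyz0123456789":
    feats = extract_keystroke_features(f"recordings/key_{key}.wav")  # assumed file layout
    if len(feats):
        X.append(feats)
        y.extend([key] * len(feats))
X = np.vstack(X)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print(f"Held-out keystroke accuracy: {clf.score(X_test, y_test):.2%}")
```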