Complex arrays of microphones are often used by law enforcement and the military to help quickly pinpoint where the sound of gunfire originates. But researchers at Carnegie Mellon University have found that videos captured by smartphones can be just as useful for determining the location of a shooter.
The Video Event Reconstruction and Analysis system—or VERA, for short—was developed at CMU’s Language Technologies Institute in cooperation with SITU Research, which contributed its expertise in ballistics and architecture. The tool was released last month as free, open-source code at the Association for Computing Machinery’s International Conference on Multimedia in Nice, France.
Using machine learning, VERA first synchronizes footage from multiple smartphone videos shot in and around the scene of a shooting. The more footage collected, the more accurate the results will be, but the researchers found the system performed well even with footage from just three devices. Once the clips are synchronized, VERA calculates the position from which each video was filmed based on landmarks and other notable features visible in the footage.
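The article doesn't detail how VERA's machine-learning synchronization works internally, but a common baseline for aligning recordings of the same event is audio cross-correlation: a loud, impulsive sound (such as a gunshot) appears in every clip's audio track, and the offset that best lines the tracks up can be read off the correlation peak. The sketch below is an illustration of that general technique, not VERA's actual code, using a toy sample rate and synthetic signals.

```python
import numpy as np

def estimate_offset(ref: np.ndarray, clip: np.ndarray, sample_rate: int) -> float:
    """Seconds by which `clip`'s audio lags `ref`.

    A negative result means the shared event occurs earlier in `clip`,
    i.e. that device started recording later than the reference device.
    """
    # Full cross-correlation; the index of the peak gives the best lag.
    corr = np.correlate(clip, ref, mode="full")
    lag = int(np.argmax(corr)) - (len(ref) - 1)
    return lag / sample_rate

# Toy demo: one impulsive sound (e.g. a muzzle blast) recorded by two
# devices, the second of which started recording 0.5 s later.
rate = 1000                                  # samples per second
t = np.arange(0, 2.0, 1 / rate)
event = np.exp(-((t - 1.0) ** 2) / 0.001)    # sharp pulse at t = 1.0 s
ref = event                                  # device A's audio track
clip = np.roll(event, -500)                  # same pulse, 0.5 s earlier
print(estimate_offset(ref, clip, rate))      # prints -0.5
```

Shifting each clip's timeline by its estimated offset puts all the recordings on a common clock, which is what makes the later time-of-arrival comparisons between devices meaningful.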
The system then processes the audio from each clip, identifying two distinct sounds: the crack of the shock wave created by the supersonic bullet in flight, and the blast emanating from the weapon’s muzzle. The time delay between these two arrivals provides a crucial clue, and the sounds also help reveal the type of gun used, which in turn indicates the speed of the bullet. By combining all of that information, VERA is able to determine the location of the shooter with a surprising level of accuracy.