Researchers at the University of Michigan have devised a new way to unlock greater amounts of data from acoustic fields by essentially turning down the pitch of sound waves. That additional information could boost performance of passive sonar and echolocation systems for detecting and tracking adversaries in the ocean, medical imaging devices, seismic surveying systems for locating oil and mineral deposits, and possibly radar systems as well.
Acoustic fields are richer in information than is typically thought, according to David Dowling, a professor in U-M's Department of Mechanical Engineering and the lead researcher. He likens his approach to solving a problem of human sensory overload. Sitting in a room with your eyes closed, you would have little trouble locating someone speaking at normal volume, because speech frequencies sit squarely in the comfort zone of human hearing. Now imagine yourself in the same room when a smoke alarm goes off. That screech is generated by sound waves at much higher frequencies, which create directional confusion for the human ear, and you would struggle to locate its source without opening your eyes for additional sensory information. The techniques developed by Dowling and his team allow just about any signal to be shifted into a frequency range where that confusion disappears.
Navy sonar arrays on submarines and surface ships deal with a similar kind of confusion as they search for vessels on the ocean surface and below the waves. Detecting and locating enemy ships at sea is a crucial task for naval vessels. Sonar arrays are typically designed to record sounds in specific frequency ranges, and sounds at frequencies above an array's intended range may confuse the system: it might detect the presence of an important contact yet still be unable to locate it.
Any time sound is recorded, a microphone takes the role of the human ear, sensing sound amplitude as it varies in time. Through a mathematical calculation known as a Fourier transform, sound amplitude versus time can be converted to sound amplitude versus frequency. With the recorded sound translated into frequencies, Dowling puts his technique to use: he mathematically combines any two frequencies within the signal's recorded range to reveal information outside that range, at a new, third frequency that is the sum or difference of the two input frequencies. Information at that third frequency is something researchers haven't traditionally had access to.
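To make the idea concrete, here is a minimal numerical sketch in Python with NumPy. It assumes, purely for illustration, that the combination is a product of one complex spectral amplitude with the conjugate of another, so that the result's phase evolves at the difference of the two input frequencies; the signal, the two frequencies, and the inter-sensor delay are all invented for the example, and the published method is more involved than this.

```python
import numpy as np

fs = 48_000                        # sample rate, Hz
t = np.arange(0, 0.1, 1 / fs)      # 100 ms record: 4800 samples
tau = 0.37e-3                      # assumed inter-sensor travel-time delay, s

def field(time):
    # High-frequency signal with energy at two in-band frequencies.
    return np.sin(2 * np.pi * 9_000 * time) + np.sin(2 * np.pi * 10_000 * time)

p_a = field(t)        # pressure record at sensor A
p_b = field(t - tau)  # sensor B hears the same field, delayed by tau

def bin_amp(x, f):
    """Complex spectral amplitude of x at the FFT bin nearest frequency f."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    return spec[np.argmin(np.abs(freqs - f))]

# Direct in-band comparison: the 10 kHz phase difference between sensors,
# 2*pi*10000*tau, exceeds a full cycle and wraps, so the delay (and hence
# the direction) read from it is ambiguous.
phase_hi = np.angle(bin_amp(p_b, 10_000) / bin_amp(p_a, 10_000))

# Combining two in-band frequencies on each sensor: the product
# P(f1) * conj(P(f2)) carries phase at the 1 kHz *difference* frequency,
# which stays under one cycle for this delay and does not wrap.
q_a = bin_amp(p_a, 10_000) * np.conj(bin_amp(p_a, 9_000))
q_b = bin_amp(p_b, 10_000) * np.conj(bin_amp(p_b, 9_000))
phase_lo = np.angle(q_b / q_a)

print(f"10 kHz inter-sensor phase (wrapped):    {phase_hi:+.3f} rad")
print(f"1 kHz difference-frequency phase:       {phase_lo:+.3f} rad")
print(f"unambiguous expectation -2*pi*1000*tau: {-2 * np.pi * 1_000 * tau:+.3f} rad")
```

Running the sketch shows the phase measured directly at 10 kHz wrapping past a full cycle over the assumed delay, while the synthesized 1 kHz difference-frequency phase matches the true delay without ambiguity, which is the turned-down-pitch advantage described above.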
In the case of a Navy vessel's sonar array, that additional information could allow an adversary's ship or underwater asset to be reliably located from farther away, or with equipment that was not designed to receive the recorded signal. In particular, tracking the distance and depth of an adversary from hundreds of miles away, far beyond the horizon, might become possible. And what's good for the Navy may also be good for medical professionals investigating the hardest-to-reach areas of the body, such as the inside of the skull. Remote seismic surveys that probe the earth for oil or mineral deposits could be improved in the same way.
According to Dowling, the science behind biomedical ultrasound and the science behind Navy sonar are nearly identical. The waves he studies are scalar, or longitudinal, waves. Electromagnetic waves are transverse, and seismic waves can be both transverse and longitudinal, but both types follow similar equations, which is why the technique may carry over to those domains.
The study was published in the journal Physical Review Fluids.