What could help human behavior? Not what could change it. This is where we start when developing artificial intelligence for our audio solutions at Jabra. As a tech company, we’re constantly pushing forward at the confluence of anthropology and technology to create products that make life better. And while AI and 5G promise a captivating technology landscape ahead, we’re also focused on how AI can make a difference today.
That’s why SmartSound is our first implementation of AI in our audio products. While third-party augmented audio experiences and voice-assisted software promise further benefits for hearables in the future, we are committed to developing in-house AI for our hardware to enhance all of those experiences, today as well as in the future. For these reasons, the Elite 85h headphones were a critical project to get right from a software perspective: they are our first headphones to truly integrate AI into the listening experience.
The route to intelligent audio
Though GN has been leading communication innovations for 150 years, one of our more recent partnerships was key to enabling our in-house software development work in the field of AI. Seven years ago, audEERING was founded as a spin-off of the Technical University of Munich; today, the company is a leader in intelligent audio analysis and emotional artificial intelligence. Jabra is a part owner of the company, and we developed SmartSound in partnership with audEERING to deliver better experiences as we create audio solutions that are useful all day long.
The audEERING audio software recognizes acoustic scenes using machine intelligence and deep learning techniques. For headsets to function and excel across all areas of media and communication, they need to understand the environments they are in and deliver a responsive audio experience. Within minutes, we can go from a noisy commute to needing awareness in traffic, and then to wanting complete calm in the office or at home. As we navigate our days, it makes sense that our technology should adjust our audio to suit the environments we move through.
Learning your surroundings
The first step towards creating personalized audio was to build a model that can detect more than 6,000 unique sound characteristics. Lacing these together with contextual intelligence, our AI then uses this real-time analysis to adapt the headphones’ audio output based on your personal preferences.
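To make the idea concrete, here is a minimal sketch of how a scene classifier’s output could drive per-scene sound settings. It is illustrative only, not Jabra’s or audEERING’s actual pipeline: the scene names, profile values, and the toy classify_scene heuristic are hypothetical stand-ins for a trained model running over thousands of acoustic features.

```python
from dataclasses import dataclass

# Hypothetical sound profile; field names and values are illustrative,
# not Jabra's actual tuning parameters.
@dataclass
class SoundProfile:
    anc_level: float    # 0.0 = ANC off, 1.0 = maximum noise cancellation
    hearthrough: float  # 0.0 = ambient sound blocked, 1.0 = full pass-through

# Example per-scene preferences a user might set in a companion app.
PROFILES = {
    "commute": SoundProfile(anc_level=1.0, hearthrough=0.0),  # block the noise
    "traffic": SoundProfile(anc_level=0.2, hearthrough=0.8),  # stay aware
    "office":  SoundProfile(anc_level=0.7, hearthrough=0.1),  # calm focus
}

def classify_scene(rms_level: float, speech_ratio: float) -> str:
    """Toy stand-in for the scene classifier: a real system would run a
    trained model over thousands of acoustic features, not two numbers."""
    if rms_level > 0.6:
        return "commute"   # loud, broadband noise
    if speech_ratio > 0.5:
        return "office"    # mostly voices
    return "traffic"       # moderate, intermittent noise

def adapt_output(rms_level: float, speech_ratio: float) -> SoundProfile:
    """Map the detected scene to the user's preferred sound profile."""
    return PROFILES[classify_scene(rms_level, speech_ratio)]

print(adapt_output(rms_level=0.8, speech_ratio=0.1))
# SoundProfile(anc_level=1.0, hearthrough=0.0)
```

The key design point is the separation of concerns: the classifier only labels the environment, while the user’s saved preferences decide what each label means for the audio output.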
A balancing act
By balancing smart active noise cancellation against microphone-fed external sound, your listening experience can respond to the environment around you. Four of the eight microphones are used by the powerful digital ANC to filter out the noise around you, while six are engaged during calls to block out wind noise and background distractions. With Juniper Research predicting that the number of voice assistants in use will triple to 8 billion by 2023, creating technology that supports a voice-first environment will enable augmented audio experiences that help us live smarter, listen smarter and do more.
Ahead of the rise in voice, we wanted to give you full control over how much sound is fed in via the microphones and how active your noise cancellation is, so that anyone can customize their music or call profiles and have them change seamlessly as daily movements take them through different environments.
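One practical detail such a system has to handle is making those transitions feel seamless rather than jumpy. The sketch below shows one common approach, debouncing plus ramping; it is an assumption for illustration, not Jabra’s implementation, and the class, thresholds, and scene names are hypothetical: only commit to a new scene after it persists for a number of frames, then ease the ANC level toward its target.

```python
# Hypothetical per-scene ANC targets; names and values are illustrative.
ANC_TARGETS = {"commute": 1.0, "traffic": 0.2, "office": 0.7}

class ProfileSwitcher:
    """Sketch of debounce-and-ramp switching (an assumption, not Jabra's
    actual code): hold a newly detected scene for several consecutive
    frames before committing, then step the ANC level gradually toward
    that scene's target so the change is heard as a fade, not a jump."""

    def __init__(self, hold_frames: int = 20, ramp_step: float = 0.05):
        self.hold_frames = hold_frames  # frames a new scene must persist
        self.ramp_step = ramp_step      # maximum ANC change per frame
        self.scene = "office"           # currently committed scene
        self.candidate = None           # scene waiting to be committed
        self.streak = 0                 # consecutive frames of the candidate
        self.anc = ANC_TARGETS[self.scene]

    def update(self, detected: str) -> float:
        """Feed one classifier label per audio frame; returns the ANC level to apply."""
        if detected == self.scene:
            self.candidate, self.streak = None, 0
        else:
            # Debounce: count how long the new scene label has been stable.
            self.streak = self.streak + 1 if detected == self.candidate else 1
            self.candidate = detected
            if self.streak >= self.hold_frames:
                self.scene, self.candidate, self.streak = detected, None, 0
        # Ramp: move at most ramp_step per frame toward the committed target.
        target = ANC_TARGETS[self.scene]
        self.anc += max(-self.ramp_step, min(self.ramp_step, target - self.anc))
        return self.anc

switcher = ProfileSwitcher()
for label in ["office"] * 5 + ["commute"] * 30:
    level = switcher.update(label)
# After 20 stable "commute" frames, the ANC level begins ramping toward 1.0.
```

The debounce keeps a single misclassified frame from flipping your profile, and the ramp means the switch is a gradual fade rather than an abrupt change in what you hear.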
Whether it’s hearing an important transport announcement, always being available to take calls, or completely checking out during your travels, SmartSound moves with you. Hearables and AI have a high-growth future ahead, but being able to control your surrounding noise is a reality today, and it’s just the start of the intelligent audio wave to come.