How AI Music Startup AudioShake Is Expanding Content Localization

Audience engagement is a primary focus for most music artists, and Forbes reports that the tools creators use to grow their fan bases are evolving. Take Spotify's Release Details Page and Audience Engagement Stats, which measure the impact of marketing efforts and contextualize audience growth, or Apple Music's Music Analytics API, which offers aggregated, anonymized counts of listeners and plays to shed light on fans while protecting their privacy.

Just as customized streaming powers fan engagement, localizing music videos and other content into different languages lets artists further optimize their global marketing strategies, attract new fans and maintain lasting connections with them. In that spirit, YouTube recently launched a multi-language audio feature that allows creators to add dubbing to their videos so the content can be viewed and enjoyed by an international audience.

Early this year, YouTube also announced that creators who tested multi-language dubbed videos saw more than 15 percent of their watch time come from views in a video's non-primary language. This past January, viewers watched more than 2 million hours of dubbed video daily on average. And YouTubers like James Stephen Donaldson, better known as Mr. Beast, who brought stunt- and challenge-centered videos to the platform, agree that dubbing is vital for creators trying to reach a wider audience and maximize fan engagement beyond their local market.

While artists may be eager to begin dubbing, professional localization services can cost upwards of hundreds of dollars per minute of content. AI is one possible way to localize content, though the production quality of AI dubbing often falls short of professional services, since the music and dialogue mixed together in music videos can undermine the accuracy of automatic transcription, translation and captioning.

AI music startup AudioShake is disrupting the music industry with an on-demand platform and API that make quality dubbing and content localization viable for content creators at all levels. The company's AI music-separation technology allows musicians, audio engineers, producers, publishers, labels and other content creators to apply their audio content to new uses in karaoke, sync licensing, remixes, spatial audio, VR/AR, gaming, dubbing and social apps.


"We take audio and help make it accessible and interactive," says AudioShake CEO Jessica Powell. "Image and video has come very far, but not audio, and that's what AudioShake addresses. Breaking it into its building blocks allows you to do many things across media and entertainment."

Used by major labels, music publishers and film production studios, AudioShake's patented stem separation technology cleanly deconstructs audio, including music, dialogue and sound effects, from any video into parts that can be used across music, film, dubbing, transcription and synthetic voice. The technology has even spurred fan engagement on TikTok, where users played guitar with Green Day.
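AudioShake's models and API are proprietary, but the basic idea of stem separation can be illustrated with an open-source stand-in. The minimal sketch below uses the freely available Spleeter library (not AudioShake's technology) to split a mix into vocal and accompaniment stems; the file names are placeholders.

```python
# Illustrative only: Spleeter is an open-source separator used here as a
# stand-in for stem separation in general, not AudioShake's model.
from spleeter.separator import Separator  # pip install spleeter

# "spleeter:2stems" splits a mix into vocals and accompaniment;
# 4- and 5-stem models also exist for drums, bass and piano.
separator = Separator("spleeter:2stems")

# Writes vocals.wav and accompaniment.wav under output/<track name>/.
separator.separate_to_file("song.mp3", "output/")
```

The same separated stems are what make downstream uses like karaoke, remixing or dubbing practical, since each element of the mix can be processed or replaced on its own.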

AudioShake was used to remaster English rock band The Libertines' 2002 album, "Up The Bracket," for its 20th anniversary release.

Joe Smith, production manager at Rough Trade Records, worked on the project and says, "When working with older material, it has long been a challenge to find project files we can tidy up and do the album justice. AudioShake was a crucial element in making a remaster of this iconic album possible; without it we wouldn't have had the same level of control and care."

Less than two years after launching AudioShake, Powell was collaborating with three major label groups and publishers including Primary Wave, Hipgnosis, Spirit, peermusic, Concord, Downtown and Reservoir. She has also taken on projects with Hollywood film and television studios and with dubbing and transcription companies. What sets AudioShake apart, Powell says, is its deep-learning "source separation" AI solution, focused on B2B speech, dialogue and music separation.

"What's going to really change the media landscape quite a bit is an area of content localization. You might have YouTube creators who want to get their videos out in 40 languages and reach more eyeballs, and our technology can strip the incoming audio dialogue for a much cleaner speech that can be taken through automatic translation, transcription and into a synthetic voice with a much higher degree of accuracy," Powell explains. "Meanwhile, we can retain all the original music so that the new videos have that original music, so you get a really faithful localization that doesn't require a Hollywood budget."

In addition to a partnership with cielo24, AudioShake recently teamed up with Dubverse, a fully automated AI dubbing solution that lets creators localize content quickly. AudioShake has raised $5 million in total, including $2.7 million in a recent seed round with investors Metallica-backed Black Squirrel Partners, AJR and Crush Ventures. As Powell's startup invests in research and engineering teams along with AI model training, artists can look forward to better localization to bolster fan engagement and revenue.

