YouTube is launching a new feature giving creators the power to make music with AI-generated vocals from famous pop stars including John Legend, Sia and Charlie Puth, the platform announced Thursday, amid rising tensions between creative industries and the tech sector over who owns material produced by generative artificial intelligence.
YouTube said the experimental AI feature, called Dream Track, will allow a “small group of select U.S. creators” to generate unique 30-second tracks for use on Shorts, the Google-owned platform’s short-form answer to TikTok.
From Thursday, YouTube said Dream Track will be able to generate music in the style of nine artists—Alec Benjamin, Charlie Puth, Charli XCX, Demi Lovato, John Legend, Papoose, Sia, T-Pain and Troye Sivan—from a typed text prompt.
All aspects of the soundtrack—lyrics, instruments and voice—will be generated by the AI tool, YouTube said, adding that the software is powered by Google DeepMind’s “most advanced music generation model to date,” Lyria.
YouTube executives Lyor Cohen and Toni Reid, writing in a blog post announcing the launch, said the nine artists have all chosen to “shape the future of AI in music” and are “intensely curious” about how AI tools can help “push the limits of what they thought possible.”
YouTube said it hopes to develop AI tools that could craft a new guitar riff from a hummed tune or give a pop track "a reggaeton feel," capabilities that may be made available for participants in its Music AI Incubator to test later this year.
Video demonstrations of the new tool show how Dream Track could be used to create a Charlie Puth-style track from the prompt "a ballad about how opposites attract, upbeat acoustic" and a T-Pain-style track from "a sunny morning in Florida, R&B."
The limited launch of YouTube’s AI music tools follows the platform’s rollout of a bevy of other AI-powered tools for creators, including AI-generated backgrounds, topic suggestions for videos and music search. It also comes just days after YouTube cracked down on synthetic content on the platform amid growing concerns among industry figures, regulators, governments and civil society groups that realistic-looking audio and video could fuel disinformation and enable new forms of abuse. The platform introduced measures requiring creators to disclose when they have created or fabricated realistic content, including with AI tools, or face penalties and risk suspension. Other platforms like TikTok are also rolling out tools and requirements to ensure deepfakes and AI-generated content are clearly flagged as fabricated or altered.
With the proliferation of increasingly impressive and realistic content from generative AI models like OpenAI’s text and image generators ChatGPT and Dall-E, there have been growing calls for tools to help people distinguish between real and fabricated content. Leading AI companies have pledged to add watermarks to AI-generated content to help rebuild the eroded boundaries between what is real and what is fake online. In a post on Dream Track’s launch, Google DeepMind, the tech giant’s U.K.-based AI research lab, said audio published through its Lyria model will carry a watermark that is “inaudible to the human ear and doesn’t compromise the listening experience” but allows for detection even after the audio content is manipulated.
What To Watch For
It’s not clear when or if YouTube will roll out its AI tools beyond the select test cohort, or whether new artists will be added. The tool’s release comes amid escalating tensions between creative sectors and AI firms over who owns material produced by generative AI and whether companies have the right to train these models on material, such as music, that is owned and performed by other people and may be reproduced in the models’ output. These tensions have rippled across industries and worked their way into the courts, including lawsuits from authors like Game of Thrones writer George R. R. Martin and from music publishers over lyrics to songs by stars including Beyoncé, Gloria Gaynor and Katy Perry.