Apple Music is the latest platform to join the ranks of those offering users AI disclosures. The platform has introduced a new system called Transparency Tags.
Apple Music has said that AI music added or uploaded to its platform must now carry a “Transparency Tag”. It advised that a Transparency Tag should be applied when a “material portion” of the audio track has been created using AI or features AI-generated content. There are four types of Transparency Tags, covering four different aspects of the service: songs, composition, music videos, and album artwork. Multiple tags can be applied to a single musical work as they relate to each area. Interestingly, the composition category also includes lyrics.
According to Music Business Worldwide, Apple said in a newsletter: “Proper tagging of content is the first step in giving the music industry the data and tools needed to develop thoughtful policies around AI.”
The tags are required to be applied by the so-called supply chain, not the platform. Distributors of the music (whether that’s a label or, for smaller artists, the artists themselves) have the discretion to decide what qualifies as AI-generated and where to apply the tags. It is unclear whether there will be a process to check that distributors are indeed complying with the policy.
Why are AI-generated content disclosures so important? It’s about informed choice
Whether you’re on board with it or not, AI and its integration into all facets of our digital lives is happening – and rapidly. Advances in AI-generated content mean it’s becoming difficult for even the most digitally savvy to distinguish between what’s real and what’s not. Services like Nano Banana and ByteDance’s just-released SeeDance generate images and videos so realistic that anyone could create images or scenes of events that never happened. Many are calling this shift to a world of fake media the death of photographic evidence.
In response, a number of platforms have introduced measures to protect users from AI-generated content; especially given that AI content can often be misleading, promote mis- and disinformation, or show a distorted or unrealistic view of the world. Flagging AI content is essential for factual accuracy and fair representations of reality.
AI-generated content also raises a number of ethical concerns. Generative AI needs to be trained on source material. Image generators are trained on existing art, photographs and other visual works. Music generators are trained on the music and lyrics of existing artists and bands. Language models are trained on existing books, journalism, research, poetry and other written works. Most of the creators of these original works have had their work taken for commercial use without consent or financial compensation.
Meanwhile, the companies behind these generative AI bots receive millions in fees from users, selling a product created from other people’s labour. AI is also significantly more resource-intensive than other technologies like regular Google Search, consuming vast quantities of electricity and water and sparking concerns about its environmental impact.
It’s for these reasons that AI-generated content transparency is so important. Users need to be able to understand if the content they’re consuming is accurate, and whether or not it’s contributing to and feeding broader social inequalities. This allows them to make an informed choice.
Which other services have AI transparency?
Meta, which owns Facebook, Instagram and Threads, said in 2024 that it would roll out AI disclosures on content, although it is unclear how far along this process is. Importantly, Meta said the disclosures would only be applied when the platforms can detect that the content is AI-generated.
“In the coming months, we will label images that users post to Facebook, Instagram and Threads when we can detect industry standard indicators that they are AI-generated,” Meta said in a release on its newsroom. If you’re wondering why you’re yet to see an AI disclosure on these platforms, it’s likely the rollout isn’t complete, or the platforms are having difficulty knowing what is AI and what is not.
Spotify announced back in September 2025 that it was requiring distributors to flag if any part of their uploaded content was AI-generated. It also announced enhanced monitoring of impersonation and spam. French streaming app Deezer has gone down a different route to Apple and Spotify, however. It built an AI-detection system that automatically tags content, rather than relying on distributors to disclose whether they’ve used AI.
YouTube has a controversial AI disclosure policy. It requires disclosure when AI is used to replicate the likeness of a real person, alter footage of real events and places, or generate realistic scenes. It also clearly outlines that no disclosure is required for AI visual enhancers or beauty filters that change a person’s appearance, and that any content easily identifiable as AI is also exempt. The situation with Pinterest also comes with controversy. Pinterest does have a Gen-AI system that flags and tags content that is AI-edited or AI-generated, but it’s not particularly reliable. Some users have gone as far as saying the platform is now unusable. What was once a great source of inspiration is now filled with sterile and largely useless images.
Even Australian commercial radio now has an AI-disclosure policy, after a radio station put AI-generated hosts on air for a number of months before being caught.