Not in ten years’ time. Now. Only a short time earlier, the idea of a “human-made” stamp on music, images or film might have sounded like a slightly eccentric thought experiment. But the world has moved very fast.
In mid-2025, an AI-generated "rock band," The Velvet Sundown, quietly climbed to more than a million monthly listeners on Spotify with retro-sounding songs and a superficially convincing backstory.
Most listeners assumed they were hearing a real group, even if their "band photos" were quite obviously generated. Only later did it become clear that the music, the images and even the “band members” were synthetic creations, assembled with generative tools.
Since then, an entirely artificial R&B singer, Xania Monet, has become the first known AI “artist” to earn enough radio play to debut on a Billboard airplay chart.
A bidding war between labels followed, ending in a reported multi-million-dollar deal with Hallwood Media.
Now, Paul McCartney and a large group of other recording artists have released an album of silent songs, containing nothing but recordings of empty spaces, under the title Is This What We Want?, protesting proposed changes to UK copyright law that would allow AI companies to use these artists’ works without any licensing requirement.
Industrial shift
Behind these visible stories sits an industrial shift.
The AI music platform Suno, which turns text prompts into full songs, has just raised 250 million dollars at a 2.45-billion-dollar valuation. The French streaming service Deezer now says that roughly a third of all tracks uploaded to its platform every day – around 50,000 pieces of music – are already AI-generated.
So far, those tracks account for only about half a percent of listening, but up to 70 percent of their streams appear to be fraudulent, driven by bots gaming the royalty system.
In parallel, Deezer and the polling firm Ipsos asked 9,000 people in eight countries to listen to AI-generated songs next to human-made ones. Ninety-seven percent of listeners could not reliably hear the difference.
So we are entering a world where AI-generated music is everywhere, where it sounds real to almost everyone, and where the financial incentives strongly favour flooding the system with cheap, synthetic sound.
If you are a working musician, that is not an abstract problem. It goes straight to the question of whether you can pay your rent. And yet, as stark as this looks, there is also an opportunity hidden inside it.
The culture business – the world of writers, musicians, designers, filmmakers, actors, editors and all the people around them – employs a few tens of millions of people worldwide.
Compared with the global workforce, that is nothing. A drop in a shot glass, in a bucket, in the ocean. And yet, try to imagine civilisation without them.
Despite this relatively small number of people, cultural work in aggregate is a massive industry in itself. It fuels tourism, enhances national soft power, furthers education, and often leads urban (and rural) regeneration, all in addition to the primary value of books, songs, designs, tapestries and paintings as objects of beauty and communication.
With fewer than one in a hundred people globally working in culture, this small group holds the narrative keys to how we understand the past, interpret the present and imagine the future. They shape stories that define communities and nations. They safeguard memory and turn complexity into something we can talk about. They help anchor democracy through publishing in independent media and asking uncomfortable questions.
Artists create shared rituals through festivals, performances and gatherings. And they do all that regardless of how difficult, or indeed dangerous, it may be.
In Nazi-occupied Warsaw, inside the closed walls of the ghetto, there were five professional theatres and a symphony orchestra. They began to perform only weeks after that terrible place was set up and continued daily until its destruction, and the murder of its inhabitants.
In the Balkans, during the wars of the 1990s, artists kept painting, writers kept writing, musicians continued to play. They did this not because they were safe, or well paid, but because the impulse to make meaning is stubborn, and survives hardship and privation.
Over the last four years I have had many conversations with Ukrainian artists, writers and musicians who continue to create, just as the Russians keep bombing their cities and villages.
The net result of this, in fact, is that the world has finally been learning about Ukraine's long history and rich culture, but what a price to pay...
Flawless, frictionless 'content'
I have come to realise that culture workers are society’s operating system. We encode values, process trauma and imagine the next version of the world, often without a stable income, safe contracts or the security that people in more conventional jobs take for granted.
That life has always been precarious, and that has been an accepted fact in arts circles. Alas, of late, platform economies have made it more so by rewarding scale over craft, and now generative AI introduces a new kind of threat, because it does not just automate tasks. It can produce objects that compete directly with human-made art in the same charts, the same playlists, the same feeds.
But it also creates a sharp contrast, and in that contrast lies an opportunity. In a sea of flawless, frictionless “content,” people start to look for signals of something else.
Virginia Dignum, a leading AI ethics researcher who advises the European Union and the World Economic Forum, puts it very simply: “Signals of authenticity will soon matter more than content.”
Madeleine Schulz, writing in Vogue Business, makes a related point: in a world that feels “numb, dubious and algorithmic, craft feels humanist, sensual and true… craft and provenance are essentially the way to prove value.”
Krzysztof Pelc, a professor of international political economy, goes further: “Research suggests that people tend to prefer art they believe is human-made… This ‘authenticity’ will likely become increasingly prized, with consumers seeking works that reflect individual human vision and passion.”
The interesting thing is that the data now backs them up. A new series of experiments at Columbia Business School found that when people were shown very similar images and told which ones were human-made and which were AI-generated, they valued the human-made works much more highly – up to 60 percent more – even though they often could not tell them apart by sight.
‘Verified Human Content'
Human-labelled art was rated as more creative, more labour-intensive and more valuable, especially when displayed next to AI-labelled pieces.
As one of the researchers put it, “I’m waiting for the day when I’m scrolling through my algorithm and see a ‘Verified Human Content’ label.” In other words: AI does not automatically destroy the value of human creativity. In some contexts, it can even increase it, as long as we can clearly tell which is which. That is the opportunity.
Agnieszka Cichocka, CEO of CreativeTech Poland, with a long track record of projects at the intersection of art and technology, sees both the potential and the likely pitfalls: “We spend our days experimenting with exactly the tools everyone is talking about. We see the upside very clearly: AI can help musicians compose, produce and reach audiences in ways that were unimaginable even five years ago.
"But we also see a hard fork in the road. One path leads to a healthier ecosystem where human creators use these tools and still get paid for the value they bring. The other leads to platforms stockpiling synthetic catalogues that never generate a single royalty payment for a living musician.
"Our job now is to make sure the first path remains open – through standards, policy, and concrete products – so that in ten years’ time there is still a meaningful need for organisations like ours, because there are still human artists making work that deserves to be licensed and rewarded.”
Before we get to the “how,” it is worth saying one more thing about process.
Technology has always been part of art. From oil paint to synthesizers, from printing presses to digital cameras, we constantly invent new tools that make some things easier and other things possible for the first time. What is different now is not that tools are powerful. It is that they can give you plausible results without requiring you to go through the friction that normally shapes a piece of work.
Pretend art
Generative systems, especially when used in “lazy mode,” short-circuit the creative process. There seems to be no need to wrestle with the blank page if you can have your first draft written for you in the time it takes to make coffee.
There seems to be no need to struggle with chords if you can ask for “indie ballad, early Coldplay vibe” and receive a polished imitation in seconds.
When we ask these systems to produce finished songs or images for us, what we get is not art, however. It is an approximation of art: something that looks or sounds like the result of human searching, but without the actual search. Pretend art. Ersatz art. Make-believe art.
On its own, that approximation is not the end of the world. The real danger comes when platforms and producers flood the cultural space with such approximations because they are cheap to make and easy to monetise.
We end up with a twin assault: amateur users chasing quick dopamine hits and professional operations using AI to optimise revenue. The spectrum narrows; horizons shrink; variety dies out.
The worry is not that people use tools. It is that we drift into a normalisation of the perfect fake, and in that flood the necessarily slower, more idiosyncratic work of human artists is drowned out.
Of course, when the marginal cost of producing another song or another piece of visual art nears zero, what’s the perceived value of ANY song or piece of visual art?
Value accrues only to those who produce such work at scale, and those who distribute it. Attempting to compete against such massive machinery can only lead to frustration. We have seen this with photography; now we are seeing it with every form of artistic output.
One of Poland's leading young pianists and composers, Michał Salamon, can see the creative potential of the technology, of course, but worries about the broader context.
“Look, I’m not scared of the technology itself. As a composer, I can see ways AI can help sketch ideas, test arrangements, even open doors I might not have thought of.
"What scares me is how fast the business side is twisting that into something ugly. You can feel the streaming platforms and big labels leaning towards a future where most of what they ‘release’ is synthetic catalogue that never gets tired, never negotiates, never needs to be paid.
"That isn’t ‘innovation,’ it’s a quiet, systematic eviction of working musicians from their own field. If AI is going to be part of music, fine – but it cannot be an excuse to turn human artists into unpaid R&D for a library of fake products.”
'Hallmark for human creativity'
So the question becomes: how do we keep the door open for that slower, more difficult work? How do we help people who do care, and will care even more in the future, to find it?
One potential answer is deceptively simple. We build a system that lets us say, in a verifiable way: this work is human-made. Not to police people’s tastes. Not to ban tools. But to give us a way to attach recognition, money and policy support to the work that still comes out of human effort and human listening. Think of it as a kind of “hallmark for human creativity.”
This would need to be voluntary and creator-driven. Nobody should be forced to join, and the idea is not to set up a council of elders deciding what counts as “real art.” This is not about aesthetics, but rather about economics.
The point is to give artists, writers, filmmakers and musicians a way to declare, “this is how this work was made,” and to have that declaration recorded in a trustworthy way.
It would be administered by trusted cultural institutions or non-profits – arts councils, culture ministries, international networks of creators – not by the big technology platforms themselves. We have already seen what happens when platforms mark their own homework.
Technically, bringing such a solution to life need not be difficult. The pieces are already here: open metadata standards that can travel with a file; blockchain or other tamper-resistant registers; public databases that anyone can query; watermarking and content-provenance tools being developed by news organisations and image agencies.
At its simplest, a certification scheme could have three main tiers. The first would be “human-created,” for works conceived and executed by identifiable people using tools that do not themselves generate new content. The second would be “human–AI hybrid, with author disclosure,” for work where a person has used generative systems in the process – as an instrument, a collaborator or a sketchpad – and is willing to describe how. The third would be “AI-generated without human intervention,” for outputs where no individual claims artistic authorship.
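To make the mechanism concrete, here is a minimal sketch of what such a registry might look like in code. Everything in it is hypothetical — the field names, the three tier labels, and the in-memory "register" all stand in for whatever an arts council or standards body would actually specify — but it shows the core idea: a declaration of process, cryptographically bound to the file it describes, that anyone can later verify.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

# Hypothetical tier labels mirroring the three-level scheme described above.
TIERS = ("human-created", "human-ai-hybrid", "ai-generated")

@dataclass
class ProvenanceRecord:
    work_title: str
    author: str
    tier: str          # one of TIERS
    disclosure: str    # free-text description of any generative tools used

    def fingerprint(self, work_bytes: bytes) -> str:
        """Tamper-evident hash binding the declaration to the file itself."""
        payload = json.dumps(asdict(self), sort_keys=True).encode() + work_bytes
        return hashlib.sha256(payload).hexdigest()

# A toy in-memory registry; a real system would use a public database
# or an append-only ledger run by a trusted cultural institution.
registry: dict[str, ProvenanceRecord] = {}

def register(record: ProvenanceRecord, work_bytes: bytes) -> str:
    """File a declaration and return its key (the fingerprint)."""
    assert record.tier in TIERS, "unknown certification tier"
    key = record.fingerprint(work_bytes)
    registry[key] = record
    return key

def verify(key: str, work_bytes: bytes) -> bool:
    """Anyone can re-derive the hash and check the declaration still matches."""
    record = registry.get(key)
    return record is not None and record.fingerprint(work_bytes) == key

# Usage: certify a (pretend) song file, then check it.
song = b"...audio bytes..."
rec = ProvenanceRecord("Quiet River", "A. Composer", "human-created",
                       "No generative tools used")
key = register(rec, song)
print(verify(key, song))             # True: declaration matches the file
print(verify(key, b"altered file"))  # False: any change breaks the binding
```

The design choice worth noticing is that the register does not judge the work; it only makes the declared process checkable after the fact, which is exactly the "economics, not aesthetics" stance argued for above.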
The exact wording is less important than the principle: we make the process visible again.
Once that visibility exists, all kinds of practical uses become possible.
Arts councils and culture ministries could build the certification into their funding and acquisition processes, using it as one factor when deciding where public money should go. Digital platforms and marketplaces – Etsy, Bandcamp, Substack, Patreon, streaming services – could offer filters and playlists that prioritise human-certified work, for those who want it. (The platforms with the least interest in offering any AI/human differentiation would, no doubt, watch the uptake of such a system closely.) Festivals and prize committees could require disclosure of generative methods as part of their submission rules. Yes, some already do. This would make their job easier.
Buyers and collectors could use the label in the same way they use “organic” or “fair trade” in food: as a quick way to find work that aligns with their values. For some it would be a nice-to-have. For others, especially institutions with a public mission, it could become a requirement.
We are already seeing early prototypes of this idea.
Actor and director Justine Bateman has launched the Credo23 film festival in Los Angeles, where filmmakers must pledge that no generative AI has been used “in any way” to qualify, and films receive a visible stamp of being entirely human-made.
On the technical and policy side, people like John C. Havens at the Institute of Electrical and Electronics Engineers (IEEE) have spent the past decade building frameworks for “ethically aligned design” in AI, turning ideas about human well-being into concrete standards. The fact that John happens to be a fine blues musician probably has something to do with that.
The point is: this is not fantasy. If we can track serial numbers for washing machines, certify chemicals as safe or unsafe, and build global systems of copyright registration and licensing, we can certainly build a registry for human-made cultural work.
Yes, those things have existed for a long while, but there was a time when they did not. The stakes here are higher, but the underlying task is similar: a design problem that sits at the intersection of law, technology and cultural policy.
Of course, there are risks. A “human-made” label could easily turn into gatekeeping if it is controlled by a narrow group of institutions. It could create a new hierarchy in which certain styles or traditions are favoured because they fit neatly into a bureaucratic form.
There is also the risk of gaming. Any system that adds financial incentive will attract people who want value without doing the work. If certification is run or captured by major platforms, it could become yet another marketing tool rather than a genuine signal.
'Pure' and 'impure' practice
And we need to be careful not to stigmatise artists who use generative tools in honest, transparent ways as part of their process. There is nothing sacred about suffering through technical tasks that a machine can do better.
The line is not between “pure” and “impure” practice. It is between work where a human bears artistic responsibility and work where no such responsibility exists.
But none of these challenges are reasons to give up. They are reasons to design the system well.
If culture workers are part of society’s symbolic infrastructure, then a human-made certification system is one way to tag that infrastructure before it is overwritten. It is a symbolic reaffirmation of human authorship in an algorithmic age, but it can also be a very practical tool.
It gives policymakers something concrete around which to build support – tax incentives, grant schemes, visibility rules, education programmes. It gives platforms and broadcasters an easy way to prioritise human work when they choose to. It gives audiences who do care a way to act on that care without needing a PhD in signal analysis.
Most of all, it reminds us that behind every certified song, book, picture or film there is a person, or a group of people, who chose to spend their limited time alive making that thing, rather than something else.
I keep coming back to a simple thought: the ability of artists to make a living is not just an internal arts problem. It is a question of civilisational self-defence. If the tens of millions of people who hold that line lose their already fragile footing, the cost will not only be measured in euros or dollars. It will be measured in how we remember, how we argue, how we imagine, as a society.
In a world of perfect fakes, imperfect human work is worth protecting
The rise of AI-generated content presents a genuine existential threat to those livelihoods. But, if we are smart, it also presents an existential opportunity: to say out loud, and design into our systems, that human creativity is not just “content.” It is labour, skill, attention and love.
A “human-made” mark on a song or a film will not solve everything. It will not magically fix streaming economics or end fraud. But it is a start. It is a way to make the invisible visible again.
The technology to do this exists. The ideas are on the table. The question now is whether we care enough, collectively, to build it – and to insist, gently but firmly, that in a world of perfect fakes, the imperfect, risky, deeply human work is still worth protecting.
Ralph Talmont
The author is a Warsaw-based author, entrepreneur, multimedia producer and communications consultant.
© Ralph Talmont 2025; thecreativefarm.substack.com