by Mohsina Alam in Culture & Lifestyle on 2nd April, 2026

Most people, whether consciously or unconsciously, engage with AI every day. Virtual assistants like Siri and Alexa have transformed and simplified household chores. Google’s AI Overviews will neatly summarise an answer to your questions, so you don’t even have to read the search results. Spotify DJ and Netflix recommendations, powered by AI, know what you want to consume before you even know yourself. Some people now have close relationships with ChatGPT, treating the programme as a personal assistant, a close confidante or even a significant other.
Whether or not you’re on board with the anthropomorphisation of algorithms like ChatGPT, it’s undeniable that AI has profoundly changed our day-to-day lives. As the influence of artificial intelligence increases, we are seeing it being adopted into the creative industries, creating complex problems for artists and creatives.
In recent years, the creative industries have demonstrated a growing curiosity towards AI. Paul McCartney’s use of AI in 2023 allowed him to create what he called ‘the final Beatles record.’ The musician, alongside sound mixers, used AI tools to ‘purify’ a decades-old recording of John Lennon’s vocals to create a new song. McCartney commented that AI is “kind of scary, but exciting, because it’s the future.”
Not everybody shares McCartney’s excitement about the potential of AI. Clouds of controversy hang heavy over the usage of AI in the creative industries. You might remember the intense discourse last year surrounding the release of the film, The Brutalist, after editor David Jancso revealed that he used AI tools to alter Adrien Brody and Felicity Jones’ Hungarian accents and to generate buildings shown in the film.
Jancso reasoned that the use of AI was to speed up post-production and reduce costs. He went on to say that the edits were minor and did not change the essence of the actors’ performance, stating that the actors did a ‘fantastic job,’ but he ‘wanted to perfect [their voices] so that not even locals will spot any difference.’
Yet, many critics and fans alike expressed their discomfort with the decision to use AI for editing and what this means for the future of film.
One user on X wrote, “The Brutalist AI s*** makes me so sad because how many times will this happen in the future where I see some beautifully crafted movie and find out it hid AI in parts just to cheap out without soul.”
“Some of the greatest stories of behind-the-scenes filmmaking happened because they had to get creative without the budget or time… [using] AI to fill those gaps [is] heartless.”

TV producer Yasmeen Noor* shared a similar worry about the trajectory of AI within the TV and film industries. She states, “I would rather see the creative industry completely reject AI because it’s impossible to control it. People won’t use it for 20% of its capacity, they’ll go the whole way,” perhaps eventually reaching a point where actors become unnecessary, if deepfakes can be created of their voices and likenesses.
Since my chat with Yasmeen, we have seen this shift begin to happen.
In the space of a year, AI has gone from being a tool used to create ‘minor’ edits in films like The Brutalist to now being relied upon almost entirely to generate the basis of a series, such as ‘On This Day…1776’ on the TIME YouTube channel, a new AI-generated series from Darren Aronofsky’s studio Primordial Soup.
Yasmeen has witnessed how AI has been embraced by senior leaders within her industry and how it has impacted her colleagues. The production company she works for has begun using AI to generate voiceovers that are ‘cheaper’ and ‘quicker’ to create compared to hiring voice actors. Further, storyboard artists are disappearing in favour of using AI generated storyboards and designs.
Wajeeha J., a freelance editor and producer, has noticed a similar shift in her work. She has observed AI mostly impacting post-production teams.
The normalisation of AI has led to the loss of entry-level positions, as junior editors are being replaced with AI tools.
“But AI isn’t reliable enough to do the job,” she caveats. “It still makes mistakes which have to be checked by people. Senior-level editors are taking on low-level production work because companies aren’t investing in teams. It makes me feel worried for the next generation of creatives who won’t have the opportunity to learn and progress in the industry.”
I share Wajeeha’s concerns for the next generation of creatives. Entry-level jobs provide a way for young, inexperienced candidates to learn and develop their skills and understand the different aspects of the filmmaking process. To scrap these roles will make an already inaccessible industry even harder to break into for those without eons of experience or existing connections.
Although the algorithms aren’t yet perfect, AI is advancing rapidly, making it more difficult to spot. A year ago, we might have laughed at our parents for not being able to immediately spot an AI-generated image, but with image-generation programmes like Nano Banana Pro, developed by Google DeepMind, creating scarily realistic images from user prompts, it’s become all too easy to mistake an artificial image for something real. The same is true in the music industry.
The Velvet Sundown are a band boasting millions of streams on Spotify. After the release of two albums in June 2025, fans grew suspicious: the band had a scarce online presence, and a report suggested their music had been produced using the AI platform Suno.
On July 5th 2025, the ‘band’ posted a statement to their official social media describing themselves as ‘Not quite human. Not quite machine.’ The statement confirms that all aspects of the band – the members, stories, music, voices, and lyrics – are ‘original creations generated with the assistance of artificial intelligence tools employed as creative instruments.’ Long story short, the band members do not exist and their music is generated artificially.
The Velvet Sundown experiment has provoked conversations about the ethics of AI in the music industry. The tech companies that create generative AI programmes ‘train’ the programmes by inputting existing artists’ work – often without compensation to the artists and without the artists’ permission. Listeners of The Velvet Sundown commented that there was an eerie echo of musicians like Father John Misty and Of Monsters and Men within the band’s music.
Many artists have been impacted by the proliferation of AI in the music industry. In 2023, a song called Heart On My Sleeve, which used AI to clone the voices of Drake and The Weeknd, went viral on social media. The creator of the song stated that he used software that was trained on the musicians’ voices – which Drake later revealed he was unaware of. Drake’s representative, Universal Music Group, wrote to streaming services including Spotify and Apple Music, asking them to prohibit artificial intelligence companies from accessing their artists’ catalogues.
Aside from its financial and time-saving benefits, some artists have defended the use of AI in the creative industries as an embrace of technological evolution that can help them push new boundaries in their work.
Personally, I’m not convinced. As we have already seen, AI is trained on hundreds of thousands of artworks that have come before it, and is inclined to create an end product that is a mixture of these works. AI embracers will respond to this by arguing that all artwork is inspired by other art and that no piece of art can be called truly original, so what’s the issue with AI doing the same thing?
To that I would say, when people are inspired by art, they form a connection with it, or it sparks some kind of reaction within them. The artwork they subsequently create is an emotional response to their inspiration, demonstrating active engagement with the works preceding them. AI cannot do this because it doesn’t have the capacity to emotionally comprehend and respond to art. The products it creates end up being cheap copies and the user behind the prompt doesn’t have to feel bad, because it’s not clear even to them who exactly they are plagiarising. How bleak.
As we have seen, AI presents several problems for those in the creative industries. It is increasingly replacing artists working in media production and has exposed a plethora of ethical issues surrounding artist ownership, rights, and authenticity.
But most depressingly, the normalisation of AI in the creation of art attempts to destroy the creative process itself. For The New Yorker, Ted Chiang wrote that ‘art is something that results from making a lot of choices.’ Writers agonising over whether or not to include a specific adverb in their dialogue, actors deliberating over their stance in a scene, painters mixing colours to convey the exact light hitting their muse’s hair. These are all choices that artists make continually in order to convey their ideas meaningfully and effectively.
To paraphrase Chiang, when you give an AI programme like ChatGPT or Grok a prompt and ask it to create a song, or an image, or a story, you are no longer making the choices that an artist has to make. AI has to fill in all the gaps, which it will do by either creating the average of all the previous work it has been trained on or copying the style of a certain artist. Whichever the case, the result is unoriginal, uninteresting and, by nature, no better than average.
This is why the suggestion that AI will be to the creative industries what photography and CGI were doesn’t persuade me. Digital art forms like graphic design, photography, animation and CGI require artistic vision, input, design, and perspective. They are crafts that use digital tools to create new art, and crucially, are commanded at every step by skilled artists continually making choices. Users of AI algorithms lack this sense of control when creating art, leading to unreliable results. If the final product is a surprise to the artist, can they actually call themselves the artist?
TV producer Yasmeen takes issue with people using AI to create art, calling it ‘lazy’ and ‘inauthentic.’
“What are they learning from using it?” she contemplates, “How are they supposed to improve?”
Generative AI takes away the creative process from artists by supplying them with the answers, inhibiting growth and development through learning. Utilising AI to make these decisions completely negates the journey of discovering what it is you want to share and how you want to share it.
The companies that promote generative AI claim that it is a way to democratise art – it makes the creative process easy and accessible. Anyone can compose a song, publish a novel, or produce a piece of art without actually having to do the work of creation. This prioritises inspiration and ideas over execution, when in reality, the two need to work in tandem to create truly meaningful, innovative art.
Art is a reminder to all of us that we are not alone in our experiences of the world. It is a form of connection for so many people. To quote James Baldwin,
“You read something which you thought only happened to you, and you discover that it happened 100 years ago to Dostoyevsky. This is a great liberation for the suffering, struggling person, who always thinks that he is alone. This is why art is important. Art would not be important if life were not important, and life is important.”
Art has always been a means of communication between the artist and the audience. The beauty of art, to me, is that it attempts to portray subjectivity, shaped by our beliefs and our experiences of the world. That experience is the product of a life lived – relationships, connections with society, faith, nature. AI programmes have never felt the sun on their non-existent faces. They’ve never fallen in love, had their imaginary heart broken, felt true rage or fear or excitement, and they never will.
These things are necessary to create art that means something. Humans draw on experiences, and AI pulls together averages and statistics. When has the latter ever been conducive to the creation of good art?
It’s not all doom and gloom, though. Artists and creatives have been loudly resisting the permeation of AI in the industry and championing traditional forms of creativity again. The Spanish singer-songwriter Rosalía was hailed by listeners and critics alike for her 2025 album LUX. In particular, her collaboration with the London Symphony Orchestra on the song Berghain, which sets a haunting orchestral arrangement against German operatic vocals, drew widespread praise. In reaction videos on social media, fans shared that they found the song exciting, boundary-pushing, and unlike anything else being released at the time.
Linton Stephens, presenter of Radio 3’s Classical Fix show, commented that the song was experimental and innovative, stating,
“What’s unique [about the song] is how Rosalía then brings in her own style and influence. She drops down the octave, and the genres begin to morph from traditional to modern. That’s what innovation is all about…Also, I think with the introduction of AI, instrumental collaboration from the orchestral world reminds us that it’s authentically human-made.”
Leading creatives in the film industry have not shied away from speaking out against the adoption of AI within filmmaking. Frankenstein director Guillermo del Toro told NPR that he is “not interested [in AI], nor will [he] ever be interested.” To Reuters, Steven Spielberg declared, “I do not want AI to make any creative decisions that I can’t make myself.”
Wajeeha J. summarises it frankly, “No one in the industry likes AI because it defeats our role in telling genuine stories. It’s being pushed on us by executives who want to save time and money.”
But she’s noticed the rise of a ‘counter-culture’ in response to the sinister prospect of AI becoming totally normalised in the industry.
“There’s definitely a pushback against artificial content. I’ve noticed people feeling inspired to shoot on film again, using Super 8 or Super 16 cameras to create videos. People are tired of ‘Netflix’ light and things looking super glossy. I think people want to see authentic art again that reminds them of the past.”
Audience pushback against the use of AI in film also makes it clear that the majority of people don’t want to watch AI-generated content either. @TheCinegogue summed it up perfectly on X,
“The only upside to stuff like this [Aronofsky’s series] is seeing how strong the negative backlash is, which will hopefully stop filmmakers from making more of it.”

It’s clear that the topic of AI is contentious within the creative industries. Some artists are curious about its possibilities, while others are entirely against it being embraced in the creative process.
As AI-generated art continues to pervade the creative industries, I think the best way to resist it is to actively incorporate human stories into our lives as much as possible. This could look like going to a local community theatre performance, visiting a small exhibition by an artist you don’t know, or maybe even going to a poetry reading. By showing up for the artists who are telling their real, authentic stories, we encourage others, and ourselves, to do the same. When we remember that our art is important because our lives are important, anything AI creates pales in comparison.
*Name changed for privacy
Mohsina Alam is a freelance writer based in Brighton, England. Her work has covered varied topics, including art, literature, fashion, politics, and lifestyle. She is the creator of the love club, a Substack newsletter all about writing, personal thoughts and observations, and, of course, love.