Hollywood Agency Signs First AI Actor
A Hollywood talent agency is reportedly preparing to sign its first nonhuman actor, an AI entity named Tilly Norwood. The character was created by Eline van der Velden, a human actress and comedian, who seems pleased to act as a bridge between cost-conscious film executives and the engineers offering digital stand-ins for on-screen talent. For years she has aimed to expose the film industry's technical shortcomings, and it now appears she has succeeded.
The music industry, which has weathered its own technological upheavals since the Napster era, is responding to AI in a similar way. Spotify is adjusting its policies to accommodate curated AI music applications while emphasizing a proactive stance against AI impersonation, and it has begun work toward an industry-wide standard for disclosing AI-generated content.
Van der Velden's role, creating nonhuman avatars to portray what once required human emotion, might be termed "nonhuman avatar technician."
This move toward greater protection and scrutiny of AI-generated music is real progress. Perhaps there's hope after all. Somewhere in the corporate sphere, there are still cries for authenticity from humans longing for real connection. Spotify's approach to managing the relationship between digital entities and people could become a model for other companies, and as the integration continues, it's crucial that genuine human feedback be incorporated.
But before anyone congratulates themselves, an important question lingers: why are leading AI companies choosing to roll out creative applications first? Wouldn't we rather they reinvent roles like HR or paralegal work? It's a bit conspiratorial, but worth pondering. If we can sidestep AI chaos in the arts, can we, as humans, stave off a broader encroachment of AI into business, law, and religion? We should aim higher.
Conspiracy theorists have long pointed to historical forces, and perhaps hidden agendas, of division and control, and it's hard not to notice the split between left and right over art and technology. Spotify recently removed 75 million dubious songs, which isn't insignificant. The action shows the company's effort to uphold the sanctity of art, yet its history and its constant push to "drive numbers" remind audiences and critics to stay vigilant.
Neko Case, the Pacific Northwest singer-songwriter known for her candid political views, has expressed despair over AI's impact on music. Of her latest album, she reflects: "I had the privilege of creating this record in the studio, ensuring that every sound was made by real people."
It's easy to envision how these discussions will evolve. Posing as mere puppeteers catering to public preference isn't a sustainable model; these "technical creators of nonhuman avatars" are working against a tide of false accolades.
Reflecting on the current state of AI and the $500 billion in infrastructure investment announced with the US government's blessing, a textbook case of fear of missing out (FOMO), it feels like we might be heading toward another bubble. Leaders in Hollywood and the music industry appear to be reeling, unsure of their next steps, compelled to latch onto any "creative" talent AI might produce. But is AI genuine creative talent? I think not.
Hollywood must find ways to reinvigorate itself, and artists like Case are right to seek deeper avenues, aiming for authenticity amid the prevailing ideological divides. For now, audiences still prefer real artistry and human perspectives in storytelling, but keeping them may require exploring new options sooner rather than later.





