Psychedelic visual interpretations of famous poems
Poetry has long fascinated AI researchers. Over the years, tech giants such as Google, Microsoft, and OpenAI have experimented with models that can compose, correct, or even convert images into poetry. A new video by computer artist and animator Glenn Marshall takes a different approach: using AI, Marshall has created visual interpretations of the poem ‘In the Bleak Midwinter’ by the 19th-century English poet Christina Rossetti.
- Visualization Technique: The AI creates visualizations that are both striking and slightly unsettling. These aren’t high-definition animations but have the feel of low-resolution GIFs. The visual style can be compared to the video of the song ‘Drag Ropes’ by the band Storm Corrosion.
- Story2Hallucination Library: Marshall used the Story2Hallucination library to turn the poem’s words into video, which leads some passages to interpret the text quite literally. The phrase “water like a stone”, for example, is visualized as a stone emerging from water.
- Narration Tool: The narration is powered by vo.codes, a text-to-speech tool that here uses a synthesized voice of the actor Christopher Lee to bring the poem to life.
- Growing Trend: The use of AI to create visualizations from text is on the rise. OpenAI recently introduced a model named DALL-E that can convert words and phrases into images. This trend points toward a future where poets and writers have access to a range of AI tools for turning their writing into visual work.
- Integration with Other AI Models: The potential to integrate this visualization technique with other AI models, like OpenAI’s DALL-E, could lead to even more sophisticated and diverse visual interpretations of text in the future.
- Customizable Visual Styles: In the future, there might be options for users to choose and customize the visual style in which the poem or any text is interpreted, allowing for a more personalized experience.
- Interactive Visual Narrations: Future iterations might let users interact with the visual narrations, exploring different interpretations or even modifying the visuals in real time as they engage with the content.
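The phrase-by-phrase approach described above can be sketched in miniature. This is not the actual Story2Hallucination code; it is a toy illustration, under the assumption that such systems walk through a text one phrase at a time, nudging a latent image representation toward each phrase’s embedding and emitting a frame per optimization step, which is why the visuals morph as the words change. A hash-based pseudo-embedding stands in for a real text encoder such as CLIP.

```python
import hashlib
import numpy as np

DIM = 64  # toy latent dimensionality

def embed(phrase: str) -> np.ndarray:
    """Toy stand-in for a text encoder: a stable pseudo-embedding per phrase."""
    seed = int.from_bytes(hashlib.sha256(phrase.encode()).digest()[:4], "little")
    return np.random.default_rng(seed).standard_normal(DIM)

def hallucinate(phrases, steps_per_phrase=10, lr=0.2):
    """Yield one 'frame' (latent vector) per optimization step.

    The latent is carried over between phrases, which is what makes the
    resulting video morph smoothly from one image to the next rather
    than cutting.
    """
    latent = np.random.default_rng(0).standard_normal(DIM)
    frames = []
    for phrase in phrases:
        target = embed(phrase)
        for _ in range(steps_per_phrase):
            # Move the latent a step toward the current phrase's embedding.
            latent = latent + lr * (target - latent)
            frames.append(latent.copy())
    return frames

poem = [
    "In the bleak midwinter",
    "frosty wind made moan",
    "earth stood hard as iron",
    "water like a stone",
]
frames = hallucinate(poem)
print(len(frames))  # 4 phrases x 10 steps = 40 frames
```

In a real system each "frame" would be decoded into an image by a generative model; the carried-over latent is the design choice that gives these videos their characteristic dreamlike drift between literal interpretations.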