The Purple Durian


Meditative Pixels | Unraveling Screen and AI Obsession in a Meta Lyric Music Video Featuring Alan Watts, Set to Nox Vahn & Marsh's 'Come Together'


"Meditative Pixels" aims to be a thought-provoking and innovative electronic music video that critiques how we consume content on screens and our growing obsession with AI-generated content. Initially conceived as a project for a Time-Based Design class at Parsons, the video evolved from an assignment focused on kinetic text into a unique, multi-layered exploration of screen culture and AI. Featuring Nox Vahn & Marsh's track "Come Together," which masterfully remixes an excerpt from Alan Watts' insightful lecture "Getting Into the Meditative State" into an immersive electronic soundscape, the video employs AI-generated visuals to create a captivating experience.

The video takes viewers on a journey through the evolution of screen consumption, starting with karaoke machines, which popularized displaying lyrics on-screen. As the video progresses, we transition to the era of desktop computers, where lyric videos gained traction on platforms like YouTube. Finally, we arrive at the present day, where mobile screens dominate our lives and have become the primary medium for consuming content.

"Meditative Pixels" aims to challenge viewers to question their relationship with screens, AI-generated content, and how they consume media. By integrating Alan Watts' powerful insights with Nox Vahn & Marsh's captivating electronic track, AI-generated visuals, and a critique of the evolution of screens, "Meditative Pixels" aims to offer a unique and engaging perspective on the role of screens and AI in our lives and the impact they have on our understanding of music, spoken word, and ourselves. Personally, the video serves as a reminder to step back, reevaluate our screen habits and fascination with AI, and focus on the message rather than the medium.


AI-Generated Image Content:

In creating the "Meditative Pixels" music video, I took an experimental approach to generating the visuals, using the outpainting capabilities of OpenAI's generative model DALL-E 2. The process began with sourcing images from Midjourney, which provided the base content for the AI-powered extrapolation.

DALL-E 2, a powerful generative model, combines extensive training data with natural language understanding to analyze visual structures, styles, and patterns. By providing a text prompt that describes the desired output, DALL-E 2 can generate images that extend or modify the original image while maintaining visual coherence.

For instance, one of the images sourced from Midjourney depicted people singing karaoke in Japan. To extend this image using DALL-E 2's outpainting feature, the prompt "1960s Japan, vintage, women singing karaoke with a karaoke machine in the background" was provided. This prompted the AI model to generate an extrapolated image that expanded the scene while preserving the elements of the original image, adding a 1960s vintage aesthetic, and placing a karaoke machine in the background.
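For readers curious about the mechanics, here is a minimal sketch of how a comparable outpainting request could be made programmatically with the OpenAI Python SDK. The file names, canvas sizes, and parameters are illustrative assumptions, not the exact workflow used for the video.

```python
# Sketch: outpainting a Midjourney source image with DALL-E 2 via the
# OpenAI images-edit endpoint. File names and sizes are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# "karaoke_base.png" is the source image placed on a larger transparent
# canvas; "karaoke_mask.png" marks the transparent regions that DALL-E 2
# should fill in around the original scene.
response = client.images.edit(
    model="dall-e-2",
    image=open("karaoke_base.png", "rb"),
    mask=open("karaoke_mask.png", "rb"),
    prompt=(
        "1960s Japan, vintage, women singing karaoke "
        "with a karaoke machine in the background"
    ),
    n=1,
    size="1024x1024",
)

print(response.data[0].url)  # URL of the extended (outpainted) image
```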

The background images were based on an old book cover of mine, which I fed into DALL-E 2's image-to-image feature.

You can have a lot of fun with this: for example, feeding in a Where's Waldo image produces a surprisingly nice painting, which also made its way into the video.
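For completeness, here is a small sketch of an image-to-image (variation) request with the same SDK; again, the file name is a placeholder rather than the actual asset used.

```python
# Sketch: generating DALL-E 2 variations ("image-to-image") from a
# scanned book cover. The file name is a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.images.create_variation(
    image=open("old_book_cover.png", "rb"),
    n=2,                # generate a couple of candidates to pick from
    size="1024x1024",
)

for item in response.data:
    print(item.url)  # URLs of the generated variations
```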