FRVR’s AI game development platform will let users create their own video games using text prompts. The demo shows it taking a description of a top-down shooter and generating the code for a playable game. As the user refines their prompts, the game gets better. Users will be able to publish and monetize the games they create too. Full walkthrough video here.
A typical flipbook animation displays its images at roughly 12 frames per second. Andymation wanted to know if he could smooth out the steps in his flipbooks by generating extra in-between frames with AI interpolation software. Will the additional frames improve the look of his animations, or just make them look incredibly weird?
We’ve seen what artificial intelligence can do when asked to make a pizza commercial; now AI has been turned loose on a beer spot. Private Island used Stable Diffusion, Runway, and Modelscope to generate their version of those summertime commercials with bros ogling women while they sip their bland light beer.
French filmmaker Remi Molettee is known for their wildly inventive generative digital art. In the AI-enabled short film Symbiosis, Remi transformed dancers the Ebinum Brothers into living, moving tree roots. As their bodies move and intertwine, they appear as wooden surrogates for veins and blood vessels.
Machine learning tech has enabled some truly imaginative imagery. Artist Vadim Epstein helped create a set of ML text-to-video tools that generated this landscape of pseudo-realistic organic forms incorporating veins, bones, and plants. Best enjoyed in full screen with headphones on. Soundtrack by Dvar.
AI image generation technology continues to improve – though it’s still not great at dealing with things like text and logos. While the results of ThomasDotCodes’ breakfast cereal experiment are questionable, we enjoyed watching the weird stuff StyleGAN2 came up with when trained on 700 images of cereal boxes.
This experimental animated film takes us deep into a web of humans and technology. The surreal effect was created by artist ThomasDotCodes with the help of VQGAN-CLIP – a natural language image generation toolkit. Watch in 1080p or higher in full-screen mode for maximum impact. Then, go deeper into the Matrix.
Tools like DALL·E 2 have proven it’s possible for AI tech to create art based on text. Neural Synesthesia fed text descriptions of the history of the Earth and the evolution of its species into Stable Diffusion, using the output as a guide to create the video Voyage Through Time. The music is Order from Chaos by Max Cooper.
Machine learning technology can produce some fascinating results when asked to create art. TikTok user Recursive Identity used AI tech to create this trippy-as-hell generative art using the work of artist Edward Hopper as its data source. As we fly deeper and deeper into the painting, the images seem to turn in on themselves.
Motion designer Eddy Koek likes to create cool experimental visuals, using computer code as his medium. Here’s a compilation of some of his hypnotic moving images that he typically shares on his Instagram page. He also licenses his patterns for use by VJs as background art.
Parisian filmmaker Benjamin Bardou’s experimental short is just one in a series of dreamlike visuals that explore life in a fictitious city known as Megalopolis. In this episode, an unknown intelligence examines the passengers aboard a subway car. You can view more from the series on the artist’s website or Vimeo page.
A trio of classical musicians teamed up with interactive artists Ouchhh on this innovative performance artwork for Ars Electronica, using sensors to measure the cellist’s Delta, Theta, Alpha, Beta, and Gamma brainwave activity and generate real-time visuals influenced by emotion, focus, auditory processing, and other neural responses.