Two Minute Papers shows off a fascinating technology that can identify patterns in images and then animate them in the direction it thinks is most appropriate for their shape. The research paper is available here, and you can play with a version of the tech in the cinemagraph app Motionleap.
Based on NVIDIA’s GauGAN research project, Canvas is an AI-powered computer graphics app that can produce realistic landscapes from doodles. It can even export parts of the drawing as Photoshop layers. The Beta version of the app is available for Windows 10 users with any RTX graphics card.
The 1963 classic Jason and the Argonauts is known for its masterful use of stop-motion by the great Ray Harryhausen. CaptRobau was curious to see what the animation would look like at a higher frame rate, so he used motion interpolation software to smooth out the action. He did something similar with the original King Kong.
If you’ve ever seen the music video for the Gorillaz track Clint Eastwood, you might have noticed that the animated band’s movements don’t sync up with every note of the music. Chickenscopes decided to create an updated version of the opening scene where the music really matches the movements. Original video here.
Often copied, never duplicated, The Beatles were arguably the most influential pop band of the 20th century. With a little help from OpenAI’s Jukebox, Broccaloo conducted an experiment to see what kind of music the neural network would come up with when fed a bunch of the Fab Four’s classic tunes. The results are downright weird.
If you’ve ever shot a video where you wanted everyone to do something at the same time, you know it’s hard to get that level of coordination. Two Minute Papers shows off an AI-based technology that can re-time moving elements in a video, complete with shadows, reflections, and deformations. Read the full paper here.
The Curiosity Rover has been roaming the surface of Mars since 2012. Using AI tech, Curiosityandbeyond upscaled and colorized a series of images captured by the rover over the course of a year. We can’t wait to see what kind of images and data the Perseverance Rover sends home after it lands in early 2021.
“She knows where you got your legs.” After collecting a number of the Southern rock band’s song lyrics into a database, Funk Turkey pointed an AI bot at the file, and asked it to create words for a new ZZ Top song. The result: the ridiculous, but catchy tune “Funky with My Baby.”
If you’ve used Photoshop’s content-aware fill, you know that it’s gotten easier to remove objects from still images. Doing the same in video is much trickier, but as Two Minute Papers explains, there’s new AI-based tech that’s really good at removing objects from moving images. It can also expand content into missing areas.
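As a point of reference for how the still-image version of this works, here’s a minimal sketch using OpenCV’s classical inpaint call; the filenames are placeholders, and the AI-based video technique covered in the episode is a learned model, not this function.

```python
import cv2

# Classical single-image inpainting: pixels marked white in the mask are
# filled in from the surrounding content. Filenames below are placeholders.
frame = cv2.imread("frame.png")
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)  # 255 where the object was
result = cv2.inpaint(frame, mask, 3, cv2.INPAINT_TELEA)  # radius=3, Telea method
cv2.imwrite("frame_filled.png", result)
```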
Stop-motion animator LEGOEddy ran one of his 15 fps animations through a tool called DAIN, which converted his original video to a buttery-smooth 60 fps. The software not only interpolates frames but also properly handles depth and occlusion (objects hidden behind others). Learn more on Two Minute Papers.
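For a sense of what frame interpolation involves, here is a naive optical-flow sketch that synthesizes a midpoint frame between two inputs. It’s an illustration only, not DAIN, which uses a trained depth-aware network to handle occlusion correctly.

```python
import cv2
import numpy as np

def midpoint_frame(frame_a, frame_b):
    """Synthesize a rough in-between frame from dense optical flow.
    Illustration only; DAIN uses a learned depth-aware network instead."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    # Per-pixel motion vectors from frame A to frame B
    flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = gray_a.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    grid = np.dstack([xs, ys]).astype(np.float32)
    # Backward-warp: sample frame A half a step back along the flow
    back_map = (grid - 0.5 * flow).astype(np.float32)
    return cv2.remap(frame_a, back_map, None, cv2.INTER_LINEAR)
```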
Machine learning technology continues to get more and more impressive – especially when it comes to working with images. A group of researchers in China is showing off DeepFaceDrawing, an amazing piece of software that can synthesize photorealistic human faces from nothing more than a rough pencil sketch.
Only like the marshmallows from Lucky Charms? Well, you could buy a bag without the oat bits, or you could do what these guys from Google did and build a machine that separates them for you. The Teachable Sorter can actually be used to recognize and sort other objects, and you can get the code, 3D files, and build details here.
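At its core, a build like this is a camera feed plus an image classifier. Here’s a minimal sketch of the classification step, assuming a Teachable Machine-style Keras model; the filename, 224x224 input size, and label order are hypothetical stand-ins, not the project’s actual code (which is linked above).

```python
import cv2
import numpy as np
from tensorflow.keras.models import load_model

# Hypothetical setup: a Teachable Machine-style Keras export with two classes.
# The filename, input size, and label order are illustrative stand-ins.
LABELS = ["marshmallow", "oat"]
model = load_model("sorter_model.h5")

def classify_piece(frame):
    """Classify one camera frame so the sorter knows which chute to trigger."""
    img = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0
    probs = model.predict(img[np.newaxis, ...], verbose=0)[0]
    return LABELS[int(np.argmax(probs))]
```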
UK tech company Sonantic has developed an AI-driven text-to-speech system that can generate digital voices with much greater expression than others. In this clip, you’ll hear “Faith,” a completely artificial voice character, act out a story with tremendous emotion. Look out, voice actors: the robots are coming for you!
Flying multiple drones near each other can lead to accidental collisions. Now, engineers at Caltech have developed a data-driven method that can safely control the movement of multiple drones in crowded spaces, without pre-mapping the space or knowing what patterns the other drones will fly in.
Video technician Denis Shiryaev of Neural Love took some early 20th century film footage from Tokyo, Japan, and processed it to increase its resolution and frame rate, repair damage, and add colorization. The result is sort of a living postcard of the time and place. The ambient sounds were previously added by Guy Jones.
Disproportionately big eyes, a sailor’s outfit, brightly-colored hair… these are but a few of the trademark characteristics of girls in anime. Over at MakeGirlsMoe, machine learning algorithms can generate a seemingly endless set of anime characters, as seen in the video here.
Ever wonder what a Rick Astley song written by an AI might sound like? YouTuber Lil’Alien demonstrates what a neural network called Jukebox came up with when asked to create more of Never Gonna Give You Up based on the data fed into its electronic brain. Play with more weird AI music on the Jukebox Sample Explorer.
Meme and chart generator site imgflip has built a tool that uses artificial intelligence technology to automatically generate text for their most popular meme templates. Most of the things its computer brain comes up with are absurd, but every once in a while, it strikes comedy gold.
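imgflip’s actual generator is a neural network trained on existing user captions; as a toy illustration of the general idea of learning caption statistics from examples, here’s a tiny character-level Markov sketch (not the site’s real model):

```python
import random
from collections import defaultdict

# Toy character-level Markov caption generator, purely illustrative;
# imgflip's real system is a neural network trained on user captions.
def build_model(captions, order=3):
    model = defaultdict(list)
    for text in captions:
        padded = "\0" * order + text + "\0"   # "\0" marks start/end
        for i in range(len(padded) - order):
            model[padded[i:i + order]].append(padded[i + order])
    return model

def generate(model, order=3, max_len=80):
    context, out = "\0" * order, []
    while len(out) < max_len:
        nxt = random.choice(model[context])
        if nxt == "\0":                        # hit an end-of-caption marker
            break
        out.append(nxt)
        context = context[1:] + nxt
    return "".join(out)

captions = ["one does not simply generate memes", "shut up and take my money"]
print(generate(build_model(captions)))
```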
We’ve all been on conference calls where we wondered if our presence was really necessary. Our preferred approach is to tune out until we hear our names mentioned, but Matt Reed’s idea is even better. He created an AI-based clone to stand in for him. It’s more than a little rough around the edges, though.
The Art Assignment argues that whether it be something as primitive as bones or as advanced as a neural network, there’s always a human touch at the root of all machines used to make art. We like to think of it from the other end: art is unfinished until a human mind ponders it.
What do you get when you take a bunch of old Garfield comic strips and feed them into a machine learning algorithm designed to morph them into a cohesive animation sequence? Nightmares, that’s what. Go figure it was the guy behind Garfield Gameboy’d that tipped us off to Daniel Hanley’s bizarre AI experiment.
Comedian Keaton Patti claims to have fed an AI system 1000 hours of footage from Batman movies (we didn’t know there were that many), and then let its tech produce a new script based on what it learned. Nerd Odyssey posted the very silly result of his efforts, with animation by C4DNerd.
What happens when you load up an artificial intelligence neural network with lyrics to country music’s greatest hits, and then ask it to write its own tune? You end up with this gem, courtesy of Botnik Studios. We have to wonder if this is how Bad Lip Reading creates its song lyrics.
Machine learning tech keeps getting more impressive. OpenAI created two teams of competing agents – one that seeks, and the other that hides. What’s truly amazing is how the agents, trained through self-play, learned to use objects in their environment as obstacles to improve their chances, despite being given no explicit incentive to use those objects.