If you’ve ever shot a video where you wanted everyone to do something at the same time, you know it’s hard to get that level of coordination. Two Minute Papers shows off an AI-based technology that can re-time moving elements in a video, complete with shadows, reflections, and deformations. Read the full paper here.
The Mars Curiosity Rover has been roaming the surface of Mars since 2012. Using AI tech, Curiosityandbeyond upscaled and colorized a series of images captured by the rover over the course of a year. We can’t wait to see what kind of images and data the Perseverance Rover sends home after it lands in early 2021.
“She knows where you got your legs.” After collecting a number of the Southern rock band’s song lyrics into a database, Funk Turkey pointed an AI bot at the file, and asked it to create words for a new ZZ Top song. The result: the ridiculous, but catchy tune “Funky with My Baby.”
If you’ve used Photoshop’s content-aware fill, you know that it’s gotten easier to remove objects from still images. Doing the same in video is much trickier, but as Two Minute Papers explains, there’s new AI-based tech that’s really good at removing objects from moving images. It can also expand content into missing areas.
Stop-motion animator LEGOEddy ran one of his 15 fps animations through a tool called DAIN, which converted his original video to a buttery-smooth 60 fps. The software not only interpolates frames but is able to properly handle depth-of-field and occlusion (objects hidden behind others). Learn more on Two Minute Papers.
Machine learning technology continues to get more and more impressive – especially when it comes to working with images. A group of researchers from China is showing off DeepFaceDrawing, an amazing piece of software which can synthesize photorealistic human faces from nothing more than a rough pencil sketch.
Only like the marshmallows from Lucky Charms? Well, you could buy a bag without the oat bits, or you could do what these guys from Google did, and build a machine that separates them for you. The Teachable Sorter can actually be used to recognize and sort other objects, and you can get the code, 3D files, and build details here.
UK tech company Sonantic has developed an AI-driven text-to-speech system that can generate digital voices with much greater expression than others. In this clip, you’ll hear “Faith,” a completely artificial voice character, act out a story with tremendous emotion. Look out voice actors, the robots are coming for you!
Flying multiple drones near each other can lead to accidental collisions. Now, engineers at Caltech have developed a data-driven method that can safely control the movement of multiple drones in crowded spaces, without pre-mapping the space or knowing what patterns the other drones will fly in.
Video technician Denis Shiryaev of Neural Love took some early 20th century film footage from Tokyo, Japan, and processed it to increase its resolution and frame rate, repair damage, and add colorization. The result is sort of a living postcard of the time and place. The ambient sounds were previously added by Guy Jones.
Disproportionately big eyes, a sailor’s outfit, brightly-colored hair… these are but a few of the trademark characteristics of girls in anime. Over at MakeGirlsMoe, machine learning algorithms can generate a seemingly endless set of anime characters, as seen in the video here.
Ever wonder what a Rick Astley song written by an AI might sound like? YouTuber Lil’Alien demonstrates what a neural network called Jukebox came up with when asked to create more of Never Gonna Give You Up based on the data fed into its electronic brain. Play with more weird AI music on the Jukebox Sample Explorer.
Meme and chart generator site imgflip has built a tool that uses artificial intelligence technology to automatically generate text for their most popular meme templates. Most of the things its computer brain comes up with are absurd, but every once in a while, it strikes comedy gold.
We’ve all been on conference calls where you wondered if your presence was really necessary. Our preferred approach is to tune out until we hear our names mentioned, but Matt Reed’s idea is even better. He created an AI-based clone to stand in for him. It’s more than a little rough around the edges though.
The Art Assignment argues that whether it be something as primitive as bones or as advanced as a neural network, there’s always a human touch at the root of all machines used to make art. We like to think of it from the other end: art is unfinished until a human mind ponders it.
What do you get when you take a bunch of old Garfield comic strips and feed them into a machine learning algorithm designed to morph them into a cohesive animation sequence? Nightmares, that’s what. Go figure it was the guy behind Garfield Gameboy’d that tipped us off to Daniel Hanley’s bizarre AI experiment.
Comedian Keaton Patti claims to have fed an AI system 1000 hours of footage from Batman movies (we didn’t know there were that many), and then let its tech produce a new script based on what it learned. Nerd Odyssey posted the very silly result of his efforts, with animation by C4DNerd.
What happens when you load up an artificial intelligence neural network with lyrics to country music’s greatest hits, and then ask it to write its own tune? You end up with this gem, courtesy of Botnik Studios. We have to wonder if this is how Bad Lip Reading creates its song lyrics.
Machine learning tech keeps getting more impressive. OpenAI created two teams of competing agents – one that seeks, and the other that hides. What’s truly amazing is how the self-supervised agents started creating obstacles to improve their chances, despite being given no explicit incentive to use those objects.
Károly of Two Minute Papers explains how researchers at Google have devised an AI-backed translation tech which maps speech to speech without an intermediate translation to text. What’s really amazing is that it can synthesize the voice of the original speaker in the other language. Listen to some of its voice samples here.
Experimental band Hardcore Anal Hydrogen created a trippy and vibrant music video for their thrash metal track Jean-Pierre, made with the help of artificial intelligence tools like Deep Dream, Neural Style Transfer, and DeepFlow. Read more about the project here.
Researchers from NVIDIA demonstrate “A Style-Based Generator Architecture for Generative Adversarial Networks,” which is a fancy way of describing artificial intelligence that’s capable of creating human face variants and other objects that never actually existed in real life.