Posts

Draw what you imagine!

What's New: A picture worth a thousand words now takes just three or four words to create, thanks to GauGAN2, the latest version of NVIDIA Research's wildly popular AI painting demo.

Key Insight: The deep learning model behind GauGAN2 allows anyone to channel their imagination into photorealistic masterpieces, and it's easier than ever. Simply type a phrase like "sunset at a beach" and the AI generates the scene in real time. Add an adjective, as in "sunset at a rocky beach," or swap "sunset" for "afternoon" or "rainy day," and the model, based on generative adversarial networks, instantly modifies the picture. With the press of a button, users can generate a segmentation map, a high-level outline that shows the location of objects in the scene. From there, they can switch to drawing, tweaking the scene with rough sketches using labels like sky, tree, rock, and river, allowing the smart paintbrush to incorporate these doodles into stunning images.
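The original GauGAN conditions its generator on the segmentation map through spatially-adaptive normalization (SPADE). Below is a minimal sketch of one SPADE-style block in PyTorch; the layer widths and the toy usage at the bottom are illustrative assumptions, not the production model.

```python
# Minimal sketch of a SPADE-style block (spatially adaptive normalization),
# the mechanism GauGAN uses for segmentation-map conditioning. Sizes are
# illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPADE(nn.Module):
    """Normalizes features, then re-scales them with parameters predicted
    per-pixel from the segmentation map, so labels steer the image."""
    def __init__(self, feat_channels, num_labels, hidden=128):
        super().__init__()
        self.norm = nn.BatchNorm2d(feat_channels, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(num_labels, hidden, 3, padding=1), nn.ReLU())
        self.gamma = nn.Conv2d(hidden, feat_channels, 3, padding=1)
        self.beta = nn.Conv2d(hidden, feat_channels, 3, padding=1)

    def forward(self, x, segmap):
        # Resize the label map to the feature resolution before predicting
        # per-pixel scale (gamma) and shift (beta).
        segmap = F.interpolate(segmap, size=x.shape[2:], mode="nearest")
        h = self.shared(segmap)
        return self.norm(x) * (1 + self.gamma(h)) + self.beta(h)

# Toy usage: 4 feature maps at 64x64 modulated by a 10-class label map.
feats = torch.randn(1, 4, 64, 64)
labels = torch.randn(1, 10, 64, 64)   # would be one-hot in practice
out = SPADE(4, 10)(feats, labels)
```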

Cartoon your image

What's New: Cartoon image generation using AI, namely Convolutional Neural Networks and Generative Adversarial Networks.

Key Insight: Cartoons are an artistic form widely used in our daily life. In addition to artistic interest, their applications range from publication in printed media to storytelling for children's education. Like other forms of artwork, many famous cartoon images were created based on real-world scenes; a real location may have a corresponding cartoon version that appeared in an animated film. However, manually recreating real-world scenes in cartoon styles is very laborious and involves substantial artistic skill: to obtain high-quality cartoons, artists have to draw every single line and shade each color region of the target scene. Such tools also provide a useful addition to photo-editing software such as Instagram and Photoshop.

How it works: Convolutional Neural Networks (CNNs) have received considerable attention for solving many computer vision problems.
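A minimal sketch of the adversarial recipe such cartoonizers use: a generator CNN restyles photos while a discriminator judges whether an image looks like a real cartoon. The tiny networks and the crude content term below are stand-ins; published systems such as CartoonGAN add a VGG-based content loss and edge-aware training tricks.

```python
# Hedged sketch of photo-to-cartoon adversarial training. The networks are
# toy stand-ins for deep CNNs; batches are random placeholders.
import torch
import torch.nn as nn

G = nn.Sequential(  # photo -> cartoon-styled image
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())
D = nn.Sequential(  # image -> per-patch "real cartoon" logit
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 3, padding=1))

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

photos = torch.randn(8, 3, 64, 64)
cartoons = torch.randn(8, 3, 64, 64)

# Discriminator step: real cartoons -> 1, generated cartoons -> 0.
real_logits = D(cartoons)
fake_logits = D(G(photos).detach())
d_loss = bce(real_logits, torch.ones_like(real_logits)) + \
         bce(fake_logits, torch.zeros_like(fake_logits))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator while staying near the photo.
fake = G(photos)
fool_logits = D(fake)
g_loss = bce(fool_logits, torch.ones_like(fool_logits)) + \
         0.1 * (fake - photos).abs().mean()   # crude content term
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```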

Stock-Trading Test Bed

If you buy or sell stocks, it's handy to test your strategy before you put real money at risk. Researchers devised a fresh approach to simulating market behavior.

What's new: Andrea Coletta and colleagues at Sapienza University of Rome used a Conditional Generative Adversarial Network (cGAN) to model a market's responses to an automated trader's actions.

Key insight: Previous approaches tested a simulated trader in a virtual market populated by other simulated traders. However, real-world markets tend to be too complex to be modeled by interactions among individual agents. Instead of simulating market participants, a cGAN can model aggregated sales and purchases in each slice of time.

Conditional GAN basics: Given a random input, a typical GAN learns to produce realistic output through competition between a discriminator that judges whether output is synthetic or real and a generator that aims to fool the discriminator. A cGAN works the same way but adds an input that conditions the generated output.
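To make the cGAN idea concrete, here is a minimal PyTorch sketch under assumed dimensions: the generator receives recent market history plus noise and emits the next slice of aggregate order flow, while the discriminator scores a history/slice pair as real or synthetic. The feature sizes and the helper name gen_step are illustrative assumptions, not the paper's architecture.

```python
# Hedged conditional-GAN sketch: condition on market history, generate the
# next time slice of aggregate activity.
import torch
import torch.nn as nn

HIST, FEAT, NOISE = 32, 4, 16   # history length, features per slice, noise dim

G = nn.Sequential(
    nn.Linear(HIST * FEAT + NOISE, 128), nn.ReLU(),
    nn.Linear(128, FEAT))                     # -> next slice
D = nn.Sequential(
    nn.Linear(HIST * FEAT + FEAT, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1))                        # -> real-vs-fake logit

def gen_step(history):
    """history: (batch, HIST*FEAT) flattened market context."""
    z = torch.randn(history.size(0), NOISE)   # the GAN's random input
    return G(torch.cat([history, z], dim=1))

history = torch.randn(8, HIST * FEAT)
real_slice = torch.randn(8, FEAT)             # placeholder real data
fake_slice = gen_step(history)

bce = nn.BCEWithLogitsLoss()
d_loss = bce(D(torch.cat([history, real_slice], 1)), torch.ones(8, 1)) + \
         bce(D(torch.cat([history, fake_slice.detach()], 1)), torch.zeros(8, 1))
g_loss = bce(D(torch.cat([history, fake_slice], 1)), torch.ones(8, 1))
```

Conditioning on history is what lets a simulated trader probe the market: its own orders become part of the history the generator sees in the next step.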

Image-to-Animation Model

What's new: Generating a video in which a desired object moves according to a driving video sequence. This has many applications, such as movie production and photography.

Key insight: Creating a synthesized video with a deep generative model takes a few steps. If you remember style transfer, we decoupled the style and content of an image to get the target image. It's similar here, except that we decouple motion and appearance. Motion: this comes from a driving video containing an object similar to the one we intend to animate. Appearance: for this, we use a source image of the object that will appear in the generated video. Image animation thus consists of generating a video sequence in which the object in the source image is animated according to the motion of the driving video. This framework addresses the problem without using any annotation or prior information about the specific object to animate. Once trained on a set of videos depicting objects of the same category (e.g., faces or human bodies), the method can be applied to any object of that class.
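A hedged sketch of the motion/appearance split, loosely in the spirit of keypoint-based (first-order) motion models: a detector summarizes each frame's pose as keypoints, and a generator repaints the source image's appearance under the driving keypoints. All modules below are illustrative stubs, not the published architecture.

```python
# Toy motion-transfer pipeline: keypoints carry motion, the source image
# carries appearance. Assumes 10 keypoints; real models warp features
# rather than concatenating coordinates as done here.
import torch
import torch.nn as nn

class KeypointDetector(nn.Module):
    def __init__(self, num_kp=10):
        super().__init__()
        self.num_kp = num_kp
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, num_kp * 2))
    def forward(self, frame):                  # -> (batch, num_kp, 2)
        return self.net(frame).view(-1, self.num_kp, 2)

class Generator(nn.Module):
    """Repaints the source appearance given source and driving keypoints."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 40, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1))
    def forward(self, source, kp_src, kp_drv):
        b, _, h, w = source.shape
        # Broadcast the 2x10 keypoint pairs over the image as 40 channels.
        kp = torch.cat([kp_src, kp_drv], 1).view(b, -1, 1, 1).expand(b, 40, h, w)
        return self.net(torch.cat([source, kp], 1))

detector, generator = KeypointDetector(), Generator()
source = torch.randn(1, 3, 64, 64)                        # appearance
driving = [torch.randn(1, 3, 64, 64) for _ in range(4)]   # motion frames
kp_src = detector(source)
video = [generator(source, kp_src, detector(f)) for f in driving]
```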

Your AI pair programmer

Trained on billions of lines of public code, GitHub Copilot puts the knowledge you need at your fingertips, saving you time and helping you stay focused. GitHub Copilot is powered by Codex, the new AI system created by OpenAI. GitHub Copilot understands significantly more context than most code assistants. So, whether it's in a docstring, comment, function name, or the code itself, GitHub Copilot uses the context you've provided and synthesizes code to match. Together with OpenAI, we're designing GitHub Copilot to get smarter at producing safe and effective code as developers use it.

How it works:

Extends your editor: GitHub Copilot is available as an extension for Neovim, JetBrains, and Visual Studio Code. You can use the GitHub Copilot extension on your desktop or in the cloud on GitHub Codespaces. And it's fast enough to use as you type.

Speaks all the languages you love: GitHub Copilot works with a broad set of frameworks and languages. The technical preview does especially well for Python, JavaScript, TypeScript, Ruby, and Go.
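To picture the workflow, here is the kind of exchange the description implies: a developer supplies only a function name and docstring, and a Copilot-style assistant proposes the body. The function and its implementation below are a hypothetical illustration, not an actual Copilot transcript.

```python
# Hypothetical example of comment-driven completion. The developer writes
# the signature and docstring; the body is one plausible suggestion an
# assistant could synthesize from that context.

def parse_expenses(expenses_string):
    """Parse lines like '2023-01-02 -34.01 USD' into (date, value, currency)
    tuples, skipping blank lines and comment lines that start with '#'."""
    expenses = []
    for line in expenses_string.splitlines():
        if line.startswith("#") or not line.strip():
            continue
        date, value, currency = line.split(" ")
        expenses.append((date, float(value), currency))
    return expenses

print(parse_expenses("2023-01-02 -34.01 USD\n# rent\n2023-01-03 100.00 USD"))
```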

Scam Definitely! A way to detect robocalls

Robocalls slip through smartphone spam filters, but a new generation of deep learning tools promises to tighten the net.

What's new: Researchers proposed fresh approaches to thwarting robocalls. Such innovations could soon be deployed in apps, IEEE Spectrum reported.

How it works: RobocallGuard, devised by researchers at Georgia Institute of Technology and the University of Georgia, answers the phone and determines whether a call is malicious based on what the caller says. TouchPal, proposed by a team at Shanghai Jiao Tong University, UC Berkeley, and TouchPal Inc., analyzes the call histories of users en masse to identify nuisance calls. RobocallGuard starts by checking the caller ID. It passes along known callers and blocks blacklisted callers. Otherwise, it asks the caller who they are trying to reach and listens to the reply using a neural network.
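The screening flow described for RobocallGuard can be summarized in a short sketch. The control flow below follows the article; sounds_like_callee is a hypothetical placeholder for the paper's neural speech model, and the contact data is made up.

```python
# Hedged sketch of the RobocallGuard-style screening flow: pass known
# callers, block blacklisted numbers, otherwise challenge the caller.
CONTACTS = {"+15551234567": "Alice"}
BLACKLIST = {"+15559999999"}
OWNER = "Alice"

def sounds_like_callee(reply_text, owner_name):
    """Placeholder for the paper's speech model: here we pretend the reply
    is already transcribed and just look for the owner's name."""
    return owner_name.lower() in reply_text.lower()

def screen_call(caller_id, ask):
    if caller_id in CONTACTS:
        return "pass"          # known caller: ring through
    if caller_id in BLACKLIST:
        return "block"         # known robocaller: drop silently
    reply = ask("Who are you trying to reach?")
    return "pass" if sounds_like_callee(reply, OWNER) else "block"

# Example: an unknown caller who names the owner gets through.
print(screen_call("+15550001111", lambda prompt: "I'm calling for Alice"))
```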

The Many Faces of Genetic Illness

People with certain genetic disorders share common facial features. Doctors are using computer vision to identify such syndromes in children so they can get early treatment.

What's new: Face2Gene is an app from Boston-based FDNA that recognizes genetic disorders from images of patients' faces. Introduced in 2014, it was upgraded recently to identify over 1,000 syndromes (more than three times as many as the previous version) based on fewer examples. In addition, the upgrade can recognize additional conditions as photos of them are added to the company's database, with no retraining required.

How it works: New work by Aviram Bar-Haim at FDNA, Tzung-Chien Hsieh at Rheinische Friedrich-Wilhelms-Universität Bonn, and colleagues describes the revised model. Face2Gene's underpinning is a convolutional neural network that was pretrained on 500,000 images of 10,000 faces and fine-tuned on proprietary data to classify 299 conditions such as Down syndrome and Noonan syndrome.
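Recognizing new conditions without retraining suggests classification by similarity in an embedding space rather than a fixed classifier head. Below is a hedged sketch of that pattern (an assumption about the mechanism, not FDNA's published pipeline): embed each face with a pretrained CNN, then label a new face by its nearest labeled embeddings, so adding photos of a new syndrome makes it recognizable immediately.

```python
# Embedding + nearest-neighbor sketch. embed() is a stand-in for a
# pretrained face CNN; filenames and labels are made-up examples.
import numpy as np

def embed(face_image):
    """Placeholder: derive a deterministic 128-d 'embedding' from the
    filename so the example runs without a real model."""
    rng = np.random.default_rng(abs(hash(face_image)) % (2**32))
    return rng.standard_normal(128)

database = {  # image -> syndrome label; grows without retraining
    "example_noonan.jpg": "Noonan syndrome",
    "example_down.jpg": "Down syndrome",
}
vectors = {name: embed(name) for name in database}

def predict(face_image, k=1):
    """Return the k syndrome labels whose examples lie closest in
    embedding space to the query face."""
    q = embed(face_image)
    scored = sorted(
        database.items(),
        key=lambda kv: np.linalg.norm(vectors[kv[0]] - q))
    return [label for _, label in scored[:k]]

print(predict("new_patient.jpg"))
```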