I am an applied scientist at Amazon in
Madrid, where I
work on developing computer vision and machine learning techniques to solve catalog issues.
Before that, I did my PhD at Universidad de Zaragoza, where I was
advised by Diego Gutierrez and Belen Masia. During my PhD, I worked on problems
at the interface between computer vision, computer graphics, and human perception.
You can contact me at mlgns at amazon dot com
Publications
I am interested in topics at the interface between computer vision and computer graphics. These
include, but are not limited to, inversely modeling the world, i.e., acquiring material
properties, light, or geometry from simple input sources such as images; and developing faster,
more intuitive methods to manipulate digital assets and foster artistic processes.
We perform in-the-wild intuitive material editing using perceptual attributes. We recover high-frequency
details from the input while maintaining the intuitive editing capacity of the model.
We rely on an estimation of the
geometry from the input image and an editing network that uses high-level perceptual attributes to
perform intuitive material editing.
We introduce a neural-based similarity metric that learns from perceptual data. It
outperforms the state of the art, is aligned with human perception, and
can be used in several applications.
We introduce a similarity model capable of retrieving icons based on their style and visual identity. We
rely on a siamese model paired with a triplet loss function that learns from crowd-sourced data.
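The triplet loss at the core of that model can be sketched in a few lines. This is a minimal illustration with toy embeddings, not the trained network from the paper: it shows how the loss pulls same-style icons together and pushes different-style icons at least a margin apart.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss: keep the positive embedding closer to the
    anchor than the negative by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

# Toy 2-D embeddings: two icons of the same style (anchor, positive)
# and one of a different style (negative).
anchor   = np.array([1.0, 0.0])
positive = np.array([0.9, 0.1])
negative = np.array([0.0, 1.0])

print(triplet_loss(anchor, positive, negative))  # 0.0: already well separated
```

When the loss is zero, the triplet is already satisfied and contributes no gradient; training focuses on the hard triplets that violate the margin.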
We develop a transfer learning method that fine-tunes the initial layers of a convolutional neural
network. This allows it to learn low-level features (strokes) which are important in illustrations.
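The idea of updating only the initial layers can be sketched with a toy two-layer network. This is an illustration of selective fine-tuning under assumed toy weights, not the paper's actual architecture: the early layer (low-level features) receives gradient updates while the later layer stays frozen.

```python
import numpy as np

# Toy two-layer linear network. Only the first (low-level) layer is
# fine-tuned; the later layer is kept frozen.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # early layer: trainable (strokes, edges)
W2 = rng.normal(size=(2, 4))   # later layer: frozen (high-level features)

trainable = {"W1": True, "W2": False}

def sgd_step(name, weights, grad, lr=0.1):
    """Apply a gradient step only to layers marked trainable."""
    return weights - lr * grad if trainable[name] else weights

W1_before, W2_before = W1.copy(), W2.copy()
W1 = sgd_step("W1", W1, np.ones_like(W1))
W2 = sgd_step("W2", W2, np.ones_like(W2))

print(np.allclose(W1, W1_before - 0.1))  # True: early layer updated
print(np.allclose(W2, W2_before))        # True: frozen layer untouched
```

This inverts the usual transfer-learning recipe, which freezes early layers and retrains the last ones; for illustrations, it is the low-level features that differ most from natural images.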
Feel free to steal this website's source
code. Do not scrape the HTML from this page itself, as it includes analytics
tags that you do not want on your own website; use the GitHub code instead. Also, consider
using Leonid Keselman's Jekyll fork of this page.