Manuel Lagunas

I am an applied scientist at Amazon in Madrid, where I work on computer vision and machine learning techniques to solve catalog issues. Before that, I did my PhD at Universidad de Zaragoza, advised by Diego Gutierrez and Belen Masia. During my PhD, I worked on problems at the interface between computer vision, computer graphics, and human perception.

You can contact me at mlgns at amazon dot com

·   ·   ·   ·


Publications

I am interested in topics at the interface between computer vision and computer graphics. These include, but are not limited to, inverse modeling of the world, i.e., acquiring material properties, light, or geometry from simple input sources such as images; and developing faster, more intuitive methods to manipulate digital assets and foster artistic processes.

My PhD thesis is highlighted below.

In-the-wild Material Appearance Editing using Perceptual Attributes
J. Daniel Subias, Manuel Lagunas
Computer Graphics Forum (CGF, proc. Eurographics), 2023
arXiv

We perform in-the-wild intuitive material editing using perceptual attributes. We recover high-frequency details from the input while maintaining the intuitive editing capacity of the model.

A Generative Framework for Image-based Editing of Material Appearance using Perceptual Attributes
Johanna Delanoy, Manuel Lagunas, Jorge Condor, Diego Gutierrez, Belen Masia
Computer Graphics Forum (CGF), 2022
Project page / arXiv / Code / Bib

We rely on an estimation of the geometry from the input image and an editing network that uses high-level perceptual attributes to perform intuitive material editing.

Learning Visual Appearance: Perception, Modeling and Editing
Manuel Lagunas
supervised by Diego Gutierrez and Belen Masia, 2021
Cum Laude (highest grade awarded by the institution where the PhD was defended)
PDF

This thesis improves visual content creation algorithms by connecting physical parameters to intuitive human attributes related to visual appearance.

Single-image Full-body Human Relighting
Manuel Lagunas, Xin Sun, Jimei Yang, Ruben Villegas, Jianming Zhang, Zhixin Shu, Belen Masia, Diego Gutierrez
Eurographics Symposium on Rendering (Proc. EGSR), 2021
arXiv / Code / Bib

We train a generative model to perform in-the-wild human relighting, lifting the assumption that materials are Lambertian.

The Joint Role of Geometry and Illumination on Material Recognition
Manuel Lagunas, Ana Serrano, Diego Gutierrez, Belen Masia
Journal of Vision (JoV), 2021
Project page / arXiv / Bib

Comprehensive study of the influence of geometry, illumination, and their frequencies on our ability to recognize materials from images.

The Role of Objective and Subjective Measures in Material Similarity Learning
Johanna Delanoy, Manuel Lagunas, Ignacio Galve, Diego Gutierrez, Ana Serrano, Roland Fleming, Belen Masia
ACM Transactions on Graphics Posters, 2020
Abstract / Poster

Analysis of the role of subjective and objective measures when developing computational methods for material similarity.

A Similarity Measure for Material Appearance
Manuel Lagunas, Sandra Malpica, Ana Serrano, Elena Garces, Diego Gutierrez, Belen Masia
ACM Transactions on Graphics (TOG, Proc. SIGGRAPH), 2019
Project page / arXiv / Code / Bib

We introduce a neural-based similarity metric that learns from perceptual data. It outperforms the state of the art, is aligned with human perception, and can be used in several applications.

The Effect of Motion on the Perception of Material Appearance
Ruiquan Mao, Manuel Lagunas, Belen Masia, Diego Gutierrez
ACM Symposium on Applied Perception (SAP), 2019
PDF / Bib

Comprehensive study of the effect of motion on our perception of the high-level perceptual attributes that describe material appearance.

Learning Icons Appearance Similarity
Manuel Lagunas, Elena Garces, Diego Gutierrez
Multimedia Tools and Applications (MTAP), 2018
arXiv / Bib

We introduce a similarity model capable of retrieving icons based on their style and visual identity. We rely on a siamese model paired with a triplet loss function that learns from crowd-sourced data.

Transfer Learning for Illustration Classification
Manuel Lagunas, Elena Garces
Conferencia Española de Informática Gráfica (Proc. CEIG), 2017
arXiv / Code / Bib

We develop a transfer learning method that fine-tunes the initial layers of a convolutional neural network. This allows it to learn low-level features (strokes), which are important in illustrations.



Feel free to steal this website's source code. Do not scrape the HTML from this page itself, as it includes analytics tags that you do not want on your own website; use the GitHub code instead. Also, consider using Leonid Keselman's Jekyll fork of this page.