Teaching Machines Beauty
The way we see ourselves is defined by thousands of subtle, subjective nuances. A great portrait is only one picture out of many tries. But even if this photo is the one the photographer considers most beautiful, that doesn't mean the portrait subject sees it the same way. It's about the way you squint your eyes when you smile, or how you purse your lips to look roguish: things somebody else simply cannot see.

Brainchild is a machine-learning-based camera prototype that can be taught a person's individual sense of portrait aesthetics and make it accessible as a function. This function enables photographers to see through the eyes of the person who trained the camera, allowing them to fuse both individual understandings of aesthetics.

The visible impact on portraits shot with Brainchild might be very subtle, or even invisible to many people. But for the person being portrayed, who trained the camera, as well as for the photographer, a new and personal aesthetic quality emerges that changes the overall feeling and expression of the image.

left: ideal aesthetic, right: divergent aesthetic
Aesthetic Fusion

Machines are orders of magnitude faster than we are at quantitative tasks. For a long time, their problem was that they could not judge quality. That has changed with the advancement of so-called deep learning techniques, a subset of AI and machine learning.

By teaching Brainchild how you would like to be portrayed, you enable it to assess the quality of what it sees in real time and to give haptic feedback when it perceives you looking your best, so the photographer can take a picture as if they were seeing you through your own eyes. Taking portraits is a very intimate art. Rather than taking the human out of the loop, Brainchild augments this human-to-human interaction by making it more collaborative, allowing the photographer to merge their aesthetic understanding with yours.

Comparison of portrait photography with traditional camera and Brainchild camera
Human Teaching, Machine Learning

The intervals between interacting with machine-learning products and experiencing what they have learned are often very long. There are many technical challenges, but more accessibility and more immediate feedback would help us teach them more actively and develop a better intuition for them. In return, the machines could learn better and become more personal.

Brainchild is designed to provide immediate feedback. You teach it how you like to be portrayed instead of letting it guess, and every picture you take based on its haptic recommendation serves as feedback on how well it learned and how well you taught it. This co-adaptive approach helps you develop an intuition for the system rather than a purely rational understanding.

Original prototype of the Brainchild camera
The Brainchild

How does the Brainchild prototype work? There are countless technical parameters we could discuss, but in the simplest terms, it makes use of a transfer learning process based on a fine-tuned convolutional neural network. Transfer learning is a technique frequently used in computer vision. Brainchild uses a customized version optimized for very short feedback loops with as little as one minute of training.
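To make the idea of transfer learning with short feedback loops concrete, here is a minimal, self-contained sketch. The prototype's actual network and training code are not shown here; this stand-in uses a frozen random projection in place of a pretrained CNN backbone, and trains only a tiny logistic-regression head on synthetic "normal" vs. "best" examples, which is the cheap step that makes minute-scale personal training plausible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained CNN backbone: a frozen random projection.
# In the real prototype this would be a fine-tuned convolutional network;
# here it is only a placeholder so the head-training loop is runnable.
W_backbone = rng.normal(size=(128, 16))

def extract_features(images):
    """Frozen feature extractor: images are flattened 128-dim vectors here."""
    return np.tanh(images @ W_backbone)

def train_head(feats, labels, epochs=200, lr=0.5):
    """Train only a small logistic-regression head: the fast, personal part."""
    w = np.zeros(feats.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid
        grad = p - labels                            # dLoss/dlogit
        w -= lr * feats.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

def score(image, w, b):
    """Aesthetic score in [0, 1] for a single image."""
    f = extract_features(image[None, :])[0]
    return float(1.0 / (1.0 + np.exp(-(f @ w + b))))

# A tiny synthetic "training session": a few normal and best examples.
normal = rng.normal(loc=-0.5, size=(8, 128))
best = rng.normal(loc=0.5, size=(8, 128))
X = np.vstack([normal, best])
y = np.array([0.0] * 8 + [1.0] * 8)
w, b = train_head(extract_features(X), y)
```

Only the head's sixteen weights and one bias change during personal training, which is why the loop finishes in well under a minute even on modest hardware.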

Brainchild needs to be taught five fundamentals:

1. What a human face looks like
2. How to learn about new faces
3. How to isolate the face in a picture
4. The way the portrait subject looks normally
5. The way the portrait subject looks when they deem themselves looking their best

The first two points require huge amounts of data, number crunching, and time.
The third point addresses the fact that when you take a portrait, there is always something behind you. To focus only on you without distraction, Brainchild extracts your face from the background. That, too, is something it learns before you use it.
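The five fundamentals can be read as stages of one pipeline: detect the face, isolate it, embed it, and score it with the personally trained head. The sketch below is purely illustrative; every function name and the toy "brightness" detector are assumptions, since the prototype's internals are not public.

```python
# Hypothetical pipeline sketch of the five fundamentals as stages.

def detect_face(frame):
    """Stages 1-2 stand-in: pretend the face is wherever pixels are bright."""
    ys = [i for i, row in enumerate(frame) if max(row) > 0.5]
    xs = [j for row in frame for j, v in enumerate(row) if v > 0.5]
    if not ys:
        return None
    return (min(ys), min(xs), max(ys) + 1, max(xs) + 1)  # top, left, bottom, right

def crop(frame, box):
    """Stage 3: isolate the face region from the background."""
    top, left, bottom, right = box
    return [row[left:right] for row in frame[top:bottom]]

def embed(face):
    """Shared representation fed to the personal head (here: mean brightness)."""
    flat = [v for row in face for v in row]
    return sum(flat) / len(flat)

def personal_head(feature, threshold=0.7):
    """Stages 4-5: the part each portrait subject trains themselves."""
    return 1.0 if feature > threshold else feature / threshold

def brainchild_pipeline(frame):
    box = detect_face(frame)
    if box is None:
        return None        # no face in view, no score
    return personal_head(embed(crop(frame, box)))
```

Only the last stage is personal; everything before it is generic face knowledge learned long before you pick up the camera.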

Then you become the teacher. To learn how you look your best, it first needs to know how you look normally. So you feed it a few examples of both, either by taking new snapshots or by using pictures stored on your phone.

To help Brainchild differentiate between the two picture sets, you simply twist the front plate, switching between two learning modes. The serious smiley means every picture you take teaches Brainchild how you look “normally”. Switching to the happy smiley means every picture teaches it how you look at your best. This is how you get to know each other.
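In software terms, the twistable front plate is just a labeling switch: whichever mode is active when you snap a picture becomes that picture's training label. This is a minimal sketch under that assumption; the class and method names are illustrative, not the prototype's actual API.

```python
# Hypothetical model of the two training modes selected by the front plate.

NORMAL, BEST = "normal", "best"

class TrainingSession:
    def __init__(self):
        self.mode = NORMAL       # serious smiley is the starting position
        self.examples = []       # list of (image, label) pairs

    def twist_front_plate(self):
        """Toggle between the serious and the happy smiley."""
        self.mode = BEST if self.mode == NORMAL else NORMAL

    def snap(self, image):
        """Every picture taken in a mode is labeled with that mode."""
        label = 1 if self.mode == BEST else 0
        self.examples.append((image, label))

session = TrainingSession()
session.snap("img_001")          # taken in "normal" mode -> label 0
session.twist_front_plate()
session.snap("img_002")          # taken in "best" mode -> label 1
```

A physical toggle instead of an on-screen setting keeps the teaching gesture tangible: you always know which lesson you are giving.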

Left: training-mode “normal”, Right: training-mode “good”
As soon as the camera spots you in a way that matches what you taught it about looking good, it gently vibrates and lights up the trigger button. The vibration and brightness increase the closer you come to your own ideal. Brainchild doesn't snap the photo for you; it just signals the optimal moment to do so.
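The feedback behavior described above can be sketched as a simple mapping from the live aesthetic score to vibration and LED intensity. The threshold value and the linear ramp are assumptions for illustration; the prototype's actual tuning is not documented here.

```python
# Hedged sketch of the haptic/light feedback ramp: intensity grows as the
# live score approaches the learned ideal.

def feedback_intensity(score, threshold=0.6):
    """Map an aesthetic score in [0, 1] to vibration/LED intensity in [0, 1].

    Below the threshold the camera stays silent; above it, intensity
    ramps linearly, so brightness and vibration increase the closer
    the subject matches their own ideal.
    """
    if score < threshold:
        return 0.0
    return (score - threshold) / (1.0 - threshold)

def should_signal(score, threshold=0.6):
    """The camera never triggers itself; it only signals the photographer."""
    return feedback_intensity(score, threshold) > 0.0
```

Keeping the shutter decision with the photographer is the point of the design: the machine recommends, the humans compose.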

You can now hand it over to anybody and enable them to see you through your own eyes, to portray you the way you like, and to fuse the way they see you with the way you see yourself.

top-left: photo-mode no feedback, top-right: photo-mode light and vibration feedback, bottom: portrait scenario
Currently we are working on a full-body pose add-on for fashion photography. What would it look like if the model trained the photographer's camera, infusing her own image of beauty? And on a higher level: what if we could teach machine-learning-based products more actively, defining what they should learn instead of having this defined by a company? How personal could they become if we taught them more about the qualitative things in our lives that are often visible only to us?


Concept and Technology:
Jochen Maria Weber / @j___m___w

Visual Design:
Tiffany Yuan

Industrial Design:
Leo Marzolf

Special thanks:
Tobias Toft

Brainchild has been developed as a personal initiative of Jochen Maria Weber, Palo Alto, 2017.
Brainchild portraits taken with exhibition device
Exhibition device front
Exhibition device back