CAVcamera: Try on new
ways of seeing

A collaborative photography app, powered by
personalised AIs trained on subjective concepts

Google AI (Mural x PAIR x Brain): 8 weeks

Material exploration with Concept Activation Vectors, interaction
design-led R&D, advanced prototyping, on-device ML.
This project builds on CAVstudio, a moodboard-based tool that lets anyone teach an AI system to understand nuanced visual concepts, powered by a technology called CAVs.
CAVcamera is a speculative prototype that lets us explore what it feels like to share a camera interface with AIs trained on visually subjective concepts.

CAVcamera - see the world through the eyes of others.

• Collaborative photography, live in the viewfinder
• Using AIs trained on subjective concepts
• Curating, cropping, surfacing new ways of seeing
1/5
Get inspired

Explore new perspectives from different people. Learn about concepts we’ve made. Save them to your library so they’re ready in the camera when you want them.

2/5
Point, then shoot.

This is a collaborative experience. You choose a concept then capture moments you’re curious about.

The AI sees what you see and can signal potential matches by colouring captured ‘moments’ based on their CAV score.

3/5
Over to the AI...

The AI analyses your captured moments. It curates a shortlist of potential matches for each concept you’ve used, testing alternative crops to find the framing that best matches the concept.

4/5
The big reveal

Shortlisted results are organised by concept, based on how close a match the AI thinks they are. You can inspect the AI’s scoring and compare its crops to the original to see what it's drawn to.

Select the favourites you’d like to keep, and discard the duds.

5/5
Build your collections

Saved photos live in Favourites. You can browse by time, or thematically by concept - a collection of images you’ve taken using a unique perspective. A place for reflection, and for sharing.

How it works

CAVcamera runs on Android and is designed for Google’s Pixel phones. It utilises the Pixel’s on-device TPU for fast, privacy-preserving ML processing. This lets us run CAVs live in the viewfinder, providing real-time visual feedback to help guide the user and shortlist the best results.
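The app’s pipeline itself isn’t reproduced in this write-up, but the core CAV mechanic can be sketched in a few lines of NumPy: derive a direction in a network’s activation space that separates concept examples from random counterexamples, then score new viewfinder frames against that direction. The function names below are placeholders, and the difference-of-means classifier is a deliberate simplification of the linear classifier used in TCAV:

```python
import numpy as np

def train_cav(concept_acts: np.ndarray, random_acts: np.ndarray) -> np.ndarray:
    """Return a unit 'concept direction' in activation space.

    Each row is one image's activations. A difference of class means
    stands in for the linear classifier used in the TCAV work.
    """
    cav = concept_acts.mean(axis=0) - random_acts.mean(axis=0)
    return cav / np.linalg.norm(cav)

def cav_score(frame_acts: np.ndarray, cav: np.ndarray) -> float:
    """Cosine similarity between a frame's activations and the CAV."""
    return float(frame_acts @ cav / np.linalg.norm(frame_acts))
```

In the real app the activations would come from an on-device vision model running per frame; here they’re just arrays.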

Camera as a shared Human:AI control surface

Placing the AI directly in the shutter button lets it signal when it might be good to take a photo, using colour and dynamism to express the CAV score of the current composition.
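None of CAVcamera’s UI code is shown here, but the score-to-colour mapping could be as simple as a linear blend between a neutral tone and an accent. The specific colours and the linear ramp below are assumptions for illustration only:

```python
def shutter_colour(score: float,
                   neutral=(128, 128, 128),
                   accent=(255, 196, 0)) -> tuple:
    """Blend from a neutral grey to an accent colour as the CAV score
    rises. Colour values are placeholders, not the app's palette."""
    t = max(0.0, min(1.0, score))  # clamp score to [0, 1]
    return tuple(round(n + t * (a - n)) for n, a in zip(neutral, accent))
```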

We tried to make the shutter feel expressive without influencing the user too much. Here, the user is still in control, with the AI as co-pilot.

Transparency and control

We use the CAV score to decide which images to shortlist. Results are split into ‘close’, ‘related’, or ‘distant’, implying a qualitative judgement while leaving room for human taste to reinterpret.
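As a rough sketch, that three-way split is just thresholding on the score; the cutoff values below are placeholders, not CAVcamera’s actual ones:

```python
def bucket(score: float, close: float = 0.7, related: float = 0.4) -> str:
    """Map a CAV score to one of the three qualitative bands.
    Thresholds here are assumed, not the app's real cutoffs."""
    if score >= close:
        return "close"
    if score >= related:
        return "related"
    return "distant"
```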

The AI auto-crops images to optimise compositions against the concept. This can feel magical, revealing high-scoring parts of an image that might not have been your intended focus. You can always revert to the original if you prefer.
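Auto-cropping can be sketched as a search over candidate crops, each scored against the concept direction. The exhaustive grid search and the `embed` stand-in for the on-device network are illustrative assumptions, not the app’s method:

```python
import numpy as np

def best_crop(image: np.ndarray, cav: np.ndarray, embed, size: int, stride: int):
    """Score square crops against a unit concept direction and return
    the top-left corner and score of the best one.

    `embed` stands in for the on-device network mapping pixels to
    activations; the score is the directional (dot-product) score.
    """
    h, w = image.shape[:2]
    best, best_score = None, -np.inf
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            acts = embed(image[y:y + size, x:x + size])
            score = float(acts @ cav)
            if score > best_score:
                best, best_score = (y, x), score
    return best, best_score
```

A real implementation would prune candidates rather than scan every window, but the idea is the same: the crop that points most strongly along the concept direction wins.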

On-device personalisation

Take an existing concept and use it as a starting point to add your own unique perspective. ‘Personalise’ provides graduated controls to score images you’ve taken from best to worst; the AI then re-learns the concept, using your positive and negative examples to skew the model towards your own interpretation of the concept.

All processed on-device, so what’s personal to you stays that way.
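Loosely, personalisation amounts to re-fitting the concept direction from the user’s own ratings and blending it with the original. The centring and blending scheme below is an assumption for illustration, not the app’s actual update rule:

```python
import numpy as np

def personalise(base_cav: np.ndarray, acts: np.ndarray,
                user_scores, blend: float = 0.5) -> np.ndarray:
    """Skew a concept toward a user's taste.

    Images rated above the user's average pull the direction toward
    them; images rated below push it away. The result is blended with
    the original CAV (the 0.5 weight is an assumed default).
    """
    user_scores = np.asarray(user_scores, dtype=float)
    w = user_scores - user_scores.mean()      # + attracts, - repels
    user_dir = (w[:, None] * acts).sum(axis=0)
    user_dir /= np.linalg.norm(user_dir)
    new = (1 - blend) * base_cav + blend * user_dir
    return new / np.linalg.norm(new)
```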

“Using a CAV is a bit like borrowing someone else’s glasses, so that they can point things out to you based on how they see the world.”
Tom

What next?

Try CAVcamera yourself

CAVcamera is on the Play Store. It’s designed for Pixel 4, 5 and 6, but works on other Android devices too.

Check out CAVstudio

CAVstudio is where you can make your own visually subjective concepts that can be imported into CAVcamera.

Build with CAVlib

CAVlib makes it easy for people to utilise the expressive power of CAVs in their own projects and products.

And finally, thanks to:

Been Kim and Emily Reif from Google AI who developed the TCAV technology and worked with us to humanise it.

Alex Etchells, Rachel Maggart and Tom Hatton for their artistic experimentation with CAVs.

Eva Kozanecka, Alison Lentz and Alice Moloney from Mural who commissioned the project and guided it creatively.


Not to mention Matt Jones, Martin Wattenberg and Fernanda Viégas, who helped us uncover the value of CAVs.