Machine learning models are expensive to train: they demand large datasets, long training times and significant compute budgets. As a result, they’re mostly produced in small quantities by big tech companies, which creates homogenised models with a unified view of the world.
They’re great for objective tasks like recognition, but struggle with subjectivity, where visual interpretation is shaped by culture and community, and is personal to the individual.
For AI to play a deeper role in our lives, we will require new, shared forms of Human:AI expression.
With only a handful of images, we can use a new technology called CAVs (Concept Activation Vectors) to quickly skew a model’s focus in a particular direction - allowing anyone to personalise an AI model to see things in a particular, nuanced way.
Express a way of seeing as a visually coherent moodboard, then see how the AI model interprets your concept.
CAVstudio lets you choose from different model layers, depending on the concept you’re trying to express. Some layers are more sensitive to colour and texture, others are better for shapes and compositions.
Refine your concept by upweighting images that best express your way of seeing. Downvote results where the AI isn’t quite getting it. Iterate.
Upweighted images appear larger in the moodboard, reflecting their ‘weight’ of bias being applied in the CAV.
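The upweighting idea can be sketched as a weighted concept direction. This is an illustrative simplification, not CAVstudio’s actual training code: `moodboard_acts` and `random_acts` stand in for layer activations of moodboard and random images.

```python
import numpy as np

def weighted_cav(moodboard_acts, weights, random_acts):
    """Sketch of upweighting: images with higher weight contribute more to
    the concept direction (a weighted mean-difference, not CAVstudio's
    exact training procedure)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalise weights so they sum to 1
    direction = (w[:, None] * moodboard_acts).sum(axis=0) - random_acts.mean(axis=0)
    return direction / np.linalg.norm(direction)

rng = np.random.default_rng(0)
acts = rng.normal(0.5, 1.0, size=(8, 64))       # stand-in moodboard activations
rand_acts = rng.normal(0.0, 1.0, size=(8, 64))  # stand-in random activations
cav_uniform = weighted_cav(acts, np.ones(8), rand_acts)
cav_upweighted = weighted_cav(acts, [3, 1, 1, 1, 1, 1, 1, 1], rand_acts)
```

Tripling one image’s weight shifts the resulting direction toward that image’s activations, which is the behaviour the moodboard sizing reflects.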
Search results are surfaced in seconds. Preview shows the images that best match the concept, curated by the model. AI Crop goes a step further, letting the model point to the optimum composition in the chosen image.
Image curation is based on CAVscore. By cropping each image in multiple ways, and then scoring each crop against the CAV, we can pull into focus where the concept is most salient.
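The crop-and-score idea can be sketched in a few lines of Python. This is a minimal illustration, not CAVstudio’s implementation: `embed` stands in for a real layer-activation extractor, and the demo uses a toy “redness” concept.

```python
import numpy as np

def cav_score(activation, cav):
    """CAVscore as a dot product with the unit-normalised concept direction."""
    cav = cav / np.linalg.norm(cav)
    return float(np.dot(activation, cav))

def best_crop(image, cav, embed, crop_size, stride):
    """Slide a square window over the image, score each crop against the
    CAV, and return the highest-scoring crop window."""
    h, w = image.shape[:2]
    best = None
    for y in range(0, h - crop_size + 1, stride):
        for x in range(0, w - crop_size + 1, stride):
            crop = image[y:y + crop_size, x:x + crop_size]
            s = cav_score(embed(crop), cav)
            if best is None or s > best[0]:
                best = (s, (x, y, crop_size))
    return best

# Demo with a stand-in embedding: the mean colour of the crop.
rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))
cav = np.array([1.0, 0.0, 0.0])  # toy concept: redness
embed = lambda crop: crop.mean(axis=(0, 1))
score, (x, y, size) = best_crop(image, cav, embed, crop_size=32, stride=16)
```

The winning window is what AI Crop would point to: the composition where the concept scores highest.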
See the results through the eyes of the AI with Inspect mode’s heatmap and focus tools. This helps you understand what visual qualities the AI is drawn to, helping you get on the same wavelength.
Heatmap uses the perceptually uniform ‘magma’ colour map to represent CAVscore per pixel. Focus shows the optimal crop based on CAVscore.
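The data behind such a heatmap is just a grid of CAVscores. As a minimal sketch (again with a stand-in `embed` and a toy concept, not CAVstudio’s per-pixel implementation), a coarse per-patch score map can be computed and normalised ready for a colour map like magma:

```python
import numpy as np

def score_map(image, cav, embed, patch=8):
    """Per-patch CAVscore map. CAVstudio renders such a map with the
    'magma' colour map; here we return the normalised scores only."""
    cav = cav / np.linalg.norm(cav)
    rows, cols = image.shape[0] // patch, image.shape[1] // patch
    out = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            tile = image[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            out[i, j] = np.dot(embed(tile), cav)
    span = out.max() - out.min()
    return (out - out.min()) / (span + 1e-9)  # normalise to [0, 1] for colour mapping

rng = np.random.default_rng(2)
image = rng.random((64, 64, 3))
cav = np.array([1.0, 0.0, 0.0])  # toy concept: redness
heat = score_map(image, cav, lambda t: t.mean(axis=(0, 1)))
```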
CAVstudio was built with some rudimentary search sets of images that we assembled for testing. But you’re not restricted by this. Anyone can upload their own search set, be it from their own image bank or public access repositories.
Everyday examples by Nord Projects was produced using ambiently captured imagery from London and the UK. It includes urban and rural environments, objects and graphics, but excludes other contexts and cultures.
Last but not least, decide how best to name your concept. Each concept is saved to your library and can be shared with others using the URL, or by downloading the .CAV file itself.
Because concepts are subjective, we avoid the explicitness of a hashtag (#). Instead we use the tilde (~) to reflect their inherent approximation and openness to interpretation.
These concepts form the beginnings of an indexable cultural ecosystem.
And by using moodboards instead of labels, CAVstudio lets anyone working with visual imagery train and collaborate with ML systems, irrespective of language or technical expertise.
Encourages you to interrogate your surroundings in search of seemingly ordinary, otherwise overlooked scenes and details.
Qualities:
Extraordinary ordinary objects, abstract graphic details, bold colours; challenging to decipher or find a reference point in at first, inviting the viewer to take a closer look.
An experimental style of photography, inspired by Polaroids. Defined by loose but close crops, vibrant colours and soft focus.
Qualities:
Soft focus, strong colour. A meditative, trance-like state of looking, where objects begin to separate from representation and function as something else.
An active and visually evocative way of describing a rupture, breaking or bursting apart.
Qualities:
Deconstructionist, analytical: breaking into pieces to understand the whole, while the images remain expressive and human. Hard-edged, prismatic, ripped, scarred, bursting, with a dark nucleus or focal point.
CAVstudio is a browser-based tool that uses a Python backend to generate a CAV from your training images. It then sorts a set of images to play back what it saw. The CAV can be downloaded to be used in other projects with CAVlib.
• Give the ML model a handful of images and it analyses them at a pixel level, spotting patterns and relationships.
• CAVs learn to find an underlying visual thread present across a set of images.
• This is described by the ML model in the form of a mathematical ‘vector’, or direction in high-dimensional space (ML speak for a machine’s imagination). We call this concept a CAV.
• ML models can then use this newfound understanding of subjective concepts like ~Graphic to search image sets and surface meaningfully different results.
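The “direction in high-dimensional space” can be made concrete with a toy sketch. TCAV proper trains a linear classifier between concept and random activations and takes the normal to its decision boundary; the mean-difference direction below is the simplest stand-in, and the random arrays stand in for real layer activations.

```python
import numpy as np

def learn_cav(concept_acts, random_acts):
    """Minimal CAV: the unit direction in activation space pointing from
    'random' images toward the concept images. (A simplified stand-in for
    TCAV's linear-classifier approach.)"""
    direction = concept_acts.mean(axis=0) - random_acts.mean(axis=0)
    return direction / np.linalg.norm(direction)

rng = np.random.default_rng(1)
concept = rng.normal(0.5, 1.0, size=(20, 128))  # activations of moodboard images
random_ = rng.normal(0.0, 1.0, size=(20, 128))  # activations of random images
cav = learn_cav(concept, random_)
```

Projecting new images onto this direction is what makes search possible: images aligned with the concept score higher than images that aren’t.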
‘Concept collages’ is a technique we developed as a way for the CAV to visually express what it thinks is the essence of a concept.
Here, moodboard training images are segmented based on the saliency of the CAV/concept in that image. The highest scoring segments are then overlaid on top of each other, creating a collage effect.
Concept collages are an incredibly rich form of visual vocabulary, providing a flavour of a concept. We imagine these being useful when creating a concept, or deciding which concept to use.
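The collage mechanic can be sketched with a fixed grid of tiles. CAVstudio segments by saliency rather than a grid, so this is a simplified illustration only; `embed` again stands in for a layer-activation extractor.

```python
import numpy as np

def concept_collage(images, cav, embed, tile=16, top_k=12):
    """Sketch of a concept collage: tile each training image, score every
    tile against the CAV, then composite the highest-scoring tiles onto
    one canvas at their original positions."""
    cav = cav / np.linalg.norm(cav)
    scored = []
    for img in images:
        for y in range(0, img.shape[0] - tile + 1, tile):
            for x in range(0, img.shape[1] - tile + 1, tile):
                t = img[y:y + tile, x:x + tile]
                scored.append((float(np.dot(embed(t), cav)), y, x, t))
    scored.sort(key=lambda s: s[0], reverse=True)  # highest CAVscore first
    canvas = np.zeros_like(images[0], dtype=float)
    for _, y, x, t in scored[:top_k]:
        canvas[y:y + tile, x:x + tile] = t
    return canvas

rng = np.random.default_rng(3)
images = [rng.random((64, 64, 3)) for _ in range(4)]
cav = np.array([1.0, 0.0, 0.0])  # toy concept: redness
collage = concept_collage(images, cav, lambda t: t.mean(axis=(0, 1)))
```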
CAVlib is a Python Library that exposes the underlying tech powering CAVstudio. It lets anyone take .cav files and use them in their own websites, apps and prototypes.
In only a few lines of code, CAVlib unlocks the power of meaningful visual interpretation and search for a host of potential new applications. Visit the GitHub to learn more and try it yourself.
CAV Camera is on the Play Store. It’s designed for Pixel 4, 5 and 6, but works on other Android devices too.
CAVstudio is where you can make your own visually subjective concepts, that can be imported into CAV Camera.
CAVlib makes it easy for people to utilise the expressive power of CAVs in their own projects and products.
Been Kim and Emily Reif from Google AI who developed the TCAV technology and worked with us to humanise it.
Alex Etchells, Rachel Maggart and Tom Hatton for their artistic experimentation with CAVs.
Eva Kozanecka, Alison Lentz and Alice Moloney from Mural who commissioned the project and guided it creatively.
Not to mention Matt Jones, Martin Wattenberg and Fernanda Viégas who helped us uncover the value of CAVs.