Compositional Caching for Training-free Open-vocabulary Attribute Detection

1University of Trento, 2Cisco Research, 3Fondazione Bruno Kessler
CVPR 2025
ComCa teaser figure

Attribute annotations are often sparse, as they are not consistent across samples; incomplete, as not all attributes are annotated; and ambiguous, as they can be subjective or lack a frame of reference. This makes open-vocabulary attribute detection challenging. Unlike previous works, we do not rely on such annotations and propose ComCa, a training-free approach requiring no supervision.

Abstract

Attribute detection is crucial for many computer vision tasks, as it enables systems to describe properties such as color, texture, and material.

Current approaches often rely on labor-intensive annotation processes, which are inherently limited: objects can be described at an arbitrary level of detail (e.g., color vs. color shades), leading to ambiguities when annotators are not carefully instructed. Furthermore, these approaches operate within a predefined set of attributes, reducing scalability and adaptability to unforeseen downstream applications.

We present Compositional Caching (ComCa), a training-free method for open-vocabulary attribute detection that overcomes these constraints. ComCa requires only the list of target attributes and objects as input, using them to populate an auxiliary cache of images by leveraging web-scale databases and Large Language Models to determine attribute-object compatibility.

To account for the compositional nature of attributes, cache images receive soft attribute labels. Those are aggregated at inference time based on the similarity between the input and cache images, refining the predictions of underlying Vision-Language Models (VLMs).
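To make the aggregation step concrete, below is a minimal PyTorch sketch, not the released implementation: tensor names such as cache_feats, cache_soft_labels and the blending hyperparameters alpha and beta are hypothetical, and the exponential re-weighting follows Tip-Adapter-style caches rather than being taken from the paper.

import torch

def refine_with_cache(query_feat, cache_feats, cache_soft_labels,
                      zero_shot_logits, alpha=1.0, beta=5.0):
    """Blend zero-shot VLM scores with a soft-label cache (illustrative sketch).

    query_feat:        (D,)   L2-normalized feature of the input image
    cache_feats:       (N, D) L2-normalized features of the cache images
    cache_soft_labels: (N, A) soft attribute labels of the cache images
    zero_shot_logits:  (A,)   image-text scores from the underlying VLM
    alpha, beta:       blending and sharpness hyperparameters (hypothetical)
    """
    # Cosine similarity between the query and every cache image.
    sims = cache_feats @ query_feat                    # (N,)
    # Turn similarities into non-negative aggregation weights.
    weights = torch.exp(-beta * (1.0 - sims))          # (N,)
    # Similarity-weighted aggregation of the cache's soft attribute labels.
    cache_logits = weights @ cache_soft_labels         # (A,)
    # Refine the zero-shot prediction with the cache term.
    return zero_shot_logits + alpha * cache_logits

The sketch only illustrates the data flow: soft labels from similar cache images are pooled and added to the VLM's zero-shot scores.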

Importantly, our approach is model-agnostic and compatible with various VLMs. Experiments on public datasets demonstrate that ComCa significantly outperforms zero-shot and cache-based baselines and is competitive with recent training-based methods, proving that a carefully designed training-free approach can successfully address open-vocabulary attribute detection.

Method

ComCa architecture

Given a list of attributes and objects, we compute their compatibility both from a large database D_r and with an LLM. The scores are merged and normalized to obtain the compatibility distribution, from which we sample cache entries and construct the cache. We enrich the cache with soft labels derived from the VLM-based similarity between cache images and attributes.
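As a rough illustration of this pipeline (all names such as db_counts and llm_scores are hypothetical, and the simple averaging used to merge the two sources is an assumption, not necessarily the paper's exact scheme):

import numpy as np

def build_compatibility(db_counts, llm_scores, temperature=1.0):
    """Merge database co-occurrence counts and LLM compatibility scores
    into a single normalized distribution over attribute-object pairs.

    db_counts:  (A, O) co-occurrence statistics from the retrieval database
    llm_scores: (A, O) LLM-judged attribute-object compatibility in [0, 1]
    """
    # Normalize each source so the two scales are comparable.
    db_probs = db_counts / db_counts.sum()
    llm_probs = llm_scores / llm_scores.sum()
    # Merge the two sources (simple average here) and renormalize.
    merged = 0.5 * (db_probs + llm_probs)
    merged = merged ** (1.0 / temperature)
    return merged / merged.sum()

def sample_cache_entries(compatibility, num_entries, rng=None):
    """Sample (attribute, object) pairs proportionally to their compatibility."""
    rng = rng or np.random.default_rng(0)
    flat = compatibility.ravel()
    idx = rng.choice(flat.size, size=num_entries, p=flat)
    # Map flat indices back to (attribute index, object index) pairs.
    return np.unravel_index(idx, compatibility.shape)

The sampled pairs would then be used to retrieve or generate the corresponding cache images, which receive soft attribute labels as described above.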

Main results

Main quantitative results

Comparison with the state of the art. Green indicates ComCa; bold indicates the best among training-free methods. The symbol "-" indicates that results for the competitor are not reported in the original paper, or that their method could not be run due to the lack of public code and/or model weights.

Qualitative results

Qualitative results

Predictions of OVAD, CLIP, and ComCa on sample OVAD images. Green indicates correct predictions, red indicates wrong ones.

Extended qualitative results

Qualitative results extended - Part 1 of 3

(a) Comparison of performance on a mobile phone, a tennis racket, and an apple.

Qualitative results extended - Part 2 of 3

(b) Comparison of performance on a cake, a frisbee flying disc, and a PC monitor.

Qualitative results extended - Part 3 of 3

(c) Comparison of performance on a laptop, a kite, and a skateboard.

Top positive predictions of OVAD, CLIP, and ComCa on sample images from OVAD. Green indicates correct predictions, red indicates wrong ones.

Related Links

There is a lot of excellent work that ComCa builds upon or compares with. Here are some links; we refer the reader to the "References" section of our paper for a comprehensive list.

Open-vocabulary Attribute Detection introduces the OVAD benchmark, which we use to evaluate ComCa.

OvarNet: Towards Open-vocabulary Object Attribute Recognition, LOWA: Localize Objects in the Wild with Attributes and ArtVLM: Attribute Recognition Through Vision-Based Prefix Language Modeling are all training-based competitors that we compare with in our experiments.

We also implement cache-based methods, adapting them to the attribute detection setting. Notably, we compare with Tip-Adapter: Training-free CLIP-Adapter for Better Vision-Language Modeling and SuS-X: Training-Free Name-Only Transfer of Vision-Language Models.

Naturally, the backbone of our work is Learning Transferable Visual Models From Natural Language Supervision (CLIP).

BibTeX

@inproceedings{garosi2025comca,
    author    = {Garosi, Marco and Conti, Alessandro and Liu, Gaowen and Ricci, Elisa and Mancini, Massimiliano},
    title     = {Compositional Caching for Training-free Open-vocabulary Attribute Detection},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year      = {2025},
}