Training Data Attribution for Diffusion Models

Zheng Dai1, David K Gifford1

1Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology

Overview

The use of generative diffusion models for creative purposes has sparked serious ethical and legal discussions, in particular surrounding the sourcing and use of training data. Yet the role that training data plays in producing these models, a key element of this discussion, remains poorly understood. To address this, we tackle the technical challenge of attributing the output of generative diffusion models to specific training data.

Technical Details

We show how a collection of diffusion models trained on different (yet potentially overlapping) subsets of the training data can be used to collaboratively generate a picture. The influence of a given piece of training data can then be removed from the generated picture by removing the contributions of every model that has seen that piece of data. This yields a counterfactual picture, which captures what the original picture would have looked like had that piece of data been absent from the training set. Comparing the counterfactual picture with the original then gives a qualitative or quantitative measure of influence.
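The Python sketch below illustrates this ensemble-and-ablate idea on a toy example. Everything in it is a hypothetical stand-in: ToyDenoiser plays the role of a diffusion model trained on one data split, predict_noise its learned noise predictor, and the one-line update a real sampler step. The actual models, splits, and sampler used in the paper may differ.

```python
import numpy as np

class ToyDenoiser:
    """Hypothetical stand-in for a diffusion model trained on a data subset."""

    def __init__(self, train_indices, seed):
        self.train_indices = set(train_indices)  # which training examples it saw
        self._bias = np.random.default_rng(seed).normal(size=4)  # fake parameters

    def predict_noise(self, x, t):
        # A real model would predict the noise added at diffusion step t.
        return x * 0.1 + self._bias * (t / 10.0)

def ensemble_step(models, x, t):
    """One reverse-diffusion step using the ensemble's average prediction."""
    eps = np.mean([m.predict_noise(x, t) for m in models], axis=0)
    return x - eps  # simplified update; a real sampler follows the DDPM/DDIM rule

def generate(models, steps=10, seed=1):
    # Fixed seed so every generation starts from the same initial noise.
    x = np.random.default_rng(seed).normal(size=4)
    for t in reversed(range(1, steps + 1)):
        x = ensemble_step(models, x, t)
    return x

# Train an ensemble on overlapping splits (indices into a training set).
splits = [[0, 1, 2], [2, 3, 4], [4, 5, 0]]
models = [ToyDenoiser(s, seed=i) for i, s in enumerate(splits)]

original = generate(models)

# Counterfactual: drop every model whose split contains training example 2,
# then regenerate from the same initial noise.
target = 2
retained = [m for m in models if target not in m.train_indices]
counterfactual = generate(retained)

# Influence of example 2 = how much the output changes without it.
influence = np.linalg.norm(original - counterfactual)
print(f"Influence of training example {target}: {influence:.4f}")
```

Fixing the initial noise across both generations isolates the effect of ablating the models, so any difference between the two outputs is attributable to the removed training example rather than to sampling randomness.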

Main Contributions

We provide, to our knowledge, the first method for attributing the samples produced by generative diffusion models to specific training data.

Check out the preprint here
Code and data available here