Benchmarking the Attribution Quality of Vision Models

datacite.relation.isDescribedBy https://arxiv.org/abs/2407.11910
dc.contributor.author Hesse, Robin
dc.contributor.author Schaub-Meyer, Simone
dc.contributor.author Roth, Stefan
dc.date.accessioned 2025-04-03T12:11:16Z
dc.date.available 2025-04-03T12:11:16Z
dc.date.created 2024-12
dc.date.issued 2025-04-03
dc.description Attribution maps are among the most established tools for explaining the behavior of computer vision models. They assign importance scores to input features, indicating how relevant each feature is for the prediction of a deep neural network. While much research has gone into proposing new attribution methods, evaluating them properly remains a difficult challenge. In this work, we propose a novel evaluation protocol that overcomes two fundamental limitations of the widely used incremental-deletion protocol: the out-of-domain issue and the lack of inter-model comparability. This allows us to evaluate 23 attribution methods and to study how different design choices of popular vision backbones affect their attribution quality. We find that intrinsically explainable models outperform standard models and that raw attribution values exhibit a higher attribution quality than previously reported. Further, we show consistent changes in attribution quality when varying the network design, indicating that some standard design choices promote attribution quality.
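The incremental-deletion protocol referenced in the description can be sketched as follows. This is a minimal, self-contained illustration with a toy linear model; the function name, step scheme, and zero baseline are assumptions for illustration, not the deposit's actual implementation:

```python
import numpy as np

def incremental_deletion(model, image, attribution, steps=10, baseline=0.0):
    """Incremental-deletion sketch: remove input features in order of
    decreasing attribution and record the model score after each step.
    For a faithful attribution, the score should drop quickly."""
    order = np.argsort(-attribution.flatten())  # most important features first
    per_step = len(order) // steps
    scores = [model(image)]
    perturbed = image.copy().flatten()
    for s in range(steps):
        idx = order[s * per_step:(s + 1) * per_step]
        perturbed[idx] = baseline               # "delete" these features
        scores.append(model(perturbed.reshape(image.shape)))
    return np.array(scores)

# Toy example: a linear "model" whose true feature importances are its weights,
# so using the weights as the attribution map gives an ideal deletion curve.
rng = np.random.default_rng(0)
w = rng.random((8, 8))
model = lambda x: float((w * x.reshape(8, 8)).sum())
image = np.ones((8, 8))
curve = incremental_deletion(model, image, attribution=w, steps=8)
```

Comparing the areas under such deletion curves is one common way to score attribution methods; the paper's protocol addresses the out-of-domain artifacts that this kind of pixel removal can introduce.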
dc.identifier.uri https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/4531
dc.language.iso en
dc.rights.license Apache-2.0 (https://www.apache.org/licenses/LICENSE-2.0)
dc.subject deep learning
dc.subject interpretability
dc.subject explainable artificial intelligence
dc.subject evaluation
dc.subject.classification 4.43-05
dc.subject.ddc 004
dc.title Benchmarking the Attribution Quality of Vision Models
dc.type Software
dc.type Model
dcterms.accessRights openAccess
person.identifier.orcid 0000-0003-0458-5483
person.identifier.orcid 0000-0001-8644-1074
person.identifier.orcid 0000-0001-9002-9832
tuda.project EC/H2020 | 866008 | RED
tuda.project HMWK | 500/10.001-(00111) | 3AI - TP Roth
tuda.project HMWK | 500/10.001-(00012) | TAM - TP Roth
tuda.project HMWK | 500/10.001-(00111) | 3AI-NWG Schaub-Meyer
tuda.unit TUDa

Files

Original bundle

Name | Description | Size | Format
idsds_code.zip | Training and inference code | 3.56 MB | ZIP archive
vgg16_imagenet1000_lr0.001_epochs30_step10_checkpoint_best.pth.tar | Model weights | 1.03 GB | Unknown data format
vit_base_patch16_224_imagenet1000_lr0.001_epochs30_step10_checkpoint_best.pth.tar | Model weights | 660.56 MB | Unknown data format
wide_resnet50_2_imagenet1000_lr0.001_epochs30_step10_checkpoint_best.pth.tar | Model weights | 525.95 MB | Unknown data format
resnet152_imagenet1000_lr0.001_epochs30_step10_checkpoint_best.pth.tar | Model weights | 460.42 MB | Unknown data format
resnet101_imagenet1000_lr0.001_epochs30_step10_checkpoint_best.pth.tar | Model weights | 340.58 MB | Unknown data format
resnet50_imagenet1000_lr0.001_epochs30_step10_checkpoint_best.pth.tar | Model weights | 195.34 MB | Unknown data format
bcos_resnet50_imagenet1000_lr0.001_epochs30_step10_checkpoint_best.pth.tar | Model weights | 195.17 MB | Unknown data format
fixup_resnet50_imagenet1000_lr0.001_epochs30_step10_checkpoint_best.pth.tar | Model weights | 194.68 MB | Unknown data format
xresnet50_imagenet1000_lr0.001_epochs30_step10_checkpoint_best.pth.tar | Model weights | 194.61 MB | Unknown data format
bagnet33_imagenet1000_lr0.001_epochs30_step10_checkpoint_best.pth.tar | Model weights | 140.05 MB | Unknown data format
resnet18_imagenet1000_lr0.001_epochs30_step10_checkpoint_best.pth.tar | Model weights | 89.28 MB | Unknown data format
idsds_resnet50_IxG.csv | Inference results | 259.13 KB | Comma-separated values
idsds_resnet50_IG-U.csv | Inference results | 248.53 KB | Comma-separated values
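The model-weight filenames above encode their training hyperparameters. A small sketch that recovers them; the naming scheme is inferred from the file list, not an official specification:

```python
import re

def parse_checkpoint_name(name):
    """Parse hyperparameters encoded in a checkpoint filename of the form
    <arch>_imagenet<classes>_lr<lr>_epochs<epochs>_step<step>_checkpoint_best.pth.tar
    (pattern inferred from the file listing; adjust if the scheme differs)."""
    m = re.match(
        r"(?P<arch>.+)_imagenet(?P<classes>\d+)_lr(?P<lr>[\d.]+)"
        r"_epochs(?P<epochs>\d+)_step(?P<step>\d+)_checkpoint_best\.pth\.tar",
        name,
    )
    if m is None:
        raise ValueError(f"unrecognized checkpoint name: {name}")
    d = m.groupdict()
    return {
        "arch": d["arch"],
        "classes": int(d["classes"]),
        "lr": float(d["lr"]),
        "epochs": int(d["epochs"]),
        "step": int(d["step"]),
    }

info = parse_checkpoint_name(
    "resnet50_imagenet1000_lr0.001_epochs30_step10_checkpoint_best.pth.tar"
)
```

All checkpoints listed here share the same schedule (lr 0.001, 30 epochs, step 10), so the architecture prefix is the only field that varies.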
