I'm currently reproducing the results from your paper using the official repository (UnlearnCanvas), but I'm encountering discrepancies in the CRA metric for both style and object unlearning tasks.
According to the paper, the reported CRA scores are:
- Style Unlearning: UA = 86.29, IRA = 84.59, CRA = 88.43
- Object Unlearning: UA = 75.43, IRA = 77.50, CRA = 81.18
However, in my reproduction using the provided code and instructions, I obtained the following results:
- Style Unlearning: UA = 89.21, IRA = 84.11, CRA = 82.54
- Object Unlearning: UA = 82.00, IRA = 75.49, CRA = 74.90
As you can see, while some metrics match or exceed your reported results, the CRA metric is consistently lower in both tasks.
It would be greatly appreciated if you could share any additional details about the experimental setup used in the paper—such as hyperparameters, seed settings, or evaluation code specifics—that might help explain this discrepancy.
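For reference, this is how my reproduction fixes randomness (a minimal sketch of my own setup, not code from the repository; the helper name `set_seed` and the seed value are my choices), in case the difference comes down to seeding:

```python
import os
import random


def set_seed(seed: int = 42) -> None:
    """Seed every RNG I could find in my environment.

    Remaining nondeterminism (e.g. CUDA kernel scheduling) could
    still account for small metric drift between runs.
    """
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)
    try:  # numpy is seeded only if it is installed
        import numpy as np
        np.random.seed(seed)
    except ImportError:
        pass
    try:  # torch likewise; also force deterministic cuDNN
        import torch
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
    except ImportError:
        pass


set_seed(42)
```

If the paper's runs used a different seeding scheme (or none), knowing that would already narrow things down considerably.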