Jan Sobotka, Luca Baroni, Ján Antolík
This project focuses on decoding visual scenes from population neural activity recorded in the early visual system. For details about the approach and results, please refer to our NeurIPS 2025 paper.
For instructions on getting the datasets, please refer to the README files in the respective directories `csng/cat_v1/` (Synthetic Cat V1), `csng/mouse_v1/` (SENSORIUM 2022), and `csng/brainreader_mouse/` (Brainreader).
Set up an environment from the `environment.yaml` file and activate it (Miniconda):

```bash
conda env create -f environment.yaml
conda activate csng
```

Install the main `csng` package:

```bash
pip install -e .
```

Install the modified packages `neuralpredictors`, `nnfabrik`, `featurevis`, and `sensorium` (modified for Python 3.10 compatibility and additional features), and the packages for the CAE decoder, MonkeySee, and Energy-Guided Diffusion (EGG) in the `pkgs` directory:

```bash
pip install -e pkgs/neuralpredictors pkgs/nnfabrik pkgs/featurevis pkgs/sensorium pkgs/CAE pkgs/MonkeySee pkgs/energy-guided-diffusion
```

Create a `.env` file in the root directory according to the `.env.example` file and make sure to set the path to an existing directory where the data will reside (`DATA_PATH`). You might need to load the environment variable(s) from the `.env` file manually in the terminal:

```bash
export $(cat .env | xargs)
```
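If you prefer to load the variables from Python rather than the shell, here is a minimal sketch, assuming a simple `KEY=VALUE` format in `.env` (for anything fancier, a library such as python-dotenv is the usual choice):

```python
import os

def load_env(path=".env"):
    """Parse simple KEY=VALUE lines from a .env file into os.environ."""
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks, comments, and malformed lines
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

load_env()
print(os.environ["DATA_PATH"])  # must point to an existing data directory
```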
- `README.md` - This file
- `setup.py` - Setup file for the `csng` package
- `environment.yaml` - Environment file with all the dependencies
- `.env.example` - Example of the `.env` file. It is important to set up your own `.env` file in the same directory to be able to run the scripts
- `.gitignore` - Git ignore file
- `pkgs` - Directory containing the modified packages `neuralpredictors`, `nnfabrik`, `featurevis`, and `sensorium`. Directories `pkgs/CAE`, `pkgs/MindEye2`, `pkgs/MonkeySee`, and `pkgs/energy-guided-diffusion` contain code for the CAE decoder, MindEye2, MonkeySee, and Energy-Guided Diffusion (EGG), respectively.
- `csng` - Directory containing the main code for the project (see `csng/README.md` for details):
  - `run_gan_decoder.py` - MEIcoder training pipeline.
  - `run_comparison.py` - Final test-set evaluation and plotting for MEIcoder and baselines.
  - `data.py` - Shared dataset utilities (loading, normalization, cropping, mixing).
  - `losses.py` - Custom losses/metrics (SSIM variants, Alex/CLIP/SwAV, etc.).
  - `generate_meis.py` - Generate neuron-wise MEIs from a pretrained encoder for downstream decoding.
  - `models/readins.py` - MEI readin implementation (possible to extend with custom readins).
  - `models/utils/gan.py` - GAN decoder core and training utilities used by MEIcoder.
  - `utils/` - Helpers for seeding, plotting, model inspection, and training support.
  - `cat_v1/` - Directory with code specific to the cat V1 data (dataset C)
  - `mouse_v1/` - Directory with code specific to the SENSORIUM 2022 mouse V1 data (datasets M-<mouse id> and M-All)
  - `brainreader_mouse/` - Directory with code specific to the mouse V1 data from Cobos E. et al. 2022 (datasets B-<mouse id> and B-All)
  - `<your-data>/` - Directory with code specific to your data (e.g., `cat_v1/`). This folder should include a data-loading utility that can then be combined with other datasets using the code in `csng/data.py` (see the sketch after this list).
- `notebooks/` - Directory with Jupyter notebooks for plotting, inspecting data and model performance, and for demonstration purposes. The notebook `notebooks/train.ipynb` is a minimal example of how to train a model using the `csng` package on one of the provided datasets, serving as a good starting point for your own experiments.
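For a custom `<your-data>/` directory, the sketch below shows the general shape such a data-loading utility could take. It is purely illustrative: `MyV1Dataset`, the file layout, and the tensor shapes are assumptions, and the actual interface expected by `csng/data.py` is defined there.

```python
# Hypothetical data-loading utility for a custom <your-data>/ directory.
# Names, file formats, and shapes are placeholders.
import torch
from torch.utils.data import Dataset, DataLoader

class MyV1Dataset(Dataset):
    """Yields (responses, stimulus) pairs for stimulus reconstruction."""

    def __init__(self, responses_path, stimuli_path):
        # responses: (n_trials, n_neurons); stimuli: (n_trials, 1, H, W)
        self.responses = torch.load(responses_path)
        self.stimuli = torch.load(stimuli_path)

    def __len__(self):
        return len(self.responses)

    def __getitem__(self, idx):
        return self.responses[idx], self.stimuli[idx]

def get_dataloader(responses_path, stimuli_path, batch_size=32):
    """Build a shuffled DataLoader over the custom dataset."""
    return DataLoader(MyV1Dataset(responses_path, stimuli_path),
                      batch_size=batch_size, shuffle=True)
```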
The main training script is `csng/run_gan_decoder.py`. The script is highly configurable via the `config` dictionary defined at the top of the file. Below are the steps to run an experiment:
- Prepare data and MEIs: download/generate datasets as described in the `csng/<dataset>/README.md` files and produce MEIs with `csng/generate_meis.py`. Place the resulting `meis.pt` under `DATA_PATH/.../meis/<data_key>/` so it matches the `meis_path` entries in the config (cat V1 MEIs are linked in `csng/README.md`).
- Activate the environment: `conda activate csng`.
- Choose the dataset block in `csng/run_gan_decoder.py`: uncomment and edit the relevant `config["data"]["<dataset>"]` entry (e.g., `brainreader_mouse`, `cat_v1`, or `mouse_v1`). Verify batch sizes, resize targets, and any neuron coordinate settings.
- Launch training: run `python csng/run_gan_decoder.py`. Checkpoints and logs are written to the run directory configured by `setup_run_dir` (defaults to `DATA_PATH/models/gan/<timestamp>`). Resume or fine-tune by filling the `config["decoder"]["load_ckpt"]` block (see the sketch after this list).
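To make the config edits above concrete, here is a hypothetical excerpt of the dataset and resume blocks; all keys and values are placeholders, and the dictionary in `csng/run_gan_decoder.py` is the authoritative schema:

```python
# Hypothetical excerpt of the config in csng/run_gan_decoder.py;
# consult the file itself for the real keys and defaults.
config = {
    "data": {
        # Keep only the dataset blocks you want to train on.
        "cat_v1": {
            "batch_size": 32,    # verify against your GPU memory
            "resize": (64, 64),  # verify the resize target
        },
    },
    "decoder": {
        # Fill to resume or fine-tune from a previous run.
        "load_ckpt": {
            "ckpt_path": "models/gan/<timestamp>/checkpoint.pt",
        },
    },
}
```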
The top-level `config` dictionary in `csng/run_gan_decoder.py` controls the full experiment:
config["device"],["seed"],["save_run"], and["wandb"]handle reproducibility, checkpointing, and logging.config["data"]holds per-dataset dataloader settings (paths, batch sizes, normalization, cropping, optional neuron coords). Only keep blocks for the datasets you want to train on, and comment out or remove others.config["decoder"]["readin_type"]should staymeifor MEIcoder. However, you can create your own readin modules by inheriting fromcsng.models.readins.ReadIn(seecsng.models.readins.MEIReadInfor example) and then specifying the new class name here.config["decoder"]["model"]defines the GAN core (generator/discriminator shapes) and appends areadins_configentry per dataset. Each MEI entry specifiesmeis_path,mei_target_shape,mei_resize_method, whether MEIs are trainable, contextual modulation (ctx_net_config), and pointwise convolution settings. Shapes must match the crop windows inconfig["crop_wins"].config["decoder"]["loss"], optimizer settings (G_opter_kwargs/D_opter_kwargs), adversarial/Stim loss weights, andn_epochscontrol training dynamics.eval_loss_nameselects the validation metric used for checkpointing.
After edits, rerun `python csng/run_gan_decoder.py` to train with the updated configuration.
Use `csng/run_comparison.py` to evaluate MEIcoder (and baselines) on held-out test sets and plot metrics/reconstructions.
- Pick datasets: enable the needed `config["data"]["<dataset>"]` blocks (e.g., `brainreader_mouse`, `cat_v1`, `mouse_v1`) at the top of `csng/run_comparison.py`; crop windows are inferred there.
- Point to checkpoints: in `config["comparison"]["to_compare"]`, set each model's `ckpt_path` (or provide a `decoder` object); a sketch of such an entry follows this list. `run_name` is used only for labeling figures. `load_best=True` loads the best-validation checkpoint; `eval_all_ckpts` or `find_best_ckpt_according_to` lets you sweep or auto-pick checkpoints.
- Configure evaluation: choose `eval_tier` (default `test`), metrics in `losses_to_plot`, and the output directory via `save_dir`. Keep `save_all_preds_and_targets=True` if you need the full tensors alongside the plots. The optional `load_ckpt` lets you reload prior comparison results to re-plot without recomputing.
- Run: `python csng/run_comparison.py`. Results are saved under `config["comparison"]["save_dir"]`.
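As a rough illustration of the settings above, the comparison block could look like the following; all names and values are placeholders, and `csng/run_comparison.py` defines the actual schema:

```python
# Hypothetical shape of the comparison settings in csng/run_comparison.py;
# names and values are placeholders.
comparison = {
    "to_compare": {
        "MEIcoder": {
            "ckpt_path": "models/gan/<timestamp>/",  # or provide a decoder object
            "run_name": "MEIcoder",                  # used only as a figure label
            "load_best": True,                       # pick the best-val checkpoint
        },
    },
    "eval_tier": "test",                 # held-out split to evaluate
    "losses_to_plot": ["ssim"],          # metrics to compute/plot (placeholder)
    "save_dir": "results/comparison",    # output directory for metrics/plots
    "save_all_preds_and_targets": True,  # keep full tensors alongside plots
}
```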
If you find our repository useful, please consider citing:
```bibtex
@inproceedings{
sobotka2025meicoder,
title={{MEI}coder: Decoding Visual Stimuli from Neural Activity by Leveraging Most Exciting Inputs},
author={Jan Sobotka and Luca Baroni and J{\'a}n Antol{\'\i}k},
booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems},
year={2025},
url={https://openreview.net/forum?id=V3WQoshcZe}
}
```
