# MLRF

Machine Learning Research Flashcards (for Anki and Obsidian)

## Description

MLRF is a collection of machine learning flashcards that can be used with Anki and Obsidian. The flashcards in this repository are associated with scientific research papers in the field of machine learning.

As a machine learning researcher, I read many papers to stay current with the state of the art. Yet months later I often remembered a paper only vaguely, thinking "I read something about that" without retaining the specifics. Inspired by Michael Nielsen's article "Augmenting Long-term Memory", I started using Anki and spaced repetition to fix this.

These flashcards are not a substitute for reading papers, but a complement to deepen and retain your understanding. The papers are initially selected based on my interests, but this repository is open to community contributions. If you use these flashcards with Anki or Obsidian, I'd welcome your additions and improvements.

## Preview

*(preview image of the flashcards)*

## Papers

| Title | URL | Flashcards |
| --- | --- | --- |
| 3D Gaussian Splatting for Real-Time Radiance Field Rendering | [arXiv] | [gaussian_splatting.csv] |
| A Simple Framework for Contrastive Learning of Visual Representations | [arXiv] | [simclr.csv] |
| Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour | [arXiv] | [accurate_large_minibatch_sgd.csv] |
| An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale | [arXiv] | [vision_transformers.csv] |
| Anchor Pruning for Object Detection | [arXiv] | [anchor_pruning.csv] |
| Attention Is All You Need | [arXiv] | [attention_is_all_you_need.csv] |
| Auto-Encoding Variational Bayes | [arXiv] | [vae.csv] |
| AutoSlim: Towards One-Shot Architecture Search for Channel Numbers | [arXiv] | [autoslim.csv] |
| Barlow Twins: Self-Supervised Learning via Redundancy Reduction | [arXiv] | [barlow_twins.csv] |
| Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift | [arXiv] | [batchnorm.csv] |
| DETR: End-to-End Object Detection with Transformers | [arXiv] | [detr.csv] |
| DUSt3R: Geometric 3D Vision Made Easy | [arXiv] | [dust3r.csv] |
| Deep Reinforcement Learning with Double Q-learning | [arXiv] | [double_qlearning.csv] |
| Denoising Diffusion Probabilistic Models | [arXiv] | [diffusion_models.csv] |
| Distilling the Knowledge in a Neural Network | [arXiv] | [knowledge_distillation.csv] |
| Extracting and Composing Robust Features with Denoising Autoencoders | [ICML] | [denoising_autoencoder.csv] |
| FaceNet: A Unified Embedding for Face Recognition and Clustering | [arXiv] | [facenet.csv] |
| Focal Loss for Dense Object Detection | [arXiv] | [retinanet.csv] |
| GigaPose: Fast and Robust Novel Object Pose Estimation via One Correspondence | [arXiv] | [gigapose.md] |
| High-Resolution Image Synthesis with Latent Diffusion Models | [arXiv] | [stable_diffusion.csv] |
| Instant Neural Graphics Primitives with a Multiresolution Hash Encoding | [arXiv] | [instant_ngp.csv] |
| Learned Thresholds Token Merging and Pruning for Vision Transformers | [arXiv] | [ltmp.csv] |
| Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware | [arXiv] | [act.csv] |
| Learning Transferable Visual Models From Natural Language Supervision | [arXiv] | [clip.csv] |
| LoRA: Low-Rank Adaptation of Large Language Models | [arXiv] | [lora.csv] |
| Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields | [arXiv] | [mipnerf360.csv] |
| Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields | [arXiv] | [mipnerf.csv] |
| Mixture-of-Experts with Expert Choice Routing | [arXiv] | [expert_choice.csv] |
| MobileNetV2: Inverted Residuals and Linear Bottlenecks | [arXiv] | [mobilenetv2.csv] |
| MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications | [arXiv] | [mobilenetv1.csv] |
| Multi-Scale Context Aggregation by Dilated Convolutions | [arXiv] | [multi_scale_context_dilated_convolutions.csv] |
| Multiple Choice Learning: Learning to Produce Multiple Structured Outputs | [NeurIPS] | [multiple_choice_learning.csv] |
| NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis | [arXiv] | [nerf.csv] |
| On Network Design Spaces for Visual Recognition | [arXiv] | [network_design_spaces.csv] |
| Once-for-All: Train One Network and Specialize it for Efficient Deployment | [arXiv] | [once_for_all.csv] |
| Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer | [arXiv] | [moe.csv] |
| PaLM-E: An Embodied Multimodal Language Model | [arXiv] | [palm_e.md] |
| Playing Atari with Deep Reinforcement Learning | [arXiv] | [deep_rl.csv] |
| Proximal Policy Optimization Algorithms | [arXiv] | [ppo.csv] |
| RT-1: Robotics Transformer for Real-World Control at Scale | [arXiv] | [rt1.md] |
| Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers | [arXiv] | [setr.csv] |
| SSD: Single Shot MultiBox Detector | [arXiv] | [ssd.csv] |
| Segment Anything | [arXiv] | [segment_anything.csv] |
| Slimmable Neural Networks | [arXiv] | [slimmable_neural_networks.csv] |
| Squeeze-and-Excitation Networks | [arXiv] | [squeeze_and_excitation.csv] |
| Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity | [arXiv] | [switch_transformer.csv] |
| Templates for 3D Object Pose Estimation Revisited: Generalization to New Objects and Robustness to Occlusions | [arXiv] | [templates_for_3d_pose.csv] |
| The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks | [arXiv] | [lottery_ticket.csv] |
| Token Merging: Your ViT But Faster | [arXiv] | [tome.csv] |
| Understanding the Effective Receptive Field in Deep Convolutional Neural Networks | [arXiv] | [understanding_receptive_field.csv] |
| Universally Slimmable Networks and Improved Training Techniques | [arXiv] | [universally_slimmable_networks.csv] |
| V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation | [arXiv] | [dice_loss.csv] |

## Usage

### Obsidian

To use the flashcards with Obsidian, symlink the flashcards directory into your Obsidian vault:

```shell
ln -s /path/to/MLRF/flashcards /path/to/your/obsidian/vault/MLRF
```

The flashcards are stored in Markdown format using Obsidian callouts for questions and answers:

- `> [!question]` for the question
- `> [!answer]-` for the answer (the trailing `-` makes the callout collapsed by default)
- `> [!explanation]-` for additional explanation (also collapsed by default)

You can browse, read, and edit the flashcards directly in Obsidian. The Markdown format makes it easy to integrate with your existing notes and knowledge base.
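As an illustration, a flashcard in this format could look like the following (the question and answer text here are invented for the example, not taken from the repository):

```markdown
> [!question]
> What problem does spaced repetition address?

> [!answer]-
> Forgetting: reviews are scheduled just before you would otherwise forget a card.

> [!explanation]-
> Review intervals grow each time a card is answered correctly.
```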

### Anki

The flashcards in this repository are primarily made for Anki, but since the cards are stored as plain CSV files you can also use them as input to a different flashcard system. If you edit the raw CSV files and want to sync them back to this repository, leave the `guid` column of new cards empty. To import the decks into Anki, you also need to install the Anki add-on CrowdAnki.
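To work with the CSV files directly, the standard library is enough. This is a minimal sketch; the column names used here (`guid`, `question`, `answer`) are an assumption for illustration, so check the header row of the actual files:

```python
import csv
import io

# Sketch: parse a flashcard row as the CSV files in this repository might
# store it. The column names (guid, question, answer) are assumptions --
# inspect the real header row before relying on them.
sample = io.StringIO(
    "guid,question,answer\n"
    ",What goes in the guid column of a new card?,Nothing; leave it empty\n"
)

rows = list(csv.DictReader(sample))
for row in rows:
    # An empty guid marks a card as new (see the note above).
    status = "new card" if row["guid"] == "" else "existing card"
    print(f"{status}: {row['question']}")
```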

Once you have cloned this repository, you can use `tools/source_to_anki.py` to create a deck that can be imported into Anki. `tools/anki_to_source.py` can be used to sync updated or new cards of your own back into this repository.

To run these scripts, install Brain Brew and pandas: `pip install brain-brew pandas`.

#### source_to_anki

```
usage: python tools/source_to_anki.py [-h] [--include INCLUDE [INCLUDE ...]] [--exclude EXCLUDE [EXCLUDE ...]]

Tool to convert the source format of this repository to a crowdAnki folder that can be imported into Anki.

optional arguments:
  -h, --help            show this help message and exit
  --include INCLUDE [INCLUDE ...]
                        Convert only part of this repository by passing a list of the csv files to convert,
                        e.g. --include ofa.csv mobilenetv2.csv
  --exclude EXCLUDE [EXCLUDE ...]
                        Exclude certain papers from the crowdAnki export folder,
                        e.g. --exclude ofa.csv mobilenetv2.csv
```

The resulting export folder is created in `MLRF/build/`. To add the cards to Anki:

- Open Anki and make sure all your devices are synchronised.
- In the **File** menu, select **CrowdAnki: Import from disk**.
- Browse for and select `MLRF/build/`.

Recommended next steps:

- Review all cards in the MLRF deck and delete the ones you are not interested in (see also TODO).
- Move the cards to a deck of your own (this lets you use your own card scheduling settings).

#### anki_to_source

- Open Anki and make sure all your devices are synchronised.
- In the **File** menu, select **CrowdAnki: Snapshot**, and note where the snapshot is stored.

```
usage: python tools/anki_to_source.py [-h] crowdanki_folder

Tool to convert a crowdAnki export folder to the source format of this repository.

positional arguments:
  crowdanki_folder  Location of the crowdAnki export folder.
```

Important notes:

- This tool only extracts cards that use this repository's `paper_basic` note model, so you can snapshot a deck that contains more than just your machine learning research flashcards.
- `paper_basic` cards tagged with `DoNotSync` are ignored.
- Tags are not copied to this repository.
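The filtering rules above can be sketched in a few lines. The field names used here (`note_models`, `notes`, `note_model_uuid`, `tags`) are assumptions based on the CrowdAnki `deck.json` export format, and the toy data is invented; the real tool uses Brain Brew rather than this hand-rolled filter:

```python
# Toy stand-in for a parsed CrowdAnki deck.json (field names are assumptions).
deck = {
    "note_models": [{"name": "paper_basic", "crowdanki_uuid": "uuid-1"}],
    "notes": [
        {"note_model_uuid": "uuid-1", "tags": [], "fields": ["Q1", "A1"]},
        {"note_model_uuid": "uuid-1", "tags": ["DoNotSync"], "fields": ["Q2", "A2"]},
        {"note_model_uuid": "uuid-2", "tags": [], "fields": ["other"]},
    ],
}

# Keep only notes whose model is paper_basic...
paper_basic_uuids = {
    m["crowdanki_uuid"] for m in deck["note_models"] if m["name"] == "paper_basic"
}
# ...and that are not tagged DoNotSync.
synced = [
    n for n in deck["notes"]
    if n["note_model_uuid"] in paper_basic_uuids and "DoNotSync" not in n["tags"]
]
print(len(synced))  # only the first note survives both filters
```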
