ronpay/TAME

LCMP & TAME

This repo provides official implementations for LCMP & TAME:

TAMEing Long Contexts in Personalization:
Towards Training-Free and State-Aware MLLM Personalized Assistant
(KDD 2026 Research Track)

Overview

🔥 LCMP: A long-context MLLM personalization benchmark.

[Figure: Construction pipeline of the LCMP benchmark.]

🔥 TAME: A training-free, state-aware personalized MLLM assistant.

[Figure: Architecture of the TAME framework.]

🏗️ LCMP

LCMP is the first benchmark for evaluating Long-Context MLLM Personalization, a setting that is more practical than, and conceptually different from, existing static, single-turn, visual-identification-oriented evaluations.

LCMP Content

We will release the complete LCMP benchmark in data/concept, which includes personalized pets, people, and objects, along with their associated dialogues, questions, and personalized images.

Construct LCMP

We also provide code to reproduce LCMP by adding more personalized concepts.

Generate New Personalized Concept for LCMP (Pet Example)

  1. Sample an image from the COCO dataset and save it as dataset_maker/concept_pet/$concept_id/base.png.

  2. Run the following commands:

```shell
# generate the concept profile and interaction history
python dataset_maker/generate_profile_history_question.py --concept_id $concept_id
# generate personalized images
python dataset_maker/generate_prompt_images.py --concept_id $concept_id
```
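If you want to add several concepts in one go, the two steps above can be sketched as a small batch driver. This is a hypothetical helper, not part of the repo: the concept IDs and the `dry_run` guard are illustrative; only the script paths and the `--concept_id` flag come from the commands above.

```python
# Hypothetical batch driver for extending LCMP with several new pet concepts.
# Only the script paths and --concept_id flag are from the repo; the concept
# IDs and dry_run flag are illustrative.
import shlex
import subprocess
from pathlib import Path

def concept_commands(concept_id: str) -> list[str]:
    """The two pipeline commands for one concept, in the required order."""
    return [
        f"python dataset_maker/generate_profile_history_question.py --concept_id {concept_id}",
        f"python dataset_maker/generate_prompt_images.py --concept_id {concept_id}",
    ]

def run_concept(concept_id: str, dry_run: bool = True) -> list[str]:
    # Step 1 precondition: a sampled COCO image saved as base.png.
    base = Path("dataset_maker/concept_pet") / concept_id / "base.png"
    cmds = concept_commands(concept_id)
    if not dry_run:
        assert base.exists(), f"missing {base}"
        for cmd in cmds:
            subprocess.run(shlex.split(cmd), check=True)
    return cmds

for cid in ["pet_001", "pet_002"]:  # hypothetical concept IDs
    print(run_concept(cid)[0])
```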

🚀 TAME

TAME is a training-free, state-aware personalized MLLM assistant powered by dual memories.

Run TAME

  1. Move the constructed concepts to data/concept.

  2. Run the following commands:

```shell
# build memory on all concepts using InternVL3-8B
python method/main.py build --model internvl
# run TAME on all concepts using InternVL3-8B
python method/main.py qa --model internvl
```
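The build stage must finish before QA, since QA reads the memories that build writes. A minimal orchestration sketch of that ordering (the CLI strings are copied from the commands above; the `dry_run` guard is an assumption for illustration):

```python
# Hedged sketch: run the two TAME stages in order. The stage names and
# --model flag are from the README; the dry_run guard is illustrative.
import shlex
import subprocess

STAGES = ("build", "qa")  # memory building must precede question answering

def tame_commands(model: str = "internvl") -> list[str]:
    return [f"python method/main.py {stage} --model {model}" for stage in STAGES]

def run_tame(model: str = "internvl", dry_run: bool = True) -> list[str]:
    cmds = tame_commands(model)
    if not dry_run:
        for cmd in cmds:
            subprocess.run(shlex.split(cmd), check=True)  # fail fast on errors
    return cmds

print(run_tame()[1])  # python method/main.py qa --model internvl
```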

✅ Evaluation

```shell
# run evaluation on generated answers
python evaluator/evaluator.py $input_file $output_file
```
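The evaluator's input schema is not documented here, so as a rough illustration of what scoring generated answers can look like, here is a toy exact-match scorer. It is not the repo's evaluator: the field names ("prediction", "reference") are assumptions made for this sketch.

```python
# Toy answer scorer, NOT the repo's evaluator: the record field names
# ("prediction", "reference") are assumed for illustration only.
def exact_match_rate(records: list[dict]) -> float:
    """Fraction of predictions that match the reference answer exactly
    (case-insensitive, whitespace-trimmed)."""
    if not records:
        return 0.0
    hits = sum(
        r["prediction"].strip().lower() == r["reference"].strip().lower()
        for r in records
    )
    return hits / len(records)

rows = [
    {"prediction": "Mochi", "reference": "Mochi"},
    {"prediction": "a shelter", "reference": "the park"},
]
print(exact_match_rate(rows))  # 0.5
```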

📚 Project Structure

  • dataset_maker/: tools to construct/extend LCMP concepts
  • method/: TAME pipeline (memory building + QA)
  • evaluator/: evaluation scripts
  • data/: benchmark concepts

📖 Citation

@inproceedings{hong2026tameing,
  author    = {Hong, Rongpei and Lang, Jian and Zhong, Ting and Wang, Yong and Zhou, Fan},
  title     = {TAMEing Long Contexts in Personalization: Towards Training-Free and State-Aware MLLM Personalized Assistant},
  booktitle = {ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD)},
  year      = {2026},
}
