TUM Master's Seminar: Advanced Topics in Vision-Language Models (SS 2026)

Content


The seminar explores cutting-edge advancements in Vision-Language Models (VLMs), covering topics central to their development and application. Through a deep dive into seminal papers and the latest research, students will gain an understanding of how models like CLIP, Qwen, and Stable Diffusion work at an architectural and mathematical level. By the end of the seminar, students should have a comprehensive perspective on the current state and future potential of vision-language modeling, and be equipped to evaluate new research, identify promising applications, and contribute meaningfully to the responsible development of the field.


This is a Master's-level course. Since the topics are advanced, prior participation in at least one of the following lectures is required:

  • Introduction to Deep Learning (IN2346)
  • Machine Learning (IN2064)

Additionally, we recommend having taken at least one advanced deep learning lecture, for example:

  • AML: Deep Generative Models (CIT4230003)
  • Machine Learning for Graphs and Sequential Data (IN2323)
  • Computer Vision III: Detection, Segmentation, and Tracking (IN2375)
  • Machine Learning for 3D Geometry (IN2392)
  • Advanced Natural Language Processing (CIT4230002)
  • Advanced Deep Learning for Computer Vision (ADL4CV, IN2390)
  • Advanced Deep Learning for Robotics (ADL4R, IN2349)

or a related practical.

Organization


The preliminary meeting will take place at 2 pm on Wednesday, 9 February 2026 on Zoom.


The seminar awards 5 ECTS credits and will take place in person at SAP Labs Munich on the Garching campus.


All students will be matched to one topic group consisting of a primary paper and two secondary papers. They are expected to give one short and one long presentation on their primary paper (from the perspective of an academic reviewer), as well as a one-slide presentation on each of the secondary papers from two different perspectives (industry practitioner and academic researcher).


For questions, please contact luca.eyring@tum.de or yiran.huang@helmholtz-munich.de.


Topics to select from:

Foundation VLMs


  1. Qwen3-VL Technical Report
  2. FLAIR: VLM with Fine-grained Language-informed Image Representations
  3. COSMOS: Cross-Modality Self-Distillation for Vision Language Pre-training
  4. Vision-R1: Incentivizing Reasoning Capability in Multimodal Large Language Models

Text-to-Image Models


  1. Qwen-Image Technical Report
  2. Flow-GRPO: Training Flow Matching Models via Online RL
  3. Align Your Flow: Scaling Continuous-Time Flow Map Distillation
  4. ReNO: Enhancing One-step Text-to-Image Models through Reward-based Noise Optimization

Explainability and Mechanistic Interpretability (SAEs)


  1. Sparse Autoencoders Find Highly Interpretable Features in Language Models
  2. Sparse Autoencoders Learn Monosemantic Features in Vision-Language Models
  3. ConceptScope: Characterizing Dataset Bias via Disentangled Visual Concepts
  4. Vision Transformers Need Registers

Foundation Model Adaptation


  1. A Systematic Study of Model Merging Techniques in Large Language Models
  2. How to Merge Your Multimodal Models Over Time?
  3. DeLoRA: Decoupling Angles and Strength in Low-rank Adaptation
  4. GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection

Requirements


Successful participation in the seminar includes:

  • Active participation in the entire event: we have a 70% attendance policy for this seminar, i.e., you need to attend at least 5 of the 7 sessions.
  • Short presentation (10-minute talk, including questions)
  • Long presentation (20-minute talk, including questions)

Registration


Registration must be done through the TUM Matching Platform.