# AgilePruner

**Repository Path**: lenghong/AgilePruner

## Basic Information

- **Project Name**: AgilePruner
- **Description**: [ICLR 2026] AgilePruner: An Empirical Study of Attention and Diversity for Adaptive Visual Token Pruning in Large Vision-Language Models
- **Primary Language**: Unknown
- **License**: Apache-2.0
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2026-03-28
- **Last Updated**: 2026-03-28

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README

# [ICLR 2026] AgilePruner: An Empirical Study of Attention and Diversity for Adaptive Visual Token Pruning in Large Vision-Language Models

Changwoo Baek\*, Jouwon Song\*, Sohyeon Kim\*, Kyeongbo Kong†

\*Equal contribution, †Corresponding author

[**🌐 Project Page**](https://cvsp-lab.github.io/AgilePruner/) | [**📄 Paper**](http://arxiv.org/abs/2603.01236)

## 🎉 News

- **[2026/01]** 🔥 Our paper has been accepted to **ICLR 2026!** 🎊
- **[2026/02]** 🚀 Project page is now live!

## 📖 Overview

Large Vision-Language Models (LVLMs) have adopted visual token pruning strategies to mitigate the substantial computational overhead incurred by long visual token sequences. While prior work focuses primarily on either attention-based or diversity-based pruning, an in-depth analysis of the characteristics and limitations of these approaches has remained largely unexplored. In this work, we conduct a thorough empirical analysis, using effective rank (erank) as a measure of feature diversity and attention score entropy to investigate how visual tokens are processed, and we analyze the strengths and weaknesses of each approach.

## 🔍 Key Findings

Our analysis reveals two key insights:

1. Diversity-aware hybrid pruning methods preserve less feature diversity than intended, and **the diversity they do retain is closely tied to increased hallucination** frequency compared to attention-based pruning.
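
To make the two measures concrete, here is a minimal NumPy sketch of how they are commonly computed. It assumes the standard definition of effective rank (the exponential of the Shannon entropy of the normalized singular-value spectrum) and plain Shannon entropy for attention scores; this is an illustration, not the paper's actual implementation.

```python
import numpy as np

def effective_rank(features: np.ndarray) -> float:
    """Effective rank (erank): exp of the Shannon entropy of the
    normalized singular-value distribution of a feature matrix.
    (Standard definition; assumed here, not taken from the paper's code.)"""
    s = np.linalg.svd(features, compute_uv=False)
    p = s / s.sum()
    p = p[p > 1e-12]  # drop numerically zero singular values
    return float(np.exp(-np.sum(p * np.log(p))))

def attention_entropy(attn: np.ndarray) -> float:
    """Shannon entropy of a normalized attention score distribution."""
    a = attn[attn > 1e-12]
    return float(-np.sum(a * np.log(a)))

# Orthogonal token features -> maximal diversity: erank equals the rank.
print(effective_rank(np.eye(4)))           # -> 4.0

# Identical token features -> minimal diversity: erank collapses to 1.
print(effective_rank(np.ones((4, 8))))     # -> 1.0

# Uniform attention over 8 tokens -> entropy ln(8) ≈ 2.079.
print(attention_entropy(np.full(8, 1 / 8)))
```

Intuitively, a pruning method that keeps a diverse token subset should keep erank high, while a sharply peaked (low-entropy) attention distribution signals that a few tokens dominate.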