SharpZO: Hybrid Sharpness-Aware Vision Language Model Prompt Tuning via Forward-Only Passes

Yifan Yang, Zhen Zhang, Rupak Vignesh Swaminathan, Jing Liu, Nathan Susanj, Zheng Zhang
University of California, Santa Barbara · Amazon AGI

Abstract

Fine-tuning vision language models (VLMs) achieves remarkable performance across various downstream tasks; yet, it requires access to model gradients through backpropagation (BP), making it unsuitable for memory-constrained, inference-only edge devices. To address this limitation, prior work has explored various BP-free fine-tuning methods. However, these approaches typically rely on high-variance evolutionary strategies (ES) or zeroth-order (ZO) optimization and often fail to achieve satisfactory performance. In this paper, we propose a hybrid Sharpness-aware Zeroth-order optimization (SharpZO) approach, specifically designed to enhance the performance of ZO VLM fine-tuning via sharpness-aware warm-up training. SharpZO features a two-stage optimization process: a sharpness-aware ES stage that globally explores and smooths the loss landscape to construct a strong initialization, followed by a fine-grained local search via sparse ZO optimization. The entire optimization relies solely on forward passes. Detailed theoretical analysis and extensive experiments on CLIP models demonstrate that SharpZO significantly improves accuracy and convergence speed, achieving up to a 7% average gain over state-of-the-art forward-only methods.
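
To make the Stage 1 idea concrete, below is a minimal, forward-only sketch of a sharpness-aware fitness function that an ES such as CMA-ES could minimize. It is an illustrative assumption, not the paper's exact formulation: the function name, the generic scalar loss_fn, the radius rho, and the probe count are all placeholders.

import numpy as np

def sharpness_aware_fitness(loss_fn, theta, rho=0.05, num_probes=2, rng=None):
    # Score a candidate by its worst observed loss within an L2 ball of
    # radius rho, approximated with a few random probes (forward passes only).
    # Returning the maximum penalizes sharp minima, steering the ES toward
    # flatter regions of the loss landscape.
    rng = np.random.default_rng() if rng is None else rng
    worst = loss_fn(theta)
    for _ in range(num_probes):
        u = rng.standard_normal(theta.shape)
        u *= rho / (np.linalg.norm(u) + 1e-12)  # project onto the rho-sphere
        worst = max(worst, loss_fn(theta + u))
    return worst

An ES that ranks candidates by this fitness instead of the raw loss favors flat basins, which is what gives Stage 2 a strong, smooth initialization.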

SharpZO Pipeline

(a) The overall training pipeline of SharpZO, consisting of a two-stage optimization process. (b) Visualization of the smoothed loss landscape after Stage 1 sharpness-aware CMA-ES optimization. (c) Training dynamics of the sharpness-aware CMA-ES method. (d) RGE-based gradient estimation during sparse ZO training in Stage 2.
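
For reference, panel (d)'s RGE (randomized gradient estimation) can be sketched as a central-difference estimator averaged over a few random directions; an optional binary mask restricts perturbations to a sparse subset of coordinates, in the spirit of the sparse ZO stage. This is a minimal sketch under those assumptions, with a generic loss_fn and hypothetical parameter names, not the paper's code.

import numpy as np

def rge_gradient(loss_fn, theta, mask=None, mu=1e-3, num_queries=4, rng=None):
    # Two-point (central-difference) randomized gradient estimator:
    # g ~ average over queries of [(f(theta + mu*u) - f(theta - mu*u)) / (2*mu)] * u,
    # using only forward passes of the model.
    rng = np.random.default_rng() if rng is None else rng
    grad = np.zeros_like(theta)
    for _ in range(num_queries):
        u = rng.standard_normal(theta.shape)
        if mask is not None:
            u = u * mask  # perturb only the unmasked (trainable) coordinates
        delta = (loss_fn(theta + mu * u) - loss_fn(theta - mu * u)) / (2.0 * mu)
        grad += delta * u
    return grad / num_queries

# Illustrative usage: one ZO-SGD step on the Stage 1 initialization.
# theta = theta - lr * rge_gradient(loss_fn, theta, mask)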


Training Curve and Downstream Generalization Results

(a) Comparison between SharpZO and other ZO prompt-tuning baselines. SharpZO demonstrates significantly lower variance than other ZO-based baselines such as ZIP and BlackVIP. (b) Fine-tuned performance across all 11 tasks, compared with ZIP, BlackVIP, and BBT. All experiments are conducted using the CLIP model with a ViT-B/16 backbone.


BibTeX

@article{yang2025sharpzo,
  title={SharpZO: Hybrid Sharpness-Aware Vision Language Model Prompt Tuning via Forward-Only Passes},
  author={Yang, Yifan and Zhang, Zhen and Swaminathan, Rupak Vignesh and Liu, Jing and Susanj, Nathan and Zhang, Zheng},
  journal={arXiv preprint arXiv:2506.20990},
  year={2025}
}