Continual image super-resolution (CISR) aims to efficiently adapt a pre-trained model to a variety of tasks while retaining knowledge from previously learned ones, minimizing the need for costly independent training. The primary challenges are catastrophic forgetting, caused by varying data distributions and degradation types, and the need for high adaptability. While prompt-based continual learning has proven effective in image classification, directly applying it to super-resolution (SR) often fails to meet the demands of detailed pixel-level restoration and of domain discrimination over low-level characteristics. To address these challenges, we propose Learning Prompt Adapters (LPA), which dynamically generates pixel-wise prompts from a combination of multi-granularity prompt bases and identities. By adaptively integrating these prompts into the Transformer architecture, we improve the model’s performance on fine-grained details in SR tasks while enhancing its adaptability to new tasks and preserving knowledge from previous ones. By organizing the low-rank prompt bases with specific identities, we establish an effective mechanism for managing cross-task differences and enriching the prompts. Extensive experiments on benchmarks comprising the NYU, RealSR, DIV2K, REDS, and MANGA109 datasets with diverse degradation types demonstrate that LPA significantly outperforms existing continual learning methods. Code is available at: https://github.com/dummerchen/LPA.
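The core idea of composing pixel-wise prompts from low-rank bases mixed by an identity vector can be illustrated with a minimal sketch. This is an assumption-laden toy in NumPy, not the paper's implementation: all names (`U`, `V`, `pixelwise_prompt`, `task_identity`), the softmax mixing, and the toy sizes are illustrative choices.

```python
import numpy as np

# Hypothetical sketch of low-rank prompt composition (names and sizes are
# illustrative assumptions, not the paper's API): a bank of low-rank prompt
# bases is mixed by a per-task identity vector into a pixel-wise prompt map.

rng = np.random.default_rng(0)

num_bases, rank, channels = 4, 8, 32   # toy sizes for the basis bank
H, W = 16, 16                          # feature-map resolution

# Each basis is stored in low-rank factored form: U (C x r) and V (r x H*W),
# so a full C x H*W prompt is never materialized per basis.
U = rng.standard_normal((num_bases, channels, rank)) * 0.02
V = rng.standard_normal((num_bases, rank, H * W)) * 0.02

def pixelwise_prompt(identity):
    """Mix the low-rank bases with an identity vector into a C x H x W prompt."""
    weights = np.exp(identity) / np.exp(identity).sum()  # softmax over bases
    prompt = np.einsum("n,ncr,nrp->cp", weights, U, V)   # (C, H*W)
    return prompt.reshape(channels, H, W)

task_identity = rng.standard_normal(num_bases)  # learned per task in practice
prompt = pixelwise_prompt(task_identity)
print(prompt.shape)  # (32, 16, 16)
```

In a Transformer backbone, a prompt map like this would be added to (or concatenated with) intermediate features, so each pixel receives its own prompt signal rather than a single global token as in classification-style prompting.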