In this paper, we propose a novel method for learning deep neural networks in the presence of open-set noisy labels, i.e., training sets that contain mislabeled samples drawn from out-of-distribution categories. Previous methods rely on the distance between each sample's prediction and its label to identify mislabeled samples and to distinguish in-distribution (ID) from out-of-distribution (OOD) noisy samples, which makes it difficult to identify the two types of noisy samples promptly. To overcome this limitation, our method exploits feature-space information and cross-instance relationships, enabling a more comprehensive distinction between ID and OOD noisy samples. Specifically, we introduce a multi-prototype modeling mechanism in which each class is represented by multiple prototypes to capture intra-class diversity; comparing sample features against these class prototypes allows ID and OOD noisy samples to be separated. We further present an online algorithm for updating the prototypes, and we strengthen model optimization with cross-augmentation consistency and a noise-robust contrastive Siamese learning technique. Extensive experiments on CIFAR-100, Clothing1M, and Food101N demonstrate that our method outperforms existing approaches in handling noisy labels.
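To make the prototype-comparison idea concrete, the following is a minimal illustrative sketch (not the paper's actual algorithm) of how features might be partitioned into clean, ID-noisy, and OOD samples by comparing them with multiple prototypes per class; the function name, the single similarity threshold `id_thresh`, and the hard three-way rule are simplifying assumptions introduced here for illustration.

```python
import numpy as np

def partition_samples(features, labels, prototypes, id_thresh=0.5):
    """Illustrative three-way partition of samples: 'clean', 'id_noisy', 'ood'.

    features:   (N, D) L2-normalized sample features
    labels:     (N,)   observed (possibly noisy) class labels
    prototypes: (C, K, D) L2-normalized prototypes, K per class
    id_thresh:  hypothetical similarity threshold (not from the paper)
    """
    # Cosine similarity of each sample to every prototype: (N, C, K)
    sims = np.einsum('nd,ckd->nck', features, prototypes)
    # For each class, keep the best-matching prototype: (N, C)
    class_sims = sims.max(axis=2)
    best_class = class_sims.argmax(axis=1)
    best_sim = class_sims.max(axis=1)

    status = np.empty(features.shape[0], dtype=object)
    for i in range(features.shape[0]):
        if best_sim[i] < id_thresh:
            status[i] = 'ood'        # far from every class's prototypes
        elif best_class[i] == labels[i]:
            status[i] = 'clean'      # nearest prototypes agree with the label
        else:
            status[i] = 'id_noisy'   # near another class's prototypes
    return status
```

In this toy setting, a sample far from all prototypes is flagged OOD, while a sample close to prototypes of a class other than its label is flagged ID-noisy; the actual method additionally updates prototypes online and leverages cross-instance relationships.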