Cloth-Changing Person Re-Identification (CC-ReID) aims to recognize individuals across camera views despite clothing variations, a task crucial for surveillance and security systems. Existing methods typically frame CC-ReID as a cross-modal alignment problem but often overlook explicit modeling of interference factors such as clothing, viewpoint, and pedestrian actions; as a result, these factors contaminate the learned representations and compromise the extraction of robust identity features. To address these challenges, we propose a novel framework that systematically disentangles interference factors from identity features while preserving the robustness and discriminative power of identity representations. Our approach consists of two key components. First, a dual-stream identity feature learning framework leverages a raw-image stream and a cloth-isolated stream to extract identity representations independent of clothing textures, and an adaptive cloth-irrelevant contrastive objective is introduced to mitigate identity feature variations caused by clothing differences. Second, we propose a Text-Driven Conditional Generative Adversarial Interference Disentanglement Network (T-CGAIDN) to further suppress interference factors beyond clothing textures, such as finer clothing patterns, viewpoint, background, and lighting conditions. This network incorporates a multi-granularity interference recognition branch that learns interference-related features, a conditional adversarial module that performs bidirectional transformation between the identity and interference feature spaces, and an interference decoupling objective that eliminates interference dependencies in identity learning. Extensive experiments on public benchmarks demonstrate that our method significantly outperforms state-of-the-art approaches, highlighting its effectiveness for CC-ReID. Our code is available at https://github.com/yblTech/IIFR-CCReID.