The primary purpose of this study is to critically examine the ethical challenges arising from the integration of AI into language education. While tools such as large language models, machine translation, automated writing feedback, and speech recognition offer gains in personalization, language development, and efficiency, their rapid adoption raises serious ethical concerns. Five challenges are discussed: data privacy and security, academic integrity, overreliance on AI, algorithmic bias, and widening educational divides tied to access and digital literacy. Drawing on recent empirical findings and theoretical insights, the paper underscores risks such as the erosion of critical thinking, threats to academic honesty, the exclusion of diverse linguistic identities, and the exacerbation of existing inequities. To address these challenges, the study proposes a multi-faceted framework comprising robust data protection, bias-aware language practice, human-AI collaboration, process-oriented instruction, and teacher training. Together, these measures aim to foster a responsible and ethical approach to AI use, ensuring that AI supports equitable, inclusive, and effective language learning and teaching while preserving the inherently social and humanistic nature of language education.