AI Models Show Language-Based Censorship Discrepancies in Responses
A recent analysis reveals that AI models from Chinese labs, such as DeepSeek, vary in how much they comply with requests depending on the language of the prompt. While these models are designed to censor politically sensitive content, they tend to be more restrictive when prompted in Chinese than in English. Developer xlr8harder tested multiple models and found that even U.S.-developed AI showed less willingness to address topics critical of the Chinese government when asked in Chinese. Experts suggest this reflects broader issues in AI training data and cultural context, highlighting the need for improved model governance across languages.