A California policy group co-led by AI pioneer Fei-Fei Li released a report urging lawmakers to account for AI risks that have not yet been observed in the real world when crafting regulations. The report calls for greater transparency from AI developers about their safety testing and data-acquisition practices. It also emphasizes the need for third-party evaluations of AI systems and for whistleblower protections covering employees at AI companies. While the report stops short of endorsing specific legislation, it aligns with earlier efforts to strengthen AI safety and reflects a growing consensus among experts that AI governance requires proactive measures. The final version of the report is expected in June 2025.