





Collect the smallest dataset that still answers your learning questions, and justify every field you gather. Store sensitive attributes separately under strict retention policies. Provide pathways for data access requests and deletion, and publish a clear policy, in plain and approachable language, so contributors understand their rights, responsibilities, and redress options.
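One way to make "justify every field" and "strict retention" operational is a field registry that records a justification and retention period for each attribute. The sketch below is a minimal illustration, not a production system; the field names, retention periods, and the `FieldPolicy` registry are all hypothetical.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical field registry: every stored field carries a written
# justification and a retention period, and sensitive attributes are
# routed to a separate store.
@dataclass
class FieldPolicy:
    name: str
    justification: str       # which learning question this field answers
    retention_days: int
    sensitive: bool = False  # sensitive attributes live in a separate store

POLICIES = [
    FieldPolicy("completion_date", "measures course completion", 365),
    FieldPolicy("feedback_text", "surfaces learner pain points", 180),
    FieldPolicy("disability_status", "equity analysis only", 90, sensitive=True),
]

def partition_record(record: dict) -> tuple[dict, dict]:
    """Split one contributor record into general and sensitive stores,
    keeping only fields with a registered justification."""
    allowed = {p.name: p for p in POLICIES}
    general, sensitive = {}, {}
    for key, value in record.items():
        policy = allowed.get(key)
        if policy is None:
            continue  # unjustified field: never stored
        (sensitive if policy.sensitive else general)[key] = value
    return general, sensitive

def expired(collected_on: date, policy: FieldPolicy, today: date) -> bool:
    """True once a field has outlived its retention period and must be purged."""
    return today > collected_on + timedelta(days=policy.retention_days)
```

Any field without an entry in the registry is silently dropped at intake, which keeps collection minimal by default rather than by audit.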
If you use machine learning to cluster feedback or predict completion, enforce guardrails: fairness checks, explainability requirements, and human-in-the-loop review. Never automate eligibility decisions without oversight. Document model behavior, training-data lineage, and failure modes, and give communities simple channels to contest or improve algorithmic outcomes.
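A simple fairness check of the kind described above is to disaggregate cluster assignments by demographic group and flag any cluster where assignment rates diverge sharply, so a human reviewer looks before anything downstream happens. This is a minimal sketch under assumed inputs (parallel lists of cluster labels and group labels); the 0.2 disparity threshold is an arbitrary placeholder a program would set deliberately.

```python
from collections import Counter, defaultdict

def cluster_rate_by_group(assignments, groups):
    """Share of each demographic group assigned to each feedback cluster."""
    counts = defaultdict(Counter)
    totals = Counter()
    for cluster, group in zip(assignments, groups):
        counts[group][cluster] += 1
        totals[group] += 1
    return {g: {c: n / totals[g] for c, n in cs.items()}
            for g, cs in counts.items()}

def disparity_flags(rates, threshold=0.2):
    """Flag clusters whose assignment rates differ across groups by more
    than `threshold`; flagged clusters go to human-in-the-loop review."""
    clusters = {c for r in rates.values() for c in r}
    flags = []
    for c in clusters:
        vals = [r.get(c, 0.0) for r in rates.values()]
        if max(vals) - min(vals) > threshold:
            flags.append(c)
    return sorted(flags)
```

The check deliberately gates review rather than decisions: a flagged cluster is not "unfair", it is "needs a human to look", which matches the oversight requirement above.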
Co-create indicators with community advisors and translate instruments into local languages. Validate questions for clarity and cultural resonance. Track distributional effects, not just averages, to ensure marginalized groups benefit equitably. Share findings back in accessible formats, and invite ongoing participation in refining both the measures and the decisions they inform.
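Tracking distributional effects rather than averages can be as simple as reporting each subgroup's mean outcome relative to the overall mean, so a gap hidden inside one headline number becomes visible. A minimal sketch, assuming parallel lists of numeric outcomes and group labels:

```python
from statistics import mean

def subgroup_gaps(outcomes, groups):
    """Gap between each subgroup's mean outcome and the overall mean.
    A negative gap means the group is faring worse than the average
    suggests; reporting gaps surfaces inequity a single mean hides."""
    overall = mean(outcomes)
    by_group = {}
    for outcome, group in zip(outcomes, groups):
        by_group.setdefault(group, []).append(outcome)
    return {g: round(mean(vals) - overall, 3) for g, vals in by_group.items()}
```

For example, outcomes of [0.9, 0.8, 0.4, 0.5] split across two groups average a respectable 0.65 overall while one group sits 0.2 below it, exactly the pattern an averages-only report would miss.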