No matter which prompting framework you use or which product you're in (Claude, OpenAI, etc.), no prompt will get you exactly what you're looking for on the first try. Because AI bias is a real problem, follow your first prompt with questions like these:
What assumptions have you made in your output?
What bias could be in the output?
What impact could this bias have on learners?
Next, prompt: "Please suggest a revised prompt that minimizes bias for all learners."
Example: "Ensure this assessment avoids cultural, socioeconomic, and gender biases by incorporating diverse examples and contexts."
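If you work through an API rather than a chat window, the same review loop can be scripted. The sketch below uses the Anthropic Python SDK and simply replays the bias-check questions above as follow-up turns in one conversation; the initial prompt and the model name are placeholders, so swap in your own.

```python
# Minimal sketch of the bias-review loop, assuming the Anthropic Python SDK.
# The first prompt and model name are placeholders -- adapt them to your context.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

history = [{"role": "user", "content": "Draft a short quiz on budgeting basics for adult learners."}]

follow_ups = [
    "What assumptions have you made in your output?",
    "What bias could be in the output?",
    "What impact could this bias have on learners?",
    "Please suggest a revised prompt that minimizes bias for all learners.",
]

def ask(messages):
    """Send the running conversation and return the assistant's reply text."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model name; use whichever model you have access to
        max_tokens=1024,
        messages=messages,
    )
    return response.content[0].text

# Get the first draft, then ask each bias-check question in turn,
# keeping the full conversation so the model critiques its own output.
reply = ask(history)
history.append({"role": "assistant", "content": reply})

for question in follow_ups:
    history.append({"role": "user", "content": question})
    reply = ask(history)
    history.append({"role": "assistant", "content": reply})
    print(f"\n--- {question}\n{reply}")
```

The last follow-up returns a revised prompt you can reuse as the starting point for your next request.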