--

If that were actually the case, then there wouldn't be biased tests, which are generally understood to produce results that are systematically unfair to a group. For this to happen, the test must ordinarily measure variables for that group that are at least partly distinct from those it measures for other people in the population.

A biased test usually measures different things for different groups, which is unfair. However, a test can measure different things for different groups and still not produce unfair results, depending on how it is used. In generative AI, we see more of the former.

The reason most nurses are women is that nursing was one of the few occupations open to women. During the Civil War, there weren't enough male nurses, so women volunteered to help, which grew into the female-dominated nursing profession we know today.

--

--


Written by Nettrice Gaskins

Nettrice is a digital artist, academic, cultural critic and advocate of STEAM education.
