In the present day, it might seem obvious that we’d want to test people’s mental abilities to see what they might be good at, especially children. But that’s only because we have a greater degree of choice than ever before. Was such a measurement necessary in the past? If a kid was bright, it was obvious. If someone was good at something, they’d pursue it and do it well, end of story. If someone was of a certain social standing, that person might pursue higher education regardless of intellect. If not, that person would become a laborer, a farmer, or the like, just as generations prior had. “Intelligence” was a meaningless metric. Depending on your stance, it still is.
As a metric, “intelligence” only came into being a bit over a century ago with the British polymath Sir Francis Galton. As Verywellmind explains, Galton wanted to know whether intellectual acuity was inherited, like eye color or height, and he envisioned gathering actual statistical data to verify his ideas rather than just lobbing opinions around. Come 1904, per the Stanford-Binet Test website, the French government approached psychologist Alfred Binet and his student Theodore Simon because it wanted a way to detect which children had “notably below-average levels of intelligence for their age,” hence the “mental age” concept underlying IQ tests. Binet, building on Galton’s work, developed the Binet-Simon Scale, which consolidated the measurement of intelligence into a single number. Lewis Terman of Stanford University revised it in 1916 and dubbed it the Stanford-Binet Test.