We've Been Here Before
The lesson we didn't learn from Chromebooks
This is the second post in a series on AI and education. If you’re new here, you might want to start with the first post — where I explain why I started writing and what this newsletter is about.
We have a habit in education of letting technology arrive before wisdom does.
When smartphones became ubiquitous, we handed them to children and hoped for the best. When social media platforms set their minimum age at 13, we largely accepted it without asking whether 13-year-olds were actually ready for what those platforms were designed to do.
We know how that turned out.
Mental health crises. Misinformation. A generation that grew up online but never learned to navigate it thoughtfully.
The Chromebook story is quieter — but no less instructive.
Between 2012 and 2021, Chromebook adoption in American schools went from nearly zero to nearly universal. Before the pandemic, roughly two-thirds of middle and high schools had implemented 1:1 device programs. Within two years of the pandemic’s start, 96% of schools were providing devices to students who needed them. By 2024, 88% of schools reported full 1:1 programs.
A technology adoption curve that would normally take a generation compressed into a single emergency.
Massachusetts was no exception. Beginning in 2017, the state phased in computer-based MCAS testing across all grades, and districts scrambled to get Chromebooks into every student’s hands. What followed in classrooms was a quiet but significant restructuring of instructional time. Teachers who once spent the early elementary years building writing stamina, handwriting fluency, and extended reading practice found themselves carving out time to teach keyboarding.
Second and third graders needed to learn to type before they could demonstrate what they actually knew.
The test format had changed. The instruction had to follow.
The outcomes did not.
According to the Massachusetts Department of Elementary and Secondary Education’s 2024 MCAS results, ELA scores dropped at every single grade level between 2019 and 2024 — years that bracket the pandemic but represent the fully computer-based testing era. The percentage of students meeting or exceeding expectations statewide fell from 52% to 39%. A 13-point decline.
Grade 3 dropped 14 percentage points. Grade 4 dropped 15. Grade 5 dropped 14.
These are not small fluctuations. These are the grades where foundational literacy is built.
None of this proves that Chromebooks caused the decline. The data is not that simple, and anyone who tells you otherwise is overclaiming. But that’s precisely the point: the burden of proof should have been on those arguing that universal device adoption would improve learning. That case was never made rigorously. Billions were spent. Instruction was restructured.
The OECD — the Organisation for Economic Co-operation and Development, which tracks education outcomes across dozens of countries — concluded that students who use computers very frequently at school perform worse in most learning outcomes, with no appreciable improvements in reading, math, or science in countries that invested most heavily in education technology.
We didn’t ask hard enough questions before we handed every child a Chromebook.
We assumed access was the same as learning. We assumed the tool would do the teaching.
AI is arriving the same way — fast, powerful, and ahead of our collective wisdom about what it actually does to learning.
The difference is that this time, we can see it coming.
The instinct in most schools right now falls into one of two camps. The first is prohibition — block it, ban it, treat it as an academic integrity problem to be managed. The second is adoption — embrace it, integrate it, hand students the tools and call it innovation.
Both responses share the same flaw: neither asks what students actually need to become before AI can help them.
Because here’s what we already know about AI, the kind of thing we never stopped to ask about Chromebooks: it is not a neutral tool. It doesn’t make weak thinking stronger. It makes it faster. A student who approaches AI without curiosity, domain knowledge, or the habit of questioning their own assumptions won’t become more capable.
They’ll become more efficiently superficial.
The question schools should be asking isn’t “should we allow AI?” or even “how do we use AI?”
The right question is: what kind of thinkers do we need students to become, and how does AI fit into that development?
That reframe changes everything.
Next post: What students actually need before AI can help them — and why metacognition, domain knowledge, and intellectual integrity aren’t soft skills. They’re the prerequisites.
What’s your school’s instinct right now — prohibition, adoption, or something else? I’d love to hear in the comments.