Why AI Is a Big Problem for School Cybersecurity
Artificial intelligence is transforming education technology while expanding schools' cybersecurity risks. In the Education Week article, experts examine how AI adoption is creating new challenges for schools. Read the article to learn how AI tools are expanding the cybersecurity attack surface in education, why automated threats and phishing attacks are becoming more sophisticated, and what security considerations schools should weigh as AI adoption grows.
Frequently Asked Questions
Why are AI-powered cyberattacks such a concern for K-12 schools?
AI-powered cyberattacks are a growing concern for K-12 schools because they amplify risks that already existed in an under-resourced environment.
Schools have long been attractive targets because they:
- Hold large volumes of sensitive student data (including Social Security numbers) that can command a high price on the dark web, especially children’s data with clean credit histories.
- Manage substantial financial transactions and store staff personal information.
- Often operate with limited cybersecurity budgets and staff compared with banks or hospitals.
- Make much of their operational information public (budgets, staff directories, even some emails), which gives attackers useful intelligence.
AI is reshaping this threat landscape in several ways:
- Generative AI tools help attackers write polished, convincing phishing emails that no longer contain the spelling and grammar errors that used to be red flags.
- AI can mimic writing styles, making emails look like they came from a superintendent or principal.
- Deepfake tools can clone voices and appearances, enabling fake phone or video calls that pressure staff into making urgent payments or sharing credentials.
- AI systems can quickly scan the internet to map out a district’s vendors, key decision-makers, and payment processes, making social engineering attacks more targeted.
- Agentic AI tools can now automate complex attack steps, allowing a single, relatively unskilled person to execute what previously required a coordinated ransomware group.
Because many cybersecurity products are built and priced for the private sector, schools often struggle to keep pace. This combination of high-value data, public visibility, and constrained resources makes AI-enhanced attacks a practical and growing risk for K-12 leaders to address.
How is reduced federal support affecting school cybersecurity?
Recent shifts in federal support have made it harder for districts to keep up with increasingly sophisticated, AI-driven cyber threats.
Key changes include:
- Funding cuts to MS-ISAC: The Multi-State Information Sharing and Analysis Center previously provided free cybersecurity support and threat intelligence to schools through a federal partnership. That cooperative agreement ended, and districts now generally need to pay membership fees, unless their state covers the cost.
- Suspension of the K-12 Cybersecurity Government Coordinating Council: This group used to bring together federal agencies, state education departments, districts, and ed-tech companies to share information and coordinate responses to attacks. It has moved outside the federal structure to the Institute for Security and Technology, reducing direct federal coordination.
- Closure of the U.S. Department of Education’s Office of Educational Technology: This office helped states and districts navigate emerging technology issues, including AI and cybersecurity. Its closure removed a federal hub for guidance and best practices.
One notable initiative that remains is a three-year FCC pilot program, initially offering up to $200 million in competitive grants through E-rate to help schools and libraries purchase cybersecurity tools and services. However, experts note that it is unclear what will happen after the first round of grants and whether the program will become permanent.
These changes matter because the federal government has visibility into national and international threat activity that local districts simply do not. Without sustained federal backing for information sharing and guidance, districts must rely more heavily on state-level efforts, consortia, and their own budgets at a time when AI is lowering the barrier to sophisticated cybercrime.
What practical steps can districts take to counter AI-enabled attacks?
Districts do not need to match attackers tool-for-tool to make meaningful progress. A combination of basic controls, staff training, and collaboration can significantly improve resilience, even in the era of AI.
Practical steps include:
1. Double down on cybersecurity fundamentals
- Enforce multi-factor authentication (MFA) for staff and administrators.
- Require strong, unique passwords that are changed on a regular schedule.
- Keep operating systems, browsers, and critical applications patched and current.
- Standardize incident response plans so everyone knows what to do when something looks wrong.
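For district IT staff, the account-hygiene bullets above can be turned into a simple recurring audit. The sketch below assumes a CSV export of staff accounts with hypothetical columns (`email`, `mfa_enrolled`, `password_age_days`); real directory exports will differ, so treat this as an illustration of the check, not a ready-made tool.

```python
# Illustrative sketch: audit an exported account list for missing MFA and
# stale passwords. Column names and the sample data are hypothetical.
import csv
import io

SAMPLE = """email,mfa_enrolled,password_age_days
principal@district.example,yes,45
teacher1@district.example,no,400
teacher2@district.example,yes,30
"""

def audit_accounts(csv_text, max_password_age=180):
    """Return accounts missing MFA or with passwords older than the limit."""
    flagged = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        issues = []
        if row["mfa_enrolled"].strip().lower() != "yes":
            issues.append("no MFA")
        if int(row["password_age_days"]) > max_password_age:
            issues.append("stale password")
        if issues:
            flagged.append((row["email"], issues))
    return flagged

for email, issues in audit_accounts(SAMPLE):
    print(email, "->", ", ".join(issues))
```

Running a check like this monthly gives leadership a concrete metric (accounts out of compliance) rather than a vague sense of progress.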
2. Train staff to recognize modern phishing and deepfakes
- Use phishing simulation software that sends realistic fake phishing emails to employees.
- Automatically route anyone who clicks on a simulated phishing link to a short, focused training video.
- Teach staff that no legitimate financial transaction should be completed solely on the basis of an urgent email, text, or call—even if it appears to come from the superintendent.
- Introduce verification practices such as:
- Always confirming unusual payment requests through a second channel.
- Using pre-agreed code words or phrases to verify identities on phone or video calls.
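The second-channel rule above can be stated precisely: a payment request is approved only if it was confirmed through a channel different from the one it arrived on. This minimal sketch (channel names and data are made up for the example) shows the logic a finance office might encode in its procedures:

```python
# Illustrative sketch of the "second channel" verification rule: an urgent
# request on one channel must be confirmed on a different channel before
# any money moves. Channel names here are hypothetical examples.

def approve_payment(request_channel, confirmation_channels):
    """Approve only if at least one confirmation arrived out of band."""
    return any(ch != request_channel for ch in confirmation_channels)

print(approve_payment("email", []))                 # urgent email alone
print(approve_payment("email", ["email"]))          # reply on same channel
print(approve_payment("email", ["phone"]))          # confirmed out of band
```

The point of writing the rule down this way is that it leaves no room for judgment under pressure, which is exactly what deepfake-driven urgency attacks exploit.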
3. Run tabletop exercises with leadership
- Bring together district leaders (technology, finance, communications, instruction) to walk through realistic cyberattack scenarios.
- Clarify roles, decision points, and communication plans before an incident occurs.
- Use these exercises to identify gaps in policies, backups, and vendor coordination.
4. Leverage collaborative networks and shared services
- Explore membership in MS-ISAC; some states (including Alaska, Connecticut, Kansas, Maine, Mississippi, New Jersey, Oregon, Texas, and Vermont) cover membership so districts can access services at no additional cost.
- Participate in state or regional CoSN chapters and similar groups to share playbooks, vendor evaluations, and incident experiences.
- Consider forming local consortia with neighboring districts to share expertise, negotiate better pricing, and coordinate training.
5. Protect budgets for security basics
- Treat cybersecurity as an ongoing operational requirement, not a one-time project.
- Resist the temptation to cut phishing simulations, training, or monitoring tools simply because the district has not yet experienced a major incident.
By focusing on these fundamentals and using collaboration to stretch limited resources, districts can strengthen their cybersecurity posture against AI-enhanced threats without needing enterprise-level budgets.