AI in K-12 Education: What School Divisions Might Consider in Preparing for the Future

By Dr. Elizabeth Davis and Alyssa Barone

January 2025


School divisions are inundated with new software and programs incorporating Artificial Intelligence (AI). AI-enabled tools can be beneficial for curriculum and instructional practices, such as individualizing learning to meet the diverse needs of students.i AI can also be used for administrative tasks and decision-making, and it has the potential to reduce the workload of educators by streamlining and assisting with myriad tasks.ii However, there are also many risks and uncertainties associated with the use of AI, particularly in education, where the impact of its use on student outcomes is still largely unknown. Divisions must weigh these risks and develop the systems, structures, and policies needed for strategic and equitable use of AI-enabled tools. This brief draws on the latest available guidance to highlight, for education leaders new to the topic, some of AI's potential and risks in education, along with critical questions to consider regarding its implementation now and in the future.

What are the possibilities and potential of AI in K-12 education?

Divisions are learning about the potential, limitations, and risks of AI and the ways it can support student learning and reduce educator workloads. For example, reactive AI responds to specific requests without learning from past data (e.g., Amazon's Alexa or Apple's Siri), predictive AI analyzes data to predict future events or behaviors (e.g., suggested shows on Netflix or YouTube), and generative AI creates content based on data patterns (e.g., ChatGPT).iii Educators and administrators are beginning to use these types of AI in a variety of ways, including AI-enabled tools that support student learning, help teachers with lesson planning, and even help central office staff make resource-allocation decisions.iv

The education field is experiencing major advancements in the power of AI to personalize learning. For example, AI is used in Intelligent Tutoring Systems (ITSs) that can recognize the steps students take in solving a math problem and provide real-time feedback when a student begins to diverge from the steps needed to reach the correct answer, helping the student course-correct and learn from a potential mistake.v Preliminary research suggests that the hybrid use of AI and human tutoring can have positive effects on learning outcomes for all students, including students with disabilities.vi This is particularly promising given the continued need for remediation to address learning loss from the COVID-19 pandemic and the emphasis on high-dosage tutoring as an evidence-based practice to accelerate learning.vii Furthermore, AI-enabled tools can increase the accessibility of assessments by adapting to the communication abilities of neurodiverse students, giving these students more ways to demonstrate their knowledge than traditional assessments.viii
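
To make the step-checking idea concrete, the following is a minimal sketch of the kind of feedback logic an ITS might use. The equation, expected solution steps, and feedback messages are hypothetical illustrations, not drawn from any specific product.

```python
# Minimal sketch of ITS-style step checking for the equation 2x + 3 = 11.
# The expected steps and feedback wording are hypothetical illustrations.

EXPECTED_STEPS = [
    "2x = 8",   # step 1: subtract 3 from both sides
    "x = 4",    # step 2: divide both sides by 2
]

def check_step(step_index: int, student_step: str) -> str:
    """Compare a student's step with the expected one and return feedback."""
    expected = EXPECTED_STEPS[step_index]
    if student_step.replace(" ", "") == expected.replace(" ", ""):
        return "Correct -- keep going!"
    # The student has diverged from the solution path: intervene in real time.
    return (f"Check that step: apply the same operation to both sides. "
            f"(Expected: {expected})")

print(check_step(0, "2x = 8"))   # Correct -- keep going!
print(check_step(0, "2x = 14"))  # Check that step: ...
```

A production ITS would, of course, parse the algebra rather than compare strings; the point is that feedback is attached to each intermediate step, not just the final answer.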

AI-enabled tools can potentially also alleviate some of the workload placed on teachers. For example, one estimate suggests that AI-enabled tools could reduce teachers' classroom preparation time from 11 hours to 6 hours a week through platforms that help find relevant material, suggest lesson plans, and create assignments for students.ix It should be noted that this is not a precise measure but a preliminary estimate based on how much time teachers currently spend on tasks that existing AI tools could assist with. It is also important to note that additional research is needed to determine whether using AI for these tasks reduces the quality of instruction.
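
As a rough illustration of what that estimate implies over a school year, the arithmetic below assumes a 36-week school year; that figure is an illustrative assumption, not part of the cited estimate.

```python
# Back-of-the-envelope arithmetic for the cited prep-time estimate.
# The 36-week school year is an illustrative assumption, not from the source.

hours_before = 11    # weekly prep hours without AI-enabled tools (cited)
hours_after = 6      # weekly prep hours with AI-enabled tools (cited)
weeks_per_year = 36  # assumed school year length

weekly_savings = hours_before - hours_after        # 5 hours per week
annual_savings = weekly_savings * weeks_per_year   # 180 hours per year
reduction = weekly_savings / hours_before          # ~45% of prep time

print(f"{weekly_savings} hours/week saved, roughly {annual_savings} "
      f"hours/year ({reduction:.0%} reduction)")
```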

Beyond teacher workloads, the use of AI may help decision-makers improve the efficiency of systems and resource allocation. For example, Colorado Springs' District 11 implemented a new AI tool called "Strategic Routing" by HopSkipDrive, which analyzed the district's transportation data (e.g., the number of students taking buses, where they lived) and suggested a new plan that included eliminating underutilized routes and, in some instances, replacing buses with passenger cars.x As a result, the district cut 45 routes, increased its on-time arrival rate to 99%, and expected to save $8 million on transportation costs over a decade.xi Such a tool may be particularly helpful amid the ongoing shortage of bus drivers.
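
HopSkipDrive's actual optimization method is proprietary and not described in the cited reporting; the sketch below illustrates only the general idea of flagging underutilized routes, using a hypothetical data structure and utilization threshold.

```python
# Illustrative-only sketch of flagging underutilized bus routes.
# The data structure and 25% threshold are hypothetical; this is not
# HopSkipDrive's algorithm.

from dataclasses import dataclass

@dataclass
class Route:
    route_id: str
    riders: int        # average daily riders on the route
    bus_capacity: int  # seats on the assigned bus

MIN_UTILIZATION = 0.25  # hypothetical cutoff: review routes under 25% full

def review_routes(routes: list[Route]) -> list[tuple[str, str]]:
    """Suggest an action for each route based on seat utilization."""
    suggestions = []
    for r in routes:
        if r.riders / r.bus_capacity < MIN_UTILIZATION:
            # Very low ridership: a passenger car or merged route may be cheaper.
            suggestions.append((r.route_id, "consider car service or merging"))
        else:
            suggestions.append((r.route_id, "keep as is"))
    return suggestions

print(review_routes([Route("R1", 6, 48), Route("R2", 40, 48)]))
# [('R1', 'consider car service or merging'), ('R2', 'keep as is')]
```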

What are the risks of and concerns about AI use in education?

AI is expanding rapidly while the U.S. education system is still reckoning with the digital divide: low-income students and students of color are less likely to have access to reliable internet and laptops.xii Beyond access to those resources, historically under-resourced schools may lack teachers with the capacity to use AI tools, effective curriculum to teach AI literacy, or systems to ensure appropriate use of AI.xiii Therefore, if AI's benefits are to scale without reinforcing existing inequities in education, divisions must invest in designing systems and structures to use these tools effectively and equitably. This requires providing effective training to support education leaders and teachers in using these tools and ensuring that all educators have the time and resources for implementation. If this is not done systematically, the education community risks exacerbating opportunity gaps in classrooms and widening gaps between resourced and under-resourced schools.

There are also numerous concerns with the use of large language models (LLMs) such as ChatGPT or Gemini in educational settings. For example, some teachers use these tools for test development or grading, but LLMs are prone to hallucinations and other quality issues.xiv LLMs are limited in the quality of what they produce because their outputs are based on patterns identified in writing and symbols across publicly available internet documents, with no control over the quality or reliability of those documents.xv This means that using LLMs requires human review of the content they generate, especially any scores and feedback given to students. There are also numerous concerns with using AI tools for grading, including the retention of student work or the prompts used to grade it, which could violate student data privacy.xvi Leaders interested in using AI for grading should scrutinize the data privacy statements issued by AI companies to determine whether and how a tool retains user information, and should also weigh the risk of biased outputs.
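
One concrete safeguard is to make teacher review a required step in any AI-assisted grading workflow, as sketched below. Here `score_with_llm` is a hypothetical placeholder for whatever grading tool a division adopts, not a real API; note also that only de-identified essay text is passed to the tool.

```python
# Sketch of a human-in-the-loop workflow for AI-assisted grading.
# `score_with_llm` is a hypothetical placeholder, not a real API.

def score_with_llm(essay_text: str) -> tuple[int, str]:
    """Hypothetical stand-in for an LLM grading call.

    Only de-identified essay text is sent -- no student names or IDs --
    to limit what a vendor could retain.
    """
    return 85, "Clear thesis; add more supporting evidence."

def draft_grade(essay_text: str) -> dict:
    """Produce an AI draft that is explicitly marked as unreviewed."""
    ai_score, ai_feedback = score_with_llm(essay_text)
    return {
        "ai_score": ai_score,
        "ai_feedback": ai_feedback,
        "status": "pending_teacher_review",  # nothing released at this stage
        "final_score": None,
    }

def teacher_sign_off(record: dict, score: int, feedback: str) -> dict:
    """Only a teacher's sign-off produces the grade a student sees."""
    record.update(status="reviewed", final_score=score, final_feedback=feedback)
    return record

draft = draft_grade("De-identified essay text...")
released = teacher_sign_off(draft, 82, "Strong thesis; cite your sources.")
```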

Beyond privacy, there are ethical concerns with the use of AI with students. For example, some AI-enabled tools are prone to biased outputs, which can replicate inequitable conditions and harm students' self-identity. A study found that the most frequently used LLMs (e.g., ChatGPT and Claude) produce texts that portray minoritized individuals as subordinate to empowered groups or present them in stereotypical roles, upholding existing power structures and causing psychological harm to systemically marginalized individuals.xvii There is also the potential for increased student plagiarism given the ease of access to AI.xviii

What are the next steps that divisions might take to benefit from AI’s potential and mitigate risks?

Divisions should identify problems of practice and then consider whether an AI-enabled tool can help address the problem, rather than starting with a tool and designing for its implementation. For example, a division that aims to improve its instructional practices for literacy might consider many different options, including adopting an intelligent tutoring system (ITS). These systems respond to student answers with specific feedback on the steps students took, can provide encouragement while students are working, and can adapt questions to support students' individual needs.xix Preliminary research found ITSs to be particularly helpful in improving reading comprehension in low-income schools. In one study of ITS use in low-income schools, students improved substantially on measures related to writing the main idea and text recall, although the accompanying positive effect on standardized literacy assessments was not statistically significant.xx However, it should be noted that while ITSs have shown a significant effect on literacy skills compared to traditional instruction, the effect is much smaller compared to human tutoring, and these systems cannot replicate the benefits of one-to-one support from an educator.xxi

If an AI-enabled tool is selected to address a problem of practice, the division will need to consider an implementation plan, which may include reconceptualizing existing systems and structures. For example, divisions might find that they need to restructure their data architecture to maximize the analytical abilities of AI tools. Furthermore, it is critical for divisions to be aware that AI-enabled tools might already be making their way into classrooms. For example, San Diego Unified School District (SDUSD) signed a contract with Houghton Mifflin to integrate the district's English curriculum into an online platform, provide training sessions on its use, and grant access to Writable, an AI-enabled tool that scaffolds student learning in writing and reading and provides feedback on student assignments.xxii The contract, which never included the word "AI" and mentioned Writable only once, was approved unanimously by the board without discussion.xxiii Board members were surprised to later learn that Writable used AI and that at least some teachers were using it to grade student work.xxiv

As school leaders navigate these potential challenges, what should they be asking?

If a division decides that an AI-enabled tool will be helpful to improve or solve a problem of practice, it might consider the following questions to plan for implementation:

  • Could this AI-enabled tool cause harm to students and educators?
    • When using AI tools, who owns the student data, and is students’ privacy protected?
    • Has the AI tool been tested for issues of equity, bias, and appropriateness of use? Are there additional considerations for implementation to ensure appropriate use?
  • How can the tool be piloted before scaling districtwide?
  • What professional development is needed to implement this tool?
    • How can divisions provide educators and administrators the time to test and build these tools into their curriculum and practice?
  • Will the tool be implemented for use by students, teachers, or administrators? Are there safeguards needed specific to that user group?
    • What are the existing federal, state, and local policies that establish guardrails for using AI-enabled tools?
  • What data infrastructure is needed to leverage the full capabilities and functionalities of the AI-enabled tool while also safeguarding student data and privacy?
  • What plans should be put in place for contingencies, including the program’s potential failure?

Divisions may also wish to revisit their procurement practices as they implement AI-enabled tools. This might include considering questions such as:

  • What evidence is there that the tool’s provider has the stability, capacity, and knowledge to deliver the contracted AI tool and associated assurances over time?
  • Are there additional guardrails needed in technology procurement practices to identify AI use and uphold privacy protections?
  • What technical processes can be included in the contract to revisit and test student privacy and data protection throughout the course of the partnership?

Other Considerations

Weighing the Environmental Costs of Using AI

When weighing the potential benefits of using AI in education, divisions may want to be aware of the environmental costs of its usage. For example, training a large language model (LLM) produces approximately 300,000 kilograms of carbon dioxide emissions, equivalent to the emissions of 125 roundtrip flights between New York and Beijing.xxv There are also many environmental concerns with the data centers that store the data collected by AI tools. A large data center uses as much electricity as approximately 80,000 U.S. households and roughly three to five million gallons of water per day for cooling.xxvi
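
Working through the cited figures gives a sense of scale. The per-household water figure below (roughly 300 gallons per day, a commonly cited EPA average) is an added assumption used only to give the cooling-water number a reference point.

```python
# Arithmetic on the cited environmental figures.
# The ~300 gallons/day household water figure is an assumption (EPA average),
# not from the sources cited in this brief.

training_emissions_kg = 300_000  # CO2 from training one LLM (cited)
equivalent_flights = 125         # NY-Beijing roundtrip flights (cited)
kg_per_flight = training_emissions_kg / equivalent_flights  # 2,400 kg each

water_gallons_per_day = 4_000_000  # midpoint of the cited 3-5M gallon range
household_gallons_per_day = 300    # assumed average U.S. household use
equivalent_households = water_gallons_per_day / household_gallons_per_day

print(f"{kg_per_flight:,.0f} kg of CO2 per equivalent roundtrip flight")
print(f"Daily cooling water is roughly {equivalent_households:,.0f} "
      f"households' worth of use")
```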

This is of particular significance in Northern Virginia, which houses the largest concentration of data centers in the world.xxvii Divisions in the region may wish to consider how they can lead on environmental protections related to the growth of AI. For example, divisions can encourage technology vendors to provide accurate measurement and reporting of the energy costs associated with using AI tools for educational purposes, and they can prioritize contracts with technology companies that use clean and renewable energy in order to incentivize its use.

Cautionary Tale: Districtwide Implementation of a Chatbot for Student Learning

Los Angeles Unified School District (LAUSD) partnered with AllHere Education to develop a districtwide AI chatbot, or personal assistant for students, that aimed to reduce the amount of information a student would need to click through to find learning resources, while also being playful and interactive.xxviii For example, a student could ask for their grade in a class and the chatbot could answer immediately, meaning that the bot had access to student data systems.xxix This innovation received national attention, yet within months of its launch the chatbot was discontinued because of the tech company's financial difficulties and challenges in maintaining the chatbot.xxx The abrupt ending led to questions about protecting the student data the chatbot had collected, and to allegations that its privacy protections were likely not in compliance with the district's privacy policies.xxxi

Conclusion

It is critical that division leaders continue to educate themselves about the impact of AI on education, exploring its potential while questioning its risks and ethical concerns.

While AI is here to stay, it is not designed to replace humans. Thoughtful human control and inputs are key for its effective use. Division leaders should ensure that they are building the capacity of their teams to use AI and designing systems to implement its use effectively. Slowing down to ask critical questions can help mitigate potential risks and enable the effective use of AI.

Endnotes


  i. Wijekumar, K., Lei, P., Rice, M., Beerwinkle, A., Zhang, S., & Meyer, B. J. F. (2024). A web-based intelligent tutoring system for reading comprehension delivered to fifth-grade students attending high-poverty schools: Results from a replication efficacy study. Journal of Educational Psychology, 116(8), 1333–1351. doi.org/10.1037/edu0000878
  ii. Kingson, J. A. (6 Mar., 2024). Teachers are embracing ChatGPT-powered grading. Axios. www.axios.com/2024/03/06/ai-tools-teachers-chatgpt-writable
  iii. National Education Association. (Apr. 2024). Report of the NEA Task Force on Artificial Intelligence in Education. www.nea.org/sites/default/files/2024-06/report_of_the_nea_task_force_on_artificial_intelligence_in_education_ra_2024.pdf
  iv. Ibid.
  v. U.S. Department of Education, Office of Educational Technology. (2023). Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations. www.edtecheurope.org/s/US-OET-Artificial-Intelligence-and-the-Future-of-Teaching-and-Learning-Insights-and-Recommendations.pdf
  vi. Thomas, D. R., Gatz, E., Gupta, S., Aleven, V., & Koedinger, K. R. (July 2024). The neglected 15%: Positive effects of hybrid human-AI tutoring among students with disabilities [Conference paper]. The 25th Artificial Intelligence in Education (AIED) Conference, Recife, Brazil. doi.org/10.1007/978-3-031-64302-6_29
  vii. Robinson, C., Kraft, M., Loeb, S., & Schueler, B. (2021). Design principles for accelerating student learning with high-impact tutoring. EdResearch for Action, Brief #16. annenberg.brown.edu/sites/default/files/EdResearch_for_Recovery_Design_Principles_1.pdf
  viii. U.S. Department of Education, Office of Educational Technology. (2023). Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations. www.edtecheurope.org/s/US-OET-Artificial-Intelligence-and-the-Future-of-Teaching-and-Learning-Insights-and-Recommendations.pdf
  ix. Bryant, J., Heitz, C., Sanghvi, S., & Wagle, D. (14 Jan., 2020). How artificial intelligence will impact K-12 teachers. McKinsey. www.mckinsey.com/industries/education/our-insights/how-artificial-intelligence-will-impact-k-12-teachers
  x. Fitzpatrick, A. (25 Jan., 2024). AI is helping school districts navigate bus driver shortages. Axios. www.axios.com/2024/01/25/bus-driver-shortage-ai-solution
  xi. Ibid.
  xii. Lake, R., & Makori, A. (June 2020). The digital divide among students during COVID-19: Who has access? Who doesn't? The Center on Reinventing Public Education. crpe.org/the-digital-divide-among-students-during-covid-19-who-has-access-who-doesnt/
  xiii. Pham, H., Kohli, T., Olick Llano, E., Nokuri, I., & Weinstock, A. (June 2024). How will AI impact racial disparities in education? Stanford Center for Racial Justice. law.stanford.edu/2024/06/29/how-will-ai-impact-racial-disparities-in-education/
  xiv. Liu, F., Liu, Y., Shi, L., Huang, H., Wang, R., Yang, Z., Zhang, L., Li, Z., & Ma, Y. (2024). Exploring and evaluating hallucinations in LLM-powered code generation. arXiv preprint arXiv:2404.00971
  xv. Ko, A. J. (18 Dec., 2023). More than calculators: Why large language models threaten learning, teaching, and education. Medium. medium.com/bits-and-behavior/more-than-calculators-why-large-language-models-threaten-public-education-480dd5300939
  xvi. Adel, A., Ahsan, A., & Davison, C. (2024). ChatGPT promises and challenges in education: Computational and ethical perspectives. Education Sciences, 14(8), 814. doi.org/10.3390/educsci14080814
  xvii. Shieh, E., Vassel, F. M., Sugimoto, C., & Monroe-White, T. (2024). Laissez-faire harms: Algorithmic biases in generative language models. arXiv preprint arXiv:2404.07475
  xviii. Adel, A., Ahsan, A., & Davison, C. (2024). ChatGPT promises and challenges in education: Computational and ethical perspectives. Education Sciences, 14(8), 814. doi.org/10.3390/educsci14080814
  xix. Wijekumar, K. K., Harris, K. R., Graham, S., & Lei, P. (2022). A teacher technology tango shows strong results on 5th graders' persuasive writing. Educational Technology Research and Development, 70, 1415–1439. doi.org/10.1007/s11423-022-10117-9
  xx. Wijekumar, K., Lei, P., Rice, M., Beerwinkle, A., Zhang, S., & Meyer, B. J. F. (2024). A web-based intelligent tutoring system for reading comprehension delivered to fifth-grade students attending high-poverty schools: Results from a replication efficacy study. Journal of Educational Psychology, 116(8), 1333–1351. doi.org/10.1037/edu0000878
  xxi. Xu, Z., Wijekumar, K., Ramirez, G., Hu, X., & Irey, R. (2019). The effectiveness of intelligent tutoring systems on K-12 students' reading comprehension: A meta-analysis. British Journal of Educational Technology, 50(6), 3119–3137. doi.org/10.1111/bjet.12758
  xxii. McWhinney, J. (12 June, 2024). The learning curve: San Diego Unified's AI future is now. Voice of San Diego. voiceofsandiego.org/2024/06/12/the-learning-curve-san-diego-unifieds-ai-future-is-now/
  xxiii. Ibid.
  xxiv. Johnson, K. (6 Aug., 2024). California's two biggest school districts botched AI deals. Here are lessons from their mistakes. Associated Press. apnews.com/us-news/california-artificial-intelligence-schools-california-state-government-general-news-0c00ca12374a0362a2e5d961d9a55778
  xxv. Lin, P. K. (2023). The cost of training a machine: Lighting the way for a climate-aware policy framework that addresses artificial intelligence's carbon footprint problem. Fordham Environmental Law Review, 34(2), 1–30. papers.ssrn.com/sol3/papers.cfm?abstract_id=4066935
  xxvi. Ibid.
  xxvii. Kidd, D. (July 2023). The data center capital of the world is in Virginia. Governing. www.governing.com/infrastructure/the-data-center-capital-of-the-world-is-in-virginia
  xxviii. Young, J. R. (2 May, 2024). Los Angeles School District launched a splashy AI chatbot. What exactly does it do? EdSurge. www.edsurge.com/news/2024-05-02-los-angeles-school-district-launched-a-splashy-ai-chatbot-what-exactly-does-it-do
  xxix. Keierleber, M. (1 Jul., 2024). Whistleblower: L.A. schools' chatbot misused student data as tech co. crumbled. The 74. www.the74million.org/article/whistleblower-l-a-schools-chatbot-misused-student-data-as-tech-co-crumbled/
  xxx. Young, J. R. (15 Jul., 2024). An education chatbot company collapsed. Where did the student data go? EdSurge. www.edsurge.com/news/2024-07-15-an-education-chatbot-company-collapsed-where-did-the-student-data-go
  xxxi. Keierleber, M. (1 Jul., 2024). Whistleblower: L.A. schools' chatbot misused student data as tech co. crumbled. The 74. www.the74million.org/article/whistleblower-l-a-schools-chatbot-misused-student-data-as-tech-co-crumbled/

This brief is part of a series sponsored by a partnership between George Mason University’s Center for Advancing Human-Machine Partnerships and EdPolicyForward, the Center for Education Policy at Mason’s College of Education and Human Development. If your school or division is interested in collaborating with us, please reach out to us at epf@gmu.edu or visit our website at: https://cehd.gmu.edu/centers/edpolicyforward/.