Will AI Literacy Become a Teacher Certification Requirement? ETS’s Adapt AI Suggests It Might
- David Ross

Originally published by David Ross on Substack and shared here with permission.

Why This Conversation Matters Right Now
That long-running conversation about teacher AI literacy? It just moved from theory to measurement.
Educational Testing Service (ETS) has released a new AI-competency assessment for teachers called Futurenav Adapt AI. Launched in February under the Praxis® program, the assessment is designed to give K-12 school districts a reliable way to measure the AI literacy of their staff.
While the test is part of the Praxis suite (which handles teacher licensing in 46 states), Adapt AI is currently used for diagnostic and professional-development purposes rather than high-stakes licensure. However, ETS has stated that this positions the company to support states if they choose to make AI competency a certification requirement in the future.
Traditional licensure tests (like the MTEL in Massachusetts or the CSET in California) focus on subject matter knowledge and pedagogy. There is no widely adopted update in 2025–2026 that makes AI literacy a required component of passing a state teacher credential exam. That could change quickly, however, as policy momentum builds.
Adapt AI is part of a broader movement led by ETS and its partners. In January of this year, the company launched the ETS Center for Responsible AI in Learning and Assessment, a research hub dedicated to ensuring AI tools in education are fair, valid, and transparent. ETS also recently partnered with aiEDU to co-author white papers and support a broader framework for AI teacher certification across the United States.
Let’s look at the details of this new assessment and then ponder what districts and teachers can do to position themselves for success.
The Need to Validate Teacher AI Competency
Let’s start with education policy.
AI for Education’s continuously updated tracker reports that 34 states plus Puerto Rico have official guidance or policy on the use of AI in K-12 schools.
There is no authoritative public count of districts with formal, board-approved AI policies, but several indicators suggest a floor and a range. AI for Education cites research showing that 68% of districts have purchased at least one AI-related tool, evidence that AI activity and policy work are widespread even if not yet codified in formal policy. A 2025-26 national survey, summarized by The 74, reports that 78% of districts now provide some form of guidance on AI use.
Now pair these policy documents with usage data from the dominant large language models (Google’s Gemini, Anthropic’s Claude, and OpenAI’s ChatGPT), in which employment status is typically inferred from account type and usage patterns.
EdWeek Research Center data show that by late 2025, 61% of K-12 teachers reported using AI tools in their work “a little, some, or a lot.” A RAND report focusing on pre-K found 29% of preschool teachers using gen AI, with usage much higher in K-12 grades: 42% of elementary, 64% of middle, and 69% of high school.
We have evidence of broad policy activity alongside accelerating classroom use. We can couple this with the publication of numerous AI literacy frameworks for teachers. These are the dominant frameworks:
UNESCO AI Competency Framework for Teachers – Defines 15 competencies across five dimensions: Human-centered mindset, ethics, foundations and applications, pedagogy, and professional learning.
Digital Education Council AI Literacy Framework – Positions AI literacy as a set of competencies for educators and students, covering technical understanding, ethical use, critical evaluation, and pedagogical integration.
AILit Framework (European Commission & OECD, with Code.org and partners) – Emphasizes foundational understanding, interdisciplinary integration, practical classroom applicability, ethics, and durability of competencies as the technology evolves.
ISTE Standards for Educators – Have been updated to embed AI‑related competencies (e.g., safe, responsible, innovative use, equity and inclusion) within existing standards rather than as a separate AI framework.
AI Literacy Framework – The Digital Promise framework has been widely adopted and referenced, including for defining teacher AI competencies. Many organizations (including AILit/TeachAI) cite this as a foundational reference.

Notably, ETS does not publish a standalone named framework the way ISTE or UNESCO do. The company does reference a core set of competencies that Adapt AI is built around, but it is more implicit than a formal, independently published model.
Neither of the two major national teaching unions — the American Federation of Teachers and the National Education Association — has released a validated AI literacy assessment tool with clearly defined competencies and scoring, nor have they endorsed an evaluation rubric districts could use to assess demonstrated AI literacy.
In any case, this existing three-part structure (policy, usage data, competency frameworks) sets the stage for assessment.
Structure of the Assessment
The Adapt AI assessment does not just test technical knowledge; it focuses on the practical and ethical application of AI in a classroom setting. It evaluates four key areas:
Recognize and Understand AI: Identifying AI features within existing edtech tools and understanding how large language models (LLMs) function.
Navigate AI Ethically: Making judgment calls on student data privacy, academic integrity, and bias in AI outputs.
Evaluate AI: Critiquing AI-generated lesson plans or feedback for accuracy and “hallucinations.”
Use and Apply AI: Demonstrating the ability to write effective prompts and integrate AI into authentic teaching scenarios.
The assessment is multi-component and performance-based, typically taking under 30 minutes. These are the components:
Reflect – Self-reported perceptions and patterns of AI use. This section captures perception, confidence, and frequency of use. But it is not just a survey. It is calibrated against behavioral indicators.
Reason – Adaptive scenario items testing knowledge and judgment. Examples: A student submits AI-generated writing that is partially accurate but contains fabricated citations; what do you do? A chatbot suggests differentiated math problems, some of which are subtly biased in their wording; what is the issue? An AI tool suggests grading adjustments based on writing-style patterns; is this valid assessment practice?
Apply – Interactive, real-world tasks that mirror how teachers might use AI for planning, prompting, and ethical decisions. Teachers must perform practical tasks such as revising an AI-generated lesson plan for alignment and rigor, adjusting a prompt to improve output quality, deciding whether a student’s AI use violates policy, and identifying what student data should or should not be entered into a tool.
Upon completion, districts receive a “Skillprint” dashboard of aggregated results. The data allows administrators to see precisely where staff excel — and where targeted professional development is needed — rather than relying on surveys or assumptions.
Joining the Pilot
Administrative staff may reasonably wonder how to access this new assessment, which is still in its early stages, having rolled out only in February.
To sign up for a pilot or implementation of the Adapt AI assessment, school districts typically go through a consultative “demo-first” process rather than a public self-registration portal.
As of February, here are the specific steps and contacts for a school district to get involved:
The Primary Enrollment Path: Consultative Demo. Visit the ETS Adapt AI for Districts page. Click the “Book a Meeting” or “Book a Demo” button.
Direct Partnership Contacts: For larger districts or those seeking to participate in research-heavy pilots (often involving grants or data-sharing agreements), direct contact with the program’s leadership is recommended.
The aiEDU Collaborative Path: ETS has a formal partnership with aiEDU to design and validate these AI measures. Districts that are already members of the aiEDU network often get “first look” access to these assessments.
If your district is accepted into a pilot or early-adoption phase, you generally receive access for all staff to the Reflect (self-efficacy), Reason (decision-making), and Apply (interactive scenario) assessments; a district-wide “heatmap” showing which schools or departments are “AI-ready” and which are at risk of ethical or safety lapses; and a report that cross-references assessment results with specific professional development modules, so you aren’t paying for “AI 101” for teachers who are already advanced users.
As you ponder this option, understand that instructional technology coordinators are bombarded with consultants promising AI literacy through workshops and webinars. It is difficult to filter the signal from the static. Valid and reliable AI literacy assessments provide a mechanism to match need with claim, ensuring districts spend wisely.
AI Literacy Coach for Educators
The need to upskill teachers in AI literacy is apparent. Accordingly, I have created a free Custom GPT called the AI Literacy Coach for Educators.
I identified the domains and descriptors from all of the AI literacy frameworks mentioned in this post and used them to create a progressive, adaptive tutor that builds your AI skill set through 10-minute sessions based on classroom scenarios, simulations, and real student work.
You can use my free program to simply improve your knowledge and fluency with AI tools and classroom best practices, or you can use it to prepare yourself should your school or district decide to assess those very same skills. Did I mention it’s free?
Final Thoughts
Most U.S. districts already have some AI tools in classrooms, but leaders lack reliable data on whether teachers can use them well, safely, and ethically. Adapt AI appears to be the first nationally scaled assessment system designed to fill that gap with objective, competency-based insights instead of surveys or confidence measures alone.
Participating districts are encouraged to translate results into a professional learning architecture instead of treating this as a compliance checkbox. Ideally, districts will use the aggregated data from these assessments to:
Prioritize professional development based on actual needs
Identify ethical or safety risk areas before scaling AI
Build more coherent AI rollout plans
Communicate capability and readiness to boards and communities
This assessment signals a shift toward objective AI competency measurement in K-12 education. State education agencies, workforce frameworks, and district policy are increasingly talking about defined AI literacy standards and measurable outcomes. In fact, the U.S. Department of Labor released a new framework in February.
If states or accrediting bodies decide to include AI literacy explicitly in teaching standards, ETS has already positioned itself with a validated tool. I’m sure its competitors will be quick to follow.
I wrote earlier this year about the need to create a continuum that begins with AI literacy and proceeds to AI fluency. That structure could lead districts and states to develop a tiered AI credentialing system, along the lines of a progression from AI aware (stage 1) to AI practitioner (stage 2) to AI instructional leader (stage 3).
Over the years, I have likened the assessment process to competitive sports, my default analogy for most things in life. Basketball players know the rules of the game and where the basket is. We need to mimic those conditions in our demands for teacher AI literacy. Assessments that reliably measure these competencies ensure that teachers understand both the rules of the game and where to aim.
David Ross is a leading voice in AI and education. Explore more of his writing at https://davidpblross.substack.com/