Vision & Values

Reclaiming educational futures in the age of AI through decolonial design, epistemic justice, and digital sovereignty.

Theoretical Foundation

Education AI Futures rests on a theoretical foundation that combines:

Decolonial AI and epistemic justice

Centering marginalized knowledge systems and challenging Western epistemic dominance in AI design.

Speculative methods and futures thinking

Imagining and prototyping alternative educational futures beyond imported models.

Critical data studies and digital sovereignty

Examining power relations in data collection, storage, and governance.

Advanced computational techniques

LLMs, knowledge graphs, and quantum-inspired models grounded in local contexts.

We begin from the recognition that education in Africa has historically been shaped by external epistemic authorities—colonial curricula, imported testing regimes, foreign accreditation systems, and now global AI platforms. These systems encode implicit assumptions about what counts as knowledge, whose language is "standard," and which futures are desirable.

A Future We're Building

A future where AI in education strengthens, rather than erases, African knowledge systems, languages, and pedagogies; where learning technologies are co-designed locally and governed with dignity, fairness, and care.

Core Values

Epistemic Authority & Local Knowledge

We insist that teachers, learners, elders, and local curriculum experts remain primary authorities about what should be taught, assessed, and preserved. AI architectures must be aligned to local curricular logics, not the other way around.

Decolonial Design

We reject the notion that educational AI must be "universal" to be valuable. We explicitly design localized models, datasets, and interfaces that encode African histories, philosophies, and practices, rather than flattening them into generic content.

Data Sovereignty & Anti-Extractivism

Educational data—student essays, exam scripts, lesson plans, assessment logs—should not be silently harvested into private models. We design governance, consent, and benefit-sharing frameworks that ensure communities retain power over their data.

Accessibility & Inclusion

We center learners with disabilities, learners in fragile contexts, and marginalized language communities, ensuring AI serves those systematically excluded by mainstream digital education.

Experimentation & Speculative Methods

Each prototype we build—AI tutor, assessment engine, curriculum recommender—is treated as a "futures-thinking artifact". We use speculative scenarios, design fiction, and participatory futures workshops to explore the ethical and political consequences of each technology.

Theory of Change

Our approach to transforming education AI through iterative, community-centered development.

1. Co-create knowledge infrastructures

Build local curricula graphs, multilingual corpora, and decolonial ontologies with educators, students, and communities.

2. Build and deploy AI tools

Create tools grounded in these infrastructures—tutors, analytics, assessment systems, and generative curriculum tools.

3. Continuously audit systems

Monitor for bias, unfairness, extractivism, and unintended harms using critical data methods and explainable AI.

4. Feed insights back

Inform policy, teacher training, and institutional decision-making with continuous learning and adaptation.
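To make the loop concrete, here is a minimal sketch of how steps 1 and 3 might fit together in code. All names, topics, language codes, and the coverage threshold are hypothetical illustrations, not the project's actual data model: a toy curriculum graph records which languages each topic's materials exist in, and a simple audit flags languages whose coverage falls below a threshold.

```python
# Hypothetical sketch: a tiny curriculum knowledge graph (step 1)
# plus a simple language-representation audit over it (step 3).
# Topic IDs, labels, and the threshold are illustrative only.

from collections import defaultdict

class CurriculumGraph:
    """Toy curriculum graph: topics as nodes, prerequisite edges."""

    def __init__(self):
        self.nodes = {}                # topic_id -> {"label", "languages"}
        self.edges = defaultdict(set)  # topic_id -> set of prerequisite ids

    def add_topic(self, topic_id, label, languages):
        self.nodes[topic_id] = {"label": label, "languages": set(languages)}

    def add_prerequisite(self, topic_id, prereq_id):
        self.edges[topic_id].add(prereq_id)

    def language_coverage(self, language):
        """Fraction of topics with learning materials in `language`."""
        if not self.nodes:
            return 0.0
        covered = sum(1 for m in self.nodes.values()
                      if language in m["languages"])
        return covered / len(self.nodes)

# Step 1: a co-created graph (illustrative entries only).
g = CurriculumGraph()
g.add_topic("counting", "Counting and number sense", ["sw", "en"])
g.add_topic("fractions", "Fractions", ["en"])
g.add_topic("proportion", "Ratio and proportion", ["en"])
g.add_prerequisite("fractions", "counting")
g.add_prerequisite("proportion", "fractions")

# Step 3: audit — flag languages under-represented in the materials.
AUDIT_THRESHOLD = 0.5  # illustrative cutoff
for lang in ("sw", "en"):
    cov = g.language_coverage(lang)
    status = "OK" if cov >= AUDIT_THRESHOLD else "UNDER-REPRESENTED"
    print(f"{lang}: coverage {cov:.0%} -> {status}")
```

In this toy run, Swahili ("sw") materials cover only one of three topics and would be flagged, which is the kind of signal step 4 would feed back into curriculum and data-collection priorities.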

Through this iterative loop, educational stakeholders gain the technical capacity and political literacy needed to negotiate and govern AI, shifting power away from foreign platforms and toward local institutions.