Authors: Mkhwanazi, Sthembiso N; Naidoo, Privolin; Moodley, Avashlin; Khumalo, Sandile; Mnisi, Given; Mothomoholo, Mamolapi
Date accessioned: 2025-12-17
Date available: 2025-12-17
Date issued: 2025
ISBN: 0-7988-5673-4
URI: http://hdl.handle.net/10204/14530

Abstract: This study investigates the use of Retrieval-Augmented Story Generation (RASG) to produce culturally relevant and educational children's stories in isiZulu, a low-resource yet widely spoken South African language. To address the scarcity of high-quality narrative data, we combined translation-based data augmentation with fine-tuning of multilingual large language models (LLMs), including GPT-4o-mini and LLaMA 3B. A retrieval mechanism was integrated using multilingual-e5-large embeddings, which, despite lacking explicit isiZulu support, enabled contextual story generation from Wikipedia-derived passages. Qualitative evaluations involving native isiZulu speakers revealed that fine-tuned models outperformed baseline systems in terms of grammatical accuracy, coherence, and cultural relevance, though challenges such as language mixing and prompt sensitivity remained. A comparative English baseline using a non-fine-tuned LLaMA 3B model highlighted the performance disparities between high- and low-resource language settings. Our findings underscore the importance of targeted fine-tuning, curated datasets, and embedding models that better represent African languages. This research contributes to the development of AI-driven literacy tools for underrepresented linguistic communities and highlights future directions for improving story generation in low-resource contexts.

Description: Fulltext
Language: en
Keywords: Generative artificial intelligence; Retrieval-Augmented generation; Retrieval-Augmented story generation; Natural language processing
Title: Retrieval-Augmented story generation for isiZulu: Enhancing literacy through AI in low-resource contexts
Type: Conference Presentation
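The retrieval step the abstract describes (embedding Wikipedia-derived passages, selecting those most similar to a topic query, and using them as context for story generation) can be sketched as below. This is a minimal illustration only: the `embed`-style vectors are toy stand-ins for multilingual-e5-large embeddings, the passage texts are invented, and the prompt wording is not from the study.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve_passages(query_vec, passages, k=2):
    """Rank (vector, text) pairs by similarity to the query vector; keep the top k texts."""
    ranked = sorted(passages, key=lambda p: cosine_similarity(query_vec, p[0]), reverse=True)
    return [text for _, text in ranked[:k]]

def build_story_prompt(topic, context_passages):
    """Assemble a generation prompt from retrieved context (wording is illustrative)."""
    context = "\n".join(f"- {p}" for p in context_passages)
    return (f"Using the following background:\n{context}\n"
            f"Write a children's story in isiZulu about {topic}.")

# Toy 3-dimensional vectors stand in for real multilingual-e5-large embeddings.
passages = [
    ([0.9, 0.1, 0.0], "Passage about cattle-herding traditions."),
    ([0.1, 0.9, 0.0], "Passage about city transport."),
    ([0.8, 0.2, 0.1], "Passage about rural village life."),
]
query = [1.0, 0.0, 0.0]  # embedding of the story topic
top = retrieve_passages(query, passages, k=2)
prompt = build_story_prompt("ubuntu", top)
```

In the actual pipeline the prompt would then be passed to the fine-tuned LLM (GPT-4o-mini or LLaMA 3B per the abstract); here the prompt string is simply the output of the sketch.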