The FAIR Guiding Principles have become a global benchmark for evaluating the quality and reusability of research data. Yet, in many contexts, evidence about FAIR compliance rests primarily on self-reported practices rather than on empirical assessment of actual datasets. Building on prior survey-based work that documented limited data sharing and weak documentation among South African researchers, this paper presents a systematic audit of research datasets hosted in institutional and disciplinary repositories at South African universities. We develop an operational FAIRness assessment framework that scores datasets along the core dimensions of findability, accessibility, interoperability, and reusability, using observable indicators such as persistent identifiers, metadata richness, licensing, file formats, and documentation. A stratified sample of datasets is drawn from multiple repositories and evaluated by trained coders using a standardized rubric. The analysis addresses three questions: (i) how FAIR research datasets are in practice; (ii) how FAIRness varies across disciplines, repository types, and publication years; and (iii) whether the observed FAIRness patterns are consistent with previously reported attitudes and practices regarding data management and sharing. While we do not report concrete numerical results here, we outline the structure of the empirical analysis and illustrate the kinds of patterns this design can uncover. The study contributes a reusable FAIR auditing methodology and provides an evidence base for improving research data management policies, infrastructure, and training at South African universities and in comparable contexts.