Authors: Leope, NR; Eloff, JHP; Dlamini, Thandokuhle M
Date accessioned: 2026-01-14
Date available: 2026-01-14
Date issued: 2025
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2025.3635532
URI: http://hdl.handle.net/10204/14584

Abstract: Federated Learning (FL) enables decentralized model training while maintaining the privacy of the underlying individual datasets. FL can therefore address intrinsically privacy-sensitive challenges in domains such as healthcare and finance. However, privacy preservation usually comes at the cost of the usefulness (i.e., utility) of the information, and the research problem is how to optimize this inverse trade-off between privacy and utility. This study presents an experimental comparative analysis, in a synthetic healthcare setting, of different noise types (Gaussian, Laplacian, Poisson, Uniform, and Exponential) injected on the client side at the input-feature level prior to local training to enhance privacy in FL. We explore the impact of these noise types on the privacy–utility trade-off in FL data. The findings indicate that Laplacian, Poisson, and Exponential noise provide stronger obfuscation, which often comes at the cost of utility; this confirms and amplifies the trade-off between the usefulness of the data and its privacy. More importantly, the findings also show that Gaussian noise generally offers the best trade-off between privacy and utility on this task, suggesting a practical default for privacy-aware FL in healthcare-like environments.

Language: en
Keywords: Federated Learning (FL); Datasets; Gaussian noise type; Laplacian noise type; Poisson noise type; Uniform noise type; Exponential noise type; Synthetic healthcare setting; Data poisoning; Differential privacy; Malicious data injection; Privacy–utility trade-off
Title: Privacy versus utility in federated learning: An experimental analysis of noise injection techniques
Type: Article
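The client-side, input-feature-level noise injection described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function name `inject_noise`, the single `scale` parameter, and the zero-centring of the one-sided Poisson and Exponential distributions are illustrative assumptions; a real deployment would calibrate the noise to a privacy budget.

```python
import numpy as np

def inject_noise(features, noise_type, scale=0.1, rng=None):
    """Perturb a client's local feature matrix before local FL training.

    `scale` is an illustrative magnitude parameter (std. dev. for Gaussian,
    diversity for Laplacian, rate for Poisson, half-width for Uniform,
    mean for Exponential); its calibration is an assumption, not from
    the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    shape = features.shape
    if noise_type == "gaussian":
        noise = rng.normal(loc=0.0, scale=scale, size=shape)
    elif noise_type == "laplacian":
        noise = rng.laplace(loc=0.0, scale=scale, size=shape)
    elif noise_type == "poisson":
        # Poisson samples are non-negative; subtract the mean (lam)
        # so the perturbation is zero-centred -- a design choice here.
        noise = rng.poisson(lam=scale, size=shape) - scale
    elif noise_type == "uniform":
        noise = rng.uniform(low=-scale, high=scale, size=shape)
    elif noise_type == "exponential":
        # Exponential samples are also one-sided; centre at their mean.
        noise = rng.exponential(scale=scale, size=shape) - scale
    else:
        raise ValueError(f"unknown noise type: {noise_type}")
    return features + noise

# Usage: a client perturbs its synthetic local data, then trains locally
# on X_private so the raw features never leave the device unperturbed.
rng = np.random.default_rng(0)
X_local = rng.normal(size=(32, 8))  # hypothetical client feature matrix
X_private = inject_noise(X_local, "gaussian", scale=0.1, rng=rng)
```

Centring the Poisson and Exponential noise keeps the perturbed features unbiased; leaving them one-sided, as some schemes do, would shift the feature means and further degrade utility.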