Fair clustering with \((\alpha,\beta)\)-proportionality constraints requires every cluster to satisfy lower and upper bounds on the representation of each sensitive group. Recent universal-coreset results enable scalable optimization by compressing the dataset into a small weighted subset that preserves constrained clustering costs simultaneously for all center sets and all feasible group-cardinality (coloring) constraints. In many applications, however, the sensitive attributes that motivate fairness constraints also require formal privacy protection, and releasing even a small coreset can leak membership and group information. We propose a modular framework for differentially private universal coresets for \((\alpha,\beta)\)-fair \(k\)-median and \(k\)-means in metric and Euclidean spaces. The framework consists of two stages: (i) construct a randomized \(\eta/2\)-universal coreset via random sampling and reweighting; (ii) release a privatized coreset by perturbing only the weight vector using the Gaussian mechanism, followed by post-processing (nonnegativity and mass renormalization). We prove that the released summary is \((\varepsilon,\delta)\)-differentially private and, with high probability, preserves constrained costs for all center sets \(C\) and coloring constraints \(M\) up to a multiplicative \((1\pm\eta)\) factor plus an additive privacy term that depends on the metric diameter bound, coreset size, and an \(\ell_2\) weight-sensitivity parameter. We further extend the method to streaming via merge-and-reduce and analyze privacy under continual observation using advanced composition and amplification-by-subsampling. The resulting DP universal coresets allow multi-query fair clustering and interactive constraint tuning without repeated access to the raw sensitive dataset.
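Stage (ii) of the framework, which perturbs only the coreset weight vector and then post-processes, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the \(\ell_2\) sensitivity `sensitivity` of the weight vector and the publicly known total mass `public_mass` are assumptions supplied by the caller, and the noise scale uses the standard Gaussian-mechanism calibration \(\sigma = \Delta_2 \sqrt{2\ln(1.25/\delta)}/\varepsilon\).

```python
import numpy as np

def privatize_weights(weights, epsilon, delta, sensitivity, public_mass, rng=None):
    """Release a privatized weight vector via the Gaussian mechanism.

    `weights`     : nonnegative coreset weights (one per coreset point).
    `sensitivity` : assumed l2 sensitivity of the weight vector under
                    addition/removal of one data point (hypothetical input).
    `public_mass` : total weight to renormalize to; assumed public here
                    (e.g., the dataset size, if it is not itself sensitive).
    """
    rng = np.random.default_rng() if rng is None else rng
    weights = np.asarray(weights, dtype=float)

    # Gaussian mechanism: calibrate sigma for (epsilon, delta)-DP.
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    noisy = weights + rng.normal(0.0, sigma, size=weights.shape)

    # Post-processing (does not affect the privacy guarantee):
    # clip to enforce nonnegativity, then renormalize total mass.
    noisy = np.clip(noisy, 0.0, None)
    total = noisy.sum()
    if total > 0.0:
        noisy *= public_mass / total
    return noisy
```

Because clipping and renormalization are deterministic functions of the noisy output, the released weights remain \((\varepsilon,\delta)\)-differentially private by the post-processing property; only the added Gaussian noise consumes the privacy budget.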