Research on Consumer Data Sovereignty and Algorithmic Fairness

Authors

  • Qiaoxiu Li, Jinan Tomas School, Jinan 250000, Shandong, China

DOI:

https://doi.org/10.53469/ijomsr.2025.08(04).02

Keywords:

Data sovereignty; Algorithmic fairness; Personal information protection; Algorithmic discrimination; Data governance

Abstract

In the data-driven digital economy, the tension between consumer data sovereignty and algorithmic fairness has become a core governance issue. This paper systematically examines the legal empowerment pathways of data sovereignty, technical implementation schemes for algorithmic fairness, and the dynamic interplay between the two. The research demonstrates that realizing data sovereignty requires transcending the limitations of the "informed consent" framework, while algorithmic fairness requires shifting from the myth of technological neutrality to value-embedded design. Governing the two synergistically demands a novel trinity framework integrating rights, technology, and institutions. Future studies should focus on defining data property rights, verifying the fairness of explainable AI (XAI), and resolving cultural conflicts in transnational platform regulation.
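To make one of the "technical implementation schemes for algorithmic fairness" concrete, the sketch below computes a demographic-parity gap, a common group-fairness metric discussed in the fairness literature cited in the references (e.g., Mehrabi et al., 2021). This is an illustrative sketch, not drawn from the paper itself; the function name, example data, and group labels are all assumptions.

```python
# Minimal sketch of a demographic-parity check: the rate of favorable
# outcomes (e.g., loan approvals) should be similar across consumer groups.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the max difference in positive-prediction rates across groups."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical credit-approval decisions for two consumer groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(f"Demographic parity gap: {demographic_parity_gap(preds, groups):.2f}")
# Group A is approved at rate 0.75, group B at 0.25, so the gap is 0.50.
```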

References

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.

Hoofnagle, C. J., van der Sloot, B., & Zuiderveen Borgesius, F. J. (2019). The European Union General Data Protection Regulation: What it is and what it means. Information & Communications Technology Law, 28(1), 65–98.

O’Hara, K. (2020). Data trusts: Ethics, architecture and governance for trustworthy data stewardship. WebSci '20: Proceedings of the 12th ACM Conference on Web Science, 1–9.

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.

Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6), 1–35.

Dwork, C., McSherry, F., Nissim, K., & Smith, A. (2016). Calibrating noise to sensitivity in private data analysis. Journal of Privacy and Confidentiality, 7(3), 17–51.

Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104(3), 671–732.

Kairouz, P., et al. (2021). Advances and open problems in federated learning. Foundations and Trends® in Machine Learning, 14(1–2), 1–210.

Bradford, A. (2020). The Brussels effect: How the European Union rules the world. Oxford University Press.

Reisman, D., Schultz, J., Crawford, K., & Whittaker, M. (2018). Algorithmic impact assessments: A practical framework for public agency accountability. AI Now Institute.

Citron, D. K., & Pasquale, F. A. (2014). The scored society: Due process for automated predictions. Washington Law Review, 89(1), 1–33.

Ienca, M., & Andorno, R. (2017). Towards new human rights in the age of neuroscience and neurotechnology. Life Sciences, Society and Policy, 13(1), 5.

Bibri, S. E. (2022). The social shaping of the metaverse as an alternative to the imaginaries of data-driven smart cities. Springer.

Published

2025-04-08

Section

Articles