Government AI Regulation: Balancing Interests and Accountability
- Felicita J Sandoval MSc., CFE
- Jun 30, 2024
- 3 min read
Artificial Intelligence (AI) has integrated itself into our society. It has revolutionized businesses and embedded itself in our daily routines. However, AI carries significant responsibilities, and we must examine its ethical, social, and economic ramifications. Government agencies have emerged as pivotal actors in the regulation of AI, striving to strike a balance between safety and innovation.
On September 13, 2023, the titans of the tech industry met with the U.S. Senate in a private gathering to deliberate on the regulation of AI. There was a consensus among the participants that government intervention is imperative for overseeing artificial intelligence. However, one conspicuous absence in this crucial dialogue was the perspective of stakeholders beyond the tech leaders and government officials in the room.
While acknowledging the necessity of such meetings, it remains equally essential to include the voices and opinions of various entities, including businesses, academia, the broader community, and all relevant stakeholders. These diverse viewpoints can contribute significantly to a holistic and well-informed approach to AI governance.
The Stakeholder Theory
In the course of my doctoral program, I have been working on my dissertation on Stakeholder Theory and the Global Perspective on AI Accountability. My research is centered around exploring how Stakeholder Theory can be effectively applied to scrutinize the evolution and efficacy of international AI accountability agreements, norms, and standards. During my investigation, a pivotal moment came when I encountered a scholarly article by the distinguished R. Edward Freeman that delves into the dynamics of stakeholder collaboration. This discovery made me realize the relevance that Stakeholder Theory holds for AI regulation and the establishment of robust frameworks for this transformative technology.
Stakeholder Theory can be used to analyze and solve complex problems through social collaboration that organizations cannot solve independently (J Bus Ethics, 2010). This is done in the interests of not only shareholders but all stakeholders, including customers, employees, and communities.
The Need for AI Regulation
AI systems can profoundly affect society, individuals, and businesses. We have seen the influence that AI systems have had in transportation, communication, finance, and other industries. For that reason, the need for regulation grows out of concerns about bias, transparency, accountability, and fairness.
Governments should find a balance between the interests of all stakeholders. Stakeholder Theory can provide direction and an organized structure for identifying, comprehending, and prioritizing these concerns. Big industry leaders and the government can shape the regulatory process in collaboration with AI developers, tech companies, and technology associations, discussing their concerns and ideas. This ensures that regulations reflect broad input from the industry and reach a successful outcome.
Consumer rights are crucial and should remain front of mind. Regulations must safeguard privacy, prevent discrimination, and provide a high level of assurance that AI systems are transparent and accountable. Involving consumer rights groups and agencies gives voice to the concerns of AI users.
Society has embraced AI systems as a positive and valuable part of daily life, yet many people also fear losing their jobs or becoming victims of biased decisions. The government can collaborate with academia and civil rights organizations to discuss the impact of AI and incorporate the outcome into regulations.
Accountability Assurance
For regulations to be effective, accountability must be at their center, and Stakeholder Theory can play a significant role in promoting it. Stakeholders should understand AI systems: how they function and the risks they may pose. Regulations should enforce transparency by requiring AI developers to disclose relevant information to stakeholders.
Establishing oversight mechanisms that include committees, consumers, and society is essential. These will allow for a better and more efficient way to monitor compliance with regulations and hold big tech leaders and AI developers accountable.
Mechanisms for collecting feedback on AI regulations should be implemented to promote collaboration and innovation. This can ensure more effective regulation that can adapt to change.
AI will become part of our daily lives just as cell phones and vehicles have. Government AI regulation is crucial for the growth of society as well as our safety. Applying Stakeholder Theory can provide a significant benefit in balancing the interests of all stakeholders and promoting responsible and accountable AI use and development.
Felicita Sandoval is a professional in cybersecurity and AI, serving as a Cybersecurity Professional at LiveRamp and a doctoral student at Colorado Technical University. Her work focuses on protecting digital assets, compliance, and AI research. An effective speaker, she often discusses AI and cybersecurity career development. As Co-Founder of Latinas in Cyber (LAIC), she promotes diversity in tech through advocacy, mentorship, and networking. Felicita also hosts the Cyber C-Suite x La Jefa Interview Series, engaging with industry leaders on AI and cybersecurity.