Four Horizontal Sectors Regulating the Future of AI
The idea that AI technologies must be regulated is a common sentiment. Most governments, AI product developers, and even ordinary users of AI products agree on this. Unfortunately, the best way to regulate this rapidly growing field is an unsolved puzzle.
If left unchecked, AI technologies can negatively disrupt our way of life and threaten our existence. But how can governments navigate the labyrinth of challenges that comes with this rapidly evolving field?
1. Data Privacy and Protection Regulations
One of the primary concerns with AI technologies is data privacy and security. Artificial intelligence systems are data-hungry machines. They need data to operate, more data to be efficient, and even more data to improve. While this isn't a problem in itself, how this data is sourced, the nature of that data, and how it is processed and stored are among the biggest talking points surrounding AI regulation.
Considering this, the logical path to take is to put in place strict data privacy regulations that govern data collection, storage, and processing, as well as the rights of individuals whose data is being used to access and control it. Questions these regulations would likely address include:
- What kind of data can be collected?
- Should some private data be considered off-limits to AI systems?
- How should AI companies handle sensitive personal data, such as health records or biometric information?
- Should AI companies be required to implement mechanisms for individuals to request the deletion or correction of their personal data easily?
- What are the consequences for AI companies that fail to comply with data privacy regulations? How should compliance be monitored, and how should enforcement be ensured?
- Perhaps most importantly, what standards should AI companies implement to ensure the safety of the sensitive information they possess?
These questions, and a few others, formed the crux of why ChatGPT was temporarily banned in Italy. Unless these concerns are addressed, the artificial intelligence space could become a Wild West for data privacy, and Italy's ban might turn out to be a template for bans by other countries worldwide.
2. Development of an Ethical AI Framework

AI companies frequently boast about their commitment to ethical guidelines in developing AI systems. On paper, at least, they are all proponents of responsible AI development. In the media, Google execs have emphasized how seriously the company takes AI safety and ethics. Similarly, "safe and ethical AI" is a mantra for OpenAI's CEO, Sam Altman. These commitments are commendable.
But who’s making the rules? Who decides which AI ethical guidelines are good enough? Who decides what safe AI development looks like? Right now, every AI company seems to have its own spin on responsible and ethical AI development. OpenAI, Anthropic, Google, Meta, Microsoft, everyone. Simply relying on AI companies to do the right thing is dangerous.
The consequences of an unchecked AI space can be catastrophic. Letting individual companies decide which ethical guidelines to adopt and which to discard is akin to sleepwalking our way into an AI apocalypse. The solution? A clear ethical AI framework that ensures:
- AI systems do not unfairly disadvantage or discriminate against individuals or certain groups based on race, gender, or socioeconomic status.
- AI systems are safe, secure, and reliable, and minimize the risk of unintended consequences or harmful behavior.
- AI systems are built with the broader societal impact of AI technologies in mind.
- Humans retain ultimate control of AI systems, and their decision-making remains transparent.
- AI systems are intentionally limited in ways that are advantageous to humans.
3. Creation of a Dedicated AI Regulatory Agency

Similarly, to ensure that things don't go wrong in the AI space, a dedicated agency akin to the FDA or the NRC is necessary as AI continues to make aggressive inroads into all areas of our lives. Unfortunately, in-country AI regulation is a tricky issue. Without cross-border cooperation, the work of any dedicated regulatory agency is likely to be agonizingly hard. Just as the US's Nuclear Regulatory Commission (NRC) needs to work hand in hand with the International Atomic Energy Agency (IAEA) to be at its best, any in-country AI regulatory agency would also need an international analog.
Such an agency would be responsible for the following:
- Development of AI regulations
- Ensuring compliance and enforcement
- Overseeing the ethical review process of AI projects
- Collaboration and cross-country cooperation on AI safety and ethics
4. Addressing Copyright and Intellectual Property Concerns

AI also poses a thorny challenge to existing copyright and intellectual property laws. How? Well, many of today's AI systems are trained on copyrighted materials: copyrighted articles, copyrighted songs, copyrighted images, and so on. That's how tools like ChatGPT, Bing AI, and Google Bard can do the impressive things they do.
While these systems clearly take advantage of people's intellectual property, the way they do it isn't much different from a human reading a copyrighted book, listening to copyrighted songs, or looking at copyrighted images.
You can read a copyrighted book, learn new facts from it, and use those facts as a foundation for your own book. You can also listen to a copyrighted song for inspiration to create your own music. In both cases, you used copyrighted materials, but it doesn’t necessarily mean the derivative product infringes on the copyright of the original.
While this logic conveniently explains away the mess AI makes of copyright law, it still hurts the owners of copyrights and intellectual property. Considering this, regulations are necessary to:
- Clearly define the liability and responsibilities of all parties involved in the lifecycle of an AI system. This includes clarifying the roles of every party, from AI developers to end users, to ensure that responsible parties are held accountable for any copyright infringement or intellectual property violations committed by AI systems.
- Reinforce existing copyright frameworks and perhaps introduce AI-specific copyright laws.
- Redefine the concepts of fair use and transformative work in the context of AI-generated content. Clearer definitions and guidelines are needed so the AI space can continue to improve while respecting copyright boundaries; the goal is to strike a balance between innovation and preserving the rights of original creators.
- Create clear pathways for collaboration with rights holders. If AI systems are going to use people's intellectual property anyway, there should be clear pathways or frameworks for AI developers and rights holders to collaborate, especially on financial compensation when derivative works built on that intellectual property are commercialized.
While artificial intelligence has emerged as a promising fix for many of our societal problems, AI itself is rapidly becoming a problem in urgent need of fixing. It's time to take a step back, reflect, and make the necessary corrections to ensure AI's positive impact on society. We desperately need to recalibrate our approach to building and using AI systems.
- Title: Four Horizontal Sectors Regulating the Future of AI
- Author: Brian
- Created at : 2024-09-06 23:30:22
- Updated at : 2024-09-07 23:30:22
- Link: https://tech-savvy.techidaily.com/four-horizontal-sectors-regulating-the-future-of-ai/
- License: This work is licensed under CC BY-NC-SA 4.0.