The Complex Reality of AI Regulation

Aneesh Patil
9 min read · Oct 23, 2023


Photo by Possessed Photography on Unsplash

2 Trillion USD ($$$!!).

A whopping $2 trillion is the estimated market value for Artificial Intelligence in 2030. This is a 20x increase from where it sits today — a cool $100 billion.

For a simple comparison, the mobile phone industry, maker of one of the most ubiquitous products on the planet, currently has a market value of around $500 billion and is expected to reach approximately $800 billion by 2030. That is still $1.2 trillion shy of the AI industry!

The point is this: AI is not just growing linearly; it is compounding, and at a rate we cannot really comprehend. Jack Clark, former Policy Director at OpenAI and co-founder of Anthropic (maker of Claude, a ChatGPT rival), recently tweeted that AI “has started to display ‘compounding exponential’ properties in 2021/22 onwards”. He also predicts that progress in this field will “intuitively feel nuts” in the coming years. For all we know, Clark’s eerie prediction could come true: the compound annual growth rate of the AI market is estimated at around 40%. Compounding is often called “the greatest force in the universe”, but if leaders in this space fail to contain AI’s growth, we could very well find ourselves walking a tightrope while battling the threats the technology poses.
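To make the compounding point concrete, here is a quick back-of-the-envelope sketch in Python. The dollar figures and the 2023–2030 horizon are the rough estimates cited above (they come from different sources, so they do not line up perfectly); the point is the shape of the curve, not the exact numbers.

```python
# Back-of-the-envelope compounding illustration (not a forecast).
# Assumed figures: ~$100B market today, ~$2T projected by 2030.

base_b = 100      # current AI market size, in billions of USD
target_b = 2000   # projected 2030 market size, in billions of USD
years = 7         # assumed 2023 -> 2030 horizon

# Growth rate implied by going from $100B to $2T in 7 years
implied_cagr = (target_b / base_b) ** (1 / years) - 1
print(f"Implied CAGR: {implied_cagr:.0%}")  # roughly 53% per year

# What the widely cited ~40% CAGR looks like year over year
value = base_b
for year in range(2024, 2031):
    value *= 1.40
    print(year, f"${value:,.0f}B")
```

Whichever estimate you prefer, the curve bends upward fast; that is what “compounding exponential” growth looks like in plain numbers.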

Let’s briefly explore some of the potential risks of AI: the spread of misinformation, bias, loss of employment, and the ease with which powerful AI tools can be accessed. We will also look at why regulation is important and how it fits into this problem space.

Recently, an image went viral on social media, with many verified accounts reposting it on Twitter. It depicted an explosion near the Pentagon in Arlington, Virginia. The circulation of this image caused a stir not only on social media but also in the stock markets, as the S&P 500 dipped 0.3% following the post.

Moreover, in March 2023, several images of former President Donald Trump’s arrest spread like wildfire on social media. One of these images, posted on Twitter by a user named ‘TheInfiniteDude,’ garnered nearly a million views. The post trended and was reposted by several verified users, further fueling the frenzy. It later came to light that these were fake images generated by AI tools such as Midjourney and Stable Diffusion. At the time, few people appreciated how powerful these tools could be when combined with social media.

Another example dates back to May 2022, when a video emerged featuring Elon Musk endorsing a crypto trading platform called BitVex. In the video, he claimed that anyone depositing money on the platform would receive “exactly 30%” returns. The video turned out to be a deepfake, and the crypto platform it promoted was fake as well. Although the scam itself was not very successful, since Musk himself replied to the tweet confirming the video was not authentic, it serves as a reminder that in the coming years there could be highly realistic videos capable of deceiving unsuspecting internet users.

These incidents highlight the dangers posed by AI-generated content, especially when it comes to misinformation and manipulation.

In an ideal world, we would expect AI algorithms to efficiently perform various tasks while we sit back and relax. The reality is far more complex because of the bias that can emerge while training AI models. A notable case came to light in 2018, when it was reported that Amazon had used an AI engine to recruit top talent for the organization. Things did not go as planned. Amazon trained the model on historical job applications submitted over the course of a decade. Because most of those applications came from men, a reflection of the company’s existing gender disparity, the algorithm began associating the word ‘women’ on a resume with lower scores. Consequently, many qualified female applicants were filtered out before a human even had the chance to review their resumes. Although Amazon attempted to modify the AI to ensure a gender-neutral process, it could not guarantee that the model would not learn the same bias again, and the recruiting system was discontinued.

This example is not an isolated incident. Another alarming instance of bias in AI can be found in the United States judicial system. Courts across the country used a tool called COMPAS, developed by a company called Northpointe, to assess the risk level of defendants, categorizing them as ‘low,’ ‘medium,’ or ‘high’ risk based on factors such as criminal history and personality traits. A ProPublica investigation found that COMPAS “falsely flagged black defendants as future criminals, incorrectly labeling them at nearly twice the rate of white defendants.”

Both gender and racial biases have significant detrimental effects on the progress of our society. In the first example, candidates were denied equal opportunities to advance their careers because of biased AI evaluation. In the second, defendants were unfairly labeled as potential future criminals based on their race. In a world where diversity and equality are valued, the biases inherited by AI models pose risks that must be addressed promptly; resolving them is imperative for the advancement and fairness of our society.
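The COMPAS finding is essentially the result of a fairness audit: you compare error rates across demographic groups instead of looking only at overall accuracy. Below is a minimal, hypothetical sketch of that kind of check in Python. The group names, fields, and numbers are made up for illustration and are not the actual COMPAS data.

```python
# Minimal fairness-audit sketch: compare false-positive rates across groups.
# All records below are synthetic and illustrative, not the COMPAS dataset.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("group_a", True,  False), ("group_a", True,  False), ("group_a", False, False),
    ("group_a", True,  True),
    ("group_b", True,  False), ("group_b", False, False), ("group_b", False, False),
    ("group_b", True,  True),
]

stats = defaultdict(lambda: {"false_pos": 0, "negatives": 0})
for group, predicted_high_risk, reoffended in records:
    if not reoffended:  # only people who did not reoffend can be false positives
        stats[group]["negatives"] += 1
        if predicted_high_risk:
            stats[group]["false_pos"] += 1

for group, s in stats.items():
    rate = s["false_pos"] / s["negatives"]
    print(f"{group}: false-positive rate = {rate:.0%}")
```

A large gap between the two rates (here roughly two to one) is exactly the kind of disparity the COMPAS analysis reported, even when overall accuracy looks reasonable.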

AI engines and chatbots are rapidly disrupting the workforce: many people have already lost their jobs, and many more are predicted to lose them in the coming years. 2023 has not been a positive year for much of the workforce due to mass layoffs, from the tech industry to the banking industry. A recent report showed that layoffs rose roughly 20% from April to May 2023, reaching about 80,000 cuts. Of these, 3,500 were attributed to AI replacing employees; in other words, roughly 4–5% of the total cuts happened because companies believe AI can do an equivalent (or better) job than humans. While these jobs mostly involved writing and clerical tasks, it really makes you question how capable these models have already become at writing copy and answering customer service questions; and they are only getting better. Furthermore, a paper published by Goldman Sachs in March 2023 predicts AI could “expose the equivalent of 300 [million] full-time jobs to automation”. It also notes that “18% of work globally could be automated by AI on an employment-weighted basis”. These examples show why companies would choose AI over humans: lower cost, fewer errors, and the ability to produce substantial output on repetitive tasks. The shift can also bring significant repercussions for individuals, families, and communities.

The issues illustrated above are happening in real time, directly impacting us and those around us. The main concern is that we cannot accurately predict where AI will be in one year, three years, or ten years. Before considering how to regulate AI, we must understand that the body governing it has to ensure AI is developed ethically, make its processes transparent, and constantly adapt regulations in real time depending on the stage of innovation. This task is harder than it seems, and it presents multiple pain points: the potential to stifle innovation, the difficulty of defining sound technical rationales and boundaries, and the question of who will formulate these measures in the first place.

Photo by Joshua Sukoff on Unsplash

If a government body like the FDA or FAA were formed to control AI advancements, there would be some potential concerns. First, agency members may not be well versed in the intricacies of the software being developed and could write regulations that are too stringent, hindering growth; the reverse could be true as well, with boundaries so loose that they fail to prevent damage. In May 2023, Senator Chuck Schumer proposed a framework to regulate AI systems by getting companies to allow “independent experts to review and test AI technologies ahead of a public release or update”. Furthermore, OpenAI CEO Sam Altman said in a May 2023 Senate hearing that a government agency should be created to issue licenses to companies before they release products. Given the pace at which many other government bodies function, this could be a massive bottleneck to organic advancement. Moreover, if the body adopts an approval or licensing process, there could be significant delays, and larger companies would likely account for most approvals at any given time simply due to their volume of output. This could prove to be a major disincentive and barrier to entry for many budding startups, leading to less market competition in the long run.

So what could be the best way to regulate AI? Honestly, there is no clear answer to this question yet and the rules/laws are constantly evolving as we speak. That being said, in my opinion, there are some measures that can be considered for building a reasonable regulatory model.

  • Quantify Risks of Product: A metric system could be developed to gauge the potential ‘risk’ posed by an AI model. For instance, a score out of 100 could be generated from several factors, each assigned a weight; products scoring below a threshold (say, 50) would face stricter scrutiny than those above it. Some influential underlying factors could include the platform of deployment (an AI Chrome extension is less likely to cause damage than an AI model integrated with education or healthcare companies), the type of data required from users (a user’s favorite hobbies are less likely to cause harm than a stored SSN), the accuracy of the AI model (a model with below-par accuracy is more likely to spread misinformation), fairness (a model with fewer biases is safer for public use), and ease of rolling back (a model with more human control over roll-backs is, in theory, safer). The formula for the score would look something like this: (Factor_1 × Weight_1) + (Factor_2 × Weight_2) + … + (Factor_n × Weight_n), where the weights add up to 1 and each factor score is out of 100. A short code sketch of this scoring idea follows this list.
  • Artificial Prod Testing + Chaos Testing 2.0: Subject the product to a testing environment that uses artificial production data. Under these conditions, the data consumed, stored, and processed by the AI model can be monitored for safety checks. By deliberately setting the model up to fail, we can also observe its post-malfunction outputs and catch polluted data before it is fed back into the model.
  • Documentation for Release Licensure: It is important for a diverse group of leaders in the AI space (Big Tech AI, healthcare AI, edtech AI, finance AI, etc.) to come together and discuss the latest development standards and assessment metrics. Today, government agencies such as the FCC (Federal Communications Commission) and the NRC (U.S. Nuclear Regulatory Commission) make the grounds for obtaining a license explicit on their websites. If a licensing process is to be followed for releasing AI products, then a rulebook should be publicly available as well. AI leaders could meet at a forum once a year to amend the guidelines on AI development, helping engineers design their models accordingly. This also promotes informational transparency between regulators and companies and makes the approval process less subjective. It would also be ideal to keep the rulebook open to feedback, since AI is a constantly evolving space and listening to feedback is the best way to avoid over- or under-regulating.
  • Invest in Model Testing Tools: It is difficult for government officials, and sometimes even engineers, to test all potential edge cases. There are already tools on the market that help automate testing of codebases. More government investment in and focus on such startups could boost the ‘AI for testing’ space and foster innovative approaches to catching bugs in AI models.
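For the first measure above, here is a minimal sketch of how such a weighted risk score might be computed. The factor names, example scores, weights, and the 50-point threshold are all illustrative assumptions, not an established standard.

```python
# Rough sketch of the weighted risk score described above.
# Factor names, scores, weights, and threshold are illustrative assumptions.

# Each factor is scored out of 100, where higher means safer.
factor_scores = {
    "deployment_surface": 70,   # e.g. browser extension vs. healthcare integration
    "data_sensitivity":   40,   # hobbies vs. SSNs and other personal identifiers
    "model_accuracy":     85,   # low accuracy raises misinformation risk
    "fairness":           60,   # measured bias across demographic groups
    "rollback_control":   90,   # how easily humans can pull the model back
}

# Weights express how much each factor matters; they must sum to 1.
weights = {
    "deployment_surface": 0.25,
    "data_sensitivity":   0.25,
    "model_accuracy":     0.20,
    "fairness":           0.20,
    "rollback_control":   0.10,
}

assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"

score = sum(factor_scores[name] * weights[name] for name in factor_scores)
print(f"Composite safety score: {score:.1f} / 100")

THRESHOLD = 50
print("Stricter review required" if score < THRESHOLD else "Standard review")
```

The arithmetic is the easy part; the hard part for any regulator would be agreeing on which factors matter, how to measure them, and how heavily to weight each one.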

The best policies will strike a balance between enabling AI innovation and preventing destructive applications of the technology. It is also crucial to automate as much of the process as possible, such as testing and risk assessments, while giving equal opportunities to large, medium, and small enterprises. This will help cut down approval times and lower barriers to entry for smaller companies.

There are new, fascinating AI products released almost every day and this is just a great time to be alive to experience the genius behind these products! I hope to see safe and ethical innovation continue in this space with true business problems being solved.

I hope you enjoyed this blog! Check back for similar tech-related content and don’t hesitate to reach out with any questions in the comments or at apatil6@wisc.edu!

Photo by Roman Synkevych on Unsplash


Written by Aneesh Patil

Exploring the intersection of tech, AI, and the human experience. Writing about the future, present, and everything in between. 🌎
