What Section 230 Means for Your Money

AI has taken the world by storm over the last few years and the U.S. legal system is scrambling to catch up.

As a growing number of companies rapidly roll out AI-powered features, liability concerns are mounting. Many want to know who will be responsible if AI outputs lead to damages.

While Section 230 provides a safe harbor for online service companies that host user-generated content, it's unclear whether it will guard against liability for AI-generated content.

And if you act on bad AI-generated information about your personal finances and it ends up costing you money, what could that mean for you?

Here’s what you should know.

What is Section 230?

Section 230 was enacted as part of the Communications Decency Act of 1996 and provides limited federal immunity to providers of interactive computer services.

This means that companies like social media platforms, search engines and forums that host user-generated content aren’t liable for the information shared by their users. They are, however, liable for information they develop and activities they partake in that are unrelated to third-party content.

“Section 230 was critical for the evolution of search engines and social media platforms. Especially in the early days, before we had a better ability to detect potentially toxic or offensive content. The liability protection allowed for high growth,” Andrew Gamino-Cheong, chief technology officer and co-founder of AI governance company Trustible, said in an email.

[See: Artificial Intelligence Stocks: The 10 Best AI Companies.]

Without Section 230’s protections, Gamino-Cheong said, large platforms would require moderation systems that hinder their ability to scale.

Enter the Mass Adoption of Generative AI

Since OpenAI released ChatGPT in November 2022, companies have been racing to integrate generative AI into their services.

Google, for example, began testing AI Overviews in mid-2023 and recently rolled them out nationwide. Now, when you perform a Google search, you’ll see an AI-generated overview of the topic with links to learn more.

Further, Apple announced a new partnership with OpenAI, which involves integrating ChatGPT into Apple experiences across iOS, iPadOS and macOS later this year.

And the list goes on; Meta, LinkedIn, Microsoft and many more have jumped on board.

[READ: Will Artificial Intelligence Steal My Job?]

The Problem with AI-Generated Personal Finance Information

The main problem with the swift adoption and integration of AI is that it’s not foolproof. While the technology is rapidly improving, AI models still generate inaccurate information, and that means you could be getting some bad advice about your finances.

For example, a wildly inaccurate Google AI Overview recently went viral. A person searched “cheese not sticking to pizza” and Google’s AI Overview suggested they add about 1/8 cup of non-toxic glue to the sauce to give it more tackiness.

Imagine if generative AI made a similar mistake concerning your finances, and it caused you to incur losses.

Further, while generative AI can perform calculations and provide advice, its lack of emotional intelligence, experience and context can lead to oversights.

For instance, in a recent comparison of homebuying advice from ChatGPT and a certified financial planner, ChatGPT provided decent general advice but didn’t consider important factors like potential changes to the person’s income each year and recent interest rate increases.

The human advisor also stood apart because he asked follow-up questions to uncover potential underestimations in spending and started by trying to figure out if the goal was realistic.

“One can quickly see that liability — or lack thereof — for AI-generated content can have major consequences for consumers, companies and the tech vendors they rely on to produce their services,” Pete Foley, CEO of AI governance software company ModelOp, said in an email.

He explained that enterprises are rushing to capitalize on the rewards of AI while simultaneously learning how to mitigate the risks, which is like building a plane while flying it.

Who’s Responsible for Damages That Result From AI?

These situations raise the question: Who’s responsible if an AI-powered feature leads to damages?

For example, if you follow the personal finance advice of a Google AI Overview and it causes you to lose money, could you sue Google and recover your losses? There’s no clear answer yet.

Arguments in favor of Section 230 protecting companies claim that generative AI outputs come from third-party sources, so the liability falls on the original creators of the information, according to the Progressive Policy Institute.

Conversely, arguments against it claim that generative AI plays an active role by organizing and editing third-party data, so it shouldn’t qualify for immunity under Section 230.

“A GenAI search tool that only shares one answer has to have a higher liability standard than a platform that shares 10 options. This is especially true if the GenAI system can’t easily point to the source of its information and answer,” Gamino-Cheong said.

He explained that “the AI system is essentially doing the selection of the single best answer, and that unexplainable selection is an editorial decision that typically hasn’t been clearly covered by Section 230.”

Sam Altman, CEO of OpenAI, testified during a Senate hearing in May 2023 that he doesn’t believe Section 230 provides an adequate regulatory framework for generative AI and that new solutions are needed.

[Related: Should You Let AI Manage Your Retirement Plan?]

What Should You Know About AI and Section 230?

No definitive decision has been made yet, but Congress has been contemplating the issue.

The Congressional Research Service reported that current case law suggests that liability will be determined on a case-by-case basis depending on how the AI products in question generate output and which aspect of the output a plaintiff claims is illegal. It also recommended various approaches Congress could take to amend the law.

However, creating a new legal infrastructure will be complex and will require the consideration of many factors.

“The general public should be aware that the debate over Section 230 and AI is not just a legal quibble,” Lars Nyman, chief marketing officer at CUDO Compute, said in an email.

He said that if we lean too heavily on regulation, we risk throttling the innovation that makes AI so promising. On the other hand, giving AI free rein without oversight could lead to unintended consequences that we’re ill-prepared to handle.

“The challenge is to craft legislation that fosters innovation while ensuring accountability,” Nyman said.

In the meantime, it’s unclear whether companies will be held liable. If AI-generated information or content causes you to suffer damages, you can sue the company and you might have a chance at compensation. In fact, your case could end up setting a precedent.

More from U.S. News

Stumped for Gifts? How AI Can Help

Can AI Pick Stocks? A Look at AI Investing

7 Ways to Invest in AI Smart Home Devices

What Section 230 Means for Your Money originally appeared on usnews.com
