All this AI has a byproduct: regulatory and legal scrutiny

The publicity surrounding ChatGPT and AI-powered models has not gone unnoticed among lawmakers. 

Industry watchers expect AI regulation and government scrutiny to increase throughout the year, especially as enterprise adoption and development of AI-powered models continue to spread. 

Rep. Ted Lieu, D-Calif., used ChatGPT to write legislation calling for Congress to increase its focus on AI. Lieu’s proposal, touted last week as the first-ever piece of legislation written by AI, called for the development and deployment of AI that is safe and ethical and that respects the rights and privacy of all Americans.

Intellectual property disputes over AI-generated content have furthered the push for oversight. A class-action lawsuit was filed in San Francisco against GitHub, Microsoft and OpenAI in November. 

The lawsuit, filed on behalf of “possibly millions” of GitHub users, claims that by training their AI systems on public GitHub repositories, the defendants violated the legal rights of creators who posted code or other work there, according to Matthew Butterick, co-counsel in the case. 

The relationship between regulators and big tech companies is often adversarial, playing out in high-profile hearings and antitrust bills. Lawmakers must walk a tightrope between encouraging innovation and protecting the public interest. 

“Regulations follow; they don’t lead,” said Rajesh Kandaswamy, distinguished VP analyst at Gartner. “Now, clearly, with ChatGPT becoming so popular, it definitely adds some sense of urgency in terms of people wanting to regulate AI.”

Regulations will mainly focus on how businesses use AI for decision-making, according to Kandaswamy. 

“It’s important when decisions are made that there is no bias related to race, gender or other things,” Kandaswamy said. “But as AI is making a decision, how can we ensure that there is no bias? It’s important for something to work, but it’s also important to be able to explain how we came to a decision, and that’s a challenge in AI.”
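To make the bias check Kandaswamy describes concrete, a minimal sketch below computes a demographic parity gap: the spread in favorable-decision rates across groups. The function name, data and interpretation are hypothetical illustrations, not part of any framework cited here.

```python
from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Return the largest difference in favorable-decision rates
    between any two groups (0.0 means perfectly equal rates),
    along with the per-group rates.

    decisions: list of 0/1 model outcomes (1 = favorable decision)
    groups:    list of group labels, aligned with decisions
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for outcome, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical loan-approval outcomes for two applicant groups
gap, rates = demographic_parity_gap(
    decisions=[1, 0, 1, 1, 0, 1, 0, 0],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
print(rates)  # {'A': 0.75, 'B': 0.25}
print(gap)    # 0.5 -- a large gap that would warrant review
```

A metric like this can flag unequal outcomes, but it cannot by itself explain how a model reached an individual decision, which is the harder problem Kandaswamy points to.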

ChatGPT itself illustrates the problem: the model has limitations and occasionally generates biased or false assertions. If its output feeds into decision-making, those errors carry over into the decisions themselves.  

For businesses, the first step to preventing ethical issues is implementing a responsible AI framework: a set of human values-based principles that guide the development of trustworthy AI applications, according to Bill Wong, principal research director at Info-Tech Research Group. 

When developing a framework, Wong suggested businesses establish these principles:

  • Inclusiveness and respect for data privacy
  • Fairness and objectivity
  • Transparency and explainability (how decisions are made)
  • Safety and security
  • Accountability
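One way to operationalize such principles is to turn them into an explicit pre-deployment review gate. The sketch below is a hypothetical illustration, not an implementation of Wong’s framework; the principle names and review questions are assumptions mirroring the list above.

```python
from dataclasses import dataclass, field

# Review questions derived from the principles listed above
# (hypothetical phrasing, for illustration only).
PRINCIPLES = {
    "privacy": "Is personal data collected and used with consent?",
    "fairness": "Have outcomes been tested for bias across groups?",
    "explainability": "Can we explain how individual decisions are made?",
    "safety": "Are failure modes and misuse scenarios mitigated?",
    "accountability": "Is there a named owner for this application?",
}

@dataclass
class DeploymentReview:
    answers: dict = field(default_factory=dict)  # principle -> bool

    def sign_off(self, principle: str, satisfied: bool) -> None:
        self.answers[principle] = satisfied

    def ready_to_ship(self) -> bool:
        # Every principle must be explicitly reviewed and satisfied.
        return all(self.answers.get(p, False) for p in PRINCIPLES)

review = DeploymentReview()
for principle in PRINCIPLES:
    review.sign_off(principle, satisfied=True)
print(review.ready_to_ship())  # True only once every box is checked
```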

Many organizations already have similar frameworks and guardrails in place. Microsoft, for example, added guardrails to OpenAI’s technologies when it integrated them into its Azure OpenAI service.
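Guardrails of this kind typically sit between the user and the model, screening both prompts and completions before anything is returned. The sketch below shows that generic pattern only; the `moderate` function and its blocklist are hypothetical stand-ins for a real content-safety service, not Azure OpenAI’s actual API.

```python
# Hypothetical stand-in for a content-safety classifier; a real
# deployment would call a moderation service instead.
BLOCKED_TOPICS = {"violence", "self-harm", "hate"}

def moderate(text: str) -> bool:
    """Return True if the text is considered safe to pass along."""
    return not any(topic in text.lower() for topic in BLOCKED_TOPICS)

def guarded_completion(prompt: str, generate) -> str:
    """Wrap a text-generation callable with input and output filters."""
    if not moderate(prompt):
        return "Your request was blocked by the input filter."
    completion = generate(prompt)
    if not moderate(completion):
        return "The response was withheld by the output filter."
    return completion

# Usage with a dummy generator standing in for the model call.
print(guarded_completion("Tell me a joke", lambda p: "Why did..."))
```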

“Whether legislation is adopted to govern AI applications or not, it is good and important to ask questions regarding the deployment of AI-based applications,” Wong said in an email. “At the very least, companies need to prepare themselves to answer questions regarding the guiding principles they used to build their AI application.”


