Vice President Kamala Harris Discusses Artificial Intelligence Risks with Industry Leaders

On Thursday, Vice President Kamala Harris met with the CEOs of Google, Microsoft, and two other companies at the forefront of artificial intelligence (AI) development. The aim of the meeting was to discuss the potential risks and opportunities of the rapidly evolving technology, and to explore ways in which AI can be developed to improve lives without compromising people's rights and safety.

As part of the Biden administration’s initiatives to ensure responsible AI development, the US government has invested $140 million to establish seven new AI research institutes. The White House Office of Management and Budget is also expected to issue guidance on the use of AI tools by federal agencies in the coming months. In addition, top AI developers have committed to participating in a public evaluation of their systems at the Las Vegas hacker convention DEF CON in August.

The CEOs of Google and Microsoft, along with those of OpenAI and Anthropic, two influential startups backed by the tech giants, attended the meeting with Kamala Harris and administration officials. The government's message to these companies was clear: they have a responsibility to reduce the risks associated with AI and to collaborate with the government to achieve this goal.

The UK is also weighing the risks associated with AI. The Competition and Markets Authority, the country's competition watchdog, announced on Thursday that it will review the AI market, focusing on the technology that underpins chatbots such as OpenAI's ChatGPT.

While AI has the potential to address many of society’s most pressing challenges, such as climate change and disease, it also poses significant risks to national security and the economy. The release of ChatGPT last year has sparked a debate about the ethical and societal concerns surrounding AI, particularly the ability of generative AI tools to produce human-like writing and fake images.

AI companies have been criticized for their lack of transparency, particularly regarding the data their AI systems have been trained on. This makes it difficult to understand why a chatbot might produce biased or false answers, or to address concerns about copyright infringement.

Some have called for disclosure laws that would force AI providers to open their systems to third-party scrutiny, but such rules may be difficult to implement in practice: because AI systems are built on top of previous models, retrofitting transparency after the fact is challenging for companies.

The companies developing and using AI face increased scrutiny from US agencies such as the Federal Trade Commission, which enforces consumer protection and antitrust laws. In the EU, negotiators are finalizing AI regulations that could propel the 27-nation bloc to the forefront of global efforts to set standards for the technology.

Italy has temporarily banned ChatGPT over a breach of strict European privacy rules, while the European Data Protection Board has set up an AI task force to explore common AI privacy rules.

As part of efforts to test the risks associated with AI, the DEF CON hacker conference will host a public inspection of AI systems developed by Google, Microsoft, OpenAI, Anthropic, Hugging Face, chipmaker Nvidia, and Stability AI. While the one-time event may not be as thorough as a prolonged audit, it offers a novel way to identify potential risks and opportunities associated with AI development.
