Former OpenAI employees have founded their own startup, Anthropic, to develop a more responsible approach to artificial intelligence (AI). With Claude, a chatbot built according to their “constitutional” method, they have secured funding from Google and drawn the attention of US President Joe Biden. This approach puts ethical and moral values front and center by writing them into an explicit “constitution”.
Explicit ethical principles for a more responsible AI
Anthropic: a more transparent, yet effective approach
Anthropic does not claim that its “constitution” is the final word on responsible AI, but it does believe it offers a transparent and explicit starting point for the AI community. Claude does not consult each principle every time it gives an answer; rather, the principles were applied many times during its training, which allows it to make choices better suited to specific situations. While Anthropic continues to explore ways to produce a more democratic constitution for Claude, it also plans to offer customizable constitutions for specific use cases.
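To make the idea more concrete, here is a minimal sketch of a critique-and-revise loop in the spirit of constitutional training. It is an assumption-laden illustration, not Anthropic's actual code or API: the `generate` function and the example principles are hypothetical placeholders standing in for a real language model and a real constitution.

```python
# Illustrative sketch only: a toy critique-and-revise loop in the spirit of
# "constitutional" training. The names below (generate, CONSTITUTION) are
# hypothetical placeholders, not Anthropic's actual code or API.

CONSTITUTION = [
    "Choose the response that is least likely to be harmful or offensive.",
    "Choose the response that is most honest about its own uncertainty.",
]

def generate(prompt: str) -> str:
    """Stand-in for a language model call; replace with a real model client."""
    return f"[model output for: {prompt[:60]}...]"

def constitutional_revision(user_prompt: str) -> str:
    """Draft an answer, then critique and revise it against each principle.
    The revised answers would serve as training data; at inference time the
    deployed model no longer consults the principles directly."""
    answer = generate(user_prompt)
    for principle in CONSTITUTION:
        critique = generate(
            f"Critique this answer against the principle: {principle}\n\n{answer}"
        )
        answer = generate(
            f"Rewrite the answer to address this critique:\n{critique}\n\n{answer}"
        )
    return answer

if __name__ == "__main__":
    print(constitutional_revision("Explain how to stay safe online."))
```

The key point the sketch captures is the one made above: the principles shape the model during training, so the finished chatbot does not need to re-read its constitution for every reply.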
Anthropic: an ethical and responsible startup that's getting attention
Although much less well known than its competitors, Anthropic is starting to stand out in the industry. It received $300 million in funding from Google, and the White House invited it to a meeting on ethics and safety in AI. Anthropic has also announced a Claude integration with Slack. With its ethical and responsible approach, the company is confident it will attract growing attention from policymakers, who are increasingly aware of the importance of ethical values in AI.