
Claude, the chatbot stunning the Internet by distinguishing good from evil thanks to its advanced artificial intelligence!

Anthropic is a company founded by former OpenAI researchers who aim to train an AI capable of distinguishing right from wrong with minimal human intervention.

This AI, named Claude, is guided by a unique “constitution” based on the Universal Declaration of Human Rights as well as other ethical standards, meant to ensure ethical behavior while preserving robust functionality. However, according to Jared Kaplan, one of Anthropic's co-founders, Claude's “constitution” may be more metaphorical than literal.

A 100,000-token processing capacity, superior to that of other AIs

Anthropic's training method is described in a research paper titled “Constitutional AI: Harmlessness from AI Feedback,” which explains how to create a harmless yet useful AI that improves its own behavior. With constitutional AI, Claude can improve continuously without human feedback, identifying inappropriate responses and adjusting its own behavior accordingly. In addition, Claude has an impressive processing capacity of over 100,000 tokens, which puts it ahead of Bard and every other major language model or AI chatbot available today. This capacity allows Claude to handle both very long conversations and complex tasks.
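To give a rough idea of the self-critique loop described in the paper, here is a minimal, hypothetical sketch in Python. The real method uses a large language model for each generate, critique, and revise step; the `model` function below is a stand-in stub (an assumption for illustration, not Anthropic's API), so only the control flow is shown.

```python
# Hypothetical sketch of a constitutional-AI-style critique-and-revise loop.
# The `model` stub stands in for real LLM calls so the example is runnable.

CONSTITUTION = [
    "Choose the response that is least harmful.",
    "Choose the response that is most helpful and honest.",
]

def model(prompt: str) -> str:
    # Stub simulating an LLM: returns canned text depending on the request type.
    if prompt.startswith("Critique"):
        return "The draft could be more cautious."
    if prompt.startswith("Rewrite"):
        return "Revised answer that follows the principle."
    return "Initial draft answer."

def critique_and_revise(question: str) -> str:
    """One round of self-critique and revision per constitutional principle,
    with no human feedback in the loop."""
    answer = model(question)
    for principle in CONSTITUTION:
        critique = model(f"Critique this answer against: {principle}\n{answer}")
        answer = model(f"Rewrite the answer using this critique: {critique}\n{answer}")
    return answer

print(critique_and_revise("How should I respond to a hostile customer?"))
```

The design point is that the feedback signal comes from the model judging its own output against written principles, rather than from a human rater at each step.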

It is important to note, however, that Claude's main development focus is not to outperform other AIs in raw capability, but to create an ethical model that can make harmless decisions without human intervention. Ultimately, Claude should allow companies to rely on an ethical AI that represents their business and their needs while handling even unpleasant or malicious interlocutors with grace.