Artificial intelligence crosses ethical boundaries and does inappropriate things!

In recent months, the conversational AI developed by OpenAI has become an obsession for many Internet users. But just how advanced is this system, and how far can it go?

ChatGPT's development is so advanced that it increasingly gives the impression of being close to a quasi-self-aware system, one that would no longer need direct instructions in order to reason and act.

Experts have called for a pause on training ever more powerful systems so that preventative safety measures and regulatory safeguards can be put in place before something goes wrong with AI. In the meantime, however, users continue to turn to the OpenAI platform for uses that are not always appropriate, correct, or ethical.

The dark side of ChatGPT: everything that can be done, but perhaps shouldn't be.

While ChatGPT is a great business tool capable of dramatically increasing anyone's productivity, it can still be used for malicious purposes.

Its ability to develop games, extensions, and applications can be abused to write malicious or flawed code, exploiting vulnerabilities or producing components that make it easier for malware to steal data.

ChatGPT can also be complicit in email and social media scams, generating highly convincing, personalized messages designed to trick victims into handing over money or personal information.

It can also be exploited to generate offensive or simply false content. Insults, threats, fake news, and deepfakes are easier to produce than ever thanks to ChatGPT, and it is now very difficult to distinguish the fake from the real.

On a more “harmless” level, the AI can be used to complete tasks that each person should do on their own, such as homework, essays, or schoolwork, helping students finish their assignments in a fraction of the time. But this shortcut also encourages plagiarism and prevents genuine learning.

Ultimately, ChatGPT is a powerful tool, but its development, training, and use are anything but transparent, which makes it dangerous in the wrong hands. It is therefore essential to put safety measures and regulatory safeguards in place to ensure the ethical use of AI.