ChatGPT is surprising users with unexpected responses this week

This article originally appeared on Business Insider.

ChatGPT seems a little unhinged.

Some users wondered what the hell was going on with OpenAI’s chatbot after it started answering their questions with a lot of gibberish on Tuesday.

Sean McGuire, a senior partner at global architecture firm Gensler, shared screenshots on X of ChatGPT replying to him in nonsensical “Spanglish.”

“Sometimes, in the creative process of keeping woven Spanglish vibrant, the gears of the thecla might get a little wacky. Thank you so much for your understanding, and I’ll make sure we’re as crystal clear as water from now on,” ChatGPT wrote.

Then it descended into more nonsense: “Would you be happy to have your clicklies turn your teeth onto a mental ocean-type jelly?” The chatbot followed up with references to jazz pianist Bill Evans before repeating the phrase “Enjoy your listening!” nonstop.

Another user asked ChatGPT about the differences between mattresses in various Asian countries. The chatbot simply couldn’t do it.

One user who shared their interaction with ChatGPT on Reddit said that GPT-4 “just went into full hallucination mode,” something they said hadn’t happened at this severity since the “early days of GPT-3.”

OpenAI acknowledged the problem. On Tuesday, its status dashboard first said it was “investigating reports of unexpected responses from ChatGPT.”

It was later updated to indicate that the problem had been identified and was being monitored, before a further update on Wednesday afternoon indicated that all systems were functioning normally.

It’s an awkward moment for the company, which has been considered a leader in the artificial intelligence revolution and has received a multibillion-dollar investment from Microsoft. It also gets companies to pay to use the most advanced versions of its artificial intelligence.

OpenAI did not immediately respond to a request for comment on ChatGPT’s glitches.

That hasn’t stopped people from speculating about the cause of the problem.

Gary Marcus, a New York University professor and artificial intelligence expert, ran a poll on X asking users what they thought might be the cause. Some thought OpenAI had been hacked, while others believed hardware issues might be to blame.

Most respondents guessed “corrupt weights.” Weights are a key component of AI models: they are the learned numerical parameters that determine the predictions tools like ChatGPT serve up to users.
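For readers wondering what “weights” actually are, here is a deliberately simplified Python sketch. It is not OpenAI’s code, and the words and numbers are invented purely for illustration; it only shows how a model’s numeric weights pick an output, and how corrupting them can turn a sensible answer into nonsense.

```python
# Purely illustrative toy "model": weights score which word should come next.
# This is NOT how ChatGPT works internally; it just illustrates what "weights" are.
import random

VOCAB = ["clear", "water", "jelly", "ocean", "listening"]

# Weights: learned numbers that assign each candidate word a score.
weights = {"clear": 2.0, "water": 1.5, "jelly": -1.0, "ocean": -0.5, "listening": 0.2}

def predict(w):
    # Pick the highest-scoring word, the way a model favours its most likely token.
    return max(VOCAB, key=lambda word: w[word])

print("healthy weights ->", predict(weights))      # picks "clear"

# Add random noise to every weight, mimicking the "corrupt weights" guess.
random.seed(0)
corrupted = {word: value + random.uniform(-5, 5) for word, value in weights.items()}
print("corrupted weights ->", predict(corrupted))  # likely picks a nonsensical word
```

In a real model there are billions of such numbers rather than five, but the principle is the same: if the stored values are damaged, the outputs can become incoherent even though the rest of the system is running normally.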

Could more transparency from OpenAI about how its model works and the data it is trained on have helped? In a Substack post, Marcus suggested the episode is a reminder that the need for less opaque technologies is “critical.”


