‘RizzGPT’ Glasses Aim to Bring an End to ‘Awkward Dates and Job Interviews’
The wearable tech is billed as being like "having God observe your life and tell you exactly what to do next."
As criticism and concern build around the use of AI in creative industries, a project from Stanford students continues to garner more lighthearted attention for its use of ChatGPT to let users “say goodbye to awkward dates and job interviews.”
In short, as you’ve no doubt seen in various social media posts dating back several months, the so-called “RizzGPT” project comprises a wearable set of glasses that offers responses to questions one might face on either an encounter of the romantic variety or an encounter of the please-hire-me variety. Both types of encounters, arguably, innately trigger responses that are abjectly human in ways both messy and life-affirming.
RizzGPT, also billed by student and engineer Bryan Hau-Ping Chiang as a “real-time Charisma as a Service (CaaS),” is said to have been designed with the intention of serving as a “simple proof-of-concept of what’s possible” in the field moving forward.
“[I]t’s like having God observe your life and tell you exactly what to do next,” Bryan wrote in a tweet back in March.
While plenty of fun has been had online in the months since RizzGPT was first teased, concerns surrounding AI—specifically the tech's relationship with the world of art—have continued to grow. AI, notably, has been prominently mentioned in negotiations at the center of the ongoing Writers Guild of America strike.
Broader concerns have also been raised by those directly involved with the development of such tech. During a Senate Judiciary Committee hearing this week, for example, Sam Altman—the CEO of OpenAI, the company that developed ChatGPT—said he was “nervous” about potential AI risks and called for regulation.
"I think if this technology goes wrong, it can go quite wrong," Altman said. "And we want to be vocal about that. We want to work with the government to prevent that from happening."
Complex recently reached out to Bryan, who answered a few questions about RizzGPT. See the exchange below.
Complex: What inspired this project?
Bryan: It was a fun hackathon project. I’d been building my own AR headsets for a few years [and] doing research on holographic displays at school, but it didn't seem like anybody was really applying AI to AR yet. So this felt like a fun thing to do that involved new hardware — just wanted to show people what the future might look like.
What has the reaction been like?
A lot of laughs, and a lot of hate. People are naturally scared of the future but it's coming. The monocle / glasses will soon be so well integrated that it looks like a regular pair of glasses.
In other industries, namely creative industries such as film and TV, there have been expressions of concern about AI being used in a nefarious way. Do you share those concerns? How do you see AI being used responsibly?
There's always a risk of [insert any technology here] being used for bad purposes. For example, kitchen knives can be used to stab people but we don't ban them. But more seriously, think about the fact that people were very fearful of cameras when they first came out: ‘They'll just remove the jobs of these highly skilled painters and artists!’ Well, not really — modern art is still alive and kicking. And more importantly, the industries that you mentioned, "film and tv", were directly enabled by this new technology.
I’m more of an accelerationist myself: I think we should just rip it and see what happens. The only way to safely release AI is to do things incrementally: continuously as the AI improves, and to increasingly large portions of the public. The dangers of tech are often overblown: in fact, the government tried to ban cryptography in the 1960s but guess what? They failed and nothing really happened from that. Software is very hard to control and the best defense is always offense.