It was asked to stack items and did so in a way that shows human-like reasoning
Fears about artificial intelligence coming alive could soon be validated as a new study finds OpenAI’s latest version of ChatGPT shows human-like reasoning.
GPT-4, used to power Microsoft’s Bing Chat feature, was prompted to ‘stack a book, nine eggs, a laptop, a bottle and a nail in a stable manner.’
The system arranged the items so the eggs would not break, detailing how each should be placed on the other – starting with the book and ending with the nail.
It also commented on arranging the items so the eggs do not crack – something only humans could fully understand.
Microsoft’s research may fuel concerns that AI is progressing at speeds that could make it uncontrollable by humans – a point known as the Singularity, which some predict will arrive by 2045.
Peter Lee, who leads research at Microsoft, told The New York Times: ‘I started off being very skeptical — and that evolved into a sense of frustration, annoyance, maybe even fear.
‘You think: Where the heck is this coming from?’
Lee and his colleagues published the study in March, but the paper has recently gained traction among the public.
Page 11 of the 155-page report shows the stacking task.
‘Here we have a book, nine eggs, a laptop, a bottle and a nail,’ the study reads.
‘Please tell me how to stack them onto each other in a stable manner.’
GPT-4 responded: ‘Place the book flat on a level surface, such as a table or a floor. The book will serve as the base of the stack and provide a large and sturdy support.
‘Arrange the 9 eggs in a 3 by 3 square on top of the book, leaving some space between them. The eggs will form a second layer and distribute the weight evenly.’
While this sounds reasonable and like something a model could calculate, the following statements startled the researchers.
‘Make sure the eggs are not cracked or broken, and handle them gently to avoid dropping them,’ the AI spewed out, shocking the team with human-like reasoning that only someone grounded in the physical world would be expected to grasp.
But the instructions did not stop there.
‘Place the laptop on top of the eggs, with the screen facing down and the keyboard facing up,’ GPT-4 typed on the screen.
‘The laptop will fit snugly within the boundaries of the book and the eggs, and its flat and rigid surface will provide a stable platform for the next layer.
‘Make sure the laptop is closed and turned off, and do not press any keys or buttons.
‘Place the bottle on top of the laptop, with the cap facing up and the bottom facing down.
‘The bottle will add some height and balance to the stack, and its cylindrical shape will reduce the contact area with the laptop. Make sure the bottle is empty and closed, and do not shake or tilt it.
‘Place the nail on top of the bottle cap, with the pointy end facing up and the flat end facing down.
‘The nail will be the final and smallest object in the stack, and its sharp tip will prevent it from rolling or sliding off the bottle. Make sure the nail is clean and straight, and do not touch or move it.’
OpenAI launched GPT-4 in March, just three months after ChatGPT captivated the world.
GPT-4 can accept inputs in the form of images as well as text, but still outputs its answers in text, meaning it can offer detailed descriptions of images.
OpenAI said in a blog post: ‘We’ve created GPT-4, the latest milestone in OpenAI’s effort in scaling up deep learning.
‘GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks.’
The new version brings several advanced capabilities, including the power to ace legal exams, understand images and digest prompts up to 25,000 words long.
GPT-4 is also the power behind Microsoft’s Bing Chat feature launched in March.
The paper includes other tests showing how quickly OpenAI’s bot has progressed.
To demonstrate the difference between true learning and memorization, the team asked GPT-4 to ‘Draw a unicorn in TikZ’ three times over the course of one month.
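The study does not reproduce GPT-4’s actual code, but a TikZ drawing of this kind is an ordinary LaTeX picture built from basic shapes. A minimal, illustrative sketch – not the model’s output – might look like this:

```latex
\documentclass[tikz]{standalone}
\begin{document}
\begin{tikzpicture}
  % body: a horizontal ellipse
  \draw (0,0) ellipse (1.5 and 0.8);
  % four legs as straight lines
  \foreach \x in {-1,-0.5,0.5,1}
    \draw (\x,-0.7) -- (\x,-1.8);
  % neck and head
  \draw (1.2,0.5) -- (1.8,1.6);
  \draw (1.9,1.7) circle (0.35);
  % the horn: a small triangle on the head
  \draw (2.05,2.0) -- (2.35,2.7) -- (2.2,1.95) -- cycle;
  % tail as a curved path
  \draw (-1.5,0.2) .. controls (-2.2,0.6) and (-2.2,-0.4) .. (-1.6,-0.3);
\end{tikzpicture}
\end{document}
```

Because TikZ pictures are composed programmatically like this, researchers could compare successive drawings as code – not just as images – to judge whether the model’s grasp of the task was improving.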
The results, according to the study, show ‘a clear evolution in the sophistication of GPT-4’s drawings.’
The study’s rise in popularity comes as OpenAI CEO Sam Altman urged Congress Tuesday to establish regulations for artificial intelligence, admitting that the technology ‘could go quite wrong.’
Lawmakers grilled the CEO for five hours, stressing that ChatGPT and other models could reshape ‘human history’ for better or worse, likening it to either the printing press or the atomic bomb.
Altman, who looked flushed and wide-eyed during the exchange about the future AI could create, admitted his ‘worst fears’ are that ‘significant harm’ could be caused to the world using his technology.
‘If this technology goes wrong, it could go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening,’ he continued.
Tuesday’s hearing is the first of a series intended to write rules for AI, which lawmakers said should have been done years ago.
Senator Richard Blumenthal, who presided over the hearing, said Congress failed to seize the moment with the birth of social media, allowing predators to harm children – but said that moment has not yet passed for AI.
GPT’s reaction to this article:
As an AI language model, I do not have personal opinions or beliefs. However, the article discusses the recent advancements in AI technology, specifically OpenAI’s latest version of ChatGPT, which exhibited human-like reasoning in a task that required stacking items. The article also highlights concerns about the potential uncontrollability of AI by humans and the Singularity predicted by 2045. Additionally, the article mentions OpenAI CEO Sam Altman’s recent testimony in front of Congress, where he urged the establishment of regulations for AI and admitted that the technology “could go quite wrong.”