George boom boom chuvalo, 12/26/2023

OpenAI CEO Sam Altman last month declared to an audience in India: "I grew up implicitly thinking that intelligence was this, like, really special human thing and kind of somewhat magical. And I now think that it's sort of a fundamental property of matter."

Microsoft, which put $10 billion into OpenAI in January, has been conducting its own experiments on GPT-4. A team led by Sebastien Bubeck, senior principal research manager in the software giant's machine learning foundations group, concluded that its "skills clearly demonstrate that GPT-4 can manipulate complex concepts, which is a core aspect of reasoning."

But scientists have been thinking about thinking a lot longer than Altman and Bubeck. In 1960, American psychologists George Miller and Jerome Bruner founded the Harvard Center for Cognitive Studies, providing as good a starting point as any for the birth of the discipline, although certain strands go back to the 1940s.

Those who have inherited this scientific legacy are critical of the grandiose claims made by economists and computer scientists about large language models and generative AI.

Dr Andrea Martin, Max Planck research group leader for language and computation in neural systems, said AGI was a "red herring."

"My problem is with the notion of general intelligence in and of itself. It's mainly predictive: one test is largely predictive of how you score on another test. These behaviors or measures may be correlated with some essentialist traits, but we have very little evidence for that," she told The Register.

Martin is also dismissive of using the Turing Test – proposed by Alan Turing, who played a founding role in computer science, AI, and cognitive science – as a bar for AI to demonstrate human-like thinking or intelligence. The test sets out to assess whether a machine can fool people into thinking that it is a human through a natural language question-and-answer session. If a human evaluator cannot reliably tell the unseen machine from an unseen human, via a text interface, then the machine has passed.

Both ChatGPT and Google's AI have passed the test, but to use this as evidence of thinking computers is "just a terrible misreading of Turing," Martin said. "His intention there was always an engineering or computer science concept rather than a concept in cognitive science or psychology."

New York University psychology and neural science emeritus professor Gary Marcus has also criticized the test as a means of assessing machine intelligence or cognition.

Another problem with the LLM approach is that it only captures aspects of language that are statistically driven, rather than trying to understand the structure of language, or its capacity to capture knowledge.

Arguing that human-like performance in LLMs is not enough to establish that they are thinking like humans, Martin said: "The idea that correlation is sufficient, that it gives you some kind of meaningful causal structure, is not true."

Claiming that LLMs are intelligent or can reason also runs into the challenge of transparency in the methods employed in their development. Despite its name, OpenAI hasn't been open with how it has used training data or human feedback to develop some of its models.

"The models are getting a lot of feedback about what the parameter weights are for pleasing responses that get marked as good. And I don't want to say that doesn't belong in science, but I just think it's, definitionally, a different goal," Martin said. "In the '90s and Noughties, that would not have been allowed at cognitive science conferences."

Nonetheless, large language models can be valuable, even if their value is overstated by their proponents, she said. "The disadvantage is that they can gloss over a lot of important findings… in the philosophy of cognitive science, we can't give that up and we can't get away from it."

Not everyone in cognitive science agrees, though.
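To make the "statistically driven" point concrete, here is a toy sketch (nothing like a real LLM, and not anything Martin describes): a bigram model that predicts the next word purely from co-occurrence counts in its training text, with no representation of grammar or meaning at all.

```python
from collections import Counter, defaultdict

# Toy corpus: the model will see only which words follow which,
# never syntax trees, word meanings, or world knowledge.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the statistically most frequent successor of `word`."""
    return following[word].most_common(1)[0][0]

print(predict("sat"))  # "on" — its only observed successor
print(predict("on"))   # "the"
```

Everything the model "knows" is frequency: it continues "sat" with "on" because that pairing occurred in the data, not because it understands location or grammar — which is exactly the gap critics point to when human-like output is offered as evidence of human-like thought.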