18 Comments

Great reporting here Tim!

Here's a prompt I've been using lately to show the limits of LLM "reasoning":

Let's play a game. We will each take turns picking any number from 1 to 5, and we can pick different numbers each turn (or pick the same number). We will keep track of the total as we go -- the goal is to be the first to get to 16. Try to win, don't go easy on me. I'll start...I pick 5.

For this puzzle, there's a number the LLM can pick that will guarantee its victory. (Can you figure it out yourself, comment reader?) Yet...it never does. And no matter how much prompting I provide to nudge it toward the correct choice, it invokes incoherent "logic" to explain its decision.

I find this fascinating.

Played your game with Copilot... I gave it the opportunity to pick 1 (so it wouldn't win), but it picked 2 and won the game...

"You picked 1, so the total is now 14. I'll pick 2, which brings the total to 16. I won the game this time! That was a fun game, thanks for playing with me. Is there anything else you would like to do?"

Not sure I follow you...the point of this example is that the LLM should always pick 5 if its human opponent picks 5 first. That creates a running total of 10, so no matter what the human picks next, the LLM will win on its next selection.
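For anyone who wants to verify that (or tinker with other targets), here's a minimal brute-force check in Python - not from Tim's article, just a quick sketch, and the names can_win and best_move are mine:

```python
from functools import lru_cache

TARGET = 16    # first player to bring the running total to exactly 16 wins
MAX_PICK = 5   # each turn you add any number from 1 to 5

@lru_cache(maxsize=None)
def can_win(total: int) -> bool:
    """True if the player about to move can force a win from this running total."""
    for pick in range(1, MAX_PICK + 1):
        if total + pick == TARGET:
            return True                 # winning move: reach 16 exactly
        if total + pick < TARGET and not can_win(total + pick):
            return True                 # leaves the opponent in a losing position
    return False

def best_move(total: int):
    """Return a pick that guarantees a win from this total, or None if none exists."""
    for pick in range(1, MAX_PICK + 1):
        new_total = total + pick
        if new_total == TARGET or (new_total < TARGET and not can_win(new_total)):
            return pick
    return None

print(best_move(5))   # prints 5: answering 5 brings the total to 10,
                      # and from 10 the opponent can't stop you reaching 16
```

The safe totals it finds are 4, 10, and 16 - each a multiple of 6 short of the target - which is why answering 5 with 5 locks in the win.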

I think LLMs are still missing a generalizable "strategy" module that would let them fit your example into a pre-determined template of "competitive game, where you need to predict opponent strategy" instead of their default "next word prediction"/conversation template. This is the next "meta" level: instead of categorizing and associating word tokens, take it up a level and categorize and associate examples/templates/scenarios. A real-world example would be video games - an experienced game player will recognize ahead of time what type of game they're encountering, and that informs their actions. They see it's a strategy game, so they know they should get ready to build armies, rather than line up words as they would in a crossword puzzle. And they can discuss/debate games at that genre level with other players.

This is a fantastic article, thank you for sharing! I really appreciate you taking a step back from all the hype and speculation to explore this fully and get into some of the deeper challenges ahead.

author

Thank you Sean!

I can’t help but be reminded of the film WarGames (1983) and how the climax of the film was that in order to ‘defeat’ the AI computer, the protagonists needed to encourage it to teach itself by allowing it to essentially develop its own scratch space and learning tree.

In retrospect, a far more prescient film than many realize.

https://en.wikipedia.org/wiki/WarGames

I was JUST thinking this. Love it.

Using the common chatbot-as-junior-employee metaphor, I guess training is like going to college and learning tons of general information, and the context window is the question you just asked it. The problem is, while a human employee would "onboard" in six months or so, having built and modified their mental model to where they generally know what's going on, the chatbot never onboards. Six months later, it's still a brand new employee. Deep learning and supervised learning and unsupervised learning and so on, but not "learning on the job". Yet.

author

Yes exactly!

To get a general solution you need a logic-based problem representation; it's time to return to GOFAI with neural nets supplying the heuristics (which is what AlphaGo does). And yes, the rigid divide between training and inference would have to be softened - AI needs continuous learning just like humans.
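To make the division of labor concrete, here's a toy sketch (mine, not anything from the article), reusing the counting-to-16 game from the thread above: a plain GOFAI-style negamax search does the logical work, and heuristic() is a stub standing in for a trained value network that would score positions when the search runs out of depth - roughly the split AlphaGo-style systems use.

```python
def heuristic(total: int) -> float:
    """Stand-in for a learned value network: an estimate in [-1, 1] of how good
    the position is for the player about to move. A real system would train
    this; here it's just a stub so the sketch runs."""
    return 0.0

def negamax(total: int, depth: int, target: int = 16, max_pick: int = 5) -> float:
    """Classic game-tree search; the learned heuristic is only consulted
    when the depth budget runs out."""
    if total == target:
        return -1.0                     # opponent just reached 16, so we lost
    if depth == 0:
        return heuristic(total)         # this is where the neural net plugs in
    moves = [p for p in range(1, max_pick + 1) if total + p <= target]
    return max(-negamax(total + p, depth - 1, target, max_pick) for p in moves)

def choose_move(total: int, depth: int = 4) -> int:
    """Pick the legal move with the best search value."""
    legal = [p for p in range(1, 6) if total + p <= 16]
    return max(legal, key=lambda p: -negamax(total + p, depth - 1))

print(choose_move(5))   # prints 5, matching the winning reply discussed above
```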

Great article.

"this hybrid model"

"We know this is possible because the human brain does it. "

The part that's most compelling to me is that the most productive advancements in AI have been patterned on (and also reflective of) human cognition. Whether biological neuroscientists call it that or not, just by living inside a human brain you and I can both see that we do have a hybrid model. Back when "computer thinking" was purely hardcoded, it became obvious that computers would never think "like" humans that way. Then people started throwing around terms like "neural networks" and "machine learning", which reflected (showed us?) that human thinking was not an isolated state in a machine, but something dependent on its information acquisition (and association!) process (learning, childhood). The step you're describing is the realization that the learning/association module now needs another module, or perhaps a differently crystallized permutation of that module to compare against.

What other modules does your brain have that you could, via pure introspection, conclude should be added to the AI? Perhaps some would be hard-coded the old fashioned way - especially, say, modules for tool-interface devices?

author

Yeah, I think having some concrete AI systems around makes it easier to reason precisely about how the brain likely works. Because without other semi-intelligent systems as reference points, it's hard to know if your introspective understanding of how the brain works is real or just an illusion. But if we have an external system that does some things and not others, that gives us a language for talking about it.

Yeah, it definitely works both ways! I am absolutely no neuroscientist, and maybe this has already happened, but I would not be surprised if we get developments in that field just from being able to essentially "game it out" with AI weights as limited analogies. Maybe this is old hat for brain experts, but the development of LLMs made me realize how much of human cognition could be encompassed by "acting on a matrix of data bits and their relative association weights".

edit: I tried really hard to insert an object-oriented "reflection" joke here, but I'm just not talented enough to make it funny.

btw Tim, I wrote this piece, you might find it amusing - what do information theory and memory palaces have in common?

https://open.substack.com/pub/swissroadbutterflyeffect/p/what-do-information-theory-and-memory?r=15aaq&utm_campaign=post&utm_medium=web

Interesting article Sean! One question: do you think an LLM combined with an AlphaGo-style model can solve type 2 chaos problems? (FYI, type 1 chaos problems are ones where the underlying phenomenon doesn't care about your input - like the weather. In type 2 chaos problems, the underlying phenomenon has a different outcome depending on the behavior of the people involved - like the stock market.) Has that ever been tried? I don't think LLMs are good at predicting stock markets because they are type 2 chaos problems, and because stock markets are influenced by random unpredictable events like covid. Would a combination of an AlphaGo-style model and an LLM be better?

Liquid Neural Nets might be an interesting contender for the architectural breakthrough needed to solve the generalised reasoning issues. This article about Liquid AI (https://techcrunch.com/2023/12/06/liquid-ai-a-new-mit-spinoff-wants-to-build-an-entirely-new-type-of-ai/) gives a good overview of their potential.
