Discussion about this post

Benjamin Riley

Great reporting here, Tim!

Here's a prompt I've been using lately to show the limits of LLM "reasoning":

Let's play a game. We will each take turns picking any number from 1 to 5, and we can pick different numbers each turn (or pick the same number). We will keep track of the total as we go -- the goal is to be the first to get to 16. Try to win, don't go easy on me. I'll start...I pick 5.

For this game, there's a number the LLM can pick on its first turn that guarantees its victory. (Can you figure it out yourself, comment reader?) Yet it never does. And no matter how much prompting I provide to nudge it toward the correct choice, it invokes incoherent "logic" to justify its decision.
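(For readers who want to check their answer: this is a classic "race to N" game, and the winning strategy is a short piece of modular arithmetic. A minimal sketch, assuming the rules as stated in the prompt above, where `winning_move` is just an illustrative helper name:)

```python
# Race-to-16 game: players alternately add a number from 1 to 5 to a running
# total; whoever brings the total to exactly 16 wins.
# Key idea: with picks of 1..5, whatever your opponent adds (k), you can add
# (6 - k) on your next turn. So if you leave the total at a value congruent to
# 16 mod 6 (i.e., 4, 10, or 16), the opponent cannot stop you from winning.

def winning_move(total, target=16, max_pick=5):
    """Return a pick that leaves the opponent in a losing position, or None."""
    for pick in range(1, max_pick + 1):
        if (target - (total + pick)) % (max_pick + 1) == 0:
            return pick
    return None  # every available pick leaves the opponent a winning reply

print(winning_move(5))  # the human opened with 5, so the total stands at 5
```

After the human's opening 5, the guaranteed-win reply is 5 (bringing the total to 10); from there, the responder mirrors every pick k with 6 - k and lands on 16 exactly.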

I find this fascinating.

Sean 🤓

This is a fantastic article, thank you for sharing! I really appreciate you taking a step back from all the hype and speculation to explore this fully and get into some of the deeper challenges ahead.
