Discussion about this post

Aaron Strauss:

Here's what I don't understand about the current AI discourse: training-set/feedback/reinforcement learning has proved dominant over rule-based AI. Yet, when it comes to assuming AIs will be "super-intelligent" *in the real world* (not chess, Go, protein-folding), I haven't heard any of the dystopians explain what feedback mechanism even has the possibility of creating a threatening AI. Instead, to my lay ears, it sounds like "well, computers are logical and fast, ergo they must be capable of super-intelligence" -- but that completely skirts the issue and seems to rely more on rule-based thinking. Have you heard folks discuss feedback mechanisms for super-intelligence? And might those discussions help clarify how to build AIs safely?

Jon Fischer:

How will LLMs ever get over the hallucination problem? Or will they? Is there a solution that doesn't involve lots and lots of human-executed hard coding?

35 more comments...
