Ask me anything!
A little over two months after launch, Understanding AI has more than 5,000 readers. With all these new readers, I’d love to get a better understanding of who is reading the newsletter and what you’d like me to write about. So I’d like to ask a favor: ask me a question!
If you are reading this as an email, click the title at the top of the email to go to the Substack website. Then scroll down to leave a comment at the bottom of the post. I’ll answer a few of your questions in a post next week.
Thanks for being part of Understanding AI!
Here's what I don't understand about the current AI discourse: training-set/feedback/reinforcement learning has proved dominant over rule-based AI. Yet when it comes to assuming AIs will be "super-intelligent" *in the real world* (not chess, Go, or protein folding), I haven't heard any of the dystopians explain what feedback mechanism could even possibly create a threatening AI. Instead, to my lay ears, it sounds like "well, computers are logical and fast, ergo they must be capable of super-intelligence," which completely skirts the issue and seems to rely more on rule-based thinking. Have you heard folks discuss feedback mechanisms for super-intelligence? And might those discussions help clarify how to build AIs safely?
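For readers wondering what a "feedback mechanism" means concretely here: in reinforcement learning, behavior is shaped by a scalar reward signal rather than hand-written rules. The toy sketch below is purely illustrative, with an invented action set and reward numbers standing in for human feedback; it runs a minimal REINFORCE (policy-gradient) loop in which reward alone pushes a policy toward the highest-scoring action:

```python
# Toy "feedback mechanism": a REINFORCE (policy-gradient) bandit loop.
# The action set, reward numbers, and learning rate are all invented
# for illustration; the point is that a reward signal, not hand-written
# rules, is what shapes the learned behavior.
import numpy as np

rng = np.random.default_rng(0)
n_actions = 4
logits = np.zeros(n_actions)              # policy parameters
rewards = np.array([0.1, 0.2, 0.9, 0.3])  # stand-in for human feedback
lr = 0.5

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for _ in range(500):
    probs = softmax(logits)
    a = rng.choice(n_actions, p=probs)  # sample a behavior
    r = rewards[a]                      # feedback: how good was it?
    # REINFORCE update: nudge logits along r * grad(log pi(a)),
    # raising the probability of actions in proportion to their reward
    grad = -probs
    grad[a] += 1.0
    logits += lr * r * grad

print(softmax(logits).round(3))  # mass concentrates on the best action
```

After 500 updates the policy puts nearly all its probability on the highest-reward action. The commenter's question, restated in these terms, is what real-world signal could play the role of `rewards` for open-ended intelligence.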
How will LLMs ever get over the hallucination problem? Or will they? Is there a solution that doesn't involve lots and lots of human-executed hard-coding?