Here's what I don't understand about the current AI discourse: training-set/feedback/reinforcement learning has proved dominant over rule-based AI. Yet when it comes to assuming AIs will be "super-intelligent" *in the real world* (not chess, Go, or protein folding), I haven't heard any of the dystopians explain what feedback mechanism even has the possibility of creating a threatening AI. Instead, to my lay ears, it sounds like "well, computers are logical and fast, ergo they must be capable of super-intelligence"--but that completely skirts the issue and seems to rely more on rule-based thinking. Have you heard folks discuss feedback mechanisms for super-intelligence? And might those discussions help clarify how to build AIs safely?
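To pin down what "feedback mechanism" means here: in reinforcement learning, the agent only gets better along whatever axis the reward signal measures. A minimal sketch, assuming a toy bandit-style setup (every name and number below is an illustrative stand-in, not from the comment):

```python
# Minimal sketch of an RL feedback loop: the agent improves only at
# whatever the reward function measures. For chess or Go the reward is a
# crisp, cheap win/loss signal; for "the real world" there is no
# comparably crisp signal -- which is the commenter's point.
import random

N_STATES, N_ACTIONS = 5, 2
q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]  # learned value table

def reward(state: int, action: int) -> int:
    # The feedback signal. Everything the agent "learns" flows from here.
    return 1 if (state + action) % 2 == 0 else 0

alpha, epsilon = 0.1, 0.2  # learning rate, exploration rate
state = 0
for step in range(10_000):
    if random.random() < epsilon:
        action = random.randrange(N_ACTIONS)                       # explore
    else:
        action = max(range(N_ACTIONS), key=lambda a: q[state][a])  # exploit
    r = reward(state, action)
    q[state][action] += alpha * (r - q[state][action])  # bandit-style update
    state = (state + action + 1) % N_STATES

print(q)
```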

How will LLMs ever get over the hallucination problem? Or will they? Is there a solution that doesn't involve lots and lots of human-executed hard coding?

Which workflows that exist today can be mostly automated by approximately the existing technology, hallucinations and all? Rank the ideas by "contribution to GDP."

Given that human consciousness arises from being physically present and vulnerable in a natural environment, thus giving rise to emotions and desires (pain, loss, excitement, fear, joy, love, etc.), and that our whole mental life extends from those experiences, do we really need to be worried about AI when it can never achieve true consciousness without them? Chris C.

I know you're not an economist, but what is your reaction to this tweet (which, as an economist, I think is correct, and you seem to be on the side of the economists):

https://twitter.com/adamdangelo/status/1659965893210931201

"Economists seem to consistently be the most dismissive of AI existential risk concerns, out of all groups of people who think seriously about the future. Why is this and what can we learn from it?"

What is it we are missing, or conversely what is it we understand that others don't?

How should schools address AI applications, like ChatGPT? What guidance would you give secondary school teachers in terms of what adjustments to policies and procedures they should employ?

What is the best way to determine who the niche players are in the AI-space? For example, some established audio-processing companies are clearly experimenting with AI, but I expect this technology to be disruptive to the professional audio/video community. Who else is doing work in this area besides the obvious big players?

I am very concerned about the impacts of AI on jobs, so I find myself reading a lot of articles that attempt to predict whether AI will basically obliterate the workforce or whether it will somehow create more jobs. I read one recently that said something like “AI is already creating so many jobs, look at all these AI-related jobs on Indeed!” and it proceeded to describe these jobs, which were almost all extremely advanced engineering jobs for people with specializations in machine learning.

I guess my question is...are the jobs “created” by AI really going to be available to the average person whose job is lost to AI? There seems to be a lot of vague talk about retraining people to take on these new AI-created jobs, but if AI is only creating extremely advanced engineering jobs, how is any of this going to work? I have faith in humans’ intelligence and adaptability, but if you’re a 45-year-old secretary who has worked at the same job for 25 years and it’s now been eliminated, I doubt you’re going to be retrained as an engineer quickly enough to get some more solid working years in before retirement...

Combining:

1) The lack of congressional action on legislation regulating tech companies/social media, even though many members say they strongly favor such legislation - both now and when either party has had unified control

2) The movement of the current SCOTUS toward implementing its major questions doctrine

Can we really expect any action from the federal government to install boundaries (legislative and/or regulatory requirements) on AI companies and their products?

Facts versus opinion. Data versus agenda. These are my concerns. Biased data as input gives biased data as output. AI has to train on data that, in many fields today, is heavily censored (lacking the full scope of opinion) or contains significant bias (as an objective). How can AI detect bias, if it can at all?
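To make one narrow version of this measurable: here is a minimal sketch, assuming a toy labeled dataset (the records and field names are illustrative stand-ins), of the kind of check that can detect skew in training data by comparing outcome rates across groups. Detecting bias of opinion or agenda in free text is a much harder, largely unsolved problem.

```python
# Minimal sketch of one narrow, measurable notion of "bias": comparing
# outcome rates across groups in a dataset (a demographic parity gap).
# The records are toy stand-ins.
records = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 0}, {"group": "B", "label": 1},
    {"group": "B", "label": 0}, {"group": "B", "label": 0},
]

def positive_rate(rows):
    # Fraction of rows with a positive outcome.
    return sum(r["label"] for r in rows) / len(rows)

rates = {g: positive_rate([r for r in records if r["group"] == g])
         for g in {r["group"] for r in records}}

# A large gap between groups is one quantifiable signal of biased data.
gap = max(rates.values()) - min(rates.values())
print(rates, "parity gap:", round(gap, 2))
```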

I know what an LLM is and how they are built. How is a chatbot built from an LLM?
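For what it's worth, the usual recipe is thin: the application keeps a running transcript, wraps each turn in a chat template, and asks the model to continue the text. A minimal sketch, assuming a generic completion call (`generate`, the special tokens, and the system prompt below are illustrative stand-ins, not any particular vendor's API):

```python
# Minimal sketch of turning a plain next-token LLM into a chatbot.
# `generate` is a stand-in for any LLM completion call; the <|...|>
# tokens are illustrative, not a real template spec.
SYSTEM = "You are a helpful assistant."

def generate(prompt: str, stop: str) -> str:
    # Stand-in for a real LLM call (local model or API).
    return "(model continuation would appear here)"

def chat() -> None:
    transcript = f"<|system|>{SYSTEM}<|end|>\n"
    while True:
        user = input("you> ")
        transcript += f"<|user|>{user}<|end|>\n<|assistant|>"
        # The model just continues the transcript up to the stop token.
        reply = generate(transcript, stop="<|end|>")
        transcript += reply + "<|end|>\n"  # remember history between turns
        print("bot>", reply)

if __name__ == "__main__":
    chat()
```

In other words, "chat" is mostly prompt formatting plus remembering the history between turns; the fine-tuning that teaches the model to respect those turn markers happens before deployment.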

We always talk about the output of LLMs but rarely about the input. How does an LLM process a prompt? What happens inside in order for the AI to be able to give an output? How do prompt and answer relate to each other? If I may suggest a headline: "You won't believe what happens after you prompt ChatGPT" ;-)
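A compressed answer, with a sketch: the prompt is tokenized into integer IDs, the model runs one forward pass per generated token, and each new token is appended to the input and fed back in, so prompt and answer relate simply by sharing one context window. Everything below (`VOCAB`, `next_token_distribution`, the greedy sampling) is an illustrative stand-in for the real tokenizer and transformer:

```python
# Minimal sketch of autoregressive decoding: the prompt becomes token IDs,
# and the answer is generated one token at a time, each fed back as input.
import random

VOCAB = ["<eos>", "Paris", "is", "the", "capital", "of", "France", "."]

def tokenize(text: str) -> list[int]:
    # Stand-in for a real tokenizer: words -> integer IDs.
    return [VOCAB.index(w) for w in text.split() if w in VOCAB]

def next_token_distribution(ids: list[int]) -> list[float]:
    # Stand-in for the transformer forward pass, which returns a
    # probability over the whole vocabulary given all tokens so far.
    weights = [random.random() for _ in VOCAB]
    total = sum(weights)
    return [w / total for w in weights]

def complete(prompt: str, max_new: int = 8) -> str:
    ids = tokenize(prompt)                    # 1. prompt -> token IDs
    for _ in range(max_new):
        probs = next_token_distribution(ids)  # 2. one forward pass per token
        nxt = probs.index(max(probs))         # 3. pick a token (greedy)
        if VOCAB[nxt] == "<eos>":
            break
        ids.append(nxt)                       # 4. feed it back in
    return " ".join(VOCAB[i] for i in ids)

print(complete("Paris is the"))
```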

A big emerging application of AI is to integrate it with all kinds of interactive software so that some large language model can assist the user. Often these LLMs are offered as a centralised service like ChatGPT. The end point of this is that a handful of powerful AIs will be reading everyone's spreadsheets, everyone's emails, etc. What are the risks of this and what are the mitigations?

To anticipate one possible answer, maybe you'll say that LLMs don't really read your documents the way a person would. Rather, the model runs inference on the document and spits out some result, fire-and-forget. But (1) do we really know that's all that is going on? Why shouldn't AI companies be logging these interactions and further training their models on them? And (2) how will this evolve if the technology pushes closer to an AGI -- which by definition will be more like a person than a fire-and-forget function evaluation?
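To make the fire-and-forget picture concrete, here is a minimal sketch of what such an integration looks like from the client side (the endpoint and payload shape are hypothetical, not a real API). The key point: the document leaves your machine in the request, and whether the provider logs it, retains it, or trains on it is a server-side policy that cannot be observed from this layer.

```python
# Minimal sketch of "the AI reads your spreadsheet" as seen on the wire.
# The endpoint URL and payload shape are hypothetical stand-ins.
import json
import urllib.request

def summarize(document: str) -> str:
    payload = {"prompt": f"Summarize this spreadsheet:\n{document}"}
    req = urllib.request.Request(
        "https://api.example-llm.com/v1/complete",  # hypothetical endpoint
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    # The document leaves your machine here. Whether the provider logs it
    # or later trains on it is invisible to this code.
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["text"]
```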

Two more questions to piggy-back off my jobs-related question:

1. When do you personally predict that the majority of current white-collar jobs will be automated?

2. What advice would you give a young person (who works in a field exposed to automation) to prepare for becoming economically obsolete?

In a world full of black-box AIs, XAI (explainable AI) is getting a lot of attention.

I'd request a comment on the viability of XAI:

a. from a technical standpoint, in terms of the overheads involved in building AIs with explainability as a foundational feature (see the sketch after this list);

b. from an industry perspective, in terms of the risk of losing competitive edge with XAI; black boxes can be relied upon to keep secrets;

c. in terms of policy, with the problems inherent in forcing the industry to make explainability compulsory. Do you believe it to be necessary?
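On (a), a rough sense of the overhead: even cheap post-hoc techniques multiply evaluation cost. A minimal sketch of permutation importance, one standard model-agnostic explanation method (the model and dataset below are toy stand-ins):

```python
# Minimal sketch of permutation importance, a post-hoc XAI technique:
# shuffle one feature at a time and measure how much accuracy drops.
import random

def accuracy(model, X, y):
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y):
    base = accuracy(model, X, y)
    scores = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        random.shuffle(col)  # destroy feature j's signal
        X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
        scores.append(base - accuracy(model, X_perm, y))
    return scores  # one "importance" per feature

# Toy model and data: the label depends only on the first feature.
model = lambda row: int(row[0] > 0)
X = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(200)]
y = [int(row[0] > 0) for row in X]
print(permutation_importance(model, X, y))  # feature 0 should dominate
```

Note the cost structure: one extra full evaluation pass per feature, and this is the cheap, post-hoc end of the spectrum rather than explainability as a foundational feature.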

Is ‘existential threat’:

A) a cover for the current real-world threats to social cohesion through:

-information: ranging from increased scams/phishing, to ‘flood the zone’ political bias, all the way through to polluting the information commons so that we quite literally cannot believe anything we read (who is President of the USA today, does smoking cause cancer, etc.)

-change: job losses, changes that took 200 years in the Industrial Revolution taking place in 2-5 years, social and economic instability

-biased and unchallengeable decision making (credit scores, profiling affecting minorities and disadvantaged groups disproportionately)

B) a distraction and justification for regulation of AI that will create a regulatory moat for the incumbents against open source developers - since most of the other moats against the competition aren’t sustainable, or

C) actually an existential threat in the Skynet sense?
