Here's what I don't understand about the current AI discourse: training-set/feedback/reinforcement learning has proved dominant over rule-based AI. Yet when it comes to assuming AIs will be "super-intelligent" *in the real world* (not chess, Go, protein folding), I haven't heard any of the dystopians explain what feedback mechanism even has the possibility of creating a threatening AI. Instead, to my lay ears, it sounds like "well, computers are logical and fast, ergo they must be capable of super-intelligence" -- but that completely skirts the issue and seems to rely more on rule-based thinking. Have you heard folks discuss feedback mechanisms for super-intelligence? And might those discussions help clarify how to build AIs safely?
How will LLMs ever get over the hallucination problem? Or will they? Is there a solution that doesn't involve lots and lots of human-executed hard coding?
Which workflows that exist today can be mostly automated by approximately the existing technology, hallucinations and all? Rank the ideas by contribution to GDP.
Given that human consciousness arises from being physically present and vulnerable in a natural environment, thus giving rise to emotions and desires (pain, loss, excitement, fear, joy, love, etc) and that our whole mental life extends from those experiences, do we really need to be worried about AI when it can never achieve true consciousness without them? Chris C.
I know you're not an economist, but what is your reaction to this tweet (which as an economist I think is correct, and you seem to be on the side of the economists):
https://twitter.com/adamdangelo/status/1659965893210931201
"Economists seem to consistently be the most dismissive of AI existential risk concerns, out of all groups of people who think seriously about the future. Why is this and what can we learn from it?"
What is it we are missing, or conversely what is it we understand that others don't?
How should schools address AI applications, like ChatGPT? What guidance would you give secondary school teachers in terms of what adjustments to policies and procedures they should employ?
What is the best way to determine who the niche players are in the AI space? For example, some established audio-processing companies are clearly experimenting with AI, but I expect this technology to be disruptive to the professional audio/video community. Who else is doing work in this area besides the obvious big players?
Combining:
1) The lack of congressional action on legislation regulating tech companies/social media, even though many members say they strongly favor such legislation - both now and during past periods of single-party control
2) The movement of the current SCOTUS towards implementing its major questions doctrine
Can we really expect any action from the federal government to impose boundaries (legislative and/or regulatory requirements) on AI companies and their products?
Facts versus opinion. Data versus agenda. These are my concerns. Biased data as input gives biased data as output. AI has to use data inputs that in many fields today are heavily censored (lacking a full range of opinion) or contain significant bias (by design). How can AI detect bias, if it can at all?
I know what an LLM is and how one is built. How is a chatbot built from an LLM?
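For concreteness, a minimal sketch of the usual answer: a chatbot is essentially a loop that keeps the conversation history, folds it (plus a system instruction) into a single prompt, and asks the base model to complete the assistant's next turn. The complete() function and the prompt format below are placeholders for illustration, not any particular vendor's API.

```python
# Minimal sketch: a chat loop wrapped around a bare next-token LLM.
# complete(prompt) stands in for whatever completion API or local model
# you have; the prompt layout is illustrative only.

def complete(prompt: str) -> str:
    """Placeholder for a base-LLM completion call."""
    raise NotImplementedError

SYSTEM = "You are a helpful assistant."

def chat() -> None:
    history = []  # list of (role, text) turns
    while True:
        user_msg = input("You: ")
        if not user_msg:
            break
        history.append(("user", user_msg))
        # Flatten the system prompt and the running conversation into one text
        # prompt, ending with the assistant's turn for the model to fill in.
        prompt = SYSTEM + "\n"
        for role, text in history:
            prompt += f"{role}: {text}\n"
        prompt += "assistant:"
        reply = complete(prompt)
        history.append(("assistant", reply))
        print("Assistant:", reply)
```

In other words, the "chatbot" is mostly prompt plumbing around the same next-token model, plus the fine-tuning that teaches the model to follow that conversational format.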
We always talk about the output of LLMs but rarely about the input. How does an LLM process a prompt? What happens inside in order for the AI to be able to give an output? How do prompt and answer relate to each other? If I may suggest a headline: "You won't believe what happens after you prompt ChatGPT" ;-)
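Roughly, and leaving the chat-specific wrapping aside, the loop between prompt and answer looks something like the sketch below. Here tokenize, detokenize, and next_token_logits are placeholder names standing in for a real tokenizer and a real model forward pass; the point is that the answer is just more tokens appended to the prompt, one at a time.

```python
# Sketch of the autoregressive loop behind "prompt in, answer out".
import math
import random

def tokenize(text: str) -> list[int]:
    raise NotImplementedError  # real systems map text to token ids here

def detokenize(ids: list[int]) -> str:
    raise NotImplementedError  # ...and map token ids back to text here

def next_token_logits(ids: list[int]) -> list[float]:
    raise NotImplementedError  # one forward pass of the model over the whole context

EOS_ID = 0  # placeholder id for an end-of-sequence token

def generate(prompt: str, max_new_tokens: int = 200) -> str:
    ids = tokenize(prompt)                     # the prompt becomes a sequence of token ids
    for _ in range(max_new_tokens):
        logits = next_token_logits(ids)        # model scores every candidate next token
        weights = [math.exp(l) for l in logits]  # softmax-style weighting for sampling
        nxt = random.choices(range(len(weights)), weights=weights)[0]
        ids.append(nxt)                        # the "answer" is just more tokens appended
        if nxt == EOS_ID:
            break
    return detokenize(ids)
```

So prompt and answer relate through conditioning: every generated token is chosen given the prompt plus everything generated so far.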
A big emerging application of AI is to integrate it with all kinds of interactive software so that a large language model can assist the user. Often these LLMs are offered as a centralised service, like ChatGPT. The end point of this is that a handful of powerful AIs will be reading everyone's spreadsheets, everyone's emails, etc. What are the risks of this, and what are the mitigations?
To anticipate one possible answer, maybe you'll say that LLMs don't really read your documents the way a person would. Rather, the model runs inference on the document and spits out some result, fire-and-forget. But (1) do we really know that's all that is going on? Why shouldn't AI companies be logging these interactions and further training their models on them? And (2) how will this evolve if the technology pushes closer to an AGI -- which by definition will be more like a person than a fire-and-forget function evaluation?
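To make point (1) concrete, here is a toy sketch of how small the gap is between a genuinely fire-and-forget call and one where the provider also retains the document, say for later training. The names (model_call, training_log) are made up for this sketch; the difference is invisible from the caller's side.

```python
# Toy illustration: same user-visible behaviour, different data retention.
from datetime import datetime, timezone

training_log: list[dict] = []  # stand-in for a provider-side data store

def model_call(document: str) -> str:
    raise NotImplementedError  # placeholder for running inference on the document

def fire_and_forget(document: str) -> str:
    # Stateless path: the document exists only for the duration of the call.
    return model_call(document)

def logged(document: str) -> str:
    # Same output, but the provider also keeps the interaction for later use.
    result = model_call(document)
    training_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "document": document,
        "output": result,
    })
    return result
```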
In a world full of black-box AIs, XAI, or explainable AI, is getting a lot of attention.
I'd request a comment on the viability of XAI:
a. from a technical standpoint, in terms of the overhead involved in building AIs with explainability as a foundational feature;
b. from an industry perspective, in terms of the risk of losing competitive edge with XAI - black boxes can be relied upon to keep secrets;
c. in terms of policy, given the problems inherent in forcing the industry to make explainability compulsory. Do you believe it to be necessary?
Is ‘existential threat’:
A) a cover for the current real-world threats to social cohesion through:
- information (ranging from increased scams/phishing, to ‘flood the zone’ political bias, all the way through to polluting the information commons so that we quite literally cannot believe anything we read: who is President of the USA today, does smoking cause cancer, etc.)
- change: job losses, shifts that took 200 years in the Industrial Revolution taking place in 2-5 years, social and economic instability
- biased and unchallengeable decision-making (credit scoring and profiling that affect minorities and disadvantaged groups disproportionately)
B) a distraction and justification for regulation of AI that will create a regulatory moat for the incumbents against open source developers - since most of the other moats against the competition aren’t sustainable, or
C) actually an existential threat in the Skynet sense.
Do you think AI is currently in an overhype phase, the way self-driving cars were several years ago when many were predicting full self-driving was just around the corner? My own experience with Copilot, for example, is that it's helpful, but it's certainly not making me 10x more productive (more like 1.1x).
What do you think are the odds of a widening regulatory divide between Europe and the US? GDPR already seems to be driving a bit of a wedge between the EU and the US, and I can't see Europe taking a laissez-faire approach to e.g. training datasets or outputs that mimic a specific person's work.