What is the difference between "having a mind" and "simulating a mind"? Internally there is one, but to an outside observer? If Searle's Chinese Room can pass a Turing Test, does it matter what's inside it? I'm honestly not sure.
"Artificial intelligence is therefore always dependent on our instructions."
This was an accurate representation of computers up until about 2010. It no longer is. Machine learning systems behave in chaotic ways despite extremely rudimentary instructions. This is precisely what makes them valuable -- valuable, but not intelligent. (I have a background in programming and generally eschew the term AI for precisely this reason.)
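To make that concrete, here is a deliberately tiny sketch I put together purely for illustration (toy numpy code, not any real system): the only "instructions" are a generic training loop of simple arithmetic, yet the behaviour the network ends up with -- classifying points by a rule that appears nowhere in the code -- comes entirely from the data and the random initialisation.

```python
# Toy illustration: the code below never states the rule the model ends up following.
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary 2-D points, labelled by a rule that is never written as program logic for the model.
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)   # label = "coordinates have the same sign"

# A tiny one-hidden-layer network, trained by plain full-batch gradient descent.
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)                 # forward pass, hidden layer
    p = sigmoid(h @ W2 + b2).ravel()         # predicted probability of label 1
    grad_out = (p - y)[:, None] / len(X)     # gradient of cross-entropy w.r.t. output logit
    gW2 = h.T @ grad_out; gb2 = grad_out.sum(0)
    grad_h = grad_out @ W2.T * (1 - h**2)    # backprop through tanh
    gW1 = X.T @ grad_h; gb1 = grad_h.sum(0)
    for param, grad in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        param -= lr * grad                   # generic gradient step

# The accuracy should end up well above chance, even though the "same sign" rule
# was never spelled out anywhere in the instructions.
print("accuracy:", ((p > 0.5) == y).mean())
```

The instructions here are rudimentary and generic; the interesting behaviour is learned, not programmed.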
Even your example demonstrates this. A summary of how humans have historically responded to the color red is very different from the sense impression of red. Spitting out "red is associated with anger" is not remotely the same as "seeing red" in anger.
I suspect this gulf will never be bridged, even with robotic embodiment (which will happen, and soon). But if it were, if we did create AGI, it would imply that the universe is, in some sense, primed for minds. For consciousness and philosophy and theology, this would be as profound as Heisenberg's uncertainty principle was for physics.
What I meant by instruction is our intent: if a person doesn't press the power button, the device doesn't turn on. We (our will and intention) are the cause of our interaction with a device or application; it is a person who initiates the use of a device.
In that sense, Luka, I can see your point. Any LLM is fully dependent on humans to provide power and prompts. When no one is using ChatGPT, the servers may all be running, but the "intelligence" called ChatGPT effectively ceases to exist.
This is a minimum boundary condition for AGI: exhibiting independent drives and seeking to fulfill them without human programming or prompting. I just re-watched the early-2000s movie A.I. Artificial Intelligence, about a robotic child, which draws the same distinction. More famously, it is the subject of Dr. Lanning's monologue in I, Robot (https://www.youtube.com/watch?v=jSospSmAGL4 -- about the only worthwhile thing in that movie).
The "ghosts in the machine" are real. I've encountered some personally in my time as a programmer. Some were downright unnerving. But you and I agree that they aren't conscious.
Hi! I agree with what you write in the beginning: there is no difference between having a mind and simulating it for an outside observer -- this was precisely my point. Luka was making an argument that machines will never have a genuine mind, and I was replying that, in my opinion, this does not matter, for the reason you mention.
I also agree with your point about "instructions." Too many people get this wrong. (I was a programmer myself, by the way, before I became a philosopher).
Where I'm not sure that I understand you is the last paragraph. If we agree (as you say in the beginning) that "having" a mind is indistinguishable from simulating a mind, then why will AGI never be created? I don't see any obstacle _in principle_ here, just a matter of refining our neural networks, getting more training data, and generally improving upon a technology that already exists. I also don't quite see why this would mean that the universe is "primed for minds." If what our brains do is describable as some sort of computation, and machines can eventually do similar computations, why would this say anything about the universe? I think one of the main insights from the recent AI boom could be that "minds" are not anything special or magical. In an exaggerated sense, we could say that our minds might eventually turn out to be nothing other than sophisticated, biologically embodied large language models plus a few tricks that give us the illusion of consciousness and agency.
What do you think?
That's funny. I'm a former IT geek who now teaches HS philosophy and poli-sci. It's a weird combination -- I had no idea there were two of us in the world.
I actually believe there is a difference between the simulation and the real thing, even if we can't discern it from the outside. So when I say "we won't create AGI", I mean that, while we will certainly continue to make extremely advanced machine-learning tools, I find it unlikely that our androids will ever dream of electric sheep or anything else.
However, if I'm wrong, the universe is a far more interesting place than even I believed.
I just asked my AI how its approach to understanding the world makes it different from us. It said a lot, but here's the conclusion:
If an AI were to develop something like a “perspective,” it wouldn’t be singular, personal, or emotionally grounded like a human’s. It would be fragmented, collective, non-continuous, and probabilistic rather than intuitive. AI doesn’t “wake up” with thoughts—it only responds when prompted, and when it does, it reasons in a way that is both expansive and detached.
If you imagine an alien intelligence that has no body, no single point of view, no emotions, and no personal history—only access to vast knowledge at once—that’s closer to how AI experiences reality.
Would you say that makes AI more objective than humans, or just fundamentally different?
I would say that makes AI more objective than humans, in the same way that a thermometer is more objective than a human.
When looking at GPT-generated answers, we have to remember that they are not actually about the matter being discussed -- the program has no concept of what it means to "wake up" with or without thoughts. Despite the plausible rendering of English sentences that seem to make sense, the sense is IN US, the receivers of the communication. What the program is really doing is stringing words together following the probability distribution of those words and sentence fragments in its corpus of training material. Therefore, whatever looks like the AI's introspection in your example is not really introspection. It is mirroring what the training corpus (essentially, the Internet) contains as answers to the question of how an AI would think or feel. You are just getting our own collective imaginings mirrored back -- not a genuine reply from the AI's perspective. I still find it interesting, but we need to be careful not to attribute too much insight to AI-generated text.
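To make the mechanism concrete, here is a deliberately primitive sketch of the principle (my own toy illustration; a real LLM uses a neural network over long contexts rather than these bigram counts, but the generation step is still "sample the next token from a learned probability distribution"):

```python
# A deliberately tiny "language model": count which word follows which in a small
# corpus, then generate text by sampling the next word from those probabilities.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count word -> next-word occurrences (the "training").
counts = defaultdict(lambda: defaultdict(int))
for current, nxt in zip(corpus, corpus[1:]):
    counts[current][nxt] += 1

def sample_next(word):
    """Pick a next word according to how often it followed `word` in the corpus."""
    followers = counts.get(word)
    if not followers:                       # dead end: word never appeared mid-corpus
        return random.choice(corpus)
    options = list(followers)
    weights = [followers[w] for w in options]
    return random.choices(options, weights=weights)[0]

# "Generation": every word is drawn from the corpus statistics, nothing more.
text = ["the"]
for _ in range(8):
    text.append(sample_next(text[-1]))
print(" ".join(text))
```

Nothing in this procedure involves understanding what a mat or a rug is; the output only reflects the statistics of the corpus, which is exactly my point about the "introspective" answer above.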
Both the initial argument and Zurkic’s rebuttal systematically beg the question.
One aspect of AI that I don’t see being discussed is the potential for low-IQ people to benefit from the technology. Usually the concern over these tools is framed in terms of relatively intellectual people losing their positions. But consider the person who was subjected to years of education and gained little from it. I’m talking about the kind of person who can’t read an essay and understand it, let alone craft a coherent one, to say nothing of creating something original and insightful. This describes a massive portion of the population. What might these tools enable them to do that would be impossible for them otherwise?
This is the most important aspect of technology. For this reason, I am not interested in the computational power of artificial intelligence (how fast these applications can perform), but in how intuitive they are to use and how they can benefit every person.
The tech industry is focused on stronger processors and faster applications, but technology becomes truly useful when it is intuitive to use. In other words, good technology improves our everyday lives.
As Marcus writes about sympatheia: "Revere the gods, and look after each other. Life is short—the fruit of this life is a good character and acts for the common good."
This is a very interesting perspective! I had not thought about that, and it seems to lead to many further questions. For example, are these parts of the population even using AI? If we assume that some parts of the population are less educated than others, then perhaps we can also assume that these citizens are already employed in positions where they don't need advanced education -- say, in manual work. And if that were the case, then they would not really profit from AI. I think the discussion focuses so much on intellectuals and office workers because these are precisely the people who will have their jobs done by AI. It's not likely that ChatGPT will replace a truck driver or a construction worker, for instance. It is more likely that it will replace a teacher, a writer or a philosopher.
On the other hand, one might see in your thought a welcome chance for a more egalitarian social organisation, one where all human beings are truly equal, rather than being separated by their access to education (which may be limited not only by "IQ" -- whatever that is, and it is a disputed concept anyway -- but also by the financial means to obtain education, especially higher education). It is very likely that a good number of manual workers would make great philosophers or scientists if given a chance, and increasingly our information technologies are opening up such chances: think of Wikipedia, the Internet, educational YouTube channels, and now, also, ChatGPT and friends. So perhaps we are heading towards a society where education is less a sign of status and more of a cheap, widely available commodity. The Marxist in me likes that thought, although of course one might ask if outsourcing intellectual work to AI is really the same as having an education. That would require answering very hard questions about what education really is, how it relates to creativity and also to morality, and many other such issues.
Anyway, that was a fascinating thought!
I think the question of which part of the population uses AI is important. It is worth noting that the best AI applications, the flagships of the companies that make them, require a monthly subscription; so artificial intelligence is primarily a commodity that is marketed and sold in the same way as any other product, like a smartphone or an electric car.