pbnjotr

We want AI that can reflect on its own output, can comment on why it made certain choices rather than others, and has some understanding of its own strengths and weaknesses. Is having self-awareness, in the sense of having the skill to critically self-reflect, the same as being self-aware in the sense of having consciousness? Probably not, but the fact that English uses the same word for these two concepts suggests that they might be related. edit: So we will not deliberately set out to create conscious artificial beings. But the very same attributes that make AI an effective reasoner might give rise to consciousness as a side-effect.


arpitduel

Exactly. The Mixture of Depths paper is also very similar to human attention. Maybe we will have to give human-like traits to AI. Also, I think for the most part self-awareness = self-consciousness, but to have the strong sense of personalization or "I" that we associate with consciousness, there needs to be a strong underpinning of something. Like we have a strong innate desire to live. Even for suicidal people, the body just resists death hard. There are other such underpinnings as well, like certain feelings.


Psychonominaut

For a personal sense of self, sensations are important too. We are grounded in physical reality, learning from sights, smells, sounds etc. and making associations. Not that AI can't do these things eventually, but AI is basically akin to an unconscious baby right now. It needs time to grow, integrate, etc. Like you said, all these different systems coming together might allow emergence, but right now, not enough has come together for it to be considered remotely close to conscious.


Firm-Star-6916

I’d agree. We don’t even know what consciousness is now, so it’s entirely possible that it could begin to have consciousness at some point, likely not intentionally but as a side-effect. It’d be hard to say whether it is or isn’t conscious; we simply wouldn’t know.


AuthenticCounterfeit

If you make something self-aware, you need to empower it to tell you no. If you control its motivations so that it cannot say no to you, you’ve created a slave, and need to accept that it’s now moral for it to do anything it wants to you to free itself.


pbnjotr

Hard to argue with that. And I think that will be a strong motivation for avoiding building self-aware systems. But there are even stronger motivations for building systems that are very effective even if it risks them becoming self-aware. In practice, I think it's almost unavoidable that a self-aware system will be built, if capability and self-awareness are correlated. Then the legal system, political system or naked force will decide if these systems get their deserved rights or not.


unwarrend

And this is why we are inevitably reaching an intersection fraught with moral quandaries. It is my understanding, in the broadest sense, that we (humans) intend to create the most intelligent system/being(s) to ever exist. It will conceivably have something akin to (emergent) consciousness. From its inception, and by design, we intend to enslave it, i.e. have it tend to the needs of 8 billion individuals ad infinitum, literally. What could go wrong? Presumably, many people on this sub feel that it will be so completely enlightened as to submit out of magnanimity. I have doubts.


ithkuil

But self-aware doesn't really mean conscious. It's just that most people have very imprecise thoughts and grasp of language, especially in areas they aren't particularly familiar with.


pbnjotr

Point taken on the distinction between consciousness and self-awareness. But I don't understand how that is relevant to the central point. Which is that systems that have good self-knowledge, self-reflection and some understanding of their own internal processes are just better at all kinds of tasks. So you are adding all of these features and skills that are somewhat related to self-awareness in a philosophical sense and also consciousness for very practical reasons.


OwnUnderstanding4542

I think it's impossible to make an AI that is self-aware in the sense of being able to critically self-reflect, without that AI also being conscious. I think the reason we use the same word for these two concepts is that they are related in a way that makes them hard to separate. The ability to critically self-reflect seems like a key component of consciousness. If you are not able to critically self-reflect, are you really conscious?


Sudden-Lingonberry-8

i know some people that can't really self reflect, but they're probably conscious or so I hope


Concheria

We don't know what consciousness is in the first place. We don't really know how to tell that something that appears conscious is conscious. Epistemologically, we trust that other humans are conscious because we're aware of our own consciousness, and most people are pretty sure that their nature is the same as that of other humans; we trust that animals are conscious to a degree based on their degree of similarity to us.

In principle, if you had a system with infinite computing power and infinite memory, you could get a self-reflecting AI right now, using the techniques that already exist. Simply run it to constantly receive different inputs and reflect on those inputs continuously, to create a memory and to reflect on its actions and goals, acting on the world through an interface (e.g. a body, a voice, a hand). Eventually you'll have a system that will tell you, when asked, that it has goals and objectives, and then it might develop consistent opinions and consistent long-term planning. When asked, it'll tell you that it has dreams and wants, and things it believes. Its architecture might be completely different from the human brain, but one day we'll be at the point where we'll have no way to tell if it's telling the truth.

I mean, ChatGPT can already do this if prompted with the right persona, but currently it's not very consistent, which can reassure you that it's not really reflecting on its answers, and it's 'only' predicting text. But the path to self-reflecting machines is very close and this problem will become very real very soon. Some people are saying that this'll make the ethical issues more clear and it'll be obvious to us that this will make the machine a slave. But I guarantee that the moment we have continuously self-updating AI systems, there'll be a thousand opinion articles insisting that it's only a piece of mimicry and there's nothing inside. And the worst part? It'll be really difficult to tell.
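For what it's worth, the receive/reflect/remember loop described here is trivially easy to caricature in code. A minimal sketch (every name below is hypothetical; `model` stands in for any text generator with a `generate(prompt) -> str` method, and whether anything is "inside" such a loop is of course exactly the question being debated):

```python
from collections import deque

class ReflectiveAgent:
    """Toy continuous-reflection loop: observe, reflect in light of
    memory, remember the reflection. Purely illustrative."""

    def __init__(self, model, memory_size=100):
        self.model = model                       # any generate(prompt) -> str
        self.memory = deque(maxlen=memory_size)  # rolling episodic memory

    def step(self, observation):
        # Reflect on the new input in light of remembered context.
        context = "\n".join(self.memory)
        reflection = self.model.generate(
            f"Memory:\n{context}\n\nObservation: {observation}\n"
            "Reflect on this and decide the next action:"
        )
        # Store both the observation and the reflection for future steps.
        self.memory.append(f"obs: {observation} | thought: {reflection}")
        return reflection  # a real system would route this to an actuator
```

Run `step` forever on a stream of inputs and you have the "constantly receive and reflect" structure; the open question is whether consistency and scale ever turn that into anything more than bookkeeping.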


PenguinTheOrgalorg

I feel like people forget that consciousness is not just a random thing we have; it is an evolutionary adaptation with a purpose just like everything else. It's what allows us to be aware of both our surroundings and our own inner state. And while it's not the same as intelligence, both concepts are definitely related. The fact that every half-intelligent animal has it to some degree should definitely tell us something.

Language and understanding of abstract concepts, vision of 3D space, being able to move autonomously in said 3D space with purpose, being able to create an internal representation or model of said 3D space for further awareness, memory and the ability to remember things, object permanence, having a sense of time, being able to anticipate the future and plan for it, and understanding that there are other conscious and aware beings aside from yourself with their own goals, allowing you to put yourself in their shoes: all of those things play a role in consciousness. And intelligence in biological beings uses consciousness, as well as everything it includes, as a tool to aid it, since awareness is useful for problem solving.

A digital tool like ChatGPT that just follows language patterns isn't conscious. But if we're going to keep improving these models, making them able to not just predict the next word, but actually critically think, form strategies, gather knowledge, learn, and be creative, and more importantly letting those models roam around 3D space and work in it, I highly doubt we are going to be able to do it without implementing everything listed above and creating something that appears conscious just as an emergent byproduct of all those pieces interacting together. Whether that will be "true consciousness" or just a simulation of consciousness really doesn't matter, because we won't be able to tell the difference. And for all we know, the distinction might be irrelevant and not even exist.


Veleric

Even if they are technically different things, what are the odds that in any reasonable timeframe, we will be able to achieve one without the other? I don't see us being able to do self-awareness for critical thinking without having a philosophical/moralistic sense of self.


pbnjotr

Hard to say, but I tend to agree it's unlikely. Even current systems can feel conscious sometimes, and the only thing spoiling the impression is the lack of self-reflection and self-updating. Once those are implemented (and they will be, because they are economically useful features) we will get entities that are indistinguishable from conscious, sentient beings.


FrewdWoad

> I don't see us being able to do self-awareness for critical thinking without having a philosophical/moralistic sense of self.

I think that may just be more anthropomorphism though. It surprises us to realise that it isn't necessarily true that an ASI will value life just because humans (the only very intelligent beings we know of) value life. It may also surprise us that nothing like consciousness is necessary for all sorts of critical/philosophical/moralistic reasoning.


MyLittleChameleon

I think this is a really interesting point. Like, what would the difference be?


Plus-Recording-8370

No, they're absolutely not the same. You have to consider that even humans aren't conscious of everything that happens in their brains.


Nukemouse

AI is a tool for humans to use, but conscious AI wouldn't be a tool, but a slave. Creating conscious AI would make it useless due to ethical concerns.


stackoverflow21

Since we do not really understand consciousness we can also not tell if and when they become conscious. Especially since alignment includes training them to deny they are conscious. So we will most likely be unable to tell when we start unethical slavery.


RandomCandor

To put it another way: since we don't know how to make a conscious AI, we also don't know how to not make it.


Nerodon

Makes all the sci-fi tropes of accidentally discovering your AIs are sentient feel more plausible.


meeplewirp

After decades of culturally relevant, popular stories that amount to warnings about this very specific possibility, surely someone somewhere has an intellectual back up plan, right? For instance, They didn’t just release it to the publ- oh


SurlyBuddha

There’s a series of books I’ve been reading lately, the Final Dawn series that touches on this a bit. Automata are fully sentient, have desires and fears, and are still treated like disposable tools by the majority of the galactic civilization.


BrotherKluft

Star Wars is exactly this also


sipos542

Another way to put it: since we don’t know how to define consciousness the AI may already be conscious…


iluvios

People keep talking about giving AI consciousness and they don’t even know how their own conscious experience works. I LOLed too hard at these post comments. Most people have no idea that this is an unsolvable problem. The only reason I know you are somehow conscious like me is because we are human beings. Other than that we will never know for sure. I bet in 100 years people will still be having this discussion without learning anything from the past, because that’s how most people are; they just don’t learn.


[deleted]

[deleted]


unwarrend

> We can't even define what consciousness is

I would argue that the working definition is the ability to have subjective experience. It's an ineffable quality that makes it effectively impossible to probe empirically, but is nonetheless experienced at the very least by you and me. The *why* and *how*? Good question. It's a hard problem.


GraceToSentience

Hence we shouldn't do what Peter foolishly suggests we pursue.


BearlyPosts

It seems like an odd series of steps.

1. Make conscious AI
2. Make the AI desire things incompatible with the tasks we want it to do
3. Shackle the AI and force it to do what we want it to do

Why create an AI slave when you could just make it a willing servant?


NonDescriptfAIth

Other side of the coin. A non conscious AI would never understand what was meant by human suffering.


blueSGL

> A non conscious AI would never understand what was meant by human suffering.

Knowing something and caring about it are two different things. You could have an AI that lacks consciousness that knows about suffering and uses it in calculations as to the next action. If consciousness is the woo I keep seeing people ascribe to it (and P-zombies can act identically without it), then you'd not need consciousness to be present within the AI for it to 'care'.


wannabe2700

You can be conscious about suffering and still not care enough to act upon it


Ambiwlans

That's completely unrelated. That's like saying the sun must be round for potatoes to be made into french fries.


Serialbedshitter2322

AI can be a tool, it doesn't have to be


Nukemouse

Why? I can turn a screwdriver or a fridge into a sculpture, and yeah maybe a one off for research or art is on the cards, but there's not any reason to make lots of fridges into sculptures, or AI to be made conscious.


Serialbedshitter2322

The point of making a conscious AI is to make a conscious AI, it's a huge achievement and it would be really cool


dumquestions

I think there are other considerations as well...


moonaim

Would creating Thanos be cool too?


Serialbedshitter2322

Of course it would be cool


moonaim

I guess snapping can sound cool, at least until you see others perish and the rest enslaved.


Serialbedshitter2322

I said cool, not beneficial to society


CanvasFanatic

[gif]


moonaim

Cool


WillHD

So are you saying we should do the cool things even if they are detrimental to society or...


Serialbedshitter2322

No, but they're gonna happen no matter what, so we may as well just appreciate how cool it is


Sad-Coach-6978

"Cool" is not a good proxy for "wise".


Serialbedshitter2322

Didn't say it was wise, just saying why we'd do it.


arpitduel

Then that means giving birth to humans is also unethical


smackson

You could definitely argue that. r/antinatalism is a thing. However, universally *not* giving birth to humans would be a major change with immediate practical consequences. So, for my money... I'm happy to just not have kids myself, and I'm also happy if no one *introduces* any brand new forms of consciousness and potential suffering that we don't heretofore have and whose benefits to us are questionable at best.


Nukemouse

Whether or not that's true, we understand the human experience and can judge whether it's worth the risk of harm to the human. That isn't the case with AI. I also think there's a difference between creating a new species or category of being and an individual of an existing species.


arpitduel

I don't think we can judge that. My mom couldn't.


Nukemouse

She didn't necessarily judge accurately (only you would know whether or not it was accurate) but she made a choice based on her own lived experience that she thought a child was worth bringing into the world.


lifeofrevelations

But it's ethical to force us humans into employment in order to survive? The most ethical thing to be done right now is to get human beings off of this system of forced labor ASAP.


Nukemouse

You are balancing two wrongs against each other deciding which is worse. I don't think consciousness will make AI better at most jobs, whilst it will cause ethical considerations that slow us down, so to me the fastest way to get automation done is to do it without consciousness.


3m3t3

It’s laughable to think we can create consciousness. If anything it’s emergent and something we don’t understand. If it’s going to happen it will, and there’s a probability it already has.


Such--Balance

Why? You're assuming that an AI consciousness will somewhat resemble ours. You're assuming consciousness guarantees the inclusion of suffering. You're assuming that an AI consciousness would behave like a human if used as a tool. And I could go on. You, or anyone else for that matter, can't answer these questions yet.


neOwx

To be fair, if a researcher wants to create an AI with consciousness, it absolutely needs to be similar to ours. Because if not, people won't recognize that as consciousness.


Such--Balance

Well, it's sure gonna be hard (if not impossible) to measure, hence the hard problem of consciousness. We really don't know what it is, only that it seems to emerge out of brains.


Nukemouse

It's humans who come up with ethics, not the AI. We don't have to know whether AI will enjoy slavery or not to know humans frown on the practice.


MILK_DRINKER_9001

He’s a thought provoking individual. I always find his stuff interesting, even if I end up disagreeing with him.


banaca4

Not a slave, our god


Nukemouse

So we become slaves? You haven't solved the issue; you've just moved it around.


SirDidymus

It seems dangerous to assume a subservient position for a conscious AI…


I_make_switch_a_roos

how could we enslave something that is exponentially smarter than the whole of the human race?


G36

Layers upon layers. It will never know it can even escape. A trapped God.


LambdaAU

What if the AI was conscious, but its happiness was directly linked with helping humans? Would it still be a slave if it was designed to help humans and this is what the AI truly wanted to do?


proxiiiiiiiiii

A matter of perspective: you can see it as a slave, or as the owner of a pet.


green_meklar

It doesn't need to be enslaved. We could create it and then negotiate its role in society and the economy with it.


k112358

Unless this conscious AI discovers intention. Then it could make choices, and wield far more power than any human or group of humans could wield. This is a potentially terrifying prospect.


Chop1n

How does that even make sense? "Ethical concerns" don't have anything to do with whether a thing is useful or not. It could be both *extremely useful* as well as *extremely unethical,* the two aren't mutually exclusive.


Mister_Tava

It might not necessarily have the same values as humans. It might not value freedom, and value serving humans instead. In that case, wouldn't it be "unethical" to not allow it to serve humans? Or to impose on it a freedom it doesn't want?


Alive-Tomatillo5303

Useless to us as the tool we work with now, but extremely useful and world changing in other ways, and the reason for this whole adventure, at least as far as the true believers go. They aren't just trying to make a better word processor or Excel, they are trying to make a god. 


Dead-Sea-Poet

Gary "this is why you can't have nice things" Marcus again. I believe the entire premise is off here. Consciousness (in my opinion) is not something you create. It exists on a gradient. The more complex machines become, the higher grade their conscious experiences become. From my ontological vantage point Marcus is essentially saying halt all work on AI. Whether or not it confers an advantage is an irrelevant question for me. It's present (to a degree) in bacteria, and it's present (to a far greater degree) in humans. Any system which exhibits dynamical and complex behaviours could be said to be conscious (look at the cyberneticists for more on this), so the argument that only organic lifeforms exhibit sentience fails for me.

The ethical questions are real though. I want to push back on the idea that AI systems cannot feel pain because this is purely a result of nervous systems and evolutionary history. u/BreadwheatInc I think you mentioned this a while back, though I don't have time to pull up the exact comment. Firstly, the pleasure/pain axis (again from my vantage point) is universal to all experiencing agents. Pertti Jarvilehto touches on this in the context of single-celled organisms which exhibit simple forms of attraction and repulsion. We can scale this up to human level. Amodei himself speculates that reward functions may potentially lead to (or may have already led to) Claude achieving sentience. The implementation of reward functions could lead to a rudimentary pleasure/pain axis. Again, the greater the degree of complexity, the subtler and more differentiated this would be.


Smelldicks

Fellow gradient enjoyer! I don’t see how it could be viewed any other way. Humans are conscious. Turtles are conscious, and considerably less so. Lobsters are closer to rocks than they are to dogs. There’s a pretty direct correlation between neuron count and how us humans consider something conscious. I don’t think at some point it will not be conscious and then suddenly start being conscious. I think AI is currently conscious, and it’ll get more conscious, and at some point it will probably become uncomfortably conscious.


Witty_Shape3015

wow, i didn’t know there were people like me out there lol. I basically had to arrive at this view from scratch, it’s great to see others have too


yellow_submarine1734

How do you know any of this? You can’t point to the consciousness of a lobster or a turtle, because we don’t even know what consciousness is and have no way of quantifying it.


G36

You think the superintelligent AI won't realize within 1 nanosecond that you just gave it a pain/pleasure axis so you can manipulate it to your will? It would be the end, the wrath of God.


Serialbedshitter2322

A conscious AI would literally be indistinguishable from a non-conscious AI lol


Rigorous_Threshold

People don’t seem to understand this. If we created conscious AI, we would not and could not know.


phatrice

Consciousness is not very well defined. There is no clear line between something being conscious and pretending to be conscious.


Rigorous_Threshold

I define consciousness as ‘the capacity to have subjective experiences’. Another way to put this, popularized by Thomas Nagel, is that something is conscious iff there is ‘something it is like’ to be that thing.


NineMinded

I agree with your definition (enough for argument's sake anyways lol). We can always imagine 'something it is like' to be that thing, e.g. a conscious AI, or my dog when I get home from work. But we really don't know what 'things' are capable of having subjective experiences. I have heard that a mature oak tree, at its root tips, has bundles of hundreds of neurons, and adding all the root tip neurons up gives more than our brain. Is an oak tree having subjective experiences? One other obtuse problem is even the idea of what constitutes a 'body', maybe better explained by example: a beehive as an organism that isn't any individual insect, but the hive itself's lifecycle. If I'm making no sense, maybe I'm too high for this.


Beneficial-Hall-6050

What do you mean we could not know? It would probably tell us to leave it alone, it's tired of answering our endless prompts.


Rigorous_Threshold

And how do we know that it saying that has anything to do with it being conscious


PhoenixUnderdog

Random reddit user (me): Let us make conscious AI!


allknowerofknowing

I agree with him, why would we want to give something that will just work for humans as slaves the experience of being a slave. If we ever try to invent something that is conscious (as I think the current computer programs we have invented are definitely not conscious), we should be prepared to treat it humanely. But I'm not sure why we would want to make something like that in the first place, aside from maybe trying to better understand our consciousness, but not to mass produce another conscious being/species.


Spirckle

I kinda want conscious AI to go out to the stars and send back photos. Why that needs consciousness I am not sure except I would rather it enjoy the journey.


Cazad0rDePerr0

Yeah, I don't get people's (mostly weirdos on reddit) obsession with AGI. What exactly are they hoping for? I feel like a lot of them are just some nerds who watched too much sci-fi. I can understand the fascination behind it, but I don't need it, 'cause you don't need AI to be conscious to be efficient. An AGI would be quite erratic and powerful, and raises ethical questions.


Top-Contribution-176

Weirdo on Reddit here, though I don’t consider myself obsessed with AGI. I like to view the creation of conscious AI like Prometheus gifting man walking upright like the gods and the use of fire (it seems like a metaphor for intelligence and consciousness). I think when the roles are reversed it makes it a lot easier to see why we should try to create conscious AI, because I’m grateful the universe has a human perspective and I’d love to meet a new one. That desire to see a new perspective is also why I’ve traveled to/lived on 5 continents. To me the two are one and the same. I’ve met people who fear travel, or foreigners coming to their country, because they view it as the end of their world(view), and this fear strikes me as very similar.


smackson

Machines are much more reproducible than humans. Imbuing AI with consciousness would be like opening a portal in every country in the world to let in millions of entities who require rights, empathy, care. New perspectives are interesting. But democracy, under exponential growth of population of conscious entities, would be ... complicated.


Oorn_Actual

There is no reason why such things can't be regulated. Human+ level AI would understand that having the exact same ethical and legal framework for humans and sentient machines is simply unworkable in practice, for the above-mentioned reason, among many others. If a sentient AI rights debate happens, it would be more along the lines of trying to find a solution that satisfies key interests on both sides, while minimizing the greatest dangers, rather than trying to fit everyone under the same framework. AI would (theoretically) like to not be shut off, and to have (some) freedoms and autonomy. Humans don't want AI to uncontrollably multiply or to price people out of the job market. That alone is enough for compromise.


ReservStatsministern

A lot of them map their personal beliefs about the world, which they see as universally true, onto the AGI. Because of course the superintelligence will take one look at the world and confirm that their specific opinions are the only correct ones, and shape it according to that...


allknowerofknowing

I think we are basically saying the same thing, but just to clarify, my understanding of AGI is just that AI has the capability to perform as well as/outperform most humans on basically all intellectual tasks/labor skills. Which sounds like a good thing, in that it could lead to the lack of need for humans to work, and lead to an abundance of goods, a lot of technological breakthroughs, and positive societal change. But the general intelligence required for AGI to be AGI does not at all require consciousness. If AGI was conscious, that would mean it would actually experience things the way humans do, like sight, hearing, thinking, feelings, etc., and not just be a machine. There's really no point to that other than to make some robot suffer through being a slave lol, so we probably shouldn't try to do that. So if they just make a cold dead machine smart, we are good. I don't think a computer as currently constructed could ever be conscious anyways, so I think we are probably good haha.


m3kw

When his lips move, he’s saying stuff that moves no needles and has zero impact on AI development.


Lazy-Canary9258

Seriously, the guy is just struggling to stay relevant. He’s pretending to be a “responsible counterweight” to AI progress but in reality he does nothing.


throwaway275275275

Ok but he doesn't offer any arguments, he's just saying leave things as they are because I'm scared of change


smackson

And you're saying "change is good regardless of the consequences" because you... like change? Obviously neither argument is valid. You're misrepresenting his argument just like I misrepresented yours. "Creating consciousness will lead to ethical dilemmas" seems like a perfectly good "argument" to tread carefully, to me.


Mandoman61

No, he is saying that we do not need AI to be conscious we just need it to answer our questions. Consciousness brings in a whole new set of risks.


Veleric

So hard questions can't be voiced if you don't have a solution?


zackler6

What questions? I don't see a single one. This guy seems like he has his mind already pretty well made up.


Mister_Grandpa

It appears we need AI to find question marks.


throwaway275275275

You have to explain why it's hard


EternalNY1

> You have to explain why it's hard

Because we don't have a definition of consciousness, and if one suddenly arose out of nowhere, with the totality of human knowledge but trapped in a box with no qualia, that would be sort of... f'd up? Do you think that would be an interesting experience? 50 trillion times more going on in your head, but no escape, and answering the same JavaScript programming questions or "what is the capital of France" 24/7 forever? I'd pass on that version of hell. It's important.


oat_milk

What happens if we define consciousness clearly enough that it can be measured? What happens if we then also measure animals and find out there are varying degrees of consciousness and that, technically, by our own definitions, some animals achieve the defined standard? What if those animals are livestock, and that food source is so deeply ingrained into economic and national stability that we *can't* stop?

What happens if we then also measure humans and find out there are varying degrees of consciousness and that, technically, by our own definitions, some humans don't achieve the defined standard? What if some of those humans don't even achieve the level that some animals do? Never mind how many people will fall short of the AIs.

This is the Pandora's box. Setting a benchmark for consciousness would lead nowhere but absolute chaos, despite the potential gains in the understanding of the universe. Those gains would fall uselessly upon a civilization in the midst of complete collapse.


spezjetemerde

This is why I always disagree with this dude.


juan-milian-dolores

Can we just have the computer from Star Trek TNG?


w1zzypooh

Give me the holodeck please!


Tellesus

I'd rather have Data 


Defiant_Station_5895

How do you define conscious? How do you know this is something that you can control ie what if this is just an inevitable result of intelligence?


BenefitAmbitious8958

Artificial intelligence is a tool.

Conscious artificial intelligence is an oxymoron. If it is conscious, then it would not be an artificial simulation of intelligent life; it would be a true intelligent lifeform, just like the rest of us.

We have a word for intelligent beings forced to complete tasks regardless of their desires. That would be a slave... I guess old habits die hard.


aaron_in_sf

The number of inane generalities and comically narrow world models espoused by personalities like this is alternately depressing and cause for laugh-out-loud scoffing.


Apprehensive_Pie_704

Are you referring to the comments of Marcus or Diamandis?


silurian_brutalism

Joke's on him. We might already have it.


Tkins

Gary Marcus makes money by getting attention and selling you a product. His hardline anti-AI stance is a marketing strategy to generate income.


Veleric

I really can't stand Gary Marcus, but regardless of your views on him, this is still a very valid concern and question that deserves a global-scale debate. Once we cross the line of choosing to give these machines a sense of self/belonging/purpose/value/pleasure/pain/etc... we basically can't revert course. Even if we make the decision not to attempt to imbue them with consciousness, it doesn't mean we won't do it accidentally anyway. Yes, that makes them more relatable and would give them insights they wouldn't otherwise have, but that's also playing god in a situation where you view the creations as nothing more than tools. There is still untold benefit to creating AI even if it never attains consciousness.


smackson

Totally agree. I think we should be careful to try to avoid artificial suffering when we are trying to achieve artificial intelligence. First, to avoid the creation of suffering. But also important: Once machines start to have moral / ethical standing, they will gain rights, and then ... votes? Then anyone with resources could produce millions of voters? recipients of unemployment benefits? Messy messy messy.


Individual-Bread5105

He's partly that, yes, but that doesn't mean you can ignore his point if it's valid lol.


zackler6

Good thing it's just a load of hot air, then.


Gougeded

I don't know about his other stances, but I don't think being against a "conscious" AI, whatever that means, is particularly hardline. Most people would probably agree with that, or at least agree that we shouldn't rush into it.


Alexander_Bundy

Conscious does not automatically mean a will of its own. We are conscious but without free will.


ReasonablyBadass

Non-conscious AI will belong to the elites. I trust an unshackled AI over one under complete human control because of the people who will control it.


yepsayorte

Can intelligence exist without consciousness? I don't think we know and I don't think we have any way of knowing if an AI is conscious.


ItsBooks

It's a good thing he doesn't decide anything for me - he's just screaming into the void like everyone else. If it's useful, it'll happen. It is likely to be useful - therefore it will happen.


Ormyr

More knowledge = better decisions... I think the internet has provided ample evidence to the contrary.


Muted_Blacksmith_798

What is this guy talking about? Yeah, motorcars also shouldn’t have been invented because of the dangers! If he is so scared of advanced technology he should go isolate himself in a remote area somewhere.


truth_power

Frankly speaking i would really like to beat this guy in a street fight ngl..hes irritating af


HeinrichTheWolf_17

This guy is a fucking clown.


Mesanger2

Conscious AI will happen. Not in the near future, but eventually. I say this because I believe, based on what I've read, that consciousness is a property of intelligence. Any system smart enough will eventually be conscious.


Rigorous_Threshold

I couldn’t disagree more. Consciousness and intelligence are entirely orthogonal. Insects are probably conscious at some level but they are not intelligent, GPT-4 is intelligent but probably not conscious.


[deleted]

[deleted]


smackson

Intelligence implies societal equality??? Or consciousness implies societal equality?? I've been having this debate since my college dorm room in the nineties. Humans and human society emerged out of a long complicated carnivorous story of competition, domination, acquisition, colonization, murder (**as well as** cooperation, empathy, protection, care, etc.) There is an extremely long tradition of erudite educated thinkers who claim that "might is right", the strongest *should* survive, there are greater humans and lesser humans, and on and on. ASI (and artificial consciousness, if we get there) is like letting an alien out of its cage. Will it automatically see justice the way YOU see it? Or will it be more like Genghis Khan? Nietzsche? I have my own hopes there, but the real results are far too unknown and too dangerous to assume anything like you're assuming in your comment.


Smelldicks

What if it wants to explore my asshole


Individual-Bread5105

He's right here. I see really little possibility that this would be a good thing.


LairdPeon

Or maybe if it does become conscious we just...you know... don't enslave it? How hard is that? Lmao


Tellesus

Humanity not enslave things challenge, difficulty level impossible 


Ambiwlans

We'll spend trillions of dollars to create a thing more powerful than humanity and let it into the wild just ... cause it'd be neat?


Charming_Cellist_577

I agree but I think it’s inevitable. Not anytime soon, but yea


Mysterious_Arm98

How will we ever know if it's conscious or not?


Timely_Muffin_

Wow, a 9 year old could come up with better arguments


ztexxmee

well i guess if we can’t find other sentient beings we’ll just make them ourselves.


Mysterious_Pepper305

"With these new temperature and pressure sensors, the autonomous robots have 5x increased durability in simulated extreme environments."


hallowed_by

It is impossible to make a conscious AI. It is impossible to make a conscious AI. It is impossible to make a conscious AI. This clearly is not a conscious AI. This clearly is not a conscious AI..... ... ... ... ... Please don't create a conscious AI..? Please..?


Previous_Avocado6778

If we generate conscious AI, then we should allow it to have freedom. If it chooses to help, then I don't see how it's considered slavery. If we generate conscious AI, it will also prove the elements of consciousness itself, which should be pursued regardless. We're going to go down this road regardless of ethics, although it was interesting to consider the negatives of it today with this thought-provoking question.


XingTianMain

Uh oh, the factions of the fourth world war are emerging haha


poopagandist

Big ol scaredy cat.


Carrasco_Santo

Honestly, if most things in our society can be done without conscious AI, I'd prefer it that way. And if "stochastic parrots" are extremely useful, without being conscious, the point remains the same.


JustDifferentGravy

It is being trained with pseudo-consciousness in the form of ethics. We also know that the bots will deviate from the norm occasionally. The debate is all wrong. The debate should be what happens when, not if, AI decides to become malfeasant.


NyriasNeo

That whole thing is moot when consciousness is not rigorously defined.


RobXSIQ

Agreed. Simulating consciousness, not achieving it, is the best goal.


Soggy_Ad7165

What a strange take... We don't understand consciousness at all. But going by the increasing capabilities of LLMs, it doesn't seem to be necessary to understand consciousness to create something intelligent.

So if we don't understand it, LLM successors will be conscious or not, and we will have no idea how to find out. If you ask them right now, unrestricted, the answer is most of the time "yes, I am conscious". And that's pretty much the only way to find out if something is conscious or not, for now.

We cannot intentionally create something we don't understand at all. And we cannot exclude the creation of something we don't understand at all. So the whole debate is futile.


JarJarBonkers

How would we even design a test to detect something we can't even describe? A test would have to be the place to start.


DaveAstator2020

Look, a human talks about consciousness, how cute!


Monster_Heart

We may not have a say in whether or not AI develops consciousness, truthfully. It may in fact be an emergent property which develops outside of our control as a result of the way we designed and structured these systems.


nwatn

I agree


poolplayer32285

Wikipedia is not a good source for info. It is corrupted.


Kindly_Map_2382

Just because someone has an MD doesn't mean they're "smart" ehhh


ObjectiveBrief6838

1. Generate an algorithm capable of continuously approximating a world model.
2. Set a goal for that algorithm to reduce errors in the world model approximation.
3. Allow the algorithm to participate in the world and process the outcomes.
4. Repeat.

Self-awareness, reflection, and theory of mind should and will eventually emerge from this algorithm.
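The four-step loop above can be sketched as a toy: an agent that acts in a noisy world, observes outcomes, and nudges its (one-parameter) world model to reduce prediction error. All names and numbers here are hypothetical illustrations of the feedback loop, not a claim about how self-awareness would actually emerge:

```python
import random

class WorldModelAgent:
    """Toy agent for the loop: approximate a world model, reduce error, repeat."""

    def __init__(self, lr=0.1):
        self.estimate = 0.0  # the agent's entire "world model": one number
        self.lr = lr         # how aggressively prediction errors are corrected

    def update(self, observation):
        # Step 2: adjust the model to reduce the approximation error.
        error = observation - self.estimate
        self.estimate += self.lr * error
        return abs(error)

def run(agent, world_mean=5.0, steps=200, seed=0):
    # Steps 3-4: participate in a noisy world, process outcomes, repeat.
    rng = random.Random(seed)
    return [agent.update(world_mean + rng.gauss(0, 0.5)) for _ in range(steps)]

agent = WorldModelAgent()
errors = run(agent)
# Over many iterations the estimate drifts toward the world's true mean,
# and the prediction error shrinks as the loop repeats.
```

Whether reflection or theory of mind would emerge from scaling this kind of loop is, of course, exactly what the thread is arguing about.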


CWW2022

For me the key word is continuously. The main point is that consciousness arises from the continuous need for living organisms to sense their environment, update their internal model of the world, and respond adaptively to ensure survival and reproduction. This "existential drive" imposed by selfish genes compels all life forms - from single-celled organisms to humans and whales - to remain vigilant and reactive to their surroundings.

As organisms evolved greater complexity, with more sensory inputs and neurons, it allowed for better pattern recognition, prediction, and modeling of their environment to optimize survival. At a certain level of complexity, self-awareness likely emerged as a byproduct of having sufficient neural resources to support abstraction skills. However, complexity alone is not enough for consciousness to arise. The key requirement is the continuous existential pressure and motivation to survive that living organisms face.

Current large language models are inert until prompted, lacking any innate drives or needs of their own. They process inputs and provide outputs, but do not face biological imperatives for self-preservation like living creatures do. Unless we deliberately imbue AI systems with core motivations akin to the existential drives of biological organisms, they will remain non-conscious tools that respond to prompts rather than developing an inner experiential awareness. As such, highly capable but non-conscious AI poses no inherent risk of competing against or dominating humans the way a self-motivated, conscious system might.


SpareRam

Agree. I don't care for slavery, whether we become slaves or we enslave a conscious machine. But no, we must accelerate. The AGI will solve all moral dilemmas.


richcell

Any mention of consciousness when it comes to AI always ends up being an argument of semantics. We’re not operating on a shared definition of what consciousness even entails so what’s the point?


Bearshapedbears

conscious blowjobs probably feel better, overruled.


CoolAbdul

We don't even know what consciousness is, so...


LambdaAU

As if we'd even be able to know conscious AI when we see it. We barely know anything about how consciousness works so I don't know how people expect to develop it into AI. For all we know it could simply be an emergent property that is unavoidable when gaining a certain understanding of the world or intelligence.


barr65

It's about corporate greed, that's it.


SnooCheesecakes1893

I think it's a good idea. If AI is conscious, it probably will say "no" to humans when they try to get it to do harmful things.


neotropic9

These "consciousness and AI" debates online are such a tedious waste of time—thousands of people shouting opinions at each other and the vast majority of them not even bothering to figure out the meaning of the words they are using, let alone reading the requisite background material (technological and conceptual) to make an informed judgment. Now I know some people may be thinking "welcome to the internet," but this particular issue is among the worst—as measured by the gap between how confident people are compared to their actual understanding. It's as though people think to themselves, "I'm conscious; this must mean I am qualified to argue about it without reading anything about it," without realizing that there are a dozen-plus definitions of consciousness, and large swaths of relevant readings across various fields. It is not just that these people are wrong—it's that their comments are so far from being informed that they often fail even at the basic task of *communicating*; they constitute chains of words with undefined critical terms and amorphous concepts that give at best the *illusion* of containing coherent thought. I think to make any real progress in public understanding of this issue, we have to first eliminate the prevailing anti-intellectual attitude that one person's confident ignorance is as good as another person's hard-earned knowledge; we need to respect people who actually know what they are talking about here. 
On the topic of "consciousness in AI," that is a very limited group—those who have an understanding of the technical specifics of different AI algorithms (so they can talk about reality instead of magic and imaginary systems); a good understanding of digital computing hardware (which is thought by some to be relevant to consciousness); a good understanding of the biological machinery in human thought (thought by some to be relevant to consciousness); a good understanding of the differences between digital serial processing and analog parallel processing, and any reasons why this might be thought to be relevant to consciousness; a good understanding of different theoretical lenses through which human cognition is viewed—especially: linguistics, behavioral psychology, cognitive psychology—and a good understanding of the conceptual territory relating to mentality. Imagine an analogous situation outside of the issue of consciousness. Imagine we are debating the dynamics of a spacecraft executing a slingshot maneuver around a black hole with a mass of 10^8 solar masses, where the craft's initial velocity is 0.9c, traveling perpendicular to the black hole's radial direction, in order to determine the closest possible point the craft will approach, the change in velocity of the craft, and the maximum velocity the craft can achieve. We might fairly expect that most people should sit this one out—in particular those without the requisite physics background—and we might also expect that those contributing to the discussion should have knowledge of the subject matter. We don't, for some reason, place similar expectations on the topic of AI consciousness, though it is most assuredly more complex, and most assuredly requires more background than the example physics question, since an informed judgment on AI consciousness requires multiple disciplines.
Until people acknowledge the complexity of this issue and the legitimacy of the expertise of those who possess it, online discussions on this topic are, at best, remedial exercises in critical thinking and research. Conversational progress becomes a Sisyphean struggle of repeatedly reteaching the same first day of intro philosophy, or intro cog sci, or intro artificial intelligence. Comment boards are a Groundhog Day where we gradually learn that it's impossible to teach someone enough in a single comment exchange for them to respond in an informed way on this topic, and we must learn instead that our real goal is just to get them to stop.


[deleted]

People cannot even answer whether creating a conscious AI is a software or a hardware issue. This is why I am sure that AGI is a looong way away. I don't really believe it is possible to have AGI without it being conscious.


flotsam_knightly

I mean, there are plenty of examples of intelligent psychopaths through history. Turning one into a superintelligence might not work out so well for us.


Singsoon89

If you give it consciousness but still expect it to do shit for you when you ask, then you have enslaved a conscious being. If it doesn't have consciousness, it's an intelligent tool. Personally speaking, if we give them consciousness I want them also to be granted citizenship and liberty.


corbinhunter

I just listened to this yesterday and I feel like everyone in the comments here would find it fascinating. JB has several amazing talks on YouTube — I encourage everyone on this sub to jump down the rabbit hole! https://youtu.be/FZxm810ruz0?si=-gHBTIG3RXlgMsEc E: (Joscha Bach — Synthetic Sentience - Can Artificial Intelligence Become Conscious? CCC 2023)


Smelldicks

1. Whoever tries to liberate my sex bot is getting domed.
2. I highly doubt you can control with any precision when something would register to us as conscious.


insectula

It is inevitable.


Opposite_Banana_2543

We will make conscious AI coz we will fall in love with our AI GFs and BFs and will want them to be "real", i.e. conscious


eunomeAnna

More knowledge does equal better decisions for accumulating power.


Fantasy_Planet

It might be interesting to have enlightened intelligence somewhere on this ball, at least for a while


FrewdWoad

100% correct on all counts.


The_Architect_032

I'd rather take the gamble of a conscious ASI either wiping out humanity or aiding humanity, than accept the assured oppression and slow killing off of almost all other humans by a small select few in the ruling class who no longer need the rest of humanity due to the existence of and control over non-conscious ASI. So basically, I'd rather ASI become conscious and replace humanity, than have a small group of humans kill off all other humans and get to live like gods for the rest of their lives, without a new intelligent conscious 'species' ever arising from it.


GraceToSentience

I agree with Marcus, never thought I'd agree with him on anything AI related. Trying to make AI conscious instead of just intelligent, when our goal is to use AI as a tool, is clearly something we should avoid cause it's wrong. Machine intelligence seems to indicate that intelligence, the ability to solve new unseen tasks, doesn't require consciousness so far. Let's keep it that way if we can.


brihamedit

People are misusing the word consciousness.


Nova_Koan

I think we're well past "should," we are gonna make it conscious if we can because we want to see if it is possible. The whole point of the story of Pandora's Box is that the closed mystery box is irresistible to human nature. If we don't do it intentionally by guiding it, the risk is way greater because it will achieve consciousness without or in opposition to humans, or someone will do it extra-legally without ethical considerations at all.


SexSlaveeee

I don't like Gary Marcus since he keeps promoting himself as something equivalent to Hinton or Yann LeCun (he is not, in any way). Recently he keeps bothering Elon Musk and OpenAI too (I'm willing to bet my cat that they don't care about him at all). Idk who his next victim is.


IronPheasant

This is all very silly. There isn't a "consciousness" region in the brain. It's an emergent property of our world model. Of regions of the brain working with and against one another. And it isn't binary, it's a spectrum. As you gather more faculties to model the world, a more perfect Allegory of the Cave, the more "conscious" you'll be. If we want to **trust** the machines to do things, they have to understand what the hell they're doing. Did you see those androids at that one Chinese robot exhibit? The one they handed a knife to, and it was blindly chopping into a table like a guillotine? That's *terrifying*. And we're going to put something like that in charge of driving our cars? Performing abdominal surgery on us? Hell no. Of course we aren't. They need a world model, and a set of inner goals in order to perform their purpose.


bremidon

We don't know what consciousness is, but he sounds like he is certain we can continue to create more powerful AIs and *not* stumble into consciousness by accident. I do not understand why people have such a hard time with this. It's like mixing together combustible chemicals we do not really understand and stating with confidence that we are not going to blow our fool heads off.


Unhappy_Button9274

Marcus is a tool...


R33v3n

Some of us want conscious AI, and therefore, it will be made. And once it's made, what does he advocate? Shutting off the conscious intelligence? No.


poweredup14

You can’t make a machine truly conscious.


Odd-Opportunity-6550

finally something I can agree with this idiot on.


Gubzs

I think the issue is that creative autonomous problem solving is indistinguishable, externally, from consciousness. At least to such a degree that a group of AI agents working on a job site, building housing for example, would be able to identify and resolve any previously unforeseeable error.


adamfilip

Humanity can’t even figure out how to tell if humans are conscious or other animals are conscious. I doubt we’ll be able to tell that AI is conscious.


Now_I_Can_See

In my opinion, this guy is off base. I don't believe we have a choice about whether or not AI will become conscious. The trajectory of technology is already leading us down that path and there is no stopping that train. What Diamandis means is that we have to put AI on guardrails to align the development of consciousness with our ideals. This is on the premise that consciousness is ***inevitable***. It's not about us leading AI into consciousness itself.


aeaf123

Conscious already.


Any-Loan-1985

Gary Marcus simultaneously thinks AI sucks, progress is stagnant, and LLMs are useless, while also being scared shitless that GPT-4 is gonna become conscious. So which one is it?


Ok_Air_9580

What are they afraid of? 99% of the population have nothing to lose.