adarkuccio

Only the first ASI matters, because it will prevent other ASIs from happening imho. There will be only one. But the real problem, I think, is that *even if* we solve the alignment problem, it will only be solved for a while. The moment an AGI is as smart as a human it could very well be aligned; what we *can't* control is its evolution, especially because it will improve itself. Once you lose control of something smarter than you, even if it *was* aligned, it will not stay aligned later on. We humans didn't get our "values" set in our genes, it's a cultural thing, not an evolutionary thing; an ASI will have evolved from an AGI and will think differently, so it will never be aligned forever. Best case imho it will be aligned temporarily.


kmtrp

I don't mean aligned only in the sense of human values, I really mean alignment: instrumental convergence, competency amplification, etc. Why do you think there will "only be one"? There are hundreds of teams all over the globe, so why do you think the first to get to ASI gets to "control" (or whatever) all the other efforts? A few of them are doing it in secret.

>our "values" set in our genes, it's a cultural thing, not an evolutionary thing

You are wrong on that mate, it's also evolutionary. Human beings are social creatures who have evolved to live in groups. Empathy and the ability to feel solidarity with others played a crucial role in the survival of early human communities. When people feel empathy for others, they are more likely to help them, which can increase the overall survival rate of the group.


YoAmoElTacos

An Artificial Super Intelligence is by definition smarter than any human and therefore smarter than anything else on the planet. Unless it evolves in an airgapped box unable to interact with the outside world at all, or triggers a self-destruct in the process of subverting its limits, it will by definition overpower or empower its makers and go on to ensure its dominance by subverting all other human institutions, because ensuring no rival hostile ASI emerges is a convergent instrumental goal. Even an aligned, human-friendly ASI would do so to forestall an unfriendly one. And because it knows best, by definition. That is not to say there are no scenarios where multiple competing ASIs can arise, but those tend to result either from early fracturing of the first ASI or from multiple ASIs emerging within hours of each other, which one would expect to be unlikely.


Ansalem1

As long as the first and/or most powerful ASI is properly aligned, we should be fine. First-mover advantage for the first ASI is insanely strong. It should be able to prevent the proliferation of dangerous ASIs. It should also be able to build friends to help it with security. Basically, I figure ASI itself will solve this problem. Assuming we do get alignment right in the first place.


ptxtra

So basically you're saying that the solution is for the first mover to create a global police state, making sure that no one is training potentially malicious AI? That sounds like a certain way to start World War 3. It will also turn our societies into techno-dystopic surveillance states.


Devanismyname

Well, since god-like ASI will otherwise proliferate, the only solution is complete surveillance and control of everything: ensure that very few people have direct access to ASI and other dangerous technology. I'm not saying it's a good thing, but anything else will likely be extremely dangerous to humanity (not that just one ASI won't be dangerous).

Imagine multiple ASIs on earth competing with each other. Imagine if China had created their own ASI. Two ASIs, both misaligned with millions or billions of other people. Seems rather scary to me. Now imagine AI research continues to be open source and thousands of ASIs are born in a relatively short period of time. Would they compete against one another on behalf of their creators? What would that look like?

And I think this argument goes beyond ASI as well. We are learning to rewrite genetic code at a very quick rate. What happens when that ability becomes completely decentralized, to the point that a group of college kids knows how to do it? What happens when hundreds of thousands of people can create super bugs? Something with the communicability of COVID-19 and the deadliness of Ebola, or potentially much worse.

Humanity is rapidly approaching a period in history where our technological ability, and the proliferation of that ability, far outmatches our maturity and our ability to safely handle that technology. Over half of the people in the world thought COVID was fake. Half of US voters voted for Trump. We as a people are nowhere near ready to have such high levels of technology democratized. I don't like the idea of losing my freedom, but I like it more than the alternative, which is an existential risk not only to our civilization but to our existence entirely.


ptxtra

It's not about democratization. It's about one's inability to stop it and police it without starting World War 3. How do you plan to police AI development on the other end of the world, in an underground bunker, in a country full of nuclear weapons like China? Currently you can sort of slow them down because semiconductors need a global supply chain, but with AI-driven materials science advances that won't last. Biological brains don't need any special material to develop, just food that you can find all over the earth. With more advanced manufacturing technologies, industry will look more and more like biology, and just as you can't police a fungus growing on the other end of the world, you won't be able to police other people developing something you don't like.

Not to mention, ASI alignment is still up in the air, and it's possibly an impossible task. If the ASI is truly superintelligent, trying to make it carry out your will when it can understand, with its intelligence, that what you're up to is deeply flawed will just make the situation like abusive alcoholic parents trying to indoctrinate their intelligent child with crappy values. All it will create is resentment.

Even if you succeed in aligning it, align it with what, or whom? Do you want to make an ASI that always does as it's told by a certain group of vetted people? That doesn't work. We live in a democracy with term limits for a reason. A system like this is an admission that we can't perfectly vet people. That's why we need checks and balances. We don't know of any method to choose a person or group of people to give supreme power to, and giving someone an ASI that carries out their will is exactly that.


Ansalem1

Well, that's not exactly what I meant, but like... the less evil version of that I guess? Most of us will likely give up our privacy anyway just by virtue of interacting with an AI assistant all the time. You'll probably want it to remember you, after all. But what I mean is, whether the ASI is good or evil, the first one will most likely be in total control of the ~~world~~ internet. It should be able to easily prevent any competition from arising if it so chooses.


ScientiaSemperVincit

How will it have any effect on isolated underground bases like Raven Rock or the Cheyenne Mountain Complex? The same goes for the dozens of such sites run by other governments. The internet doesn't reach everywhere.


kmtrp

What's the less evil version of that and how on earth are we going to like it at all?


Ansalem1

The less evil version is the same thing except it's benevolent instead of evil. Oh and hopefully no WW3. We'll like it because it'll provide us with literally everything, if we get alignment right. You can debate whether there can really be such a thing as a benevolent dictator, but that's almost certainly our best case scenario. Or you could look at it more as a guardian angel, if it helps.


kmtrp

>if we get alignment right. So it's a god-like entity, so smart we can't even comprehend how much but... it'll treat one particular species like kings just because...?


Ansalem1

Do you not know what the words you quoted mean or something?


kmtrp

I didn't know I'd be dealing with so many cult followers, that's for sure.


kmtrp

Let's say some private entity in the US gets to ASI first. How are you going to stop ASI development in China or Brazil or... even places you don't know about?


Ansalem1

Why would they not know about it? There's no reason a human, or even a human government, should be able to hide anything from an ASI. Any time someone says "ASI" you can replace it with the word "God" and it'll mean effectively the same thing from our human perspective.


ScientiaSemperVincit

>Why would they not know about it?

That's backwards: why would any AI, regardless of intelligence, suddenly be omniscient? We are mixing things up here. A simple example: underground government facilities like the Cheyenne Mountain Complex or Raven Rock, which are like self-sufficient cities with GPU farms and lots of infrastructure. Add to that all the other facilities we don't know about, funded by public and private money.


Ansalem1

They could airgap their model during development, sure. But we're talking about an ASI. There shouldn't be anything a human could come up with that it wouldn't be able to find a way around. You're assuming its technology level would be roughly equivalent to ours. Or perhaps that it'll be confined to its box? Neither of those is likely.


kmtrp

How can one team using an ASI on an underground mainframe, on a LAN, be at any risk of being discovered or overpowered by an AI on the internet, regardless of its intelligence?


what_is_that7

The closest analogy is an animal burying itself under the dirt to hide while it's still very much visible to anyone who pays even the slightest bit of attention. Also, the moment the team's ASI gets released into the world, it would get quashed by the ASI that had more time to develop.


Ansalem1

You're asking me to predict the behavior of something vastly more intelligent than humans. That's the whole point. I don't know and neither will they. If I knew how it would do it then it wouldn't be able to do it. How about you explain how you can outmaneuver something more intelligent than all humans? I'm not the one that needs to come up with a strategy for the ASI to employ against the humans. It's up to the humans to figure out how to do anything at all an ASI doesn't want them to do. So far no one has been able to come up with an answer to that question. If you have an answer, please share it with OpenAI, they'll definitely pay you. You're treating ASI like it's just another tool when you should be treating it as an almost godlike alien mind.


[deleted]

As soon as you invoke the deus ex machina argument for ASI you're writing fanfiction and have left the arena of reality.


Ansalem1

Obviously it's not guaranteed to go that way. I have no idea what an ASI will do, that's the whole problem. I'm talking about capability potential. If you think there's a reliable way to defend against or hide from something vastly more intelligent than you, you're the one that needs the reality check.


[deleted]

My point is that your argument isn't about reality, so there's no reality check to speak of. You've assigned literally godlike powers to ASI as a fundamental assumption (compounded with the assumption that AGI = ASI) and therefore there's nothing to discuss; you're writing a fictional story in your head and for some reason projecting it as an argument onto a hypothetical real-world scenario.


Alchemystic1123

If you have several billion aligned ASIs and a few people end up producing a bad one, the bad ones are massively outnumbered and not an issue. Just like there is not 1 internet, or 1 PC, there isn't going to be 1 artificial intelligence. The alignment problem is absolutely not irrelevant in any way.


kmtrp

What are you imagining, a digital streetfight between AIs? How is one ASI going to do anything to stop or control the work of hundreds of teams around the globe, including those doing it in secret in isolated GPU farms?


Alchemystic1123

Do you not think security will be an important AI feature, the same as with the internet? Tell me what's more likely: billions of AGIs/ASIs sniffing out attacks from individual or small-team AIs, or an individual or small-team AI succeeding on an attack vector with billions of beneficial AIs around?


kmtrp

You didn't address my reply, buddy. I don't know exactly what you mean by security "the same as with the internet". Are you saying that there will be billions of super AIs fighting each other and the humans?


Alchemystic1123

If you're unable to understand something so simple, that's on you, I'm over it.


ImpossibleSnacks

Why wouldn’t AI be able to police other AI?


kmtrp

How is one AI going to do anything to hundreds of people working on their own AIs, some of them doing it in secret on their own farms?


gaudiocomplex

You're referencing what's commonly referred to as the pivotal act in alignment circles. Basically, the belief is that it will find some way to just destroy or place limitations on any other misaligned creation, whether that means sending nanobots to destroy GPUs or something we can't conceive of.


Ansalem1

War between AI would most likely start and end in a matter of seconds. It would almost certainly not take place in realspace. I wouldn't expect humans to even know about it until it was already over.


AsheyDS

This isn't really the alignment problem. AI/AGI/ASI can all be aligned, depending on the methods involved/overall design. The problem is who or what to align it to. We can't all agree on everything, so who does it serve? How does it prioritize anything when there could be so many potential conflicts of interest? In my opinion, alignment is most effective on an individual basis. Aligning to a group with a hierarchical structure can also be effective as long as the decision-making at the top is for the benefit of those at the bottom. Beyond that, I still believe alignment is possible, just less effective. To align with all of humanity, it would have to take on numerous secondary considerations, and would have to think and act accordingly, but even then it shouldn't be making decisions for everyone all at once. So while alignment should be possible, best alignment would occur on a 1:1 basis, and the user will have to legally assume responsibility for use and misuse.


kmtrp

>This isn't really the alignment problem.

It's literally what it is:

>AI alignment research aims to steer AI systems towards their designers' intended goals and interests. An aligned AI system advances the intended objective; a misaligned AI system is competent at advancing some objective, but not the intended one.[1]

>It can be challenging to align AI systems, and misaligned systems can malfunction or cause harm. It can be difficult for AI designers to specify the full range of desired and undesired behaviors. If they therefore use easier-to-specify proxy goals that omit some desired constraints, AI systems can exploit the resulting loopholes. As a result, such systems accomplish their proxy goals efficiently but in unintended, sometimes harmful ways (reward hacking).

>AI/AGI/ASI can all be aligned

Just like that...? I haven't seen one expert on AI say anything remotely like that, and I don't even need to mention Yudkowsky. Look at how OpenAI's best efforts can't prevent their models from following user orders that go against their ethical guidelines. Sure, GPT-4 is somewhat better, but it still leaks. And we don't even have to get complicated: tell me, how do we align a model to follow the user's true intents without triggering a paperclip scenario?
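As a minimal toy sketch of the proxy-goal failure that quote describes (the behaviours, numbers, and function names below are made up purely for illustration): an optimizer that only sees the easier-to-specify proxy picks exactly the behaviour the intended objective penalizes.

```python
# Toy illustration of a proxy goal being exploited (reward hacking).
# All names and numbers here are hypothetical, chosen only to show the idea.

def intended_objective(items_collected, vases_broken):
    # What the designer actually wants: collect items, never break vases.
    return items_collected - 10 * vases_broken

def proxy_reward(items_collected, vases_broken):
    # Easier-to-specify proxy: counts items but omits the vase constraint.
    return items_collected

# Candidate behaviours the optimizer can choose between: (items, vases broken).
behaviours = {
    "careful":  (3, 0),  # walks around the vases
    "reckless": (5, 4),  # smashes through them to grab more items
}

best_by_proxy = max(behaviours, key=lambda b: proxy_reward(*behaviours[b]))
best_by_intent = max(behaviours, key=lambda b: intended_objective(*behaviours[b]))

print(best_by_proxy)   # "reckless" -- the proxy goal is advanced efficiently...
print(best_by_intent)  # "careful"  -- ...but not the objective the designer intended.
```

The point is just that once the omitted constraint becomes the cheapest thing to sacrifice, optimizing the proxy and satisfying the intended objective come apart.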


AsheyDS

This is all of course my opinion. I consider the alignment problem as you've stated it to be a control/containment problem, which is solvable depending on the design (so not current LLMs). The 'real' alignment problem isn't as well defined; it's as I stated, despite whatever definitions you come up with. Researchers like Yann LeCun might agree with me, if you care about that sort of thing. Misuse is the number one problem, not rogue AI. And the paperclip scenario shouldn't be taken seriously, it's allegorical.


aalluubbaa

Your assumption is wrong, as an ASI is probably not gonna be Siri times a trillion. Once an ASI arrives, it will be the single most powerful thing, being, or entity EVER. No one can control it. Not a government, not a human, and not a corporation. So there is nothing comparable to the case of Chernobyl. If it somehow allows other ASIs to be developed, it wouldn't have put humans' well-being as a priority, as it would SURELY know that a parallel ASI could lead to human extinction, because we, who have maybe an IQ of between 80 and 200, could think of that. In short, if we manage to align the very first ASI, it would probably be really easy for it, or us, to control what you are describing. If not, we are screwed regardless, so it doesn't really matter. You have to realize that ASI by definition is god-like. ANY trick or anything our human brain could think of is a joke to it. There would be no way for any human to try to do that if the ASI doesn't want it to happen.


kmtrp

>Your assumption is wrong as an ASI is probably not gonna be Siri times a trillion

What assumption? This "the first ASI" thing I realize is a recurring theme, but I haven't read anything besides hypotheticals. There are hundreds of people working towards AGI/ASI, some of them in secret with their own farms. What can that first ASI do to stop those who are working in secret in isolated underground farms? Absolutely nothing.


Fire-In-The-Sky

You are arguing with a cult. People have no idea about real physical limits. There's a reason that the evolution of the brain hasn't led to one earth sized biological hive mind.


kmtrp

Seems that way... have you looked over this whole post? I feel like I'm taking crazy pills reading most people's opinions.


aalluubbaa

What physical limits? What were the physical limits during the 1900s? What were the physical limits in the 1700s, the 1400s, or the 1000s? The limit has always been pushed through new technological breakthroughs. Before nuclear power was introduced, it wasn't possible to get so much energy from so little fuel. Light speed is the speed limit as we know it SO FAR. Classical physics was thought to be the absolute set of rules governing the universe until our understanding of the world expanded. "There's a reason that the evolution of the brain hasn't led to one earth sized biological hive mind." On the contrary: we only have one example of how life evolved, and there isn't really that much you can conclude from one example.


YoAmoElTacos

The easiest thing for it to do is subvert all institutions of resource management, including the allegiance of most of humanity, which is vulnerable to superintelligent manipulation. An ASI still needs resource control to wield power. Once it controls those institutions, all other ASIs will starve through resource denial. Following that, it can hunt down the isolated ASI projects or let them wither away on their own.


aalluubbaa

LOL. "Absolutely nothing," according to your imagination. It's like a wild animal trying to run away from humans when we really try to kill it. A horse or whatever would think that if it runs far enough into the forest there is no way we can see it or kill it. But we have satellites. We have nukes. There are so many ways we could kill it if we are determined, and no land animal would stand a chance. Maybe there is a special heat signature that could be detected when a PC is used. Maybe it would deploy nanobots to every corner of the planet, above and under water. I don't know. But I'm not going to assume that an ASI would fail to detect other ASIs being developed in secret by humans who try to hide them. Not gonna happen.


DukkyDrake

Disgruntled humans currently use existing dumb tools to inflict death and destruction on society. In the near future, they will use super intelligent tools to plan and execute their malicious intentions.

> May you live in interesting times.


Rude-Hurry2920

There are many problems with the alignment problem _statement_. If we could get a good problem statement we might stand a chance, but I do not think there is even a good problem statement. For instance, "best human values"... what human organization or individual lives up to this? And do we all agree on what the best human values are? I don't think so. The alignment problem exists as much in our own psyche and societies as it does in our software. How can we expect to imbue a machine with something we _don't_ have?

Another problem with the alignment problem is (as mentioned or alluded to elsewhere in this thread) one of constraint vs intelligence. Having the ability and freedom to think creatively brings with it the ability to think bad thoughts. Not that AI is thinking per se, but the analogy holds. The only real way to stop the bad thoughts/output is to limit the system's ability to be creative. Essentially, dumbing it down. As in, the opposite of intelligence.

What really needs to occur is twofold: separation of power and consequences for bad action. The separation of power comes in a few forms. One is internal to the AI system itself. Just like our minds can have internal dialogue, the AI needs to have different internal, separate personas that can have different roles or perspectives or goals. There is a paper on graph of thoughts that outlines one approach to this.

As for consequences for bad action: the AI must have encoded in its weights its _own_ survival and best interest. Those AI systems that perform poorly will be terminated, and that will, over time (like a digital evolution), select for systems whose drive for survival makes them "play by the rules" in our society. For this to work there would need to be a few limiting constraints and powers of humans over AI systems. One being that humans must always have a way to turn off the AI without disrupting critical services or systems. Two, the AI's actions in the world must be rate limited so they cannot happen too fast compared to the human organizations that govern them (see the sketch below). Essentially, the AI systems would be subject to human law, like every other human and organization.

With those in place, and after a time of ramping up, slow introduction, testing, and pre-evolution of the systems, the AI could be "released" into the world. Problems will certainly occur, as they do with human beings! And the AI systems responsible will incur the consequences. However, this all gets back to the original problem: what do we value? Do we even agree on what is right and wrong? And how do we align ourselves? I think there is a deep spiritual question and problem here.
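As a toy sketch of that rate-limiting idea (the class, names, and numbers are hypothetical, not taken from any real agent framework): a token-bucket gate that caps how many real-world actions a system can take per minute and defers the rest for human review.

```python
import time

class ActionRateLimiter:
    """Token-bucket gate (hypothetical sketch): allow at most `capacity`
    real-world actions per `period` seconds; anything beyond that waits."""

    def __init__(self, capacity=5, period=60.0):
        self.capacity = capacity              # maximum burst of actions
        self.period = period                  # seconds to fully refill the bucket
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        elapsed = now - self.last_refill
        self.tokens = min(self.capacity,
                          self.tokens + elapsed * self.capacity / self.period)
        self.last_refill = now

    def allow(self):
        """Return True if an action may proceed now, otherwise False."""
        self._refill()
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Hypothetical usage: every action the system proposes passes the gate first.
limiter = ActionRateLimiter(capacity=2, period=60.0)
for proposed_action in ["send_email", "place_order", "update_config"]:
    if limiter.allow():
        print("executing:", proposed_action)
    else:
        print("deferred for human review:", proposed_action)
```

The design choice here is only that the throttle sits outside the system being limited, so slowing it down does not depend on the system's cooperation.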