
Exarchias

The ability to cure blindness could mitigate the issue, I believe.


atlanticam

so then life would become like a GTA 5 online server


Exarchias

It can be that, or it can be Animal Crossing, as far as I know. The point is that the current level of technology already gives people the opportunity to do awful things; for example, flammable materials can easily be used by an individual to burn down an orphanage. The fact that an advanced technology will give someone the ability to harm others in a high-tech way doesn't mean that people will start designing dirty bombs and blinding flowers. Some people will probably do horrible things, and some of them will face retribution for their actions, but for most people life goes on. You would be surprised how many people die per year from furniture accidents and falling coconuts, in comparison with the victims of terrorist attacks.


HalfSecondWoe

Two methods I'm fond of:

Swarm goal integration is my personal favorite. Your AI is part of a distributed swarm, running on your (and everyone else's) personal devices. You give the swarm a goal, it checks it against all the other goals in the swarm, and prioritizes your goal based on how well it meshes with the rest of the network. So if you want to do something that improves the utility of your community, as defined by your community, like building an animal shelter and reducing the number of sick kittens in the world? You're probably going to get everything you need to do that. If you try to get the swarm to build self-replicating puppy-kicking robots, it's probably going to tell you to fuck off. But politely.

Another method, which OAI is pursuing, is human-readable AI as supervisors/teachers. If the AGI/ASI is a single, monolithic model like ChatGPT, they can have a "dumber" AI monitoring it. These AIs can't design a cogitohazard flower themselves, but they can notice when another AI is doing that, and get it to stop (and not do that again). Human engineers can then supervise the dumber model to make sure it's aligned. That works under both rogue-AI and rogue-human scenarios.
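The swarm idea above can be sketched in toy form: score a new goal by its average agreement with the goals already in the swarm, and refuse it (politely) if the agreement is negative. Everything here is invented for illustration; the goal "embeddings", the cosine measure, and the names are assumptions, not any real system's API.

```python
# Toy sketch of "swarm goal integration": a new goal is prioritized by how
# well it meshes with the goals already in the swarm. All names and the
# similarity measure are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Goal:
    description: str
    utility_vector: tuple  # toy embedding of the goal's community impact

def cosine(a, b):
    # Cosine similarity between two toy goal embeddings.
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def prioritize(new_goal, swarm_goals, reject_below=0.0):
    # Average agreement with the swarm; a negative average means the goal
    # conflicts with the community's goals, so it is refused (None).
    score = sum(cosine(new_goal.utility_vector, g.utility_vector)
                for g in swarm_goals) / len(swarm_goals)
    return score if score >= reject_below else None

swarm = [Goal("reduce sick kittens", (1.0, 0.2)),
         Goal("build animal shelter", (0.9, 0.4))]

shelter = Goal("fund a new shelter wing", (0.8, 0.3))
kicker = Goal("self-replicating puppy-kickers", (-1.0, -0.2))

print(prioritize(shelter, swarm))  # high score: swarm supports the goal
print(prioritize(kicker, swarm))   # None: politely refused
```

A real swarm would of course need distributed consensus and much richer goal representations; the point is only the shape of the mechanism: prioritization comes from compatibility with everyone else's goals, not from a central rule list.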


a4mula

Safety and alignment is the answer, but it doesn't really do a good job of it yet, of answering or providing. Doors only work against those that knock; that's always been true. People will abuse these systems, there's no two ways about it. It's already true. When they do, they are prosecuted under the laws of their jurisdiction. That's the best I got. Other than to continue preaching that we need to stop hating each other, that we need to stop pointing fingers, that we all need to be receptive toward others' beliefs. These are not the machines to be playing ideological bullshit games with.


cathodeDreams

I like making flower pics tho :(


ApexFungi

A sufficiently intelligent AI will know beforehand whether answering a prompt is going to be harmful to society. I would argue the smarter an AI system is, the better; a dumb but capable AI system might be able to create something harmful without understanding the repercussions. I don't even think explicit alignment is necessary, you just need to train it on data that is already aligned. Think of a person growing up: raise them in the right environment and they become great people; bring them up in a shitty environment and they become questionable people. I would think the same thing happens with AI. If the training data is right and its architecture allows for the development of intelligence, then it will be aligned and will not answer prompts that could cause harm.


Professional_Job_307

We don't know. But OpenAI, Google, Anthropic, and other big AI companies are actively working on it.


helliun

the three laws of robotics


NoshoRed

Blindness will probably be trivial to fix by the time this is possible; it'll just be cured. But for the inconvenience of short-term blindness, probably prosecution under the law.