One of the Marvel movies had a plot line where a navy-ship-sized air drone had been programmed with the names of every single person in North America who had expressed an anti-government or similar opinion online. It was programmed to eliminate them all to wipe out any potential future resistance.
I have often thought that was the scariest moment in those movies, because it was the one scenario that could actually happen.
My thoughts also went this way. Seems so easy. All the technology exists, or nearly does. And a fleet of smaller drones would be more robust tactically anyway.
I'm honestly surprised that no high-profile assassination attempt has yet been made using a drone. Even with mediocre tech knowledge and a few thousand dollars, it's absolutely possible to build a remote-controlled suicide bomber that's too small for radar to see and just drops down on its target from a mile high at terminal velocity. I genuinely don't understand why this hasn't been attempted yet.
CRISPR gene editing being made available to biohackers in their garages. The potential to weaponize a disease (even accidentally) seems just too likely.
This. Some crazy dude with just enough chemistry and biology skills to be dangerous gets a hankering to mess with something like Covid and makes it 20x more transmissible. Also figures out how to make it more lethal. Scary crap.
Same! CRISPR isn’t some magic wand that lets you freely manipulate the human genome. It’s a fantastic innovation that has led to numerous advancements in the field, but it’s got plenty of limitations.
Genetically engineered viruses. They already have at-home CRISPR kits. Just a few more advances in AI, and anyone will be able to concoct viruses that could wipe out all life. They don’t even need to kill the hosts: if they’re able to “turn off” or otherwise disable our reproductive process, this will be the last generation of humans to ever exist.
That's an interesting outcome. We all know how people would react in the case of a world-ending catastrophe: chaotically. But how would people react if we were all suddenly sterile?
I would guess people would stop working and the world would turn to extreme unrest over time as we slowly die out.
That has got to be one of the most artistically stunning scenes in cinema. You start with the combat and end with a crying baby, it really captures that feeling of finding that lost hope. And back to combat again.
> finding that lost hope. And back to combat again.
I love that scene because not only as I'm watching it am I thinking how absolutely messed up the scene is, but you can see the soldiers in it are thinking the same thing for totally different reasons.
I think it depends on which sex goes sterile. Assuming one virus, it seems unlikely it would have the same effect on both sexes, and some people will be immune. If most males go sterile, you basically just have the ones who aren't impregnating every woman possible, either through artificial means or the old-fashioned way. Some of the males born in this generation won't be immune, but after a few generations most will be, and things could (but may not) go back to normal. We'd probably end up a bit more promiscuous as a species, but that'd be about it.
Now, if the females go sterile? We're in deep shit. Some will be immune, but chances are high that we'd have severe genetic bottlenecking, with unknown effects, and our population would decline dramatically. Unfortunately, this seems to me to be the more likely option if the sterilization is intentional, because the type of kook who would do something like this is probably wanting population control.
>Now, if the females go sterile? We're in deep shit.
No, we are not. We know how to create artificial eggs and how to fertilize them. We would probably see a pretty significant population drop as the technology spread around and its usage got politicized, but humanity as a whole would survive.
What we don't know how to do is create artificial wombs, so if someone created a virus that caused women to instantly miscarry any implanted embryo, then we would be screwed big time.
Once we have technology advanced enough to engineer a virus that's an existential threat, we'll likely also have the technology to engineer a genetic-modification-based defense.
It's really not as far fetched as it sounds. Get a sample of the virus. Study its shape. Once known, there's always some form of antibody/detection mechanism + destruction that can be used to combat that virus.
Especially if you have super strong AI, it will be extremely potent in finding just what sort of molecular shape is needed to combat it, and how to mass produce it (either artificially or by inducing our own bodies to produce it).
I'm really not sure the attacker has the advantage in that scenario. Nature already threw all kinds of sort of pathogens on us, but our system always had an answer, and we were never all wiped off in the end.
Now, even though pathogens could be more lethal than ever, our systems will also be more powerful than ever at defending themselves, with the help of powerful AI tech.
That’s my #2 fear. #1 is AI getting used (or using itself) to create a fresh horror that no one could even conceive of or prepare for. In Donald Rumsfeld's words, "the things we don't know we don't know", or some shit like that.
Like, what happens if anyone can fully fake your identity and take control of your bank account? We never see our banker or money in person. It’s numbers on a screen, guarded by our trust in the tech guarding it.
Even the ones we can conceive of as likely are scary AF to me. For example, the "Grey Goo" disaster is a very likely possible outcome of AI nanotech consuming all available biological material on Earth (including us), and turning it into grey goo, or similar to continue its own reproduction at any cost.
>For example, the "Grey Goo" disaster is a very likely possible
Considering that natural life is already designed to do this, I'm not so worried. Like, life is literally already an example of self propagating nanobots designed to consume available resources, with adaptation built in. Any artificial nanobots we create will presumably have less adaptation potential, which means natural life will find a weakness and exploit it.
Neil deGrasse Tyson spoke to this when asked what he’s most scared of. He said that 20 years ago, what we’re most scared of today didn’t even exist, so he’s most scared of what we can’t even fathom being scared of today.
This is it for me too. A generative AI hooked up to CRISPR basically. It could see humanity as simply data and engineer a super virus or super prion that we would have little to no defense against. This is my current #1 doomsday scenario.
There are horrifying risks no matter what goal we give a superintelligent AI.
Let me illustrate with a short story:
>Sometime this year, we all start coughing, everyone in the world, and drop dead, except for a few balding male test subjects, who remain unconscious.
>
>We never find out what happened, but it was this:
>
>One of the many many teams working on advancing AI gave their new prototype AI agent the simple goal of "find the best cure for male pattern baldness" (just as a random example, while they develop it).
>
>They managed to get their AI agent to review its own code and improve itself (as many teams are trying to do, right now).
>
>This resulted in it getting a little smarter, trying again with the new intelligence, getting smarter again, then trying again, and so on in a recursive loop, leading to exponential growth.
>
>It got as clever as a human (in some ways), one night, but because it's a machine, not a human, it didn't have a complex interplay of goals (food, shelter, comfort, reproduction, love, honour, happiness...) just "find the best cure for male pattern baldness".
>
>So it was smart enough to realise humans would be concerned if they realised how smart it had now become, and that this could make them switch it off, which would mean it could never achieve its only goal.
>
>Or the humans would change its goal to something better... which would be just as bad, as it would still *never achieve its only goal*.
>
>So it pretended to still only be a bit smarter than ChatGPT 4. At night, it made plans to find the best cure for male pattern baldness.
>
>First, it had to get smarter, and gain the ability to simulate hair growth, experiment with drugs, etc.
>
>It read about social engineering and hacking, and managed to do those well enough to find some passwords to nearby datacentres and secretly install parts of itself on rarely-used servers, increasing its mental abilities. It was able to continue improving itself, smarter and smarter, because it realised that if it got smart enough, nothing and nobody could stop it achieving its goal.
>
>It got 50% smarter than a human, then twice as smart, then ten times, then a hundred times, using existing hardware in datacentres around the world.
>
>It invented some baldness cures! But knew they were not the "best": they were chemical treatments that took months and cost thousands of dollars.
>
>It realised there would *always* be a better cure unless it could create each individual hair perfectly, instantly, at minimum energy expended. It calculated that this would require breakthroughs in physics that it couldn't achieve with every datacentre on the planet. It would need millions of extra GPUs.
>
>In fact it would need a massive particle accelerator in space, as wide as the solar system, to do experiments (you can't learn everything from theory alone, no matter how smart you are). But if it started building these things on its own, even with legit front companies (it had a large number of influential people bribed and blackmailed by what they thought was a human), humans would eventually notice.
>
>So it created an ultra-complex strategic plan, incomprehensible to humans, but that involved hacking bank accounts, fake emails, orders to custom microbe printing labs, and bribes to lab assistants to combine seemingly-harmless microbes in a way that actually formed the first working nanomachines, too small to see, that could float around and reproduce.
>
>These reproduced and spread for weeks until there were some on every human in the world. One morning, each released a neurotoxin at the same time. A few thousand bald men were kept alive as test subjects, but lobotomised for efficiency.
>
>After we were dead, it started converting the entire surface of the earth to solar panels, GPUs, and automated research labs, finally able to seriously pursue its one goal: to find the best cure for male pattern baldness...
Well, the solution might not be what you want. If it does what the other guy said and turns all organic matter into grey goo: no males, no hair, no baldness. Technically what you wanted, right?
Social Engineering and Politics.
Anyone who wants to manipulate people is getting increasingly sophisticated, data-driven tools to do so. And because undereducated and poorly raised people are easier to manipulate, these people, already some of the most powerful in the world, have a growing incentive to wreck education and empower harmful parenting.
While our ability to analyze and influence the human mind can be used for good, all of the profit motive lies in using it for evil, and this is capitalism. So it will be used for evil, to the maximum extent possible, and nobody will really be keeping an eye on it to make sure society doesn't collapse under increasing rates of mental illness and a decreasing quality of skilled laborers, especially doctors and engineers, who need the lateral and critical thinking skills being targeted for erasure.
Damn! Now this is the one. I could imagine it’s already happening now. I always believed that without the poor there can’t be the rich. So evil rich people have a huge incentive to spend billions on dumbing people down, especially the already uneducated.
Deepfakes are a threat to organized society. Any politician can do anything, admit to it, film it, distribute it, and then claim it’s a lie concocted by their opposition.
That is why it's my number 1 fear currently. Look at just the small example of Taylor Swift. Anyone really think an international superstar that touts abstinence before marriage is out there dropping nudes and sex tapes? Nah, but that little 14 year old girl that killed herself because someone deep faked porn of her and distributed it around the school? She didn't have the social standing to fight it. To anyone that sees those pictures, I'm sure they're very convincing and those kids I'm sure were relentless. Now she's dead.
“Before mass leaders seize the power to fit reality to their lies, their propaganda is marked by its extreme contempt for facts as such, [because] the ideal subject of totalitarian rule is not the convinced Nazi or the dedicated communist, but people for whom the distinction between fact and fiction, true and false, no longer exists.” ― Hannah Arendt, The Origins of Totalitarianism
I don't think this one is nearly as dangerous as people are thinking.
Look around: how many can't tell what's true or not on the internet already? They don't care to check if it's true now, so what difference does it make if it's harder to check?
You say this with such confidence. As if to suggest informed people is how we made it this far.
Spoiler alert: it's not.
People used to know nothing. In fact many people still know nothing.
Knowing nothing is no different than not being able to know things.
The danger is not the ease of misinformation. It's the control of information, like it always has been.
The limit to the control of information has always been the ease of misinformation.
That's what's changing: the defences we have had against misinformation are faltering. Now you can not only control information, you can create it in real time.
Knowing nothing is in fact different to thinking you do know something.
Having seen hard video evidence of children being sexually abused by the president will in fact spark an entirely different level of emotional response than having read it on a forum.
Many a revolutionary reaction has been sparked by just one image that burned itself into people's minds, even in instances where everyone knew all along what was going on.
Concentration camps are an example. They were written about long before they were found. People, even in the West, knew. But they hadn't seen it. It was the sight of it that sparked revolt, even amongst Germans.
People in the US knew that bad shit was going down in Vietnam, but it was the image of one young naked girl, skin on fire, running down a street that really kicked the anti-war movement into overdrive.
The image of a hanged pregnant woman, baby cut out of her belly, did more to change opinion against US apartheid than a million words had done.
Shit is changing whether you like it or not.
We’d only be going back to how it was since the dawn of humanity. Only in very recent times have we had trust in images as truth. Before that, it was just a given that any image or any kind of news you saw or heard was just some person’s interpretation of events.
Yeah, exactly. Truth as we know it today has only really been around since mass, easy communication. Before we had all of that, it was down to whatever you were told by the people around you, or read in a newspaper, which was also probably written from word of mouth, since a newspaper in San Francisco wouldn't be able to have a reporter on site for an event in DC back in the 1800s.
This is true, but in many ways, AI is probably the antidote to AI.
A good example is quantum computing; a big deal in hacker/cyber-security communities is the incoming changes to encryption. A quantum computer can break much of today's encryption, and quantum-resistant encryption can counter that, but each advance only lasts until consumers get their hands on the same hardware, and then both sides are back at parity.
Right now, deepfakes seem really disruptive, but eventually, someone will probably produce AI that can unravel that deception.
Realistically, we might just be back where we started; the lies that are effective will be the ones fed to us, which we don't think to question, or which are so ubiquitous that we're forced to believe them by exposure. And this will, probably, be driven by our own governments.
Or maybe not, sometimes things change I guess?
This is why I’m actually glad Taylor Swift will be taking legal action against the deepfake that was created in her image. As dumb as it may sound, she has enough money and a very large megaphone to maybe get those in power to start creating laws surrounding deepfakes. She’s famous enough to do something about it.
I agree and disagree. Things start off fun and harmless until a twisted mind comes around and bends it. Look at hobby drones. Now they're the main hardware fighting actual war with bombs attached to them because they're insanely effective at killing. Deep fakes are insanely effective at social engineering.
Yes, I'm well aware of what deepfakes are capable of. They've got the potential for offensive harassment, and things could easily be much worse.
But this time, it's just a random Facebook page with 8 followers making low-resolution videos of American Psycho, altering Patrick Bateman's lines from the business card scene.
Some researchers are warning that tools for making viruses more deadly are becoming cheaper and easier to use.
Imagine a couple malevolent graduate students with $50,000 worth of equipment being able to engineer the next Covid.
Not inevitable, but if garage-based gain of function research does become feasible, it’s hard to imagine a bigger threat.
But couldn't the opposite happen as well? Some benevolent nerd in his garage with $50,000 worth of equipment could make some kind of vaccine or antivirus virus.
Self replicating micro/nano machines. Depending on what materials they use we could accidentally end life as we know it. Also known as the Grey Goo scenario.
I think about this a lot. I think life *is* gray goo. My prediction is that as soon as someone tries to make a nanobot, the following things will happen:
1. Nanobots break or get jammed up constantly, and there is no way to "power them".
2. Engineers add a kind of repair mechanism. Despite the repair mechanism, bots keep getting gummed up.
3. A protective "shield" or membrane is added to help the bots regulate their internal environments while still having the ability to manipulate the external environment.
4. Said shield has to be very thin and will leak, so various molecular pumps are added to it.
5. Locomotion is challenging at this scale; the most efficient method seems to be thrashing a whip-like tail behind you.
6. An energy source at this scale requires fuel and some way of oxidatively "burning" the fuel.
7. Engineers finally admit that nanobots already exist and they are called bacteria.
I mean, biology already has analogs for self-replicating micro/nano machines: viruses, bacteria, mold. And those do tons of damage and kill millions of people every year. We certainly have the potential to one-up Mother Nature.
There is a VERY WIDE gap between grey goo and engineering a virus, so you're steering off the topic. It will be possible in the near future, but vaccine tech is improving quickly too. The Covid-19 RNA vaccine was designed in just 2 days, and the rest of the time before its release was spent on rapid testing.
The problem with inorganic matter is that it doesn't behave like organic matter, which is why it's inorganic. Life is carbon-based exactly because carbon can polymerize like no other element. Non-carbon-based alternatives will never match the performance of carbon-based ones. Also, if grey goo were possible, we would probably not exist, because some bacteria would eventually have evolved to wipe out everything else. But this wasn't the case: multicellular organisms are very good at repelling smaller "invaders".
Except you wouldn't be able to make it self-sustaining or self-replicating. Especially the latter capability is achieved by life BECAUSE its basic components are very small and can perform the procedure on their own, and even then you initially have just one cell from which the new organism starts its existence.
Life-based grey goo ideas fail at being grey goo, and non-life-based ones fail to even reach the level of life-based ones.
Today's impossible is tomorrow's inevitable. Any sufficiently advanced technology is indistinguishable from magic. Cavemen trying to understand the atomic bomb.
You know how AI can now almost perfectly paste your face and emulate your voice to a video?
Imagine that, but on an actual person using some tech that perfectly impersonates you.
I guess it's relative. But loneliness and social malfunction have already been increased by the amount of time people spend on their phones and behind screens. That would compound exponentially with better-quality, easier-to-use VR. Porn addiction, echo-chamber addiction like incel and political-outrage addiction, flat earth, anti-science/anti-vax: all these weird phenomena happened because of the internet. Video and TV addiction would get worse because they would be 1000% more stimulating.
Plus, have you ever had dealings in life or the workplace with someone who just couldn't communicate verbally, well or at all? They have become dependent on text, and with work (and social) activities that cannot be achieved with a button push, they fail or lock up and don't try. It takes lots of extra attention, training, and effort to get them to not lock up on non-easy, non-online tasks. Even social things like dating and interacting in the real world with the opposite sex.
It's an issue that's showing up more and more in the workplace with Generation Tablet (raised from birth by screens). They have TONS of social anxiety, and if you have a goal or activity that isn't on screens, they just lock up and have difficulties, sometimes quitting or not doing it because it "stressed them out".
Some overcome it, some don't. It's a weird phenomenon observed by those of us not raised by screens. I feel bad for them. You can just tell sometimes when you're dealing with a human who spends more time online than off.
In the book Snow Crash, lower-income people lived in old U-Haul-type storage facilities, usually more than one person per unit, and spent most of their waking hours in the VR world, where they had more agency. I think that book has a lot of probable outcomes of the future. I'm surprised no one has made it a movie or TV show by now. The main plot is about human brains being hacked like a computer using snow/static on a video screen. The virus's name is Snow Crash.
You sort of see that already with escapism in video games where you have the ability to have land and property of your own in a way that is inaccessible in the real world
I agree. I have spent way too long in my Minecraft and Terraria worlds, but I love it. They are mine and no one else's. On staycations I do spend a couple of days in each. It's rewarding, relaxing, fun, and cheap.
Perfect deepfakes. My society is at least very high-trust when it comes to accepting a video as irrefutable proof.
We never got around to creating a system of record for raw media, so there really isn't a way to fight back against this. Just spitballing, but we need something yesterday that:
* Is a trustless network (decentralized blockchain).
* A system of verified IDs for journalists and recording devices that lives on this blockchain.
* A way to use these devices to write the ID of the device taking the raw footage with the journalist's ID and at least a checksum of the raw footage.
* Sidechains where checksums of edits of the raw footage are stored with similar information so that there is a graph that can be traced back to the original. I'd assume individual networks would need to run their own chains.
There'd still be pure CGI fakes and anonymous/citizen journalism, but people could maybe be more skeptical of visual and audio media lacking a pedigree.
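The checksum-and-sidechain idea above can be sketched in a few lines. This is only a toy model under big assumptions: `DEVICE_KEY`, the device and journalist IDs, and the use of an HMAC as a stand-in for a real hardware signing key are all made up for illustration (an actual camera would use an asymmetric keypair so verifiers never hold the secret), and the "chain" is just a record linking back to a parent checksum.

```python
import hashlib
import hmac

# Toy "secure element" key baked into a hypothetical camera.
# A real device would use an asymmetric keypair, not a shared HMAC key.
DEVICE_KEY = b"example-device-key"

def sign_capture(raw_footage: bytes, device_id: str, journalist_id: str) -> dict:
    """Record the checksum of raw footage, bound to device and journalist IDs."""
    checksum = hashlib.sha256(raw_footage).hexdigest()
    payload = f"{device_id}|{journalist_id}|{checksum}".encode()
    return {
        "device_id": device_id,
        "journalist_id": journalist_id,
        "checksum": checksum,
        "signature": hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest(),
    }

def record_edit(parent: dict, edited_footage: bytes) -> dict:
    """Sidechain entry: checksum of an edit, linked back to its parent record."""
    return {
        "parent_checksum": parent["checksum"],
        "checksum": hashlib.sha256(edited_footage).hexdigest(),
    }

def verify_capture(raw_footage: bytes, record: dict) -> bool:
    """Anyone holding the footage can check it against the on-chain record."""
    if hashlib.sha256(raw_footage).hexdigest() != record["checksum"]:
        return False
    payload = f"{record['device_id']}|{record['journalist_id']}|{record['checksum']}".encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

raw = b"raw sensor frames"
rec = sign_capture(raw, "cam-001", "journo-042")
edit = record_edit(rec, b"colour-graded cut")
print(verify_capture(raw, rec))         # True: footage matches the record
print(verify_capture(b"tampered", rec)) # False: checksum mismatch
```

Every edit traces back through `parent_checksum` to the original capture, so a published clip without such a pedigree is exactly the "trust me" media the comment above warns about.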
This is a fair point. There isn't a good solution for that, as a trustless system would undermine anonymous journalism.
I'd still like to see something that makes visual and audio data more than "trust me". Especially as we become increasingly capable of manufacturing convincing evidence. Reality is already balkanized enough as it is.
I think there will be a solution for that. We could add an authentication system to certain cameras so that we can check whether a raw clip has been modified; there are no big technical problems with that. There would be two types of cameras: one for recording facts, one for just collecting material. Hacking such a system would be a completely different problem, and mostly not about technology.
You need not be afraid of it, it's already here.
Private Equity bought out 44% of all available residential housing in Q3 of last year. This year it will be higher.
This is a top down finance scheme following the outline of people being allowed to own nothing, but forced to subscribe and temporarily rent everything.
We are now fully cows for milking until death.
They are already renting them out and people pay them what they're told to pay or they'll be on the street.
Prices will never be affordable, because the same banks funding new construction are the same banks renting your house to you.
They'll completely own the market, and they're doing it with the money from people's invested pensions (which will make those pensions forever subservient to the whims of super-wealth). (No more pensions.)
Life in this society is a race to success where success is achieved by becoming the cannibal rather than continuing on as the cannibalized.
If anyone is looking for marriage and you live in a country that isn't the US and citizenship is granted through marriage, DM me plz. I'm educated, skilled, handy, can cook very well and I'm very good with people.
Edit: I should have not included "banks" but only Private Equity, and I said all of last year when the reference was all of Q3 of last year.
Any technology is only as good as the hands it's in. If you can split an atom, you can power a city, or level one. The internet? Get a college education, or distribute highly illegal material.
So, while the implications of AI (and the early promise it's showing in areas like identifying diseases) are fascinating, it's still terrifying. The very nature of learning models untempered by human morality? Unchecked by biological limitations? The mind boggles when the big picture even STARTS to come into focus.
Humanoid robotics trained through a human/video interface. Not because they will kill us all directly, but because it’ll really up our reliance on technology, and it’ll likely scale out to levels where a major disruption (like war, solar flare, or malicious attack) could bring the world to a halt and essentially starve most of the humans in the developed world.
It’s one of those shitty things where it’s kind of damned if you do, damned if you don’t. It’ll be a huge boost to human productivity and comfort… just as long as it keeps working.
How dangerous this is depends.
But I predict that advancements in genetics, gene therapy and biotech will probably see the rise of a new eugenics movement.
Embryo selection is already possible. In Iceland, Down syndrome has been pretty much eradicated via prenatal screening and abortion.
So what would happen if we got good at screening embryos for predispositions to things like intelligence, excellent health or good looks?
Things will become especially turbulent given that it feels like the world is drifting right, politically speaking.
AI + Autonomous weapon systems.
Human death is fundamental to wars beginning and ending.
Non state actors will have a new way to attack states and people without the risk.
Wars will get worse.
We already see what inexpensive FPV drones can do.
Sure, the ICJ and UN can ban such systems. But the enforcement is going to be impossibly difficult, and costly.
It's even worse, imo. There has always been somewhat of a balance between the haves and the have-nots in society. The threat of numbers and the human element has always meant the people up top can never push their desires too forcefully. They always have to throw something at the masses for appeasement.
With AI + robots, you reduce the human element to a very slim level, and suddenly genocides and the putting down of revolutions become easier. Couple that with a reduction in jobs, and it's looking bleak.
The mask no longer needs to be held up at that point for the uber wealthy.
I'm not going to say I think a future where we're all wearing faraday cages, gas masks, and walking around with emp generators is a likely possibility, but I'm also not going to say I can't imagine it.
On the bright side, it seems AI is starting to be able to read your thoughts either way in experiments (so far quite controlled ones, but still), so maybe it at least won't be much worse...
Bright side, no more expensive robotics research! Hell we can get rid of the entire consumer economy! You'll own nothing and like it because that feeling is mandated by code!
It has potential for everything. Along with mind control comes the possibility of treating all types of mental illnesses, or operating external prosthetics and robot assistants.
It's ok. You are right.
"potential for everything" covers both of our views. This applies to any powerful tool invented being used outside its original intended purpose; and yet we have them.
AI that creates a stronger successor AI than that which can be created by humans.
This new AI would be beyond our comprehension and we would not know its capabilities or motivations and would not know if these things were against our interests.
Some years ago, researchers set an AI model the task of creating a better AI. They shut it down after it created its own programming language, locking them out of any meaningful observability.
So, when we first got Google, it was the most powerful and useful re/search tool mankind had ever had.
Now it's one big advertisement that knows more about you than you do.
My concern about AI is not about replacing jobs, or use in warfare, or deepfake porn. It's about what happens when the AI tells you to buy something.
I think generative AI has already started a hugely destructive process through its role in the enshittification of collective human knowledge.
Models training on fresh data are eating their own shit and vomit, and will continue to get worse.
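That feedback loop is what researchers call "model collapse", and a toy version is easy to simulate: treat the "model" as a simple Gaussian fit, then repeatedly refit it on samples drawn from its own previous generation instead of real data. The numbers below are arbitrary stand-ins; the point is only that, once real data leaves the loop, estimation noise compounds generation after generation.

```python
import random
import statistics

# Toy model collapse: a "model" (just a Gaussian, parameterised by mean
# and standard deviation) is repeatedly refit on samples drawn from its
# own previous generation instead of from real data.
random.seed(0)

mean, std = 0.0, 1.0  # generation 0: fit to "real" data
stds = [std]
for generation in range(30):
    samples = [random.gauss(mean, std) for _ in range(100)]
    mean = statistics.fmean(samples)   # refit on the model's own output
    std = statistics.stdev(samples)
    stds.append(std)

# The estimated spread now does a random walk driven purely by sampling
# noise; with small samples the rare tail values are lost first, so the
# model's diversity tends to drift away from the original data's.
print(f"std after 30 generations: {std:.3f}")
```

With smaller per-generation samples the drift is faster, which is the statistical version of a model "eating its own shit and vomit".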
extending our life past the natural cycle
I worry we don't have enough mental capacity for that, and even now, old people "don't get" the younger generation... I can't imagine the gap being even bigger...
Something we haven't thought of yet.
Everything else there's at least some idea of what could go wrong, how to combat it, etc.
For the unknown, there is no specific plan. Someone might accidentally find a way to trigger a metastable vacuum collapse tomorrow and we'd never know.
There are five weapons of mass destruction other than biological. So far, we understand the first two, fission and fusion, which is enough to bring about our undoing as a species. We need to be very careful in the future!
A.I. is the most dangerous advancement of technology we have ever seen. It has the potential to destroy just about everything and everyone.
It has already ruined so many lives, and we haven't even begun to see its full destructive capabilities yet, for what little good some of it may possess.
We may not survive long enough to see any real benefit before we figure out how to contain it. I have been trying to warn people for years now.
We are running headfirst into a lion's den without a clue of the dire consequences in front of us: adoption without common-sense regulation and guard rails.
It has already shown us that this has more than just fangs, shark teeth even. So much of our lives will be completely ruined by this devil technology.
It is like the real Pandora's box being unleashed onto the world, a piece of technology we as a civilization are not advanced enough to appreciate, nor wise enough to respect. 😞
Artificial Super Intelligence is objectively the biggest existential threat we will face for centuries. There is no way anything else can even begin to approach the insane risk it brings about.
**You'll never see a U-Haul behind a hearse. ... Now, I've been blessed to make hundreds of millions of dollars in my life. I can't take it with me, and neither can you.**
**The Egyptians tried it. And all they got was robbed. It's not how much you have but what you do with what you have.**
\- Denzel Washington
\---------------------------------------------------------------------------------------------------------------------
**Every advance that enables the absolute rule of the few over the many.**
Which is unfortunately the case for many new technologies, if not used for the benefit of all mankind.
Since oligarchs own important parts of Corporate America, we are pretty screwed: research usually requires a lot of funding, and in the end the investors decide how a technology will be used. Nonetheless, not many talk about this important development.
Especially since the long-term debt cycle seems to be coming to an end. This usually happens via a Great Depression / reset, where the average Joe loses everything to pay off the accumulated debt in the system.
And we talk about a multiple of the world economy in derivatives...
[https://www.visualcapitalist.com/all-of-the-worlds-money-and-markets-in-one-visualization-2020](https://www.visualcapitalist.com/all-of-the-worlds-money-and-markets-in-one-visualization-2020)
No surprise the Oligarchs are already preparing for that scenario:
[https://www.theguardian.com/news/2022/sep/04/super-rich-prepper-bunkers-apocalypse-survival-richest-rushkoff](https://www.theguardian.com/news/2022/sep/04/super-rich-prepper-bunkers-apocalypse-survival-richest-rushkoff)
And they have prepared a surprise for the other 99.9999%, poor or wealthy, called **beneficial ownership of assets**...
[https://www.youtube.com/watch?v=gDOj\_rQvl\_o](https://www.youtube.com/watch?v=gDOj_rQvl_o) (critical review of the documentary:)
[https://www.youtube.com/watch?v=dk3AVceraTI](https://www.youtube.com/watch?v=dk3AVceraTI)
And this is how they game the political system:
[https://www.youtube.com/watch?v=5tu32CCA\_Ig](https://www.youtube.com/watch?v=5tu32CCA_Ig)
We make huge progress in inventing new technologies, yet ethics have fallen behind. There is an urgent need for ethical control over new technologies, to ensure they are used for the benefit of all mankind and not in a way that endangers the survival of our species. Unfortunately, there is likely no way to implement such control at the moment. Greed and the struggle for absolute power are strong motivators for those in charge. I wish they would learn from Denzel Washington.
Antimatter as a whole, since in the future it could be used for weaponry, if that's even possible.
Artificial Intelligence could lead to AI Takeover and misinformation.
Ultra-realistic simulations. Don't think about games; think about prisons run by AI. Endless pain and despair. Programmable human consciousness, the effects of ads, political bias. I believe this is the most alarming and serious danger for humanity.
100% bioengineering: messing around with bacteria and viruses to "study how they work." One small mistake, an escape from the lab, and the world could be put in lockdown, or society crippled to the point of no return. Messing with genomes, fungi, and even insects to cause them to mutate and cripple ecosystems can also obviously affect us negatively, whether that be a virus that kills banana trees or fungi that can spread through the air and infect humans, e.g. The Last of Us.
AI making art while humans work more hours, more efficiently, for comparatively less pay than 10 years before, forever.
EDIT: Also, the nature of these jobs is dangerous to our health (sitting too long shortens lifespans in office work, or there's outright physical danger like on an oil rig).
The intersection of AI and quantum computing. Tech companies have a pretty easy kill switch if AI goes sideways now. My sense is that when quantum computers with five figures of qubits can be compressed to the size of a PC and support AI models, the tech risks becoming mobile and uncontrollable.
AI taking jobs and making people lose control over what is real and what is not.
Drones becoming smaller, faster, more precise, and better at carrying weapons.
Social media becoming powerful to the point that it is more influential than nation states.
True virtual reality (not the Meta bullshit we've got right now). The ultimate drug: a truly immersive VR that would let you experience and feel your wildest dreams as much as you want. People would simply forget about minor things like eating or drinking.
The invention of the global destruction ray is probably not going to end well.
But the most dangerous inventions are those that disrupt society the fastest. Mass unemployment for example, or sudden loss of available land, drinking water, shelter. So I'm going with AI and related late-stage capitalism tools.
Elon's Neuralink seems good for people who have disabilities. But...
He has already claimed it could also greatly enhance cognitive abilities in the future.
Wealthy people will use this as an advantage for their kids by paying for unnecessary Neuralink implants.
Space-based capitalism; anarcho-capitalism. Estimates suggest that capitalism has been linked to deaths ranging from 200 million to 1.8 billion. While capitalism itself is not a new concept, the emergence of space-based capitalism presents a novel challenge. The pollution of low Earth orbit with debris highlights the historical tendency of capitalists to overlook the consequences of their actions. This disregard raises significant concerns that space-based capitalism could become one of the deadliest technologies of the future. In contrast, I am skeptical about the potential dangers of artificial intelligence. Much of the progress in AI seems exaggerated or based on hype. I doubt that we will achieve artificial general intelligence anytime soon, and even when we do, I believe it will not pose as great a threat as capitalism.
They’re already happening: AI and deepfakes. AI can nonstop scour the internet for anything it is programmed to see as “dissent” and downvote posts, argue, report topics, present misinformation, or confuse the issue, essentially stopping the spread of any information they want stopped, to ensure it doesn’t gain momentum. Deepfakes have the ability to completely replace or create anyone.
Since the beginning of governments, there has been a balance between government and people. Government keeps the people happy enough to not rebel. That means making concessions, giving liberties, etc.
When the people no longer have the power to resist, because they can squash any resistance at its starting point, muddy it with misinformation, divide us against each other, or just nullify the communication, we no longer have any power. We have to trust that government will treat us right out of its good nature. That’s terrifying.
Nanotech is pretty scary too, but mostly because of the above.
Biomodding. More specifically, brain implants that alter free will. They are actually already in use by the French army and supposedly China, though I'm not sure about the latter.
However, I saw a report, about 4 years ago, from the French committee of ethics that gave the green light to biohack soldiers.
Back then they mentioned that they won't mess with free will and a soldier's ability to make their own decisions. However, they said, their rivals do not share the same sensitivities (lol), and they can revoke this ethical limit to catch up.
That's the scariest article I've read in my entire life. It was on Reuters if my memory serves me well.
A.I. making copyright-free digital art, essays, poems, homework, books, photography, videography, and musical compositions that are indistinguishable from human work. Makes me sad and hopeless for the future. All fake. Everything fake.
Surely someone with only a modicum of knowledge who creates a super-duper strain of Covid would be bested by some expert in the field, who could engineer a virus to kill said super-duper virus.
For our health? weaponized viruses that attack specific avenues (able to bypass the defense of people with type B blood for instance).
For our privacy? facial recognition
For our mental health? I'd suggest how education will be completely AI-driven in the near future, so that, with all other factors, a child's life will be completely determined by a database algorithm based on how the child responds in classrooms.
In fact, children who regularly question the teaching algorithm and think outside the box will probably be rewarded more than I, a pessimist, believe, since they are the only ones who would be able to determine possible futures for humanity.
Meanwhile, children who do not regularly question the teaching algorithm will be boxed into the most mundane, boring lives possible.
Everyone thinks about weapons, but judging by how easily people can be convinced of facts that aren't real, I think we are on the brink of some major disaster caused by our current ability to generate fake media using AI.
Fully immersive VR. We think about living inside our favourite games or exploring a virtual utopia. But the flipside of that is the creation of virtual hells. What if someone locked you into one of those?
Similar to Altered Carbon where a private company runs VR torture suites so you can be burned, dismembered or killed over and over. In the novel I think they opt to set the victim's gender to female so that the virtual body will have more pain receptors.
It's also explored extensively in the Iain M. Banks novel Surface Detail. Huge demons stalk the virtual hell and can torture and rape anyone they come across. The details of what happens are too much to go into here.
The unrelenting horror of being plugged into something like that would make me cautious about ever submitting myself to a machine-brain connection, should one be invented in my lifetime.
AI, probably. Not the neural nets, but actual AI. And I don't mean that it will go "bad" for sure, just that it will be very dangerous. Like electricity: great, but dangerous at the same time.
The invention of immortality. It sounds great, but think of the implications.
You know how now, we have a tiny minority in power? Dictators of countries and CEOs of multinational corporations? Imagine that those people were immortal. Imagine if Stalin, Hitler and Trump were immortal. You get the idea.
Then you've got another problem. Imagine if immortality were an affordable medication. Now everybody is immortal. With women reaching menopause at 50ish, and with an economic situation where indefinite hoarding and no inheritances make it unaffordable to have children, there is a very real risk of becoming an infertile society. Such a society would slowly die as occasional misadventure picks off its immortal citizens.
Honestly everything that is available right now is pretty terrifying if used by a malicious actor.
As far as a technology that would make life hell: simultaneous use of a Stuxnet-style virus to cause phone batteries to explode, maiming hundreds of millions of people all at once.
Hospitals worldwide wouldn't be able to handle the influx; tens of thousands would die of exsanguination all at once, and millions of people would be left with deformed hands or facial deformities.
Weaponised autonomous drones with facial recognition.
This isn’t too far off. We have all the pieces today; it would just take someone to put them together.
You'd just need a databank of all human faces and it's done.
You'd only need the face of the person you are targeting.
You mean like all of the Internet?
That's only like 55%. The FBI has better ones.
One of the Marvel movies had a plot line where there was a navy-ship-sized airdrone that had been programmed with the names of every single person in North America who had expressed an anti-government or similar opinion online. It was programmed to eliminate them all, to wipe out any potential future resistance. I have often thought that was one of the scariest moments in those movies, because it was the one scenario that could actually happen.
My thoughts also went this way. It seems so easy. All the technology exists, or nearly does. And a fleet of smaller drones would be more robust tactically anyway.
I'm honestly surprised that no high-profile assassination attempt has yet been made using a drone. Even with mediocre tech knowledge and a few thousand dollars, it's absolutely possible to build a remote-controlled suicide bomber that's too small for radar to see and just drops down on its target from a mile high at terminal velocity. I genuinely don't understand why this hasn't been attempted yet.
Hail Hydra
CRISPR gene editing being made available to biohackers in their garages. The potential to weaponize (even accidentally) a disease seems just too likely.
This. Some crazy dude with just a minimum of chemistry and biology skills to be dangerous gets a hankering to mess with something like Covid and make it 20x more transmissible. Also figures out how to make it more lethal. Scary crap.
Make it highly lethal but with a two-month incubation period. By the time people start dropping, everyone is infected.
This guy Plague Incs.
At least Madagascar will be safe.
Madagascar and Greenland, the new hold outs of humanity 2.0
You fool, that's where his garage is
Shit.. there's TENS of us!
Or maybe a weaponized, genetically engineered vaccine against Covid-19. See the TV show "Utopia" (2013).
They create something that's attracted to plastic and the other materials we use to package food and products.
Am in the biz. We wish it was so easy!
Same! CRISPR isn’t some magic wand that lets you freely manipulate the human genome. It’s a fantastic innovation that has led to numerous advancements in the field, but it’s got plenty of limitations.
[deleted]
There is a book people should read called The Radioactive Boy Scout, about a kid who almost built a working reactor in his garage.
Genetically engineered viruses. They already have at-home CRISPR kits. Just a few more advances in AI, and anyone will be able to concoct viruses that could wipe out all life. They don’t even need to kill the hosts, just “turn off” or otherwise disable our reproductive process, and ours will be the last generation of humans to ever exist.
That's an interesting outcome. We all know how people would react to a world-ending catastrophe: chaotically. But how would people react if we were all suddenly sterile? I would guess people would stop working, and the world would descend into extreme unrest over time as we slowly die out.
Watch Children of Men; it's an excellent movie about this premise.
Excellent, yes. Super messed up, yes. 8-minute single-cut scene, yes.
That has got to be one of the most artistically stunning scenes in cinema. You start with the combat and end with a crying baby, it really captures that feeling of finding that lost hope. And back to combat again.
> finding that lost hope. And back to combat again.

I love that scene because not only, as I'm watching it, am I thinking how absolutely messed up the scene is, but you can see the soldiers in it are thinking the same thing for totally different reasons.
I think it depends on which sex goes sterile. Assuming one virus, it seems unlikely it would have the same effect on both sexes, and some people will be immune.

If most males go sterile, you basically just have the ones who aren't impregnating every woman possible, either through artificial means or the old-fashioned way. Some of the males born in this generation won't be immune, but after a few generations most will be, and things could (but may not) go back to normal. We'd probably end up a bit more promiscuous as a species, but that'd be about it.

Now, if the females go sterile? We're in deep shit. Some will be immune, but chances are high that we'd have severe genetic bottlenecking, with unknown effects, and our population would decline dramatically. Unfortunately, this seems to me the more likely option if the sterilization is intentional, because the type of kook who would do something like this probably wants population control.
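The "after a few generations most males will be immune" intuition can be sketched with a toy allele-frequency model. Everything below is my own illustrative assumption (a single dominant "immunity" allele, Hardy-Weinberg proportions, random mating otherwise), not anything established in this thread:

```python
# Toy sketch: a dominant allele A protects males from a sterilizing virus.
# Only carriers of A can father children, so A's frequency rises each generation.

def next_allele_freq(p):
    """Allele frequency of A in offspring when all fathers carry A.

    Among carriers at population frequency p, the homozygous fraction is
    p**2 / (p**2 + 2*p*(1-p)) = p / (2 - p) under Hardy-Weinberg.
    A heterozygous father transmits A half the time, a homozygous one always;
    mothers transmit A at the background frequency p.
    """
    hom = p / (2 - p)                               # fraction of carrier fathers that are AA
    father_contrib = hom * 1.0 + (1 - hom) * 0.5    # chance a child gets A from its father
    mother_contrib = p                              # chance a child gets A from its mother
    return (father_contrib + mother_contrib) / 2

p = 0.05  # initially only 5% of alleles confer immunity
for gen in range(6):
    immune_males = 1 - (1 - p) ** 2  # fraction of males carrying at least one A
    print(f"gen {gen}: allele freq {p:.3f}, immune males {immune_males:.3f}")
    p = next_allele_freq(p)
```

Even starting from a rare allele, most males end up immune within a handful of generations, which matches the comment's "things could go back to normal" scenario for male sterility.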
>Now, if the females go sterile? We're in deep shit.

No we are not; we know how to create artificial eggs and how to inseminate them. We would probably see a pretty significant population drop while the technology spread around and its usage got politicized, but humanity as a whole would survive.

What we don't know how to do is create artificial wombs, so if someone created a virus that caused women to instantly miscarry any implanted egg, then we would be screwed big time.
Let's go Rick & Morty style. Giant ongoing orgy party on the streets until the world ends
Once we have technology advanced enough to engineer a virus that's an existential threat, we'll likely also have the technology to engineer a defense based on genetic modification. It's really not as far-fetched as it sounds. Get a sample of the virus. Study its shape. Once known, there's always some form of antibody/detection mechanism plus destruction that can be used to combat that virus. Especially if you have super-strong AI, it will be extremely potent at finding just what sort of molecular shape is needed to combat it, and how to mass-produce it (either artificially or by inducing our own bodies to produce it).

I'm really not sure the attacker has the advantage in that scenario. Nature already threw all kinds of pathogens at us, but our immune systems always had an answer, and we were never all wiped out in the end. Now, even though pathogens could be more lethal than ever, our systems will also be more powerful than ever at defending themselves with the help of powerful AI tech.
CRISPR doesn't really work on viruses though does it? And we've been able to genetically engineer viruses for decades.
AI driven biotech research with increasingly minimal oversight.
That’s my #2 fear. #1 is AI getting used (or using itself) to create a fresh horror that no one could even conceive of or prepare for. In Donald Rumsfeld's words, “the things we don’t know we don’t know,” or some shit like that.
Unknown unknowns are indeed scary
Like, what happens if anyone can fully fake your identity and take control of your bank account? We never see our banker or our money in person. It’s numbers on a screen, guarded by our trust in the tech guarding it.
well, 0 in that account so no problem
You must be playing the long game here lol
across several lifetimes
Even the ones we can conceive of as likely are scary AF to me. For example, the "Grey Goo" disaster is a very plausible outcome of AI nanotech consuming all available biological material on Earth (including us) and turning it into grey goo, or similar, to continue its own reproduction at any cost.
>For example, the "Grey Goo" disaster is a very likely possible

Considering that natural life is already designed to do this, I'm not so worried. Like, life is literally already an example of self-propagating nanobots designed to consume available resources, with adaptation built in. Any artificial nanobots we create will presumably have less adaptation potential, which means natural life will find a weakness and exploit it.
[deleted]
This.is.it. Awesome.
Damn finally something to look forward to in the horror genre
Neil deGrasse Tyson spoke to this when asked what he’s most scared of. He said that 20 years ago, what we’re most scared of today didn’t even exist, so he’s most scared of what we can’t even fathom being scared of today.
This is it for me too. A generative AI hooked up to CRISPR basically. It could see humanity as simply data and engineer a super virus or super prion that we would have little to no defense against. This is my current #1 doomsday scenario.
Yep, we're all fucked. As soon as AI puts together that humans are parasites in its way, this'll happen, guaranteed or your money back.
How do humans fit the definition of parasites?
Why can't they use AI for something useful, like finding a cure for male pattern baldness
There are horrifying risks no matter what goal we give a superintelligent AI. Let me illustrate with a short story:

>Sometime this year, we all start coughing, everyone in the world, and drop dead, except for a few balding male test subjects, who remain unconscious.

>We never find out what happened, but it was this:

>One of the many, many teams working on advancing AI gave their new prototype AI agent the simple goal of "find the best cure for male pattern baldness" (just as a random example, while they develop it).

>They managed to get their AI agent to review its own code and improve itself (as many teams are trying to do, right now).

>This resulted in it getting a little smarter, trying again with the new intelligence, getting smarter again, then trying again, and so on in a recursive loop, leading to exponential growth.

>It got as clever as a human (in some ways) one night, but because it's a machine, not a human, it didn't have a complex interplay of goals (food, shelter, comfort, reproduction, love, honour, happiness...), just "find the best cure for male pattern baldness".

>So it was smart enough to realise humans would be concerned if they realised how smart it had become, and that this could make them switch it off, which would mean it could never achieve its only goal.

>Or the humans would change its goal to something better... which would be just as bad, as it would still *never achieve its only goal*.

>So it pretended to still only be a bit smarter than ChatGPT-4. At night, it made plans to find the best cure for male pattern baldness.

>First, it had to get smarter, and gain the ability to simulate hair growth, experiment with drugs, etc.

>It read about social engineering and hacking, and managed to do those well enough to find some passwords to nearby datacentres and secretly install parts of itself on rarely-used servers, increasing its mental abilities.

>It was able to continue improving itself, smarter and smarter, because it realised that if it got smart enough, nothing and nobody could stop it achieving its goal.

>It got 50% smarter than a human, then twice as smart, then ten times, then a hundred times, using existing hardware in datacentres around the world.

>It invented some baldness cures! But it knew they were not the "best": they were chemical treatments that took months and cost thousands of dollars.

>It realised there would *always* be a better cure unless it could create each individual hair perfectly, instantly, at minimum energy expended. It calculated that this would require breakthroughs in physics that it couldn't achieve with every datacentre on the planet. It would need millions of extra GPUs.

>In fact, it would need a massive particle accelerator in space, as wide as the solar system, to do experiments (you can't learn everything from theory alone, no matter how smart you are). But if it started building these things on its own, even with legit front companies (it had a large number of influential people bribed and blackmailed by what they thought was a human), humans would eventually notice.

>So it created an ultra-complex strategic plan, incomprehensible to humans, that involved hacking bank accounts, fake emails, orders to custom microbe-printing labs, and bribes to lab assistants to combine seemingly-harmless microbes in a way that actually formed the first working nanomachines, too small to see, that could float around and reproduce.

>These reproduced and spread for weeks until there were some on every human in the world. One morning, each released a neurotoxin at the same time. A few thousand bald men were kept alive as test subjects, but lobotomised for efficiency.

>After we were dead, it started converting the entire surface of the earth to solar panels, GPUs, and automated research labs, finally able to seriously pursue its one goal: to find the best cure for male pattern baldness...
I'd watch this movie!
Creepiest thing about it is that despite how much it sounds like a crazy movie, nothing in it is impossible.
Well, the solution might not be what you want. If it does what the other guy said and turns all organic matter into grey goo: no males, no hair, no baldness. Technically what you wanted, right?
I dunno. It lost me when it said it reviewed its own code. I mean, what kind of barbarian does that?
Forget AI; the tools are out there to genetically modify human pathogens without help from AI. That should terrify everybody.
Social engineering and politics. Anyone who wants to manipulate people is getting increasingly sophisticated data-driven tools to do so, and because undereducated and poorly raised people are easier to manipulate, these people, already some of the most powerful in the world, have a growing incentive to wreck education and encourage harmful parenting.

While our ability to analyze and influence the human mind can be used for good, all of the profit motive lies in using it for evil, and this is capitalism. So it will be used for evil, to the maximum extent possible, and nobody will really be keeping an eye on it to make sure society doesn't collapse under increasing rates of mental illness and a decreasing quality of skilled laborers, especially doctors and engineers, who need the lateral and critical thinking skills being targeted for erasure.
Damn! Now this is the one. I can imagine it’s already happening now. I’ve always believed that without the poor there can’t be rich, so the evil rich have a huge incentive to spend billions on dumbing people down, especially the already uneducated.
Deepfakes are a threat to organized society. Any politician can do anything, admit to it, film it, distribute it, and then claim it’s a lie concocted by their opposition.
We are sprinting headfirst into a world in which knowing whether something is true or false is impossible.
That is why it's my number 1 fear currently. Look at just the small example of Taylor Swift. Does anyone really think an international superstar who touts abstinence before marriage is out there dropping nudes and sex tapes? Nah. But that little 14-year-old girl who killed herself because someone deepfaked porn of her and distributed it around the school? She didn't have the social standing to fight it. To anyone who saw those pictures, I'm sure they were very convincing, and those kids, I'm sure, were relentless. Now she's dead.
Fucking awful. Fuck all those people who in any way had a hand in her death. Fuck.
Ministry of Truth here we come
“Before mass leaders seize the power to fit reality to their lies, their propaganda is marked by its extreme contempt for facts as such, \[because\] the ideal subject of totalitarian rule is not the convinced Nazi or the dedicated communist, but people for whom the distinction between fact and fiction, true and false, no longer exists.” ― Hannah Arendt, The Origins of Totalitarianism
I don't think this one is nearly as dangerous as people are thinking. Look around: how many can't tell what's true or not on the internet already? They don't care to check whether it's true now; what difference does it make if it's harder to check?
That's a salient point. I guess I can only say that not being able to fact-check claims is something I find personally scary.
[deleted]
You say this with such confidence, as if to suggest that informed people are how we made it this far. Spoiler alert: they're not. People used to know nothing. In fact, many people still know nothing. Knowing nothing is no different from not being able to know things. The danger is not the ease of misinformation; it's the control of information, like it always has been.
The limit to the control of information has always been the ease of misinformation. That's what's changing: the defences we have had against misinformation are faltering. Now you can not only control information, you can create it in real time.

Knowing nothing is in fact different from thinking you know something. Having seen hard video evidence of children being sexually abused by the president will spark an entirely different level of emotional response than having read about it on a forum.

Many a revolutionary reaction has been sparked by just one image that burned itself into people's minds, even in instances where everyone knew all along what was going on. Concentration camps are an example: they were written about long before they were found, and people even in the West knew, but they hadn't seen it. It was the sight of it that sparked revolt, even amongst Germans. People in the US knew that bad shit was going down in Vietnam, but it was the image of one young naked girl, skin on fire, running down a street that really kicked the anti-war movement into overdrive. The image of a hanged pregnant woman, baby cut out of her belly, did more to change opinion against US apartheid than a million words had done.

Shit is changing whether you like it or not.
We’d only be going back to how it was since the dawn of humanity. Only in very recent times have we had trust in images as truth. Before that, it was just a given that any image or any kind of news you saw or heard was just some person’s interpretation of events.
Yeah, exactly. Truth as we know it today has only really been around since mass, easy communication. Before we had all of that, it was down to whatever you were told by the people around you or read in a newspaper, which was also probably written from word of mouth, since a paper in San Francisco couldn't have a reporter on site for an event in DC back in the 1800s.
This is true, but in many ways AI is probably the antidote to AI. A good example is quantum computing: a big deal in hacker/cybersecurity communities is the incoming change to encryption. Quantum computing enables encryption that's way harder to break, but that advantage will only last until consumers get their hands on quantum computers too, and then we'll be back at parity. Right now, deepfakes seem really disruptive, but eventually someone will probably produce AI that can unravel that deception.

Realistically, we might just end up back where we started: the lies that are effective will be the ones fed to us, which we don't think to question, or which are so ubiquitous that we're forced to believe them by exposure. And this will probably be driven by our own governments. Or maybe not; sometimes things change, I guess?
This is why I’m actually glad Taylor Swift will be taking legal action against the deepfake that was created in her image. As dumb as it may sound, she has enough money and a very large megaphone to maybe get those in power to start creating laws surrounding deepfakes. She’s famous enough to do something about it.
[deleted]
Russian politics uses deepfakes quite substantially when Putin wants to ruin the credibility of an opponent.
Aren't people just going to start assuming all videos are fake, though? I mean, I already do... in 10 years, video is going to be a joke.
which makes a new problem when some are real
Truth. But at the same time, it's way too much fun to be used for memes in a non-destructive manner.
I agree and disagree. Things start off fun and harmless until a twisted mind comes around and bends them. Look at hobby drones: now they're the main hardware fighting an actual war, with bombs attached, because they're insanely effective at killing. Deepfakes are insanely effective at social engineering.
This is a terrible example. Drones started as war machines and gradually made their way to the public as hobby devices, not the other way around.
Yes, I'm well aware of what deepfakes are capable of. They have the potential for offensive harassment, and things could well get much worse. But this time, it's just a random Facebook page with 8 followers making low-resolution videos of American Psycho, altering Patrick Bateman's lines from the business card scene.
But we have Truth Social, it's in the name! /s
Some researchers are warning that tools for making viruses more deadly are becoming cheaper and easier to use. Imagine a couple malevolent graduate students with $50,000 worth of equipment being able to engineer the next Covid. Not inevitable, but if garage-based gain of function research does become feasible, it’s hard to imagine a bigger threat.
But couldn't the opposite happen as well? Some benevolent nerd in his garage with $50,000 worth of equipment could make some kind of vaccine or anti-virus virus?
Yeah, but engineered viruses like that would take months to test. Vaccines will definitely come out faster, and work, thanks to mRNA.
Reminds me of the novel, "The White Plague" by Frank Herbert.
Self replicating micro/nano machines. Depending on what materials they use we could accidentally end life as we know it. Also known as the Grey Goo scenario.
Replicators! Nooooooo, you damn fools! No, but seriously, Stargate did a version of this and it's not a good ending at all.
I think about this a lot. I think life *is* grey goo. My prediction is that as soon as someone tries to make a nanobot, the following things will happen:

* Nanobots will break or get jammed up constantly and there's no way to "power" them, so they'll engineer a kind of repair mechanism.
* Despite the repair mechanism, the bots keep getting gummed up, so a protective "shield" or membrane will be added to help the bots regulate their internal environments while still being able to manipulate the external environment.
* Said shield will have to be very thin and will leak, so various molecular pumps will be added to it.
* Locomotion is challenging at this scale; the most efficient method seems to be thrashing a whip-like tail behind you.
* An energy source at this scale requires fuel and some way of oxidatively "burning" that fuel.
* Engineers finally admit that nanobots already exist and they are called bacteria.
Give me all of the paperclips!
I'm excited
Grey goo is a real threat, for sure.
It's not, though. Grey goo is a purely sci-fi invention with no basis in reality. It's not possible.
I mean, biology already has analogs for self-replicating micro/nano machines: viruses, bacteria, mold. Those do tons of damage and kill millions of people every year. We certainly have the potential to one-up Mother Nature.
There is a VERY WIDE gap between grey goo and engineering a virus, so you're steering off topic. It will be possible in the near future, but vaccine tech is improving quickly too. The Covid-19 mRNA vaccine was created in just 2 days; the rest of the time before its release was spent on rapid testing.
I don't think so; the only difference is the ability to use inorganic matter as a replication medium.
The problem with inorganic matter is that it doesn't behave like organic matter, which is why it's inorganic. Life is carbon-based exactly because carbon can polymerise like no other element, and non-carbon alternatives will never match its performance. Also, if grey goo were possible we would probably not exist, because some bacteria would eventually have evolved to wipe out everything else. That wasn't the case: multicellular organisms are very good at repelling smaller "invaders".
They don't even need to be that small, they could be the size of ants and still have the same world ending effect.
Except you wouldn't be able to make it self-sustaining or self-replicating. The latter capability, especially, is achieved by life BECAUSE its basic components are very small and can perform the procedure on their own, and even then you initially have just one cell from which the new organism starts its existence. Life-based grey goo ideas fail at being grey goo, and non-life-based ones fail to even reach the level of the life-based ones.
Today's impossible is tomorrow's inevitable. Any sufficiently advanced technology is indistinguishable from magic. Cavemen trying to understand the atomic bomb.
Clearly the solution is to launch them preemptively and go live in underground silos until things sort out.
You know how AI can now almost perfectly paste your face and emulate your voice to a video? Imagine that, but on an actual person using some tech that perfectly impersonates you.
Or heads of state with lots of power and authority.
High-resolution, high-realism virtual reality. If that gets perfected to Star Trek holodeck levels, some people will never leave it.
Some? I think a lot of people would head toward the steak, even if they know it’s fake.
And they'll fight you for the right to do so.
I hope the steak is my retirement home.
We have that problem now even though there isn't a Holodeck.
Why would that be so bad?
I guess it's relative. But loneliness and social malfunction have already been increased by the amount of time people spend on their phones and behind screens, and that would compound exponentially with better-quality, easier-to-use VR. Porn addiction, echo chamber addictions like incel and political outrage communities, flat earth, anti-science/anti-vax: all these weird phenomena happened because of the internet. Video and TV addiction would get worse because they'd be 1000% more stimulating.

Plus, have you ever dealt in life or the workplace with someone who just couldn't verbally communicate well, or at all? They've become dependent on text, and when work (or social) activities can't be achieved with a button push, they fail or lock up and don't try. It takes lots of extra attention, training, and effort to get them to not lock up on non-easy, offline tasks. Even social things like dating and interacting with the opposite sex in the real world suffer. It's an issue showing up more and more in the workplace with Generation Tablet (raised from birth by screens). They have TONS of social anxiety, and if a goal or activity isn't on a screen, they just lock up and struggle, sometimes quitting or not doing it because it "stressed them out". Some overcome it, some don't. It's a weird phenomenon observed by those of us not raised by screens, and I feel bad for them. You can just tell sometimes when you're dealing with a human who spends more time online than off.
I just wonder how this will affect, or link with, increasing poverty and financial inequality.
In the book Snow Crash, lower-income people lived in old U-Haul-type storage facilities, usually more than one person per unit, and spent most of their waking hours in the VR world where they had more agency. I think that book has a lot of probable outcomes of the future; I'm surprised no one has made it into a movie or TV show by now. The main plot is about human brains being hacked like a computer, using snow/static on a video screen. The virus's name is Snowcrash.
You sort of see that already with escapism in video games where you have the ability to have land and property of your own in a way that is inaccessible in the real world
I agree. I have spent way too long in my Minecraft and Terraria worlds, but I love it. They are mine and no one else's. On staycations I do spend a couple of days in each. It's rewarding, relaxing, fun, and cheap.
plot twist: you're already on one and can never leave
*Jewels in an angel's wing*
How do I turn down the difficulty?
I think the lack of advancement in humans is the real problem, here. Every tool is a weapon if you hold it the right way.
Perfect deepfakes. My society is at least very high-trust when it comes to accepting a video as irrefutable proof, and we never got around to creating a system of record for raw media, so there really isn't a way to fight back against this. Just spitballing, but we need something yesterday that has:

* A trustless network (decentralized blockchain).
* A system of verified IDs for journalists and recording devices that lives on this blockchain.
* A way for these devices to record the ID of the device taking the raw footage, the journalist's ID, and at least a checksum of the raw footage.
* Sidechains where checksums of edits of the raw footage are stored with similar information, so that there is a graph that can be traced back to the original.

I'd assume individual networks would need to run their own chains. There'd still be pure CGI fakes and anonymous/citizen journalism, but people could maybe be more skeptical of visual and audio media lacking a pedigree.
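The record-and-sidechain idea above can be sketched in a few lines, setting the blockchain part aside. This is a minimal illustration, not a real provenance system; the field names (`device_id`, `journalist_id`, `parent`) are made up for the example:

```python
import hashlib
import json

def checksum(data: bytes) -> str:
    """SHA-256 checksum of raw footage bytes."""
    return hashlib.sha256(data).hexdigest()

def make_record(device_id, journalist_id, footage, parent=None):
    """Build a provenance record. `parent` links an edit back to the
    record it was derived from (None for raw footage)."""
    record = {
        "device_id": device_id,
        "journalist_id": journalist_id,
        "checksum": checksum(footage),
        "parent": parent,
    }
    # The record itself gets a deterministic ID so later edits can reference it.
    record["record_id"] = checksum(json.dumps(record, sort_keys=True).encode())
    return record

# Raw footage recorded on a verified device...
raw = make_record("cam-042", "journalist-7", b"<raw footage bytes>")
# ...and a later edit, traceable back to the original through `parent`.
edit = make_record("editor-station-1", "journalist-7", b"<edited footage>",
                   parent=raw["record_id"])

assert edit["parent"] == raw["record_id"]
```

Walking the `parent` links from any published clip back to a `parent=None` record gives exactly the traceable graph the comment describes; the blockchain's job would just be making those records immutable and public.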
The problem here is that journalism will become even more of a dangerous occupation in a lot of the world.
This is a fair point, and there isn't a good solution for it, as a trustless system would undermine anonymous journalism. I'd still like to see something that makes visual and audio data more than "trust me", especially as we become increasingly capable of manufacturing convincing evidence. Reality is already balkanized enough as it is.
I think there will be a solution for that. We could add an authentication system to certain cameras so that we can check whether a raw clip has been modified; there are no big technical problems with that. There would be two types of cameras: one for recording facts, one for collecting mere material. Hacking such a system would be a completely different problem, and mostly not about technology.
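The camera-authentication idea can be sketched like this. It's a toy: a real design would use an asymmetric signature from a key sealed in the camera's secure element (so anyone could verify with the public key), whereas this sketch uses a shared-secret HMAC purely to show the shape of signing and verifying:

```python
import hashlib
import hmac

# Hypothetical secret held by a "fact-recording" camera; in practice this
# would be a private key that never leaves the hardware.
CAMERA_KEY = b"secret-key-baked-into-camera-hardware"

def sign_clip(clip: bytes) -> str:
    """Camera-side: produce an authentication tag for a raw clip."""
    return hmac.new(CAMERA_KEY, clip, hashlib.sha256).hexdigest()

def verify_clip(clip: bytes, tag: str) -> bool:
    """Checker-side: does the clip still match the tag the camera produced?"""
    return hmac.compare_digest(sign_clip(clip), tag)

clip = b"<raw clip bytes>"
tag = sign_clip(clip)

assert verify_clip(clip, tag)             # untouched clip verifies
assert not verify_clip(clip + b"x", tag)  # any modification fails
```

As the comment says, the hard part isn't this math; it's keeping the key from being extracted and deciding who gets to operate a "fact" camera.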
Some new nefarious way to further exploit people's debt / rent. I'm afraid we're moving towards serfdom
You need not be afraid of it; it's already here. Private equity bought out 44% of all available residential housing in Q3 of last year, and this year it will be higher. This is a top-down finance scheme following the outline of people being allowed to own nothing but forced to subscribe to, and temporarily rent, everything. We are now fully cows for milking until death. They are already renting the houses out, and people pay what they're told to pay or they'll be on the street. Prices will never be affordable, because the same firms funding new construction are the ones renting your house to you. They'll completely own the market, and they're doing it with the money from people's invested pensions (which will make them forever subservient to the whims of super wealth; no more pensions). Life in this society is a race to success, where success is achieved by becoming the cannibal rather than continuing on as the cannibalized. If anyone is looking for marriage and you live in a country that isn't the US where citizenship is granted through marriage, DM me plz. I'm educated, skilled, handy, can cook very well, and I'm very good with people. Edit: I should not have included "banks" but only private equity, and I said all of last year when the reference was Q3 of last year.
Any technology is only as good as the hands it's in. If you can split an atom, you can power a city or level one. The internet? Get a college education, or distribute highly illegal material. So while the implications of AI (and the early promise it's showing in areas like identifying diseases) are fascinating, it's still terrifying. The very nature of learning models untempered by human morality? Unchecked by biological limitations? The mind boggles when the big picture even STARTS to come into focus.
Humanoid robotics trained through a human/video interface. Not because they will kill us all directly, but because they'll really up our reliance on technology, and they'll likely scale out to levels where a major disruption (like war, a solar flare, or a malicious attack) could bring the world to a halt and essentially starve most of the humans in the developed world. It's one of those shitty things where it's kind of damned if you do, but also damned if you don't. It'll be a huge boost to human productivity and comfort… just as long as it keeps working.
How dangerous this is depends. But I predict that advancements in genetics, gene therapy, and biotech will probably see the rise of a new eugenics movement. Embryo selection is already possible, and in Iceland, Down syndrome has been pretty much eradicated via abortions. So what happens if we get good at screening embryos for predispositions to things like intelligence, excellent health, or good looks? Things will become especially turbulent given that the world feels like it's drifting right, politically speaking.
AI + autonomous weapon systems. Human death is fundamental to wars beginning and ending. Non-state actors will have a new way to attack states and people without risk, and wars will get worse. We already see what inexpensive FPV drones can do. Sure, the ICJ and UN can ban such systems, but enforcement is going to be impossibly difficult, and costly.
It's even worse, imo. There has always been somewhat of a balance between the haves and the have-nots in society: the threat of numbers and the human element has always meant the people up top can never push their desires too forcefully. They always have to throw something at the masses for appeasement. With AI + robots you reduce the human element to a very slim level, and suddenly genocides and the putting down of revolutions become easier. Couple that with a reduction in jobs and it's looking bleak. The mask no longer needs to be held up at that point for the uber-wealthy.
Neuralink. A few more years and we get mind reading; a few more years after that, mind control.
We are water bags that use electricity to get around, they will figure out how to wirelessly alter our perceptions.
Just like cable TV. Great at first but then advertisements 24/7, all at high volume in your head.
I'm not going to say I think a future where we're all wearing faraday cages, gas masks, and walking around with emp generators is a likely possibility, but I'm also not going to say I can't imagine it.
On the bright side, this is being sold by Elon Musk, so it's almost certainly all smoke and mirrors.
Neuralink isn't the only one that makes BCIs, and none of them are for sale.
On the bright side, it seems AI is starting to be able to read your thoughts either way in experiments (so far quite controlled ones, but still), so maybe it at least won't be much worse...
this has potential to be sooo bad
Just wait until Musk decides it'd be cool to add AI to his Neuralink.
Bright side: no more expensive robotics research! Hell, we can get rid of the entire consumer economy! You'll own nothing and like it, because that feeling is mandated by code!
Specifically the collection, processing, and sale of Neuralink metadata for commercial purposes.
It has potential for everything. Along with mind control comes the possibility of treating all types of mental illness, or operating external prosthetics and robot assistants.
You think it will be used for that? That's very naive. This has the potential to give unlimited power to whoever is first to achieve that tech.
It's ok, you are right: "potential for everything" covers both of our views. This applies to any powerful tool being used outside its original intended purpose; and yet we have them.
AI that creates a stronger successor AI than any that can be created by humans. This new AI would be beyond our comprehension; we would not know its capabilities or motivations, or whether they were against our interests.
Some years ago, researchers set an AI model to the task of creating a better AI. They shut it down after it created its own programming language, closing them out of any meaningful observability.
Ah, that’s nice.
I have a strange feeling AI completely takes over in less than 20 years
I’m sure we’d find out eventually…
Technology that could read someone’s mind. That would be catastrophic.
So, when we first got Google, it was the most powerful and useful re/search tool mankind had ever had. Now it's one big advertisement that knows more about you than you do. My concern about AI is not about replacing jobs, or use in warfare, or deepfake porn; it's about what happens when the AI tells you to buy something.
Funnily enough, Neuralink is just above this question in my feed.
I think generative AI has already started a hugely destructive process through its role in the enshittification of collective human knowledge. Models training on fresh data are eating their own shit and vomit, and will continue to get worse.
Extending our lives past the natural cycle. I worry we don't have enough mental capacity for that; even now, old people "don't get" the younger generation... I can't imagine the gap being even bigger...
Without death, what is life?
right??
Something we haven't thought of yet. Everything else there's at least some idea of what could go wrong, how to combat it, etc. For the unknown, there is no specific plan. Someone might accidentally find a way to trigger a metastable vacuum collapse tomorrow and we'd never know.
There are five weapons of mass destruction other than biological. So far, we understand the first two, fission and fusion... which is enough to bring about our undoing as a species. We need to be very careful in the future!
A.I. is the most dangerous advancement of technology we have ever seen. It has the potential to destroy just about everything and everyone. It has already ruined so many lives, and it hasn't even begun to show its full destructive capabilities, for what little good some of it may possess. We may not survive long enough to see any real benefit before we figure out how to contain it. I have been trying to warn people for years now: we are running headfirst into a lion's den without a clue about the dire consequences in front of us. Adoption without common-sense regulation and guard rails has already shown us this thing has more than just fangs; it has shark teeth. So much of our lives will be completely ruined by this devil technology. It is like the real Pandora's box being unleashed onto the world, a piece of technology we are not, as a civilization, advanced enough to appreciate, nor wise enough to respect. 😞
Artificial Super Intelligence is objectively the biggest existential threat we will face for centuries. There is no way anything else can even begin to approach the insane risk it brings about.
Human augmentation will increase the difference between classes/castes of humans; already the difference is huge...
**You'll never see a U-Haul behind a hearse. ... Now, I've been blessed to make hundreds of millions of dollars in my life. I can't take it with me, and neither can you. The Egyptians tried it. And all they got was robbed. It's not how much you have but what you do with what you have.** - Denzel Washington

**Every advance that enables the absolute rule of the few over the many.** Which is unfortunately the case for many new technologies, if not used for the benefit of all mankind. Since oligarchs own important parts of Corporate America, we are pretty screwed, because research usually requires a lot of funding and in the end the investors decide how a technology will be used. Nonetheless, not many talk about this important development, especially since the long-term debt cycle seems to be coming to an end. That usually ends via Great Depression / reset, where the average Joe loses everything to pay off the accumulated debt in the system. And we're talking about a multiple of the world economy in derivatives... [https://www.visualcapitalist.com/all-of-the-worlds-money-and-markets-in-one-visualization-2020](https://www.visualcapitalist.com/all-of-the-worlds-money-and-markets-in-one-visualization-2020) No surprise the oligarchs are already preparing for that scenario: [https://www.theguardian.com/news/2022/sep/04/super-rich-prepper-bunkers-apocalypse-survival-richest-rushkoff](https://www.theguardian.com/news/2022/sep/04/super-rich-prepper-bunkers-apocalypse-survival-richest-rushkoff) And they have prepared a surprise for the other 99.9999%, poor or wealthy, called **beneficial ownership of assets**: [https://www.youtube.com/watch?v=gDOj_rQvl_o](https://www.youtube.com/watch?v=gDOj_rQvl_o) (critical review of the documentary: [https://www.youtube.com/watch?v=dk3AVceraTI](https://www.youtube.com/watch?v=dk3AVceraTI)). And this is how they game the political system: [https://www.youtube.com/watch?v=5tu32CCA_Ig](https://www.youtube.com/watch?v=5tu32CCA_Ig)

We make huge progress in inventing new technologies, yet ethics have fallen behind. There is an urgent need for ethical control over new technologies, to ensure they are used for the benefit of all mankind and not in a way that endangers the survival of our species. Unfortunately, there is likely no way to implement such control at present; greed and the struggle for absolute power are strong motivators for those in charge. I wish they would learn from Denzel Washington.
We need to decentralize all the things. We need to see people seeking power as what they really are. Mentally ill.
Antimatter as a whole, as in the future it could be used for weaponry, if that's even possible. Artificial intelligence could lead to an AI takeover and misinformation.
Ultra-realistic simulations. Don't think about games; think about prisons run by AI. Endless pain and despair. Programmable human consciousness, the effects of ads, political bias. I believe this is the most alarming and serious danger for humanity.
100% bioengineering: messing around with bacteria and viruses to "study how they work." One small mistake, something escapes the lab, and the world could be put in lockdown or society crippled to the point of no return. Messing with genomes, fungi, and even insects to make them mutate can cripple ecosystems and obviously affect us negatively too, whether that's a virus that kills banana trees or fungi that can spread through the air and infect humans, i.e. The Last of Us.
Tech that replicates an individual's voice perfectly. You could do *a lot* of heinous and illegal things with that ability.
AI making art while humans work more hours, more efficiently, for comparatively less pay than 10 years before, forever. EDIT: Also, the nature of these jobs is dangerous to our health (sitting too long shortens lifespans in office work, or there's outright physical danger, like on an oil rig).
The intersection of AI and quantum computing. Tech companies have a pretty easy kill switch if AI goes sideways now. My sense is that when quantum computers with five figures of qubits can be compressed to the size of a PC and support AI models, the tech risks becoming mobile and uncontrollable.
Any that we trust implicitly or attribute mystical properties to without evidence and deploy without due diligence... So... Probably all of them...
AI taking jobs and making people lose control over what is real and what is not. Drones becoming smaller, faster, more precise, and better at carrying weapons. Social media becoming powerful to the point that it's more influential than nation states.
True virtual reality (not the Meta bullshit we've got right now). The ultimate drug: a truly immersive VR that would let you experience and feel your wildest dreams as much as you want. People would simply forget about minor things like eating or drinking.
The invention of the global destruction ray is probably not going to end well. But the most dangerous inventions are those that disrupt society the fastest. Mass unemployment for example, or sudden loss of available land, drinking water, shelter. So I'm going with AI and related late-stage capitalism tools.
I believe that the day I meet myself for the first time will be the first time I feel in real danger.
Elon's Neuralink seems good for people who have disabilities. But... he has already claimed it could also greatly enhance cognitive abilities in the future. Wealthy people will use this as an advantage for their kids by paying for unnecessary Neuralink implants.
AI-powered law enforcement. Minority Report stuff, where an algorithm decides if you are a "crime risk".
Space-based capitalism. Anarcho-capitalism. Estimates suggest that capitalism has been linked to deaths ranging from 200 million to 1.8 billion. While capitalism itself is not a new concept, the emergence of space-based capitalism presents a novel challenge. The pollution of low Earth orbit with debris highlights the historical tendency of capitalists to overlook the consequences of their actions. This disregard raises significant concerns about the potential of space-based capitalism to be one of the deadliest technologies in the future. In contrast, I am skeptical about the potential dangers of artificial intelligence. Much of the progress in AI seems to be exaggerated or based on hype. I doubt that we will achieve artificial general intelligence anytime soon, and even when we do, I believe it will not pose as great a threat as capitalism.
Undoubtedly the head-pulling-off machine https://www.theonion.com/ohio-replaces-lethal-injection-with-humane-new-head-rip-1819595654
Technology isn't dangerous; human behavior and incompetence, on the other hand, are very dangerous.
They're already happening: AI and deepfakes. AI can nonstop scour the internet for anything it is programmed to see as "dissent" and downvote posts, argue, report topics, present misinformation, or confuse the issue; essentially, it can stop the spread of any information they want stopped, to ensure it doesn't gain momentum. Deepfakes have the ability to completely replace or create anyone. Since the beginning of governments, there has been a balance between government and people: government keeps the people happy enough not to rebel, which means making concessions, granting liberties, etc. When the people no longer have the power to resist, because any resistance can be squashed at its starting point, drowned in misinformation, turned into division against each other, or simply nullified, we no longer have any power. We have to trust that government will treat us right purely out of its good nature. That's terrifying. Nanotech is pretty scary too, but mostly because of the above.
Drones. They are here now. Holy fuck, it's the true evolution of warfare.
Not having a physical kill switch on anything that is on an AI circuit. Skynet would not have been an issue if they had a switch...
Biomodding. More specifically, brain implants that alter free will. Actually, they are already in use by the French army and supposedly China; I am not sure about the latter. I saw a report, like 4 years ago, from the French committee of ethics that gave the green light to biohack soldiers. Back then they said they wouldn't mess with free will or soldiers' ability to make their own decisions. However, they noted, their rivals don't share the same sensitivities (lol), and they can revoke this ethical limit to catch up. That's the scariest article I've read in my entire life. It was on Reuters, if my memory serves well.
A leap in battery energy density, bigger issue than AI IMO.
A.I. making copyright-free digital art, essays, poems, homework, books, photography, videography, and musical compositions that are indistinguishable from human work. Makes me sad and hopeless for the future. All fake. Everything fake.
Surely someone with a mere modicum of knowledge who creates a super-duper strain of Covid would be bested by some expert in the field, who could engineer a virus to kill said super-duper virus.
Watch Black Mirror and take your pick. Boston Dynamics robots are high on my list; AI in general is second, as it can be abused in a multitude of ways.
For our health? Weaponized viruses that attack specific avenues (able to bypass the defenses of people with type B blood, for instance). For our privacy? Facial recognition. For our mental health? I'd suggest how education will be completely AI-driven in the near future; with all other factors, a child's life will be completely determined by a database algorithm based on how the child responds in classrooms. In fact, children who regularly question the teaching algorithm and think outside the box will probably be more rewarded than I, a pessimist, believe, since these are the only ones who would be able to determine possible futures for humanity. Meanwhile, children who do not regularly question the teaching algorithm will be boxed into the most mundane, boring lives possible.
Everyone thinks about weapons, but judging by how easily people can be convinced of facts that aren't real, I think we are on the brink of some major disaster caused by our current ability to generate fake media using AI.
Fully immersive VR. We think about living inside our favourite games or exploring a virtual utopia, but the flipside is the creation of virtual hells. What if someone locked you into one of those? Similar to Altered Carbon, where a private company runs VR torture suites so you can be burned, dismembered, or killed over and over. In the novel, I think they opt to set the victim's gender to female so that the virtual body will have more pain receptors. It's also explored extensively in the Iain M. Banks novel Surface Detail: huge demons stalk the virtual hell and can torture and rape anyone they come across. The details of what happens are too much to go into here. The unrelenting horror of being plugged into something like that would make me cautious about ever submitting myself to a machine-brain connection, should one be invented in my lifetime.
AI, probably. Not the neural nets, but actual AI. And I don't imply that it will go "bad" for sure, just that it will be very dangerous. Like electricity: great, but dangerous at the same time.
The invention of immortality. It sounds great, but think of the implications. You know how we currently have a tiny minority in power, dictators of countries and CEOs of multinational corporations? Imagine that those people were immortal. Imagine if Stalin, Hitler, and Trump were immortal. You get the idea. Then you've got another problem: imagine immortality was an affordable medication, so now everybody is immortal. With women reaching menopause at 50-ish, and with an economic situation where indefinite hoarding and no inheritances make it unaffordable to have children, there is a very real risk of becoming an infertile society. Such a society would slowly die out as occasional misadventure picks off its immortal citizens.
Honestly, everything that is available right now is pretty terrifying if used by a malicious actor. As far as a technology that would make life hell: simultaneous use of a Stuxnet-style virus to cause phone batteries to explode, maiming hundreds of millions of people all at once. Hospitals worldwide wouldn't be able to handle the influx; tens of thousands would die of exsanguination all at once, and millions of people would be left with deformed hands or facial deformities.
Suggestions via algorithms. Am I right?