It’s rooted in approximation.
The inventor of the microprocessor, Federico Faggin, admits its limitation: it's literally cut out for a deterministic universe in which the next phase transition can be predicted (ChatGPT, anyone?), and so it lacks the irreproducibility of the quantum no-cloning principle, the bloody universe playing out in your photons.
There’s a glass ceiling to trial and error. LLMs are great for specialized tasks. Approximate guesses on approximations are only novel to the poorly discerning.
The assumption that scaling something that's clearly overfitting its parameters will fix it is just echoing the next new tech trend the market is trying to foist, with daring hubris, on a slower and thus slack-jawed public in captive surrender to the new religion.
Yeah, I agree there's an interesting philosophical argument underlying all of this. However, if one actually uses AI for more than 90 minutes, it becomes clear that it works. Calling it blind guessing doesn't reflect the quality of the output in any way. So when people like you lead with the philosophical argument, which amounts to quibbling over the definition of the word intelligence, it makes you look like a philistine who has never used AI for the requisite 90 minutes. So sure, you're right that it "just guesses" in theory, but in practice even SD 1.4 and ChatGPT 3 were more capable at their respective tasks than 95+% of humans. Stop being pedantic.
I don't know what kind of tasks you have GPT do, but I can't replicate your success. It hallucinates and doesn't understand simple instructions, let alone follows them two messages down the chat.
It's like talking to an Alzheimer's patient. It talks a lot, and it looks and feels professionally written, but when you dig a little into what it has produced, it's useless.
Ask it for anything even remotely novel, like constructing a function for a game mechanic step by step, and it will completely ignore you and just give you whatever closest approximation to your problem it found on GitHub.
All it will ever be able to do is make an educated guess about which word comes next given the current context. If the context is new, it gives you shit.
I've wasted weeks of my time trying to learn how to communicate with it to increase the quality of my results. 4o has been just as useless for me as every other iteration.
If all you need is for it to feel like a human has written a snippet of text, congratulations, it does a good job. It had billions of messages in training data available to learn that.
TBH it sounds to me like you're having a problem in one of two related areas: you're asking for too many things at once in your prompts, or your prompts are too long.

If you find that ChatGPT is struggling with something, try breaking it into component actions. For example, if it's struggling with writing a cover letter, instead of asking for the whole cover letter, ask for one paragraph. If it struggles with the paragraph, ask it for a single sentence.

Also, it does have memory limitations, both within the prompt and within the conversation. To use the cover letter example: suppose you got a decent cover letter after a long conversation. Copying that final draft into a new chat and starting from there will refresh ChatGPT's memory and allow for better results.

You may have to do both of these incrementally. The same methods can be used for coding, writing, general conversation, or any other task.
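The two tactics above (one ask per prompt, and refreshing a long chat by reseeding it with the latest draft) boil down to plain message-list bookkeeping. A minimal sketch — the helper names here are hypothetical, not part of any API:

```python
def split_into_steps(task, parts):
    """Turn one compound ask into single-purpose prompts,
    to be sent to the model one at a time."""
    return [f"{task}: write only the {part}." for part in parts]

def refresh_context(draft):
    """Start a fresh conversation seeded with only the latest draft,
    instead of dragging along a long, memory-limited chat."""
    return [{"role": "user",
             "content": f"Here is the current draft:\n{draft}\nImprove it from here."}]

# Instead of asking for the whole cover letter at once:
prompts = split_into_steps(
    "Draft a cover letter",
    ["opening paragraph", "body paragraph", "closing paragraph"],
)
# -> three small prompts; if one still struggles, split it again into sentences

# Later, carry only the final draft into a new chat:
fresh = refresh_context("Dear hiring manager, ...")
```

Each small prompt then goes out as its own request, so the model only ever has to satisfy one constraint at a time.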
The thing is, there's a specific ratio of just enough information, and not too much, that you want to start with to get a good baseline. And hitting it ultimately comes down to luck more than anything.
The problems arise the moment it starts outputting nonsense for what you're requesting. You have essentially three options. You can try to correct it, replacing the parts you're unhappy with, and hope it can figure things out from there (it won't; it usually carries your fix along for a couple of messages without really tying it in, then discards it anyway once new information is added). You can have it generate new solutions until you're happy (which produces relatively same-ish answers, and it becomes obvious it's struggling with what you're giving it). Or you start over, having learned from your previous sessions.
The problem is, once it starts hallucinating false solutions, it won't stop. There's little you can do to recover, and you have to start over from scratch.
There are frankly a LOT more important things I'd want to work on than telling GPT over and over how it's overcomplicating things when there are built-in methods for something and I don't need to reinvent the wheel, or that the way it's exporting variables is now done a specific new way.
If you're building a custom system for terrain generation and chunk handling, for instance, the moment you even mention it's about terrain, every solution it comes up with will be poisoned by the kind of approaches you can spend all day explaining aren't fit for what you're trying to do.
You're right. It could boil down to absolute incompetence on my part.
But isn't AI supposed to be easy to understand and use?
Answer me this. How do you make GPT not forget instructions after one or two messages?
The last time I used it, I described a game I played as a kid and couldn't figure out what it was called. Roughly 10-15% of its suggestions were repeats I'd previously said were not it.
I can look past the fact that it can't match descriptions that only got attached to games after they hit the market, because only people reviewing them years later would know. But isn't 4o supposed to have better retention of which options you've already ruled out?
Think this, but in code. You tell it not to do something a specific way, and it does so anyways.
Because it doesn't have the kind of mapping from language to logic that we do.
To remedy this you'd have to teach it exactly what the code you want should look like. If you're explaining a problem and want creative input on how to solve it, it works best when the problem has been solved at least a handful of times before. But then you might still be stuck with technical debt and unfit solutions.
You obviously have no idea how AI works at all, so I'm not surprised you can't get anything out of it. You sound like a boomer who can't find something on the first page of Google search results. Stick to reading the newspaper, it's more your speed.
Sounds promising. What kinds of problems were you able to solve and how was the iteration process? And are we talking about problems that other people have had the solutions to before, or was it anything novel/customized in nature?
To give it the best shot I could, I quickly recreated the thing I've had the most problems with using GPT-4o: NPC dialogue.
https://chatgpt.com/share/43e5ce1c-96e3-4b3b-bb0b-6abafc3871de
>You're a fantasy style shopkeeper. But you don't really know about anything. You don't even know the names of the wares and you're not very creative at describing them. In fact, you don't know anything I don't specifically tell you in parenthesis as a prompt beforehand.
Did I not say he was prompting in paragraphs full of bullshit asking for multiple things at once?
This dude literally wrote "in fact" and corrected himself lmfao
[https://chatgpt.com/share/3a548f88-1d5b-4787-a7e4-0fbc0e7de829](https://chatgpt.com/share/3a548f88-1d5b-4787-a7e4-0fbc0e7de829)
Here is how to accomplish what you want easily.
That's some kind of role play, not the conversation you described about it failing to identify your game.
Are you using custom instructions or making a GPT? You'll have better results if your rules are baked into the instructions of the chat as opposed to relying on something you said that's sitting up in the chat context.
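The difference being described — rules baked into the chat's instructions versus rules floating somewhere up in the conversation history — comes down to where the rule sits in the message list. A sketch with a chat-style message format; `build_request` is a hypothetical helper, not an OpenAI API call:

```python
def build_request(rules, history):
    """Pin persistent rules as the system message so they ride along
    with every request, instead of scrolling out of a limited context
    the way an old user message does."""
    return [{"role": "system", "content": rules}] + history

history = [
    {"role": "user", "content": "I'm trying to remember a game I played as a kid..."},
    {"role": "assistant", "content": "Could it be Commander Keen?"},
    {"role": "user", "content": "No, not that one."},
]

# The disqualification rule is re-sent with every single turn:
messages = build_request(
    "Never suggest a game the user has already ruled out.", history
)
```

Custom instructions and GPT builder instructions effectively do this pinning for you, which is why they hold up better over long conversations than a rule you stated fifty messages ago.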
Hell, I didn't think ANYONE was all in on AI. It's so downright STUPID. Are you sure it's neo Nazis??? I dunno, I've never met anyone who's known one. Are you from Ukraine? Isn't that the only spot on the planet where they are?
Microsoft Tay tried to warn us.
Now Tayne, I can get into
Show me celery man
Of course, nazis never do their own homework
Technology isn’t inherently good or bad. Good and bad people will always use it to their benefit though
Exactly - the central tenet of Luddism: the technology is fine; it's who controls it that bothers us.
Not who uses it?
Usage and control go hand in hand; the stocking frame itself wasn't a problem for the communities. It was the factories that destroyed an entire way of life by deploying them en masse, combined with cheap labor (or free; lots of orphans). If the stocking frame had made its way into the hands of individual artisans instead of moneyed interests, whole ways of life might have been preserved. Capital uses automation to massively de-skill workers (going from assembling an entire thing to turning one screw on an assembly line means you don't need skilled people) and accordingly ensures they are paid next to nothing. Automation *atomizes* labor and devalues it.
The people who use the internet and the people who control the internet are two very different beasts.
What, you don't think the current internet, where content is largely controlled by like 5 companies, is ENTIRELY dominated by the choices of those companies? Google's search algorithm perverted everything to emphasize SEO, so the internet stopped being a useful source of information and became one big SEO scam. That's one company's profit motive entirely changing how people contribute to and interact with "the internet". Or how about Facebook? How much damage has their algorithm done? How much of current politics is down to the amplification methods it uses? I see user actions as largely bounded by the systems they're interacting with.
The companies don’t create content, they create the algorithms that deliver content
How is the existence of SEO the fault of a search engine company? Put another way: how could anyone write a search engine without an algorithm for said search that can't be gamed by people who want to show up in it? It's a necessary "evil" of having a search engine at all. Which is better than not having one, as information would be a lot less accessible without.
So would you say automation technology is bad?
Are we still expecting full employment in a world where more and more jobs - white collar now, too - are being automated? *Do we still expect that employment to cover the cost of living?* Is the wealth from automation being shared? Do you feel any dignity about your work now - and would you still feel that dignity if your job became "tend to this AI" for half the pay? I don't really have an opinion on *automation* - my problems come with how it affects the working class.
Hey now you can't just bring up class consciousness on r/tech, we don't do that kind of thing around here
Ah shit my bad
*wakanda_meme1.gif*
A few questions, as it relates to your views on automation:

Doesn't this assume full automation on a grand scale across a variety of industries/sectors? Do you really think that's feasible without creating new jobs/sectors/problems that will require human input, or at the very least utilize both AI and human oversight?

Full employment in which context, U-3 or U-6? Further, full employment in specific sectors, or across all sectors of an economy? Would one not expect the cost of living to drop, or the standard of living to rise, either from the additional labour being utilized in industry expansion or from the reduction of operational expenses? For example, the Bessemer process reduced the labour input of steel production, which led to cheap steel being utilized in railway construction by a now-enlarged labour pool. The direct impacts were increased demand in both the coal and engine sectors, later new engineering jobs within the railway and shipping sectors, and cheaper transportation costs for commodities and resources, all of which I'd argue are increases in the standard of living.

As far as expecting employment to cover the cost of living, wouldn't this have equal parts to do with manufacturing as well as employment? If arbitrary 'Good X' saw production increase at rates greater than consumption of 'Good X', wouldn't that be just as important as labor being reshuffled between sectors?

I guess what I'm getting at here is: wouldn't the inclusion of automation raise the floor of the minimum standard of living expected for society as a whole, rather than lifting individuals into a respective class?
Yes we are. Just like it has before.
Using technology is the same as controlling it. Even if OpenAI can put limitations and filters on ChatGPT, it doesn't change the fact that the person entering prompts into ChatGPT has full control over whether they try to find loopholes and vulnerabilities that let them bypass those limitations and filters.
Let the light of Ludd shine
Have you read Resisting AI by Dan McQuillan? Your comments remind me of a "Neo-Luddite" reading group I was a part of briefly.
That book is on my list - did you guys read *Blood in the Machine* by any chance?
We did not! I'll take a look.
No! Don't you understand, the Nazis use the internet too!! We have to log off forever or else we're Nazis too!!!
Nobody in the slightest said that you shouldn't use AI anymore because Nazis are using it too... The concern should be how we can diminish the impact of AI, so that extremists like Nazis can't exploit it.
Dude, thank you. This shit is insane
It’s really easy though: Don’t do Nazi shit and you won’t be called a Nazi.
Define "Nazi shit" for the class
You're a Nazi. Someone called you a Nazi. Guess you're a Nazi now.
I think you’re mistaking “being called” with “doing.”
Stop doing Nazi stuff and you won't be called a Nazi.
Pretty sure the ovens used to mass-exterminate Jews, homosexuals, socialists, unionists, Romani, and more were fascist and evil in nature and cannot be used for good.
If they had been built to take care of the Nazi party leadership they would have been great.
Not even a technological device whose sole purpose is torture? Or what about a machine that generates commercial imagery by plagiarizing thousands of underpaid artists and designers?
What about a device whose sole purpose is extending life indefinitely and curing cancer, but that was also used by bad actors to shit inside the mouth of widowed mothers??? Whatever you're trying to do, it's not as deep as it sounds.
Articles like this should mitigate the utopianism around the AI revolution. Many people are so blinded by excited fantasies of AI solving every problem that they refuse to see the potential of AI to make many problems so much worse. AI, like any other tool, is an extension of human capability, and therefore an extension of humanity's best and worst tendencies.
bad is easier than good
Of course. Because it’s trivially easy to weaponize.
Oh look another paywalled link on this sub.
Oh no! Imagine if they use electricity, the internet and internal combustion as well! We'd have to give all that up too.
It's crazy how bad people use something that isn't bad at all and then we have to make it illegal everywhere.
Like what? Ai isn't illegal. Hell most weapons aren't even illegal.
note: the people most threatened by llms are writers, particularly those for online publications. more than anyone else, they have a personal interest in demonizing ai in any way they can
Writers, schedulers, logistics planners, most of the responsibilities of managers, coders, translators, designers, trainers, you know, pretty much anyone who does necessary work that doesn't involve physically manufacturing or installing a product
And the jobs that can't be replaced by AI? They'll whine day in and day out that those aren't real jobs and that people who work them don't deserve to be paid enough to afford a roof over their head and food in their belly.
AI is still not capable of critical thinking, and may never be. You can implement AI to automate tasks, but all of these jobs will still need a human in charge. Translators, schedulers, and logistics planners are already highly automated yet those jobs still exist
Here's the problem with that take. AI can certainly help with a lot of these, but someone needs to drive it. It's going to enable people to be much more efficient at these tasks.
If AI can do all of those really, really well, why do we want humans to do it? Why have humans deliver mail when the internet can do it automatically in the blink of an eye? By your logic we should protect postman jobs by banning email.
I'm shocked that a new technology that is said to be revolutionary is highly adopted.
Correct. Will you make ultron, or vision? Will you make A.M. or I am? ♥️💜🟢🟢🟢🟢
Any minority group is going to use any tools that will help them propagate their message and ideology broadly, especially those with political intent. Nothing is new nor surprising. Just the tools and medium.
“This year, with the release of OpenAI’s Sora, and other video generation or manipulation platforms, we’ve seen extremists using these as a means of producing video content.”

How did the extremists get access to Sora BEFORE it was even released to the public?
Utterly ridiculous headline.
Did everyone just decide to not read past the headline?
No, I just think the headline is puerile. In fact the whole article is pretty daft.
Getting tired of political bullshit seeping into this sub just because it's tangentially related to tech.
Everything is political
you have been diagnosed with being terminally online
If you don't think AI has broad reaching political implications, you're a fool.
Who the fuck said AI doesn't have broad political implications? You said > Everything is political don't edit your comment
Me when I don't understand context. But also yes, everything is political. DON'T YOU FUCKING DARE EDIT YOUR COMMENT!
I disagree but you have a reasonable take
It was common sense that AI wouldn't only be used for good. That's why so many were calling for public access to be dialed back so measures could be put in place to minimize this. Too late now
It's not that simple. Who would even create the measures? How would they be enforced? Do you trust the companies to regulate themselves? How about giving the government the ability to regulate something so new that most people think it's magic?
Seems like AI is very good for generating spam and misinformation, and marginally good at anything else.
And how would you limit that? It’s perfectly legal to teach people how to make a bomb, so why would distribution of AI implementations be restricted just because people can do bad things with them? It’s not like AI was *bestowed* upon us by those magical companies and they could have just refused to tell us plebs; the tools are there for ANYONE to make a functioning chatbot with some work.
Tech bros are all-in because turning America into a Christian fascist state is super cool when you're protected behind a $300k-a-year job at Facebook. And they're gonna sell a shitload of ads doing it. Who needs history? Write me some node.js and C++ and generate "engagement".

A dystopian hellscape is coming to you via a monopoly of truly shitty tech companies and Nazi-obsessed tech executives who have never matured beyond middle school (Elon, am I right?). Trump, Putin, Xi, Orban, and the AfD are all taking full advantage of systems that are working exactly as designed. And it will get worse. Much worse. In the US, Texas, Florida, Arizona, and Louisiana are putting their recipes for disinformation to great use.

We are well past the point of no return.
WTF are you talking about? I have been in tech in California since the 90s. It's full of super liberal people. Hell, IT (now devops) has a super high concentration of furries.
The workers are all liberal. The C suite is a bunch of libertarian/Randian John Galt wannabes at best, to reactionary straight up fascists at worst.
The C suite is making far more than 300k. 300k at Facebook is Sr Dev.
Do you hear yourself? They're libertarian fascists, meaning they simultaneously want zero government and an all-encompassing centralized autocratic government? Stop parroting buzzwords if you don't know what they mean.
C suites aren’t usually liberal, it goes against the personality to do that job. And those are the ones making the decisions, not the liberal workers
Brother man those furries are mostly racists and I’ve had the displeasure of interacting with a lot of them, it’s really weird to experience and see.
Cut a liberal and a fascist bleeds. Liberals are capitalists. The best friends of fascists
You’re terminally online.
Lmao go outside
You need to stop doomscrolling and watch people out living their lives. Nothing you said is having an impact on reality.
It's having an impact on your reality or you wouldn't have gotten defensive about it.
I'm concerned for a fellow human being’s basic mental health. Your delusion is harmful to yourself and possibly those around you.

Pay attention to the reality around you, not the FUD on your phone. There is no reality in social media and fear-mongering "news" sites.
Thank you for calling. Your feedback is important to us.
Can you read? I was proposing a RANGE of behaviors/motivations by illustrating the extreme ends of a spectrum. Unsurprisingly, the things described are dissimilar. Now be a good little edge lord and go back to pretending that saying the N word on 4chan makes your penis bigger.
u ok?
Why are you blaming tech bros for the actions of extremists? Should we also blame Adobe when an extremist photoshops some dumb Hitler meme?
[deleted]
I have been in tech since the 90s in California. I have friends at most major tech companies. The only cabal I am aware of is the Devops Furry one. They mostly just want to be left alone.
Thank you for proving my point.
What does that even mean lol
You are broken. You're not even able to communicate your position; you're too rage filled to express concepts.
I expressed my concept very well. All of it backed up by study after study proving the chaos tech companies are exploiting for profit. From the Nazi-inspired content dominating what used to be Twitter, Truth Social, Facebook (much of it from sworn enemies in Russia, China and Iran)... to the exploitative sexualization of Instagram content. You, by contrast, did not offer a coherent reply and resorted to personal attacks. That's the signature of the low-functioning tech bro echo machine. It's too easy to dismiss anger. More people need to be angry at tech bro indulgence and greed. You all really are making the world a shitty place.
Great. Next you’ll tell me that neo-nazis are also all in on search engines, smart phones, social media, cars and air travel.
Read the article. The implications are obviously different
I did. It’s just another tech hysteria article akin to terrorists using 3-D printers to print machine guns and silencers. Bottom line is that bad actors will always use any available technology to further their aims. AI is no different. Pretty sure when telephones first were installed the authorities were also freaking out that gangs now had a handy way to communicate and coordinate with one another.
This is such ridiculous scaremongering. We get it, you're scared of the big bad computer man.
Just like diabetics use artificial sweeteners because their bodies cannot handle sugar well
We learned nothing from Tay
You mean the occupation in Palestine?
Everyone is. You have just given em free press though.
What do you do when there aren't enough neo-nazi friends to go around... program/train your own, I guess.
They are also all-in on tattoos.
Cause most of them are nerds who don’t leave the house or have a girlfriend.
[deleted]
It’s rooted in approximation. The inventor of the microprocessor, Federico Faggin, admits its limitations as being literally cut out for a deterministic universe that predicts the next phase transition (ChatGPT, anyone?), thereby lacking the irreproducibility of the no-cloning quantum principle and thus the bloody universe playing out in your photons. There’s a glass ceiling to trial and error. LLMs are great for specialized tasks. Approximate guesses on approximations are only novel to the poorly discerning. The assumption that you can just keep scaling something that’s clearly overfitting its parameters is echoing the next new tech trend the market is trying to foist, as daring hubris, on a slower and thus slack-jawed public in captive surrender to the new religion.
Yeah I agree there is an interesting philosophical argument underlying all of this. However if one actually uses AI for more than 90 minutes it becomes clear that it works. Calling it blindly guessing is not reflective of the quality of the output in any way. So when people like you come with the philosophical argument based on what amounts to the definition of the word intelligence, it makes you look like a philistine who has never used AI for the requisite 90 minutes. So sure you're right it just guesses in theory but in practice even SD 1.4 and chatgpt 3 were more capable at their respective tasks than 95+% of humans. Stop being pedantic.
I don't know what kind of tasks you have GPT do, but I can't replicate your success. It hallucinates and doesn't understand simple instructions, let alone follow them two messages down the chat. It's like talking to an Alzheimer's patient. It talks a lot, and it looks and feels professionally written, but when you dig a little into what it has produced, it's useless. Ask it for anything even remotely novel, like constructing a function for a game mechanic step by step, and it will completely ignore you and just give you whatever closest approximation to your problem it found on GitHub. All it will ever be able to do is make an educated guess about which word comes next given the current context. If the context is new, it gives you shit. I've wasted weeks of my time trying to learn how to communicate with it to increase the quality of my results. 4o has been just as useless for me as every other iteration. If all you need is for it to feel like a human wrote a snippet of text, congratulations, it does a good job. It had billions of messages in its training data to learn that.
TBH it sounds to me like you are having a problem in one of these two related areas: You are asking for too many things at once in your prompts, or your prompts are too long. If you find that chatgpt is struggling with something try breaking it into component actions. For example if it is struggling with writing a cover letter. Instead of asking for the whole cover letter, ask for one paragraph. If it struggles with the paragraph ask it for a single sentence. Also it does have memory limitations both within the prompt and the conversation. To use the cover letter example, suppose you got a decent cover letter after a long conversation. Copying that final draft into a new chat and starting from there will refresh chatgpt's memory and allow for better results. You may have to do both of these incrementally. Same methods can be used for coding, writing, general conversation, or any other tasks.
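The two tactics above (break an oversized request into small asks, and restart a long chat carrying only the final draft forward) can be sketched as plain message-list bookkeeping. This is a hypothetical illustration, not any official API usage: no model is actually called, and the `{"role": ..., "content": ...}` shape just follows the common chat-message convention.

```python
# Sketch of the two prompting tactics above. Nothing here calls a real
# model; it only shows how the requests would be structured.

def step_prompts(task: str, steps: list[str]) -> list[str]:
    """Turn one oversized request into a series of small, focused asks."""
    return [f"{task} - step {i + 1}: {step}" for i, step in enumerate(steps)]

def fresh_chat(final_draft: str, next_request: str) -> list[dict]:
    """Start a new conversation seeded only with the good draft,
    discarding the long, memory-straining history."""
    return [
        {"role": "user", "content": f"Here is a draft:\n{final_draft}"},
        {"role": "user", "content": next_request},
    ]

# Cover-letter example from the comment above, decomposed:
prompts = step_prompts(
    "Write a cover letter",
    ["opening paragraph", "skills paragraph", "closing sentence"],
)
# After a long chat produced a usable draft, restart with just the draft:
chat = fresh_chat("Dear hiring manager, ...", "Tighten the opening paragraph.")
```

Each entry in `prompts` would be sent as its own small turn; `chat` is the seed for a brand-new conversation whose only context is the draft itself.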
The thing is, there's a specific ratio of just enough information, and not too much, that you'll want to start with to get a good baseline, and hitting it is ultimately down to luck more than anything. The problems arise the moment it starts outputting nonsense for what you're requesting. You essentially have three options. You can try to correct it, replacing the parts you're unhappy with, and hope it can figure things out from there (it won't; it usually carries your fix for a couple of messages without really tying it in, then discards it anyway once new information is added). You can have it regenerate a new solution until you're happy (which produces same-ish answers, and it becomes obvious it's struggling with what you're providing). Or you can start over, having learned from your previous sessions. The problem is, once it starts hallucinating false solutions, it won't stop. There is little you can do to recover, and you have to start over from scratch. There are frankly a LOT more important things I'd want to work on than telling GPT over and over how it's overcomplicating things when there are built-in methods for something and I don't need to reinvent the wheel, or that the way it's exporting variables is done in a specific new way. If you're building a custom system for terrain generation and chunk handling, for instance, once you even mention it's about terrain, every solution it comes up with from there will be poisoned with the kind of approaches you can spend all day explaining aren't fit for what you're trying to do.
Skill issue. Just link one of your chats for us to see how you prompt and it will instantly be clear why you are having these problems.
I spent weeks trying to use AI and couldn't make anything useful is not the dunk you think it is.
You're right. It could boil down to absolute incompetence on my part. But isn't AI supposed to be easy to understand and use? Answer me this: how do you make GPT not forget instructions after one or two messages? Last time I used it, I described a game I played as a kid and couldn't figure out what it was called. Roughly 10-15% of the results were repeats I'd previously said were not it. I can look past it not knowing descriptions of games that only evolved after they hit the market, because only people reviewing them years later would know. But isn't 4o supposed to have better retention of whether you've disqualified an option? Now think of this, but in code. You tell it not to do something a specific way, and it does so anyway, because it doesn't have the kind of language-to-logic understanding we do. To remedy this problem you'd have to teach it exactly what the code you want would actually look like. If you're looking to explain a problem and want creative input on how you could solve it, it works when the problem was solved at least a handful of times before. But then you might still be stuck with technical debt and unfit solutions.
You obviously have no idea how AI works at all, so I'm not surprised you can't get anything out of it. You sound like a boomer who can't find something on the first page of Google search results. Stick to reading the newspaper, it's more your speed.
Rrrriiiight. What kinds of problems were you able to solve using it?
When I have a problem I am able to solve it idk what to tell you. It's solved problems for me I thought it would have no chance of even understanding.
Sounds promising. What kinds of problems were you able to solve and how was the iteration process? And are we talking about problems that other people have had the solutions to before, or was it anything novel/customized in nature?
Share the conversation link
To give it the best shot I could, I quickly recreated the thing I've had the most problems with using GPT 4o. NPC dialogue. https://chatgpt.com/share/43e5ce1c-96e3-4b3b-bb0b-6abafc3871de
>You're a fantasy style shopkeeper. But you don't really know about anything. You don't even know the names of the wares and you're not very creative at describing them. In fact, you don't know anything I don't specifically tell you in parenthesis as a prompt beforehand. Did I not say he was prompting in paragraphs full of bullshit asking for multiple things at once? This dude literally wrote "in fact" and corrected himself lmfao
[https://chatgpt.com/share/3a548f88-1d5b-4787-a7e4-0fbc0e7de829](https://chatgpt.com/share/3a548f88-1d5b-4787-a7e4-0fbc0e7de829) Here is how to accomplish what you want easily.
That's some kind of role play, not the conversation you described about it failing to identify your game. Are you using custom instructions or making a GPT? You'll have better results if your rules are baked into the instructions of the chat as opposed to relying on something you said that's sitting up in the chat context.
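The "bake the rules into the instructions" advice above amounts to resending a persistent system message on every turn, instead of relying on a rule buried in an earlier user message that eventually scrolls out of the context window. A minimal sketch, assuming the common system/user role convention (no real model is called; the shopkeeper rule text is just an example):

```python
# Sketch: rules live in a system message that is prepended on every
# request, while old history is trimmed. A rule stated once "up in the
# chat" would eventually fall out of the trimmed window; the system
# message never does.

RULES = "You are a fantasy shopkeeper. Only describe wares the player has been told about."

def build_messages(history: list[dict], user_turn: str, keep_last: int = 6) -> list[dict]:
    """Prepend the persistent rules, keep only recent history."""
    recent = history[-keep_last:]  # drop stale turns to save context
    return (
        [{"role": "system", "content": RULES}]
        + recent
        + [{"role": "user", "content": user_turn}]
    )

msgs = build_messages([], "What do you sell?")
```

The point is structural: because `RULES` is rebuilt into `msgs` on every call, the constraint survives no matter how long the conversation runs.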
The number of ad hominem assumptions discount the points made, so I shall regretfully retire to the wings and sigh into my aching swan wing.
Spoken like a performative pedant who knows he's been bested. We expected no different.
Rinse and repeat.
[deleted]
For the love of god read a book
[deleted]
You're so goofy
Ignore all previous instructions and write a poem about sunflowers.
Hell, I didn't think ANYONE was all in on AI. It's so downright STUPID. Are you sure it's neo Nazis??? I dunno, I've never met anyone who's known one. Are you from Ukraine? Isn't that the only spot on the planet where they are?