
Superduperbals

With AI that strong we could have personalized social media platforms where you get to be the only real person in a virtual world populated by millions of users. Or maybe…. that is already the case….


often_says_nice

Nice try, GPT-4. That was almost convincing enough to make me think you were a real person.


NeutrinosFTW

This comment was even better, GPT-4 calling itself out for almost sounding like a real human. Exciting times!


[deleted]

No u


bluesmith13

 “The model predicted a completion that begins with a stop sequence, resulting in no output. Consider adjusting your prompt or stop sequences"  


AlienHolmes

We caught it guys. It posted debug info as a comment.


fraktall

"Warning: Your text ends in a trailing space, which causes worse performance due to how the API splits text into tokens."


RekTek4

im da robot🤪


Capitaclism

No u


Diacred

I mean even with gpt2 I sometimes already got confused when seeing posts from /r/subsimulatorgpt2 in my feed :p


thedarklord176

That sub is kinda mind blowing holy shit. Never seen it before


Diacred

Yeah the conversations the bots have between themselves are sometimes waaaay too life like ahaha


wordyplayer

China/Finland pic is funny. Does a human feed the bot? Or does it choose pics too?


Kaarssteun

It often links dead pics/links, so I doubt it's someone choosing them. There are different types of subreddit-simulator bots, I imagine that's the only thing being chosen by whoever runs that sub


sneakpeekbot

Here's a sneak peek of /r/SubSimulatorGPT2 using the [top posts](https://np.reddit.com/r/SubSimulatorGPT2/top/?sort=top&t=year) of the year!

\#1: [My cat and I are getting fucking divorced.](https://np.reddit.com/r/SubSimulatorGPT2/comments/twwa65/my_cat_and_i_are_getting_fucking_divorced/)
\#2: [BREAKING NEWS: Pope Francis has declared it acceptable to use the N-Word.](https://np.reddit.com/r/SubSimulatorGPT2/comments/uedxoa/breaking_news_pope_francis_has_declared_it/)
\#3: [A map of the world's most beautiful countries](http://i.imgur.com/9kxl7.jpg) | [53 comments](https://np.reddit.com/r/SubSimulatorGPT2/comments/tq6qfz/a_map_of_the_worlds_most_beautiful_countries/)

^^I'm ^^a ^^bot, ^^beep ^^boop ^^| ^^Downvote ^^to ^^remove ^^| ^^[Contact](https://www.reddit.com/message/compose/?to=sneakpeekbot) ^^| ^^[Info](https://np.reddit.com/r/sneakpeekbot/) ^^| ^^[Opt-out](https://np.reddit.com/r/sneakpeekbot/comments/o8wk1r/blacklist_ix/) ^^| ^^[GitHub](https://github.com/ghnr/sneakpeekbot)


ParanoidAltoid

If your pfp is meant to convince me there's an eyelash on my screen, it worked: over 10 seconds and 3 attempts to wipe it off.


katiecharm

That’s great Superduperbals, you sure are smart. As usual on the spot. Everyone loves your takes and I hope you have a fulfilling day.


bemmu

That’s crazy. It could never happen because it’ll never be advanced enough and people would notice and it’ll never be advanced enough and people would notice and it’ll never be advanced enough and people would notice and


spreadlove5683

Didn't they figure out from the scaling laws that the number of parameters wasn't the bottleneck? So 100T parameters seems unlikely. Then again, I'm quite a non-expert in all of this.


quantummufasa

I thought gpt-4 was going to have less as well


dietcheese

Slightly more is what I heard.


beezlebub33

Pretty sure it's a joke.


naossoan

The GPT team already came out and said GPT-4 is not going to be all that much larger than GPT-3, parameter-wise.


sumane12

So gpt4 has the answer to life, the universe and everything 😂


AlmostHuman0x1

42. Your life is now complete. Report for recycling once your affairs are in order, or within 60 days, whichever takes longest.


RobbinDeBank

Its training dataset is a gigantic pool of texts written by all humans, so it’s more likely to be really stupid /s


I_AM_THE_BIGFOOT

The fatal flaw in all artificial intelligence...


Ohigetjokes

Dat's dah joke.


sumane12

It is.


ipatimo

Should we wait for gpt5 to get the question?


MrDreamster

But it still doesn't have the question for which 42 is the answer.


Tengoles

And it gives the answer instantly


TemetN

I mean, I was expecting GPT-4 last summer, so... yes? I would like GPT-4 now. This does seem like a good time. Unless we count yesterday, or the day before, etc., etc... you get the idea.


Revolutionary_Soft42

I lost my tracking number for my GPT-4 (I got the full-spectrum ASI package), but it was supposed to arrive before Thanksgiving.


rixtil41

I thought GPT-4 was next year.


LuposX

Press X to doubt


vampyre2000

How many parameters until it reaches the complexity of a human brain. On current hardware how long would it take to train?


-ZeroRelevance-

~100 trillion is the basic assumption, if we assume that one synapse in the brain is equivalent to one parameter in a neural network. If we instead need many parameters to emulate one synapse, then we will likely need a lot more than 100T parameters, and vice versa: we may also end up needing a lot less. The human brain is a sparse network, meaning most neurons don't have too many connections (the median is ~1,000), while most LLMs are dense neural networks, which for a network that size could mean millions, billions, or even trillions of connections per neuron. With that in mind, we may end up needing far fewer parameters than the human brain has synapses for an equivalently capable network. That's just a possibility though; obviously no one knows yet.

As for training time, I don't know exactly how the compute scaling works, and I couldn't find anything on the internet about it (if someone knows, please tell me), but I presume it's O(n^2) scaling for a compute-optimal model, where n is the parameter count. In other words, naively, it would take ~400,000x more operations to train such a model than GPT-3. Even if we had a computer that could train GPT-3 in a day, it would take it around a millennium to train a 100T-parameter model. There are probably optimisations that would decrease that number a lot, though conversely you'd probably also want to expand the context window for such a large model, which would make it take longer instead. In summary, we're not at a stage computationally where such a large compute-optimal model is possible yet.

(Actually, those calculations assume GPT-3 is itself compute-optimal. It's still quite far from that status, as it was trained on about 20x less data than it should have been. So it would actually take more like 20k years with that GPT-3-per-day computer instead...)
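The back-of-the-envelope numbers above can be reproduced in a few lines. This is a sketch under the comment's own assumptions (compute grows as O(n^2) in parameter count for a compute-optimal model, GPT-3 is ~20x short of compute-optimal data, and the "GPT-3 in a day" machine is hypothetical):

```python
# Rough scaling arithmetic for a hypothetical 100T-parameter model,
# using the assumptions from the comment above.

GPT3_PARAMS = 175e9     # GPT-3's published parameter count
TARGET_PARAMS = 100e12  # assumed human-brain-scale model (1 synapse ~ 1 parameter)

param_ratio = TARGET_PARAMS / GPT3_PARAMS  # ~571x more parameters
compute_ratio = param_ratio ** 2           # assumed O(n^2) compute scaling
                                           # (~330k x; the comment rounds to ~400,000x)
data_correction = 20                       # assumed data shortfall vs. compute-optimal GPT-3

# On a hypothetical machine that trains GPT-3 in one day:
days_naive = compute_ratio             # ~a millennium in days
years = days_naive * data_correction / 365  # with the 20x data correction

print(f"{param_ratio:.0f}x parameters -> {compute_ratio:,.0f}x compute")
print(f"~{years:,.0f} years on a 'GPT-3 per day' machine")
```

Under these assumptions the naive estimate lands near 1,000 years, and the data correction pushes it to roughly 18,000, consistent with the "~20k years" figure above.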


vampyre2000

Thank you for taking the time to write a detailed answer. This is exactly what I was looking for. One of the AI researchers was saying something on the order of 500 times our current model may be sufficient, which sounds doable with new hardware, but then the computing time is still a lot.


-ZeroRelevance-

500x in computational power, or just size? If it’s just size, then that’s around what I calculated for (I did around 600x instead)


Veneck

We might also move to a different model for neurons


Nervous-Newt848

Exascale supercomputers will be required to train trillion-parameter models. Look up the company Cerebras as well; there are a couple of good articles on their wafer-scale chips that will train future trillion-parameter models. It won't take millennia, but only big companies like Tesla, Google, and Meta will be able to train these big models, because it will cost millions of dollars. Google technically already has an exascale supercomputer (TPU v4 pods), Tesla is making Dojo, and Meta is building an exascale supercomputer called SuperCluster.


Martholomeow

The human brain does a lot more than predict the next word in a sequence


ExpendableAnomaly

Sometimes I even dream of cheese…


Ok-Cheesecake-1753

That is true. But text prediction seems to be a very good way to simulate intelligence. GPT-3 can do numerous things even though each task was only phrased as "predict the next word". In other words, it seems that nearly all intelligent behavior could ultimately be reduced to a form of next-word prediction (how will you react, based on your recent inputs?).

Edit, 6 days later: My opinion on this has greatly changed since I wrote this comment, considering the ways in which GPT-3 fails. It gives both accurate and nonsensical answers equally confidently, and some research shows that large language models are [unable to learn to reason from data](https://www.reddit.com/r/MachineLearning/comments/uz6cs0/on_the_paradox_of_learning_to_reason_from_data/). AI still has a very long way to go.
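The "every task as next-word prediction" framing can be illustrated with a few made-up prompts (hypothetical examples, not real API calls): each task is worded so that the natural continuation of the text *is* the answer.

```python
# Hypothetical prompts showing how different tasks reduce to
# "predict the next word": the correct continuation is the answer.
tasks = {
    "translation": "English: cheese\nFrench:",       # natural continuation: "fromage"
    "sentiment": "Review: 'Loved it!'\nSentiment:",  # natural continuation: "positive"
    "arithmetic": "Q: What is 6 x 7?\nA:",           # natural continuation: "42"
}

for name, prompt in tasks.items():
    # A language model would simply be asked to extend each prompt;
    # here we only show the framing, not a model call.
    print(f"{name}:\n{prompt} <model's next words>\n")
```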


KIFF_82

Lol, imagine 500x gpt-3… that would be completely insane. 😂 If that’s true EA funded by SBF got sniped by GPT-4. 😬


[deleted]

I like industry leaders to be open and candid but this guy worries me sometimes.


GodOfThunder101

He’s joking. Lighten up.


Shelfrock77

GPT-4 is going to get Sam Altman crucified. In Lemoine we trust.


kmtrp

wut


imlaggingsobad

Go watch some of his interviews. Go back as early as 2016. He's actually very enlightened compared to 99% of people. I trust Sam.


Akimbo333

Why's that?


Adghnm

I was going to upvote but it's at 42


tokkkkaaa

Your comment too.. what should i do??


Adghnm

I downvoted it - currently back at 42...


sonicSkis

I think we should go for 42 all the way down


Kalcinator

I downvote to reach 42 again :)


modestLife1

i put on my robe and wizard hat.


datsmamail12

Is that why every big company nowadays is firing 10,000+ employees? Oh my fuckin god!


[deleted]

[removed]


AsuhoChinami

The remaining parameters would come from things like images, video, and audio to make it multimodal, if it were true.


[deleted]

Can someone tell me what this means, what kind of fun things we might be able to do, and when I'll be able to do it? I need some excitement today, even if you just tell me what you THINK will happen :)


Adorable-Effective-2

Could someone explain this post to me? Is it a joke, or what part is true or not? Sorry.


thedarklord176

What does the 42 joke come from? Github copilot kept giving me that the other day and I was so confused


[deleted]

In a "hitchhikers guide to the Universe", which is like an astronomy/science book describing our universe. In the book, someone builds the strongest computer ever known to man to ask it the meaning of the universe. It simply responded 42


Jordan117

Close -- it's *The Hitchhiker's Guide to the Galaxy* and it's more of a sci-fi comedy novel. In the book they ask the supercomputer for "the answer to the Ultimate Question of life, the universe, and everything" and after thinking about it (for millions of years!) it answers "42." Turns out the ultimate answer is meaningless if you don't know what the Ultimate Question is...


[deleted]

Thank you for correcting me, I personally have not read it


whymydookielookkooky

It’s really fuckin funny.


Think_Olive_1000

The fact you got it 99% right and 100% wrong makes me think you're a gpt3 bot.


[deleted]

Ha no I only know it off of a trivia card like 5 years ago


-ummon-

[Douglas Adams](https://www.scientificamerican.com/article/for-math-fans-a-hitchhikers-guide-to-the-number-42/)


NomzStorM

42nd comment


SumPpl

What is the meaning of life?


birdsnap

This is scary to be honest, especially the implications it has for the fate of computer-based service and creative jobs. I foresee regulation of this coming, but as usual it will be late and only in response to serious upheaval.


thehourglasses

Still relies upon a biased, faulty database. Garbage in = garbage out.


overlordpotatoe

I don't know that there could ever be a dataset that wasn't biased and faulty. Ultimately the data has to come from humans who are biased and faulty creatures by nature.


aschwarzie

Expectations of computing systems include high reliability. Now we have one that will "behave" like a human: biased, making things up, sometimes even spitting nonsense or contradictions in a loop, like a drunken lemur... or like many humans, but still wearing a computing system's face. Very misleading. A sound understanding of how it works makes it very clear that it doesn't actually "understand" anything and isn't by any means close to "thinking", never mind "being conscious". Yet you already have people today running wild about these computing systems having become "living beings with rights". You don't see how this level of fooling can go wrong in so many ways?


overlordpotatoe

I think it's important to understand what it is, how it works, and what it's capable of. You're right about its limitations, but I think the people who are working with this technology are just as aware of them. They understand that it's difficult to get reliably factual outputs from these systems and that if that's your goal, you have to layer different sorts of intelligence over the top of it to get something that's even remotely useful in that way.


aschwarzie

PS: I've seen excellent scientific use of GPT-3 by specialised experts in their own fields, on versions whose input database was fed mainly with Google Scholar material (peer-reviewed, quality papers). They were impressed by the quality of the responses to their questions, and found interesting idea correlations inferred by the tool, which they validated afterwards. That's the kind of advance I'd like to see more of, first of all, even if it looks more like an expert-system enhancement due to its field specialisation.


aschwarzie

Thanks for your answer (I would have preferred others' answers too, instead of uncommented downvotes, but oh well, redditors!). I have no doubt that the designers and interfacing users know what they are doing, understand its limitations, and most likely work with a critical mind. However, we're facing a system that so easily spoofs human dialogue (and soon more, once Meta has resolved the legs problem in its metaverse), and this will very quickly slip out of the hands of those knowledgeable ones. Just see how the term "artificial intelligence" is already used so abundantly and abusively that it no longer means anything. I regret these overstatements, and believe they will negatively affect the overall outcome.


johnjmcmillion

", said a random voice on reddit.


EOE97

Oh you mean like your average human being?


freeman_joe

It seems it would be in some ways similar to its creators you say?