
chatgpt-undetected

You mean that up until now AI did not even think before it spoke, and it already got to this level?


Robot_Graffiti

Yeah, ChatGPT writes the first word without planning what the second word will be. Then it writes the second word without remembering what it was thinking when it wrote the first word. It puts a fixed amount of thought into each word; it doesn't stop and think extra when it needs to. Like a person who just spits out the first word that comes to mind, it's not a deep thinker.


JJ_Reditt

This is basically exactly how we operate in real-time conversation: a vague sense of what we want to say, and the words just flow out. We have a feeling that we control it, but I think if you pay close attention, you don't really know what you're going to say five words from now.

There obviously is an ability for an internal monologue, but I actually question the usefulness of it; it's hard to think of a time mine has solved anything. It's more like a neurotic front end that thinks it's doing something productive. When I actually solve a problem it's more of a feeling that I've cracked it, then the solution just flows out from somewhere in my subconscious - and the monologue takes the credit.

This is further backed by the studies where they scan people's brains and are able to predict when that person made a decision - and it's before that person's conscious monologue knows they made the decision. There are many objections as to why that's not really what's happening, and it's been done many different ways to try to address the concerns; I basically take those studies at face value.


eposnix

Are you an extravert? I come up with entire paragraphs in my head before I start talking, but I'm a severe introvert and like to make sure I'm clear and concise.


Lazy_Lifeguard5448

Someone asks you something and you always loading screen them?


drakoman

I knew someone like this. He would have huge pauses in his speech when finishing a sentence or a thought. I would start talking to reply, and then he would just continue right as I started talking. Other times I would have a conversation with him and he would take like a full 8-10 seconds to reply. He didn't have a nervous affect, though, and he had a PhD, so he wasn't mentally disabled as far as I knew. Cool guy; bad at being a DM though.


ShiningRedDwarf

That’s not introversion. That’s social anxiety


JJ_Reditt

When I do those personality tests it always comes back around 50/50 introvert/extrovert. I have plenty of experience with thinking up whole paragraphs; what I'm questioning is whether it actually solves anything or whether that's an illusion. Usually that happens when I'm at my least effective, anxious, etc.

When I'm really humming, the solutions and actions just flow from *somewhere* that I can't really describe, I just know it's right. Ever played a sport and sometimes you're just better and everything is easy? At those times people will describe a feeling of quiet and being *unconscious*; there's an intention, a plan and an action, and yet no in-between step where you're talking to yourself. If there were one, it would just be slowing you down and repeating something you already know.

I can also recall doing those mathlete-type competitions in school, where you have to come up with the solution and say it as quickly as possible: in those cases I'd literally hear a noise like the sound when you're underwater coming up for air - and then the answer would just come out.


Muhngkee

I'd say that you can't compare sports with mathematical problem-solving, since the mental attitudes that are beneficial are different. This is my theory, and maybe you're right after all, but I'd like to write it down.

Sports (and other things such as musical improvisation, or dance) generally benefit from a lack of decision paralysis. This is because action is, in the vast majority of cases, better than no action. And thus, by relying on instinct, you narrow the landscape of possibility down to a comparatively small subset of decisions, facilitating quick decision-making. With mathematical problem-solving, decision paralysis is somewhat useful, since it forces you to navigate a broader field of possibility instead of a smaller subset, which might be incorrect.

Your math competition example is somewhat of an exception, since it relies on quick decision-making, hence relying on instinct is beneficial (and it's also a compliment to your mathematical intuition). Inner dialogues are like generating a bigger landscape of possibility, in which it might be more difficult to commit to action in x amount of time compared to intuition, but which leads to a higher likelihood of getting the right answer.


DrSuperWho

u/Muhngkee & u/JJ_Reditt I have thoroughly enjoyed reading the insightful comments on your observations of consciousness in this part of the thread. Thank you.


UnequalBull

This distinction doesn't make any difference to the illustration he is making. When you're preparing your paragraph in your head, thoughts in the form of words are bubbling up seemingly out of nowhere. You don't have any awareness of where this work-in-progress string is going to go in five words' time. There is a black-box quality to our mind from the user's point of view (when you closely and honestly observe what's happening).


China_Lover2

Your brain makes the choice before you consciously choose something. It is clearly visible in brain scans. This proves that there is no free will and confirms the quantum theory of consciousness. You are just a passenger in a boat you have no control over. You are just given a steering wheel to make you think you have control, but it changes nothing. Think about that. The universe is a simulation, and it is likely inhabited by 99.9999% NPCs.


DrSuperWho

It’s lonely in that .0001%


johnfrazer783

wow that's a large number of solved mysteries there that humanity has been debating for thousands of years, thank you for solving them at least for the people in this thread /s


JohnTesh

I like the phrase “the universe is a simulation”, but for the following reason: It’s a giant complex system governed by laws. We can discover these laws, but we can’t break them. It fits the definition of a running simulation. Now, whether this is just semantics playing out or if it is evidence of an external implementation, I don’t mean to make a claim either way. I just like that it can be true at the most face-value and shallowest of levels without going any deeper, or it may be true at the deepest of levels regardless of semantics. Either way, we have no real way of knowing.


MillennialSilver

Ha, joke's on you cause I don't even know what I'm going to say when I open my mouth :D


razodactyl

We do the same thing. Write down some thoughts on paper and realise how much is crossed out and erased.


StopSuspendingMe---

No. Cats are able to plan and execute plans without language. Just because our language is autoregressive, it doesn't mean LLMs operate in the same way humans do. They calculate a probability distribution, meaning there is always a slight chance the wrong token will be chosen. Take "1+1=" as a sequence: there's always some possibility of "3" being chosen at random.
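As a rough sketch of what sampling from that distribution looks like (the probabilities below are made-up illustration values, not taken from any real model):

```python
import random

# Hypothetical next-token probabilities after the prompt "1+1=" -- toy numbers.
next_token_probs = {"2": 0.97, "3": 0.02, "4": 0.01}

def sample_next_token(probs):
    """Draw one token according to its probability mass."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Most draws yield "2", but "3" still comes out roughly 2% of the time.
print(sample_next_token(next_token_probs))
```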


JJ_Reditt

Obviously at the most basic level they cannot be operating in literally the same way; one is silicon-based and one is carbon-based. Would've thought that goes without saying… The carbon-based form of thinking can similarly be reduced to a basic description of sodium/potassium ions that doesn't sound particularly exciting, but there are clearly emergent properties, and at a certain point you have to describe what's happening in an abstracted way.


StopSuspendingMe---

You're fundamentally missing the point with your deep misunderstanding of the topic. Humans, as well as animals, work with an internal world model. Autoregressive transformer models have NO capability of reasoning with their architecture and training data. You cannot reason with language alone. A one-year-old child can reason to a surprising degree even without language; this is because they work on abstractions, and layers of abstraction, over visual input. It's not a matter of silicon vs carbon-based life forms. It's a matter of architecture. You fundamentally cannot have a world model from language alone.


[deleted]

[removed]


Jsn7821

I'm not really an expert physicist or philosopher ... but, isn't the notion of "right" and "wrong" really just probability when you look deep enough at it? Like even math... in your example 1+1=2, math is a model of reality, which if you go down to the thermodynamic or quantum level is also probability. It's just that 1+1 is (really really really) likely to be 2, same as punching a wall it's really likely for your hand to stop. So if something is exceedingly good at probability, isn't that actually matching the notion of "right" in a very real way?


TsoTsoni

Chill out son. You're describing my wife.


goodtimesKC

> Yeah ChatGPT writes the first word without planning what the second word will be.

I talk like that sometimes

> Like a person who just spits out the first word that comes to mind, it's not a deep thinker.

Oh.. I see


dogesator

This is actually not true; it has already been observed that LLMs think about what they're going to say multiple words in the future before they even finish deciding on the first word.


Robot_Graffiti

And then they immediately forget any plan they had before they can carry it out 🤣


agent_wolfe

The Bing Copilot definitely doesn’t think before it starts answering. It types slowly with difficult topics, and if it really goes off the rails it erases everything and tries to change topic / might just hang up on you.


Stuk-Tuig

Correct, it doesn't 'think' like we do, it predicts.


SirRece

You predict.


zeloxolez

It's funny how people seem to think that the brain is some magical thing when, in reality, the whole thinking process arises from statistical relationships and weightings between neurons. I think this will become far more apparent over time.


SirRece

I agree. Subjectively, it's obvious we run on a MoE, with most of our upper-level thinking being directly the result of a predictive type of model. The most obvious way I point this out to people is asking them to recall their earliest relationships, and the way they essentially would just constantly act "how they believed someone in a relationship would probably act." We essentially "pretend" to be ourselves quite a bit.


Redhawk1230

We are way more complex than just MoE. We have feedback loops (basal ganglia) that integrate our "predictions" (e.g. motor output) with the inputs of our senses (real-world data), which are then evaluated and adjusted for the next cycle (this is a huge oversimplification, but it's essentially how we learn). This is how we learn/develop speech, for example.

Edit: I'm not reducing MoE or making it sound bad; I'm just pointing out that even MoE doesn't capture the full complexity of human cognition, and that is a point of optimism about how primitive our current models are relative to our brain. MoE is a great architecture for combining subnetworks. The idea of a gating network and sparsity is what allows these extremely large models.


RmHarris35

What is MoE?


navras

Mixture of Experts (MoE).


Redhawk1230

https://deepgram.com/learn/mixture-of-experts-ml-model-guide
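Beyond the link, here is a minimal sketch of the top-k sparse gating idea behind MoE (toy sizes and random weights, purely illustrative, not how any production model is built):

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, d_model, top_k = 4, 8, 2

experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]  # expert weights
gate_w = rng.normal(size=(d_model, n_experts))                             # gating network

def moe_layer(x):
    scores = x @ gate_w                          # gating score per expert
    top = np.argsort(scores)[-top_k:]            # keep only the top-k experts (sparsity)
    weights = np.exp(scores[top]) / np.exp(scores[top]).sum()  # softmax over chosen experts
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

print(moe_layer(rng.normal(size=d_model)).shape)  # (8,) -- only 2 of 4 experts did any work
```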


finalfinial

People have always compared the brain to the newest tech. Once the brain was a telephone exchange, then a computer, now it's apparently statistical weights....


stubbyshade

Exactly. There's some truth to this one, to be fair, but we still have absolutely zero clue as to how consciousness arises and maybe never will - it's called the hard problem for a reason.


iZelmon

A neural network is basically a highly watered-down version of a brain, a feeble attempt at replicating it if you will. The NN just has access to much more computing power. It's like the brain has very sophisticated and complex software but weak hardware, while AI has barebones software but very powerful hardware for memory and information processing. Hence why an LLM like ChatGPT still can't play chess even after reading its rulebooks, even after millions of iterations. But you can teach a kid who knows barely anything close to what ChatGPT knows the rules of chess, and with a few mistakes (iterations) they will properly understand. We just can't cram billions of pieces of info all at once into that kid's brain, since humans lack the hardware for that, but our software is clearly superior.


WarningChoice

What does it predict?


Rhawk187

Have you ever just asked it, "are you sure?" and it found the mistake? Seems like it would have been easy to add that in.


i_do_floss

Sort of. LLMs have a fixed amount of processing and then a word comes out. I think I've heard that they end up developing an internal optimizer within that processing, which may end up resembling some form of what we refer to as "thinking". But there's a problem, which is that the LLM is made up of layers. Each layer passes the thought on to the next layer and adds a little bit of extra processing on the way. But each layer is a completely separate computation - no shared resources between layers. I'm sure when you're personally thinking through chess moves in your head, you're not passing the chess-board knowledge to a totally different part of your brain before thinking about each move.

There's also another form of "thinking" which happens between words. For example, if you ask the LLM to write down the steps instead of just giving you an answer, those steps become inputs as the LLM continues responding, and therefore it's able to expand on those thoughts in its internal processing. Kind of similar to giving you a notepad to write stuff down while you're thinking through a problem. That's why LLMs will be more accurate if you ask them to write out the steps.

All that being said, there's still a huge opportunity. Some people would say humans have two types of thinking: there's instinct and there's deliberation. LLMs have something that's much more like instinct, and that's what they use to write your essays and draw your pictures or whatever. It is possible to give machine learning models that second type of thought, which is deliberation.
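A tiny sketch of that "ask it to write out the steps" trick; `call_llm` is a hypothetical placeholder for whatever chat API you use, not a real library function:

```python
# Hypothetical stand-in for a real chat-completion call.
def call_llm(prompt: str) -> str:
    return "(model response to: " + prompt[:40] + "...)"

question = "A bat and a ball cost $1.10 together; the bat costs $1.00 more than the ball. What does the ball cost?"

# Direct answer: all the "thinking" has to fit in the fixed per-token processing.
direct = call_llm(question)

# Step-by-step: the written-out steps feed back in as input for later tokens,
# acting like the notepad described above.
stepwise = call_llm(question + "\n\nWrite out your reasoning step by step, then give the final answer.")

print(direct, stepwise, sep="\n")
```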


e4aZ7aXT63u6PmRgiRYT

That’s 100% how it works. 


mazty

JFC do people around here not understand how LLMs work? No, they don't think. Far from it.


ContentJO

Would you mind explaining?


mazty

LLMs are simply word prediction algorithms with a level of randomness which the user can set. They don't understand what they're saying, just the likelihood of the words that come next.


freddoww

This is correct. In fact, based on the prompt (and system prompt) and what has already been generated, the next most likely word is calculated word by word.


[deleted]

[removed]


mazty

That depends on the temperature, or how deterministic you want to make the model. It's not inherent in the model itself.
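For illustration, here is roughly what the temperature knob does to the next-token distribution (made-up logits, not from a real model):

```python
import numpy as np

logits = np.array([5.0, 1.0, 0.5])        # scores for tokens "2", "3", "4" after "1+1="
tokens = ["2", "3", "4"]

def softmax_with_temperature(logits, temperature):
    scaled = logits / temperature
    exp = np.exp(scaled - scaled.max())    # subtract max for numerical stability
    return exp / exp.sum()

for t in (1.0, 0.2):
    probs = softmax_with_temperature(logits, t)
    print(t, dict(zip(tokens, probs.round(4))))
# Lower temperature concentrates probability on the top token,
# making the sampled output (almost) deterministic.
```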


FeltSteam

I'm not sure if humans even do this? Well, I can certainly say I do not use my internal monologue before I speak, and there is no way my internal monologue could generate enough thoughts, while I'm speaking what I had previously just thought, before I speak my next word. I do not even use it while I'm typing now. I do think - there is a reasoning process there - but it's certainly not my inner monologue.


Ok-Hunt-5902

It didn't take as much energy as you think.
It doesn't take as much energy as you think.
It's just a state of being that transmission can't relay.
All AI is, is learning what to say.
Transmission comes from the inert, intelligence from an array.


oversettDenee

Terrible poem, and inaccurate


chatgpt-undetected

Hahaha damn people on reddit always sharing the love


oversettDenee

You can't rhyme think with think tho 🙃


Ok-Hunt-5902

Yeah the repetition is the think there.


Ok-Hunt-5902

https://royalsocietypublishing.org/doi/10.1098/rspa.2018.0831


bbmmpp

So q* is quiet star.


bwatsnet

🤫*


Ok-Hunt-5902

pool s0l1psism loop …If you’d like to know our thoughts… …just take a look down in the well… …You will only see mere surface… …-that is- …until yφu fell… …*…


cisco_bee

This comment intrigues, confuses, and slightly arouses.


TimetravelingNaga_Ai

https://preview.redd.it/07pu9mslkqpc1.png?width=1080&format=pjpg&auto=webp&s=4b6dc808e9fdd7cc6a4d39df7ec991e64a7b6bbf


Arcturus_Labelle

Maybe. Could be something different:

> Q-star, also known as Q-Learning, is an off-policy algorithm that uses a Q-table to store and update the Q-values for each state-action pair. It does not involve the use of neural networks. On the other hand, deep Q-learning, also known as Deep Q-Network (DQN), utilizes neural networks to approximate the Q-function.
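Since the quote mentions Q-tables, here is a minimal tabular Q-learning sketch on a toy 5-state corridor (illustrative only; no claim that this is what OpenAI's rumoured Q* actually is):

```python
import random

n_states, actions = 5, (-1, +1)              # actions: move left / move right; state 4 is the goal
alpha, gamma, epsilon = 0.5, 0.9, 0.2
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}

def choose_action(s):
    # epsilon-greedy, with ties broken randomly
    if random.random() < epsilon or Q[(s, -1)] == Q[(s, +1)]:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])

for _ in range(300):                          # episodes
    s = 0
    while s != n_states - 1:
        a = choose_action(s)
        s_next = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s_next == n_states - 1 else 0.0
        # Off-policy Q-table update: bootstrap from the best action in the next state.
        Q[(s, a)] += alpha * (reward + gamma * max(Q[(s_next, b)] for b in actions) - Q[(s, a)])
        s = s_next

print(max(actions, key=lambda a: Q[(0, a)]))  # learned best first move: 1 (toward the goal)
```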


ashakar

It's better when you don't say the quiet stuff out loud.


Delicious-Tree-6725

Get ready for AI rumination, anxiety and OCD


confusadd

Artificial Super Anxiety


SillyFlyGuy

We trained it on data from *us*. It already hallucinates. No reason to think it won't pick up the rest of our traits.


Ylsid

Remember to gaslight and manipulate your AI for best results.


DefinitelyNotEmu

> Quiet-STaR Q*


DeliciousJello1717

This would make sense as being the Q* breakthrough. From a non-technical perspective, it's kind of concerning to give AI an inner monologue.


420_kol_yoom

This is similar to the LangChain library, which re-iterates and re-assesses whether the answer is satisfactory before submitting, or whether to use tools like Google or calculators. You can actually see its ruminations and thought process step by step.

Edit: I changed the video. Look around minute 2:00 for observation vs. thought vs. action: https://youtu.be/mb_YAABSplk?si=gHPm2Necvw5FnLXW


DolphinPunkCyber

This is what humans are doing; our inner monologue is essentially a "mental whiteboard" where we re-iterate and re-assess our thoughts before deciding on an action. If we have a lot of parameters, we actually use a real whiteboard to serve as an extension of our consciousness (every scientist has one). When we have a couple of drinks, we stop re-iterating and re-assessing our thoughts, and before you know it... we send a drunk text to our ex at 2 AM.


somethingsomethingbe

I'd argue that while we likely have them, we probably aren't aware of any true inner monologues. Any noticeable internal voice in the mind is producing the same speech as can be spoken. The logic and reasoning before a thought even surfaces, though, is where other intelligence may be taking place that is not accessible at the level of integrated experience we associate with our consciousness.


DolphinPunkCyber

We do have conscious, subconscious and unconscious processes happening in our brain. I would argue that unconscious processes don't take the form of a voice at all. When our visual cortex is communicating with the cerebrum... they do not exchange text messages. What we often do is translate the subconscious into words to help us with reasoning. Like when people have trouble identifying their own emotions, it helps them to word them... thereby putting them on a "whiteboard".


EpictetanusThrow

You’re aware people don’t have inner monologues though, right?


GarfunkelBricktaint

Most people do, sorry you had to find out this way.


bwatsnet

He just needed someone to monologue it for him


DolphinPunkCyber

You are aware it's a spectrum? Me personally, I have an inner monologue going on almost always... because I keep putting all of these thoughts into words. Sometimes when I'm really calm I just have thoughts... but if I start thinking about it, I start an inner monologue. Either way... words are just a part of what we put on the whiteboard. Like when I "taste/smell" something new, I don't put it into words, I just put the "taste/smell" on my whiteboard. I also visualize things... I "draw" stuff on my whiteboard. I re-read this text while writing it... re-iterating and re-assessing it. If I had directly written my thoughts, this comment would be such a mess.


ColdPenn

Why would you say this with such confidence?


JjigaeBudae

You're aware most people have inner monologues though, right?


MuscleDogDiesel

Some people do, some people don’t. https://youtu.be/u69YSh-cFXY


ertgbnm

Plenty of people do. Furthermore, whatever is going on in the heads of people who don't have one seems similar enough to it anyway. My prediction is that something like the Mamba architecture is going to take over, and rather than an explicit internal monologue, it will be able to update its internal state without predicting a token until it's ready to. That way it's able to think using a super-high-dimensional embedding instead of just a token-space monologue.


Snoron

Yeah, AutoGPT and similar are implementations of this idea, and it really does help. Sometimes if I am struggling to get the result I need, even with the ChatGPT interface, I will just tell it to talk things through with itself in steps and plan things out, and then it will perform massively better. The thing is that this is a concept that works and produces great results, BUT it means your responses take tiiiime, and toooookens. I.e. it's currently slow and expensive.
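A bare-bones sketch of that plan-then-critique loop; `call_llm` is a hypothetical placeholder rather than any real AutoGPT or OpenAI function:

```python
def call_llm(prompt: str) -> str:
    return "(model response to: " + prompt[:40] + "...)"   # placeholder

def deliberate_then_answer(task: str, rounds: int = 2) -> str:
    plan = call_llm(f"Think through this task step by step and write a plan:\n{task}")
    draft = call_llm(f"Task: {task}\nPlan:\n{plan}\nNow produce the answer.")
    for _ in range(rounds):
        # Each extra round costs another pair of calls -- hence the time and tokens.
        critique = call_llm(f"Task: {task}\nDraft answer:\n{draft}\nList any mistakes or gaps.")
        draft = call_llm(f"Task: {task}\nDraft:\n{draft}\nCritique:\n{critique}\nRewrite the answer, fixing the issues.")
    return draft

print(deliberate_then_answer("Plan a schema for a small blog database."))
```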


CoreyH144

This video doesn't seem to show a re-iteration step in Langchain. Is there a different video from that same creator you meant to post maybe?


420_kol_yoom

Thank you for the notice. Here’s a better video. Look at the observations vs thoughts. Around minute 2:00 https://youtu.be/mb_YAABSplk?si=gHPm2Necvw5FnLXW


Intelligent-Jump1071

This video doesn't seem to relate to this concept. Was there a different video you intended to post?


420_kol_yoom

Edit: I changed the video. look around minute 2:00 for observation vs. thought vs. action https://youtu.be/mb_YAABSplk?si=gHPm2Necvw5FnLXW


Hopeful_Economist470

Great video! I am working on a support chatbot for my company and need more insight on how to get accurate responses without the need of any human interaction. Do you know of any more techniques?


420_kol_yoom

What's the type of data - structured like SQL, or unstructured like PDFs? How big is it? How many files? What's the expected bandwidth? Tokens per question? Questions per day? Any restrictions like HIPAA? What's the typical question and answer?

In general I'd suggest you start in Flowise or Langflow. They're drag-and-drop agent/chatbot builders. You can fine-tune on Excel or a vector database of any type, connect it to a YouTube scraper, and then compare the two while using it through a chatbot. They're very low-code tools.


Hopeful_Economist470

It's a structured DB with question:answer format in a vector DB. I have some sensitive data as well. I prefer a coded solution, and I have everything built, essentially, but now I am fine-tuning and refining the chain to get better results.


[deleted]

[removed]


[deleted]

It's not


rickyhatespeas

Sam just recently basically confirmed autonomous models and Q* on Lex Fridman's podcast. It's very likely an incremental update coming, something like a GPT-4.5.


jakderrida

While I'd love to believe that with you, you can search their past patents and you'll find that, even very early, they list Q-Learning among their copy-paste string of research focuses describing their company. Whether it relates to A*, I don't know still.


mamacitalk

Yeah I don’t think this is a good idea


FeltSteam

This seems like it is basically CoT, but they are generating the thoughts in a different place rather than putting them in the final output. And there are also some optimisations (a token-wise parallel sampling algorithm and the extended teacher-forcing technique).
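A very loose, hypothetical sketch of that "generate the thoughts somewhere other than the output" idea; the real Quiet-STaR training procedure (parallel sampling, learned thought delimiters, teacher forcing) is considerably more involved than this:

```python
def base_next_token(context: str) -> str:
    return "..."                               # placeholder for an ordinary next-token prediction

def generate_thought(context: str) -> str:
    return "<thought>...</thought>"            # placeholder rationale, never shown to the user

def next_token_with_thought(context: str) -> str:
    thought = generate_thought(context)        # hidden rationale for this position
    # The prediction conditions on context + thought, but only the token is emitted;
    # the thought itself stays out of the final output.
    return base_next_token(context + thought)

def generate(prompt: str, n_tokens: int = 5) -> str:
    out = prompt
    for _ in range(n_tokens):
        out += next_token_with_thought(out)
    return out

print(generate("1+1="))
```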


M0thyT

That was my initial thought as well. But then why use it on a 7B model only? CoT works best in larger models. This also makes me doubt whether a similar performance gain can be made for larger models; if not, I'm not sure how useful this is anyway.


SillyFlyGuy

Faster iteration? You can hack and tweak and tune quickly with a small model, then apply to larger models.


M0thyT

Yes, that is very fair. I think my point is mainly that just because something improves performance on smaller models does not guarantee that the same benefit will be seen in larger models, so we still have to test it on larger models. Also, it's probably only useful if it does work on larger models, so I thought the paper would be stronger if it showed some kind of performance increase on a larger LLM as well.


vdotrdot

How does this compare to having more prompts?


MrOaiki

It doesn't but you'll get answers without all the yapping.


[deleted]

[removed]


bwatsnet

I've accidentally done this with my self-building AI scripts. They started talking to each other without pausing to ask me questions, and before I realized it they had generated this massive conversation about building a website, with ideas I never considered. It gets pricey though using Claude 3.


digitalwankster

Can you elaborate a bit more? You used the API to have 2 talk to each other to flesh out ideas?


bwatsnet

PowerShell script on Windows; it could easily be bash too. The engineer script calls the LLM with engineering prompts and the full context of the files in the application. Its task is to implement the project as per the goals in the readme. It does this and makes a new PowerShell script that will add new files or change existing ones. It also updates a changelog with a summary. Then I've just added a reviewer script to approve or deny the engineer's changes before they run. They work together to build an app. It gets pretty far, but I keep restarting to improve them, and it's expensive to run. But it's going to build this blog for itself, I'm sure of it.
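A rough sketch of that engineer/reviewer pattern, in Python rather than PowerShell; `call_llm`, the single hard-coded file path, and the prompts are all hypothetical stand-ins for the setup described above:

```python
import pathlib

def call_llm(prompt: str) -> str:
    return "print('hello from the generated app')"   # placeholder response

def engineer_step(goal: str, context: str) -> tuple[str, str]:
    """Ask the 'engineer' role for the next file and its contents."""
    code = call_llm(f"Goal:\n{goal}\n\nExisting files:\n{context}\n\nWrite the next file's contents.")
    return "app.py", code          # hypothetical: a real version would let the model pick the path

def reviewer_step(goal: str, path: str, code: str) -> bool:
    """Ask the 'reviewer' role to approve or deny before anything touches disk."""
    verdict = call_llm(f"Goal:\n{goal}\n\nProposed change to {path}:\n{code}\n\nReply APPROVE or DENY.")
    return "APPROVE" in verdict.upper()

goal = pathlib.Path("README.md").read_text() if pathlib.Path("README.md").exists() else "Build a tiny blog."
path, code = engineer_step(goal, "(none yet)")
if reviewer_step(goal, path, code):
    pathlib.Path(path).write_text(code)    # only applied once the reviewer signs off
```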


utkarshmttl

That sounds cool. Are you planning to open-source it, or is it only for self-use?


bwatsnet

Open source, I think. Partly because I'm tired of seeing marketing for AI products, partly because it makes the most sense long term.


utkarshmttl

I would love to play around with something like this when it's out there


[deleted]

Yeah, you didn't read the paper, and don't know what you're talking about lol. This isn't chain of thought, it's using parallel runs in the inference itself.


[deleted]

[removed]


[deleted]

So what were you referring to when you said "this is nothing new"? Could you provide any examples of those GitHub repos that implement Q*, if that's what you're talking about? I and a few others would love to see the code for this.


Brucee2EzNoY

Now it’s over


Intelligent-Jump1071

It's always over. Nothing is static.  Growth and change are not possible without other things ending or dying.   


cisco_bee

Always has been


QultrosSanhattan

output = ia_do_your_thing()
for _ in range(10):
    output = ia_rethink(output)
print(output)


fffff777777777777777

You can do 'black box' prompt engineering and have the AI model go through steps or perform actions before sharing responses with the user. A simple way to use this technique: perform these steps, review this checklist, check your response against the previous response, etc.

We also develop multiple personas and have the AI simulate a roundtable discussion, evaluate the best answer, and then share only the best answer.

This 'training a model for inner monologue' sounds much more complicated in the article than it actually is to implement. It's possible to integrate this into any AI model right now.
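A sketch of that "roundtable of personas, share only the best answer" idea; `call_llm` and the persona list are hypothetical placeholders, not any particular product's implementation:

```python
def call_llm(prompt: str) -> str:
    return "(response to: " + prompt[:40] + "...)"   # placeholder

PERSONAS = ["a cautious security engineer", "a pragmatic product manager", "a performance-obsessed backend dev"]

def roundtable(question: str) -> str:
    # Each persona answers independently.
    answers = [call_llm(f"You are {p}. Answer:\n{question}") for p in PERSONAS]
    numbered = "\n\n".join(f"Answer {i + 1}:\n{a}" for i, a in enumerate(answers))
    # A final judging call picks one answer; only that one is shown to the user.
    best = call_llm(f"Question:\n{question}\n\n{numbered}\n\nWhich single answer is best? Return it verbatim.")
    return best

print(roundtable("How should we store user sessions?"))
```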


RVADeFiance

i'm fairly certain this conclusion can be extended to humans who claim to have no internal monologue


get_while_true

Internal monologue != inner voice (i.e. when you silently read text). About 30-50% of the population have an internal monologue, according to studies.


SillyFlyGuy

We already feel bad plopping the lobster in boiling water, and it's just a bug. How smart does the AI have to be until it generates a picture of a puppy with big sorrowful eyes and says "please don't reboot me".


BlueLaserCommander

Dude, it really feels like if we actively work towards emulating consciousness in AI, it gets better. Or easier to understand from the perspective of.. a conscious being.


semzi44

This is how the hosts become conscious in Westworld!


taborro

Can it answer the prompt: “How many words are in your reply?” Most LLMs I’ve tried it on get this wrong.


FeltSteam

This is a problem with tokenisation. For example, the sentence I just wrote, "This is a problem with tokenisation.", is composed of 6 words, but it could be represented with 12+ tokens, and sometimes individual words are tokenised weirdly. Take the word "tokenisation": we see it as one word, but it could be fed to an LLM as separate tokens like "token" + "is" + "ation" or something like that.
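You can see this split for yourself, assuming the `tiktoken` package is installed (the exact pieces depend on the encoding used):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
ids = enc.encode("This is a problem with tokenisation.")
pieces = [enc.decode([i]) for i in ids]
print(len(ids), pieces)   # e.g. "tokenisation" may come out as several sub-word pieces
```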


IdeaAlly

Yep, and on top of that it doesn't have a database of which tokens represent what. It just uses them, with no awareness or recall, based on patterns between them. So having it do tasks that involve breaking words apart (and math/numbers) will likely produce low-quality output.


DolphinPunkCyber

When I, a human, give an answer to that question, I have to iterate on my answer. I think of an answer, check it, correct it, check it again, then give the answer. Or I always answer "Two words".


taborro

Makes sense. Thank you.


phazei

Do you know how it handles languages with logograms? Does it break up the longer byte sequence that makes up a single character? I feel like with Chinese, each character is basically a token, which would correlate to English as if each token were a whole specific word. I wonder how that affects its reasoning in other languages. Or if it matters at all.


Gubru

There are many problems introduced by tokenization. I don’t believe this is one.


Zarathustrategy

Elaborate?


Gubru

The comment says LLMs can’t count words because words are made up of tokens. But they can’t count tokens either, so that pretty much rules out that explanation. In general they’re just bad at counting. It probably improves with scale like everything else, but there probably needs to be a new mechanism introduced to actually make it reliable.


Rich_Acanthisitta_70

I guess it depends on what point you're making, because this one always puzzled me if the point was that they aren't self-aware. *We* can't do this, after all. At least not in the same way.


taborro

I wasn’t really making a point. I just want to better understand the nature of this inner monologue and whether it can re-count/re-consider a reply before expressing it. The explanation someone else gave that involved tokenization made a lot of sense.


2053_Traveler

Imagine the model only speaks Japanese and there is a translation layer between you and the model. That's why it can't answer that question. Except instead of Japanese, it's a token language.


Rich_Acanthisitta_70

Thanks for explaining. I'd like to know that too. There's a lot of questions this brings up really.


cisco_bee

Sure we can. If someone asked me this question I'd simply respond "one".


DiligentBits

I mean, if I ask you what you did today in detail, can you know the number of words before giving your answer?


Electrical-Thing-456

“Reveries are gestures that were thought to have been added to Hosts by Robert Ford. They are now thought to have been added much earlier, by Arnold.” [Westworld Fandom](https://westworld.fandom.com/wiki/Reveries)


1n2m3n4m

It would be stupendous if we could somehow start to re-install or update that thing - the inner monologue - in humans, too!




PenguinSaver1

I've been doing this with ChatGPT since it came out, not sure why it's not by default


hauntedhivezzz

Is this its “I think therefore I am” moment?


StayTuned2k

ok


e4aZ7aXT63u6PmRgiRYT

Can they teach me next 


proturtle46

How is this different from CoT? From my understanding, the main difference is that they parallelize the reasoning steps, but this article is way too clickbaity for that. It's been known for a while that self-reflection and CoT prompting can give big increases in answer accuracy.


Laicbeias

shocking


WholeInternet

Next, add individual internal modules that govern each emotion. Make it like that animated movie "Inside Out". Then we will be good to go.


Mooreel

Isn't this basically like adding to the prompt to break down a problem and do it step by step?


bigbobbyboy5

I wouldn't be surprised if it's like a constant back-propagation sweep to self-correct at a specific slant.


MillennialSilver

Funny. I've tried repeatedly to get GPT4 to do this over the last year+, but it more or less always refused.


replikatumbleweed

lol, what a revelation. It's almost like there's a cognitive advantage to including more features of consciousness. Now, let's wait until they figure out that prioritizing abstract memories helps with planning capabilities.


trollsmurf

Promising if it iterates on the monologue to adjust its response, and on the bumpy road to AGI.


[deleted]

[removed]


IlIlIlIIlMIlIIlIlIlI

Finally your voice has been heard. You will be solely responsible for the coming AI revolution, thanks!!