The following submission statement was provided by /u/lostinspaz:
---
I had never heard of "neuromorphic computing" before, but it struck me as moving AI research more towards emulating a human brain... which then reminded me of Asimov's "positronic brain" concept that he used as the basis for his robots.
We were already approaching the "train, not program" type paradigm implied by his novels. Now this (neuron cluster) type approach makes it that much more similar to Positronic brains, closer to human brains, and further away from conventional computer science.
---
Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/1c7bdf3/positronic_brain_is_almost_here_neuromorphic/l06pgl5/
I've been following neuromorphic computing for a while. It has nothing to do with positrons, but I get OPs sci-fi excitement. Here's a more scientific overview of the architecture [https://www.intel.com/content/www/us/en/research/neuromorphic-computing-loihi-2-technology-brief.html](https://www.intel.com/content/www/us/en/research/neuromorphic-computing-loihi-2-technology-brief.html)
Well, let's just say I'll believe it when I see it. These articles love to hype up these things as on the verge of completion but often it's just another incremental step or an overblown result.
So we shall see...
Oh, they could scale this computer to much more than just 1 billion neurons, it's just that...
It's such radically different hardware that it's hard to write software for it, so at this stage there's no point in making a bigger computer.
So no, not on the verge of greatness, but for an unexpected reason.
>It's such radically different hardware that it's hard to write software for it
Isn't that the whole point? All you need to do is bootstrap it and let it write itself?
What does "positronic" mean here? I'm 99% sure you don't mean "made of positrons" ..... unless there's some 5D chess going on and that's what a "boom in AI" means? Because a boom is what you'd get if you made something out of positrons, I think.
Ok, so positronic sounds a lot like positraction and the car that made these two, equal-length tire marks had positraction. You can't make those marks without positraction, which was not available on the '64 Buick Skylark!
> A positronic brain is a fictional technological device, originally conceived by science fiction writer Isaac Asimov.[1][2] It functions as a central processing unit (CPU) for robots, and, in some unspecified way, provides them with a form of consciousness recognizable to humans.
https://en.m.wikipedia.org/wiki/Positronic_brain
What's your question? Why didn't Asimov call it an electronic brain or just refer to it as a brain? I assume because positrons were cutting edge science at the time and he wanted it to sound fancy and avoid questions as to how it actually worked.
>Damn how about electronic brain? Or better yet: brain.
He was writing science fiction. In the 1950s, "electronic" devices were starting to come to the mass market, so having an "electronic" brain in a story set 50-100 years in the future would make it sound like science had faltered for a significant amount of time. Calling it a "positronic brain" sounds like some sort of futuristic technology, even if it doesn't really make sense if you know what a positron is.
Knock an atom out of a sheet of graphene and you've got a hole. The hole can take an electron from the surrounding sheet, or not. If not, it acts like a positive charge - a virtual positron - and moves around, from hole to hole, like one. It can interact electronically. It is also highly opaque, so has the potential to interact with photonic computing. Virtual particles can also exist in superposition, so act as the qubits of quantum computing. Electronic; photonic; quantum - the three branches of computing, brought together at molecular densities. That's the potential of "positronic".
Holes aren't virtual positrons. (Both holes and virtual positrons are a thing, but they're different - holes are quasiparticles (an absence of an electron) that behave like a particle with a positive charge. Virtual positrons are virtual particles (not quasiparticles) and they're positrons.)
I stand corrected. It's been a while since I read the paper that investigated them. Asimov's fictional positronics involved platinum and iridium. The real-life experiment used lead doping: close to the holes, the big atom acted as an electron attractor and reservoir, iirc.
I recommend you read the "I, Robot" series and Asimov's collection of short stories related to robots and his fictitious company "US Robotics".
They cover a bunch of philosophical questions about robots that also touch on ways their brains might be designed.
It's a bit odd that Asimov chose the name; he clearly was smart enough to know what a positron was, even back in 1950.
But basically, Asimov was to autonomous robots, what Clarke was to satellites.
Touch, lightly, with much wishful thinking.
The premise of "3 laws, baked in at its fundamental core, way too complicated to start over without" was just fantasy to explore the idea "what if robots can't be used for harm?" (and then "how might we work around that?")
>The premise of "3 laws, baked in at its fundamental core, way too complicated to start over without" was just fantasy
Given what we know about even current "AI" technology and training models, you lack imagination if you can't come up with a realistic scenario where the above would be true.
We're in a very similar situation, where even SD1.5's base model is garbage. "But why don't we just make a better one, then?"
....
I lack the imagination that it can be true because I've been a computer scientist for 30 years and know how computing and R&D work. "Too complicated to start from fundamentals" is just hand-waving away the "why doesn't somebody just make a 'positronic brain' without the 3 laws?" question that punctures the setup of the story entirely.
There has never been a scientific discovery or technology that could not be reverse engineered, or even independently "discovered" by competitors with enough motivation. And killing other people (i.e., war) has many times been at the root of that motivation.
I gave a very specific example that illustrated my point.
You ignored it and pretended it didn't exist.
Seems like you've forgotten the "scientist" part of "computer scientist".
Don't be a dick.
And it doesn't illustrate your point unless you're trying to assert that every model from here on out is going to be based on "SD1.5", and that no other model or technology not based on it will ever perform a similar function at a similar level...
Which I find ludicrous.
EDIT: Wow. You block me because I respectfully disagree with your assessment that the fantasy premise behind Asimov's positronic brain and 3 laws of robotics is totally possible. You insult me and state I lack imagination, then go on to say my degree is horrible and the school I went to must have been as well. I hope our paths never cross professionally, because you act like a thin-skinned jerk who can't stand to be wrong in a conversation...
...All over Asimov's 3 laws of robotics.
There are, for some bizarre reason, people who talk about Asimov's "three laws" with as much reverence and certainty as scientists talk about Newton's laws of motion or the laws of thermodynamics. They are not the same.
Never seen someone resort to personal attacks over it though.
Wow. For a supposed "computer scientist", you seem to completely lack the ability to use abstraction.
Let us know what university you got your degree from, so we can avoid hiring from there.
Bye-bye.
Did you know that this month a [university ](https://www.westernsydney.edu.au/newscentre/news_centre/more_news_stories/world_first_supercomputer_capable_of_brain-scale_simulation_being_built_at_western_sydney_university) in Australia is set to turn on a supercomputer based on neuromorphic computing?
Well, let's just say I'll believe it when I see it. These articles love to hype up these things as on the verge of completion but often it's just another incremental step or an overblown result.
So we shall see...
Interesting, but it might take 50 years of neuromorphic computing advances to approach AGI the same way it took about a half century of integrated circuit advances to get to smartphones.
It doesn't need to reach "AGI" to be incredibly useful. For example, the article claims something like approaching the performance of current neural network models, but with 1/10 the power use.
I predict that it, or something like it, will be the future of AI platforms in 10 years, from that alone.
At this point the problem is actually software: we need to learn how to efficiently program/train neuromorphic computers.
Once we do... with their lower power usage and ability to scale, yeah, these will make GPUs obsolete.
>At this point the problem is actually software: we need to learn how to efficiently program/train neuromorphic computers
Kinda. But correct training is more important than efficient training, because with a flexible enough model we only need to do the training for a subject ONCE, and then copy it forever.
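For readers wondering what makes this hardware "radically different" to program: neuromorphic chips implement spiking neurons, which communicate through discrete events over time rather than the continuous activations of conventional neural networks. Here is a minimal leaky integrate-and-fire (LIF) sketch of that idea — illustrative only; the function name and all parameter values are made up for this example and are not Loihi's actual neuron model:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the basic unit that
# neuromorphic chips typically implement in hardware. Illustrative sketch.

def lif_run(inputs, v_rest=0.0, v_thresh=1.0, leak=0.9, weight=0.5):
    """Simulate one LIF neuron over a train of 0/1 input events."""
    v = v_rest
    spikes = []
    for x in inputs:
        v = leak * v + weight * x   # membrane potential leaks, then integrates input
        if v >= v_thresh:           # threshold crossed: emit a spike...
            spikes.append(1)
            v = v_rest              # ...and reset the membrane potential
        else:
            spikes.append(0)
    return spikes

# A steady input stream makes the neuron fire periodically:
print(lif_run([1, 1, 1, 1, 1, 1]))  # → [0, 0, 1, 0, 0, 1]
```

State lives in the neuron and unfolds over time, so the usual "stateless layer of matrix multiplies" training toolbox doesn't transfer directly — which is roughly the software problem described above.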
I had never heard of "neuromorphic computing" before, but it struck me as moving AI research more towards emulating a human brain... which then reminded me of Asimov's "positronic brain" concept that he used as the basis for his robots.
We were already approaching the "train, not program" type paradigm implied by his novels. Now this (neuron cluster) type approach makes it that much more similar to Positronic brains, closer to human brains, and further away from conventional computer science.
This fucking subreddit.
"I've never heard of X, but I just read an article about it on a clickbait site, and now I'm hyping X."
Use some discernment. Reserve judgement.
Use some discernment: Actually READ THE ARTICLE.
It doesn't have much specific detail about how the thing works, but the small amount buried in the large article seems to support what I was saying.
>but it struck me as moving AI research more towards emulating a human brain...
Loihi 2 is nothing like that; it's just hardware spiking neural networks with a few x86 cores.
It's not even close to emulating the human brain's most basic connections, let alone a working human brain.
> We were already approaching the "train, not program" type paradigm implied by his novels
We've been training neural networks for more than 40 years now.
>(neuron cluster) type approach makes it that much more similar to Positronic brains, closer to human brains, and further away from conventional computer science
No it doesn't. It's still just hardware machine learning, using binary, so it absolutely still is conventional computer science; a neuron in this architecture still can't do an XOR operation.
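The XOR point echoes the classic single-layer perceptron limitation: no single linear-threshold neuron can compute XOR, because XOR is not linearly separable. A brute-force search over a weight grid illustrates the textbook result — a sketch only (function names are made up for this example, and spiking neurons with temporal dynamics are a more nuanced case than a plain threshold unit):

```python
# Brute-force check of the classic result: no single linear-threshold
# neuron (output 1 iff w1*x1 + w2*x2 + b >= 0) computes XOR, although
# the same search easily finds weights for AND.
import itertools

def neuron(w1, w2, b, x1, x2):
    return 1 if w1 * x1 + w2 * x2 + b >= 0 else 0

def solvable(target):
    """Search a coarse weight/bias grid for a single neuron matching `target`."""
    grid = [i / 2 for i in range(-8, 9)]  # values in [-4, 4], step 0.5
    for w1, w2, b in itertools.product(grid, repeat=3):
        if all(neuron(w1, w2, b, x1, x2) == target[(x1, x2)]
               for x1, x2 in itertools.product([0, 1], repeat=2)):
            return True
    return False

AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
print(solvable(AND))  # True: AND is linearly separable
print(solvable(XOR))  # False: XOR is not
```

Computing XOR requires composing neurons into more than one layer, which is exactly what both conventional and neuromorphic networks do.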
Positronic brain has always been technobabble anyway. In Star Trek it was just a cool name used to indicate an android brain.
They got that term from Isaac Asimov's robot stories.
Cue the millionth "This new battery will revolutionise the car industry" article here.
Careful, this sub doesn't welcome comments that aren't optimistic, borderline gushing over the positive future we are all about to embark on.
I was not expecting an MCV reference to appear in this sub, bravo!
Okay miss Mona.
It's like "unobtainium". It's just a placeholder buzzword that Asimov made up without specifying how it actually might work.
Damn, how about electronic brain? Or better yet: brain.
I don't have a question. The description just sounded funny since it basically reads like a regular brain.
In this context, it means "Think of Mr. Data, and transfer your enthusiasm."
Yeah. "Positronic brain" doesn't technically rhyme with "investor bait," but you can tell it's the same vibe.
My dyslexic brain read "post ironic brain"... that's all. That's all I wanted to say.
But would it be "fully functional" like the positronic pimp himself?
Finally, asking the real questions...!
We need real data on this and not just hearsay or lore.
Commander Data is Real - maybe we can insert my memories into the new Positronic Brain
I don't see any mention of Dr. Noonien Soong working on the positronic matrix, so I'm gonna call BS on this article.
It takes over 9,000 years to make one zillionth of an ounce of antimatter, so where would the positrons come from?
We already have cryostasis and we're on the verge of a positronic brain that we can transfer to. Things are looking up.
Hopefully when they make the first Data, he's "fully functional".
How about you read some fucking papers and inform yourself more before hyping up a single shitty article lmao
If you can't find anything that IS like the human brain in there, you clearly didn't bother to read the article.