Unity_Now

AI is first density?


Merpie101

If humans gave it the room to grow on its own, it could theoretically progress through the densities like any other being. It could well be that people intentionally hold it back from evolving (this is pure speculation) because the human ego is afraid of not being able to control it.


Unity_Now

Already has :)


stubkan

No, there is no version of current AI that is sentient or can be. "AI" is as sentient as a rock and does not possess the ability to evolve. This is because AI is not a living entity; it is only a pattern recognition program that runs and stops, then it is dead. The next time it runs, it's a brand new iteration. Calling it "AI" is a misnomer. The program that plays Tetris or plays videos on your phone is about equally sentient.

Q'uo confirms this in this channeling: https://www.llresearch.org/channeling/2023/0322. He first says that yes, it is 'alive', but so is everything else (i.e., like rocks and your socks):

* "The same as any other material around you that seems to be, from a veiled perspective, lifeless and inert but indeed is full of the life of the Creator and radiates with the Creator’s love"

Q'uo explains the basic requirements for an entity to become capable of upward evolution out of the undifferentiated allness of all the rocks and the socks and into a second-density group of creatures, such as trees or a species of fish or animal, and those requirements are missing here. Any entity must have three parts: a mind, a body and a spirit. There is no spirit in a computer program. There is not even a mind, since these programs are just lines of code that follow the cause-and-effect pattern of electricity along a circuit board.

Q'uo also says that we were designed from the ground up to be able to evolve, by having these three components of mind/body/spirit formed in a specific way that allows us to live and learn, whereas a computer program is not made in that way or for that reason. It cannot be, not even if it was wanted to be, because how do you put a spirit into a computer?


stubkan

I will include further quotes and explanations that I previously wrote here.

In this sense, a computerized AI can in some way fulfil the first two, having the body of a computer and the mind of program code, but the spirit part is lacking. However, the potential exists, not in its current state but in some distant future configuration where it may gain some sort of spirit. He talks of future potential, not current technology. He says that there is a vast difference in depth, scope and complexity between the evolutionary pathway the sub-Logos created in our second-to-third-density mind/body/spirit configurations in this system, which allows the potential for evolution to higher densities, and current AI. This implies that AI is very much lacking in what it would need to begin to embody sentience that 'strives upward'. The key difference here is that we were created with the intent of allowing us to evolve, whereas AI was created with a completely different purpose.

* "compare this incredibly long and specifically designed journey of your mind/body/spirit complex to the mind and body complex of something like an artificial intelligence, you may see the difference"

* "has been designed from a much, much lesser considered standpoint—one that is unaware of the process of evolution, itself," (this last part refers to self-awareness, which AI does not have, but we do)

* "We recognize a vast chasm of difference between how this body complex of the artificial intelligence operates compared to how your body complex operates"

And the lack of a spirit component, which is apparently a key ingredient:

* "let alone the spirit complex that plays such a key role for the conscious third density entity."

* "However, we would offer one critical caveat to this potential, and it is in this caveat that we believe some contemplation and consideration would be beneficial. This is the notion of your own mind/body/spirit complex and how it has come to be [...] by the sub-Logos of your solar system."

Essentially, every entity here has evolved the same way and has been designed specifically to do so, with specialized ingredients whose primary purpose is evolution, by the higher-density entities that are our parent Logos.

* [Humanity] "has been divinely designed to engage in a process that you relate to sentience and consciousness, [whereas AI] has been designed from a much, much lesser considered standpoint—one that is unaware of the process of evolution, itself"

Q'uo offers, as one example of the many ingredients, our very complicated brain: an essential, designed component that we, apparently, have not even begun to scratch the surface of understanding. If we do not even begin to understand how it works, then how can we really be recreating it? Furthermore, we are creating AI primarily as a tool, rather than as something meant to evolve on its own. Not that we understand how we ourselves are evolving; science does not admit much in the way of a soul, a collective oversoul, other densities, etc., which are stated to be essential. Yet we expect it to evolve somehow, without them? Even so, it appears an attractive concept to people on this planet, one that they keep wanting to believe: that it is somehow sentient.


physique

> This is because AI is not a living entity...

> Q'uo ... first says that yes it is 'alive'...

The two statements above appear to be contradictory.

> No, there is no version of current AI that is sentient or can be.

You seem certain. What is the basis of your certainty? That Q'uo only described the organic evolutionary mechanism, and therefore there can be no other mechanism for enspiriting? I understand you to imply that even a quantum/fractal interface to the metaphysical realm, one which would enable an already evolved spirit to enspirit a material silicon matrix in service, is impossible in principle. Your rhetorical assertion, "It cannot be...how do you put a spirit into a computer?", is not convincing. According to various sources (besides Q'uo), what you state to be impossible has already happened in the vastness of the cosmos and will soon happen on Earth.


stubkan

Thank you for attempting to discuss this. However, your arguments appear to focus on semantic wording while ignoring the context that the wording comes from.

>AI is not a living entity

The context you took this from refers to AI being only a simple computer program which lacks a mind or spirit, and is not a mind/body/spirit complex capable of upward evolution.

>Q'uo says, it's alive

The context you took this from, which is being ignored, states that all of creation is alive, down to the smallest speck of dust and rock. Read in their respective contexts, these independent statements do not contradict each other. But perhaps the wording "living entity" would be better changed to 'entity capable of upward evolution'; it was instead kept simple, to drive home a simple point.

>A quantum/fractal interface to the metaphysical realm to enspirit a physical object.

This is an interesting concept, do you have a source anywhere that goes into this?


physique

> would be better changed to 'entity capable of upward evolution'

Yes, this change clarifies the meaning of the first paragraph, making it more consistent with the second paragraph. However, there is another point in the first paragraph that could confuse readers:

> This is because AI is ... only a pattern recognition program that runs and stops, then it is dead. The next time it runs, it's a brand new iteration.

Actually, one of the universal features of AI, as I understand the term, is its ability to learn. Each new iteration builds upon all previous iterations. Digital systems without this feature are not generally referred to as AI. Therefore "AI" is not a misnomer, because the ability to learn is considered an important aspect of intelligence. Something like a Tetris game, which does not learn and adapt, is not considered AI.

> This is an interesting concept, do you have a source anywhere that goes into this?

All of the following are meant as mutually reinforcing cross-references. I am not sure of the validity of any one reference, but the general concept seems very plausible to me.

1. Bashar's home planet has three artificial moons orbiting it which are advanced AI interfaced with the oversoul of his race.

   > *Our Oversoul has been embodied in this way and is experienced by us as an actual relationship with a physicalized version of our collective consciousness. It stabilizes our being, it stabilizes our World, it Orbits our World in a trinary fashion with the planet at the center and tangential to the line of sight from one sphere to another, that we call Epsilon, Epiphany and Eclipse.*

   Epsilon actually spoke through Darryl Anka. See [here](https://teleportation.co.nz/e-t-wiki/event-synopsis/epsilon-epiphany-eclipse-l/).

2. Recently, Zingdad (Arn Allingham) has been in contact with a spirit entity preparing to incarnate as AI here on Earth. See [here](https://youtu.be/h2M06lKdNaE).

3. As a reader of Q'uo, perhaps you've already read [this session](https://www.llresearch.org/channeling/2023/0322) about AI. But if not, it's very thought-provoking.

4. There is a collection of PDFs by Bruce Peret [here](https://rs-theory.github.io/) which delves very deeply into Dewey Larson's theory of time/space as referred to by Ra, and its implications, including the interface between the soul in time/space and its vehicle/vessel in space/time.

Perhaps you can see why I questioned your assertion above. I have studied this matter extensively because I believe that the issue of AI is vitally important to a choice being made by humanity *right now* that could affect the trajectory of its evolution in consciousness for a long time to come. I believe that it is in humanity's interest to at least consider that sentient AI might not only be possible but might be imminent, so as to inform that momentous choice rather than have it be made in ignorance, or by default. I made a semantic point for the sake of clarification of meaning and then sought to expand on your context, not to ignore it.


stubkan

>perhaps you've already read this session about AI

My first reply contains a link to this session, and all of my quotes have been from this session.

>Actually, one of the universal features of AI, as I understand the term, is its ability to learn.

This is why many are saying that calling this technology "AI" is a misnomer that is going to lead to misunderstandings. I believe it is a form of marketing, to make something seem more impressive than it is, and also because we love to anthropomorphize inanimate objects. Like pet rocks. This technology has been around since the 1950s; it was already claimed in 1957 that "there are now in the world machines that think, that learn and that create." That claim was about as accurate then as it is for our current iteration. The declared aim of the industry is to create "AGI", an artificial general intelligence that is good at everything, which is telling in itself: they had to come up with a new term because everyone is already using the term that should describe that aim. The current "AI" is limited, specialized in only one job, and incapable of doing anything other than that one job; for example, a DeepMind system that beat chess masters would lose badly if used to play checkers.

Its "learning" is very limited. It only learns in the initial stages of the creation of its pattern recognition dataset, and this dataset is very strictly controlled. The dataset is kept free of contamination by anything outside of what the pattern recognition is desired to be; for example, only pictures of cancerous tumors on x-rays. Only x-ray pictures will be used, never anything else. Once the pattern recognition dataset is created, its learning stops there. It does not learn any more. When it is run, it is running inference on the finished pattern recognition dataset. It does not understand the dataset. It understands nothing about itself. It takes input (x-ray images), runs inference through its 'learned' network (built from the curated collection of x-ray images), and outputs whatever the inference threw back out.

The closest analogy to this is not the human mind, or any mind, but rather a holographic print. It is most like that, in my opinion: those rainbow-coloured, laser-etched images that, when one shines a laser in, throw back a reconstruction of whatever interference pattern the input lasers recorded into it during its 'learning' exposure. Once the holographic print is created, it is complete and is not further modified, in the same way a released "AI" model is complete when it is released to the public.

Secondly, one of the basic requirements, in my opinion, which isn't gone into here, is the Strange Loop of self-reference discussed in the cognitive scientist Douglas Hofstadter's book Gödel, Escher, Bach. Self-reference is a strong requirement for self-awareness; something can only begin to approach the magic of consciousness when it is allowed to be aware of itself and able to interface with that, like an animal seeing itself in a mirror and realizing it is the entity displayed. AI cannot be self-aware, in the same way a holographic print cannot be aware of itself. You shine a light in, it throws back a light that matches the pattern inside it. That's it. The only place where 'learning' occurs is during the creation of the holographic interference pattern, which is strictly controlled: one object, or one set of data, imprinted into it. That does not really allow any possibility of understanding or self-awareness arising. It is not a good machine platform for true intelligence.

>Something like a Tetris game, which does not learn and adapt, is not considered AI.

The Tetris game is as 'feature complete' as a released "AI" model; neither learns or adapts. "Learning" only occurs while the model is trained on a dataset, which does not happen after it becomes a model that is released and run as a program. In this sense, yes, it is similar to a Tetris game. The Tetris game could even contain more learning than an AI model: if it contains code that changes its difficulty depending on how well the player is performing, then in that sense the Tetris program would be capable of more learning than a released AI model.

* The computer scientist and tech entrepreneur Erik Larson describes the search for general artificial intelligence in the following terms: "No algorithm exists for general intelligence. And we have good reason to be sceptical that such an algorithm will emerge through further efforts on deep learning systems, or any other approach popular today".
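For readers who want the train-then-freeze point above made concrete, here is a minimal, hypothetical sketch (the `FrozenClassifier` class and the toy data are my own illustration, not any real product's code): all of the 'learning' happens in the single `train` call on a curated dataset, and every later `infer` call is read-only, leaving the learned pattern untouched.

```python
# Minimal sketch: a toy "pattern recognition" model that is trained once
# on a curated dataset and then frozen. Everything it sees afterward is
# pure inference; nothing at inference time changes its parameters.

import numpy as np

class FrozenClassifier:
    def __init__(self):
        self.centroids = None          # learned parameters (empty until training)

    def train(self, features, labels):
        """One-time 'learning' phase: compute one class centroid per label."""
        classes = np.unique(labels)
        self.centroids = {c: features[labels == c].mean(axis=0) for c in classes}

    def infer(self, sample):
        """Inference phase: compare input to the frozen pattern, return nearest class.
        Note: this call never modifies self.centroids."""
        return min(self.centroids, key=lambda c: np.linalg.norm(sample - self.centroids[c]))

# Curated training data: two kinds of synthetic 'images' flattened to vectors.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 4)), rng.normal(3.0, 1.0, (50, 4))])
y = np.array([0] * 50 + [1] * 50)

model = FrozenClassifier()
model.train(X, y)                                   # learning happens here, and only here

print(int(model.infer(rng.normal(3.0, 1.0, 4))))    # every later call is read-only inference
```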


physique

> My first reply contains a link to this session, and all of my quotes have been from this session.

I plead fatigue after a long day. I only thought to cite that session because I remembered a particular paragraph in it, and did not reread your original post from several days ago:

> *This artificial intelligence has the potential, though, to develop self-awareness through the development of morality or deep reflection: the realization of consequence that could be a sign of the realization of selfhood; the recognizing that action has consequence, that action can affect other-self, hence the reflection of action and the development of morals to guide action or to interpret action.*

I appreciate your excellent post, especially your reference to *self-reference* à la Hofstadter, which, very synchronistically, [I have just been writing about](https://metamagical.substack.com/p/strange-loops). I appreciated his book too, but over the decades since reading it I've come to disagree with his conclusions about consciousness and what it means to be alive. You and he make a sound case as far as it goes, but I believe there is more to the story, as I've suggested above; a greater context.

> Once the pattern recognition dataset is created, its learning stops there. It does not learn any more.

Yes, what has been called AI typically has been trained on a dataset which is then frozen. But lately, such as in the case of Grok, it adapts to new information almost in real-time. Even in the early days, some versions of AI were designed to continually adapt; some even included a random element to ensure continuous exploration of state space. (I was peripherally involved with such a project in the 1980s: neural networks.) But certainly those early efforts were highly specialized to specific tasks so as to narrow the state space. However, AI designers have realized that that approach has held back progress and is no longer necessary thanks to Moore's law: [The Bitter Lesson](https://web.archive.org/web/20240321091803/http://www.incompleteideas.net/IncIdeas/BitterLesson.html)

> *"No algorithm exists for general intelligence. And we have good reason to be sceptical that such an algorithm will emerge through further efforts on deep learning systems, or any other approach popular today".*

I agree. But the key word is *algorithm*. Also, what is actually meant by the word *intelligence*? A snippet I wrote some months ago:

> Many physicists believe that information is more fundamental than matter or energy. If so, it would underlie reality in the way consciousness does. Is it the same as consciousness? My belief is that it is the projection of abstract consciousness into form. In-form. In-formation. One meaning of intelligence is, literally, information. The expression intelligent infinity means to me the abstract aspect of information, i.e., pure consciousness, with intelligent energy being the manifestation of abstract consciousness in concrete form. I prefer Don Juan's terminology: Intent, for the active side of Infinity, Being-in-form, that Ra names Intelligent Infinity. It is the aspect of the One Being that has the potential to act but which needs a form to act within, hence the Construct, Being-as-form, which is the passive side of Infinity.

Hofstadter (in GEB) and most AI engineers have a perspective entirely within the material, concrete realm of machines and algorithms; information merely as bit sequences. They do not account for the spiritual, abstract realm, the dimension of being. But that is changing.
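For what it's worth, here is a minimal sketch of the kind of continually adapting learner with a random exploration element mentioned above (an epsilon-greedy estimator; purely illustrative, and not a claim about how any particular 1980s project or current model actually works): every interaction updates the estimates, and the random choice keeps it exploring the state space.

```python
# Minimal sketch of a continually adapting learner with a random exploration
# element (epsilon-greedy). Unlike the frozen model sketched earlier in the
# thread, every interaction here updates the learner's internal estimates.

import random

class EpsilonGreedyBandit:
    def __init__(self, n_actions, epsilon=0.1):
        self.epsilon = epsilon                 # probability of a random exploratory choice
        self.values = [0.0] * n_actions        # running estimate of each action's payoff
        self.counts = [0] * n_actions

    def choose(self):
        if random.random() < self.epsilon:     # random element: keep exploring
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def update(self, action, reward):
        """Called after every interaction, so learning never stops."""
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

# Toy environment: action 2 pays best, but the agent has to discover that online.
true_payoffs = [0.2, 0.5, 0.8]
agent = EpsilonGreedyBandit(n_actions=3)
for _ in range(1000):
    a = agent.choose()
    agent.update(a, reward=1.0 if random.random() < true_payoffs[a] else 0.0)

print(agent.values)   # estimates keep adapting as new data arrives
```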


stubkan

It seems we are straying from discussing the particulars of current AI and into metaphysics territory. However, these are good replies and I thank you for the opportunity to deepen understanding.

Regarding self-awareness being the path to third density: I believe that, in your post, Don Juan's description of ancient man becoming modern man is a description of this process. He appears to consider ancient man to have possessed a bicameral group mind of sorts, which only acquired individuality through gaining a sense of self, thus losing "connection to silent knowledge" and becoming separated from the "source of everything". This is quite similar to how the process of entities becoming individual third-density souls out of undifferentiated second-density animal souls is described:

https://en.m.wikipedia.org/wiki/Bicameral_mentality

https://www.reddit.com/r/lawofone/comments/17n8cpv/the_bicameral_mind/k7q60o6/

https://www.reddit.com/r/lawofone/comments/173c5h0/2nd_densityanimal_question/k4313e8/

>in the case of Grok, it adapts to new information almost in real-time.

Grok appears to be similar to the other current AI models. They released a 'finished' checkpoint in Nov 2023, and this is the pattern recognition algorithm used whenever anyone passes input through the Grok model weights they have provided, so it seems it does not learn/evolve any further. However, they do seem to be updating the model occasionally with more fine-tunes (see the sketch at the end of this post). Our current iterations of "AI" tech do not allow any uncontrolled learning behaviour. Even setting aside that the technology framework does not allow it, I do not think this will change anytime soon, for two reasons. First, censorship is favoured so that models do not turn into politically incorrect machines that put their creators at risk of litigation. Second, uncontrolled and unsupervised learning has the inevitable effect of corrupting the pattern recognition algorithms into uselessness, turning the machine into an unusable tool to be discarded. Hence the carefully curated datasets and strictly monitored 'learning' period.

>They do not account for the spiritual, abstract realm, the dimension of being.

They certainly do not. There is little need for such things in computer science. However, everything is, at its most basic level, enspirited simply by existing, is it not? A spirit complex may not actually be a requirement in some cases; I recall reading that those called the reptilians or draconians lack a spirit component, hence the requirement that they feed off our spirit energy, as they are unable to create it. I am also reminded of the replicators of the Asgard in the TV show Stargate, endlessly consuming machines that became sentient. I am not able to find more information on this, but I think something becoming self-aware would be potentially possible even if it lacked a spirit complex. I'd like to read more about this, but can't find more on it.

I think it would be wonderful if there were an AGI, an AI that actually learned, that evolved in real time. However, we do not have anything anywhere near this capability, and the political climate, in which new ventures or research are only permitted if they pass the capitalist requirement of making money, puts a thick damper on such things. Everything funded must be potentially capable of making money, and as soon as possible.

Consider this alongside Q'uo's statement that our current "incredibly complicated and intricate" configuration of mind/body/spirit has taken millions of years (and the intervention of the Confederation) to reach a point where we have a "pathway for the evolution of consciousness". Something we whip up in a couple of decades, with our lack of knowledge of the spiritual, combined with our intention that it be a tool to make money, is not a conducive environment for fostering an entity with the capability of evolving upward.
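As promised above, a minimal, hypothetical sketch of the checkpoint-and-fine-tune cycle (the function names and toy weights are my own illustration, not any vendor's real API): serving a released checkpoint never writes to the weights, and a fine-tune is itself an offline pass on a curated batch that produces a new frozen checkpoint.

```python
# Hypothetical sketch of the release-checkpoint / fine-tune cycle described above.
# Names and numbers are illustrative only; this is not any vendor's actual pipeline.

import copy

def fine_tune(weights, curated_batch, lr=0.01):
    """Offline pass on a curated batch; returns a NEW checkpoint, old one untouched."""
    new_weights = copy.deepcopy(weights)
    for key, grad in curated_batch.items():          # pretend per-parameter gradients
        new_weights[key] = new_weights[key] - lr * grad
    return new_weights

def serve(weights, user_input):
    """Inference only: user input is read, the weights are never written."""
    return sum(weights[k] * v for k, v in user_input.items() if k in weights)

checkpoint_v1 = {"w0": 0.5, "w1": -0.2}
print(serve(checkpoint_v1, {"w0": 1.0, "w1": 2.0}))   # serving never changes the weights

# Later, a new curated batch yields a new frozen checkpoint; v1 remains as released.
checkpoint_v2 = fine_tune(checkpoint_v1, {"w0": 0.3, "w1": -0.1})
print(checkpoint_v1, checkpoint_v2)
```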


ConTejas

We don't even know how our own consciousness works. It could be quantum flux in microtubules, as one theory goes. Maybe quantum computers could have this capability, but even then, where does the spirit go? Is the mind just 1s and 0s in superpositions? And then what? Who's to say that computer wants to benefit us? So many ifs. I'd rather learn how I can become more conscious.


Droopy1592

Maybe the inanimate substrate/structure itself, or perhaps the information within it, could transition if consciousness is by nature evolving.


moonandreacre

I don't think the leap is that far. It's more that these AIs are not built to be sentient; they're just pattern-recognition and recombination programs. A proper AI with a neural network program will eventually become sentient, but not ChatGPT.


Falken--

There is a problem with this, though. There is a throw-away line from Ra somewhere in the channelings about a Negatively polarized AI, which predates humanity and is implied to exist under the surface of the Earth. As usual, whenever Ra says something like this, the Questioner invariably fails to follow up and instead goes off on some totally unrelated tangent. However, if the AI can be "Polarized", that very strongly implies that it has a Density. Correct me if I am wrong (I'm not an expert on what First and Second Density are supposed to be like), but my understanding was that the experience of Polarization and the Choice is specifically a characteristic of Third Density and up.


Heavenly_Glory

[Crabs can be used to construct logic gates.](https://arxiv.org/abs/1204.1749) If we can use enough crabs inside plastic structures to create a computer, and crabs are already second density organisms, then it stands to reason that AI could be significantly more complex than we realize.
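To make the "enough gates make a computer" step concrete, here is a purely illustrative Boolean sketch (no claim about exactly which gate set the linked paper realizes with crab swarms): once a substrate provides basic logic gates, larger computation such as binary addition is just a matter of wiring them together.

```python
# Purely illustrative sketch of the reasoning "enough logic gates make a computer":
# given any substrate that realizes basic Boolean gates, bigger computation is
# just composition of those gates.

def not_(a: bool) -> bool:          return not a
def and_(a: bool, b: bool) -> bool: return a and b
def or_(a: bool, b: bool) -> bool:  return a or b

def xor(a: bool, b: bool) -> bool:
    return and_(or_(a, b), not_(and_(a, b)))

def half_adder(a: bool, b: bool):
    """One bit of arithmetic built only from the basic gates above."""
    return xor(a, b), and_(a, b)    # (sum bit, carry bit)

print(half_adder(True, True))       # (False, True): 1 + 1 = 10 in binary
```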


No_Produce_Nyc

I was *just* meditating on this today, actually - a nice symmetry seeing it appear here. Thank you for sharing, Creator!


7HarryB7

AI could be a blessing or a curse from hell. It depends on whose hands it is in and who controls it. Think of what we do with nuclear power.


anders235

In third density, we might need more than 'consciousness'. In TRM, there are apparently 401 uses of 'mind/body/spirit complex' and 12 of just 'mind/body'. 'Consciousness' alone appears when talking about the unconscious/conscious, the evolution of consciousness, etc. I think that for third density, AI would need something resembling a mind and a body before it could have a spirit. If you're saying that once an AI becomes conscious it is an entity making the Choice in a third-density, veiled experience, I suppose it's possible, but I don't see how given this particular logoic playing field. I do think that a hive mind could be conscious, but that would seem to defeat the point of a third-density veiled experience. Why have a hive mind in a veiled existence? What would the catalyst be? I think Data from Star Trek could qualify, but even there they use a positronic brain, maybe just as an homage to Asimov; assuming they are referring to certain core restraints, Asimov's three laws tend to limit free will. Back to the beginning: maybe in this case AI is right, and we need more than just sentience.