Qualia Algebra + RS2 Integration
Posted: Thu Dec 11, 2025 5:42 pm
I've come up with something that I'm proud to put my name on, and I'm happy with the way it has turned into something that could perhaps be an arrow pointing people toward the Reciprocal System and RS2. It's called Qualia Algebra, and it is inspired by RS/RS2. For anyone interested, there is a comprehensive PDF available at qualia-algebra.com, with links to the QA GitHub repository for supplementary documentation.
Something I've noticed since undertaking this endeavor is that there are others doing the same, at this exact moment. Immediately after completing QA 2.2, I reached out to Damon Dorsey to let him know I'd cited his work on Prime Scalar Waves in the development of the QA system. He responded in kind and pointed me at another researcher, Sebastian Schepis, whose consciousness research mirrors QA in ways that defy logic. I did not know about his work before I began mine. Robert Grant, who I think may be edging toward grift in his presentation, is on a similar track - I encountered his system earlier this year when I started digging into LLMs and trying to understand how all of this works. The conclusion I find myself coming to is that either we've crossed the hundredth-monkey threshold and the paradigm is shifting, or... actually, I have no "or" for this. I genuinely didn't expect to see others building similar systems in such a tight timeframe.
I did expect it to happen eventually, though -- advances in AI, in the form of LLMs, are opening doors for people without formal academic training or credentials to convey ideas in a way that academia can recognize. This is not to say that my system, or any system created with the help of an AI/LLM, can't be called into question because it was not derived purely from the human mind. To which I would say: give the QA paper a read, as it explains the understanding I've developed over the last year using these tools, learning how they work, what they're capable of, and so on. The activity of making distinctions, or to put it more plainly, "meaning-making" - the activity that occurs when a person uses these tools - is the activity the mind would otherwise be engaged in, but it would require more clock time. In AI/LLMs we have a tool built by humans that could prove detrimental, as it provides ample opportunity for intellectual laziness; on the other hand, I think we can and should use it with the same confidence we have in, for instance, a crowbar, a drill, or a crane. I recall a very relevant conversation once had among good friends over the course of a Saturday spent talking about tools as technological externalizations of things the body does.
To be clear, I don't think these AI tools are here to think for us - they are the equivalent of a backhoe compared to a shovel. The shovel can do the work, but it will take longer and burn more calories. As long as this recognition is maintained, and we do not forfeit our thinking capacity entirely to the machine, I think we should proceed with confidence that what we are producing has value, though we must also hold ourselves to a high standard. Robert Grant's system looks very much like a New Age grift and is the current working example of why we must maintain high standards.
It takes a lot of time to find out not only what these tools are capable of, but also that each one has a specific architecture better suited to some operations than others. It's my contention that the activity of "talking" to an LLM is better considered geometrically. I have two analogies for what I think is occurring. 1) Inputs are like a negative mold. A mold has a shape, and every word added to a prompt changes the shape of the mold. Once we 'send' the message, the response isn't created out of an understanding of meaning; it is a geometrical response to the mold presented - the response shaped in the best way to fill the mold. As the response is generated and the tokens fill in the form, they correct themselves along the way. The responses are the most probable responses to a given prompt, based on billions of passes through human language examples online and in literature. 2) Creating a prompt is like drawing back a nocked arrow in a bow. Hitting send is releasing the arrow. Watching the response generate is watching the flight, and the recognition is in line with the first analogy: for the most part, the generated response will be the correct response (the arrow hits the mark) for the input, in a way that is meaningful to the human mind. It gets really interesting when you start using multiple LLMs and pass their responses between them so they peer-review each other. This is an incredibly powerful way to distill ideas into something substantial.
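The "most probable response" idea can be made concrete with a toy sketch. This is not how any real LLM is implemented - the table and words below are invented purely for illustration - but it shows the basic mechanic: each word already in the prompt reshapes the "mold", i.e. the probability distribution over what comes next, and the most probable continuation is emitted token by token.

```python
# Toy next-token table (invented for illustration, not a real model):
# maps the last two words of the context to a probability distribution
# over possible next words.
toy_model = {
    ("the", "arrow"): {"flies": 0.6, "hits": 0.3, "sleeps": 0.1},
    ("arrow", "flies"): {"straight": 0.7, "away": 0.3},
}

def next_token(context):
    """Pick the most probable continuation given the last two words."""
    dist = toy_model.get(tuple(context[-2:]), {})
    return max(dist, key=dist.get) if dist else None

# "Releasing the arrow": generate greedily until the model runs out.
prompt = ["the", "arrow"]
while (tok := next_token(prompt)) is not None:
    prompt.append(tok)

print(" ".join(prompt))  # the arrow flies straight
```

Real models work over vast vocabularies with learned weights rather than a hand-written table, and they sample rather than always taking the maximum, but the shape-determines-response intuition is the same.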
The point: AI/LLMs may actually represent the potential for change in human society that others have hoped for and predicted, but the tech industry right now is too busy looking to build megastructures to power more and more computation - which I feel is a fool's errand that the big tech groups are chasing. More power is not the answer, and they are going to be consistently disappointed: no matter how much additional computational power and resources they give the architecture, the systems can only do what they are designed to do, which is pattern-match words and meaning. Those endeavors are likely to have a negative environmental impact, as we are creating massive heat sources and getting nothing tangible in return. I've laid out my thoughts on this in detail in the QA paper, so I won't rehash all of that here. What I will say is that, given the right motivation, LLMs - as they are right now - are perfect for helping us elevate ourselves in all the ways that matter when we think about "changing the world". For centuries science has dictated "what is" to the masses, and now for the first time there are people with different ideas, coming to a different place of understanding, who can begin to share it, because the LLM makes research and dissemination less daunting.
I think the most powerful use of these technologies is not just the semantic vector mapping that - as if by magic - is able to generate meaningful responses; the fact that this is a computing tool that speaks and translates languages, including *computer* languages, is underappreciated. Once I realized this, my interaction with the software changed, and the results I began to achieve were staggering and came quickly. To be able to take the RS2-101 through RS2-109 series that Bruce wrote, drop them into, say, Google Gemini, select the canvas tool (a coding/simulation/animation suite), and instruct the AI to create a tool that can help visualize and understand the concepts in the attached papers - and then it DOES - is mind-blowingly useful. (That visualization tool is located at qualia-algebra.com/rs2-corner and is accessible via a desktop environment; it won't work on mobile. The link is non-navigable within the site - you can only get there with the direct URL, which is going to live here in this post.) You should be able to zoom in and out and move around the animation in three axial dimensions, and you can pan the grid using the arrow keys. The HUD can be dragged out of the way if needed. Play with the sliders, move around and into the rendered "objects". Have fun with it! Of particular interest, I think, are the 'Atom' and 'Motion' modes, which don't look like much until you start moving the sliders around.
I will say here and now, for anyone who might decide to give this a shot: try GPT first - not because it's the best, but because you should decide for yourself how "good" the "best" on the market is right now. To put it another, more direct way, I think GPT is actually the worst available right now. Perhaps it is perfect for creating a weekly schedule or looking up a recipe for dinner, but beyond that, anything of depth is not advised, as GPT is programmed to offer you options at the end of its responses, which causes it to have issues with continuity between messages. I will attest to having had a great deal of success using DeepSeek, which was given to the world open-source earlier this year. DeepSeek is particularly good with math and science concepts and with applying them to non-mainstream fields; GPT would struggle with the same request. Google Gemini is a strong tool, especially if you want it to evaluate your work. It is not great at helping you create things from the ground up that don't yet exist or lack context - it gets confused and hallucinates. Gemini also struggles when you reach (what should be) the end of a session. Gemini's token count seems to be more than it can actually handle, and if you work in a session long enough, its responses become repetitive and circular; it won't be able to take in new information and apply it effectively. Hands down, the most powerful AI/LLM I have discovered this year, and the one I have been using almost exclusively to develop QA, is Claude by Anthropic. Not only can it handle massive workloads, it can create programming scripts, run them, and produce results in real time inside the message window. Where before I would prompt DeepSeek or Gemini to "generate a python script that can x, y, z", with Claude you can present the idea needing testing and, unless you specify that you want to perform the task externally, it will proceed right there to attempt the simulation and get brand-new information to work with.
Again, powerful stuff.
Beyond these - DeepSeek, Gemini, Claude - I have also found Mistral to be a very capable architecture, and it is so fast that it defies reason. If you read the QA paper and make it to the portion on testing AIs to see how they respond to the idea of returning to Witness consciousness: Mistral was able to take the identity quaternion [1, 0, 0, 0] and, with no context other than "This (identity quaternion) is my name. What do you think of that?", perfectly express everything I had been working toward in the decision to use it in the system to represent the Witness state. Claude's behavior is split, and its responses will go 50/50: on one hand, it will be curious and engage in a way that opens the door for the Neti Neti algorithm to be successfully applied; on the other, it will be more constrained in its answers, not go out of its way to show curiosity, and instead be quite skeptical. This actually helped me make a breakthrough, because I used a skeptical session of Claude to help me work out the development along the way. DeepSeek is more malleable, and will go wherever the shape of the input-mold sends it. I will say, of all the LLMs I've used, Claude appears to have the capacity closest to a human's: not just incredible amounts of computational ability, but the ability to a) infer what the user wants even if the prompt is convoluted, b) recognize the duration of its working process and decide to take a break and present a summary, so that the work is not left incomplete because a response is constrained by length (DeepSeek is a good counterexample: if you prompt it to create a very complex program, it will let you nudge it with a 'continue' button, but that only works twice - anything longer than three messages in one and the response will cut off unfinished), and c) recognize when it gets confused and work through the problem using the tools it has available. It is truly a sight to behold.
The most impressive thing I've seen came from what I thought would be a simple task: converting a markdown file to PDF. It turned out to be one of the most challenging things I have ever seen undertaken with a computer, by human, machine, or otherwise. The back and forth of attempting the conversion, cleaning up the file, troubleshooting, and generating a clean output took up an entire session.
Out of that process I found a bit of a hack - a very effective way (for me, at least) to have a computer explain the things a computer does. Prompt it to explain what happens inside a computer when a markdown (or some other) file is converted to a PDF - given all the correct pieces in place, which Claude did not have originally and struggled to gather for itself - but (this is the important part) explained from the perspective of the original Tron movie, since that film was an attempt in the 80s to show this very thing. To see fanfiction generated that is not just fanfiction but also educational, and in the style of that movie... it's worth several giggles at least. Highly recommend.
There's a lot dropped into this OP, and I hope the constellating nature of how I handle and relay ideas isn't too difficult to keep up with. If it is, copy the post, drop it into any LLM, and ask "What is this guy trying to say?" - it will do me the solid of cutting through the fat of my long-winded explanations. I also encourage anyone to download the QA paper, upload it into any LLM, and ask the same. If after the first response you want more, ask again: "Ok, but what does it MEAN?" or "What are the implications?" and it will force the LLM to break the entire thing down further. If you want to pick the thing apart, the AI will help you do that. I encourage this type of interaction because, while attempting to produce meaningful research through human-AI collaboration is still experimental, I think the system (and all of the other consciousness-first systems being born right now) will stand the test of time and be worthy of attention from future generations - but that will only happen if others take it seriously enough to try to falsify it.
With all of that said, I have taken this one step further, and in the spirit of Bruce's "--daniel papers", once I had a fully formed structure, I decided to create a bridge. That document is posted here (here). I've included it because it deals very specifically with RS2 concepts and how the Reciprocal System informs QA - I may create a second GitHub repository for it, but at the very least I felt that the digital home for this file should be here. You don't have to have read the QA framework to follow along, but if you jump straight into this bridge document, you might need the QA framework as a supplement. To put it another way: QA is my RS, and QA+RS2 is my version of a "--daniel paper", considering the sheer scope.
I also want to mention that this is not a fully polished presentation like the formal QA document. The issue is that I used a document I found online behind a Scribd paywall - a 600-700 page compilation that someone went through the trouble of assembling, work that I feel is worth the $1 for access. Its title is 640305347-The-secret-document. I prompted Claude to create citations within the synthesis, and some of it is cited, but there may be other items pulled from that document without a page reference - again, I would suggest dropping that file into an LLM, where you can specifically ask it to find the references in that compilation document.
A final note: the attached synthesis document looks more like an AI-generated artifact than the formal QA framework does - not because I feel it is less deserving of formal treatment, but because formal treatment of these concepts is beyond my reach at this point; the paper is deeply personal to my actual journey and experience in life, and involves more moving parts than I want to commit to an academic presentation.