loveofdoing

AGI

Speaker A - 62590 to 137330 (in milliseconds)

Sam. Sam. Sam.

Speaker B - 144190 to 258070 (in milliseconds)

Hello. Hello. Can you all hear me? Yes or no? Hello, Carter from QCU. Hello, Dallas. Hello, Carter. Good afternoon. Good afternoon. So this is going to be a bit more of a directed space. I have a couple of questions specifically that I wanted to get some feedback on that we were talking about in the group chat a little bit, but let me see. QCU, are you going to get your mic? QCU. Hello. Cool. How's it going? So, yeah, I'm trying to do some writing, and I don't know, I feel like my writing process really requires a lot of bouncing ideas off of other people, because I just end up, I don't know, just thinking about things in my own head. And I really don't know how plausible some things are, especially with up to date, I guess. So what did you think about the conversation in chat? I didn't read the last of the stuff that you just wrote about Zipf's Law. What do you think about that, though?

Speaker A - 261560 to 304820 (in milliseconds)

Hi, I was thinking that the chat was taking a pretty good stance towards the topic by focusing on what distribution of things in an environment require words to communicate something in the first place. Like, the distribution of words in a language has to reflect some sort of list of agendas or goals or objects or people that exist inside of the group of people that share that language. And I guess there's a lot of different reasons why you could end up with a certain distribution of words when you record them. And I guess you could end up with lots of different distributions of words that have different biases in what kind of features are preserved the most in the thing you're measuring.
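
For the Zipf's Law point being referenced here, a minimal sketch with a made-up toy corpus: it ranks words by frequency and compares each observed count to an idealized count proportional to 1/rank. Everything in it is invented for illustration.

```python
# Toy illustration of a Zipf-like rank-frequency distribution: count word
# frequencies in a made-up corpus and compare each rank's observed count to
# an idealized count proportional to 1/rank.
from collections import Counter

corpus = (
    "the tree by the river fell and the people of the village "
    "spoke of the tree and of the river and of the fallen tree"
).split()

ranked = Counter(corpus).most_common()   # [(word, count), ...] by frequency
top_count = ranked[0][1]

print(f"{'rank':>4}  {'word':<8} {'observed':>8} {'~1/rank':>8}")
for rank, (word, count) in enumerate(ranked, start=1):
    ideal = top_count / rank             # Zipf's law with exponent s = 1
    print(f"{rank:>4}  {word:<8} {count:>8} {ideal:>8.1f}")
```

On a real corpus the fit is only approximate, and, as the point above suggests, the distribution you measure depends on what the recorded words were being used for.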

Speaker B - 308200 to 330140 (in milliseconds)

Yeah, I generally agree with that. What do you think about the notion that there's almost like a biological or sociological kind of limit to the number of words that a society can have without language?

Speaker A - 336220 to 338452 (in milliseconds)

In a society without written language?

Speaker B - 338596 to 339290 (in milliseconds)

Correct.

Speaker A - 343360 to 395230 (in milliseconds)

I think there's probably some way you could try to bound that one in and make specific claims as to how many words can sustain themselves in a group like that. This is a little bit of a digression, but I remember hearing a long while ago that different languages have different rates of mumbling or incomprehension, where if a speaker emits a perfectly semantically valid sentence, the listener to the sentence will simply not understand it and require the sentence to be repeated or rephrased before they're able to understand it, even if they're both fluent speakers. And I remember hearing that some of the worst languages for this have like an 83% success rate just for the communication of deliberately expressed ideas between two speakers that are facing each other in the same room and talking about the same thing. So spoken language might be very lossy and not be able to express very much information in the first place.

Speaker B - 397040 to 465620 (in milliseconds)

Okay, yeah, I guess that's a different take or perspective I wasn't even really considering. I've heard about the lossy nature of language, I guess in an information theoretic sense. But I was more so, I guess, thinking about whether or not writing in and of itself allows for this: the kind of notion, I guess I've heard, that human populations might have, like, a 250-person limit to the number of people an individual might feasibly be able to recognize in a particular group. And if people are kind of task dependent and whatnot to whatever environment that they're in, especially as a tribe, you can imagine, like, 250 people and then their own particularized survival things they need to do. Those would be kind of limits to what a particular society could do without some form of systemic writing.
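
To put a rough number on the "lossy in an information-theoretic sense" aside: if, purely as a toy assumption, the 83% comprehension rate quoted earlier were treated as the per-symbol success probability of a binary symmetric channel, the capacity works out as below. This is a back-of-the-envelope sketch, not a model of real speech comprehension.

```python
# Back-of-the-envelope only: treat the 83% comprehension figure as the
# per-symbol success probability of a binary symmetric channel and compute
# its capacity, C = 1 - H(p_error) bits per channel use.
import math

def binary_entropy(p: float) -> float:
    """Shannon entropy of a Bernoulli(p) variable, in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

success = 0.83
error = 1 - success
capacity = 1 - binary_entropy(error)

print(f"error rate {error:.2f} -> capacity {capacity:.2f} bits per use")
# Prints roughly 0.34: even a modest misunderstanding rate cuts the usable
# information well below the noiseless 1 bit per use.
```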

Speaker A - 470760 to 484410 (in milliseconds)

Um, I guess you're sort of asking if there's sort of like a biological ultimate end or ultimate conclusion that language would turn towards, with, like, the kind of memory we have, the kind of communicative tools we have, the kind of bodies we have.

Speaker B - 487180 to 535400 (in milliseconds)

Yes. I don't know if I consider it biological necessarily. I think it's more so afforded by the environment. Right. Because language or written language is like an environmental tool that is afforded by the creation of the writing in and of itself. Right. The environment, I guess, affords the ability to create written language to get past this bottleneck, if you will. So I wouldn't necessarily call it like a biological restraint. I guess maybe it would be good to call it that, I don't know. But I am talking about the constraint nonetheless.

Speaker A - 540380 to 644510 (in milliseconds)

Okay. I think this is like, reminding me a bunch of stuff I read a really long time ago, like when I was in middle school or something about the limits of the number of mental objects someone can hold inside their head at one time and stuff like that. And limits to abstraction. Because I read it in middle school, it was probably 2000s era pseudoscience popcorn stuff that would come out in the Scientific American and has no credibility or meaning anymore. But I think the direction I would take this is I would say that you're right to note that the tool of writing lets you take some part of your interior cognitive system and literally project it into your shared environment that other people around you can interact with and directly see after the fact. And I think that that without question expands the number of ideas you can look at at one time. If you're orally narrating an idea, which is sort of what I'm doing right now because I'm sitting on a veranda outside in summer heat without a computer, you can only think about an idea serially and think about the last couple of words you've said or some sort of overarching goal you might have. It's very difficult to reflect on what you were thinking about two sentences ago or two pages ago without the ability to literally look at the marks that you made with your hands at the same time you had the thoughts. And that poses a very serious limit to the kind of, I think, cross comparisons you can meaningfully make if you're trying to think without writing. And to ground that to your question, I think that really does reduce the number of useful words or words that describe different concepts that you can really keep in practice or keep in use within a culture at one point in time.

Speaker B - 646240 to 709170 (in milliseconds)

Yes. And then I guess the corollary or the consequence of that would be that I think ultimately what language, I mean, written language, is allowing us to do is then building higher level concepts or higher level abstractions based on the previous ones. Right. The fact that you now have a more unlimited number of words that you can keep track of now allows you to look at, almost perceptually, the fact that you might need a distinction between certain, perceptually, certain symbols, right. You'll be able to look at, okay, I have a number of symbols that represent these concepts, and you're not even necessarily thinking about them conceptually. Right. You're just talking about symbols that represent these percepts that you're dealing with in some sense. Right. But when you need to then make a distinction between these symbols, or...

Speaker A - 711220 to 711536 (in milliseconds)

A.

Speaker B - 711558 to 728150 (in milliseconds)

Distinction between the fact that this symbol represents a really large number of percepts in your environment, but you need to, what's the word? Distinguish between those percepts. And so you would need a new symbol in order to differentiate between those percepts. Does that make sense?

Speaker A - 730120 to 781770 (in milliseconds)

Yeah, that makes perfect sense. It's like if you have an ideographic language and you have literally the word for a tree, it's very easy to express the idea of a forest by just clumping together a bunch of the sigils used for tree and then bounding them inside of a box or keeping them next to each other in a certain constellation. And we've actually tracked this happening in a lot of different languages. People love to use the repetition or grouping of symbols to represent some sort of shared quality that was not present if you use the symbol by itself. There's like tons of backing for the idea you're saying: that just being able to lay down the symbols in a really clear way that lets you look at the symbols in isolation from the objects they point towards makes it easier to think about what conceptual tools you might be missing or might already have for organizing the things in your environment that you're abstracting about.

Speaker B - 783980 to 789850 (in milliseconds)

And so I guess how do you think dead metaphor relates to that?

Speaker A - 793920 to 866220 (in milliseconds)

That's a free association challenge. I guess the way I would take it is that a dead metaphor would be sort of like a comparison that requires some sort of intermediary step. Like, I remember talking about the construction of words with my partner a while ago and saying that a certain pattern of striped white and green colors had a certain special word for it that meant only that border division between two different textures of those two different colors. And if I said the special word I used to describe this to anyone else that wasn't my partner, they would not be able to ground the comparison to the thing I was talking about, even if I used the word repeatedly in many different situations, because the thing I was describing was a color texture and could be seen in many different kinds of objects. And figuring out the level of abstraction it was on would be nearly impossible without some sort of special clue, like regrounding the meaning by drawing the texture instead of just pointing at a list of objects or stuff like that. Anything else would require a super advanced theory of mind to be able to guess why I was pointing at something by saying the word.

Speaker B - 866390 to 870230 (in milliseconds)

My bad, you broke up on my end. Can you repeat that last bit?

Speaker A - 871800 to 940830 (in milliseconds)

Oh, sorry. I'm probably on a weak internet connection right here. Okay. So I had this special secret word that I'm not sharing, for the sake of making the dead metaphor thing even more explicit. If I pointed at things in the environment I'm in right now and said the word many different times for many different objects, it would be easy to pick up the wrong grounding for the word. Someone might guess it means vegetation or leaf or a place where sunlight and shadow are touching each other, and it might be contextually true in all the things I pointed towards. But there is a hidden extra abstraction which is actually simpler and more useful for a certain kind of goal that would be inaccessible without having a very explicit grounding for what the symbol was pointing towards. Just seeing examples of its use is not enough proof to be able to figure out what the word really meant. And that means that the word could still be in circulation or even memetically used by other people, but it would be a dead metaphor because it wouldn't resolve the actual explicit thing that all of the original cases had in common.

Speaker B - 943280 to 954190 (in milliseconds)

Interesting. Do you have any questions for me? I've just been asking you a bunch of questions.

Speaker A - 955540 to 964130 (in milliseconds)

I'm super curious about the connections you had between those authors in that list on the side of that whiteboard. What did they have in common for you?

Speaker B - 968840 to 996830 (in milliseconds)

I don't know. Their systemic approaches to kinds of abstractions. I like Gregory Bateson for his kind of notion about how a non-rational animal might become more rational through kind of like this idea of a double bind. Are you familiar with what a double bind is?

Speaker A - 1002190 to 1006206 (in milliseconds)

I've heard it in use a lot, but I don't think I've ever read any essays by Bateson about it.

Speaker B - 1006228 to 1083960 (in milliseconds)

Actually, it's kind of like this idea that sometimes an animal can receive signals or environmental cues that are seemingly in contradiction with one another. And so if the two environmental cues are seemingly in contradiction, or maybe they are actually in contradiction with one another, then by trying to pursue both goals for whatever reason, you end up in a state of kind of like psychosis. And the idea being that if you can abstract beyond those environmental cues or goals and find the third option that fulfills both of them, then you are, in some sense, doing rationality. You're doing the act of becoming more rational. You've abstracted away from this almost biological necessity, and you've gone beyond it, right, in some sense, because you've abstracted both of those goals. Does that make sense?
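
A very literal, toy rendering of the double-bind move as it is described here: two cues that conflict over the immediate options, and a "third option" found only by enlarging the option space. The options, distances, and thresholds are all invented for the example.

```python
# Toy rendering of the double-bind move: two cues that conflict over the
# immediate options, resolved only by searching a larger, more abstract
# option space. All options and numbers are invented.

def near_food(option):
    return option["dist_to_food"] <= 2

def far_from_threat(option):
    return option["dist_to_threat"] >= 3

immediate_options = [
    {"name": "approach food", "dist_to_food": 1, "dist_to_threat": 1},
    {"name": "flee threat",   "dist_to_food": 5, "dist_to_threat": 6},
]

# Abstracting adds an option that was not in the immediate stimulus-response
# set, e.g. approaching the food from the side away from the threat.
abstracted_options = immediate_options + [
    {"name": "circle around and approach from the far side",
     "dist_to_food": 2, "dist_to_threat": 4},
]

def satisfies_both(option):
    return near_food(option) and far_from_threat(option)

print("immediate :", [o["name"] for o in immediate_options if satisfies_both(o)])
print("abstracted:", [o["name"] for o in abstracted_options if satisfies_both(o)])
# The immediate set has no option satisfying both cues; the enlarged set does.
```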

Speaker A - 1086330 to 1103340 (in milliseconds)

That makes perfect sense, actually. A little follow up since this is a recorded space and you probably want to do something in the future with it. Do you have any examples of essays that represent this idea you're talking about that you could point towards? Or if you can't find any, would you like to write one?

Speaker B - 1104210 to 1157070 (in milliseconds)

I think Bateson talks about this idea with a porpoise. I think they'd done some naval experiment or something with porpoises back in the 60s or something like that. I don't really know. The link is, like... if you search for it on Google, you can find it. I don't know it off the top of my head, though. If you type in, like, Gregory Bateson double bind, it's somewhere there. What's interesting, too, about Gregory Bateson, though, is he was like a cyberneticist, I guess. And so he was also, I guess, thinking about these things. The double bind, I guess, more so was from the idea of schizophrenia. And in some sense, I guess, that brings us to this idea of Julian Jaynes. And what do you think about Julian Jaynes?

Speaker A - 1160850 to 1170258 (in milliseconds)

I personally have a really weak familiarity. I haven't gone through the canon of all the good cyberneticist authors yet. So you're actually giving me a reading list through this space.

Speaker B - 1170424 to 1316500 (in milliseconds)

Well, I don't think Julian Jaynes could be considered a cyberneticist. I just like him for his ideas on the origin of consciousness. And I guess it kind of ties in really interestingly with Jaynes... I mean, with Bateson. Let me actually add this thing to the... let me add it to the top, just in case anybody else wanted to take a look at it. But what were they saying in that? Julian Jaynes kind of posited this space between humans being more animalistic and what he calls consciousness. I honestly think that he used the wrong name for what he was describing. I think he was really describing self awareness or self consciousness. But he almost posited that human beings would have behaved rather schizophrenically in this time period. And he gives a bunch of reasons for why, and kind of posits this narrative that I find really interesting. But what's really fascinating about his work, I guess, is that he kind of posits that the hemispheres were kind of talking to one another and that eventually there's kind of like this breakdown in this kind of distinct way of cognition, I guess. And then because of that breakdown in the way cognition occurs, human beings become more self aware. And he posits it really fascinatingly early, around the time of Homer and the written language in ancient Greece. And he has a bunch of linguistic reasons for why this might be the case, with the idea that in the Homeric kind of dialogue, there aren't a lot of words that kind of talk about the will of the individual, almost as if the individuals didn't even really have will. What do you think about that?

Speaker A - 1320470 to 1433910 (in milliseconds)

I remember around the time when the breakdown-of-the-bicameral-mind topic was really popular in internet circles, I was really obsessed with deMause's History of Childhood for pretty similar reasons. I thought that the psychohistory that talked about the presence or absence of certain ideas in the history of people talking about childhood was pretty similar, that there were these mysterious voids in people's narratives. And how this early psychohistorian (there have probably been psychohistorians since) noticed there was way more of certain kinds of writings that covered certain parts of what we would think of as universal human experiences and not other parts, and tried to gesture at there maybe being mass, bulk, extremely common repression or absences of certain kinds of internal experiences that became way more common after a certain epoch. And this isn't to say, like, I don't like the bicameral mind idea, but I think that the history of childhood example is probably very useful, because the transition being described in that psychohistory is very recent and it's a lot easier for ordinary historians to go look for texts that contradict or go along with that theory. And I think it's a lot harder to try to build up a theory about the origin of self reflective consciousness if the texts we have to rely on are fragments of recorded oral histories from the transition from pre-writing to post-writing. It's just naturally harder to find confirmation of the idea itself unless you have rare captive populations of people that were completely deprived of normal, developmentally appropriate language use and then had to develop their self reflective awareness in an unusual way. And those people might not be a good example for understanding normal human development.

Speaker B - 1437210 to 1528420 (in milliseconds)

I think you're kind of broaching on that thing that was linked. Who was it that linked it earlier? That wasn't you, was it? I think it might have been... no, it was Teague, right? I think he was the one, but yeah, I didn't link it. But I'd read something similar to that in this one, I think, neuroscience kind of article I read online about this one woman who taught some guy that had been born deaf but hadn't been taught language until his adulthood. And there's this one line in the article that I just really find interesting about how he doesn't like to think about how life was before language. He describes it as very dark. And it just makes me think that there's a really interesting transition, like a boundary transition, between going from having to deal with the world in pure abstractions and then coalescing those abstractions into distinct boundaries with words: taking that abstract embedding space in one's mind or whatever and then having distinct words that refer to particular regions that are then also isolated in the environment, symbolically. I find that interesting.
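
The "distinct words for regions of an abstract embedding space" picture can be made concrete with a small clustering sketch. Nothing here comes from the article being discussed; the points, the number of clusters, and the word labels are all made up.

```python
# Toy version of "words as labels for regions of an embedding space": scatter
# points in a 2-D space, find regions with a tiny k-means loop, and attach an
# arbitrary symbol to each region. Data, k, and labels are all made up.
import numpy as np

rng = np.random.default_rng(0)
points = np.vstack([
    rng.normal(center, 0.4, size=(30, 2))       # three loose clouds of 'percepts'
    for center in [(0, 0), (4, 0), (2, 3)]
])

k = 3
centers = points[rng.choice(len(points), size=k, replace=False)]
for _ in range(20):                              # plain k-means iterations
    dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    centers = np.array([
        points[labels == i].mean(axis=0) if np.any(labels == i) else centers[i]
        for i in range(k)
    ])

symbols = ["tree", "river", "fire"]              # invented word labels
for i, symbol in enumerate(symbols):
    print(f"'{symbol}' labels {np.sum(labels == i)} percepts near {centers[i].round(2)}")
# Coining a new word is, in this picture, just carving one region into two.
```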

Speaker A - 1530310 to 1544220 (in milliseconds)

So, loveofdoing, whenever Teague's neurohelmet is more available for you, you're going to sign up for a course to disable your ability to think in language with transcranial magnetic stimulation for a couple of hours and see what it feels like?

Speaker B - 1547390 to 1584950 (in milliseconds)

No comment. But actually, it'd be pretty interesting, I guess, because I do most of my thinking, I guess almost all of it really verbally. I don't know. So it would be interesting. Even when I try and meditate, I don't know if I do it properly. So it would definitely be interesting. I really don't know if there's a lot of cognitive benefit to it, though. Like disabling thought. I don't know, maybe I'm biased, but I almost think that there's a lot of benefit to thinking linguistically.

Speaker A - 1593700 to 1598390 (in milliseconds)

It would certainly make for a very interesting Erowid trip report, one like no other.

Speaker B - 1600760 to 1615290 (in milliseconds)

Yes. Going on to the next person on the list. Which one should we go to next? What do you.

Speaker A - 1623180 to 1627196 (in milliseconds)

I mean, I hope some people start raising their... well, I

Speaker B - 1627218 to 1634370 (in milliseconds)

didn't really add that many people. LaShawn, are you... Hey, I'm charging my car.

Speaker C - 1635460 to 1639440 (in milliseconds)

Is this the list in the photo there?

Speaker B - 1639590 to 1652790 (in milliseconds)

Yeah. Just trying to think about some things, I guess, and maybe do some writing eventually. And I have QCU here, so I'm using him as my in person GPT. Nice.

Speaker C - 1653720 to 1674650 (in milliseconds)

Awesome. Still some alpha in human GPTs, definitely. Yeah. No, I really wouldn't have much to comment on the names here. Looking at the rest of it, I'm not sure what to think of it.

Speaker B - 1676460 to 1683240 (in milliseconds)

So let's go to Gibson then. Are you familiar with Gibson QCU?

Speaker A - 1687560 to 1690180 (in milliseconds)

Was Gibson like the visual neuroscience?

Speaker B - 1690600 to 1831328 (in milliseconds)

Yes, but primarily the reason why he's on the list is because of his kind of idea of affordances, and I guess to tie the idea of affordances back to Jaynes. So Jaynes kind of has this idea of this thing called an Aptic structure, and then Gibson has this idea of an affordance, right? And so an affordance is kind of like this relationship between the environment and the system, if you will, like the cognitive apparatus or the animal. It's like this idea that things are objects in some sense out there, but the animal can't do whatever, right? It can only do what the environment gives it to do, right? It can't do things arbitrarily. So that's kind of what the idea of an affordance is. And Jaynes's idea of an Aptic structure is more so that through the development of the cortex and the brain more generally, and like the neurological system more generally, there are structures that are kind of developed to do specific tasks, which is kind of different than Jeff Hawkins, who's also on the list, who kind of talks about the neocortex being this kind of generalized system. Right. How do I parse this? But yeah, so the idea is that there are kind of different structures, right, that do specific things, and that there are affordances from the environment that would make an animal do particular things, or there are Aptic structures that have evolved because of affordances in particularized environments and that would cause an animal to do certain things. Does that make sense? I think that was really long-winded.

Speaker A - 1831334 to 1907050 (in milliseconds)

I think I feel like I get the definitions you're putting down. Okay. The Aptic structures are talking about how there are functional structures which in animal studies seem to be used extensively or exclusively or very selectively for specific behavioral actions. I'm not going to say whether it's like instinctual or learned or whatever, just that if you look at an animal that has to climb on trees or whatever, you will notice there are some neurons that seem to exist for the sake of allowing the animal to grasp onto them or something. Right. And affordances would be talking about how the objects the animal can interact with seem to stand out inside of the worldview of that animal. Like when the animal is looking for ways to move around in its environment, if they have the right Aptic structure to grab onto branches, the branches will stand out and be extra salient in the worldview of the organism. And if the animal has not yet learned to swing from branches, if they're a small little crawling infant, they might not see the same scene as containing the same salient features that can be used to accomplish plans they want to carry out, I guess.
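
A minimal data-structure sketch of the affordance-as-relation reading above, with a repertoire dictionary standing in, loosely, for Aptic structures. All of the names and features are invented.

```python
# Minimal sketch: an affordance as a relation between environment features and
# the organism's repertoire (a loose stand-in for Aptic structures). All names
# and features are invented.
from dataclasses import dataclass

@dataclass
class Thing:
    name: str
    features: set

# What this organism is equipped to do, keyed by the feature it requires.
REPERTOIRE = {
    "graspable_branch": "swing from it",
    "flat_surface": "perch on it",
    "small_and_moving": "peck at it",
}

def affordances(thing: Thing, repertoire: dict) -> list:
    """Actions available with this thing; these are also what is salient."""
    return [action for feature, action in repertoire.items()
            if feature in thing.features]

scene = [
    Thing("oak branch", {"graspable_branch", "flat_surface"}),
    Thing("beetle", {"small_and_moving"}),
    Thing("cloud", {"fluffy"}),                  # offers nothing to this organism
]

for thing in scene:
    print(f"{thing.name:>10}: {affordances(thing, REPERTOIRE) or 'not salient'}")

# An infant without the 'swing from it' entry in its repertoire would see the
# same branch but get no affordance from it, matching the point above.
```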

Speaker B - 1911970 to 1914430 (in milliseconds)

Yeah. Do you have any thoughts on that, LaShawn?

Speaker C - 1917270 to 1922050 (in milliseconds)

No, sorry, I was getting back on the road here. So, going further.

Speaker B - 1942330 to 1952330 (in milliseconds)

Yeah. Affordances, Aptic structures, and then how do you think prediction falls into this whole thing, QCU?

Speaker A - 1954510 to 2035540 (in milliseconds)

I think I'm kind of segueing into that already with my own personal theory of intelligence by specifically mentioning worldviews and how an organism needs to be able to perceive things in its environment to use them. I think that sort of connects the Aptic structures to the idea of the affordance. Not only does the organism need to have ecologically selected Aptic structures that allow it to have, like, let's say, if they're a bird, an egg guarding response, the organism also has to have Aptic structures that allow it to perceive its own eggs in the first place, and also perceive things that might be a threat to those eggs, and allow it to perceive... world... no, what is it? Theories of mind of other organisms that could in theory interact with its eggs, so it can threaten or startle or fight them or whatever. And an organism that's missing the biological features to make those perceptions cannot actually make use of any of its bodily features or motor activities without having those Aptic structures first. So affordances necessarily entail prediction, and predictions over pretty long time horizons, I think, actually, for the organism to be able to coordinate its body parts to make a specific action at a certain time to carry out the plan to effect some value, I guess.

Speaker B - 2040060 to 2042380 (in milliseconds)

What do you mean by a long time horizon?

Speaker A - 2046320 to 2105600 (in milliseconds)

I think that maybe more than the next two muscle clenchings of, like, a limb, or more than a couple of heartbeats. These decisions that we see organisms making in the environment around us seem to be motivated by actions that take place over many different neuronal firings. Like, I'm looking at a bird right now, and if I threw a rock at it, it would have already been paying attention to my arm for maybe 100 or 1,000 neuron firing times before its ultimate reaction or whatever. And that seems like a pretty interesting feature and something that requires organisms to have specialized Aptic structures, neurobiology, whatever, that allow them to sustain a plan or a piece of information for longer than one single stimulus response period.
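
To make the "longer than one stimulus-response period" contrast concrete: a purely reactive policy versus one that carries a small piece of state across many ticks before acting. The signal values, threshold, and tick counts are arbitrary, toy choices.

```python
# Toy contrast: a purely reactive policy vs. one that holds a little state (a
# 'plan') across many ticks before acting. Thresholds and counts are arbitrary.

def reactive_policy(stimulus: float) -> str:
    # Decides fresh every tick from the current stimulus only.
    return "flee" if stimulus > 0.9 else "stay"

class WatchfulPolicy:
    """Keeps watching a weak threat signal and commits once it persists."""
    def __init__(self, persistence_needed: int = 100):
        self.persistence_needed = persistence_needed
        self.ticks_watched = 0                   # the state carried across ticks

    def step(self, stimulus: float) -> str:
        if stimulus > 0.3:
            self.ticks_watched += 1              # information outlives the tick
        else:
            self.ticks_watched = 0
        return "flee" if self.ticks_watched >= self.persistence_needed else "watch"

signal = [0.4] * 150                             # a weak but persistent threat

watchful = WatchfulPolicy()
reactive_acts = [reactive_policy(s) for s in signal]
watchful_acts = [watchful.step(s) for s in signal]

print("reactive ever fled?", "flee" in reactive_acts)        # False
print("watchful fled at tick", watchful_acts.index("flee"))  # 99
```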

Speaker B - 2110130 to 2111680 (in milliseconds)

Yeah, that makes sense.

Speaker C - 2112370 to 2156060 (in milliseconds)

I'm a bit curious about the animals having these responses for protecting their eggs and seeking the value of reproduction kind of through the natural selection kind of process. I wonder how reasonable in general, or in what ways, you could justify connecting that to kind of a more generalized intelligence. Certainly I can't imagine an entity that doesn't build a world model, that doesn't, like, have any form of conscious...

Speaker B - 2159880 to 2161690 (in milliseconds)

LaShawn, you're kind of breaking up.

Speaker C - 2162700 to 2199232 (in milliseconds)

Yeah, it might be because I am driving here. Yeah. I don't know how much you can hear me. Feel free to cut me off here. But I guess my ultimate question here is: how does the intelligence that's arisen there, through animal natural selection, really map to more generalized intelligence that may just have a motive as simple as filling in the gaps in existing literature? That's still, like, an intelligent action, but it doesn't necessarily require...

Speaker B - 2199296 to 2206196 (in milliseconds)

Are you talking about building, like, an artificially generally intelligent machine, or...? I don't think we're there in the discourse yet.

Speaker C - 2206298 to 2234880 (in milliseconds)

Okay, yeah. No, I mean, that is kind of where I'm going. But I'm more so asking just about, like, generalizing the intelligence of natural selection to more general forms of intelligence. Is there a reason why there's a strong connection between them, or is there a reason why you think certain things must carry over from the kinds of intelligence we see in natural selection to any form of intelligence?

Speaker B - 2238540 to 2311890 (in milliseconds)

Well, for one, I wouldn't necessarily consider natural selection an intelligence, even if it does lead to animals that might be considered intelligent. But yeah, I don't think we're there yet. I think we're going to approach that in the discussion, but first we need to kind of talk about how human beings are more differentiated, I think, but we'll get there. I was going to say, QCU, about the time horizons, right? When you were using the word long, I thought you meant long, not short. That moves us kind of closer to talking about human beings and kind of what differentiates us more so. Right. Why is it that human beings, in your thinking, are able to make decisions over much longer time cycles than other animals, I guess?

Speaker A - 2314500 to 2403310 (in milliseconds)

Well, I guess I was being really foundationalist by saying that. I think I was making an implied comparison to the run and tumble notion of individual single cell bacteria as a kind of action that is informed by a worldview. Like, the bacteria believes there's a gradient of something it wants or doesn't want that it can avoid or move towards. And that reaction to its environment doesn't happen through a bunch of information-containing state in the organism other than, like, a single protein that deforms a little bit in response to something. Run and tumble motion for bacteria is a decision that happens, like, every single window of opportunity for that decision to be made. But other kinds of decisions seem to last way longer, unused, just floating as options that could be taken for hundreds or thousands or millions of metabolic cycles that a cell can go through. And I think that there's something very specific there that could be used to think about interpreting intelligence in a kind of less contingent, more objective-seeming sense. I'm just saying that there is something very interesting about a bird being able to hold a potential opportunity in its head for seconds or minutes instead of only being able to take immediate responses to anything it senses. And I think I'm gesturing towards that.
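
The run-and-tumble picture invoked here is easy to simulate: run straight, tumble to a random heading now and then, and tumble less often when the last move improved the reading. The concentration field and probabilities below are invented for the sketch; the only state the "cell" keeps is the previous reading, matching the single-deforming-protein point.

```python
# Toy run-and-tumble chemotaxis in 2-D: run straight, occasionally tumble to a
# random heading, and tumble less often when the last move went "uphill". The
# attractant field and probabilities are invented.
import math
import random

random.seed(1)

def concentration(x: float, y: float) -> float:
    return math.exp(-(x * x + y * y) / 50.0)     # single peak at the origin

x, y = 8.0, 8.0                                  # start far from the peak
heading = random.uniform(0, 2 * math.pi)
last_reading = concentration(x, y)

for _ in range(500):
    x += 0.2 * math.cos(heading)                 # "run" a short distance
    y += 0.2 * math.sin(heading)
    reading = concentration(x, y)
    tumble_prob = 0.05 if reading > last_reading else 0.5
    if random.random() < tumble_prob:            # "tumble" to a new heading
        heading = random.uniform(0, 2 * math.pi)
    last_reading = reading                       # the only memory the cell has

print(f"distance from peak: {math.hypot(x, y):.2f} (started at {math.hypot(8.0, 8.0):.2f})")
```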

Speaker B - 2405360 to 2572070 (in milliseconds)

Yes, I think you're getting to the heart of the question that I was asking. And, I don't know, it seemed like you were kind of anthropomorphizing the bacteria when you used the word belief. But that actually gets at, I guess, kind of one of the main hearts or main points of the diagram, right? The whole ego thing. And I guess two of the thinkers that are not on there are, like, Ayn Rand and John Boyd, but everybody, I guess, is kind of pointing at ego, right? And I guess the main question, I think, right, and I think it's going to be the most fundamentally interesting question, is how does the ego develop? Right? And personally, I think that... because if you imagine, right, you can have all these abstractions, right? But going back to this timescale kind of thought, right, you can have abstractions, but the ability to use these abstractions is not really unlimited, right? They're typically more particularized to afforded environmental kind of tasks. But through conceptualization you're able to hold things in your mind and apply them to situations that are not limited to the particular location or the particular time period. And so through the development or the conceptualization of the self or the ego, I think this is kind of where self awareness develops, which is interesting. I think that this idea of written language is kind of one of the tools that allowed for this potential explosion past kind of like this perceptual bottleneck into kind of like this conceptual world, right? And so I think, almost in some sense, through the use of language or written language, human beings were able to kind of start conceptualizing about themselves. And I think almost once the animal, the human being, is able to start thinking about itself conceptually is almost when it gets to start making decisions about itself. And I think kind of that's where the heart of free will resides. What do you think about that?

Speaker A - 2576490 to 2580614 (in milliseconds)

Free will is a kind of loaded term and I typically steer away from it.

Speaker B - 2580732 to 2590010 (in milliseconds)

Yes. Before we talk about the free will problem itself, what do you think about the development of ego in that paradigm?

Speaker A - 2592990 to 2716580 (in milliseconds)

Okay, I think my response to that would be to say that we don't need to go too deep into things that sound spiritual or philosophical here, because we have some really useful tools here already. One of them was that at the start of the conversation there was this idea of, I think, abstraction from examples. And another idea is that writing allows us to collect maybe more examples of things in our environment or more phenomena we care about. I think that writing could be a tool that would allow someone to develop more of a sense of what features of their phenomena come from them specifically instead of their environment, by giving them the chance to literally record more examples of things that are determined by them or their observations instead of determined by their outside environment or whatever. Someone that keeps a journal of every experimental setup they make while trying to record some sort of astrological observation. They want to figure out when some star is going to ascend in the night sky or something. So they take fastidious notes of how they're setting up their equipment every single day and how the setup of their equipment does not quite match the way it was the previous day and how they need to adjust dials and twist knobs on some astrolabe or something. The person that's taking notes in their journal and noticing the rift between the actions they intended to take and the effects that actually played out in the world is building up a chronology of events where their body and their mind were involved in some sort of physical process. They are sort of taking their ephemeral moments of human experience and casting them onto paper in a way that lets them notice themselves later and account for their own physical errors, account for mistakes their assistants make, account for themselves, I guess, in a way that they might not be able to if they were left only with their running, intuitive memory of what they think they've been doing with their body. And I think that does something that contributes towards the end goal you're describing of having reflective awareness, without anything more special happening.

Speaker B - 2719590 to 2765840 (in milliseconds)

The whole entire process I am trying to describe is, like, naturalistic. It's not special at all. I mean, maybe the fact that it occurs at all is special, but I don't know. Um, but yeah, so, like, yeah, that's kind of the way I've been thinking about it for a really long time, that through the... I was just thinking about this one example. Shit, I can't remember the example. What's the example that I was going to give? I can't remember. Say some more words while I think.

Speaker A - 2767730 to 2826450 (in milliseconds)

Sure. This is also reminding me of the Aptic structures topic, in that we could ask if there is some special, some specific Aptic structure that enables self reflection in humans, and whether this Aptic structure might literally be some sort of conglomeration of nerves that allows for the part of us that notices things in our visual field or notices sounds or things like that to also notice the previous activities of itself. And that literal connection could be something that is literally involved in a moment of self reflection. And we are very fortunate, because in the next couple of years we probably can literally do brain scans to try to examine which brain circuits are activating in which orders when humans use things like recursion or try to pay attention to their awareness of their experiences.

Speaker B - 2831350 to 2859920 (in milliseconds)

While I do think that it'll be interesting, I'm almost disinclined to think that there will be one particular specific thing. I think it'll be a conglomeration of probably a number of different features, like neural correlates, that lead up to this kind of self reflective, controlled awareness of the world. Keep talking, though, I'm still thinking.

Speaker A - 2860610 to 2888700 (in milliseconds)

Oh sure, but there's more evil directions we can take this. Like, we could grade breeds of dog by how reflectively self aware they are of their consciousness, and we can explicitly breed more sentient or less sentient animals to suit our worst motivations for animal handling and stuff like that. There will probably be a startup sometime soon that sells a cow which is genetically proven to have no awareness of anything, and that will be a treat to experience.

Speaker B - 2891550 to 2897900 (in milliseconds)

I really don't think that... you said no awareness of anything. I don't know if animals can function that way.

Speaker A - 2898690 to 2901230 (in milliseconds)

Well, I mean, it could be a very expensive cow.

Speaker B - 2901650 to 3034440 (in milliseconds)

No, but I mean, I just fundamentally... maybe you could grow a collection of cells in a lab with no neurological structure, but I don't know if you would consider that a cow, right? You could potentially remove the organs from a modern cow right now, and it wouldn't necessarily experience the world. You could force feed it. And even then, it still has the sense of touch. But I don't know if you could biologically design a living animal that was inherently incapable of awareness. But this does raise another interesting question. Not question, but thing that I've been thinking about, which is this notion that it requires kind of the integration of multiple different sensations in order to differentiate concepts. And what I mean by that is that you can perceive sound or sight, but when you have a concept of something, it integrates the sight, the sound, the smell of that thing, right? And so you're thinking about the thing in and of itself, the thing qua the thing, because you've conceptualized it. And I almost wonder if it requires multiple different sensory organs to even be able to do that. I don't know if it would be possible to do it with one, like, kind of sensory channel, if you will. It might be possible, but I just tend to wonder, right? Like, how many, or, like, is it possible to do it with one sensory channel, or does it require multiple? You get what I'm saying?

Speaker C - 3034810 to 3115220 (in milliseconds)

Actually, I remember Teague mentioning, I believe it was Teague, how when you put your hand on a hot stove, the response to that happens before it really reaches your full conscious experience, in the sense that your external nervous system is able to react faster than your conscious awareness is to react to save your body. In a sense, I think that generally there's the question of how this binding problem works, of all of our experiences kind of binding into one form of consciousness, and how that differs from whether there is any sense that our consciousness or awareness is split across kind of multiple sensory organs before binding together. Maybe you can remove the binding together in a cow, and so the cow doesn't have any kind of single pointed consciousness, or however you want to express that, but it still has maybe the sensory responses, and those may have maybe an unbound kind of consciousness to them, or awareness to them.

Speaker B - 3117030 to 3215298 (in milliseconds)

So I think you're raising kind of a different point, which is an interesting point, right? But I almost think that the point that you're raising is almost a given, right? Typically, it's like the brain, or, I don't know, your heart also has a bunch of neurons, and your stomach has a bunch of neurons, and you're doing a bunch of different types of processing. But typically, when we're thinking about it, we're thinking about the brain and how it's taking in a lot of different signals and then integrating those signals. And, sure, if you genetically engineered a cow without a brain or without the central processing, it'll, in some sense, necessarily be less aware of the world, right? In some sense, you could probably even have really localized neural networks across the animal system that were not kind of wired together in a central location. They were just all operating locally. And you might have a system, like an animal, that functioned similarly, maybe similarly is not the right word, but it functioned in some sense like a living system but was much less aware. I don't know, but I think this

Speaker A - 3215304 to 3270340 (in milliseconds)

is a different question. That's the startup pitch, dude. You've got it. If you figured out some sort of genetic damage you could do to the cow's genome that causes the nervous system to break up into a bunch of smaller sub nervous systems, like maybe even after it's already developed into a mostly cow shaped organism, and you plugged in a bunch of electrodes to simulate the uplink behavior the brain would have had for regular metabolic processes for each individual subneural network, you could have a cow that gets nothing but those growth signals and the things that it needs for the body to stay alive and otherwise has no coherency except for chunks of tissue that are like a centimeter across or something. That organism could, in theory, be said to have nothing going on, because the different parts of the body are not in unity with each other. And it also would have, like, way less actual neural activity, probably, too.

Speaker B - 3272630 to 3298700 (in milliseconds)

Yeah, it is interesting. And I guess it is coextensive with the opposite notion, right? The opposite notion being: what happens when you... or, is it required to have multiple modalities in order to do this whole concept formation thing, right? What do you think about that?

Speaker A - 3300510 to 3347980 (in milliseconds)

Yeah, I think the multimodality thing is a good place for trying to find where physically in a body consciousness is happening, if we define consciousness in terms of the integration of multimodal stuff. But I suspect that it might be really hard to falsify, because individual cells can have multimodal perception. Like, a cell can have personal individual awareness of a chemical gradient, and then also personal individual awareness of a temperature gradient, and then also a special different temperature awareness for the environment being too cold. The signal for an environment being too cold is, like, some specific protein that's reacting, or some enzyme or something. It's so localized, you don't need an entire organism of multiple cells to have that awareness. So I think that cells might be dangerously close to being multimodal intelligences by this definition already.

Speaker B - 3349410 to 3446460 (in milliseconds)

No, I'm not talking about multimodal intelligence, because I'm talking about concept formation specifically, which would be like the fact that we use words to denote specific abstractions, which I don't think bacteria are using abstractions in that sense, even though Michael Levin's work, I don't know, that shit is fucking crazy. So who really knows, right? But yeah, more generally, I'm referring to this notion that, do you think that it would be possible for an animal with only, say, touch, or, like, a human being, like a modern Homo sapiens sapiens, to arrive at conceptual level distinctions with only one modality? Like, if you were to remove touch, remove sound, remove taste, and I know we have, like, proprioception or something like that, I don't know. But if we only had sight as our modality. I guess this is kind of also excluding motion, right? Because I do think that motion, we haven't really talked about it, but I think it's highly interrelated to all of these things. In fact, I don't know if it even makes sense to talk about these things without talking about the fact that perception and motion are in some sense the same thing.

Speaker C - 3447230 to 3497578 (in milliseconds)

So it sounds like there's some kind of combination where we have the systems to do language, like the Broca's area or whatnot, where we have an architecture that allows for the intermingling of our senses and our language processing capabilities together in our minds. I think perhaps, obviously, there are deaf people and people who are lacking senses, and as long as the language areas still develop and have some way of binding together with the sensory experience, people can come to language. So I think it's like you need the language capabilities, the ability to assign symbols, and however that works in the mind, I think you need to be able to mix the sensory experiences with that capability. But I don't know, it seems to me like...

Speaker B - 3497584 to 3502240 (in milliseconds)

I don't understand, what stance are you taking on the position?

Speaker C - 3504530 to 3516100 (in milliseconds)

So I'm just expressing the sense that, in my mind, you need to have the language, the capability to assign symbols to experience.

Speaker B - 3516950 to 3546410 (in milliseconds)

Yeah. My question more specifically is: assuming that you're a modern Homo sapiens sapiens and you have the Aptic structures, Broca's area for language, but you removed a bunch of the sensory channels and only left vision, do you think it's possible for the human being, if they were exposed to the written word, to arrive at concept formation?

Speaker C - 3546990 to 3551230 (in milliseconds)

My answer is yes, as long as they had those two aspects.

Speaker B - 3552530 to 3559326 (in milliseconds)

So you think inherently, just because the system has Broca's area, it would arrive at concepts?

Speaker C - 3559348 to 3570494 (in milliseconds)

More so, whatever the language processing systems are, to some extent. I guess I feel like our senses and everything kind of bind together in a blob.

Speaker B - 3570542 to 3635160 (in milliseconds)

And then the language... I don't know, the Jeff Hawkins, like, SDR thing is that the neocortex doesn't really care about the signal, it just turns them all into SDRs, supposedly. And so if it's just turning them all into SDRs, it doesn't really care about the signal per se. But that still doesn't answer the question of: even if you do have this neocortex that is this really generalized algorithm that's taking in any number of different signals and then just operates based on those signals, the question is really, do you need different channels of signals in order to integrate them into a concept, or can you do it with only one channel? Right. The question isn't... and maybe your answer is, oh, well, the neocortex, because of the fact that it is able to operate on any signal, it can just do it. And that might be a sufficient answer. I don't really know the answer to this question, but I think your answer is kind of not distinguishing between those two different options. Right?
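
For the SDR aside: the sketch below builds sparse binary vectors for inputs from two notionally different channels and compares them with an overlap count that never asks which channel a vector came from. The encoding scheme is made up for illustration; it is not Hawkins's actual HTM algorithm.

```python
# Toy sparse distributed representations: fixed-width binary vectors with a few
# active bits. The overlap comparison is channel-agnostic, which is the
# simplified point here; the encoding is invented and is not HTM itself.
import hashlib

WIDTH, ACTIVE_BITS = 2048, 40

def encode(channel: str, value: str) -> frozenset:
    """Deterministically pick ACTIVE_BITS positions out of WIDTH for a value."""
    bits, counter = set(), 0
    while len(bits) < ACTIVE_BITS:
        digest = hashlib.sha256(f"{channel}:{value}:{counter}".encode()).digest()
        bits.add(int.from_bytes(digest[:4], "big") % WIDTH)
        counter += 1
    return frozenset(bits)

def overlap(a: frozenset, b: frozenset) -> int:
    return len(a & b)                            # similarity = shared active bits

sound_of_bell = encode("audio", "bell")
sight_of_bell = encode("vision", "bell")
sight_of_tree = encode("vision", "tree")

# The comparison never asks which sense produced the vector.
print("audio bell vs vision bell:", overlap(sound_of_bell, sight_of_bell))
print("vision bell vs vision tree:", overlap(sight_of_bell, sight_of_tree))
# With hash-based encodings both overlaps are near zero, so any binding between
# the two 'bell' representations still has to be learned downstream, which is
# why SDRs alone don't settle the one-channel-vs-many question above.
```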

Speaker C - 3637230 to 3671618 (in milliseconds)

Yeah. I think just intuitively I sense that... I'm driving down the road, so I'm seeing a bunch of trees and objects floating by as I'm talking with you, and the process of symbolizing or mapping symbols to all of these objects, it seems to me like it's, like, an amorphous consciousness that is being assigned the symbols, more so than a bunch of individual streams that are...

Speaker B - 3671624 to 3690940 (in milliseconds)

No, I still don't think that you're answering the question. You're answering it based on how you presume it's going on in your mind, right? But my question is, how is it functionally happening? My question is more so, is it possible for it to happen without these other modalities? QCU, what do you think?

Speaker A - 3692590 to 3755200 (in milliseconds)

I think I was stewing on whether this is, like, one of those computability things, and then I realized that it is. Okay, if you stripped out literally everything else and you just have the perception of noise we use for hearing and music, you can do abstraction purely within music without any intermediary step in something else. Visual abstraction is also possible. I think that smell as a sensory domain would be really hard to work with, because you'd have a hard time discovering objects in your environment just by smell that you could use as markers. But you could probably abstract and communicate with most of the modalities we have if you had enough scratch space, if that makes sense. And I think that's because a lot of our sensory modalities have a sense of relatedness, and we can use that to bootstrap comparisons, or we can bootstrap comparison and contrast, and we can use that to build up stuff, sort of like logic gates, probably. And you might not be able to express everything, but you would be able to condense some experiences into abstractions even with just a single sensory mode, I think.
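
One toy way to frame the one-channel question being debated here: group the same objects first by a single feature column (one "modality") and then by all columns, and see whether the groupings agree. The objects and feature values are invented; this only makes the question concrete, it doesn't answer it.

```python
# Toy framing of the question: group the same objects by a single feature
# channel and then by all channels, and compare. Objects and values are made up.

objects = {
    #            sight  sound  touch   (arbitrary 0-1 feature values)
    "bell":     (0.9,   0.9,   0.2),
    "gong":     (0.8,   0.8,   0.3),
    "pillow":   (0.2,   0.1,   0.9),
    "blanket":  (0.3,   0.0,   0.8),
}

def group(items, key_fn, threshold=0.3):
    """Greedy grouping: join an existing group if close to its first member."""
    groups = []
    for name, feats in items.items():
        for g in groups:
            seed = key_fn(items[g[0]])
            if all(abs(a - b) <= threshold for a, b in zip(key_fn(feats), seed)):
                g.append(name)
                break
        else:
            groups.append([name])
    return groups

print("sound only  :", group(objects, key_fn=lambda f: (f[1],)))
print("all channels:", group(objects, key_fn=lambda f: f))
# Here a single channel happens to recover the same two groups; whether that
# generalizes to real concept formation is exactly the open question above.
```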

Speaker B - 3757970 to 3834420 (in milliseconds)

You'd be able to arrive at abstractions. The question is, can you take those abstractions and then arrive at concepts? I think this is the question that I'm kind of asking, right? In some sense, when human beings do concept formation, we have a bunch of things that we see in the world and then we assign a phoneme, which is auditory, right? This is a different channel, right? With sign language, we're going from vision to another vision symbol. So maybe you'd argue that this is possible, right? But there's also a bunch of the sense of touch, right? Shit, one of my friends just showed up. Guys, we got to do this another time, guys. Or I can just leave this on in the background if you all still want to talk about it. Should I leave it on in the background? If you all... I can call you.

Speaker C - 3835870 to 3836620 (in milliseconds)

No.

Speaker B - 3837150 to 3839260 (in milliseconds)

What about you, QCU? You want me to end it?

Speaker A - 3840750 to 3843820 (in milliseconds)

Oh, I'm probably going to drop because my phone is at like 10%.

Speaker B - 3844270 to 3849930 (in milliseconds)

We'll do it for later. Peace. I really appreciate it you all. Bye.
