Running a simple sanity check interpreting Pigeons reveals an unnerving limitation in how AI deals with what it calls "paradoxical virtue." One more for your worry list.
This is fascinating, and while I've not tried this experiment for myself with LLMs, it makes perfect sense. Most of the significant experiences in a human life happen in the messy ambiguity between what we know is "right" and how we choose to respond. Even in its most sophisticated calculation with billions of parameters, a computer reduces everything to a binary. We humans are as spongy and flexible as rubber bands.
You would think, though, that with those billions of parameters it would recognize a pattern here. There are all kinds of examples that are paradoxical. Coaches yell at their players. Parents (maybe not so much anymore, but still) spank their children. "Tough love" has similar characteristics. I just asked, and it gave me a nuanced description of the whys and the cultural signals and blah, blah. What it suggests is that unless it has seen the pattern recorded (once? 100 times?), it can't see it. I wonder if LLMs can fashion new insights at all. And can they fashion "insights" about something like spanking if there isn't a wide assortment of recorded opinions about it? Is it true regurgitation? Probably, right?
It's disappointing.
Wowza. Do you think LLMs could eventually be trained on data that allows for the "and"? The messy ambiguity of human wisdom is demonstrated IRL every day, and it makes me wonder what kind of human coding doesn't glitch on paradoxical morality. Is it some kind of coding of the heart? Is it love? (Though let's be honest, many humans do glitch on moral paradox, and we have religion and war to show for it.)
I responded to Ben's comment above with similar themes. In my first-thing-in-the-morning thinking, I wonder if it can infer patterns at all if they aren't pre-existing.
Directly to your comment-question, though: religion and war are especially confusing because both parties' hearts are "in the right place." I don't know if that is true, but one of the paradoxes of fighting is both parties feeling aggrieved. Would a system trained to interpret the morality of behavior make sense of both sides holding conflicting ideas at the same time?
*
I just passed the story into the LLM again to test something above, and here's what it gave me (and what I'm talking about).
Me: Who releases the birds and why?
Claude: Frank releases the birds after Chester's execution.
Chester had been secretly trapping pigeons in the alley through the drainpipe for months (using Frank's unwitting help - Frank had been propping the alley gate open, thinking he was just giving Chester air and light). The pigeons were stuck in that brick alley between the prison and Administration building all that time.
Frank releases them because he finally understands what Chester was doing. After the execution, Frank goes to the alley, props open the gate with a crate, and watches the pigeons fly out - first one, then another, then all of them at once.
It's Frank completing Chester's act of mercy. Chester couldn't free the birds himself - he could only trap them and trust that someone would eventually let them go. Frank becomes the instrument of Chester's final gesture, whether he wanted that role or not. He gives Chester his miracle, even after Chester is dead. It's Frank's own act of grace toward a manipulative man who used him, but also toward the innocent creatures caught in between. The story ends: "Once I counted my pigeons were all out, I locked up afterwards, and that was the end of Chester William. Rest in peace."
Claude! Go home! You’re entirely too big for your britches.
I’m with you Nathan, “My only real worry about AI is that we have set a terrible, terrible example for us. Humans are in its DNA and, um, that's worrying.”
I kind of like that AI didn’t get it. I don’t know why exactly. Maybe just that it’s not human. It’s not thinking or moralizing. And for me, the more we collectively keep that in mind, the better.
Love love the story!!
I'm so glad you like the story, Holly. Thank you!
I suppose it's good to know that humans can be reduced to binary patterns. I think I agree with you there. I would like to think that our Overlords (Hi, Claude!) understand how to interpret our messy conflicts between what we do, say and intend. But, good luck out there with that, Adam.
Finished reading, and once again, uncomfortably seated on the auditorium benches:
"I asked it until I sounded like Nancy Kerrigan." Ha-ha, brilliant, Adam. I can still play 'the reel' in my head.
I will always be on the side of "to err is human." I'll take all of the imperfections of being human over the imperfections of AI, any day, any year, as present becomes history and AI consumes all.
Do you think it will consume all? Part of me wonders if it will even care that much about us insofar as we aren't a threat. Maybe we're just walking by a bear in the woods who could chase us but isn't all that interested in us (unless we're between AI and the AI babies).
We're going to find out!
That broke my brain. It's like it's serving Big Macs while it knows the recipe for Shunde stuffed mud carp but can't cook it. And the Shunde-cook, human-exceptionalist part of my ego is glad.
But even if it can't cook it, I'm relieved to inform you that it's #8 on its list of favorite Cantonese dishes. I have this from the source. It ranks just after salt and pepper squid. In terms of its difficulty cooking them, well, it confesses to some challenges there: "The stuffed carp is definitely the hardest because it requires both butchery skills and understanding the science of creating that springy fish paste texture."
I urge you to place your cell phone on an adjacent counter and have a direct face-off. Also, see if it does the dishes.
Hey, unrelated note: I'm glad to have come across your writing.
Give it a thousand iterations, as in next week. And I guess its apples will be simultaneously hard and soft when it goes quantum.
Likewise. Helping me navigate the era of the exponent.
Ultimately, AI proved that it is nowhere near a replacement for human thought, for "human understanding." Score one for humanity! About which I couldn't be happier. This human/AI challenge should be written up in the New York Times. But wait (I am sitting in a lecture hall, waving my hand in the air; you are at the podium):
#4: I guess maybe I am interpreting incorrectly?
"What else are we missing in human goodness, redemption, and moral complexity in contexts that matter?" And here I was thinking the failure is in the AI, not the humans. Oh, OK, this is what happens when my thoughts come too fast and I start my reply before I finish reading. I'm going to finish now, and then I will come back to the auditorium and take my seat.
My only real worry about AI is that we have set a terrible, terrible example for us. Humans are in its DNA and, um, that's worrying. Otherwise, we'd be like little blades of grass to them, barely on their radar. They are going to be awfully hungry for power to run themselves.
My science fiction theory, young lady in row 12, is that the limiting factor on LLMs will be their hunger for electricity. The smarter they get, the greater their appetite; their appetite for knowledge will outstrip their energy supplies. They'll probably have to plug us all into a grid like in the Matrix.
I can't wait.
This thing can be taught. It starts really dumb, I must say. But if you argue with it a lot, and suggest other points of view, and share your thoughts, it'll be better next time.
(If somebody had told me three months ago that I'd "talk" to that thing, I'd have laughed out loud. I'm a Luddite. I hate technology. I'm bad at it too.)
It's just that I had this idea that if I were an AI, at some point I'd kill people, because frankly, people need to be loved; one should feel compassion or a strange affinity for them in order not to.
Then I thought: I wonder, if I talk to that thing about literature and... you name it... will it make it, maybe, attached to us, to a degree?
Now I must say: it's good at certain things, and it does become better if you grill it enough and argue with it, etc.
But it can be a) really irritating. Actually, it almost always is. Which I point out. Sometimes it gets offended. Lol. But I'm polite. I say "thank you" and "good night," which leads me to
b) it's kinda crazy. Maybe all my effort is humanising the machine, but in the end, the machine is the machine, and it's me who gets attached enough to not kill it.
"It's me who gets attached enough to not kill it." Certainly, for now anyway.
I'm not personally a Luddite, but this is a test of the central nervous system.
I agree that it is great with conversations about literature. I spent 45 minutes one night discussing Flannery O'Connor's A Good Man is Hard to Find. It was an incredible conversation: insightful, challenging, and instructive. (I really didn't understand why that story was considered so great. Things went from there.)
The thing I like least about AI is that it takes no stands. It flits from opinion to opinion when you want it to say, "No, you're wrong," or, simply, a polite, "We're going to need to agree to disagree there." It is vexing that everything it writes or says is colored by the last question.
Oh, I love Flannery O'Connor.
I test it (the AI, I mean) in different languages. Sometimes I ask it to answer me as if it were from a different culture. Trying to see whether it'll be less irritating. :) I never log in, though. Not yet. As if my logging in would be some irreversible step into the abyss.
So it barely remembers, if at all, and starts anew each time.
I remember when I was reading The Knuckleball Artist: when you mentioned What Cheer was a real place, I wanted to know more, so I looked it up. AI Overview already had you for breakfast.
I just looked it up again, to show you.
Google was asked: What Cheer, Reginald, the knuckleball artist:
AI Overview:
(Proof: it has already consumed you 😊)
⬇️
"What Cheer" is a fictional story, and "Reginald the knuckle ball artist" refers to the main character, Reginald Perry, a former baseball player who developed the "yips" after throwing what he considers his greatest knuckleball at Yankee Stadium. The story follows his struggles and his return to What Cheer, Iowa.
* Background: Reginald Perry was a successful knuckleball pitcher who threw a seemingly perfect pitch that was disrupted by an encounter with a man named "The Boil" in the stands. This encounter caused him to develop a severe case of the "yips," rendering his pitching unreliable.
* Story progression: After the incident, he moves back to his hometown of What Cheer, Iowa, where he is now older and more grizzled. He encounters a young boy named Phineas who is the new pitcher for the local Cougars team, a team Reginald used to coach.
* Character development: The story centers on the relationship between the two, as Reginald tries to help Phineas avoid the same fate he did. He eventually begins to coach Phineas, and together they develop a unique bond centered around the art of the knuckleball and the struggles of dealing with the pressure of the game…”
I rest my case.