It occurs to me that Trump's Council of Economic Advisers might deliberately write documents to cause maximum pain to readers, with the goal of obfuscating any information therein. If your career, interests, etc. involve knowing what's in those documents, of course you'll want some way to reduce the pain.
Relatedly and differently, as a writer myself, I take this as a call to take pains to make my own writing less painful to my readers, that they may feel no need nor desire to feed my writing into an automatic re-processor to extract information I ought to have made clear. If I can't nourish and spark joy, I'll make myself obsolete, replaceable by AI.
I guess that's the corollary: there are certain types of writing/info where joy-creation is essential (fiction, short & long-form "popular" non-fiction), but others where the information is important and cannot be delivered with joy ... or at least not for everyone. So, joy-type info creators will have to continue being good at joy, but AI can sit as a layer over painful info that helps deliver the content in less painful ways?
Maybe. I'm anti-AI in the cases where it's unnecessarily and poorly replacing human art, knowledge, and interaction. But the potential good use I can think of is pretty much the one you've given — to reduce boredom and save time. Depending on what someone's job is, the amount of good they can do in the world may hang on processing information ASAP and acting fast on it. I inhabit a deliberately slow-paced life so I do my own reading (lots of it) and work differently in that regard.
Years ago I used to take notes at committee meetings, which was migraine-inducing for me. I never got better at it. Even in highly structured meetings, participants sometimes seemed to me to make irrelevant comments, and I couldn't always read everyone's voice tone and body language, nor did I necessarily know the history or context of what they were saying. I would take notes in pen and later try to type them up. I'm aware that now people use AI transcribers. There could be any number of concerns about that, but I do sympathize with the desire to use them, and I recognize that an AI might take better meeting notes than I could. Some of us make great secretaries for certain organizations, but some of us really struggle and just want a shortcut through the migraine.
I shall have to think more about the division between joyful/painful content. For me, there's a lot of overlap. First of all because my art is often of the morosely obsessive type. Lots of artists like to be unhappy. It's, like, the point of the art, and without unhappy art, no joy. Another type of overlap is that some of my intellectual interests involve doing deep-dives into hate-reads (i.e., reading things that I already know are fake and will infuriate me). I'm currently halfway through reading a 400-page government PDF that is really, really painful and also fake, but since this is my area of expertise, it's important to me that I be the human to point out all the ways in which it's wrong. I'm reluctant to feed this sort of PDF into an auto-summarizer, as I anticipate that a summary may make the terrible PDF sound more authoritative and reasonable than it is, especially if it strips out all its hallmarks of fakeness (buzzwords, name-dropping, internal contradictions, etc.). I am the one who recognizes those hallmarks. When I'm doing the painful hate-read, it's because I want to take the inventory of all the bad stuff I see and point it out to others, and not have it cleansed for us.
But in other cases, were I still a secretary for an organization, I could see myself wanting to save 20 hours of migraine by having an AI transcribe the annoying meeting and then just asking the AI what the transcript means so I can understand the meeting I just sat through.
On the third hand, if I had an AI doing my note-taking job for me, I'd miss out on the (painful) discernment process of which jobs I'm bad at and therefore shouldn't persist in because there is more joy to be found elsewhere.
Yes, I think you are right, and that's the reason for the x and y axes: pain/joy but also high/low value.
The most obvious place for this AI use case is high pain, low value. In the high pain, high value quadrant (such as your government PDF, where the value is to your work or a project), there's more of a judgment call to make. One thing I use this for is deciding whether I want to dig further into a document.
I like the nutrition metaphor! Tbh sometimes the difficulty and pain is what I'm craving, but certainly not in professional contexts. 😅
Yeah there’s a question about how much suffering is required for real learning / progress. But I think lots of valuable things are stuck inside formats / containers that are more painful than they need to be. Maybe if learning should be about building things, then that’s where the hard things should happen. I think I have another article to write :)
Yeah, I think there is plenty of needless suffering in learning that is ripe for weeding out. For example, this humanities tech transplant is learning engineering workflows so much more quickly with the help of Chat. The sort of struggle I like is with dense old texts for intellectual pleasure, not trying to grasp practical concepts! I like your idea of transferring the struggle of learning practical concepts to the building stage; will stay tuned! Thanks for the great writing! 😀