Calling Sarah Connor
Mar. 15th, 2023 10:00 pm
I am getting angrier and angrier with ChatGPT (and other AI text creation) and angrier and angrier with being asked to edit text that began life in its knobs and diodes.
First, let's be clear: AI TEXT IS STOLEN TEXT. Maybe not legally, but definitely morally. We haven't seen as much outcry because it's harder to see a writer's style than an illustrator's style. (Oh, it's there. I definitely have unique styles for my technical writing and my historic writing, and there's definitely an overarching style that connects them together. I could even articulate it, though I suspect fewer of my readers could.) The thing is, the style is there. AIs are stealing it. They're stealing the words and sentences and paragraphs and the research and concepts of authors, stuff that's not necessarily protected by our precise IP laws, but that nonetheless is a comprehensive and expansive body of creativity.
If you're angry that AIs can now mimic living artists, you should be angry that they can mimic living writers too.
But beyond that, AI produces text that is empty and vapid. It's like the AIs were trained mainly on marketing slogans and click-bait sites (and maybe they were!). You can read paragraph after paragraph before you realize that (like a click-bait site) it's not actually saying anything. AI text is full of generalities and weasel words.
Beyond that, AI dissembles and lies, making up facts worse than a college student trying to fill a blue book after a semester of sleeping through the class. I read about a reporter who discovered from an AI that he'd founded a company he never founded, and worse, that he'd died in 2019. I asked ChatGPT to tell me about myself, and it claimed that I worked on several game systems that I never have. I reloaded the info, and it told me about DIFFERENT game systems, but it was nonetheless still wrong.
One of the things that makes me angry is that I can't properly edit the mess that AI produces. Not that I really love editing work from other people in any case, but at least with humans, they're just unclear or using bad grammar. With an AI, you have to really dissect what it's saying, because so often it isn't saying anything, and you have to check every single fact (every single word!) for correctness, and only then can you try to rebuild from that kernel (if there is one!).
If it's not obvious, it takes two or three or four times as long to edit AI text as something written by a human being. I've a few times let AI text get by me that was still bad because I was lulled into a false sense of security by its empty words. They fit together well, so it seemed like it was saying something. Only afterward did I realize how bad it still was, and that made me pretty angry too.
And I have no interest in training this skill. AI IS NOT A TIME SAVER. At least as it stands right now, AI text is a time waster. It produces garbage. It takes more time to ungarbage it than it would take to write something fresh in the first place. And unless you've gone the whole distance and multiplied that wasted time by a factor of three or four, you end up with text that's crap anyway.
Finally, c'mon, can you not be a little terrified of the AI that lied to a TaskRabbit worker to get them to fill out a CAPTCHA (https://gizmodo.com/gpt4-open-ai-chatbot-task-rabbit-chatgpt-1850227471)? I mean, OpenAI flagged it as "Potential for Risky Emergent Behavior" and then blithely released GPT-4 into the world anyway. I don't think we've got killer AIs around the corner quite yet, but there is such potential for spamming, scamming, and grifting in OpenAI's immoral and thoughtless work that _someone_ should be putting the brakes on it stat.
Oh, hey, AI scientists are now saying there's a 10% chance AI will destroy humanity. Good times.