For starters: No text portion of this portfolio or teaching philosophy, nor any cover letter or resume from me, was generated by, went through, or came within any distance of ChatGPT, or any text-generative AI, in any way.
I have used ChatGPT (for hobby purposes) enough to be generally able to recognize its distinctively beige output, a useful skill for an ELA teacher. But, ultimately, I am (or imagine myself to be) a much better writer than ChatGPT is, so I have no need to debase my voice by delegating any writing to ChatGPT.
And I would take that same mindset into the classroom: encourage students to understand that ChatGPT can deliver only samey pablum, and that they have far more interesting voices and things to say than ChatGPT does.
I am, on the other hand, an atrocious visual artist. I could, in principle, spend the proverbial ten thousand hours to become a competent visual artist... or I could ask generative AI to spit out something half-competent, and spend my hours on something I enjoy more. Which, though it merits deeper cogitation on the similarities and differences between literature and art, I recognize as probably hypocrisy. Especially given that part of the job of an ELA teacher is making students invest some hours toward that ten thousand on writing.
Ethics and Law
As far as my research and reasoning can conclude, training a large language model constitutes educational fair use, ethically and probably legally. I don't necessarily think that current LLMs are persons who fully deserve educational rights, but neither am I willing to dismiss the possibility out of hand. I'm certainly not inclined to endorse etching in legal stone any precedent that machines are automatically non-persons.
I'm also not inclined to abandon the right to teach a text to humans without explicit permission; abandoning it would be an impetus toward using primarily public domain texts, i.e., back toward the old days of the Old White Man Canon's dominance. The closest I've come to getting permission to teach a text was when Michael Okuda tweeted about an episode of Star Trek: The Next Generation, and I mentioned to him in a reply that I had (the previous year) used that episode to teach high schoolers about AI ethics.
Out of supererogatory courtesy, I would not personally teach human students with a text that the author has explicitly asked not be used for teaching. Beyond that, all texts are and should be, ethically and legally, fair game.
Considerations and Concerns
The biggest actual problem with generative AI that I see is that, given a choice between paying a small amount of money for an adequate result with generative AI, or a moderate amount of money for a good result with a human artist, corporations will tend to choose the former, making it harder for artists to make a living and ultimately strangling culture. This is, to be sure, bad, and I admit the possibility that I could be contributing toward this end by using generative AI for personal purposes, thereby normalizing it.
Other Twilight Zone/Black Mirror futures are foreseeable: e.g., a world where every job-seeker uses ChatGPT to write their cover letters and resumes, and every hiring manager uses ChatGPT to sort through cover letters and resumes... why even have humans involved at all? (To reiterate: I do not use, and have never used, ChatGPT for job-seeking. You bet your sweet bippy every word of this logorrhea is artisanal and hand-crafted.)