“Most of us are psychologically unable to truly imagine living in moving history,” writes economist Tyler Cowen. And by “moving history” he means dynamic history, disruptive history. The sort of rapid change coming with artificial intelligence (AI). Are Coloradans making themselves ready? Am I?
I wrote every word of this essay, except for one specific quote, without the aid of AI or Large Language Models (LLMs), I swear! However, I did use AI for some of the research, as most people have been doing for quite a while. Although Google only recently began including an “AI Overview” with many search results, its searches have been AI-driven at some level for years. I used both Google and ChatGPT for research here. LLMs do take AI-driven research to a different level. Some of the news articles cited here I saw first in my usual feeds.
After I finished writing this column, I tried an AI experiment by asking ChatGPT to write an 800-word essay, in my voice, about how Coloradans are adapting to LLMs. Interestingly, it refused to mimic my voice because that’s against the policies of its owner, OpenAI, but it offered to write a column matching some of my “high-level stylistic traits.” You can read the results here, along with my interactions about background research. Chat’s article bears little resemblance to what I wrote, and it has that generic-sounding LLM voice, but it’s okay.
I’m not going to discuss further the intense fights over AI-related regulations involving the legislature and the federal government. I haven’t yet spent enough time reflecting on those debates to form a robust position. As you can probably guess, I tend to approach regulatory issues from a libertarian disposition. At the same time, I’ve become more open to some regulations when they are tightly formulated to protect people’s rights. So, although I’d be surprised if I ended up siding with the regulators here, I’m surprised fairly often. Here I’m going to focus on other aspects of AI.
Some scary cases
The Denver Post’s Elliott Wenzler reported last September, “Two lawsuits filed in Denver District Court this week allege that artificial-intelligence-powered chatbots sexually abused two Colorado teenagers, leading one girl to kill herself.”
Google and Character.AI settled the suit and others in January, the Washington Post reports. Parents of the Denver girl, then 13, “allege that she took her own life after extensive conversations with AI companions on Character’s app,” reports the Post. Scary. The companies’ response basically has been to beef up safety protocols.
When I asked ChatGPT to give me updated news stories about the case, initially it pointed to a Los Angeles Times article, but that article doesn’t specifically mention the Denver case. So I asked Chat for a link to a story that directly discusses the case, and it gave me an article from the Washington Post News Service published by the Maryland Daily Record. I figured Chat probably had hit a paywall at the Washington Post, so I looked up the original article through the Denver Library to confirm the contents. I find that Chat often points me in some useful directions but needs a lot of hand-holding.
I want to acknowledge the horrible nature of such cases while gently urging parents to pay attention to how their children spend their time. While we can recognize the legitimacy of certain tort actions and perhaps of some regulatory guardrails, at some point we’ve got to lean into parental responsibility. We shouldn’t expect government or AI corporations to parent for us.
Denver schools block ChatGPT
Denver Public Schools blocked students’ access to ChatGPT, reports 9News. A January 9 letter from DPS raised concerns about collaborative chat, which presents “increased opportunity for cyberbullying, student data exposure, unmonitored interactions, academic misconduct, over-reliance on the tool and much more.” DPS also expressed concerns about potential adult content.
However, this is a service-specific restriction. DPS is not against LLMs in general. The letter continues, “We encourage the exploration of AI tools by our students and staff, and have alternative tools where they can do that safely. Google Gemini is the district approved AI tool, which follows all of our safety and privacy rules. Students and staff can access it through their DPS accounts.”
I asked Chat to evaluate DPS’s claims about itself. It said, in part, “These claims are predicated on hypothetical or anticipated risks rather than documented incidents specific to ChatGPT in DPS. They echo broader concerns some educators have expressed about AI tools, but they are not evidence-based findings of harm within the district.” Seems like a fair reply.
University of Colorado embraces ChatGPT
“The University of Colorado is giving students and faculty across all four of its campuses access to ChatGPT,” Axios reports. The school is paying $2 million for access.
CU’s Michelle Ames told Axios, “By investing at the system level, CU is helping remove barriers and ensuring that all members of our community can engage with these tools, regardless of discipline or background.”
I asked Chat to point me to other sources about how Colorado colleges are handling AI. It came up with some good references that I would have struggled to find otherwise, including some policy statements from various campuses.
Coverage of an AI conference by the Rocky Mountain Collegian quotes CSU president Amy Parsons, “It’s true that the AI landscape is moving so fast that it can cause a lot of angst and anxiety. At CSU, we don’t see that uncertainty as a reason to hesitate; we see it as a reason to lead. Our job is to lead in understanding our needs, understanding those use cases, and to move swiftly to putting them to work.” The school is partnering with Microsoft.
The limits of LLMs
Krista Kafer opens her recent column on AI with some humorous (perhaps disturbing) examples of AI gone awry, such as the time the Weather Service posted a weather map with AI-hallucinated towns. Brad DeLong offers other examples.
In 2023, ancient history in AI terms, a Colorado lawyer was suspended for submitting briefs with hallucinated citations, Colorado Politics reported.
The key to successfully dealing with LLMs is simply to remember that they are just probabilistic text generators “trained” on billions of pages of text. LLMs are not conscious, do not have values, do not care about you, and do not care about the truth. They are, however, extremely good at generating text that is coherent in a given context.
Because they “riff” on context that you provide, they tend to sound sycophantic. If you think, “Wow, this LLM really gets me,” that’s because it’s extremely good at filling in text based on your prompts. If you think of an LLM as a friend or a person, you’re making a big mistake. You’re making the same mistake if you don’t skeptically check all LLM output.
I’ve heard LLMs described as confident idiots, akin to Gilderoy Lockhart from Harry Potter. That’s not quite fair, because Lockhart mixes truth with intentional deception. When LLMs hallucinate (make stuff up), they’re not trying to deceive; they’re not trying to do anything in any teleological sense. They’re just generating plausible text based on their training and the prompts you provide.
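If you want a concrete picture of what “probabilistic text generator” means, consider the toy sketch below, in Python. It is vastly cruder than a real LLM, which predicts tokens with a neural network trained on billions of pages rather than with simple word counts, but the core idea of sampling a plausible next word is the same:

    # Toy "language model": count which word follows which in a scrap of
    # training text, then generate new text by sampling from those counts.
    import random
    from collections import defaultdict

    training_text = "the cat sat on the mat and the dog sat on the rug"
    words = training_text.split()

    # Record successors: follow["the"] becomes ["cat", "mat", "dog", "rug"].
    follow = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        follow[current].append(nxt)

    # Generate: start with a word and repeatedly sample a likely successor.
    word = "the"
    output = [word]
    for _ in range(8):
        if word not in follow:
            break  # no recorded successor; stop generating
        word = random.choice(follow[word])
        output.append(word)

    print(" ".join(output))  # e.g., "the dog sat on the mat and the cat"

Run it a few times and you get different, locally plausible strings. Nothing in the program understands, values, or cares about anything; scale that idea up enormously and you have the gist of an LLM.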
Some personal examples
When I created the website for Secular Homeschoolers of Colorado, I used ChatGPT extensively for editing, and it even offered some content ideas. Chat encouraged me to break up long blocks of text into multiple pages, avoid unnecessary diversions into controversies, tighten up some of the phrasing, and add some content here and there. I wrote all the drafts and critically reviewed all of Chat’s advice, but in many cases I updated the site following Chat’s recommendations. I relied on Chat more for that than I do for my columns, because for the site I wanted to avoid imposing my own voice too strongly, whereas for columns maintaining my voice is essential.
I had a bunch of old text documents that I wanted to reformat. I asked Chat how to do this, and it wrote a Python script for me and told me how to run it. I’d never programmed in Python before in my life. I had to go back and forth a few times, but the script worked, and Chat saved me dozens or hundreds of hours of tedious labor.
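To give you a flavor, the sketch below shows the kind of script Chat produced. This is my simplified reconstruction for illustration, not the actual script, and the folder names are made up. It walks a folder of old .txt files, trims trailing whitespace, collapses runs of blank lines, and writes cleaned copies to a new folder:

    # Sketch of a ChatGPT-style reformatting script (my reconstruction,
    # not the actual script; folder names are hypothetical).
    from pathlib import Path

    source = Path("old_docs")    # folder of old text files
    dest = Path("reformatted")   # folder for the cleaned copies
    dest.mkdir(exist_ok=True)

    for src in source.glob("*.txt"):
        text = src.read_text(encoding="utf-8", errors="replace")
        cleaned, blank = [], False
        for line in text.splitlines():
            line = line.rstrip()      # trim trailing whitespace
            if line == "":
                if not blank:         # keep only one blank line in a row
                    cleaned.append("")
                blank = True
            else:
                cleaned.append(line)
                blank = False
        (dest / src.name).write_text("\n".join(cleaned) + "\n", encoding="utf-8")
        print(f"Reformatted {src.name}")

Even for a non-programmer, running something like this is just a matter of saving it to a file and typing python at the command line, which is roughly the hand-holding Chat gave me.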
My son has been preparing a science project for a fair sponsored by his homeschool enrichment program. He decided to see how different mixes of soil and sand affect radish seed sprouting. We ran his project write-up through Chat, and it gave him some good ideas for improvement. For example, where we had included photos of the sprouts on grid paper, Chat encouraged us to record, average, and present the measurements. So my son spent a few extra hours recording that data, and the project was much stronger for the effort. (In case you’re wondering, a mix of 75% sprouting soil and 25% sand worked best in this case.)
In homeschooling, I’ve asked ChatGPT to explain certain points of grammar and how to work an algebra problem. It’s performed well.
I also regularly use Chat for background research, as with this column. So I have personal experience of LLMs being useful when used responsibly.
Discount the Chicken Littles
“That terrifying future threat posed by AI? It’s already here.” So says a recent headline from the Independent. Even generally sober commentators such as Noah Smith are worried about certain scenarios.
But Cowen reminds us that other technologies have been extremely disruptive. Imagine how naysayers must have worried about human-guided fire. The printing press set off religious warfare and helped pave the way for the socialist and fascist genocides of the 20th century. But the fact that the printing press made possible the Communist Manifesto, Mao’s Little Red Book, and Mein Kampf doesn’t mean that humanity should have rejected the technology. We also got the scientific and industrial revolutions, Cowen points out. Plus, Cowen adds, do we really want China to take the lead in AI development?
I am convinced we are in the early stages of an extraordinarily disruptive time, even apart from all the political turmoil of the day. But the future is coming, whether we’re ready or not. I suggest we prepare for it.
Ari Armstrong writes regularly for Complete Colorado and is the author of books about Ayn Rand, Harry Potter, and classical liberalism. He can be reached at ari at ariarmstrong dot com.