To academics and those associated with higher education, artificial intelligence can seem a little redundant. After all, who needs machine learning when the most finely tuned machines live between our ears and run on coffee? In reality, though, no matter how beautiful a person’s mind may be, artificial intelligence will always have utility for those who are smart enough to use it. And while AI isn’t going to replace your job in higher education, there are plenty of ways to leverage it to make you even better at doing your job.
ChatGPT in Higher Education
We’ll start with that elephant in the room: ChatGPT. It’s the white whale, the Big Kahuna, the pipe dream of AI turned reality: a large language model (LLM) that convincingly recreates authentic human expression. Of course, it does more than that–it can program apps, write formulas, and answer questions–but its ability to produce sophisticated written pieces is certainly one of its greatest strengths (and concerns) if you’re in education.
First, how is this kind of output even possible for artificial intelligence? ChatGPT is essentially the productized version of OpenAI’s GPT-3.5, a refinement of its GPT-3 large language model, which uses deep learning and other techniques to learn accurate and natural language use. It was trained on hundreds of billions of words of text, with 175 billion parameters, and plenty of reinforcement learning from human feedback to ensure that it’s as accurate, smart, and authentic-sounding as possible.
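To see the core idea in miniature: at bottom, a language model learns which words tend to follow which in its training text, then generates new text by repeatedly predicting a likely next word. The toy sketch below (purely illustrative–nothing like OpenAI’s actual training pipeline, and millions of times smaller) shows that mechanic with a simple bigram model.

```python
import random
from collections import defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count which word follows which word in the training text."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        counts[current][following] += 1
    return counts

def generate(counts: dict, start: str, length: int = 5, seed: int = 0) -> str:
    """Generate text by sampling a statistically likely next word at each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:  # no known continuation; stop early
            break
        choices = list(followers)
        weights = [followers[w] for w in choices]
        out.append(rng.choices(choices, weights=weights)[0])
    return " ".join(out)

corpus = "the model predicts the next word and the next word after that"
counts = train_bigrams(corpus)
print(generate(counts, "the"))
```

A real LLM replaces these word-pair counts with billions of learned parameters and looks at far more context than the previous word, but the generative loop–predict a likely continuation, append it, repeat–is the same in spirit.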
That being said, there’s plenty that ChatGPT doesn’t do–or at least doesn’t do very well–which means job security for our friends in higher education.
Discerning the Truth
Academics are, at their core, concerned with truth above all else–which makes ChatGPT a liability. It generates text from statistical patterns learned across a massive body of writing, but ChatGPT can’t actually make judgment calls–it can’t separate fact from fiction. For that reason, ChatGPT is liable to spread inaccurate information.
For instance, when I asked ChatGPT about its own learning, the program actually failed to effectively describe how it was taught. It specifically claimed that it was not trained through reinforcement learning with human feedback (RLHF). This would’ve been fine, except that RLHF was a key strategy for training ChatGPT’s LLM.
Of course, once I reminded ChatGPT that it learned this way, it remembered and described its training process. But if I hadn’t already known this, I couldn’t have extracted the right response.
In academia this is obviously an issue. Your work is interrogated by peers, edited for journal submission, and cited for its legitimacy–so it better be accurate…at least more accurate than ChatGPT. ChatGPT is even known to fabricate quotes and misattribute them. For academics, that’s nightmare fuel…but it’s also an absolute check in the box for job security.
ChatGPT Is Not Up-to-Date
It takes a loooong time to train an LLM on its huge data sets. Long enough that, by the time the training is over, the world has changed significantly. ChatGPT can deliver massive amounts of information, but its datasets only run up to 2021, so it can’t really speak on anything that’s happened in the last two years.
Also, from a generative content standpoint, it’s basically a super-powerful word organizer. That means it can structure and communicate information based on what it already knows, but it can’t generate information on a new topic. For instance, it can’t report on the latest current events and it also can’t–here’s the big one–explain a new concept or advancement. That is still firmly the intellectual real estate of academics, whose job it is to make scientific and conceptual breakthroughs that ChatGPT can eventually talk about…once it updates its training, of course.
ChatGPT Is Formulaic
ChatGPT is an incredibly sophisticated formula–but it’s still a formula. It lives by a set of extensive, but ultimately conventional, rules that keep it on track. For this reason its voice, communication structure, and overall style come off as stilted and, well, formulaic. Humans, conversely, can recognize conventions, toy with them, find nuances within them, and break them. It is, indeed, this impulse and ability to break from convention that inspires artistic and intellectual advances. It’s also what separates us from ChatGPT.
ChatGPT, for instance, can write a fun little rhyming ditty. But ask it to write a free verse poem–with no established rhyme scheme and meter–and it doesn’t even know where to start. So it’s limited, and artificial intelligence will likely always be limited in the sense that it will always follow conventions set by its human designers and trainers.
Fortunately, the human brain is more elastic, inspired, and discerning. That’s one of the reasons that good professors and teachers, who truly value those intrinsic traits of good writing–voice, tone, stylistic verve–shouldn’t be too concerned with ChatGPT’s ability to instantly churn out C+ essays for their students. Its constrained style should set off alarm bells for anyone with their ChatGPT radar on. And if you’re still concerned about getting duped by the AI, there are apps in production that will help you filter out those AI-generated answers.
So, it's limited, but that doesn’t mean it isn’t useful. ChatGPT, like other forms of AI, is a wonderful tool. So it’s best to use artificial intelligence to make your process more efficient and productive.
How to Use AI and ChatGPT for Your Own Work
AI and higher ed have enjoyed a comfortable working relationship for decades, and if teachers or professors aren’t using it, they’re kinda missing out. AI can be leveraged to help with grading, responding to student requests, even tutoring (although that might not quite be ready to operate at scale). AI can create lesson plans, and even serve differentiated content to students based on their learning styles and preferences. What’s more, AI tools can sort through information about your students and their performance to identify trends instructors can use to create better lesson plans and assignments.
Most of these tools operate independently of LLMs, some reaching all the way back into the good old-fashioned AI vault to run on discrete expert systems. In this way they’re often elegant, user-friendly, and easy to operate. As more expansive tools with correspondingly broad utility, LLMs like ChatGPT have myriad applications in the education space.
No, that doesn’t mean ChatGPT should perform writing assignments for students, but, frankly, it’s a great way to brainstorm writing strategies. Students can workshop introductions within the interface, and can play with different tones to see how that affects style. And as a conversational reference, it offers plenty of information that gives insight into different writing tasks. For instance, students writing persuasive essays can use ChatGPT to explore Aristotle’s different rhetorical strategies. And answers are delivered in a conversational manner that’s accessible to students.
ChatGPT and AI in Higher Ed Marketing
Artificial intelligence has also had a long-standing place in marketing, from conversational chatbots, to automations, to data analysis. However, the content generation capabilities of ChatGPT and other large language models can help marketers become more efficient and effective. Halda, for instance, uses a natural language model to support tools that personalize the website content experience for institutions of higher ed. This allows marketers to take something that used to be incredibly time consuming–content personalization–and implement it at scale. It’s another way to leverage language models like ChatGPT for powerful communication.
If you're interested in all the ways AI can help you build a more personalized marketing strategy, we'd be happy to have a conversation. Halda can help you incorporate and optimize your AI usage to make the biggest impact for your team. You can even demo our tools in our personalization playground to preview how your website could provide the most tailored, AI-driven experience to your students.