Why I Use ChatGPT to Tell Me Things I Already Know

I Know I Know That – But What Is It?

I use ChatGPT to tell me things I already know.

Why? If I already know something, why use ChatGPT to tell it to me? Chief among the many reasons: even when I “know” something, I often can’t recall it:

  • easily
  • quickly
  • accurately
  • completely
  • clearly
  • concisely
  • in a well-organized form

If you’ve ever struggled to remember the name of one of your favorite movies, or to write a concise summary of a topic that you know well, or to recall industry-standard terminology, then you know the difference between knowing something and being able to put that knowledge into polished written form quickly.

Google was recognized long ago as a kind of “external memory” that we could use to supplement our biological memory. So it shouldn’t be surprising that a tool like ChatGPT, which can not only search for information effectively but also synthesize information from multiple sources and describe the result in easily understandable natural language, would also be useful as a memory enhancer.

Used in this manner, as a means rather than an end in itself, ChatGPT can be an effective tool for generating high-quality content efficiently.

The Misleading Allure of Total Accuracy

Recognizing ChatGPT’s worth as a memory aid matters because it makes clear why some widespread critiques of ChatGPT (and of Large Language Models generally) aren’t as broadly applicable as they claim to be. The first of these critiques is that ChatGPT’s output can contain inaccuracies (sometimes called “hallucinations”).

The unstated premise behind many of the arguments about ChatGPT is that ChatGPT isn’t useful, and possibly shouldn’t even be used at all, if its output isn’t 100% accurate. I’ve heard many lawyers proclaim proudly that they would never use ChatGPT for any purpose because they heard of an example in which it produced a response that contained an error.

Although that lawyerly conservatism may be extreme, the hyperfocus on correctness is widespread, and it draws attention away from the fact that, in many situations in which we seek to recall information, we don’t need 100% accuracy or anything close to it. What we need instead is a good-enough memory jogger: something that tells us the gist of what we already know, clearly and concisely enough (and quickly enough) to refresh our memory and let us proceed with the task at hand more easily than if we had relied on our unaided minds alone.

Make no mistake: accuracy is important. But as I describe below, the use case to which we apply ChatGPT, combined with our own knowledge of the subject matter, dictates the level of accuracy required for a given task.

The Sliding Accuracy Scale

The reason so many arguments against ChatGPT rely on examples from medicine and law is that those are contexts in which information must be highly accurate because even minor errors can have catastrophic consequences. In those cases, relying solely on an answer from ChatGPT isn’t prudent.

But not all situations are analogous to asking your doctor whether you have cancer. In a wide range of situations, we can tolerate information that is merely in the right ballpark.

Take a topic I know a fair bit about: writing source code in C. I did a lot of that in my youth but haven’t done much in recent years. If I want a refresher on how to write a loop in C, either Google or ChatGPT is a reasonable place to turn. I might choose ChatGPT over Google if what I want is a survey of the options for writing a particular kind of loop, since ChatGPT is more capable of producing a natural-language answer tailored to my specific prompt.
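
For instance, here is roughly the refresher I’d hope to get back if I asked for the options for looping in C. This is a minimal sketch of my own, not actual ChatGPT output, but any reasonable answer would cover these three forms:

    #include <stdio.h>

    int main(void) {
        /* for loop: idiomatic when the iteration count is known up front */
        for (int i = 0; i < 3; i++) {
            printf("for: %d\n", i);
        }

        /* while loop: tests its condition before each iteration */
        int j = 0;
        while (j < 3) {
            printf("while: %d\n", j);
            j++;
        }

        /* do-while loop: always executes its body at least once */
        int k = 0;
        do {
            printf("do-while: %d\n", k);
            k++;
        } while (k < 3);

        return 0;
    }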

The widespread concern about inaccuracy isn’t particularly applicable in my programming example above because I know the topic well enough to be confident that I will spot significant errors and omissions in ChatGPT’s output. This is similar to how I would treat an answer from a human research assistant whom I had tasked with answering the same question. I would be much more hesitant to rely on ChatGPT’s answers about topics I know nothing about, or in high-stakes situations.

Another reason I’m comfortable asking ChatGPT about programming in C is that information on the topic was widely available on the internet as of ChatGPT’s training cutoff, so I can be confident that its answers on the topic will be “well-informed.” I have much less confidence in ChatGPT’s answers about topics that are obscure or that aren’t widely described on the public internet. In this sense, our estimate of how widely accurate information on a topic is available, and therefore how likely that information was to appear in ChatGPT’s training data, should factor into our decision to use the technology for a given task.

Furthermore, the degree of accuracy required ratchets up with the stakes of your inquiry. If you’re asking out of personal interest, or to share information with a friend, you can tolerate a higher error rate than if you’re asking for medical information you will rely on to decide how to treat a newly diagnosed illness.

Many of the arguments that ChatGPT’s inaccuracies render it unusable seem to assume that the humans who receive its output will have neither the ability nor the inclination to spot and fix errors. That concern is fair enough, but it applies just as much to existing search engines and to other sources of information, including books. The main differences with ChatGPT are that it doesn’t cite its sources and that it makes even blatantly false statements with the same utter confidence as true ones. That does mean we need to be particularly careful when reviewing and relying on the answers we get from ChatGPT. (Note that requesting citations to ChatGPT’s sources, such as URLs and citations to legal cases, is known to frequently produce completely false “hallucinations.”)

Responsible LLM Use

What these examples make clear is that we must use ChatGPT mindfully and responsibly. For example, the situations I have in mind involve:

  • refreshing your memory;
  • about topics you know well enough to spot basic errors; and
  • in non-safety-critical situations.

This does require you to read whatever ChatGPT writes with a critical eye—the intellectual equivalent of defensive driving. Think of it as applying the adage, “trust, but verify.” If we do this, the concerns about accuracy are largely addressed. The same has always been true when turning to any external memory aid, whether human, written, or machine.

Furthermore, the potential for inaccuracy shouldn’t dictate an all-or-nothing approach to the technology. To use an analogy: a car is great for driving on roads, but you wouldn’t use it to get from one room of your house to another. That limitation doesn’t cause you to stop driving your car. It just means that you know not to try driving it through your house.

Novelty Is Overrated

The other argument I hear most often against ChatGPT is that it can’t come up with anything truly new because it can only generate answers based on its training data. I understand, and share, the curiosity about whether ChatGPT and other software can generate information that is “new” in some sense. But everything I’ve said above makes clear that we often need to retrieve old information, and that such retrieval is particularly valuable when the output is clear, succinct, relevant to our query, and generated quickly.

Novelty is my stock in trade as a patent lawyer. I focus so much on the potential novelty of the technologies that come across my desk that it can be easy to forget how much time and energy we put into merely recalling information that already exists, even information that we already know. Much of the public debate about the merits of ChatGPT suffers from this same over-emphasis on novelty. Let’s remember the value of keeping alive — and making readily accessible — what is already known.

A More Balanced Approach

Critiques of ChatGPT that focus solely on one of its features—such as the degree of accuracy of its output—fail to recognize that, whether we are aware of it or not, we are often engaged in a kind of multi-objective optimization when we seek information. Those objectives include:

  • accuracy of information retrieved;
  • relevance of information retrieved to our query;
  • clarity of the retrieved information;
  • concision of the retrieved information;
  • the cost of retrieval, which, in the case of ChatGPT, is measured primarily in the time required to generate a response (because, at least for now, OpenAI and its investors are shouldering most of the financial cost of generating that response).

What the “ChatGPT makes mistakes and therefore should never be used” crowd fails to recognize is that it’s often rational to assign different weights to the objectives above, and that different combinations of weights make sense in different situations. For example, if all I want is a brief refresher on a topic five minutes before meeting with an expert who will be collaborating with me, I might assign a much higher weight to speed than to accuracy. If I’m producing text for a marketing email, I might assign a very high weight to clarity and concision (and to another objective—persuasiveness) and a lower weight to speed.
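
To make the weighting idea concrete, here’s a toy sketch in C. The objective names, numeric ratings, and weights are all hypothetical, invented purely for illustration; they don’t reflect any real API or real measurements:

    #include <stdio.h>

    /* Toy model: score an answer as a weighted sum of objectives.
       The objectives and all numbers below are illustrative only. */
    typedef struct {
        double accuracy;
        double relevance;
        double clarity;
        double concision;
        double speed;      /* higher = faster retrieval */
    } Objectives;

    double score(Objectives x, Objectives w) {
        return w.accuracy  * x.accuracy
             + w.relevance * x.relevance
             + w.clarity   * x.clarity
             + w.concision * x.concision
             + w.speed     * x.speed;
    }

    int main(void) {
        /* Hypothetical ratings (0.0 - 1.0) for one ChatGPT answer. */
        Objectives answer = { .accuracy = 0.8, .relevance = 0.9,
                              .clarity = 0.9, .concision = 0.8, .speed = 1.0 };

        /* Quick pre-meeting refresher: speed outweighs accuracy. */
        Objectives refresher = { .accuracy = 0.1, .relevance = 0.2,
                                 .clarity = 0.2, .concision = 0.2, .speed = 0.3 };

        /* Marketing copy: clarity and concision dominate; speed matters less. */
        Objectives marketing = { .accuracy = 0.15, .relevance = 0.2,
                                 .clarity = 0.3, .concision = 0.3, .speed = 0.05 };

        printf("refresher score: %.2f\n", score(answer, refresher));
        printf("marketing score: %.2f\n", score(answer, marketing));
        return 0;
    }

The point is simply that the same answer can score well under one set of weights and poorly under another; there is no single verdict on whether the answer is “good enough.”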

Viewing ChatGPT in this way makes clearer how it can be very valuable even if it only does a “pretty good” job at helping you to remember information that you already know.

And I’ll make an even more radical claim: even if you assign a lower weight to the accuracy of ChatGPT’s output, wise use of ChatGPT as a tool can enable the final output you produce to be just as accurate as—if not more accurate than—the output you could have produced on your own. For example, assume:

  • Writing a document on a particular topic with 95% accuracy entirely manually takes you 1 hour.
  • Instead, you:
    • Use ChatGPT to produce a draft with 80% accuracy in 2 minutes (including the time it takes you to write the prompt).
    • Spend 20 minutes revising the document to 95% accuracy.
    • Result: 22 minutes spent producing a document with 95% accuracy.
  • Or you:
    • Use ChatGPT to produce a draft with 80% accuracy in 2 minutes (including the time it takes you to write the prompt).
    • Spend 58 minutes revising the document to 98% accuracy.
    • Result: 60 minutes spent producing a document with 98% accuracy.

In one case, you spent less time to produce a document as accurate as the one you could have produced by yourself without ChatGPT. In the other, you spent the same amount of time to produce a document more accurate than the one you could have produced on your own.
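
If it helps to see the bookkeeping spelled out, here is the same comparison as a trivial calculation, using the hypothetical numbers from the list above:

    #include <stdio.h>

    int main(void) {
        /* Baseline: fully manual drafting (hypothetical numbers). */
        int manual_minutes = 60;
        double manual_accuracy = 0.95;

        /* Path 1: ChatGPT draft (2 min) + 20 min of revision. */
        int path1_minutes = 2 + 20;
        double path1_accuracy = 0.95;

        /* Path 2: ChatGPT draft (2 min) + 58 min of revision. */
        int path2_minutes = 2 + 58;
        double path2_accuracy = 0.98;

        printf("manual: %d min at %.0f%%\n", manual_minutes, manual_accuracy * 100);
        printf("path 1: %d min at %.0f%% (saves %d min)\n",
               path1_minutes, path1_accuracy * 100, manual_minutes - path1_minutes);
        printf("path 2: %d min at %.0f%% (+%.0f accuracy points)\n",
               path2_minutes, path2_accuracy * 100,
               (path2_accuracy - manual_accuracy) * 100);
        return 0;
    }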

None of this should be surprising or radical. It’s the same thing we do when we collaborate with other humans or delegate tasks to them, such as researching a question and writing a memo that helps us draft a document. We all acknowledge that such a process, even when it involves only humans, can save us time without sacrificing accuracy (or even while increasing it), even if the inputs we receive from other people contain errors. Why would we think the same is impossible when software is the tool in the writing process, or that such software must produce perfectly accurate output in order to be useful?

Don’t Believe the Counter-Hype

Much has been said about the hype behind AI in general and ChatGPT in particular, and the red flags being raised about the current AI hype cycle are valid.

However, I think many of the common criticisms of ChatGPT are themselves hype—remember that “hype” stems from “hyperbole”—because they exaggerate valid concerns, such as those about accuracy and novelty. That hyperbole then elicits a reaction in kind, producing a mode of dialogue in which each side tries to counter the other’s exaggerations. The unstated premise of these debates is that the value of a tool such as ChatGPT hinges on which side is correct.

What I’ve tried to point out is that a tool such as ChatGPT can have significant value regardless of the outcomes of the debates over accuracy and novelty (and other features). For example, even if I concede that ChatGPT’s output can contain factual errors, that doesn’t affect the value of ChatGPT as a memory-jogging tool as long as the nature and extent of those errors stays within reasonable bounds in light of the purpose for which ChatGPT is being used.

To get the maximum benefit from ChatGPT, we need to be aware of the tool’s strengths and weaknesses and, just as importantly, of our own goals for using it in a particular case and the importance we assign to each of those goals. Then we need to evaluate ChatGPT’s output in light of those weighted goals. If we do this, staying mindful and flexible in making strategic use of ChatGPT in the circumstances where it can add value, we’ll avoid being sucked into the hype/counter-hype cycle and reap real rewards from an incredible tool.
