
Thought Outsourcing

Published: 2024-02-29

We live in a world where we brainstorm new ideas with LLMs and visualize our most ambitious movie ideas with Diffusion Models. Initially, this sounds great. You save time by letting the LLM give you "top 10 Halloween costumes that are not cliché," and maybe also a prompt like "generate me in this Perry the Platypus onesie." All works like a charm, in my opinion (not that I've asked an LLM these things before). Recently, however, I unconsciously went to an LLM first to ask for a different approach to a math problem. Without a thought, I typed in the problem and asked for the solution. I realized I hadn't even tried to solve it with my own cognitive effort. It has basically become a subconscious response whenever I face a question I don't have an immediate answer to.

This is kinda terrifying, in my opinion. I wasn't even feeling lazy at that moment. I was just way too dependent on this tool; my brain is now wired to ask the LLM first. This can be explained by how convenient querying an LLM is. I spend a few seconds typing a prompt and receive an answer, with an explanation, in no time. It's like always having an answer sheet, or a highly qualified and obedient tutor, next to you at all times. You are not forced to use the answer sheet or ask the tutor questions, but they're just so enticing. I believe all of our brains are lazy, always craving the correct solution in the shortest time. I know this does not apply to everyone, but I think everyone experiences these temptations. It is similar to your mom asking you to clean the floors while a Roomba sits right next to you, only a button away from cleaning the whole house. I know you would press that button. You could also compare it to having dozens of dirty dishes and being only a few clicks away from starting the dishwasher.

I know these examples are not perfectly analogous to outsourcing your cognition, but they perfectly describe the temptations that LLMs create. Of course, there are some—no, a lot—of questions that these language models can't solve at the moment, and you'll have to rely on your good old neurons to solve them. Nevertheless, when these models improve even further and those once-impossible questions become child's play for the LLM, would you press the 'enter' button to send your prompt away?

I think 'Thought Outsourcing' is real, and it is only going to become more relevant as these models improve in reasoning as well as tool use. What would happen when these models can simply devise a plan to research new algorithms, search the whole internet for references and inspiration, and start experimenting and iterating on their own? Would we even need researchers anymore? Would our 'biological' or 'human' thought have any value anymore? For now, we should probably just use this 'thought' service with more caution and intention. Before writing that first prompt for an essay on the Roman Empire, ask yourself: 'Should I write this one about "..." or "..."? What are the different sections I should mention? Maybe I should write the outline first?' Eventually, you'll craft the essay yourself without AI help, or with just a little help fixing your terrible grammar, since you've been using Grammarly for 5 years.

You will see 'Thought Outsourcing' in more fields like programming, copywriting, and design as LLMs and Diffusion Models keep improving at their current pace. We'll have to start questioning our education system and societal values in general all over again as the line between a white-collar employee and an LLM gets blurrier and blurrier.

In the end, are we outsourcing our thoughts so we can use them for more important concerns, or are we actually replacing them?