
The Care and Feeding of the Human Brain in the Age of AI

A Call for “Slow-Thinking”


By Malaika Cheney-Coker


A much-needed wake-up call or the provocations of an alarmist? Experts and AI pundits have been choosing these and other lenses to interpret Anthropic CEO Dario Amodei’s recent statement that AI could wipe out half of entry-level white-collar jobs within one to five years. But amid the swirl of opinions and the unknowns about how the technology will develop and be adopted, white-collar workers are still left largely on their own to figure out just how concerned they should be about their jobs.


Anyone who’s seriously experimented with generative AI—has given the tools not just prompts but counterprompts, asked them to assume perspectives or personas, and so on—has probably felt some shade of concern, as I have, about just how easily they can replicate humans’ higher-order thinking. But it gets worse. At some point in my own experimentation—maybe it was during the phase of giddy abundance induced by switching to a professional ChatGPT account with its unlimited queries—I began to notice something. Or rather, as I felt my own powers of ideation eroding just that tiny bit from repeatedly outsourcing problem-solving, I remembered something: the warnings of my teen-years math tutor as she cautioned us against overreliance on calculators.


It turns out my personal experience is shared. After decades of steady gains, there’s evidence we may have reached peak cognition as a species and are now in a period of decline, exacerbated by—yes—technology, including digital media and AI. But recognizing this threat is the first step toward reversing it, and toward using the AI revolution to expand, rather than stymie, human cognitive potential. Human cognition is malleable and, as such, there is room for improvement. Multiple studies and expert analyses, such as one showing startling declines in creativity across life stages, suggest that traditional school systems often suppress creativity through standardized testing, overcrowded curricula, and a focus on rote learning over creative exploration.


A Proposition for Expanding Human Abilities


The consequences of untapped creativity are real. As AI developers like to remind us, AI is a one-of-a-kind asset in creating new value—whether that means new jobs, new companies, or the augmentation of existing productive assets. But it will still take human cognition and creativity to detect and define new applications of AI within productive and non-productive (read: non-economic) domains of the human experience. So, here’s a proposition: instead of excessive handwringing over the speculative future capabilities of AI, we could focus on exploring all the rooms in the house of human cognition and creativity, both individual and collective. Upskilling in AI is a necessity nowadays, but we have to figure out how to do so while simultaneously expanding human abilities.


One way to do both is to use AI as a mental sparring partner: an ever-patient listening ear and debate-team buddy that doesn’t mind being shown up. So, challenge yourself to be better than the bot—at least where it counts. When about to query for, say, unconventional lampshades, try thinking of a few yourself. When I tried this, I came up with about eight ideas on my own, four of which also featured in ChatGPT’s list of 22. It took me several ponderous minutes to think of them, versus mere seconds for the bot. But my own “slow-think” process was a mental workout in simultaneously considering the purpose of these eclectic shades (aesthetics? function? statement piece?) and weighing the sentiment each idea induced in me. If going straight to the bot for ideas, challenge yourself to come up with something missing from the list, or a unique iteration of each shortlisted idea. This principle, along with critical analysis, can be applied to other uses of AI too. For example, while I used AI for a light edit of my (self-originated) initial draft of this piece, I then spent most, if not all, of the time it saved removing some of the deadwood it introduced.


How Organizations Can Safeguard Human Cognition and Creativity


Safeguarding cognitive potency should be a systemic effort. If organizations are serious about their statements that people are their greatest assets, then they should simultaneously stoke human cognition and creativity as they adopt AI. This might mean discouraging the use of AI for certain purposes and redesigning professional development to strengthen skills like strategic thinking, connecting the seemingly unconnected, and imagination–areas where humans still have the upper hand. It might require exposing techno-solutionist paradigms and complementing them with human rights frameworks—and spiritual values, in the case of faith-based institutions.


Organizations should also address the culture aspect of AI adoption. Most human workers will feel demoralized if they’re thought of as expensive, biologically needy alternatives to bots. But the human brain is still a marvel. It’s the interface to a being animated by desire, passion, love, disgust, the scent of cinnamon, the feel of hair, and a zillion other things. The myriad surfaces and sensors that are the human being add up not just to the creation of unique value but to the definition and redefinition of what “value” even is.


Imagining Jobs of the Future


One practical team exercise in defining value while exercising the part of the brain that dreams is a structured imagination of the jobs of the future. As AI wraps its tentacles around virtually every industry, the ensuing tug will be deeply destabilizing and will require governments to act decisively on issues such as potential mass unemployment, the environmental costs of AI, infringement of intellectual property rights, and more. But policy change is slow, and political will, particularly in the U.S., is frighteningly inadequate. Individuals and organizations can’t wait; they can ideate future jobs that shape the contours not only of the cognitive revolution upon us but of the environmental and socio-political ones as well. The point isn’t whether such imagined roles can be implemented now; it’s about prompting the mind to explore the possible and to harvest, from that imagined future, seeds that can be planted today.


When our team experimented with this exercise through our Food Jobs of 2050 Walkabout, we came up with ideas that, once plucked from the ether, linger in the present. For example, we imagined a food spiritual elder, an office-in-farm expert, a food style expert, and a food comic who hilariously calls out mis- and disinformation. These speculations create a kind of reverse memory, an ability to learn from the future that, in theory, will help us better connect concepts in the present. For all you know, they influenced this very idea: having learned as a society that fast food causes harm, a call for slow thinking might help us mitigate some of the harms of the cognitive revolution.


In folklore, a genie brings desire to fruition—the heart’s deep and cloistered wishes are brought into daylight. But the stories show that even with the limitation of three wishes, unbridled desire can lead to greed, self-destruction, and calamity. In the age of the cognitive genie, we must learn to use the new technology with both excitement and restraint. Our brains, and the future only they can make, are at stake. 


Malaika Cheney-Coker is the founder and principal of Ignited Word and the author of the novel, Creature of Air and Still Water.

