
Does Gen AI help or hinder the evolution of our Intelligence?

I’ve just finished reading Max Bennett’s book “A Brief History of Intelligence”. It traces the origins of intelligence from microbial organisms to the present day and, later in the book, draws parallels to AI.

It defines five key breakthroughs, the most recent of which is Speaking/Language. At one point, Bennett argues that a key aspect of how we’ve become so successful as a species is how we transfer information. Written language is central to this: it allows us to transmit an almost unlimited amount of information across populations and generations, so that we can build on and extend that knowledge.

It got me thinking about the current generation of LLMs and whether they are positive or negative in that respect. They allow easy access to pretty much all of humanity’s available information, but we already had that through search engines. What they add is synthesising information, defined as: “pulling together ideas from multiple sources, perspectives, or domains and presenting them in a coherent, context-aware way.”

This is new and arguably the greatest strength of an LLM. Synthesising information means that LLMs can:

  • Summarise complex topics

  • Compare viewpoints

  • Explain nuanced ideas in simple terms

  • Connect dots across disciplines

So not only do we now have access to all the information in the world, but we can also interrogate it instantly, in ways that previously could take weeks of research.

But this is also where the danger lies. If transmission of information is a key aspect of how we’ve developed as a species, then how we synthesise information is just as important.


Synthesising truth

Whole industries were built up around Search Engine Optimisation, because gaming a search ranking was both possible to do and possible to understand. How do you hustle for a good mention in ChatGPT? For that matter, how do you make an LLM behave the way you want even if you have full control over it? Grok as Mecha-Hitler, ChatGPT as sycophant and Claude as blackmailer are just the most recent demonstrations that we put our trust in LLMs, and the companies behind them, at our peril.


Do LLMs expand or contract knowledge?

Then we have the concept of model collapse, where a model's outputs become less diverse, less accurate, or overly repetitive over time. If the mechanism we use for synthesising information reinforces whatever is most prevalent in its training data, that reinforcement only gets stronger each time the model is retrained, especially on its own output. If we allow that to happen, LLMs will contract our knowledge, not expand it. They would actually prevent the transmission of information, not help it.
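The dynamic is easy to see in miniature. The following is a toy sketch, not how LLM training actually works: start with ten equally common "facts", and at each generation re-estimate the distribution purely from a finite sample of the previous generation's output. Once a rare idea fails to appear in a sample, it is gone for good, so diversity can only shrink.

```python
import random

def retrain_on_own_output(probs, n_samples):
    """Sample from the current distribution, then re-estimate it from
    that sample alone -- a toy stand-in for training on model output."""
    draws = random.choices(range(len(probs)), weights=probs, k=n_samples)
    return [draws.count(i) / n_samples for i in range(len(probs))]

random.seed(0)
probs = [0.1] * 10                      # ten equally common "facts"
survivors = []
for generation in range(200):
    probs = retrain_on_own_output(probs, n_samples=30)
    # count how many distinct "facts" still have non-zero probability
    survivors.append(sum(1 for p in probs if p > 0))

# A fact that drops to zero probability can never be sampled again,
# so the count of surviving facts never goes back up.
print(survivors[0], "->", survivors[-1])
```

The sample size and generation count here are arbitrary; the point is only that the surviving-fact count is monotonically non-increasing, which is the contraction worry in its simplest form.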

Taking these two aspects together, it is at least possible that the prevalent use of LLMs is not only contracting our knowledge, but also hiding the logic of what is presented to us inside an arcane black box. I’m not sure about you, but I go on constant fact-checking quests to ensure the information presented to me by LLMs is hallucination-free and based on reputable sources.


Does Generative AI Help or Hinder the Evolution of our Intelligence?

Will AI in its current GenAI/Chatbot incarnation increase the average IQ in any given room? I'd argue that is unlikely. People take shortcuts. People want to seem smarter than they are. People will use whatever is available to them to achieve a goal.

Thinking is hard work! Just as lifting rocks was hard work: as soon as we came up with machinery that did it for us, we were quite happy to never use those rock-lifting muscles again if we could get away with it. The same will happen with our brains. We will hand over synthesising information, creative thinking and critical thinking to whatever tool we perceive to be better at them.

But these are key aspects of developing our intelligence. The question is, once AI is so smart it does all the thinking for us, will we even miss it?




© 2024 by Mikael Svanström