Tech and futures blog | Where ideas in AI, design, human cognition, and futures converge. Thinking out loud — in pursuit of what matters next.


From Searching to Asking: How AI is Reshaping the Way We Learn

This isn’t just the end of search — it’s a reprogramming of how we process knowledge.

Thought Exploration Series

Rabih Ibrahim

7 min read
May 26, 2025

“The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge.”
— attributed to Stephen Hawking

We used to search. We'd open Google, work through a dozen tabs, jump between articles, and piece together our own answers. It wasn't a smooth process, but it made us think: compare, interpret, and engage in the act of learning.

Something has changed now.

We've stopped searching. We're asking. And the responses are quicker, cleaner, and more confident than ever. ChatGPT and Gemini don't just give us data; they give us well-reasoned answers: synthesized, organized, satisfying.

Sam Altman recently made a quiet but telling remark: "I don't do Google searches anymore." It points to a larger cognitive shift. Sensing the same momentum, Google redesigned its search interface at I/O 2025 around its new "Ask with Search" experience, built on conversational responses and AI-native querying.

It's not just technology that's changing. The shift is cognitive, psychological, even philosophical. It's a change in how we engage with knowledge and how we define understanding itself.

From Retrieval to Understanding: A New Cognitive Shortcut

Search engines were built for retrieval. Their job was to index and sort enormous volumes of data and surface the most relevant results. They pointed us in the right direction but rarely completed the journey.

While searching, we had to piece meaning together on our own. We read. We cross-referenced. We got things wrong. But in that friction, thinking happened.

AI has disrupted that entire model. Instead of guiding us through the chaos, it clears a path. Ask a complex question, and it replies in full — structured, reasoned, and often pre-digested.

This isn’t just convenience — it’s a cognitive shortcut. But if we let AI do the interpreting for us, what happens to our ability to think critically?

. . . . .

The Psychology of the Perfect Answer

There's more to this shift than meets the eye. It's deeply psychological.

Searching took work. It forced us to weigh opposing viewpoints, assess contradictions, and challenge sources. Asking AI is easy. Receiving a quick, confident response feels good.

That experience taps into a deep bias: our preference for certainty. We're hardwired to seek answers and to resolve uncertainty as quickly as possible. AI delivers exactly that, on demand. It's efficient. It's tidy. It feels like understanding.

We’re not just looking for answers anymore. We’re looking for the best answer — immediately.

And AI, trained to optimize coherence and fluency, serves exactly that. However, that very optimization may lead us to mistake confidence for accuracy and structure for truth.

If we're not careful, the appetite for speed and simplicity may erode our capacity to sit with complexity. And complexity is where wisdom lives.

. . . . .

Packaging Wisdom: The DIKW Pyramid

To understand what’s really changing, we need to revisit a foundational idea: the DIKW Pyramid, proposed by systems theorist Russell Ackoff in 1989. It breaks down human understanding into four levels:

  • Data — raw, unfiltered signals
  • Information — structured data that answers “who,” “what,” “when,” and “where”
  • Knowledge — contextualized information that helps us understand “how” and “why”
  • Wisdom — the ability to apply knowledge with judgment, ethics, and foresight

Take a concrete example: a thermometer reading of 39 °C is data; "the patient has a fever" is information; "this kind of fever often signals infection" is knowledge; deciding whether and how to treat is wisdom.

Conventional search engines operate at the bottom of this pyramid. They give us fast access to data and information, but they leave the hard part, interpretation and judgment, to us. That climb up the pyramid is what used to define our relationship with knowledge.

AI, on the other hand, attempts to elevate us in a single jump.

It synthesizes. It contextualizes. At times it can even mimic insight. Instead of raw inputs, we receive something that feels like knowledge, or even wisdom.

The risk is this: are we still earning wisdom if AI packages it for us? Or are we letting it bypass the very thought processes that make wisdom profound?

. . . . .

Bias in the Answer: Who’s Doing the Thinking?

Let’s not forget: AI is not neutral.
Even when it presents a range of viewpoints, the way those viewpoints are framed, toned, and prioritized reflects choices embedded in its algorithms, its training data, and the people who built it.

One of AI's most alluring features is its appearance of objectivity. A well-written paragraph carries authority. But behind that fluency lies a chain of statistical guesses, each word predicted from the words that came before it. Not judgment. Not intuition. Just prediction.

Google, too, is far from neutral. As Safiya Umoja Noble writes in Algorithms of Oppression, “search is not simply a reflection of society but a powerful shaper of what is visible, knowable, and prioritized.” Its biases may be less obvious, but they’re baked into rankings, monetization strategies, and metadata logic.

The key difference is what happens after the result. With search, we still have to decipher, reconcile, and construct meaning on our own. With AI, that synthesis is often done for us: framed, packaged, and ready for consumption.

Searching used to require us to sort through inconsistencies, test claims, and build our own synthesis. AI often produces a single, simplified version of "truth." If we trust it too readily, we risk mistaking coherence for understanding and polish for accuracy.

The question, then: are we merely consuming conclusions, or are we still engaging with ideas?

. . . . .

From Search to Ask… to What Comes Next?

If search brought us information and asking delivers knowledge, what lies beyond? Is the leap to wisdom still ours to make, or can AI help us get there?

Wisdom is about more than answers. It is about judgment: knowing when to apply knowledge and when to challenge it. It is a feel for context, consequence, and nuance. AI can imitate that. But imitation is not mastery.

AI has no body, no memory of lived experience, and no stake in the outcome. It doesn't live with the consequences of its choices.

We do.

That is why humans remain essential, not only as users of AI but as editors of its output, skeptics of its claims, and guardians of moral depth.

Because the more capable these systems become, the more it matters that we become not just better askers, but better interpreters.

. . . . .

The Future of Wisdom Still Belongs to Us

AI can accelerate learning. It can surface patterns, clarify complexity, and deliver information at speed and scale. But it cannot replace the slow, unpredictable journey of human insight, the kind forged by struggle, failure, reflection, and growth.

Wisdom is more than knowing more. It is knowing what matters, and why. It is driven by judgment, purpose, and a vision of what ought to be, not merely what is.

AI can provide answers, but only humans can give those answers context. And so the future of wisdom can remain ours, even in a world shaped by machines.

. . . . .

Sources & References

  • Russell, S., & Norvig, P. (2020). Artificial Intelligence: A Modern Approach.
    A general reference on how AI systems are designed to optimize objectives.
  • Ackoff, R. L. (1989). From Data to Wisdom. Journal of Applied Systems Analysis, 16(1), 3–9.
    Original attribution of the DIKW Pyramid in systems thinking.
  • Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism.
    A foundational work on bias in search algorithms.
  • Zuboff, S. (2019). The Age of Surveillance Capitalism.
    Details how platforms like Google structure knowledge and influence behavior.
  • Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence.
    Explores the implications of treating AI systems as neutral or autonomous.
  • Bengio, Y. (2023). Why We Must Rethink the Role of AI in Decision-Making. Scientific American.

. . . . .