Critical thinking and evaluating AI-generated content

Artificial intelligence (AI), particularly generative models, is increasingly used to support learning, research, and everyday tasks. However, content produced by AI cannot automatically be regarded as reliable or accurate, so critical thinking and appropriate verification techniques are essential.

How to distinguish between valuable and misleading AI content:

  • Content consistency: Check whether the AI’s answer is logically coherent and internally consistent. If contradictions appear, caution is needed: a text can sound convincing even though the reasoning is flawed and one statement does not actually follow from another.

  • Plausibility: Explanations that are overly simplified, excessively detailed without justification, or lacking references should raise red flags.

  • Absence of proper citations: If the model provides references, always verify that the source truly exists and contains the information attributed to it. Sometimes the original source either does not exist or says something quite different from what the AI claims.

  • Relevance to the question: Evaluate whether the response is actually relevant to your question. If the answer strays into unrelated details, this is often a sign of “misleading” content.

The phenomenon of “hallucination”

As noted before, generative models often produce content that is grammatically correct and persuasive but factually inaccurate or entirely fabricated. This is known as “hallucination.”

  • For example, AI may cite a book or article that does not exist.
  • It may give a theory the correct name but attribute it to the wrong author or misrepresent its meaning.
  • It may “invent” logical-sounding connections that have no scientific basis.

Hallucination is not intentional deception, but rather a consequence of how these models work: they generate outputs based on probabilistic patterns, without direct access to actual facts, and without the cultural or contextual grounding that would support true understanding.
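
To see why this happens, consider the toy Python sketch below. It is purely illustrative (real language models are vastly more complex), but it captures the key point: the “model” chooses each next word by probability alone, and nothing in the process checks whether the resulting sentence is true.

    import random

    # Toy "language model": likelihoods of the next word given the current one.
    # The table encodes what is plausible, not what is true.
    transitions = {
        "Smith": [("(2019)", 0.7), ("(2021)", 0.3)],
        "(2019)": [("argues", 0.8), ("shows", 0.2)],
        "argues": [("that", 1.0)],
        "that": [("AI", 0.6), ("data", 0.4)],
    }

    def next_word(word):
        """Sample a continuation by probability alone; no fact-checking occurs."""
        candidates, weights = zip(*transitions[word])
        return random.choices(candidates, weights=weights)[0]

    sentence = ["Smith"]
    while sentence[-1] in transitions:
        sentence.append(next_word(sentence[-1]))

    # Can print "Smith (2019) argues that AI" even if no such study exists.
    print(" ".join(sentence))

A fluent-sounding citation can thus emerge from the statistics of text alone, which is exactly what makes hallucinated references so convincing.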

Examples of critical verification

Check references in databases: If AI cites an article or book, look it up in the university library catalogue or trusted databases (e.g., CAB, Scopus, Web of Science). For instance, if AI claims “Smith (2019) argues that AI is revolutionizing veterinary medicine,” verify whether this study exists and whether the author actually made this statement.
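
This check can also be partly automated. The sketch below is a minimal example, assuming Python with the third-party requests library; it queries the public Crossref API (the query.bibliographic parameter is documented by Crossref) and prints the closest matches, so you can see whether anything resembling the cited work exists.

    import requests

    def search_crossref(citation, rows=3):
        """Query the public Crossref API for works matching a citation string."""
        resp = requests.get(
            "https://api.crossref.org/works",
            params={"query.bibliographic": citation, "rows": rows},
            timeout=10,
        )
        resp.raise_for_status()
        # Print title, year, and DOI of the closest matches.
        for item in resp.json()["message"]["items"]:
            title = item.get("title", ["(no title)"])[0]
            year = item.get("issued", {}).get("date-parts", [[None]])[0][0]
            print(f"{title} ({year}) DOI: {item.get('DOI')}")

    # Hypothetical citation from the example above:
    search_crossref("Smith 2019 AI revolutionizing veterinary medicine")

An empty or clearly unrelated result list does not prove the work is fabricated (Crossref does not index everything), but it is a strong signal to check the library catalogue or ask a librarian.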

Cross-check facts in multiple reliable sources: If AI claims “45% of Hungarian students use AI tools daily,” check Statista or other credible surveys. If no such figure appears, do not accept the claim.

Test logical consistency: Consider whether the answer is logically sound. If AI states, “AI always both reduces and increases energy consumption,” this is contradictory. Ask clarifying questions and consult scholarly literature.

Analyze language and style: Be cautious if the text is overly general, overly confident, or simplistic. For example, “AI will solve all problems in the future” is exaggerated, unscientific, and unprovable.
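
A deliberately crude heuristic can make this concrete. The Python sketch below is an illustration only, not a validated detector: it merely flags absolute or unhedged wording (the word list is an illustrative assumption) that should prompt closer scrutiny.

    import re

    # Words and phrases that often signal overconfident, unhedged claims.
    # This list is illustrative, not a validated instrument.
    RED_FLAGS = ["always", "never", "all", "every", "guaranteed",
                 "undoubtedly", "will solve", "proves"]

    def flag_overconfidence(text):
        """Return the red-flag phrases found in the text (case-insensitive)."""
        return [w for w in RED_FLAGS
                if re.search(r"\b" + re.escape(w) + r"\b", text, re.IGNORECASE)]

    claim = "AI will solve all problems in the future."
    print(flag_overconfidence(claim))  # ['all', 'will solve']

A flagged phrase is only an invitation to look closer; overconfident style does not prove a claim false, and hedged style does not prove it true.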

Seek expert consultation: If an AI response is connected to an important professional or research decision, consult your supervisor or a subject expert. For example, if AI suggests a statistical method for a thesis, verify with a knowledgeable academic whether it is appropriate and correctly applied.

Three critical questions to always ask

  1. Does the referenced source actually exist? (If unsure, librarians can help you locate and check it.)
  2. Do other reliable sources support the claim? (Librarians can also guide you on where and how to search.)
  3. Is the content logically and academically sound? (If necessary, confirm with your supervisor.)