This case study examines ChatGPT's citation behavior across 50 structured queries spanning technical, academic, and general knowledge domains.
When citations were provided, approximately 70% linked to legitimate sources that contained relevant information. However, the remaining 30% exhibited various issues including broken links, incorrect attributions, or tangentially related content—a form of hallucination where the system generates plausible but inaccurate source references.
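Broken links, one of the issue types above, can be screened automatically before any manual review of attribution or relevance. The sketch below is a minimal, hypothetical checker (not part of the study's methodology): it performs a cheap structural check, then an HTTP HEAD request. A live link is necessary but not sufficient; the page still has to be read to confirm it supports the claim.

```python
from urllib.parse import urlparse
import urllib.request
import urllib.error

def looks_like_url(text: str) -> bool:
    """Cheap structural check before any network call."""
    parts = urlparse(text)
    return parts.scheme in ("http", "https") and bool(parts.netloc)

def link_resolves(url: str, timeout: float = 5.0) -> bool:
    """Return True if the cited URL answers with a non-error status.

    Only catches dead links; incorrect attributions and tangentially
    related content still require reading the source.
    """
    if not looks_like_url(url):
        return False
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, ValueError):
        return False
```

Some servers reject HEAD requests, so a production version would fall back to a ranged GET; this sketch keeps the happy path only.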
Technical queries involving programming documentation showed higher citation accuracy than queries about recent events or rapidly changing fields, likely because stable, well-established sources are both heavily represented in training data and easier for the system to attribute correctly.
Explicit requests for citations produced more consistent results than implicit expectations of source attribution.
Clear, specific queries about established topics with well-documented sources produced the most reliable citations.
Queries about recent events, niche topics, or content requiring synthesis across multiple sources showed the highest rates of citation issues.
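Comparisons like the ones above reduce to tallying issue rates per query category. A minimal sketch, assuming a hypothetical record format (the field names and sample values are illustrative, not the study's data):

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class CitationResult:
    category: str   # e.g. "technical", "recent-events"
    ok: bool        # True if the citation resolved and matched the claim

def issue_rate_by_category(results):
    """Map each category to its fraction of problematic citations."""
    totals, issues = Counter(), Counter()
    for r in results:
        totals[r.category] += 1
        if not r.ok:
            issues[r.category] += 1
    return {c: issues[c] / totals[c] for c in totals}

# Illustrative records only:
sample = [
    CitationResult("technical", True),
    CitationResult("technical", True),
    CitationResult("recent-events", False),
    CitationResult("recent-events", True),
]
# issue_rate_by_category(sample) -> {"technical": 0.0, "recent-events": 0.5}
```

With real verification results, the per-category rates make the technical-versus-recent-events gap directly comparable.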
ChatGPT's citation capabilities vary significantly by domain and query type. Users should verify all citations independently, particularly for recent or specialized content.