Ontological Roots

Stanford PhD candidate Kevan Haghighi has shown how generative AI systems like ChatGPT visualize abstract ideas through culturally shaped metaphors. In one example, prompting for “a picture of a tree” produced images of trunks and branches but no roots; only when asked to depict interconnection did the model draw them. The experiment shows how AI amplifies certain conceptual defaults while leaving others out, exposing subtle ontological bias baked into its training data. If AI fails to picture roots without explicit cues, what deeper ideas might it be missing?

Read Stanford News →