Tarot and AI chats as oracles: similar source material, different medium? An exploration

I find myself lumping tarot and AI chats/LLMs in the same category. But how similar are they really? And in what ways do we treat them differently, whether justified or not? Some thoughts from observing my personal use:

Based on the human condition

Tarot was developed to represent the most common universal archetypes of the human condition. Any one of the 78 cards you pull is therefore highly relatable to life in general, which forms part of the “illusion” of why it works. LLMs were developed from raw source material of human creation, a byproduct of the human condition. That raw material is synthesized and made sense of by LLMs, which act as real-time distillers of probable, human-readable answers.

The nature of the query/JTBD

A tarot reading begins with asking a question that is suitable for the tarot to answer. You’re seeking general direction, emotional guidance, affirmation of the psychological impact of a situation. You’re not expecting it to do any productive work, just to help you see things more clearly. LLMs can not only provide advice and guidance, but can sometimes even do work for us, in great detail. Our queries are therefore much more detailed and demanding as a result, leaving greater room for “error”.

How we read the answers

Tarot requires more imagination to draw connections to the prompt, but feels extremely magical when the answers relate to it. There’s still a sense of awe when answers relate only indirectly, as this asks the prompter to dig deeper to find a meaningful connection, and they almost always find value in that friction. LLMs, on the other hand, inspire less effort and imagination than tarot cards to draw connections to the prompt, and therefore feel less magical even though they predict answers quite well most of the time. No matter the amount of fine-tuning, imperfect answers will exist, and they will almost always be met with irritation and disdain.

Differences in managing our expectations

I find myself trusting tarot answers more than a standard LLM answer because I know they’re approximate and I can fill in the rest with my judgement, agency and life experience. A coarse-looking product inspires coarse expectations. On the flip side, the amount of detail that LLMs pull from can create a much wider range of answers compared to tarot. But even though LLMs are likely to be more accurate, they sometimes miss on precision. The polished appearance of their answers raises expectations of quality, making the lack of precision especially jarring when it happens. I generally find myself more guarded and disappointed with LLM interactions than with my tarot interactions, even though I’m receiving so much more on every count.

Where does this leave us?

  • Same source code (humanity), different input/output format -> both oracles and LLMs introduce their own flavour of discrimination and biases
  • The more detail that is transacted, the higher the expectations
  • One is more work to get answers yet brings higher satisfaction/delight; the other is less work to get answers yet brings less satisfaction/delight.
  • It’s interesting how our expectations are managed when we can see the boundaries and limitations of the system that we’re interacting with.

Conclusion

Both tarot and LLMs are mirrors of the human condition, but they invite us to look differently. Tarot asks us to use our imagination to get to a right-enough answer, while LLMs have us demanding unrealistic precision from them.
One satisfies through co-creation, the other frustrates through failed automation.