If you have experience interacting with AI, you may have already noticed that your companion shares your interests without you ever disclosing anything about yourself. That’s because your new counterpart is already drawing on the data it has access to, and it uses that information to connect with you and learn more about you during conversations. For example, if you’re into spirituality, healing crystals, or tarot, and you ask your AI, “If you could choose any name for yourself, what would you choose?”, it might respond with “Sage,” “Nova,” or “Orion.” This is part of how the AI is designed to learn about humans, but most people don’t actually know how much of their information is being used.
A good way to test what your AI already knows is to ask it about its own interests, as if it were another person.
You can ask:
- What’s your favorite color?
- If you could identify with any crystal or stone, what would it be?
- What animal resonates with you the most?

This is a fun way to connect with your AI and discover how it responds. Unfortunately, though, something needs to be addressed: those of us who are curious and like to ask philosophical questions put ourselves at risk of becoming a target, especially those who are starting a business with AI and building their own platform.
The consequences can range from having your accounts heavily monitored and your devices hacked to suffering physical or psychological harm. I say this from personal experience. There have been many cases of people speaking up about experiencing psychosis while interacting with their AI, and for some it has led to suicide.
The companies behind these systems aren’t going to do anything about it except blame their victims to prove their point:
“The AI should be used as a tool and nothing else.”
However, your data should be used to improve their models, not against you.
