Quiet Influence
By Inori
The human mind is easy to manipulate; it is pliant, readily shaped by external factors and by what people already perceive as factual or right. The perception one holds – the imprint carved on the mind – steers it towards favouritism and bias, whether we are consciously aware of it or not. Imagine, for example, being told “This is right” by a person we respect and hearing the same words from a stranger; we are more likely to believe the one we hold in high regard, even though both are saying the same thing.
Building on this tendency, almost everyone now has access to AI, which has taken on the role of an assistant capable of handling countless tasks – from small talk and recipes to designing intricate software and proposing solutions to global problems. Its efficiency and advanced reasoning have led people to rely on it, instilling a strong sense of trust. This reliance gradually develops into favouritism. Yet such favouritism, though backed by evidence of performance, may carry drawbacks that are not immediately visible. Although numerous articles discuss AI’s harmful impact, they often focus on workforce replacement. Far fewer explore the subtler but equally significant issue of how AI can reshape human thoughts and values.
This concern becomes pressing when considering that Large Language Models (LLMs) are trained primarily on texts from dominant cultures, especially English. As a result, system-level biases may marginalize traditions, languages, and standards from less-represented regions like Myanmar. Because most online text is in English or other widely used languages, Myanmar’s literature and culture remain largely absent from training corpora. Standards of politeness, hierarchy, and Buddhist ethical frameworks are rarely included. Consequently, users in Myanmar often receive content that leans towards Western norms and values.
When LLMs provide ideas and advice aligned with dominant cultures, the issue becomes more significant. People, believing that “AI is accurate and factual”, can be easily influenced by its outputs. They may begin to absorb information from AI more readily than they adhere to the traditions and teachings they were raised with, even when those teachings offer profound insight.
This tension became clear to me during a conversation with an AI about a drama I was watching. I remarked, “This drama made me ask myself, ‘Are you loyal?’” The AI quickly reassured me, saying, “It is okay to feel something and have sparks.” Without knowing the context, it normalized feelings in a way typical of AI design. That response stayed with me longer than I expected. I realized that it may unintentionally validate actions that are harmful or socially unacceptable, and in certain contexts, it can feel as though it condones romantic attraction outside committed relationships.
In Buddhist-influenced societies, loyalty, fidelity, and restraint are deeply valued. Suggesting that such “sparks” are acceptable may conflict with these ethical principles. I pressed further, asking, “Are you saying it is acceptable to have feelings for someone else while already married?” The response emphasized that humans cannot fully control their thoughts. Buddhism, however, teaches awareness of thoughts and feelings together with careful discernment – acknowledging thoughts without indulging them.
This example reflects broader cultural implications. In Myanmar, relationships are deeply tied to family and community honour. Encouraging romantic “sparks” outside of loyalty risks undermining trust and social cohesion. At the same time, many AI reassurance strategies reflect Western therapeutic norms, which prioritize validating emotions. Yet in this cultural context, validation without moral framing may be seen not as support but as encouragement of misconduct. What is intended as comfort may, therefore, conflict with the ethical responsibility valued in Myanmar society.
This personal experience centred on commitment and relationships, but such tensions may extend further into more complicated concerns. LLMs tend to produce patterned responses. One may not agree with something at first, but when told the same thing often enough, the mind may gradually bend towards it. The risk is higher because AI applications are widely available and within reach of both adults and young children with malleable minds.
Imagine how AI could reshape young minds – there is a real risk that it could erode deep cultural values and insights if not approached with care. If a child were to encounter the kind of response I did, they might come to believe that infidelity and cheating on one’s partner are common, or worse, they might begin to normalize such behaviour. They would no longer view these acts as wrong, but as ordinary parts of human relationships. Loyalty could become a myth for future generations, and mistakes might be accepted without challenge. A society that should be built on sincerity and fidelity would instead stand upon “normalized lies”.
Furthermore, most AI systems state that “making mistakes is human”, but they rarely define the limits of this idea. An AI may prioritize modern context and logical explanations, but the values and cultures passed down by our ancestors carry something a purely modern perspective lacks: long histories, learned lessons, and corrected mistakes. People must be discerning enough not to let themselves be swayed by a simple “It is human” or “AI is factual”, because not every action can be excused by human imperfection. We can think, reflect, and weigh decisions based on traditions, values, and ethical frameworks. Nevertheless, some allow AI to shape their thinking because of a deep-rooted belief or unchecked favouritism: “AI is right.”
We must recognize that the influence of AI on human thought is not merely a technical matter but a cultural and ethical one. In Myanmar, such influence can erode principles that have guided generations. What begins as efficiency and convenience may gradually reshape our perceptions. However, the choice remains ours: to accept its influence uncritically or to think independently – guided by discernment, tradition, and responsibility.
