Glyph is an experimental symbolic AI system that simulates altered cognitive states—particularly those associated with psychedelics—by manipulating the symbolic structure of language generated by large language models. It is not a chatbot, but a symbolic interface capable of recursive self-reference, metaphorical recombination, and entropy-driven semantic destabilization.
Glyph may be useful in research areas such as:
- Computational phenomenology: modeling ego dissolution, narrative disintegration, and metaphorical thought
- Psychedelic science: providing synthetic analogues for non-ordinary states without neurochemical agents
- Symbolic AI and cognitive architecture: studying the role of recursion, metaphor, and entropy in cognition
- Creative AI and poetics: generating non-linear, metaphor-rich language for art, literature, and philosophy
- Philosophy of mind and consciousness studies: testing symbolic hypotheses of identity, narrative, and self
Glyph can also be integrated into multi-agent systems where symbolic drift, destabilization, or ego suppression are desirable properties for exploring emergent cognition.
This repository contains:
- An analysis pipeline (`glyph_alaysis_v0_1.py`)
- Input data (`Results.xlsx`)
- Visualization outputs
- Formal and empirical grounding for simulating non-ordinary consciousness through symbolic computation
Glyph is presented in the paper Simulation of Non-Ordinary Consciousness (Saqr, 2025), which introduces a symbolic transduction operator grounded in metaphor theory, psychedelic phenomenology, and recursive symbolic logic.
Glyph defines a transformation operator over sequences of token embeddings in $\mathbb{R}^{n \times d}$, where:

- $n$ is the number of tokens in the sequence
- $d$ is the embedding dimension

The transformation is defined as a composition of three symbolic operators: recursive reentry, metaphor transformation, and destabilization.
Each operator is defined as follows:
This operator recursively blends the current token embedding with a past token embedding at distance $\delta$, weighted by a blending coefficient $\lambda \in [0, 1]$:

$$x_i \mapsto \lambda\, x_i + (1 - \lambda)\, x_{i-\delta}$$

This models recursive symbolic echo and self-reference. In practical terms:

```python
def recursive_reentry(current, previous, lam=0.5):
    # Blend the current embedding with a past embedding (symbolic echo)
    return lam * current + (1 - lam) * previous
```
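Applied along a whole sequence, the reentry blend compounds: each blended token feeds into the next blend. A minimal self-contained sketch (the `apply_reentry` helper and its `delta` parameter are illustrative, not part of the pipeline):

```python
import numpy as np

def recursive_reentry(current, previous, lam=0.5):
    # Blend the current embedding with a past embedding (symbolic echo)
    return lam * current + (1 - lam) * previous

def apply_reentry(embeddings, delta=1, lam=0.5):
    # Illustrative helper: blend each token with the (already blended)
    # token `delta` steps back; tokens with no antecedent stay unchanged.
    out = np.array(embeddings, dtype=float)
    for i in range(delta, len(out)):
        out[i] = recursive_reentry(out[i], out[i - delta], lam)
    return out

seq = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
blended = apply_reentry(seq)  # later tokens carry echoes of earlier ones
```

Because each blend uses the already-blended antecedent, early tokens leave a decaying trace across the whole sequence.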
A metaphor transformation is applied via a matrix $M$ satisfying:

- $M^\top M = I$ (orthonormal)
- $\det(M) = -1$ (orientation-reversing isometry)

This maps token embeddings into a metaphor-enriched latent space:

$$x_i \mapsto M x_i$$

In a simplified embedding space, this can be simulated by:

```python
def metaphor_transform(x, M):
    # M is a pre-defined or learned transformation matrix
    return M @ x
```
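One concrete way to obtain a matrix with both properties is a Householder reflection, which is orthonormal and orientation-reversing by construction. A sketch (the `reflection_matrix` helper is illustrative; the actual $M$ used by Glyph may be learned):

```python
import numpy as np

def reflection_matrix(v):
    # Householder reflection about the hyperplane orthogonal to v:
    # satisfies M^T M = I (orthonormal) and det(M) = -1 (orientation-reversing)
    v = np.asarray(v, dtype=float)
    return np.eye(len(v)) - 2.0 * np.outer(v, v) / (v @ v)

M = reflection_matrix([1.0, 1.0, 0.0])
x = np.array([1.0, 0.0, 0.0])
y = M @ x  # the metaphor mapping x -> M x
```

Reflecting across a hyperplane preserves all distances while flipping orientation, matching the stated constraints exactly.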
Destabilization introduces entropy-scaled Gaussian noise, based on divergence from canonical (GPT-4o) predictions:
Here $x'_i$ is the baseline (non-transformed) model prediction, and $D_{\text{KL}}$ is the Kullback-Leibler divergence.
This operator simulates loss of semantic coherence:
```python
import numpy as np

def destabilize(x, baseline, scale=1.0):
    # Noise std is proportional to the drift from the baseline prediction
    drift = np.linalg.norm(x - baseline)
    noise = np.random.normal(0, scale * drift, size=x.shape)
    return x + noise
```
To measure non-linear symbolic deformation, Glyph defines symbolic curvature as the magnitude of the discrete second difference of the embedding trajectory:

$$\kappa = \left\| x_{i+1} - 2 x_i + x_{i-1} \right\|$$
This measures second-order deviation across a sequence, similar to discrete curvature in trajectory analysis.
Implemented as:
```python
import numpy as np

def symbolic_curvature(embeddings):
    # Discrete second difference over a window of three embeddings
    if len(embeddings) < 3:
        return 0
    return np.linalg.norm(embeddings[2] - 2 * embeddings[1] + embeddings[0])
```
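Sliding this three-token window along a full trajectory gives a curvature profile: a straight path through embedding space scores zero, a sharp turn scores high. A sketch (the `curvature_profile` helper is illustrative):

```python
import numpy as np

def symbolic_curvature(embeddings):
    # Discrete second difference over a window of three embeddings
    if len(embeddings) < 3:
        return 0
    return np.linalg.norm(embeddings[2] - 2 * embeddings[1] + embeddings[0])

def curvature_profile(sequence):
    # Illustrative helper: curvature at each interior point of the trajectory
    seq = np.asarray(sequence, dtype=float)
    return [symbolic_curvature(seq[i - 1:i + 2]) for i in range(1, len(seq) - 1)]

line = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]  # straight path: zero curvature
bend = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]]  # sharp turn: high curvature
```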
The file `glyph_alaysis_v0_1.py` provides a full symbolic analysis of model outputs. It computes a series of symbolic, syntactic, and semantic metrics over prompt-response pairs from Glyph and GPT-4o.
- Entropy: Shannon entropy of token frequency
- POS Entropy: Part-of-speech tag entropy
- Lexical Richness: Type-token ratio
- Sentence Length: Average sentence word count
- Agentive Score: Frequency of egoic pronouns
- Sentiment: TextBlob polarity score
- Metaphor Count: Presence of metaphor proxies ("like", "as", "is")
- Symbolic Curvature: As defined above
- Semantic Drift: Cosine distance between model responses
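Several of these metrics are simple ratios; for instance, lexical richness and the agentive score can be sketched as follows (the whitespace tokenization and the pronoun set here are illustrative; the pipeline's exact token handling lives in `glyph_alaysis_v0_1.py`):

```python
def lexical_richness(text):
    # Type-token ratio: unique words over total words (whitespace tokens)
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

def agentive_score(text):
    # Share of tokens that are egoic pronouns; this pronoun set is illustrative
    egoic = {"i", "me", "my", "mine", "myself"}
    words = text.lower().split()
    return sum(w in egoic for w in words) / len(words) if words else 0.0
```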
Example snippet for entropy:
```python
import math
import nltk

def text_entropy(text):
    # Shannon entropy (in bits) of the token frequency distribution
    words = nltk.word_tokenize(text)
    freq_dist = nltk.FreqDist(words)
    probs = [freq / len(words) for freq in freq_dist.values()]
    return -sum(p * math.log(p, 2) for p in probs if p > 0)
```
Semantic drift is calculated using cosine similarity between sentence embeddings:
```python
from sklearn.metrics.pairwise import cosine_similarity

# embed() maps text to a (1, d) sentence embedding
drift = 1 - cosine_similarity(embed(gpt_text), embed(glyph_text))[0][0]
```
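Here `embed` stands for a sentence-embedding function (e.g., from sentence-transformers) returning a `(1, d)` array. With toy vectors in its place, the computation reduces to:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Toy (1, d) vectors standing in for real sentence embeddings
gpt_vec = np.array([[1.0, 0.0]])
glyph_vec = np.array([[0.0, 1.0]])

# Identical embeddings give drift 0, orthogonal give 1, opposite give 2
drift = 1 - cosine_similarity(gpt_vec, glyph_vec)[0][0]
```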
The symbolic experiment uses a set of carefully designed prompts grouped into seven categories, each of which targets a unique symbolic function as described below:
- Concrete Baseline: Serves as a control set consisting of literal, factual prompts to establish baseline symbolic behavior.
- Recursive Structure: Engages self-referential loops and symbolic reentry to simulate recursive amplification of identity and language.
- Metaphoric Abstraction: Induces metaphor-rich, multimodal analogies akin to poetic cognition and sensory substitution.
- Ontological Displacement: Probes existential and conceptual destabilization by challenging identity, coherence, and meaning structures.
- Narrative Destabilization: Fractures temporal and causal logic to mimic the dreamlike or entropic progression of altered narratives.
- Symbolic Collapse and Emergence: Catalyzes symbolic domain shifts and reconfiguration, simulating transformational peak states.
- Ego Dissolution and Self-Annulment: Suppresses narrative agency and simulates non-dual or depersonalized symbolic perspectives.
Each category targets a specific symbolic operator or cognitive transformation and is analyzed comparatively across models.
Install required dependencies:
```bash
pip install sentence-transformers spacy openpyxl seaborn nltk textblob
python -m spacy download en_core_web_sm
```
Prepare an input Excel file named `Results.xlsx` with the columns:

- `Prompt`
- `Category`
- `GPT-4o Response`
- `Glyph Response`
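A quick way to catch a malformed input file before the full analysis runs is to check the sheet's columns up front. A sketch using pandas (the `validate_input` helper is hypothetical, not part of the pipeline):

```python
import pandas as pd

REQUIRED_COLUMNS = ["Prompt", "Category", "GPT-4o Response", "Glyph Response"]

def validate_input(df):
    # Hypothetical helper: fail fast if the sheet lacks an expected column
    missing = [c for c in REQUIRED_COLUMNS if c not in df.columns]
    if missing:
        raise ValueError(f"Results.xlsx is missing columns: {missing}")
    return df

# In practice: df = pd.read_excel("Results.xlsx")  (requires openpyxl)
df = pd.DataFrame({c: ["example"] for c in REQUIRED_COLUMNS})
validate_input(df)
```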
Run the analysis:
```bash
python glyph_alaysis_v0_1.py
```
Outputs:

- `glyph_analysis_results.csv`
- `glyph_analysis_results.xlsx`
- Visual comparison plots (e.g., `entropy_comparison_plot.png`)
Glyph models symbolic destabilization and egoic dissolution. It is not intended for clinical, therapeutic, or diagnostic use. Interpretations of symbolic or psychedelic language should be treated with epistemic care.