SymbioticLM-8B

Model Type: Hybrid Symbolic–Transformer
Base Model: Qwen3-8B
Framework: PyTorch + Transformers-compatible
Purpose: Long-memory symbolic reasoning + high-fidelity language generation


Overview

SymbioticLM-8B is a state-of-the-art hybrid transformer model with built-in symbolic cognition. It combines an 8B Qwen-based transformer with modular symbolic processors and a persistent memory buffer. The model supports both general conversation and deep symbolic tasks such as theorem generation, logical chaining, and structured reasoning with retained memory across turns.
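
Since the card lists the model as Transformers-compatible, a minimal loading sketch follows. It assumes the repo id reaperdoesntknow/Symbiotic-8B from the model tree below; trust_remote_code=True is likewise an assumption, since the symbolic processors are custom modules.

```python
# Minimal usage sketch, assuming standard Transformers compatibility.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "reaperdoesntknow/Symbiotic-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# trust_remote_code=True is an assumption: the custom symbolic modules
# and memory buffer likely ship as remote code alongside the weights.
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# <THM> and <PROOF> are among the added special tokens (see Files Included).
prompt = "<THM> Every bounded monotone sequence of reals converges. <PROOF>"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```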


Architecture Highlights

  • Backbone: Qwen3-8B rotary transformer
  • Symbolic Dim: 4096
  • Symbolic Modules:
    • ThoughtDynamicsLNN (multi-head LSTM attention)
    • CrystallineProcessor (DNAConv GNN)
    • LiquidThoughtProcessor (recurrent symbol folding)
    • HelicalDNAProcessor (helical linear projection)
  • Memory: 2048 symbolic vectors (float32) with entropy-aware retrieval and contextual recall (a retrieval sketch follows this list)
  • Dream Mode: Self-generates symbolic cognition offline
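
The entropy-aware retrieval mentioned above is not published on this card, so the following is only a hypothetical sketch: cosine similarity against the 2048-slot buffer, down-weighted by each slot's Shannon entropy so that crystallized (low-entropy) memories rank higher. The 1/(1 + entropy) weighting is an assumption, not the model's actual scoring rule.

```python
import torch
import torch.nn.functional as F

def retrieve(memory: torch.Tensor, query: torch.Tensor, k: int = 8):
    """Hypothetical entropy-aware lookup over a (2048, 4096) float32 buffer."""
    sims = F.cosine_similarity(memory, query.unsqueeze(0), dim=-1)  # (2048,)
    probs = torch.softmax(memory, dim=-1)                           # per-slot distribution
    entropy = -(probs * probs.clamp_min(1e-9).log()).sum(dim=-1)    # Shannon entropy per slot
    scores = sims / (1.0 + entropy)     # assumed weighting: favor low-entropy slots
    top = scores.topk(k)
    return memory[top.indices], top.values

memory = torch.randn(2048, 4096)  # stand-in for the pretrained memory.pt buffer
vectors, scores = retrieve(memory, torch.randn(4096))
```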

Files Included

  • model.bin: PyTorch weights (LFS-tracked)
  • model.safetensors: the same weights in safetensors format (recommended)
  • memory.pt: symbolic memory snapshot (entropic, pretrained; inspection sketch below)
  • config.json: base model configuration
  • generation_config.json: sampling and decoding settings (temperature, top_p, etc.)
  • tokenizer.json: tokenizer data with custom tags and structure
  • added_tokens.json: extra tokens such as <THM>, <PROOF>, <D_EPS>
  • special_tokens_map.json: maps for the special tokens used during generation
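
Because the on-disk schema of memory.pt is not documented here, the following inspection sketch only assumes it deserializes via torch.load to a tensor or a dict of tensors:

```python
import torch

# Inspect the symbolic memory snapshot; the schema is an assumption.
memory = torch.load("memory.pt", map_location="cpu")
if isinstance(memory, dict):
    for name, tensor in memory.items():
        print(f"{name}: shape={tuple(tensor.shape)} dtype={tensor.dtype}")
else:
    # Expected per the card: 2048 symbolic vectors of dim 4096, float32.
    print(f"shape={tuple(memory.shape)} dtype={memory.dtype}")
```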

Intended Uses

  • General symbolic reasoning and logical conversation
  • Memory-aware tutoring, research assistants
  • Code + math proof modeling
  • Context-persistent dialogue systems

Limitations

  • Not instruction-tuned: chat-style inputs may require prompt engineering
  • The larger memory buffer may slightly increase CPU load
  • Symbolic inference is evolved offline, and the memory must be actively seeded (see the sketch after this list)
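
No seeding API is documented on this card, so the sketch below stands in a plain tensor for the 2048-slot buffer and a deterministic random projection for the encoder; both are placeholders, not the model's HelicalDNAProcessor or memory-writing code.

```python
import torch

SLOTS, DIM = 2048, 4096
memory = torch.zeros(SLOTS, DIM)  # placeholder for the symbolic buffer
write_ptr = 0

def encode_symbolic(text: str) -> torch.Tensor:
    # Placeholder encoder: a random projection keyed on the text, used
    # only so this sketch has something to write into memory.
    gen = torch.Generator().manual_seed(hash(text) % (2**31))
    return torch.randn(DIM, generator=gen)

for text in ["<THM> The sum of two even integers is even.",
             "<D_EPS> discrepancy bound for lemma chaining"]:
    memory[write_ptr % SLOTS] = encode_symbolic(text)
    write_ptr += 1
```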

Discrepancy Calculus Foundation

This model is part of the Convergent Intelligence LLC: Research Division portfolio. All models in this portfolio are developed under the Discrepancy Calculus (DISC) framework — a measure-theoretic approach to understanding and controlling the gap between what a model should produce and what it actually produces.

DISC treats training singularities (loss plateaus, mode collapse, catastrophic forgetting) not as failures to be smoothed over, but as structural signals that reveal the geometry of the learning problem. Key concepts:

  • Discrepancy Operator (D): Measures the gap between expected and observed behavior at each training step (see the numeric sketch after this list)
  • Jump Sets: Boundaries where model behavior changes discontinuously — these are features, not bugs
  • Ghost Imprinting: Teacher knowledge that transfers to student models through weight-space topology rather than explicit distillation signal
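
For intuition only, here is a toy numeric reading of the operator and jump sets; the formal measure-theoretic definitions live in the DOI-cited paper, and the threshold below is an illustrative assumption:

```python
# Toy sketch: D as a per-step gap between expected and observed loss,
# with a jump set as the steps where |D| exceeds a threshold.
def discrepancy(expected_loss: float, observed_loss: float) -> float:
    return observed_loss - expected_loss

history = [(0.90, 0.91), (0.80, 0.79), (0.70, 1.20)]  # (expected, observed) per step
jump_set = [t for t, (e, o) in enumerate(history)
            if abs(discrepancy(e, o)) > 0.25]          # threshold is assumed
print(jump_set)  # -> [2]: the step where behavior changed discontinuously
```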

For the full mathematical treatment, see Discrepancy Calculus: Foundations and Core Theory (DOI: 10.57967/hf/8194).

Citation chain: Structure Over Scale (DOI: 10.57967/hf/8165) → Three Teachers to Dual Cognition (DOI: 10.57967/hf/8184) → Discrepancy Calculus (DOI: 10.57967/hf/8194)

Citations

This model was designed and built using Discrepancy Analysis; the full citation chain appears in the Discrepancy Calculus Foundation section above, and a dedicated paper on this model is forthcoming.


Convergent Intelligence Portfolio

Part of the Symbiotic AI Series by Convergent Intelligence LLC: Research Division

Related Models

  • Symbiotic-1B: 4 downloads (HF)
  • Symiotic-14B: 3 downloads (HF)
  • Symbiotic-Beta: 3 downloads (HF)

Total Portfolio: 41 models | 2,781 total downloads

Last updated: 2026-03-28 12:57 UTC


From the Convergent Intelligence Portfolio

DistilQwen Collection — Our only BF16 series. Proof-weighted distillation from Qwen3-30B-A3B → 1.7B and 0.6B on H100. Three teacher variants (Instruct, Thinking, Coder), nine models, 2,788 combined downloads. The rest of the portfolio proves structure beats scale on CPU. This collection shows what happens when you give the methodology real hardware.

Top model: Qwen3-1.7B-Coder-Distilled-SFT — 508 downloads

Full methodology: Structure Over Scale (DOI: 10.57967/hf/8165)

Convergent Intelligence LLC: Research Division

Downloads last month: 421
Model size: 8B params
Tensor type: F32 (safetensors)

Model tree for reaperdoesntknow/Symbiotic-8B

  • Finetuned from: Qwen/Qwen3-8B
  • Quantizations: 2 models
