New symbolic communication model: OpenSymbolic and conceptrons (early research discussion)

Hello everyone,

I’m currently researching a new model I call OpenSymbolic, based on symbolic communication units I’ve named conceptrons. The idea is to encode meaning using color, shape, and tone — forming structured “symbolic chains” similar to words or data packets.

I’d like to discuss its possible applications in communication, assistive technologies, and information systems.

Would it be appropriate to share a demo or a short paper for peer feedback here?

Hello and welcome.

I suggest you read the posting rules first.

Here is a good start.

  • Author

OpenSymbolic — Verifiable Summary
OpenSymbolic is a symbolic encoding system that transforms sensory and structured inputs (color, shape, tone/sound, or JSON data) into minimal, verifiable units called conceptrons. It is fully reproducible: the same input always generates the same Σ (Sigma chain) and the same SHA-256 hash.
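To make the reproducibility claim concrete, here is a minimal sketch of how such a check could work, assuming the Σ chain is exported as JSON and canonically serialized (sorted keys, fixed separators) before hashing. The function name, field layout, and sample values are hypothetical, not taken from the OpenSymbolic prototypes:

```python
import hashlib
import json

def sigma_hash(sigma_chain: dict) -> str:
    """Hash a Σ chain export. Canonical serialization (sorted keys,
    no incidental whitespace) is what makes the digest reproducible:
    any semantically identical export yields byte-identical JSON."""
    canonical = json.dumps(sigma_chain, sort_keys=True,
                           separators=(",", ":"), ensure_ascii=False)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical Σ export; the real field layout is defined by the prototype.
sigma = {"conceptrons": [{"C": "#FF0000", "F": 440.0, "T": 0.5, "M": "circle"}]}
assert sigma_hash(sigma) == sigma_hash(json.loads(json.dumps(sigma)))
print(sigma_hash(sigma))
```

If the prototype serializes differently (key order, whitespace), two exports of the same content would hash differently, which is why a canonical form is the crux of the determinism claim.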
What can be verified now (in 5 minutes)
1. Available artifacts: voice-to-conceptron (Talkboard), retina+voice (Talkboard Retina), NGS→conceptrons (JSON→Σ), and the Universal Translator v0.1 (ES→Σ→EN via speech synthesis).
2. Determinism: load the same JSON example, apply rules, export the Σ output, compute SHA-256, reimport it—hash must remain identical.
3. Sensory determinism: record a short audio input and capture a camera color. The system maps audio energy→frequency (200–800 Hz) and color→C; the capture produces a conceptron {C,F,T,M} (a mapping sketch follows this list).
4. Integrity: every Σ can be exported as JSON and wrapped inside an OSNet envelope (timestamp + payload) for traceability.
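As a rough illustration of items 3 and 4, here is a sketch that maps normalized audio energy linearly onto the stated 200–800 Hz band, builds a conceptron, and wraps it in a timestamped envelope. The linear mapping, the field meanings (C = color, F = frequency, T = tone duration, M = meaning/shape label), and the envelope schema are my assumptions, not a published OpenSymbolic specification:

```python
import time

F_MIN, F_MAX = 200.0, 800.0  # stated frequency band in Hz

def energy_to_frequency(energy: float) -> float:
    """Map normalized audio energy in [0, 1] linearly onto 200-800 Hz;
    clamping keeps out-of-range readings deterministic."""
    e = min(max(energy, 0.0), 1.0)
    return F_MIN + e * (F_MAX - F_MIN)

def make_conceptron(color_hex: str, energy: float, shape: str = "circle") -> dict:
    """Build a {C, F, T, M} conceptron. Nothing time-dependent enters
    the unit itself, so the same capture values always reproduce it."""
    return {
        "C": color_hex,                              # captured color
        "F": round(energy_to_frequency(energy), 2),  # derived frequency (Hz)
        "T": 0.5,                                    # tone duration (s), fixed here
        "M": shape,                                  # meaning/shape label
    }

def osnet_envelope(payload: dict) -> dict:
    """Wrap a payload in an OSNet-style envelope (timestamp + payload)."""
    return {"timestamp": time.time(), "payload": payload}

print(osnet_envelope(make_conceptron("#3A7BD5", 0.42)))  # F = 200 + 0.42*600 = 452.0
```

Keeping the timestamp in the envelope rather than in the conceptron is what lets the payload hash stay stable while the envelope still provides traceability.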
What has been practically demonstrated
• Deterministic behavior: same input → same Σ → same SHA-256.
• Multimodal encoding: voice + color + shape → unique conceptron.
• Interoperability: Σ exported/imported among prototypes (Talkboard NGS Reader).
• Clinical proof of concept: an early-stage communication tool for children drew real interest at a therapeutic center.

How to reproduce and validate
1. Run any demo locally or over HTTPS.
2. Load the built-in example → click “Apply rules → Σ”.
3. Export the JSON output, compute its SHA-256, and record the hash.
4. Re-import the JSON and confirm the hash is identical (a round-trip sketch follows these steps).
5. Capture a screenshot or short video of the process.
6. For the sensor demo: record short audio and capture a color, create a conceptron, replay the tones, and record the resulting sound.
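Steps 3 and 4 amount to a round-trip integrity check. A minimal sketch, using the same canonical serialization as the earlier sigma_hash() example; the file names are placeholders:

```python
import hashlib
import json
from pathlib import Path

def file_sigma_hash(path: str) -> str:
    """Load a Σ JSON export, re-serialize it canonically, and hash it,
    so incidental formatting differences between exports cannot
    affect the digest."""
    sigma = json.loads(Path(path).read_text(encoding="utf-8"))
    canonical = json.dumps(sigma, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

h1 = file_sigma_hash("sigma_export.json")    # step 3: hash the first export
# ... re-import into the demo, export again as sigma_reimport.json ...
h2 = file_sigma_hash("sigma_reimport.json")  # step 4: hash after the round trip
print("hashes match" if h1 == h2 else "HASH MISMATCH", h1)
```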
Simple experiments for scientific validation
• Technical reproducibility: give any peer the same JSON and expect the same hash.
• Perceptual consistency: with 10 audio+color samples from one user, check whether experts assign the same meaning to the Σ outputs (confusion-matrix test).
• Noise tolerance: inject noise into the audio, observe the frequency deviation, and chart the RMS error (a sketch follows this list).
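For the noise-tolerance experiment, the RMS error could be charted from something like the following; the additive Gaussian noise model and the energy_to_frequency() mapping from the earlier sketch are assumptions, not the prototype's actual audio pipeline:

```python
import math
import random

F_MIN, F_MAX = 200.0, 800.0

def energy_to_frequency(energy: float) -> float:
    e = min(max(energy, 0.0), 1.0)
    return F_MIN + e * (F_MAX - F_MIN)

def rms_frequency_error(energies, noise_sigma: float, trials: int = 200) -> float:
    """RMS deviation in Hz between clean and noise-perturbed frequency
    readings, averaged over random trials per sample."""
    sq_sum, n = 0.0, 0
    for e in energies:
        clean = energy_to_frequency(e)
        for _ in range(trials):
            noisy = energy_to_frequency(e + random.gauss(0.0, noise_sigma))
            sq_sum += (noisy - clean) ** 2
            n += 1
    return math.sqrt(sq_sum / n)

samples = [0.1, 0.3, 0.5, 0.7, 0.9]
for sigma in (0.01, 0.05, 0.10):
    print(f"noise sigma={sigma:.2f}: RMS error ~ {rms_frequency_error(samples, sigma):.1f} Hz")
```

Plotting RMS error against noise level gives the chart the experiment calls for; a roughly linear trend would suggest the mapping degrades gracefully rather than catastrophically.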
Limitations (for honesty)
• The system does not "understand" meaning—it encodes multimodal input deterministically.
• Accuracy depends on calibration of mic/camera and rule definitions.
• Ethical approval and GDPR compliance required for clinical data use.
Proposed minimal validation protocol
1. A 2-minute video showing JSON → Σ → same SHA-256, and mic+cam → conceptron → tone replay.
2. Provide replication instructions to other users (the 6 steps above).
3. An optional small-scale pilot with 5 users: measure recognition accuracy and usability.

This is not theoretical; it’s an executable, auditable framework. Each conceptron and Σ chain is deterministic and cryptographically traceable.

I’m open to collaboration or independent replication to verify these claims under scientific observation.

Did you have a question or point for discussion?

  • Author

Yes — the discussion point is whether symbolic encoding (through conceptrons) can serve as a deterministic multimodal representation system.

I’m proposing a reproducible experiment: the same structured input (JSON or sensory data) always produces the same Σ (symbol chain) and identical SHA-256 hash.

This could form a universal layer for communication between human and machine inputs.

I’d like to discuss its theoretical validity and possible applications.

11 hours ago, Francisco Robles said:

Yes — the discussion point is whether symbolic encoding (through conceptrons) can serve as a deterministic multimodal representation system.

Aren’t there already examples of this? Hieroglyphics and other early writing systems? Chinese? Japanese kanji? Logographic languages exist.

12 hours ago, Francisco Robles said:

Yes — the discussion point is whether symbolic encoding (through conceptrons) can serve as a deterministic multimodal representation system.

I’m proposing a reproducible experiment: the same structured input (JSON or sensory data) always produces the same Σ (symbol chain) and identical SHA-256 hash.

This could form a universal layer for communication between human and machine inputs.

I’d like to discuss its theoretical validity and possible applications.

Well, this is somewhat out of my field, but I am aware of recent medical developments in man/machine interfacing, including repurposing nerve connections and special-purpose computer chip implants to regain lost motor control.

Perhaps these are the people to pitch your thinking to?

+1 to swansont for a good question/point

  • Author

Thank you very much for your interest. I truly appreciate your openness.

I’m working on OpenSymbolic, a new symbolic communication system that connects human expression, AI and neuroscience.

I believe it could complement your field of work in bioinformatics and multimodal data interpretation.

Would you be open to a short conversation to share details and possible research directions?

Best regards,

Francisco

Sounds like thumbnail GIFs that emit sounds. Or maybe not. Please give a plain language description of what you're talking about.

17 minutes ago, Francisco Robles said:

I believe it could complement your field of work in bioinformatics and multimodal data interpretation.

Who are you talking to? This doesn't match anyone's profile that is participating in this thread so far.

  • Author

Hello,

I was referring to a prototype I'm developing called OpenSymbolic.

It's an experimental system that represents information using shapes, colors, and tones, which I call conceptrons.

Each conceptron can encode data or meanings visually and acoustically, so yes, they can look like small animated symbols that emit sounds.

I thought it might be interesting for people working in bioinformatics and multimodal data interpretation because it allows for the symbolic visualization of complex datasets.


1 hour ago, Francisco Robles said:

Hello:

I was referring to a prototype I'm developing called OpenSymbolic.

It's an experimental system that represents information using shapes, colors, and tones, which I call conceptrons.

Each conceptron can encode data or meanings visually and acoustically, so yes, they can look like small animated symbols that emit sounds.

I thought it might be interesting for people working in bioinformatics and multimodal data interpretation because it allows for the symbolic visualization of complex datasets.

Yes, thank you, Sirius Cybernetics Corporation, this is the third time you have told us.
