Francisco Robles

Everything posted by Francisco Robles

  1. Hello: I was referring to a prototype I’m developing called OpenSymbolic. It is an experimental system that represents information through shapes, colors, and tones, in what I call conceptrons. Each conceptron can encode data or meanings visually and acoustically, so yes, they can look like small animated symbols that emit sounds. I thought it might be of interest to people working in bioinformatics and multimodal data interpretation, because it allows complex datasets to be visualized symbolically.
  2. Thank you very much for your interest. I truly appreciate your openness. I’m working on OpenSymbolic, a new symbolic communication system that connects human expression, AI, and neuroscience. I believe it could complement your work in bioinformatics and multimodal data interpretation. Would you be open to a short conversation to share details and possible research directions? Best regards, Francisco
  3. Yes: the discussion point is whether symbolic encoding (through conceptrons) can serve as a deterministic multimodal representation system. I’m proposing a reproducible experiment: the same structured input (JSON or sensory data) always produces the same Σ (symbol chain) and an identical SHA-256 hash. This could form a universal layer for communication between human and machine inputs. I’d like to discuss its theoretical validity and possible applications. (A minimal sketch of this determinism check appears after the posts below.)
  4. OpenSymbolic — Verifiable Summary

     OpenSymbolic is a symbolic encoding system that transforms sensory and structured inputs (color, shape, tone/sound, or JSON data) into minimal, verifiable units called conceptrons. It is fully reproducible: the same input always generates the same Σ (Sigma chain) and the same SHA-256 hash.

     What can be verified now (in 5 minutes)
       1. Available artifacts: voice-to-conceptron (Talkboard), retina+voice (Talkboard Retina), NGS→conceptrons (JSON→Σ), and the Universal Translator v0.1 (ES→Σ→EN via speech synthesis).
       2. Determinism: load the same JSON example, apply the rules, export the Σ output, compute its SHA-256, re-import it, and confirm the hash remains identical.
       3. Sensory determinism: record a short audio input and capture a camera color. The system maps audio energy→frequency (200–800 Hz) and color→C; each capture produces a conceptron {C,F,T,M}.
       4. Integrity: every Σ can be exported as JSON and wrapped inside an OSNet envelope (timestamp + payload) for traceability.

     What has been practically demonstrated
       • Deterministic behavior: same input → same Σ → same SHA-256.
       • Multimodal encoding: voice + color + shape → unique conceptron.
       • Interoperability: Σ exported/imported among prototypes (Talkboard ↔ NGS ↔ Reader).
       • Clinical proof of concept: an early-stage communication tool for children attracted real interest at a therapeutic center.

     How to reproduce and validate
       1. Run any demo locally or under HTTPS.
       2. Load the built-in example → click “Apply rules → Σ”.
       3. Export the JSON output, compute its SHA-256, and record the hash.
       4. Re-import the JSON and confirm the hash is identical.
       5. Capture a screen recording or short video of the process.
       6. For the sensor demo: record a short audio input and capture a color, create the conceptron, replay the tones, and record the resulting sound.

     Simple experiments for scientific validation
       • Technical reproducibility: give any peer the same JSON and expect the same hash.
       • Perceptual consistency: with 10 audio+color samples from one user, check whether experts assign the same meaning to the Σ outputs (confusion-matrix test).
       • Noise tolerance: inject noise into the audio and observe the frequency deviation; chart the RMS error.

     Limitations (for honesty)
       • The system does not "understand" meaning; it encodes multimodal input deterministically.
       • Accuracy depends on mic/camera calibration and on the rule definitions.
       • Ethical approval and GDPR compliance are required for clinical data use.

     Proposed minimal validation protocol
       1. A 2-minute video showing JSON → Σ → same SHA-256, and mic+cam → conceptron → tone replay.
       2. Replication instructions for other users (the 6 steps above).
       3. An optional small-scale pilot with 5 users: measure recognition accuracy and usability.

     This is not theoretical; it’s an executable, auditable framework. Each conceptron and Σ chain is deterministic and cryptographically traceable. I’m open to collaboration or independent replication to verify these claims under scientific observation. (Minimal sketches of the determinism check, the sensory capture, the OSNet envelope, and the noise test appear after the posts below.)
  5. Hello everyone, I’m currently researching a new model I call OpenSymbolic, based on symbolic communication units I’ve named conceptrons. The idea is to encode meaning using color, shape, and tone — forming structured “symbolic chains” similar to words or data packets. I’d like to discuss its possible applications in communication, assistive technologies, and information systems. Would it be appropriate to share a demo or a short paper for peer feedback here?
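
Post 3 proposes the core determinism check: the same structured input must always yield the same Σ chain and the same SHA-256 hash. The Python sketch below shows one way such a check could be wired up. The encoding rule (encode_conceptron), the field names, and the Σ serialization are hypothetical stand-ins; the actual rules live inside the OpenSymbolic prototypes and are not published in this thread.

    import hashlib
    import json

    # Hypothetical encoding rule: map one input record to a conceptron
    # {C, F, T, M} (color, frequency, tone/shape, meaning tag). The real
    # OpenSymbolic rules are defined in the prototypes; this is a stand-in.
    def encode_conceptron(record: dict) -> dict:
        return {
            "C": record.get("color", "#000000"),
            "F": record.get("frequency", 440),
            "T": record.get("shape", "circle"),
            "M": record.get("label", ""),
        }

    # Canonical serialization: sorted keys, fixed separators. Without a
    # canonical form, equal data could serialize (and hash) differently.
    def sigma_chain(records: list) -> str:
        return json.dumps([encode_conceptron(r) for r in records],
                          sort_keys=True, separators=(",", ":"))

    def sigma_hash(sigma: str) -> str:
        return hashlib.sha256(sigma.encode("utf-8")).hexdigest()

    data = [{"color": "#ff0000", "frequency": 520,
             "shape": "triangle", "label": "alert"}]

    sigma = sigma_chain(data)
    h1 = sigma_hash(sigma)

    # Export / re-import round trip: the hash must not change.
    reimported = json.loads(sigma)
    h2 = sigma_hash(json.dumps(reimported, sort_keys=True,
                               separators=(",", ":")))
    assert h1 == h2, "determinism check failed"
    print(h1)

The one design point worth noting is the canonical serialization: key order and whitespace must be fixed before hashing, or semantically identical exports could produce different hashes.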
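
Post 4 describes the sensory capture as mapping audio energy to a frequency in the 200–800 Hz band and a camera color to the C field of a conceptron {C,F,T,M}. Below is a minimal sketch, assuming RMS energy normalized to [0, 1], a linear frequency map, and an RGB tuple for the color; the prototypes' actual calibration and scaling may differ.

    import math

    # RMS energy of a mono audio frame, samples in [-1.0, 1.0].
    def audio_energy(samples: list) -> float:
        return math.sqrt(sum(s * s for s in samples) / len(samples))

    # Linear map of normalized energy onto the 200-800 Hz band (assumption:
    # the prototypes may use a perceptual, e.g. logarithmic, scale instead).
    def energy_to_frequency(energy: float, lo: float = 200.0,
                            hi: float = 800.0) -> float:
        e = min(max(energy, 0.0), 1.0)
        return lo + e * (hi - lo)

    # The C field as a hex color string from an (R, G, B) tuple.
    def rgb_to_c(rgb: tuple) -> str:
        return "#{:02x}{:02x}{:02x}".format(*rgb)

    # One capture -> one conceptron {C, F, T, M}; M is left empty here.
    def capture_conceptron(samples: list, rgb: tuple,
                           shape: str = "circle") -> dict:
        return {
            "C": rgb_to_c(rgb),
            "F": round(energy_to_frequency(audio_energy(samples)), 1),
            "T": shape,
            "M": "",
        }

    # A synthetic 440 Hz frame stands in for a real microphone capture.
    frame = [0.25 * math.sin(2 * math.pi * 440 * t / 16000)
             for t in range(1024)]
    print(capture_conceptron(frame, (255, 64, 0)))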
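
Post 4 also states that every Σ export can be wrapped in an OSNet envelope (timestamp + payload) for traceability. Only those two fields are given in the post; the digest field and the verification step below are illustrative additions showing how such an envelope could make exports checkable.

    import hashlib
    import json
    from datetime import datetime, timezone

    # Envelope = timestamp + payload, as described in the post. The digest
    # field is an illustrative extra so the payload can be re-checked later.
    def wrap_osnet(sigma_json: str) -> dict:
        return {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "payload": sigma_json,
            "digest": hashlib.sha256(sigma_json.encode("utf-8")).hexdigest(),
        }

    def verify_osnet(envelope: dict) -> bool:
        expected = hashlib.sha256(
            envelope["payload"].encode("utf-8")).hexdigest()
        return envelope["digest"] == expected

    sigma = json.dumps([{"C": "#ff4000", "F": 306.4, "T": "circle", "M": ""}],
                       sort_keys=True, separators=(",", ":"))
    env = wrap_osnet(sigma)
    assert verify_osnet(env)
    print(json.dumps(env, indent=2))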
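
Finally, post 4 suggests a noise-tolerance experiment: inject noise into the audio and chart the RMS error of the resulting frequency. A self-contained sketch of that measurement follows, repeating the two helpers from the sensory sketch and using uniform noise as a stand-in for real microphone noise.

    import math
    import random

    # Helpers repeated from the sensory sketch so this block runs on its own.
    def audio_energy(samples: list) -> float:
        return math.sqrt(sum(s * s for s in samples) / len(samples))

    def energy_to_frequency(energy: float, lo: float = 200.0,
                            hi: float = 800.0) -> float:
        return lo + min(max(energy, 0.0), 1.0) * (hi - lo)

    # For each noise level, add uniform noise and measure how far the mapped
    # frequency drifts from the clean-signal frequency (RMS over `trials`).
    def frequency_rms_error(samples: list, noise_levels: list,
                            trials: int = 20) -> dict:
        clean_f = energy_to_frequency(audio_energy(samples))
        errors = {}
        for level in noise_levels:
            sq_sum = 0.0
            for _ in range(trials):
                noisy = [s + random.uniform(-level, level) for s in samples]
                sq_sum += (energy_to_frequency(audio_energy(noisy))
                           - clean_f) ** 2
            errors[level] = math.sqrt(sq_sum / trials)
        return errors

    frame = [0.25 * math.sin(2 * math.pi * 440 * t / 16000)
             for t in range(1024)]
    for level, err in frequency_rms_error(frame, [0.01, 0.05, 0.1]).items():
        print(f"noise ±{level}: RMS frequency error ≈ {err:.1f} Hz")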
