Comparison of Synthesis to other programming languages

DISCLAIMER: The following discussion, although interesting, gets technical.

  • You don’t need to understand any of the following stuff in order to use Synthesis.

Short version

Compared to other programming languages, Synthesis has:

  • a lot of similarities
    • Most programming languages are similar to each other, because they target the same computing architecture.
  • a lot of differences
    • Natural languages are for common humans, while programming languages are for experts.
    • The programming language that comes closest to Synthesis is pseudocode.
      • not well-defined, but Synthesis can resemble most of its variations

Long version

There are endless ways to classify a programming language.

  • This is highly problematic:
    • as languages tend to:
      • fit into multiple classes
      • yet only partially to each one of them
    • because languages are designed:
      • to serve specific needs
      • not to satisfy theoretical classifications
  • Designing a language is all about making design choices.
    • plenty of trade-offs to consider
  • Therefore, rather than trying to fit a language into classes
    • e.g. one could say that Synthesis is a general-purpose multi-paradigm very-high-level reflective etc. meta-language:
      • Such a statement:
        • carries very little actual information
        • is potentially misleading
  • it is more meaningful to list some of its design choices (especially the implemented ones):
    • simplicity
      • easier to learn and use than both:
        • most programming languages
        • most natural languages
      • but also simple in its implementation
        • easy to maintain and extend
      • some examples below are marked as adhering to the KISS principle
    • truly high-level (or human-first)
      • targets common humans
        • not machines, but also not experts
      • can be adapted for non-English speakers
        • though it doesn’t facilitate advanced grammars
          • An LLM could be used to translate from an advanced grammar to a simple one.
    • extreme modularity and scalability
      • Synthesis uses only two units (KISS principle):
        • definitions
          • similar to Logseq’s blocks
          • essentially expressions that provide some functionality
          • Synthesis encourages (but doesn’t enforce) next-level DRY:
            • breaking composite expressions into separate reusable blocks
            • grouping relevant definitions under the same ancestor-blocks
        • modules
          • similar to Logseq’s pages
          • essentially named block-spaces that provide a local static context
          • they delegate to each other (see the delegation sketch after this list), e.g.:
            • Consider these steps:
              • page G/A in my graph G asks something from page G/B in the same graph
              • page G/B defines an API to page F/B, which is in my friend’s graph F
              • page F/B delegates to page F/A in my friend’s graph
            • Effectively my page G/A gets its answer from page F/A of my friend.
              • It doesn’t need to worry about where the answer came from. When pure, the answer can even be:
                • downloaded from someone else who hosts my friend’s page
                  • ensuring the version and safety of the file
                • executed locally, for local logging and debugging
                  • without messing up the local graph and model
            • All these are both defined and communicated in the same natural language.
              • This approach allows for endless scaling, avoiding special constructs and protocols.
        • no classes
          • classes are generally inferior to a type-system with multiple dispatch (see below)
    • context-orientation
      • a mostly unexploited paradigm that fits the human mind like no other
        • among other things, it enables self-adaptation
      • static context
        • narrow: block-based
          • takes full advantage of Logseq’s outliner
        • broad: page-based (thus also namespace-based)
          • allows the creation of domain-specific subsets of the language
      • dynamic context
        • narrow: type-based
          • intuitive left-to-right multiple dispatch (see the dispatch sketch after this list)
        • broad: caller-based
          • explicit state, yet without side-effects
    • balanced type-system
      • primitive types
        • the four basic JSON types with their literal expressions (KISS principle)
          • namely: floating point number, free text, ordered list, dictionary of key-value pairs
          • no booleans or null: these can be words (see right below)
        • word-type: for plain real words (or word-first)
          • some words are predefined (e.g. yes, no, void etc.)
            • These are words, not keywords.
          • they don’t correspond to memory addresses
            • memory is addressed naturally
              • e.g. with pronouns: it/them, that/those, this/these
          • they act as both words and values
            • but they don’t have fixed values
              • their meaning is context-specific
                • in the same context, identical words have the same value
          • space character (see the tokenization sketch after this list)
            • Logseq’s outliner takes care of the indentation.
            • Consecutive spaces are treated as one.
              • like in most environments
            • Lack of space is not always ignored:
              • it often merges the words into one
                • a - b is a subtraction expression
                  • can be overridden, because - is a word, not an operator
                • a-b is a composite word
                  • composite words can be treated both as one and as multiple words
                • This is by design.
              • it rarely has a special meaning
                • a=b is a simple word
                • a = b is a comparison expression
                  • can be overridden, because = is a word, not an operator
                • a= b is an assignment statement
                  • a shortened version of let a b
                  • cannot be overridden, because here = is a suffix
      • custom abstract types
        • simple wrappers of other types (KISS principle)
          • Nothing is intrinsically abstract; it simply can be viewed in abstract ways.
        • no inheritance, no troubles: inheritance remains fully possible in the future, though with few chances of being added
      • platform-specific system-types
        • Synthesis is currently based on JavaScript (via Logseq, which runs on Electron).
          • This is for purely practical reasons.
          • This carries over some of JavaScript’s own unfortunate choices.
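
For the page-to-page delegation described under “modules”, here is a minimal TypeScript sketch of the idea (TypeScript only because Synthesis currently sits on JavaScript). The page names G/A, G/B, F/B and F/A come from the example above; everything else (the Page shape, the ask function) is a hypothetical illustration, not Synthesis internals.

```typescript
// Hypothetical sketch of page-to-page delegation (not actual Synthesis code).
// A page either answers a request with a local definition or forwards it
// to another page; the original caller never needs to know which one happened.

type Page = {
  name: string;                            // e.g. "G/A", "G/B", "F/B", "F/A"
  answer?: (question: string) => string;   // local definition, if any
  delegateTo?: Page;                       // next page in the delegation chain
};

function ask(page: Page, question: string): string {
  if (page.answer) return page.answer(question);               // answered locally
  if (page.delegateTo) return ask(page.delegateTo, question);  // forwarded
  throw new Error(`${page.name} cannot answer "${question}"`);
}

// My friend's graph F: F/A holds the actual definition, F/B merely exposes it.
const FA: Page = { name: "F/A", answer: (q) => `F/A's answer to "${q}"` };
const FB: Page = { name: "F/B", delegateTo: FA };

// My graph G: G/B delegates to F/B, and G/A simply asks G/B.
const GB: Page = { name: "G/B", delegateTo: FB };
const GA: Page = { name: "G/A", delegateTo: GB };

console.log(ask(GA, "something")); // G/A effectively gets its answer from F/A
```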
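
The “intuitive left-to-right multiple dispatch” mentioned under dynamic context could behave roughly as below. This is a sketch under my own assumptions (the Candidate registry and the example add definitions are hypothetical, not how Synthesis actually implements dispatch): candidate definitions are filtered by the type of each argument, one position at a time, from left to right.

```typescript
// Hypothetical sketch of left-to-right multiple dispatch (not actual Synthesis code).
// Every candidate definition lists the expected type of each argument;
// candidates are narrowed down position by position, from left to right.

type TypeName = "number" | "text" | "list";

type Candidate = {
  signature: TypeName[];
  body: (...args: unknown[]) => unknown;
};

function typeOf(value: unknown): TypeName {
  if (typeof value === "number") return "number";
  if (typeof value === "string") return "text";
  if (Array.isArray(value)) return "list";
  throw new Error("type not covered by this sketch");
}

function dispatch(candidates: Candidate[], args: unknown[]): unknown {
  let remaining = candidates.filter((c) => c.signature.length === args.length);
  for (let i = 0; i < args.length; i++) {
    // keep only the candidates whose i-th parameter matches the i-th argument
    remaining = remaining.filter((c) => c.signature[i] === typeOf(args[i]));
  }
  if (remaining.length === 0) throw new Error("no matching definition");
  return remaining[0].body(...args); // the surviving definition handles the call
}

// A hypothetical "add" word with three context-specific meanings:
const add: Candidate[] = [
  { signature: ["number", "number"], body: (a, b) => (a as number) + (b as number) },
  { signature: ["text", "text"], body: (a, b) => (a as string) + (b as string) },
  { signature: ["list", "number"], body: (a, b) => (a as number[]).map((x) => x + (b as number)) },
];

console.log(dispatch(add, [1, 2]));          // 3
console.log(dispatch(add, ["foo", "bar"]));  // "foobar"
console.log(dispatch(add, [[1, 2, 3], 10])); // [11, 12, 13]
```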
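
Finally, the spacing rules listed under “space character” can be made concrete with a tiny tokenization sketch. The classification logic, and especially how the trailing = suffix is detected, is my own simplification for illustration, not the actual Synthesis parser.

```typescript
// Illustrative tokenizer for the spacing rules above (not the actual Synthesis parser).
// - consecutive spaces are treated as one separator
// - characters glued together form a single (possibly composite) word
// - a trailing "=" on the first word is read as the assignment suffix (let <word> ...)

function tokenize(line: string): string[] {
  return line.trim().split(/\s+/); // consecutive spaces collapse here
}

function describe(line: string): string {
  const words = tokenize(line);
  const first = words[0] ?? "";
  if (first.endsWith("=") && first !== "=") {
    // "a= b" becomes the statement "let a b"
    return `assignment statement: let ${first.slice(0, -1)} ${words.slice(1).join(" ")}`;
  }
  if (words.length === 1) return `one (possibly composite) word: ${words[0]}`;
  return `expression of ${words.length} words: ${words.join(" | ")}`;
}

console.log(describe("a - b"));  // expression of 3 words: a | - | b   (subtraction)
console.log(describe("a-b"));    // one (possibly composite) word: a-b
console.log(describe("a = b"));  // expression of 3 words: a | = | b   (comparison)
console.log(describe("a=b"));    // one (possibly composite) word: a=b
console.log(describe("a= b"));   // assignment statement: let a b
console.log(describe("a    b")); // expression of 2 words: a | b
```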

Conclusion

  • Design choices like the above should make it obvious that Synthesis has the following:
    • non-priorities: familiarity to programmers, performance, completeness
      • such things can get improved, but without competing with specialized software
        • if you need them, prefer using the right tools
          • Synthesis can call some of them
    • priorities: familiarity to common humans, flexibility, innovation
      • no surprise that Logseq is a fitting platform
  • Synthesis is purposely different enough.
    • Logseq already supports Clojure, Datalog, JavaScript, Python, R, and even some tiny DSLs
      • no need for another language similar to them
        • although it wouldn’t hurt
        • especially since Synthesis can call them (to various degrees)
  • Is that relevant with AI around?
    • Short answer: AI makes Synthesis even more relevant.
    • Long answer: [Comparison of Synthesis to modern AI] (topic under construction)