Logseq performance gets very bad as the graph grows

I’m new to Logseq; I just started a few days ago and I’m still heavily modeling my environment.

I really like the concept and think the journal + pages, tags, references, and properties are top notch.

I have not looked into the architecture deeply, but given the way the “database” is built and the way the data seems to be stored in the front-end, I’m not surprised. JS on V8 is a poor architecture for storing large amounts of data, with lots of wasted memory, and the JIT will create new code paths every time you access an object through a different access path. It will also be very hard to utilize multiple CPU cores without wasting even more memory. Just because Clojure was designed for parallel execution does not mean that ClojureScript compiled to JS, running in an Electron environment, will get any parallelism out of that (I doubt it).

I highly suggest separating the data core into a new Rust-based backend and then transitioning the frontend to it:

  1. Create a Markdown parser crate for Logseq’s flavor (see the parser sketch after this list)
  2. Model the current data model as a graph (sketch below)
  3. Create an API server with axum + tokio (sketch below)
  4. Add an API to run JS scripts in an embedded Deno runtime inside the server (sketch below)
    4.1 Add a WASI (WebAssembly) API to run high-performance code on the graph
  5. Transition the front-end to use the new core; extend the API until the front-end only uses the API
  6. Rewrite the front-end in something faster (Tauri?) (sketch below)
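
For step 1, the parser crate could wrap an existing CommonMark parser such as pulldown-cmark and layer Logseq’s extensions (block refs, `key:: value` properties) on top. A minimal sketch, assuming pulldown-cmark 0.9; `collect_links` is just an illustrative helper, not anything Logseq ships:

```rust
// Cargo.toml: pulldown-cmark = "0.9"
use pulldown_cmark::{Event, Parser, Tag};

/// Parse a page's Markdown and collect outgoing link destinations.
/// pulldown-cmark only handles plain CommonMark; Logseq's own flavor
/// ([[wiki-links]], block refs, properties) would need a custom pass
/// layered on top of this.
fn collect_links(source: &str) -> Vec<String> {
    let mut links = Vec::new();
    for event in Parser::new(source) {
        if let Event::Start(Tag::Link(_, dest, _)) = event {
            links.push(dest.to_string());
        }
    }
    links
}

fn main() {
    println!("{:?}", collect_links("see [another page](logseq://page/other)"));
}
```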
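
For step 2, one plausible starting point is the petgraph crate: pages as nodes, references as edges. A sketch under that assumption; `PageGraph` and its methods are invented names for illustration:

```rust
// Cargo.toml: petgraph = "0.6"
use petgraph::graph::{DiGraph, NodeIndex};
use std::collections::HashMap;

struct PageGraph {
    graph: DiGraph<String, ()>,        // node weight = page name
    index: HashMap<String, NodeIndex>, // fast name -> node lookup
}

impl PageGraph {
    fn new() -> Self {
        Self { graph: DiGraph::new(), index: HashMap::new() }
    }

    /// Get or create the node for a page.
    fn page(&mut self, name: &str) -> NodeIndex {
        if let Some(&ix) = self.index.get(name) {
            return ix;
        }
        let ix = self.graph.add_node(name.to_string());
        self.index.insert(name.to_string(), ix);
        ix
    }

    /// Record that `from` references `to` (e.g. via a [[link]]).
    fn link(&mut self, from: &str, to: &str) {
        let (a, b) = (self.page(from), self.page(to));
        self.graph.add_edge(a, b, ());
    }
}

fn main() {
    let mut g = PageGraph::new();
    g.link("journal/2024-01-01", "projects/logseq");
    println!("{} pages", g.graph.node_count());
}
```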
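
Step 3 might look like the following with axum 0.7 and tokio; the `/pages/:name` route and `get_page` handler are hypothetical placeholders for endpoints that would query the graph store from step 2:

```rust
// Cargo.toml: axum = "0.7", tokio = { version = "1", features = ["full"] }
use axum::{extract::Path, routing::get, Router};

// Hypothetical handler: a real core would look the page up in the graph.
async fn get_page(Path(name): Path<String>) -> String {
    format!("contents of page {name}")
}

#[tokio::main]
async fn main() {
    let app = Router::new().route("/pages/:name", get(get_page));
    let listener = tokio::net::TcpListener::bind("127.0.0.1:3000")
        .await
        .expect("bind failed");
    axum::serve(listener, app).await.expect("server error");
}
```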
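
Step 4 could build on the deno_core crate. This is only a rough sketch of the shape: deno_core’s API changes significantly between releases, and a real plugin host would register custom “ops” so scripts can query the graph; the script and its name here are placeholders:

```rust
// Cargo.toml: deno_core (pin a version; the API shifts between releases)
use deno_core::{JsRuntime, RuntimeOptions};

fn main() {
    // Bare V8 runtime; a real plugin host would register "ops" that
    // expose graph operations to scripts (note: no `console` exists here).
    let mut runtime = JsRuntime::new(RuntimeOptions::default());
    runtime
        .execute_script("<plugin>", "globalThis.answer = 40 + 2;")
        .expect("script failed");
}
```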
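
For step 6, a Tauri shell keeps the UI as a webview but moves all state into Rust. A minimal sketch, assuming a standard Tauri 1.x project scaffold (`tauri.conf.json` etc.); the `page_count` command is a hypothetical stand-in for real graph queries:

```rust
// Cargo.toml: tauri = "1" (requires a Tauri project scaffold to build)

// A command the webview can invoke; here it would just proxy to the
// Rust core / local API server.
#[tauri::command]
fn page_count() -> usize {
    42 // placeholder; a real implementation would ask the graph store
}

fn main() {
    tauri::Builder::default()
        .invoke_handler(tauri::generate_handler![page_count])
        .run(tauri::generate_context!())
        .expect("error while running tauri application");
}
```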

Separating the core like this would yield many advantages: early spawning of the backend after login, a minimized memory footprint while sitting in the system tray, the option of running the backend on a local server, …

Let’s be honest: Clojure is a choice that will not attract many contributors.
