Logseq performance very bad as graph grows

I’m coming up on 2 years using Logseq, and now that the graph is getting bigger, the app is becoming unusable on my Mac with a regular non-SSD hard drive.

I have ~2000 pages and ~430 journal entries (which really isn’t that many - I have more in Microsoft OneNote from years earlier).

On the MacBook Pro (circa 2017, with SSD) it now takes 4 minutes to open Logseq; on the iMac, also 2017, with a regular drive it now takes 10+ minutes.

This morning was the last straw…Logseq on the iMac stopped responding.

I’m rebooting the Mac, but I can’t have this level of non-productivity. I’ll give it a few days and see if I cool off…but right now this second-brain information experiment using Logseq seems to be done, as the product does not seem to stand the test of time.

The good - it’s my data, plain text (i.e. Markdown focused), easy to resurface information (through graph relations), minimal time curating and reorganizing data, easy to include in a workflow; PDF notes integration is pretty good.

The difficult - so much data starts on the web or in another app, so the process of capturing that information with links or pointers so you can resurface it again in Logseq is time-consuming; the product is focusing on new whiteboard features rather than pure performance. I don’t need whiteboarding - every app from Zoom to Google to Miro has that feature. I want to use best-of-breed tools (such as Miro) and save that knowledge into my second brain (or at least be able to find it again).

Here’s ChatGPT’s more cogent version of my rant:
Feedback:

  1. Performance Issues: As my Logseq knowledge graph grew, the app’s performance on both my MacBook Pro and iMac (with a regular hard drive) deteriorated significantly. Opening Logseq takes 4 minutes on the SSD Mac and 10+ minutes on the regular-drive Mac, and it sometimes becomes unresponsive.
  2. Hardware Impact: Consider optimizing Logseq’s performance for users with regular hard drives. Many users may not have SSDs, so ensuring good performance on standard hardware is crucial.
  3. Focus on Core Features: While new features like whiteboarding are interesting, prioritizing performance improvements for existing users should be a top concern.
  4. Efficiency in Data Capture: Simplify the process of capturing data from the web or other apps and integrating it into Logseq. Streamlining this workflow can save users significant time.
  5. Product Direction: Consider whether Logseq’s evolving feature set aligns with users’ primary needs for a second brain/knowledge management tool.

As a user with an 8th-gen Intel processor and an SSD, with a 2883-page database, I can say the application can struggle to get started, especially if there have been changes made on another computer. Do check your plugins, of course, in case something is limiting the process there.

I’m new to Logseq; I just started some days ago and I’m still heavily modeling my environment.

I really like the concept and think the journal + pages, tags, references, and properties are top notch.

I have not really looked into the architecture deeply, but given the way the “database” is built and the way the data seems to be stored in the front-end, I’m not surprised. JS on V8 is a very bad architecture for storing data, with lots of wasted memory. The JIT will create new code paths every time you access an object by a different path. It will also be very hard to utilize your CPU cores without wasting even more memory. Just because Clojure was designed for parallel execution does not mean ClojureScript translated to JS and running in an Electron environment will get any parallelism out of that (I doubt it).

I highly suggest separating the data core into a new Rust-based backend and then transitioning the frontend to that:

  1. create a Markdown parser crate for Logseq’s flavor
  2. model the current data model as a graph
  3. create an API server with axum + tokio (a toy sketch follows this list)
  4. add an API to run JS scripts in a Deno environment inside the server
    4.1 add a WASI (WebAssembly) API to run high-performance code on the graph
  5. transition the front-end to use the new core; enhance the API until the front-end only uses the API as well
  6. rewrite the frontend in something faster (Tauri?)
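
To make step 3 concrete, here is a minimal, hypothetical sketch of what such a local graph API could look like with axum + tokio. The `Block` type, the `/blocks/:id` route, the port, and the Cargo dependencies are my own illustration, not anything from the Logseq codebase:

```rust
// Hypothetical step 3 sketch: a local graph API with axum + tokio.
// Assumed Cargo deps: axum = "0.7", tokio (features = ["full"]), serde (features = ["derive"]).

use axum::{extract::Path, routing::get, Json, Router};
use serde::Serialize;

// Invented block type; the real data model would come out of step 2.
#[derive(Serialize)]
struct Block {
    id: String,
    content: String,
}

// Invented handler: a real one would look the block up in the graph store.
async fn get_block(Path(id): Path<String>) -> Json<Block> {
    Json(Block {
        id,
        content: "TODO: fetch from the graph store".into(),
    })
}

#[tokio::main]
async fn main() {
    let app = Router::new().route("/blocks/:id", get(get_block));

    // Bind to localhost only; the desktop/web frontend would talk to this port.
    let listener = tokio::net::TcpListener::bind("127.0.0.1:3030").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
```

The frontend (step 5) would then just fetch `GET /blocks/<id>` and similar endpoints instead of the renderer process holding the whole graph in memory.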

Separating the core like this would yield many advantages: early spawning of the backend after login, a minimized memory footprint when sitting in the systray, running the backend on a local server, …

Let’s be honest, Clojure is a choice that will not yield many contributors.


Just to be clear, it’s not like the original developer made a mistake; he developed Logseq to run client-side as a web app. Then the desktop app was introduced and development focused on that. I think that if they had to redevelop it from scratch they would do as you suggested. But let’s see how the performance of the DB version and SQLite as WASM turns out.


Of course. I’m not blaming anybody here, and as a software dev I know that dev work is never a straight line and you end up with lots of legacy code you never planned for.

I just wanted to give my $0.02 on what I think would be a good direction to develop in.
Using Rust as the core has the additional advantage that it compiles easily to WASM and can run in the browser.
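
As a rough illustration of that point (and not anything Logseq actually ships), a wasm-bindgen export is all it takes to call the same Rust core from the browser; the `block_count` function and its counting logic are made up for this example:

```rust
// Hypothetical example of exposing a piece of a Rust core to JS via wasm-bindgen.
// Built with e.g. `wasm-pack build --target web`.

use wasm_bindgen::prelude::*;

// Callable from JS as `block_count(markdown)` once the WASM module is loaded.
#[wasm_bindgen]
pub fn block_count(markdown: &str) -> u32 {
    // Placeholder "parser": count bullet lines in a Markdown page.
    markdown
        .lines()
        .filter(|line| line.trim_start().starts_with("- "))
        .count() as u32
}
```

The desktop backend would link the same crate natively, so the parser and graph logic only have to exist once.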


There are a couple of similar stories on this recent Reddit thread:
https://www.reddit.com/r/logseq/comments/17wvwcl/is_logseq_performance_really_terrible_when_you/