I just started using LogSeq more extensively and decided to move over my Roam graph, which is fairly large (used for over a year) but not huge. I’m using the newest desktop release on Mac and have imported my Roam JSON. I’m surprised at how slow basic use of my graph becomes once I import the Roam data. E.g. expanding/collapsing items on a page in the sidebar takes 10+ seconds to execute. It makes the graph basically unusable.
Anyone else experience this type of performance lag? Or have suggestions? I was using a cloud drive for my data file, but have also tried on a fully local graph too…both are very slow.
Yes, I’m having the same problem. My workaround is to open large notes in an external editor and continue from there. There was a version that improved performance a bit, but only a bit; it was before the refactoring. Since then it has stayed as slow as it is now.
Thanks for the reply, Zab. The performance issue persists for me and unfortunately makes LogSeq unusable right now. I’ll check back down the line and see if this gets resolved/improved.
Same performance issues here. A real shame, as I had hoped that Logseq and Obsidian could have been a really good option for me.
Sorry to hear you’re having the same performance issues, @Jules. I have kept updating with new releases, but so far none has helped my slow performance.
Did you manage to pinpoint the main cause of the bad performance? Mine takes a long time to reindex/parse, but once that’s done I can type/fold at normal speed. The graph view takes forever, though, and opening the sidebar can be slow.
I found that files with >500 bullets or files with lots of code blocks were slow.
Do you have more details to share? Is it choking on specific files or patterns?
For me the graph functions OK on new pages, but after importing my Roam DB, any of my somewhat large Roam pages (with a good number of bullets) are very slow to load, type in, or use in any meaningful way, whether in the sidebar or the main window. New daily pages and other new pages are better, but I need to be able to work across the whole DB or the value of the graph just isn’t there. Reindexing doesn’t help.
It’s a known issue with large pages that have many blocks, we’re going to improve this soon. It could be next week.
I have the same problem with the Linux app with notes in a local folder. It’s so slow, it’s unusable.
Great to hear that, @tienson. I’d love to be using LogSeq, so hopefully that fix will make it usable for me. I’ll look for the update.
Hi @tienson, is the 0.2.10 release the one that is supposed to address the slow performance? I see the release notes mention long pages should load faster. If it’s helpful to know, I updated and see no improvement in performance with my large graph pages. For example, it took 10+ seconds to open a large page into the sidebar, and expanding/collapsing blocks in that page takes 10+ seconds…
That’s very encouraging!
I’ve been a Clojure enthusiast for many years and Dynalist is a deep part of my workflow; naturally, I would love to use logseq, but my main outline is unusably slow as things stand.
Could you share specific metrics about a slow page? How many bullets/lines? Does it contain a lot of links, indentation levels, code blocks, …?
In 0.2.10 it seems there is now a pagination system that fetches around 200 bullets per ‘page’ (not sure of the exact number; it’s an approximation from my quick experiment).
I tested a medium file with 3,577 lines spread across approx. 500 bullets, no indentation/sub-levels, only first-level bullets + properties. The initial loading time is 5-6 seconds, then switching pages with more/prev takes about 4 s. (This is a noticeable improvement over the previous version with this specific file; before, I think it took between 15 and 20 s to load.)
I don’t know if the slowness is due to reparsing the whole file on each change, but if it is, what about doing the process in two steps? First, detect the structure of the file: blocks and headers. Then parse only the block that changed (the one the user edited). That way you avoid reparsing the whole file; if I add a dot to the first header block, there’s no need to reparse the content of the next 10,000 blocks.
Just an idea…
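The two-step idea above could be sketched like this. This is a toy illustration in Python, not Logseq’s actual Clojure parser: `split_blocks` and `parse_block` are hypothetical stand-ins for a cheap structural pass and the real, expensive per-block parse.

```python
import hashlib

def split_blocks(text):
    """Step 1: cheap structural pass. Split the file into top-level
    blocks (one per '- ' bullet) without parsing their contents."""
    blocks, current = [], []
    for line in text.splitlines():
        if line.startswith("- ") and current:
            blocks.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        blocks.append("\n".join(current))
    return blocks

# Cache of parsed results, keyed by a hash of each block's raw text.
_parse_cache = {}

def parse_block(raw):
    """Stand-in for the expensive per-block parse (links, markup, ...)."""
    return {"raw": raw, "words": len(raw.split())}

def parse_file(text):
    """Step 2: reparse only blocks whose raw text changed since last time;
    unchanged blocks are served from the cache."""
    results = []
    for raw in split_blocks(text):
        key = hashlib.sha1(raw.encode()).hexdigest()
        if key not in _parse_cache:
            _parse_cache[key] = parse_block(raw)  # only new/changed blocks
        results.append(_parse_cache[key])
    return results
```

With this shape, editing one bullet in a 10,000-block page re-runs the expensive parse for that single block; the other 9,999 are cache hits.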
The 0.2.10 release is the first step toward improving performance for large graphs. As @cannibalox said, it uses pagination to avoid rendering all the blocks; the number is 200.
A page could be slow if it has too many page references, this is the next thing we’re going to improve.
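For readers curious what block-level pagination looks like, here is a minimal sketch. It is illustrative Python, not Logseq’s implementation; the page size of 200 comes from the post above, and the `paginate` function and its return shape are hypothetical.

```python
PAGE_SIZE = 200  # Logseq 0.2.10 reportedly renders 200 blocks per 'page'

def paginate(blocks, page=0, page_size=PAGE_SIZE):
    """Return only the slice of blocks the UI should render, plus flags
    telling the UI whether to show 'prev'/'more' controls."""
    start = page * page_size
    window = blocks[start:start + page_size]
    return {
        "blocks": window,
        "has_prev": page > 0,
        "has_more": start + page_size < len(blocks),
    }
```

The win is that render cost is bounded by `page_size` instead of the total block count, which matches the reports above that typing on a paginated page is fast while the underlying query can still be slow.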
Welcome! In the latest release, 0.2.10, we use pagination to render 200 blocks per page, but for a very long page we need to improve the Datalog query performance too.
I noticed that and gave the new version a spin. But, as you probably know, the bottleneck is most obvious when editing the tree (typing a character, opening/closing a node, etc.); any operation takes 5-ish seconds to register on my machine.
Are you saying that the lag there is coming from the time necessary to update the datalog database in real-time (not the time used to update the UI itself)?
Thanks for giving it a try!
I thought it was only slow to open a very long page, because it takes a while to query all the blocks for that page. But clearly I’m wrong; if typing a character is slow, it must be some other problem. I don’t think it’s the time to update the UI itself, because it only needs to render 200 blocks. I’ll run some tests soon and report back later.
Hi. I’m also a newcomer to Logseq. I just installed it yesterday and tried to import my Roam Research graph (over a year of data) locally and use Logseq to access it.
I love the fact that my data looks almost exactly as it does in my Roam DB.
But I do also encounter some slowness. I type a letter and it takes like 1-2 seconds before it shows up.
This is extremely promising for me if the performance issues get fixed.
Hi, thanks for trying out Logseq!
You can give this a try: https://github.com/logseq/logseq/releases/tag/0.4.0-perf. It fixes an issue that made typing slow on pages with a lot of linked references.