Great to hear that @tienson. I’d love to be using Logseq, so hopefully that fix will make it usable for me. I’ll look for the update.
Hi @tienson, is the 0.2.10 release the one that is supposed to address the slow performance? I see the release notes mention long pages should load faster. If it’s helpful to know, I updated and see no improvement in performance with my large graph pages. For example, it took 10+ seconds to open a large page in the sidebar, and expanding/collapsing blocks in that page takes 10+ seconds…
That’s very encouraging!
I’ve been a Clojure enthusiast for many years and Dynalist is a deep part of my workflow; naturally, I would love to use Logseq, but my main outline is unusably slow as things stand.
Could you share specific metrics about a slow page? How many bullets/lines? Does it contain a lot of links, indentation levels, code blocks, …?
In 0.2.10 it seems there is now a pagination system that fetches around 200 bullets per ‘page’ (not sure about the exact number; it’s an approximation from my quick experiment).
I have tested a medium file with 3577 lines, spread across approx. 500 bullets, with no indentation/sub-levels, only 1st-level bullets + properties. The initial loading time is 5-6 seconds, and then switching pages with more/prev takes about 4s. (This is a noticeable improvement over the previous version with this specific file; before, I think it took between 15 and 20s to load.)
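For anyone else who wants to report comparable numbers, here is a rough script to collect the metrics asked about above (my own quick helper, not part of Logseq; it treats `- `/`* ` lines as bullets and assumes 2-space indentation, so it will be approximate for other layouts):

```python
import re

def page_metrics(path):
    """Collect rough size metrics for a Markdown outline file."""
    bullets = 0      # lines that start a bullet
    max_depth = 0    # deepest indentation level seen (assuming 2-space indents)
    links = 0        # [[wiki links]] and [markdown](links)
    fences = 0       # ``` fence lines (two per code block)
    with open(path, encoding="utf-8") as f:
        lines = f.read().splitlines()
    for line in lines:
        stripped = line.lstrip()
        if stripped.startswith(("- ", "* ")):
            bullets += 1
            indent = len(line) - len(stripped)
            max_depth = max(max_depth, indent // 2 + 1)
        if stripped.startswith("```"):
            fences += 1
        links += len(re.findall(r"\[\[[^\]]+\]\]|\[[^\]]+\]\([^)]+\)", line))
    return {"lines": len(lines), "bullets": bullets,
            "max_depth": max_depth, "links": links, "code_fences": fences}
```

Running it on a slow page and posting the resulting dict would make the reports in this thread easier to compare.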
I don’t know if the slowness is due to parsing the whole file on each change, but if it is due to reparsing, what about doing the process in two steps? First, detect the structure of the file (blocks and headers); then parse only the block that changed (the one the user edited). That way you avoid reparsing the whole file. If I add a dot to the first header block, there is no need to reparse the content of the next 10000 blocks.
Just an idea…
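To make the two-step idea above concrete, here is a toy sketch (purely illustrative; not how Logseq’s parser actually works). Phase 1 is a cheap structural pass that only finds block boundaries; phase 2 is the expensive per-block parse, and an edit reparses just one block while the cached results for the others survive:

```python
def split_blocks(text):
    """Phase 1: cheap structural pass -- find top-level bullet boundaries."""
    blocks, current = [], []
    for line in text.splitlines():
        if line.startswith("- ") and current:  # a new top-level bullet starts
            blocks.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        blocks.append("\n".join(current))
    return blocks

def parse_block(block):
    """Phase 2: the 'expensive' per-block parse (stubbed for illustration)."""
    return {"text": block, "words": len(block.split())}

class IncrementalParser:
    def __init__(self, text):
        self.blocks = split_blocks(text)
        self.cache = [parse_block(b) for b in self.blocks]  # full parse once

    def edit_block(self, index, new_text):
        # Only the edited block is reparsed; the other cached results are reused.
        self.blocks[index] = new_text
        self.cache[index] = parse_block(new_text)
```

An edit to block 0 then costs one `parse_block` call instead of one per block in the file.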
The 0.2.10 release is the first step to improve performance for large graphs. As @cannibalox said, it uses pagination to avoid rendering all the blocks. The number is 200.
A page could be slow if it has too many page references, this is the next thing we’re going to improve.
Welcome! For the latest release, 0.2.10, we use pagination to render 200 blocks per page, but for a very long page we need to improve the Datalog query performance too.
I noticed that and gave the new version a spin. But, as you probably know, the bottleneck is most obvious when editing the tree (typing a character, opening/closing a node, etc.); any operation takes 5-ish seconds to register on my machine.
Are you saying that the lag there is coming from the time necessary to update the datalog database in real-time (not the time used to update the UI itself)?
Thanks for giving it a try!
I thought it was only slow to open a very long page, because it takes a while to query all the blocks for that page.
But clearly I’m wrong; if typing a character is slow, there can be other problems. I don’t think it’s the time to update the UI itself, because it only needs to render 200 blocks. I’ll run some tests soon and report back later.
Hi. I’m also a newcomer to Logseq. I just installed it yesterday, imported my Roam Research data (over a year of it) locally, and tried using Logseq to access it.
I love the fact that my data reflects almost exactly as it’s shown on my Roam DB.
But I do also encounter some slowness. I type a letter and it takes like 1-2 seconds before it shows up.
This is extremely promising for me if we fix the performance issues.
Hi, thanks for trying out Logseq!
You can give this a try: https://github.com/logseq/logseq/releases/tag/0.4.0-perf. It fixes an issue that made typing slow on pages with a lot of linked references.
Just installed the latest release. Thanks.
It does seem much more responsive than before.
Hello, I’m testing the latest Logseq version using an imported graph, which consists of a few MD files of 40-80k lines each, the smallest of which is 800k words.
The performance is very poor, frequently freezing for 10-20 seconds at a time when trying to expand/collapse bullets and add in new ones.
Same problem here. I was evaluating Logseq, and even with few notes it is slow to open and to re-edit documents. My graph only has 37 pages, but that will tend to exceed 3 thousand over time, since it would centralize all my documentation and notes.
It also seems to have some connection with Chrome: if Chrome has many tabs open and is slow, Logseq seems to slow down as well. It also reminded me a little of PhpStorm, where I usually have to leave the window open and wait a few seconds before I can start using it.
Interestingly, Obsidian, Visual Studio, and others don’t feel slow even with a lot more notes.
It would be good if they improved performance and planned for a higher volume of notes. I’m imagining 3-5 thousand notes or more; maybe a good test would be to handle up to 20 thousand notes, both big and small, without slowdowns.
For now I’m looking for alternatives, or maybe I’ll use Logseq just to summarize daily tasks.
One suggestion would be to consider using https://tauri.app/ instead of Electron; perhaps Electron is slow when Chrome is being used heavily, for example.
And if saving to files is the problem, they could maybe use an SQLite or MariaDB database; it would add a dependency but would support rich SQL features.
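To make the database suggestion concrete, here is a minimal sketch (my own illustration of the idea, not anything Logseq actually does; table and column names are invented) of storing blocks in SQLite so that editing one bullet is a single-row UPDATE rather than rewriting a whole page file:

```python
import sqlite3

# In-memory DB for the example; a real app would open a file path.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE blocks (
        id      INTEGER PRIMARY KEY,
        page    TEXT NOT NULL,
        content TEXT NOT NULL
    )
""")
conn.execute("CREATE INDEX idx_blocks_page ON blocks(page)")

# Insert 1000 blocks belonging to one page.
conn.executemany(
    "INSERT INTO blocks (page, content) VALUES (?, ?)",
    [("journal", f"bullet {i}") for i in range(1000)],
)

# Editing one block touches one row, not the other 999.
conn.execute("UPDATE blocks SET content = ? WHERE id = ?", ("edited bullet", 1))
```

Whether this would actually beat Logseq’s current file-based storage is an open question; it mainly trades plain-text portability for indexed queries and fine-grained writes.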
It sounds like something is playing up with your Logseq. Here is a link to someone with 13k notes; I do not remember there being any significant issues with performance. This was in April, and there have been some big improvements since then: Discord
Good to know that yours runs well with 13 thousand notes. In my case, I noticed it more when Chrome was slow, so maybe it’s some Electron detail. I also notice that when I open it, it takes a few seconds before I can use it, and the same happens when I leave it for a while and come back.
The amount of notes might not be the issue, but rather the total size of the graph (word count or number of lines), like I mentioned in my previous reply.
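If you want to check whether total graph size (rather than note count) correlates with the slowness, a quick way to measure it is something like this (a hypothetical helper; point it at your graph directory):

```python
from pathlib import Path

def graph_size(root):
    """Total files, lines, and words across all Markdown files under root."""
    files = lines = words = 0
    for md in Path(root).rglob("*.md"):
        text = md.read_text(encoding="utf-8", errors="replace")
        files += 1
        lines += len(text.splitlines())
        words += len(text.split())
    return {"files": files, "lines": lines, "words": words}
```

Posting these totals alongside note counts would help separate the two hypotheses in this thread.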
I’m having the same problem; operations (typing, copy/paste) take far too long to register, making it very unproductive.