I use Google’s DriveSync - no issues.
Looking at the current state, I think that next to sync it should really have something like frequent backups, so people don’t make assumptions about its reliability. It does this in limited fashion in logseq/bak, but it’s all far from perfect.
Logseq is clearly aware, though: if you mess around in things like the DB branch, you sometimes get blocked from removing/changing things to avoid data loss. This kind of “avoid shooting yourself in the foot” guard tells me there’s a departure from the developer mindset of “go fast and break things” toward a user mindset where things are slower but more reliable.
Thanks for the clear steps, I’m going to see if I can break my own sync for a bit.
Used syncthing a lot, mostly with a central hub on a VPS (encrypted partition) and that worked very well.
Did get a lot of small conflicts, mostly due to updates on multiple sides. But no data loss.
I’m also using syncthing for a year+, without data loss.
I used Syncthing for a long time before moving to Logseq sync.
I had less waiting and fewer strange data changes on Syncthing, but it drained my mobile battery very fast.
It indeed is difficult. There’s a lot of work done across decades to make it right, and still there’s frequent trouble. Remember Apple’s problems with MobileMe, until they burned it and switched to iCloud? Here’s a website just documenting sync problems in famous databases.
So the surprising thing is that Logseq people seem to be trying to reinvent one of the most difficult wheels in existence. What could go wrong?
I concur – it doesn’t make sense to me, just as it doesn’t make sense how long it seems to be taking. I suspect that there is a large backend rewrite happening related to their move to a db… which means more bugs and integration issues, not fewer.
They are changing the architecture to address the issues that arise from the current one, including the Sync problems. To my understanding, Sync will interact directly with the new inner SQLite database, and Logseq will then propagate changes from the database to the files and vice versa.
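To illustrate the flow described above, here is a minimal, purely hypothetical sketch: the table name, schema, and function names are my assumptions, not Logseq’s actual code. The point is that edits land in an inner SQLite database first, and a separate step exports dirty pages out to markdown files:

```python
import sqlite3
from pathlib import Path

def init_db(db_path):
    # Hypothetical single-table schema: one row per page, with a "dirty"
    # flag marking pages whose markdown file is out of date.
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS blocks (page TEXT PRIMARY KEY, content TEXT, dirty INTEGER)"
    )
    return conn

def edit_page(conn, page, content):
    # Edits (and, in this model, Sync) write to the database first and
    # mark the page dirty for later export to its file.
    conn.execute(
        "INSERT INTO blocks (page, content, dirty) VALUES (?, ?, 1) "
        "ON CONFLICT(page) DO UPDATE SET content = excluded.content, dirty = 1",
        (page, content),
    )
    conn.commit()

def propagate_to_files(conn, graph_dir):
    # Export every dirty page to its markdown file, then clear the flags.
    graph = Path(graph_dir)
    graph.mkdir(parents=True, exist_ok=True)
    for page, content in conn.execute(
        "SELECT page, content FROM blocks WHERE dirty = 1"
    ).fetchall():
        (graph / f"{page}.md").write_text(content)
    conn.execute("UPDATE blocks SET dirty = 0")
    conn.commit()
```

The “vice versa” direction (file changes imported back into the database) would be a mirror image of `propagate_to_files`, which is exactly the extra synchronization layer the skeptics below are worried about.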
Sounds great!
As someone who coded content management systems for more than 20 years, I’ll maintain my hopeful skepticism. What you are describing is a solution to a file sync issue that adds an additional set of synchronization processes. I am sure they will work it out eventually, but friend… I’ve been there and I don’t like the look of this roadmap. Just one person’s view…
I think git is the culprit here; I read that it causes trouble with sync. I’ve been using Syncthing too.
Update: 7 months of almost daily use and still have not encountered any issue with Syncing.
When you say “sync”, do you mean opening it only in one device while being careful that it’s closed everywhere else?
The bug that I’ve experienced multiple times is this:
I am working somewhere offline and do a lot of work. Either immediately before or immediately after this period of work, something happens where Logseq logs me out. It might be an upgrade or some other event… it is hard to know, but it always relates to a period of time working offline. Then, when my computer has a connection to the internet again, all the work that was done offline is erased.
When I look at version tracking, the versions of the documents that I edited while offline are not there. I use Hazel to make a synced copy of my Logseq directory every 10 minutes; when I look through those files, there is no updated document. I also have Time Machine running, and when I look through that, there are no versions either… even when I’ve worked for 12 hours offline. So the bugs seem to be threefold:

1. Logseq has a problem with something that silently logs users out while they are offline.
2. Something can happen while working offline such that changes are not written to disk and are held only in memory.
3. When #1 and #2 happen, going back online and/or logging in sometimes causes the existing sync data to overwrite what is in memory, clobbering the changes.
I’ve written this up in github reports and written it up here multiple times. There has not been a single action taken on the part of developers. The first time this happened to me was in May of 2023 and it continues at least through mid-December 2023 when I stopped using Logseq for everyday work.
To answer your question, this issue happens both when other instances of Logseq are closed and when they are not. But that should not matter unless there is simultaneous editing (and it really shouldn’t matter even then), and there never has been. And lastly: even in the worst possible combination of causes, I should be able to find versions of a document edited for 12 hours on disk, in backups, or in Time Machine… that is the true bug that prevents any workaround: versions of the file are not being written to disk. If that were addressed, I could go back to Logseq, since I would be able to recover my work every 1–10 weeks when this bug bites me. But without that being addressed, I can’t trust Logseq with any of my work, sync issues or not.
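For what it’s worth, failure #2 above (changes held only in memory, never reaching disk) is normally prevented by the classic write-temp-then-rename pattern with an explicit fsync. Whether Logseq does this I can’t say; this is just a generic sketch of the pattern any save/backup path would need so that a version of the file verifiably survives on disk:

```python
import os
import tempfile

def atomic_write(path, data):
    # Write to a temp file in the same directory, fsync it, then atomically
    # rename it over the target. After this returns, the bytes are on stable
    # storage; on a crash mid-way, either the old or the new complete file
    # exists, never a half-written one.
    dir_name = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dir_name, prefix=".tmp-")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # force the data out of OS caches to disk
        os.replace(tmp, path)  # atomic rename on POSIX and Windows
    except BaseException:
        os.unlink(tmp)
        raise
```

The temp file must live in the same directory as the target, because `os.replace` is only atomic within a single filesystem.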
Pretty sure I got the exact defect you’re talking about. I posted my experience here:
The sync defects are frustrating. This particular defect is significant. But it all paints a larger, gloomier picture. Every defect I logged in Github was ignored. And most if not all have never been fixed. I have little to no faith in the Logseq dev team to build a quality product that will be a good steward of my data.
I’m really bummed about this. Logseq is a genius product and was such a game changer for my workflow. And I very dearly hope the Logseq team will prove me wrong. I hope they crush it with the new DB-backed version. But so far, they seem hell-bent on proving me right.
@n9n9n9 I agree with you, including about the lack of dev reactions on GitHub. I opened an issue about constant 10% CPU usage when Logseq is supposed to be idling, and it has been the same since August. I posted performance profiles to help debug, but now Logseq can’t even save those profiles, so things aren’t improving.
My question was for @Pulz, who says sync works for him – though he also said that he “once opened two instances of logseq with sync enabled” and already had trouble! So it doesn’t sound like really heavy use of syncing.
8 months of daily use. Work and personal. All of my notes are done through logseq, all of my work tasks are planned through logseq and a range of third party plugins.
I even have my graph hooked up to github which automatically uploads all public pages to my notes.domain.com website.
No issues to report on this end with syncing.
Yes, I did mention that when I first started to use it I had a problem with multiple active sessions, but clearly this was fixed.
Sorry to hear you’ve had problems, but I can only say that I’m yet to come across any of the described problems, thankfully.
I was thinking about this some more whilst cleaning up my graph, and I think it’s important to note that in my case I’m very, very, very rarely without an internet connection (at work I’m running two SOGEA lines with failover to SIM, and a similar setup at home with one line and failover to a Mi-Fi SIM).
Not that I ever do much note taking when I’m not connected to the net.
I had exactly the same problem… until I stopped using Sync in December (went back to Syncthing).
Same thing happened to me, and the team’s response has been disappointing.
Followed everything listed in Logseq data protection guides like this one and this one to a T. Last month, I was making daily changes, was always online, made sure the sync indicator was on before changing pages or closing my graph.
Still experienced two weeks of data loss. Nothing in bak, nothing in File History.
At first, Logseq’s sync support told me via email:
"The engineers recovered your encrypted remote backups. They’re now working on a decryption tool so you can decrypt your files on your machine by entering your graph password.
I will send you the data, tool, and instructions as soon as possible."
A month later, the tune has changed. I’m told that “it’s too complex to build a tool to decrypt server snapshots” and that they “can’t take responsibility for whatever happens on a local machine.”
“All potential recovery operations would pose a potential privacy risk, which we’re not able to take.”