History of actions in the graph!

Hi all. Is there a way, or a simple query, to list all the actions I performed in the graph on a given day? Even just the list of blocks I played with that day would do; I can figure out my edits myself …

Asking this because, as a new user, I am kinda lost in the graph. I keep adding pages and blocks in random places, not knowing what I really accomplished on a given day. And this really is not the same as the version history of a block or page.


Welcome to Logseq @shutosha! :wave:

As per the documentation, it’s recommended that you mostly use the Journals page. That way, most of what you do in Logseq is at least datestamped.

Due to limitations of the Markdown format (the plain-text files that Logseq works with), Logseq doesn’t keep a record of changes to individual blocks. Only the creation and modification dates of pages are kept. This may change in the future, but currently the best practice is to use the Journals page as much as possible.
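If you just want to see which pages you touched on a given day, an advanced query along these lines may already do it (an untested sketch: it assumes your Logseq version exposes pages’ millisecond :block/updated-at attribute and supports the :start-of-today-ms / :end-of-today-ms query inputs):

```clojure
#+BEGIN_QUERY
{:title "Pages modified today"
 :query [:find (pull ?p [*])
         :in $ ?start ?end
         :where
         [?p :block/name]            ; pages have a :block/name, ordinary blocks do not
         [?p :block/updated-at ?t]
         [(>= ?t ?start)]
         [(< ?t ?end)]]
 :inputs [:start-of-today-ms :end-of-today-ms]}
#+END_QUERY
```

That only covers pages, though; there is nothing equivalent at the block level.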

I do! … but I keep changing tons of notes started yesterday or the week before. Plus, sometimes it is just more convenient to edit a long article/note on its own page instead of time-travelling back into the journal and making changes there … I mean, pretty soon it gets fairly complex to restrict all input to a journal-only protocol …

Plus, the journal also starts losing its literal sense if you are continuously editing your past :slight_smile:

Btw, blocks (or pages) should be easy to query based on their last-modified timestamps … I am not much into graph technology, but that seems like an easy piece of data to pull from a database!

I guess I should move this to a feature request if it is currently not available, just to put it on the dev radar.

I get the issue of missing block timestamps. We’re fully aware of it, and we’re looking for ways to make it possible, so no need to start a feature request (there are already a few about it).

The fact remains that Logseq works with Markdown files, first and foremost. Any database that sits in between is based on the Markdown files.

This is exactly the issue that I pointed out in my previous reply:

Logseq works with Markdown files. Unless we pollute the files with lines of metadata for every block, it’s simply not possible with the current architecture. At that point, we might as well run a proper database and replace Markdown files, but that would defeat the entire use case of Logseq for many of our users.

We’re looking into ways we can offer different data formats in Logseq, but until then we have to accept the trade-offs of using plain text Markdown files.


Got it … the only reasons I am on Logseq are the Markdown files and the (somewhat) Vim support :slight_smile:

I can sure sacrifice a tiny feature for the above two … those are the must-haves!

Keep up the good work …


@Ramses Thanks for this explanation. If I may make a remark: Logseq already “pollutes” the MD files quite a bit. The indented formatting is not very MD-friendly, and on some occasions, for example when time-logging TODOs, the file is already “polluted” with a whole bunch of timestamp metadata.

So it might make sense to add this metadata anyway and, if you are worried about clean MD files, at the same time accelerate the work on export options (for example a “clean MD” export, which would strip all of the awkward formatting and additional metadata).

As some have noted, Logseq already adds a bit of metadata to the Markdown in order to enable various features, especially TODOs.

I think a Markdown export would be better than Markdown as the source of truth. The metadata could instead be stored next to the Markdown file, e.g. in a JSON or EDN file adjacent to it, rather than inside it.
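For example (purely illustrative; the file name, keys, and values here are all made up), the metadata for page.md could live in a small EDN sidecar next to it:

```clojure
;; page.edn — hypothetical sidecar for page.md
{:source   "page.md"
 ;; snapshot of the Markdown as last written by the app, used for diffing (more on that below)
 :snapshot "- TODO write the weekly report\n- some other block\n"
 :blocks   [{:id         "b1"            ; block identifier
             :offset     0               ; character offset of the block in the snapshot
             :created-at 1671628800000   ; millisecond timestamps
             :updated-at 1671715200000}
            {:id         "b2"
             :offset     31
             :created-at 1671628800000
             :updated-at 1671628800000}]}
```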

I understand some would like the ability to modify the Markdown files directly and then use Logseq to index and search those files. A concern, then, is that if users edit the Markdown directly, the metadata may get out of sync and reference the wrong blocks, and that storing the metadata separately from the document prevents people from changing the document directly without compromising the integrity of the metadata.

However, I can tell you that it is possible to keep the metadata separate and still allow users to edit the Markdown directly. Users can have both clean Markdown and rich metadata.

I solved this exact problem when I was working at Atlassian and we introduced collaborative editing for Confluence. The existing APIs allowed users to post a Markdown document, and we needed to treat that post as if it were a series of edits made by another user. Those edits were then sent to the collaborators, who would update their documents with the changes.

The trick is to do a diff with the existing document and convert the difference into a sequence of edits that can be used to update the positions of any markers you may have in the existing metadata.

E.g. a TODO marker was at position 123 in the document before the edit; after the user inserts 100 characters before position 123, our metadata will reference position 223.

To perform a diff you obviously need to keep a copy of the document in the metadata as well. The metadata is therefore your source of truth, and the Markdown is like a live copy that is continuously exported and imported, and thus can be modified directly.
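A rough sketch of that idea in Clojure (my own illustration for this thread, not the actual implementation): keep a snapshot of the text in the metadata, diff it against the newly saved Markdown, and shift the stored offsets by the resulting edit.

```clojure
(ns markdown-sync.core)

(defn simple-diff
  "Naive single-edit diff: find the common prefix and suffix of the old and
   new text and express the middle as one replacement
   {:pos p :removed n :inserted s}. A real implementation would use a proper
   diff algorithm and produce a sequence of such edits."
  [old-text new-text]
  (let [prefix     (count (take-while true? (map = old-text new-text)))
        max-suffix (- (min (count old-text) (count new-text)) prefix)
        suffix     (min max-suffix
                        (count (take-while true?
                                           (map = (reverse old-text) (reverse new-text)))))]
    {:pos      prefix
     :removed  (- (count old-text) prefix suffix)
     :inserted (subs new-text prefix (- (count new-text) suffix))}))

(defn shift-offset
  "Move a single stored character offset to account for one edit."
  [{:keys [pos removed inserted]} offset]
  (cond
    (<= offset pos)             offset                                    ; before the edit: unchanged
    (>= offset (+ pos removed)) (+ offset (- (count inserted) removed))   ; after the edit: shift
    :else                       pos))                                     ; inside the removed span: clamp

(defn sync-offsets
  "Diff the snapshot against the edited text and shift every stored offset."
  [snapshot edited-text offsets]
  (let [edit (simple-diff snapshot edited-text)]
    (mapv #(shift-offset edit %) offsets)))

(comment
  ;; a marker was at offset 123; the user inserts 100 characters before it
  (sync-offsets (apply str (repeat 200 "x"))
                (str (apply str (repeat 100 "y")) (apply str (repeat 200 "x")))
                [123])
  ;; => [223]
  )
```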

We implemented this logic in Clojure. Unfortunately it is not open source, and I no longer work there to push anyone to open-source it, but there is a recording on YouTube of a presentation describing the implementation in more detail.

You can find a link to the presentation on my LinkedIn profile:

https://www.linkedin.com/in/kurtharriger
