Opinion: development focus should shift towards improving LogSeq's partially unstable core

I want to emphasize that the point is polishing (stability) …

When you define stability as a polishing point then I am not sure if I can understand the rest of your response. Stability is the main feature … without it using Logseq is pointless as it can not be trusted. It pains me to see it described as polishing :frowning:

Absolutely.
But given that the devs are already focusing on stability improvements, we should figure out how to make that work better & how to make the efforts more direct:

English is not my native language. Let’s not focus on the exact wording here as the whole sentence structure should’ve made it clear that I am stressing the importance of stability.

5 Likes

Thank you so much for all the hard work you’ve put into managing issues and engaging with the community. Your unwavering commitment serves as a remarkable model for the rest of the team to emulate.

There is a lot of advice to give; the issue is committing to it and following through.

1 Like

Upvoted that one too. Thanks for not taking it personally!

My intention was to direct or summarize this post to make clearer requests, as I observed the messages we’re trying to communicate are not fully consistent, for example the following messages have intersections but don’t fully align:

  1. logseq should direct more resources from “feature projects” to bug-fixing and “enhancement projects”
  2. logseq isn’t working on the top FRs here (implying they should) ( — but many of them are features)

To be clear, I think things like EPUB annotation are valid requests, but in the context of this thread bringing up “Top FR” muddies the water a bit. “Message 1” is closer to the core of this thread, so I emphasized that.

Another example is this message from View as numbered List - #16 by Joe_Smoe

So I have a question: why would the team focus on implementing this, but not something like Prompt user to confirm deleting block when an existing block reference is linked to the block - #6 by tobid, which I think would be a higher priority?

  • For one, it’s an example against “Message 2” because numbered list is a highly-voted community request.
  • Secondly, this message implies FRs should be prioritized not according to votes but according to how much they would improve user experience. (Numbered list has 85 votes, the other has 28.)
    This aligns with “Message 1” and my personal opinion.


@Junyi_Du Thank you for being so active in this thread, though I don’t think the core question has been addressed yet. Rather than questioning the amount of bug-fixing the team has been doing (we all appreciate your hard work!), I believe OP and many others were asking why some of the bandwidth that could’ve been spent on bug-fixing was used on “feature projects” instead.

In my previous post I outlined the past priorities I could understand, but I left out numbered lists, because given the unstable current state of logseq I could not justify that some resources went into that instead of fixing more bugs, especially because:

  1. I don’t think it should take precedence over “parser replacement or improvement”, which is on the table and would solve the numbered-list problem in a much better way.
  2. There already is an excellent plugin, and the official implementation is also visual-only, so it isn’t better than the plugin (except that it works on mobile, but plugin support on mobile is also on the roadmap — no?)

So rather than asking us if we have advice on making issue management more efficient or direct, you need to address how you would make more strategic decisions on resource allocation.



I wanted to show that “the pace of enhancement implementation stagnated” is subjective, so that we don’t get into more discussion on it and go off-topic with the team focusing on answering that instead of the main question.
Likewise, while “top requests are not being worked on” is mostly right, I worried someone would extend that to mean “developers are not addressing the FR section as a whole”, which is imprecise, especially for the “non-top but with decent votes” requests.

Here let me speak for the team a little bit so that they don’t do it themselves and make this thread go off-tangent:
If the first 10 requests are taken out, we start to see “done” requests sprinkled in, so devs have been addressing them, though not in the order or at the pace everyone desires.

For most of the unsolved “Top 10”, I could venture a guess why it’s not given a higher priority, so to me it’s not too unfair.

3 Likes

Thanks for pointing this one out, I was worried nobody would see it.

This is a very good point and I think it warrants a discussion of Feature Requests vs Enhancements. Feature Requests would be those things that are new features that do not currently exist. Enhancements would be things that enhance existing features. Users can pick and choose which things are Enhancements or Feature Requests, but it’ll ultimately be up to the devs based on their knowledge of the codebase.

For example, I considered this request to be an enhancement since it would extend the existing block-reference feature: Prompt user to confirm deleting block when an existing block reference is linked to the block - #6 by tobid, and something I would consider an improvement to user experience since it would prevent accidental deletion of existing links.

This would help the dev team get an idea of what things we as users are struggling with the most, and serve as another go-to list to pick from.

I think it warrants the discussion of Feature Request vs Enhancements.

I think it would be better as its own thread, because —

Feature Requests would be those things that are new features that do not currently exist. Enhancements would be things that enhance existing features.

This was also my criterion when making that screenshot, but even so I hesitated on many FRs that extend existing features where the “extension” is functionality that doesn’t exist yet. I can see people disagreeing either way, so I put “equal parts?” there.

it’ll ultimately be up to the devs based on their knowledge of the codebase.

If we use the developers’ definition, then a lot more of those would be features. For example, the two examples I gave (backlinks for block refs in properties & pages in query tables) were both considered features, but to differentiate them from the whiteboards & AI kind of stuff I called them enhancements here.

Therefore, if we start that discussion, it’ll just be everyone stating their subjective opinion and demanding that the team find a tricky balance that satisfies most. And I don’t think we need that in this thread. The good thing is that the developers are all daily-driving logseq and using it as a collaboration tool, so I trust them to know the pain points and make sound judgments.

I hope this thread won’t get too noisy before a team member replies again, because I went for it with

rather than asking us if we have advice on making issue management more efficient or direct, you need to address how you would make more strategic decisions on resource allocation.

so I want to see it answered :slight_smile:

1 Like

As for the difference between the FR & Enhancement tags: it doesn’t matter that much to the devs, as they are treated the same. In most cases, the tag only reflects a guess at the workload needed to implement the requested stuff. So recently we also introduced the estimation tags to avoid confusion.

rather than asking us if we have advice on making issue management more efficient or direct, you need to address how you would make more strategic decisions on resource allocation.

For FRs & Enhancements, it’s basically a mixed decision based on ((Implementation difficulty)), votes, UX impact & the personal interest of EACH dev. I may rate ((Implementation difficulty)) as the top consideration, as Logseq is really a much more complex piece of software than most of its competitors in this field. It makes the “context switch cost” so high that devs can only focus on one area of Logseq in a given period of time.

Also refer to About the Feature Requests category for how the FR vote works.

I’m open to any advice on making this “allocation process” more transparent. Adding the estimation tags was one result of such a suggestion. But some of the case-by-case blaming is not that constructive. It mostly has the effect of negating the team’s and contributors’ efforts in running this open-source project.

Also, we highly appreciate those contributors who submit PRs to the codebase for the features they want. It provides an extra development dimension for Logseq.

To maximize bandwidth efficiency, I will append this peer-to-peer reply to Logseq - development strategy and quality control

6 Likes

Thanks for your time and patience, this is a good answer!

of EACH dev

Thanks for highlighting the individual aspect. This is understandable, relatable, and humanizes the developers, which is a reminder we sometimes need. logseq hasn’t left the phase of transitioning from a hobby project (which it should remain for the community contributors, of course) to an optimized business. But even if logseq fully transitioned, personal interest should definitely still play a part in the decision-making process, because developers shouldn’t work on logseq like it is a chore. That takes the fun out of it.
But as a startup, logseq needs to be careful in planning, checking whether each step fits into the big picture (for example, first confirming whether there will be a parser change, and if so, how numbered lists fit into the rough plan, then settling on the most efficient path). That requires more intra-team communication, which could be a challenge for a remote international team.

Secondly, as this whole thread has been saying, the UX impact aspect should probably be moved up higher in the “priority ladder”. Personally I trust y’all to know what matters for UX and am satisfied with the transparency.

My last point comes from another observation: the implementation difficulty aspect doesn’t only mean that bigger projects get postponed (understandable, and I’m not upset about it), but also that the “easiest jobs” are postponed indefinitely because the team wants to provide opportunities for contributors — which of course is a good thing, and there has been feedback that the good first issue tag is greatly appreciated and has encouraged people to try their hands at a big complex project. But when a “small task” that is crucial for UX is left unnoticed for a period that is becoming unacceptable for mainstream users, the team really should do it themselves instead of waiting and forgetting about it. An example would be the removal of formatting tools from the mobile editing toolbar, tagged PR welcome. It drops the “usability of the mobile app” from like 80 to 50 points, frustrated many users, and was left untreated for half a year. I have seen people leave logseq because of this, as they had to use mobile at work. For this particular example, I think the optimal waiting time would be 2~3 months; after that the team should just do it. So it ties into my second point that UX impact should be the deciding factor most of the time.

But some case-by-case blaming is not that constructive.

I need to apologize for that. I did wonder whether it would make @Charlie feel like he did the wrong thing (please don’t, you did a good job) but still decided to go with it. I wanted to use numbered lists as an example, but I couldn’t figure out how to say it without implying blame. And while I could wait for this feature to come out at a later time, one thing I really appreciate about this PR is that it shows the logseq team cares about community feature requests. In fact, I was excited that I could use it as a counter-example to “logseq doesn’t care about the community”.

1 Like

Obviously this ticket was forgotten. We need a mechanism for re-surfacing simple but actively requested tickets.
There’s the priority-A tag on GitHub for me to do a monthly re-visit, but it’s not that flexible.
Any ideas?

  • One way is, as you said, to remind team members via discord, since your github inbox is hell. Doing so openly might flood the #feedback channel too, though.
  • As I’ve repeated, I really trust you guys with the “knowing” part (i.e. knowing what’s important). The question is how this ticket went missing from the team’s internal board. I think it needs to be added to your internal kanban the first time you see it, with #priority-A , #enhancement , #could-wait-for-contribution tags.
    • Tasks with both #priority-A and #data-stability take the highest priority
    • #priority-A + #bug generally takes higher priority than #priority-A + #enhancement , but it’s flexible as personal interest & implementation difficulty need to be factored in. I totally trust you with these.
    • #could-wait-for-contribution lowers the priority, but calls for a monthly review. If a task remains on the board after one or two rounds, and it has a #priority-A on it, then it needs to be done no matter what.

(↑↑ just an example. you guys need to figure it out)
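To make the triage rules above concrete, they could be sketched as a tiny sorting helper. This is purely illustrative: the tag names come from the bullets above, but the numeric weights and the whole scoring scheme are my own assumptions, not anything the Logseq team actually uses.

```typescript
// Hypothetical board triage: orders tickets by the tag rules sketched above.
// Weights are illustrative assumptions, not Logseq's real process.

type Ticket = { title: string; tags: string[] };

function triageScore(t: Ticket): number {
  const has = (tag: string) => t.tags.includes(tag);
  let score = 0;
  if (has("priority-A")) score += 100;
  if (has("data-stability")) score += 50; // priority-A + data-stability tops the board
  if (has("bug")) score += 20;            // bugs generally outrank enhancements
  if (has("enhancement")) score += 10;
  if (has("could-wait-for-contribution")) score -= 30; // lowered, but reviewed monthly
  return score;
}

function triage(tickets: Ticket[]): Ticket[] {
  // Sort a copy, highest score first, leaving the input untouched.
  return [...tickets].sort((a, b) => triageScore(b) - triageScore(a));
}
```

The monthly-review rule (bump a `#could-wait-for-contribution` ticket after one or two rounds) would sit outside this sort, as a periodic pass that strips the tag from anything stale.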

3 Likes

I saw the conversation scattered across several places, but I share the sentiment. At the same time, though, I must praise the team for having so far always listened and communicated well, at least in the person of Ramses, who’s been running around and helping out on a lot of recent issues.

I left a response in this thread: https://twitter.com/kirso_/status/1658079122026012676

What started to interest me is how features are actually prioritised and what they result in, in terms of metrics. Perhaps the question of clarity on the roadmap is valid here.

1 Like

I like the P0-P10 system. It gives a good range for assigning priority to a task.

1 Like

Hi. Another Logseq engineer chiming in here. I’d like to say thanks to everyone for sharing your concerns. I care about product stability and quality so it’s good to hear that you all appreciate this. There are a lot of bug fixes mentioned in the changelog but that doesn’t necessarily address regressions. We are making investments there with tests and automated code linting which are helping but it takes time and the process isn’t always smooth. For example, I recently fixed a group of reference bugs that had been around for over a year. In doing so, I accidentally introduced this disappearing references bug. Sorry about that. This new bug has been fixed with tests to prevent a regression and is available in nightly.

To help us help you, please continue sharing bugs with reproducible steps. Reproducible bugs mean more time we can spend fixing bugs and less time trying to reproduce them. If you’re disappointed in Logseq’s docs, I encourage you to contribute what you know. Feel free to ping me (@cldwalker) in Discord’s #documentation and I’d be happy to get you involved. Cheers.

8 Likes

We are making investments there with tests and automated code linting which are helping but it takes time and the process isn’t always smooth.

I completely agree with the importance of testing. I almost introduced a minor regression in #9245, but it was prevented because @Junyi_Du wouldn’t accept it without E2E tests (thank you :sweat_smile: ) which ended up catching the regression.

After picking up the basics of Playwright, writing the E2E tests wasn’t too difficult. The major issue I encountered was running the tests locally: Logseq kept loading the demo graph before loading the test graph, causing all tests to time out. I am not sure what is causing this issue, but I think it’s worth investigating further to improve the process of writing E2E tests. Some tests also randomly time out in the CI pipeline, and then I have to push a new commit to re-trigger the test, which restarts the build process from zero.
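As a side note on the flaky CI timeouts: Playwright itself can re-run failing tests within the same CI run, which would avoid pushing an empty commit just to restart the whole build. A minimal `playwright.config.ts` fragment could look like this (the retry count and timeout are illustrative values, not Logseq's actual configuration):

```typescript
// playwright.config.ts (fragment) — illustrative values, not Logseq's real config
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Retry failed tests twice on CI before marking the run red;
  // locally, fail fast with no retries.
  retries: process.env.CI ? 2 : 0,
  // Per-test timeout in milliseconds; raise it if the app is slow to load.
  timeout: 60_000,
});
```

Flaky tests that pass on retry are reported separately by Playwright, so genuine regressions still surface while transient timeouts stop forcing a full rebuild.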

3 Likes

Thank you for the great work, Bad3r!

Basically we encourage running E2E tests locally before submitting. You get a direct view of what’s happening via the Electron window, from both the UI and the console.
But we also have some facilities for debugging E2E for the CI runs by dumping screenshots & logs:

There's a traceAll() helper function to enable playwright trace file dump for specific test files #8332

If E2E tests fail in that file, they can be debugged by examining a trace dump with the Playwright trace viewer.

Locally this will get dumped into e2e-dump/.

On CI the trace file will be under Artifacts at the bottom of a run page e.g. https://github.com/logseq/logseq/actions/runs/3574600322.
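For comparison with the project's `traceAll()` helper, Playwright also ships a built-in tracing option that can be enabled globally from the config file rather than per test file. A sketch of that approach (the value shown is one of Playwright's standard modes; this is not necessarily how Logseq's config is set up):

```typescript
// playwright.config.ts (fragment) — standard built-in tracing, shown only
// as an alternative to a per-file traceAll() helper; values are illustrative.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: {
    // Record a trace for every test, but keep it only when the test fails,
    // so passing runs don't accumulate large artifacts.
    trace: 'retain-on-failure',
  },
});
```

A dumped trace can then be opened with `npx playwright show-trace <path-to-trace.zip>`, which shows a timeline of actions, screenshots, console output, and network activity for the failing test.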
1 Like

I appreciate all the details!

I do donate to Logseq, and will continue to do so, as long as visible effort is made towards improving the product (i.e. highlighting what you just have).

2 Likes