Using Logseq for Nonlinear Audio Storytelling (Graduate Thesis Project)

Hi everyone!

I’ve been an avid Logseq user for the past year and a half, and I used the tool to create a non-linear, audio-based interactive project called Digitized Diasporic Memory.

This project was part of my Master’s thesis (which you can read here), which I completed in May of 2022.

Since then, I’ve been giving hybrid workshops and presentations on the project and on how I used Logseq to create it. One of those workshops was recorded, and I’m excited to share it with the community. I’m still working on some proper tutorials and documentation to add to the main project’s website, but I thought I would share this recording for anyone curious.

Here’s a write-up of the presentation I gave, with the video recording, timestamps, and slides:

That’s a really cool and creative way to use Logseq!


Thank you so much! I was initially using Logseq to manage my thesis writing, and saw its potential for more creative applications.


Hi, Candide! Thank you for presenting the project. My loss for not giving it a thorough look earlier. =)
Nicely spotted that we can have an ever-accreting graph of knowledge.
The idea that we could use voice to express knowledge is alluring.
What benefits do you think expressing knowledge via voice has over expressing it via text?

Curious to see where this project evolves next. Any plans?

Btw, that’s a nice color theme, gives your eyes a bit of joy after having to stare at black-and-whites for hours.


Thank you for this thoughtful response @andrewzhurov! “ever-accreting” is now a part of my vocabulary :sweat_smile:

I would say it’s the affective quality of voice, which is something my primary advisor helped me realize. There is something special about the tone, pauses, hesitations, and mid-thought redirections that can be experienced by listening to someone tell their story versus reading it in writing, even when it’s in a language you don’t understand. Sometimes words, grammar, and structure get in the way.

My hope was to get people to respond and share in the moment, and I felt like it might be easier to do with speech. It would be interesting to try this format with text, illustration, video, or other formats.

I have no plans to expand the database created for my thesis, but I’d love to see this concept remixed and reimagined. That’s why I’ve been focusing on delivering workshops and putting together documentation and tutorials. I’m open to collaborating with anyone who would like to create their own version of this project.

I’m also working on adding transcripts for all the audio segments, and revisiting the user experience.

Hopefully, Logseq introduces a real-time collaboration feature in the future. It would make projects of this kind a lot easier to do.