Specification for public graph discovery. Decentralized social network on Logseq

From my understanding, since Logseq is written in ClojureScript, it is compiled to JavaScript that is meant to be run by a browser.

But it seems you can use Clojure(Script) from a command-line interface using a tool called Babashka, and this is what logseq-query (the lq command) does:

Somehow it can connect to your Logseq graphs without running Logseq and perform queries. It’s already very useful and it can be combined with the usual Bash/Python/etc scripts.

Now what would be nice is the same with other Logseq features, not only queries.

Even better would be a library (maybe in C or Rust for max compatibility) that could be used from any programming language to manipulate Logseq files and entities programmatically without any JavaScript (or other interpreted languages) involved.

For instance, I spent some time trying to write a parser for properties:: (to read and write their values programmatically), but I failed because I’m not very used to writing parsers.
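To illustrate the kind of thing I mean, here is a minimal sketch. It assumes properties always sit on their own `key:: value` line, which is only part of the real Logseq grammar (org-mode syntax, multi-valued properties, etc. are not handled):

```python
import re

# Assumption: a property is a whole line of the form "key:: value".
PROP_RE = re.compile(r"^(?P<key>[A-Za-z0-9_-]+)::\s*(?P<value>.*)$")

def read_properties(text):
    """Return a dict of the property keys/values found in a page's text."""
    props = {}
    for line in text.splitlines():
        m = PROP_RE.match(line.strip())
        if m:
            props[m.group("key")] = m.group("value").strip()
    return props

def set_property(text, key, value):
    """Return the page text with `key` set to `value` (added if missing)."""
    lines = text.splitlines()
    for i, line in enumerate(lines):
        m = PROP_RE.match(line.strip())
        if m and m.group("key") == key:
            lines[i] = f"{key}:: {value}"
            break
    else:
        # Property not present: prepend it to the page.
        lines.insert(0, f"{key}:: {value}")
    return "\n".join(lines)
```

Even something this small would already let scripts in any language read and update property values without opening Logseq.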


You raise a good use-case - programmatic manipulation on top of our notes, that’s powerful and we want that. And a good problem - that dealing programmatically with text is a pain.

So we would want data (e.g., Logseq’s inner representation of our text notes) to be programmatically accessible.
One way would be to have Logseq serve it via an API. It’s a common approach.
Another would be to export Logseq data, in JSON or EDN.
But yet another way, which seems very appealing to me, would be to derive a Semantic Web representation of Logseq’s data. Then it could be queried with SPARQL (akin to Datalog (which DataScript uses), but able to perform queries across the whole Semantic Web, not just local DBs). Also, it can be serialized as JSON for those cases where we don’t need SPARQL queries, and JSON can be handled by any programming language out there. And another dope thing is that we won’t be limited to accessing only our own knowledge graph, but will have access to the graphs of others and the rest of the Semantic Web, building one interconnected graph of knowledge. ^.^
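As a toy illustration of what “deriving” could mean here (the namespace is hypothetical, nothing Logseq actually emits; a real mapping would come from a shared @context):

```python
import json

# Hypothetical namespace, used only for this sketch.
NS = "https://example.org/logseq/"

def page_to_triples(page_name, properties):
    """Derive (subject, predicate, object) triples from a page's properties."""
    subject = NS + page_name.replace(" ", "%20")
    return [(subject, NS + key, value) for key, value in properties.items()]

def triples_to_jsonld(triples):
    """Serialize one subject's triples as a flat JSON-LD-style document."""
    doc = {"@id": triples[0][0]}
    for _, predicate, obj in triples:
        doc[predicate] = obj
    return json.dumps(doc, indent=2)
```

Once notes are triples, a SPARQL engine (or plain JSON tooling) can take it from there.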

I agree that having programming access to our data is of huge value, and it’s more valuable the more data there is. Integrating our graphs into the Semantic Web would be like merging our lakes of data into the ocean.

To have our notes as data would be a dream! Then indeed we can work on them programmatically from whatever language we prefer.

Good pointer to lq.
From what I reckon, atm Logseq’s graphs are stored as ~/.logseq/graph1.transit (.transit is a serialization of EDN).
lq runs in NodeJS, reads a graph and feeds it to the DataScript engine. For this use-case .transit is ideal.
For access from other languages, .transit is… not that accessible. Having it in JSON would give us way broader reach.

To have our notes in a semantic form, such as JSON, and have a client of your choice to work with it (e.g., Logseq), yet having programmatic access to it to do whatever we wish with it - that sounds very appealing.

FYI Tienson did an experiment about using EDN instead of Markdown/Org.

Interesting, more info?

I saw this tweet by Tienson:

https://twitter.com/tiensonqin/status/1583170757823430657

Sorry, I don’t have any other information; as I said, it seems to be just an experiment.

Edit: found the branch


It is very important … not only from the knowledge-sharing perspective but also to provide ready reference graphs to new users (like me) who wish to use Logseq for publishing. In fact, I landed on https://demo.diasporamemory.com/#/page/Diasporic%20Memory , which inspired me to explore this area … my initial attempt, with just a couple of days of work, is at https://shutri.com

I fully realize that Logseq is primarily targeted at mining your mind - notes, todos, journals, etc. … but that doesn’t stop it from being a great publishing platform as well. As for social features like “follow” or “share” - they are surely important, but they are not a MUST-have to get started …

Maybe it is just a thread, in this community (or elsewhere), where expert-hosted graphs are listed with basic instructions on features and how to use them as reference templates …


Hey @shutosha : Digitized Diasporic Memory is actually my graduate thesis project!!! I’ve been following this thread quietly and was pleasantly surprised to see it mentioned here. Thank you for the shoutout! Happy to hear that it inspired you. :slight_smile:
If you’d like to reach out to chat some more, let me know.


Wow… great to meet you! Please keep up the good work…


Regarding implementation, I think there was a lot of good work done in the Semantic Web space, and to me combining the two (Logseq with triples, and URLs as entities in the RDF sense) makes perfect sense when thinking about multi-graph setups or federation in Logseq.

(potentially related topic: RDF/JSON-LD/Triple Store/Schema.org, etc.)


Hi all, I just replied to @andrewzhurov on that referenced thread on this subject and just read over the latest posts here, so wanted to follow up here.

Exactly so, and that is basically what JSON-LD is designed for. To start, I think it might be as simple as the sketch I offered on the other thread for decorating property blocks with JSON-LD @context and @type tags at the system level.

From the schema.org homepage:

Schema.org vocabulary can be used with many different encodings, including RDFa, Microdata and JSON-LD. These vocabularies cover entities, relationships between entities and actions, and can easily be extended through a well-documented extension model. Over 10 million sites use Schema.org to markup their web pages and email messages. Many applications from Google, Microsoft, Pinterest, Yandex and others already use these vocabularies to power rich, extensible experiences.

Beyond providing a shared namespace of types which benefit from relational composability, there are huge wins baked in, including making real graph queries across federated graphs, a path to being indexed by traditional search engines, and so on. In short, JSON-LD is the lingua franca of linked data on the web these days, and is isomorphic to standard RDF triples.
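To make that isomorphism concrete, here is a deliberately simplified expansion. It assumes a flat document with string values and a @context that maps each term directly to a full IRI; real JSON-LD expansion per the spec also handles nesting, typed values, base IRIs, and much more:

```python
def jsonld_to_triples(doc):
    """Expand a flat JSON-LD document into RDF-style triples.

    Simplified sketch: assumes a flat doc, string values, and a @context
    that maps each term directly to a full IRI.
    """
    context = doc.get("@context", {})
    subject = doc["@id"]
    triples = []
    for term, value in doc.items():
        if term.startswith("@"):
            continue  # skip JSON-LD keywords like @context and @id
        predicate = context.get(term, term)
        triples.append((subject, predicate, value))
    return triples
```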

Having gone there, I’m going to take one more swing at my Fluree pitch to support a lot of the goals I read through in this thread. Andrew had expressed (in the other thread) that while Fluree would be a fairly straightforward engine to drop in, it would come with added complexity that seemed misplaced in the application. Given the discussion here, I beg to differ.

I’ll preface this by recognizing that Logseq’s origins and mission are around being a second brain - a more inwardly focused mindset with a more ad hoc evolution that doesn’t lend itself to strong typing. Yet, to be interoperable with the Semantic Web, type consistency is needed.

Anyway, I realize the notion of shared editing of graphs may seem sacrilegious, yet I need to share my knowledge - and not by rendering out graph segments and hosting them as webpages - and I actually do want to co-edit knowledge graphs. And even further, I want granular access control on the graph. And to be clear, I get that this isn’t/wasn’t the target use-case of Logseq. But it would be great, lol.

Anyway, to clear up a couple of things about the Fluree architecture. It has two layers which run (and scale independently) in separate containers:

  • A blockchain persistence/state layer, and
  • An RDF graph overlaying/indexing the state (this can even be run in client-side JavaScript).

So, yes, the underlying state is immutable and append-only. When an object is deleted or updated, a new block is written to reflect that change of state, while the graph (which is what is queryable) is updated to the new state. The graph can time-travel across state for free… queries include an “at time t” input, and it costs the same to look into the past as the present. It also provides for independent scaling of read and write performance. The engine can achieve millisecond response times for queries. Clients register for updates from the ledger nodes for commits to triples in their local cache (basically functioning like a CDN).
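This is not Fluree’s actual API - just a toy sketch of the append-only/time-travel idea, where every change is an appended block and a query “at time t” replays state up to t:

```python
class Ledger:
    """Toy append-only ledger. State changes are appended as
    (time, key, value) blocks; value=None marks a deletion. Assumes
    writes arrive in time order."""

    def __init__(self):
        self.blocks = []  # append-only; never mutated in place

    def write(self, t, key, value):
        self.blocks.append((t, key, value))

    def state_at(self, t):
        """Replay blocks up to time t, so querying the past costs the
        same as querying the present."""
        state = {}
        for when, key, value in self.blocks:
            if when > t:
                break
            if value is None:
                state.pop(key, None)
            else:
                state[key] = value
        return state
```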

Granted, there is complexity added around consensus - and why do it if you don’t need consensus? But for shared editing of a graph state that all parties can rely on, it would be well worth it, either on a local node or in the cloud.

Turning to the out of the box advantages:

  • Semantic Web native out of the box. Can be queried using SPARQL and GraphQL. Native JSON-LD in the first half of the year.
  • All transactions are cryptographically signed by identities and encrypted; absolute provenance.
  • Built-in “smart functions” for identity-based access control
  • ACID transactions
  • The client edge-node graph only reads in the data it needs, and local queries are blazing fast. Write performance can be scaled (and paid for) according to need.

Then, finally, real, nestable graph queries executed directly against the Fluree API seem very powerful.

Anyway, I hope you’ll give Fluree a second look in light of the developments in this thread as it seems to hit a lot of the features that have been discussed here.

All the best!

This was done in 0.8.15!! (PR #7699)


Thank you for a thoughtful response, a delight to read.
I’ll give it more thought later on and will get back to you.

I’m ignorant of that, how’s that done?

Essentially, the schema.org vocabularies were established as an initiative amongst the big search engines to put a semantic patina across the web 2.0 space. In sum, Google et al. “understand” these types and can use them to contextualize results. Obviously Logseq would need to present an HTTP endpoint for a search engine to crawl, but if the contained information is coded according to the schema.org vocabularies, it enables the indexer to reason about the content and contextualize results.
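For instance (illustrative names and values only), a published page could expose a schema.org-typed JSON-LD document - typically embedded in a `<script type="application/ld+json">` tag - so a crawler can tell it is an article with an author:

```python
import json

def page_as_schema_org(title, author, url):
    """Build a schema.org-typed JSON-LD document for a published page.
    Types and property names come from the schema.org vocabulary;
    the function itself is a hypothetical helper."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "Article",  # schema.org/Article
        "headline": title,
        "author": {"@type": "Person", "name": author},
        "url": url,
    })
```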

(edited for clarity)


Hello @andrewzhurov - In case you aren’t familiar with it, I wanted to share a link to ipld.io which is part of the larger universe of IPFS and all that related tech (libp2p, etc).

IPLD is the data model of the content-addressable web. It allows us to treat all hash-linked data structures as subsets of a unified information space, unifying all data models that link data with hashes as instances of IPLD.
(from their home page)

So, e.g., it can interact with data in Git as easily as IPFS, or any other hash-based address space. Given the earlier discussion in this thread regarding CIDs, IPFS, IPNS, etc., it seems potentially useful to have an interface that allows you to interact with all of them as a unified namespace.
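The core idea behind all of these hash-linked spaces - content addressing - can be sketched in a few lines. This is a toy: real CIDs add multihash/multibase framing, chunking, and codecs on top of the bare hash:

```python
import hashlib
import json

class ContentStore:
    """Toy content-addressed store: the address of a blob is simply the
    SHA-256 of its bytes (unlike real IPFS CIDs, which add multihash
    and multibase framing)."""

    def __init__(self):
        self.blobs = {}

    def put(self, obj):
        # Canonical JSON (sorted keys) so the same object always
        # yields the same address.
        data = json.dumps(obj, sort_keys=True).encode()
        cid = hashlib.sha256(data).hexdigest()
        self.blobs[cid] = data
        return cid

    def get(self, cid):
        return json.loads(self.blobs[cid])
```

Because the address is derived from the content, any store holding the same bytes can serve them - which is what makes the “unified namespace” across Git, IPFS, etc. possible.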

I also wanted to add my two cents on the mutable vs. immutable question. In my mind, I’d like to have both. Certainly there is canonical knowledge in some fields - science is obviously built on the canonical history of published works. Another way to look at it is that while I never want to lose the history of “my” graph, I want the state of my graph to evolve. I want the state of my second brain to reflect my current state of mind, which may be different than it was five years ago. There are lots of contexts where this sort of dynamical graph is useful.

Anyway, I’m a big proponent of leveraging everything that comes for free from the IPFS universe, and it seems to offer a lot of flexibility on the immutable-to-mutable dimension. To that end, one thing to be aware of is that IPNS (IIRC) has recently introduced multiple modes, including peer-to-peer meshing to update records between nodes, creating faster consensus amongst coordinated peers.

Perhaps by leveraging the IPLD tooling, you could have a federated graph space across, e.g., Git and IPFS.

Another approach to both persistence and shared state is the Ceramic project, which is in the web3 space, using things like Lit Protocol (smart wallets with encryption features), which can write encrypted blobs to IPFS. They’ve recently updated their database to what they frame as a graph database, but which is backed by Postgres, so it’s fine for short path traversals. The cool thing about it is that they use GraphQL schemas to define the types in their system, which yields a composable network of types.

It does have the downside risk of being a startup web3 project, but it is seeing big adoption in the space so is likely to thrive. Offsetting the downside risk is that anyone can run their own network of nodes.

On the positive side, it lets you offload a lot of the big lifts for what’s being discussed here. I’ve also discovered their DB layer is totally pluggable, so replacing (or complementing) Postgres with Fluree should be an easy lift, which I’m stoked about.

Last thing… Check out dunlin.xyz. It’s a basic Logseq/Roam style thing, although not nearly as feature complete. That said, it’s web3 based, and provides for sharing graphs with all sorts of permissions and conditions. Might be worth a gander.

Hope this is helpful.


Hi, @Erich_Greenebaum! Thank you again for such thoughtful responses. :heart:
Sorry for taking so long to get back to you.

I’ve been pleasantly surprised to see how you share an understanding of both the Semantic Web and content-addressable stores, proposing an intriguing synergy between the two. :yin_yang:

It’s been interesting to learn about the Ceramic project; perhaps we could think of a use for it.
Also it’s been interesting to get familiar with dunlin.xyz, a project close in spirit to ours. It may be a source of inspiration.

Thank you for bringing attention to the Fluree project; it does have a ton of interesting aspects that we could find of use. Aside from other good points, it seems to be a mighty fine solution for when we’re in need of consensus. And the good thing about content-addressable data (in RDF, in our case) is that it can be stored and shared in any number of ways! We essentially care little about where it comes from ^.^, so it may come from a Fluree blockchain, as well as from any other source, such as ActivityPub, Matrix, IPFS & other sweet tech, as has been creatively thought of by folks in this thread. We’re not locked to just one - the more the merrier. =)

All of them seem to possess features that make them particularly suitable for different use-cases.
For example,
Fluree - for when we need consensus and Datalog capabilities. It seems excellent in that. :100:
IPFS - for when we need a global content-addressable storage.
& other mentioned sweet tech has its strong sides, which I’d fail to present at the level it deserves.

As you mention, the data can be in JSON-LD, making it possible to use within the Semantic Web.
How would we publish blocks as JSON-LD into a content-addressable storage? :thinking:
So that we can refer to such blocks from Logseq and run Semantic Web queries on them.
Curious to learn your thoughts!

Are you aware of this script that generates RDF from Logseq pages?


I wasn’t! Thanks for bringing it to my attention. Very nice to see work towards the Semantic Web! :metal:
Also stumbled upon it in an adjacent thread, where it’s been kindly shared by @cldwalker, here.

Hey Andrew - thanks for the reply and no worries on the time!

Regarding IPFS, I’ve been looking at using IPLD for persisting graph structure, but definitely, just writing an IPFS CID onto the Fluree ledger gives you provenance, and then using a combination of smart functions and, e.g., Lit, we get full access control - which I grant is not always something you want for shared semantic data, lol, but sometimes is.

I think the biggest win is that by using an RDF database underneath Logseq, you get all that great semantic querying from the graph engine itself. People have done some interesting things to extend the semantics of GraphQL, which can be a lot more accessible than making people learn SPARQL if they don’t need all of its features. That has the benefit of making Logseq data objects accessible using one of the most popular API styles on the web, easily consumable by things like React apps.

But I think your basic notion of putting blobby content into IPFS and referencing it in the graph is probably the best strategy for that aspect. I’m using that approach elsewhere and it seems to be a good fit.

Storing JSON-LD in IPLD sounds interesting, what would be the benefits over storing JSON-LD as blobs on IPFS?