RDF/JSON-LD/Triple Store/Schema.org, etc.

I’m curious whether there is any thinking/work around using JSON-LD/Schema.org — e.g. leveraging @context and @type to create structure in Logseq, i.e. instances of typed objects with available properties. Also wondering whether the LogSeq graph uses a triple store of some kind.
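For concreteness, here is a hand-written sketch of what a typed Logseq page could look like as JSON-LD (the logseq:// identifier scheme is made up for illustration; this is not an existing export format):

```json
{
  "@context": "https://schema.org",
  "@id": "logseq://graph/pages/Ada%20Lovelace",
  "@type": "Person",
  "name": "Ada Lovelace",
  "birthDate": "1815-12-10"
}
```

The @context tells a JSON-LD processor that name and birthDate are schema.org properties, and @type makes the page an instance of schema.org’s Person.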

I was speculating that Fluree could be a good fit: it is Clojure-based and has a blockchain state model with an RDF graph layered on top, giving it some interesting properties:

  • The fact that it is built on an append-only log means one can “time travel” over the data… everything is stored as a delta.
  • Having the state engine separate from the graph engine offers some great methods for graph consumption; one can have many graph nodes streaming updates from the ledger.

Meanwhile, they’re about to add support for JSON-LD, which will make interop with Schema.org and other ontologies/vocabularies straightforward.

Finally, everything is encrypted.

Anyway, mostly curious about how the developers see LogSeq in relation to technologies like RDF, etc.

Thanks much for the great tool and any feedback on this!



It looks like we have similar interests!

e.g. to distinguish between liking [[Apples]] vs. hating [[Apples]]:
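In RDF terms that distinction is just a different predicate on the edge. A hand-written Turtle sketch, using a made-up ex: namespace (not anything Logseq produces today):

```turtle
@prefix ex: <http://example.org/> .

ex:Alice ex:likes ex:Apples .
ex:Bob   ex:hates ex:Apples .
```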

Also, to be able to export the graph (see “How to export graph from commandline?”) in order to later convert it to other formats (e.g. N-Triples).



Hey hey! Sorry I didn’t find your other post when I was searching around. I think one basic thing to explore with the devs is whether there is some structural reason that e.g. RDF doesn’t work for this use case.

At the simplest level, it would be great to be able to “prototype” based on types available in common ontologies/vocabularies, e.g. schema.org. It seems like a natural way to integrate LogSeq graphs into the larger Semantic Web. It would open up a giant search space of structured content to LogSeq, and vice versa.


Structurally, most RDF stores (e.g. those queried via SPARQL) ingest data from different sources and map it into RDF.

E.g. the W3C CSV on the Web (CSVW) working group has specified a CSV⇒RDF mapping:

  • https://github.com/w3c/csvw
  • https://www.w3.org/community/csvw/
  • https://www.w3.org/2013/csvw/wiki/CSV2RDF

So if one can map a Logseq graph (e.g. its relationships) into CSV ( https://discuss.logseq.com/t/how-to-export-graph-from-commandline/13047 ),
AND one can define a mapping from CSV to RDF,
then transitively a mapping from Logseq to RDF can be provided!
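As a minimal sketch of that CSV⇒RDF step (the three-column subject/predicate/object layout and the example.org namespace are assumptions for illustration, not an actual Logseq export format):

```python
import csv
import io

# Hypothetical namespace for graph entities; not anything Logseq emits today.
NS = "http://example.org/"

def csv_to_ntriples(csv_text):
    """Map CSV rows of (subject, predicate, object) into N-Triples lines."""
    reader = csv.reader(io.StringIO(csv_text))
    lines = []
    for subj, pred, obj in reader:
        lines.append(f"<{NS}{subj}> <{NS}{pred}> <{NS}{obj}> .")
    return "\n".join(lines)

# A toy export with typed edges:
exported = "Alice,likes,Apples\nBob,hates,Apples"
print(csv_to_ntriples(exported))
```

A real mapping would follow the CSVW CSV2RDF rules (proper IRI escaping, literals, datatypes), but the shape of the transformation is the same.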

Needs may differ,
but for me there are two things I am looking for:

  • get the graph visible in the visualisation page
  • if possible, extend it to coloured edges (so it’s not only “Alice–Apple” but “Alice–(likes)–Apple”)

And as a bonus, add all the other extra vertices and edges, e.g. to text blocks, properties and their values, etc., as further steps (e.g. importing into Apache Jena to query via SPARQL: https://jena.apache.org/tutorials/sparql.html )
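Once such triples are loaded into Jena (or any SPARQL store), a query over the typed edges could look like this, using a made-up ex: namespace for graph entities:

```sparql
PREFIX ex: <http://example.org/>

SELECT ?who ?fruit
WHERE {
  ?who ex:likes ?fruit .
}
```

This returns only the “likes” edges, which is exactly the coloured-edge distinction described above.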


Regarding LogSeq->CSV->RDF…

This seems like a viable method, but a couple of points:

  • this is what JSON-LD was developed to address in the first place, and
  • it already provides for including the edge “type” (the predicate in the subject→predicate→object triple) via reference to e.g. schema.org vocabularies.

If any of the developers are reading this, it would be great if you could comment on LogSeq’s philosophical disposition towards Semantic Web tech. Maybe the more direct question is: how does LogSeq imagine sharing/linking/collaborating, “federated” graphs, etc.?

Hope this finds everyone well!


There is an ongoing effort to have our graphs integrated with the Semantic Web, discussed under this post, as you have already discovered. =)

It uses DataScript’s quad store (speaking in RDF terms, a quad/datom has the shape subject+predicate+value+time).

Fluree is an interesting solution.
It uses the Flakes data model, which is closely akin to DataScript’s, and that seems to make it possible to drop a Logseq graph into Fluree with little effort.
It can provide a SPARQL endpoint on top of it, deriving RDF data from Flakes.

However, there are some downsides which may make it a less appealing solution for the problem at hand.
It’s a blockchain, meant to guarantee consensus in a distributed system, whereas we don’t need consensus when building an immutable, accrete-only public graph: there are no conflicts in the first place, and it’s eventually consistent. So it seems to me.

It needs to be run as a separate process from a shell, perhaps in Docker for better reproducibility, and that is a hefty footprint on the user’s OS. It may not be practical (or even possible) on some devices, such as mobile phones, and may require setup steps that are not that user-friendly.

There is consensus chatter, which makes publishing data slower and more costly in computation and network traffic than publishing a signed immutable block to a content-addressable storage of your choice.

Using Fluree solely as the source of truth would limit the options for where we can get the data from,
whereas we could be using any combination of content-addressable stores to publish and discover data (IPFS, GNUnet, HTTPS CDNs).

In Fluree, interconnection with the Semantic Web happens by exposing a SPARQL endpoint on a Fluree server. At the moment this server seems to include only some Semantic Web sources, such as the Fluree blockchain, Wikidata, and BigData (source).
I haven’t found any mention of how to add more sources, although I guess it is possible.

SPARQL endpoints are expensive to operate on the server side (source).
Serving Semantic Web data as Linked Data Fragments strikes a good balance between client and server cost, and allows for federated queries from the browser via JS libs such as Comunica.

Overall, it seems Fluree is a good fit for consensus-requiring use-cases, and that feature comes with hefty architectural complexity. Our use-case does not seem to require consensus, so putting Fluree to use would add accidental complexity.

One feature is interesting though: being able to find the exact version of a Logseq block that another block referred to (which I believe is a must-have, as it prevents link rot). But in Fluree it comes with the need to keep that time blockchain around and search through it, recreating state as of some point in time, which is costly in storage and requires a blockchain architecture, or some other way to guarantee that the time log is not tampered with.
An alternative solution is to use content-based addressing for block references, baking a reference to the exact content/block into the link itself. Such a reference can be represented as a hash URI, being part of the RDF representation of a block, and being lazily resolved to its actual content by a query engine such as Comunica (it would need to be extended with a hash-based-URI resolving module though (in plans)).
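A minimal sketch of such content-based block references, assuming a plain SHA-256 hash URI (the hash:// scheme here is illustrative; real systems like IPFS use multihash-based CIDs instead):

```python
import hashlib

def block_ref(content: str) -> str:
    """Derive an immutable, content-addressed reference for a block.

    The same content always yields the same reference, so a reference
    pins the exact version of the block it points to (no link rot),
    without needing a time log or a blockchain to recreate past state.
    """
    digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
    return f"hash://sha256/{digest}"

print(block_ref("likes [[Apples]]"))
```

Editing a block would produce a new reference, while old references keep resolving to the old content, which is exactly the versioning property described above.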

Thank you for bringing Fluree to attention; it’s an interesting piece of tech, and perhaps some of its ideas could be used as sources of inspiration, e.g. how data can be signed and encrypted.
Keen to hear more ideas this way, it gets us one step closer towards our dream. =)


Hi Andrew, I thought I’d sent a reply to your thoughtful response, but it seems not, so returning to it here.

There are a few points on Fluree that would be worth following up on at some point, but I think my original post focussed on it too much. My main curiosity is about applying shared type vocabularies, à la schema.org, to scope/contextualize Logseq properties as types from those vocabs. I.e. a “person::” property could be interpreted in the context of the schema.org Person type. Likewise, an Author in the schema.org context is a Person with additional properties, e.g. refs to Publications.
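A toy sketch of that idea, mapping a page’s properties into a schema.org-typed JSON-LD object (the "type" property key and the pass-through mapping are hypothetical, not something Logseq does today):

```python
import json

def page_to_jsonld(props: dict) -> str:
    """Render a Logseq-style property map as a schema.org JSON-LD object.

    The hypothetical "type" property selects the schema.org @type;
    every other property is passed through as a schema.org term.
    """
    doc = {"@context": "https://schema.org",
           "@type": props.get("type", "Thing")}
    for key, value in props.items():
        if key != "type":
            doc[key] = value
    return json.dumps(doc, indent=2)

page = {"type": "Person", "name": "Ada Lovelace", "jobTitle": "Mathematician"}
print(page_to_jsonld(page))
```

A real implementation would also need to validate property names against the chosen vocabulary, but this shows how little glue the basic case requires.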

Anyway, just curious if that kind of thing might be possible in the future.

Hope you had a great New Years, and thanks for replying.


With Logseq’s properties I’m able to create RDF and leverage schema.org concepts. I’ve put up a well-commented example script at nbb-logseq/examples/linked-data at main · logseq/nbb-logseq · GitHub. The script should be configurable enough to handle different approaches to making ontologies in Logseq. A small portion of Logseq’s docs can now generate Turtle RDF :slight_smile: See https://twitter.com/cldwalker/status/1618355498176352259 for some more info.


Hi! do you have an example of what the RDF looks like?

Wow, so great to see work in this direction! ^.^
Thank you for sharing the code and docs, Gabriel, well written, a pleasure to read. :muscle:
I’m very curious: do you have any further plans for expansion towards the Semantic Web?