Since 7/10/2019, 4:40:54 PM, @abraxaz has earned 246 karma points across 52 contributions.
Recent @abraxaz Activity
This is an incredibly useful Swiss-army-knife of a tool. I have similarly found rclone to be quite useful. I'm wondering if people know of more tools of a similar nature to rclone and/or benthos.
Benthos: Fancy stream processing made operationally mundane
2 points • 1 comment
Kroki: Diagrams from Textual Descriptions
3 points • 0 comments
> it's pretty difficult to build One ELN to Rule Them All given how flexible many kinds of biological experimental designs are - especially when you're working on the bleeding edge.
RDF is quite flexible, and using a combination of domain-specific ontologies like cheminf[1] and top-level ontologies like BFO[2] should allow you to capture most of the semantics.
[1]: https://www.ebi.ac.uk/ols/ontologies/cheminf
[2]: https://en.wikipedia.org/wiki/Basic_Formal_Ontology?wprov=sf...
A place to start looking may be the OWL primer (https://www.w3.org/TR/owl2-primer/) and the RDF primer (https://www.w3.org/TR/rdf11-primer/)
Other resources: https://github.com/semantalytics/awesome-semantic-web
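For a concrete feel, here is a minimal sketch with rdflib (Python) of how an experiment record could be captured as triples. The ex: namespace and the class/property names are hypothetical placeholders for illustration, not actual cheminf or BFO terms:

    # pip install rdflib  (rdflib >= 6 assumed)
    from rdflib import Graph, Literal, Namespace, RDF, RDFS
    from rdflib.namespace import XSD

    EX = Namespace("http://example.org/eln/")  # hypothetical ELN vocabulary

    g = Graph()
    g.bind("ex", EX)

    run = EX["run/42"]
    g.add((run, RDF.type, EX.ExperimentRun))  # hypothetical class
    g.add((run, RDFS.label, Literal("HPLC run 42")))
    g.add((run, EX.observedConcentration,  # hypothetical property
           Literal("0.25", datatype=XSD.decimal)))

    print(g.serialize(format="turtle"))

In a real ELN you would swap the placeholder terms for IRIs from cheminf, BFO, or whichever ontologies fit the experiment.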
> Ok, fine. But I'm not sure how this helps if you have six different systems with six different definitions of a customer, and more importantly, different relationships between customers and other objects like orders or transactions or locations or communications.
If you have this problem, consider giving RDF a look - you can fairly easily use RDF-based technologies to map the data in these systems onto a common model. Some examples of tools that may be useful here are https://www.w3.org/TR/r2rml/ and https://github.com/ontop/ontop - you can also use JSON-LD to convert most JSON data to RDF. For more info, ask in https://gitter.im/linkeddata/chat
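As a rough sketch of the JSON-LD route (assuming rdflib >= 6, which bundles a JSON-LD parser; the context and field names here are illustrative, not from any real system):

    import json
    from rdflib import Graph

    # plain JSON from one of the six systems, plus an illustrative @context
    # that maps its field names onto a shared vocabulary
    doc = {
        "@context": {
            "customer_name": "https://schema.org/name",
        },
        "@id": "http://example.org/crm/customer/1",
        "customer_name": "ACME Ltd",
    }

    g = Graph()
    g.parse(data=json.dumps(doc), format="json-ld")
    print(g.serialize(format="turtle"))

Each system gets its own context mapping onto the common vocabulary, and the resulting graphs can then be merged and queried as one.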
To pile on a bit here: JSON-LD is based on RDF, which is an abstract syntax for data as semantic triples (i.e. RDF statements). There is also RDF*, currently in development, which extends this basic data model so you can make statements about statements.
RDF has concrete syntaxes, one of them being JSON-LD, and it can be used to model relational databases fairly well with R2RML (https://www.w3.org/TR/r2rml/), which essentially turns relational databases into a concrete syntax for RDF.
schema.org is also based on RDF; it is essentially an ontology (one of many) that can be used for both RDF and non-RDF data - mainly because almost all data can be represented as RDF, so non-RDF data is just data that does not have a formal mapping to RDF yet.
Ontologies are a concept used frequently in RDF but rarely outside of it, and they are quite important for federated or distributed knowledge, or descriptions of entities. They focus heavily on modelling properties instead of modelling objects, so whenever a property occurs, that property can be understood within the context of an ontology.
An example is the birth date of a person (https://schema.org/birthDate)
When I get a semantic triple:
    <example:JohnSmith> <https://schema.org/birthDate> "2000-01-01"^^<https://schema.org/Date> .
This tells me that the entity identified by the IRI <example:JohnSmith> is a person and that their birth date is 2000-01-01. However, I don't expect that I will get all other descriptions of this person at the same time; I won't necessarily get their <https://schema.org/nationality>, for example, even though this is a property of a <https://schema.org/Person> defined by schema.org
I can also combine schema.org-based descriptions with other descriptions, and these descriptions can be merged from multiple sources and then queried together using SPARQL.
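A small rdflib sketch of that merging, with the two sources inlined for illustration (the http://example.org/JohnSmith IRI stands in for the example: IRI above):

    from rdflib import Graph

    source_a = """
    @prefix schema: <https://schema.org/> .
    <http://example.org/JohnSmith> schema:birthDate "2000-01-01"^^schema:Date .
    """
    source_b = """
    @prefix schema: <https://schema.org/> .
    <http://example.org/JohnSmith> schema:nationality "New Zealand" .
    """

    g = Graph()
    g.parse(data=source_a, format="turtle")
    g.parse(data=source_b, format="turtle")  # statements merge on the shared IRI

    query = """
    PREFIX schema: <https://schema.org/>
    SELECT ?birthDate ?nationality WHERE {
        ?person schema:birthDate ?birthDate ;
                schema:nationality ?nationality .
    }
    """
    for row in g.query(query):
        print(row.birthDate, row.nationality)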
RFC: Log to stderr Instead of stdout (12factor : PR #295)
2 points • 0 comments
> If you want to 'protect' FOSS projects you care about, take some time to find out what help is useful to the maintainers and contribute towards items that make sense to you. Joining OSI won't help those struggling projects you gain from using.
Indeed, it is an incredibly rewarding experience. Take one thing you use and like, go to its issue backlog, and start fixing/improving things - if there is nothing, take the next thing; there are likely tens of things you rely on every day that need contributors and contributions. The first issue will be hard, the next one easier. You will be a happier person for doing it, you will make a bigger impact than by starting another project you won't finish and that nobody will use, and you will become a better engineer.
Another option is to fund actual open source projects, like go sponsor python: https://github.com/sponsors/python
> NFTs do solve a problem, and the problem that they solve is that creatives are getting paid for their creative output.
This had already been happening for thousands of years before NFTs.
> Never in history has anyone been able to buy shares of highly coveted art... maybe now you can?
Yes, they have: https://www.masterworks.io/
Conan can be made to do what Homebrew does with minimal effort. I have written some convenience wrappers around it which make it slightly easier to use for this use case; you can have a look here: https://gitlab.com/aucampia/proj/xonan
Last I checked, mix-n-match was using CSV; while this is okay, it would still be nicer to have direct RDF ingestion. And yes, I realize why Wikidata does not have it, but it is not impossible to provide, just really difficult. I would work on it if I had more time, and likely will at some point in the future.
> I’ve been considering using it for some projects, but the main thing that’s keeping me away is the concern that some moderator will decide that my data doesn’t fit and remove it.
To me the greatest value of Wikidata was making me aware of RDF and SPARQL.
In most cases, if you are relying on the data for business needs, it would be best to maintain your own RDF dataset and host it either just over HTTP, or on something like https://dydra.com/.
Wikidata desperately needs RDF ingestion, and if this is made available (it can be done outside of Wikidata) then it would be easier to periodically sync datasets with Wikidata.
On that note, however, you could export all the Wikidata triples you need and just host them on your own SPARQL server (e.g. Jena) or use them with RDF tools like rdflib.
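For instance, a quick rdflib sketch that grabs the triples for a single item via Wikidata's per-entity RDF dumps and saves them locally:

    from rdflib import Graph

    g = Graph()
    # Special:EntityData serves RDF for individual items;
    # Q3627885 is the Wikidata item for atelestite
    g.parse("https://www.wikidata.org/wiki/Special:EntityData/Q3627885.ttl")

    g.serialize(destination="atelestite.ttl", format="turtle")
    print(len(g), "triples exported")

The resulting Turtle file can be loaded straight into Jena or any other SPARQL server.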
Can the database be exported somehow?
OWL for the data model and RDF for the data would work well for it, though I don't know if that is what they use.
Compare for example:
- Atelestite on mindat: https://www.mindat.org/min-407.html
- Atelestite on Wikidata (RDF-based): https://www.wikidata.org/wiki/Q3627885
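To illustrate the model/data split, here is a sketch with rdflib using a hypothetical mineral vocabulary (not the actual Wikidata or mindat model; the formula is an illustrative value):

    from rdflib import Graph, Literal, Namespace, RDF, RDFS
    from rdflib.namespace import OWL, XSD

    EX = Namespace("http://example.org/minerals/")  # hypothetical namespace

    g = Graph()
    # the data model, in OWL
    g.add((EX.Mineral, RDF.type, OWL.Class))
    g.add((EX.chemicalFormula, RDF.type, OWL.DatatypeProperty))
    g.add((EX.chemicalFormula, RDFS.domain, EX.Mineral))
    g.add((EX.chemicalFormula, RDFS.range, XSD.string))
    # the data, in RDF
    g.add((EX.atelestite, RDF.type, EX.Mineral))
    g.add((EX.atelestite, EX.chemicalFormula, Literal("Bi2O(AsO4)(OH)")))  # illustrative

    print(g.serialize(format="turtle"))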
Thanks for the tip.
Can't see how, and this seems to suggest this functionality is not implemented yet: https://github.com/cuelang/cue/discussions/663
Mind sharing a reference?
Are there any plans to support model generation in the future, as can be done with JSON Schema through something like https://github.com/quicktype/quicktype ?
You don't have to use neo4j or any graph database to use RDF. It is just that your current model seems very graph-based and actually not that difficult to map to RDF; it would likely be possible to do with a JSON-LD context, and if you provided such a context it would make your data a lot easier for others to consume.
Is there a reason why you did not use RDF for the representation and some RDF-aware encoding like JSON-LD for the serialization?
It would be significantly easier for others to work with; one could easily query it with SPARQL.
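For example, a context that maps existing field names onto shared IRIs is enough for RDF tools to pick the data up. A sketch with rdflib; the record shape and all names here are made up, not taken from the project in question:

    import json
    from rdflib import Graph

    # an existing graph-style record, unchanged except for the added @context
    record = {
        "@context": {
            "id": "@id",
            "name": "https://schema.org/name",
            "knows": {"@id": "https://schema.org/knows", "@type": "@id"},
        },
        "id": "http://example.org/node/1",
        "name": "Node One",
        "knows": "http://example.org/node/2",
    }

    g = Graph()
    g.parse(data=json.dumps(record), format="json-ld")
    print(g.serialize(format="nt"))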