Ask HN: What does your team use to sync and manage secrets?
4 points • 2 comments
Since 7/17/2020, 5:51:29 AM, @abrazensunset has earned 5 karma points across 7 contributions.
Recent @abrazensunset Activity
+1, the Pandas API is somewhere between mediocre and bad, and results in garbage code unless you use it in a carefully constrained way (which is admittedly true of many complete languages, much less libraries that organically evolved several tooling generations ago)
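Not part of the original comment, but as a rough illustration of what a "carefully constrained" Pandas style can look like: one method-chained pipeline with no in-place mutation, so each intermediate state is easy to inspect and test. The columns and cleaning steps here are hypothetical.

```python
import pandas as pd

# Hypothetical raw input with messy column names and types.
raw = pd.DataFrame({
    "order_id": [1, 2, 3],
    "amount ": ["10.5", "3.0", None],
    "Region": ["east", "WEST", "east"],
})

# Constrained style: a single chained transformation instead of
# scattered in-place assignments spread across a notebook.
clean = (
    raw
    .rename(columns=lambda c: c.strip().lower())
    .assign(
        amount=lambda df: pd.to_numeric(df["amount"], errors="coerce"),
        region=lambda df: df["region"].str.lower(),
    )
    .dropna(subset=["amount"])
)

print(clean)
```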
It's worth noting that hydraulic fracturing itself is rarely the problem. Issues come from moving fluid volumes somewhere else during production: subsidence due to extraction from the place where the hydrocarbons are, or (most often) from injecting produced water into disposal wells, eventually triggering faults. Some of that produced water was introduced by the operations, but most of it was just in the ground with the oil & gas being produced.
It's possible to directly trigger small faults while fracturing the rock, or to do something stupid like fracture into a freshwater zone other people are using, but that's not what's driving the quakes in e.g. Oklahoma.
+1, Migadu is simple, reliable, easy, and just works. The cost is low enough it might as well be free. I've only ever had one issue (caused by my misreading the documentation), and an actual human responded and walked me through it.
The only downside to be aware of is the lack of calendar support (technically it's available via CalDAV, but that doesn't work for most users; e.g. Calendly won't work).
Edit: the way they handle domains and email aliases has simplified my email life.
In that situation (dual usage modes) I think I'd rather have the primary data store be Materialize, and just snapshot Materialize views back to your warehouse (or even just to an object store).
Then you could use that static store for exploration/fixed analysis or even initial development of dbt models for the Materialize layer, using the Snowflake or Spark connectors at first. When something's ready for production use, migrate it to your Materialize dbt project.
Given the way dbt currently handles backend switching (and the divergence of SQL dialects around things like date functions and unstructured data), maintaining the batch and streaming layers side by side in dbt would be less wasteful than the current paradigm of completely separate tooling, but still a big source of overhead and synchronization errors.
If the community comes up with a good narrative for CI/CD and data testing in flight with the above, I don't think I'd even hesitate to pull the trigger on a migration. The best part is half of your potential customers already have their business logic in dbt.
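A rough sketch of the snapshot idea above, assuming you reach Materialize over its PostgreSQL-compatible wire protocol with a stock driver and land a view's current contents as a static file for a warehouse or object store. The host and view name ("orders_by_region") are hypothetical.

```python
import csv
import psycopg2

# Materialize speaks the PostgreSQL wire protocol, so a standard driver works.
# Connection details and the view name are placeholders for this sketch.
conn = psycopg2.connect(
    host="materialize.internal", port=6875, user="materialize", dbname="materialize"
)

with conn.cursor() as cur:
    # Read the current contents of a view Materialize keeps incrementally updated.
    cur.execute("SELECT region, total FROM orders_by_region")
    rows = cur.fetchall()

conn.close()

# Land the snapshot as a static CSV that a warehouse stage or object store
# can ingest; exploration and early dbt model development happen against that copy.
with open("orders_by_region.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["region", "total"])
    writer.writerows(rows)
```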
In my experience (dependency-heavy data engineering & ML), Poetry is unbearably slow[^1]. Great interface/workflow, though.
Anyone here with practical experience using [Flying Squid](https://github.com/HazyResearch/flyingsquid) over the open-source Snorkel library? I'm curious if this platform re-uses some of that line of research or if it's not practical for some reason.
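For context (not from the original question), a minimal sketch of driving FlyingSquid's label model on a synthetic weak-label matrix, following the usage pattern shown in the repo's README; the data here is made up and the exact method names are worth double-checking against the repo.

```python
import numpy as np
from flyingsquid.label_model import LabelModel  # pip install flyingsquid

rng = np.random.default_rng(0)
n, m = 1000, 5

# Synthetic ground truth plus m weak labeling functions that agree with it
# ~70% of the time and abstain ~20% of the time. FlyingSquid expects votes
# in {-1, 0, +1}, with 0 meaning abstain.
y = rng.choice([-1, 1], size=n)
L_train = np.stack(
    [np.where(rng.random(n) < 0.7, y, -y) for _ in range(m)], axis=1
)
L_train[rng.random((n, m)) < 0.2] = 0

# Fit the label model on the vote matrix and recover denoised labels.
label_model = LabelModel(m)
label_model.fit(L_train)
preds = np.asarray(label_model.predict(L_train)).ravel()

print("agreement with synthetic truth:", (preds == y).mean())
```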