Since 8/8/2008, 8:55:14 PM, @aboodman has earned 2,020 karma points across 646 contributions.
Recent @aboodman Activity
This demo downloads the entire React issue repo into browser storage so that the UI can do filters locally with no waits. Also the entire thing updates live between collaborators if multiple people are online at the same time.
Show HN: Linear issue tracker recreated with Replicache and ~5k loc
3 points • 1 comment
Hi chad, thanks! It’s been great working together. And I don’t think assetbots is different at all, except maybe you don’t use the multiuser sync?
Linear clone, with realtime sync and instant UI - built with Replicache
17 points • 3 comments
I was obsessed with the idea of smartwatches in 2011-2012 and was involved in a few early efforts that, while exciting, were more complex than what I was really dreaming of.
When the Pebble Kickstarter launched, it was so close to what I had in mind that it was pretty disheartening. It's possible I even reached out on the website to see about joining the team :). I can't remember.
Anyway, what an inspiring product and team. Thanks for sharing the story, and for the honesty.
This is a really clear description of the problem and solution. Thanks!
I recommend amending the blog (or maybe making a new post) with this exact content. I regularly run SQL databases for smaller projects but was not able to immediately conceptualize how I would use this feature from just the blog post. Maybe your target audience would be able to? But I don't see the downside in just spelling it out clearly!
That doesn't help you know the transfer is final, it just gives you a timestamp that the transfer occurred at which somebody reputable has attested to. You can't know that the owner didn't sign a transfer to some other buyer with an earlier timestamp.
> all an NFT does is state "author X gives ownership of Y to Z",
Incorrect. As @saurik says, but I'll just restate: an NFT also tells you the ordering of all such statements within one context (a single blockchain). So you can know whether this was the first such transfer by this owner or a subsequent one. This is the entire purpose of distributed databases, including blockchains.
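The ordering point can be sketched with a toy append-only ledger in Python (a hypothetical illustration; a real blockchain establishes this total order by consensus, not a local list):

```python
# Toy single-context ledger: because transfers are totally ordered,
# a verifier can tell which transfer by an owner came first and
# ignore later double-spend attempts. Illustrative sketch only.

ledger = []   # append-only, totally ordered list of transfers

def transfer(item, frm, to):
    ledger.append((item, frm, to))

def current_owner(item, creator):
    owner = creator
    for it, frm, to in ledger:        # replay in chain order
        if it == item and frm == owner:
            owner = to                # only transfers from the current owner count
    return owner

transfer("nft-1", "author", "alice")  # first transfer: valid
transfer("nft-1", "author", "bob")    # later transfer by same owner: ignored
print(current_owner("nft-1", "author"))  # alice
```

The receiver's question below still stands for ordering alone, though: the total order tells you which transfer came first, not that no earlier-ordered transfer remains unseen outside that context.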
> will be dated more recently than the other
How does the receiver of such a transfer know that the transfer is final? There's no way for them to know that an "earlier" transfer of the same item won't show up days later.
The key is the "any arbitrary program".
With zk-SNARKs, I can, theoretically, run some complex analysis of some data -- imagine something that requires millions of compute hours -- and provide a receiver (a) the answer, and (b) a compact hash-like proof that (a) is correct *without* the receiver having to re-run the calculation to trust the answer.
No, it has to be a pure function with no reference to outside systems, and the function has to be in the complexity class NP.
zk-SNARKs are one of the most wonderful results in computer science and mathematics.
On first learning of them, I had the common reaction of "I can't believe this is even possible to do".
The way that I came to intuitively understand them is by analogy to public key cryptography. Digital signatures allow someone to run the RSA algorithm on some piece of private data X and prove the output is Y, without revealing X.
zk-SNARKs are a generalization of this idea. They allow one to prove that they ran any arbitrary program P (in NP) with some private data X, and it produced output Y, without revealing X.
It seems (slightly) less mystical for me to see it that way.
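The signature half of the analogy can be made concrete with textbook RSA using tiny primes (a toy sketch, deliberately insecure, just to show "prove the output without revealing the secret"):

```python
# Toy textbook RSA signature with tiny primes -- illustration of the
# analogy only, NOT a secure or real-world signature scheme.

p, q = 61, 53
n = p * q                 # public modulus
phi = (p - 1) * (q - 1)
e = 17                    # public exponent
d = pow(e, -1, phi)       # private exponent: the secret "X" of the analogy

def sign(m: int) -> int:
    # Uses the private d; the signature is the output "Y".
    return pow(m % n, d, n)

def verify(m: int, sig: int) -> bool:
    # Uses only the public (e, n): checks Y without learning d.
    return pow(sig, e, n) == m % n

sig = sign(42)
print(verify(42, sig))   # True: the signer must have known d
print(verify(43, sig))   # False: and d was never revealed
```

A zk-SNARK generalizes this from "I applied the fixed RSA operation with a secret exponent" to "I ran an arbitrary program P on secret input X and got Y."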
I think it’s a little different because people do regularly maintain bridges without replacing them and rerouting all traffic. But I get your drift. What a crazy thought to have these completely autonomous programs moving billions in value.
Thank you for the information. Yes, I was talking about the proxy pattern. My mental model was that this was the classic approach.
I did not know that popular major projects were using immutable smart contracts.
This seems crazy to me, but it does at least address the question of centralization (while introducing massive bug risk).
This makes sense to me and I think can be considered decentralized. How common are completely immutable contracts though?
It seems like for any contract that holds significant value it would be insane to make it immutable, particularly when it's written in a Turing-complete language like Solidity.
Your comment doesn't address the point of the article. The author isn't talking about Ethereum itself, which they acknowledge up front can comfortably be considered decentralized.
The point is smart contracts. They are code, and they have to get updated if there are bugs. How do they get updated? Well, in the simple (and common) case somebody holds the private key and updates it. That's centralization, obviously.
You can get arbitrarily complex with how smart contracts are updated. You can have multisig, where 2 of 3 keyholders have to vote to update, or you can have actual voting, or you can even do voting by stakeholders in the contract.
But whatever you choose, it's fully a function of the smart contract itself what level of decentralization it achieves, and not at all about Ethereum.
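The 2-of-3 governance idea can be sketched in a few lines of Python (a hypothetical model of the approval logic, not an actual Ethereum contract or any real library's API):

```python
# Minimal sketch of 2-of-3 multisig upgrade approval: the upgrade only
# takes effect once the vote threshold is reached. Illustrative only.

class UpgradableContract:
    def __init__(self, owners, threshold=2):
        self.owners = set(owners)
        self.threshold = threshold
        self.code_hash = "v1"
        self.approvals = {}   # proposed code hash -> set of approving owners

    def approve(self, owner, new_hash):
        if owner not in self.owners:
            raise PermissionError("not an owner")
        votes = self.approvals.setdefault(new_hash, set())
        votes.add(owner)
        if len(votes) >= self.threshold:
            self.code_hash = new_hash     # threshold met: upgrade applies
            self.approvals.pop(new_hash)

c = UpgradableContract(["alice", "bob", "carol"])
c.approve("alice", "v2")
print(c.code_hash)   # v1 -- one vote is not enough
c.approve("bob", "v2")
print(c.code_hash)   # v2 -- 2-of-3 threshold reached
```

With `threshold=1` and a single owner this degenerates into the single-private-key case above, which is exactly the centralization the article is pointing at.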
Alright, there is some naming collision then.
"offline-first" (terrible name, but here we are) generally refers to a classic web application that wants to be able to run offline either for network resiliency reasons or for performance.
"local-first" is a term that has been coined for something close to what you are talking about: https://www.inkandswitch.com/local-first.html
For a SaaS style (aka client-server) application, the right way to think of client-side storage is as a persistent cache, for a few reasons:
* it can be deleted at any time (by the browser, or even by the user!)
* you generally want the server to be authoritative. If there's a bug client-side, the server's view of state should win.
* it's not possible in the general case to store all user data offline; it's always a subset.
Once you realize that client-side state is a cache, its potential uses become much clearer.
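The cache framing can be sketched in Python (names are illustrative, not Replicache's actual API): optimistic local writes for instant UI, with the server's state winning on sync, and nothing lost if the cache is wiped.

```python
# Sketch: client-side storage as a disposable cache with an
# authoritative server. Hypothetical illustration only.

class ClientCache:
    def __init__(self):
        self.data = {}            # may be wiped at any time

    def apply_optimistic(self, key, value):
        self.data[key] = value    # show the user's change immediately

    def sync(self, server_state: dict):
        # Server is authoritative: replace any conflicting local state.
        self.data = dict(server_state)

cache = ClientCache()
cache.apply_optimistic("title", "Draft title")
cache.sync({"title": "Final title"})     # server wins on conflict
print(cache.data["title"])               # Final title

cache.data.clear()                       # browser evicts the cache...
cache.sync({"title": "Final title"})     # ...and it rebuilds from the server
print(cache.data["title"])               # Final title
```

All three bullets above fall out of this shape: the cache is deletable, the server wins, and the cache only ever holds the subset the client has synced.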
aww, thanks.
You should take a look at my project, Replicache: replicache.dev.
While it is true that you have to duplicate the mutations in the basic setup, you do not have to share the querying/reading code as it lives primarily on the client.
Also, if your backend happens to be JavaScript/TypeScript, you can share the majority of the mutation code between client and server, and the result is quite sweet.