One of the most interesting things to me about CRDTs, and something a skim of the article (with its focus on low-level CRDTs) might leave the wrong impression about, is that things like https://automerge.org/ are not just "libraries" that "throw together" low-level CRDTs. They are themselves full CRDTs, with strong proofs about how they behave under stress.
Per the Automerge website:
> We are driven to build high performance, reliable software you can bet your project on. We develop rigorous academic proofs of our designs using theorem proving tools like Isabelle, and implement them using cutting edge performance techniques adopted from the database world. Our standard is to be both fast and correct.
While the time and storage-space performance of these new-generation CRDTs may not be ideal for all projects, their convergence characteristics are formalized, proven, and predictable.
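To make "convergence" concrete, here is the textbook state-based example, a grow-only counter, as a toy Python sketch (my own illustration, nothing to do with Automerge's actual API). Each replica increments its own slot, and merging takes the element-wise max:

```python
class GCounter:
    """Toy grow-only counter: one slot per replica, merge = element-wise max."""

    def __init__(self):
        self.counts = {}  # replica_id -> increments recorded at that replica

    def increment(self, replica_id, n=1):
        self.counts[replica_id] = self.counts.get(replica_id, 0) + n

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        # The merge (lattice join) is commutative, associative, and
        # idempotent; that algebraic shape is what makes convergence
        # provable rather than hoped-for.
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)
```

Merging in either order, or merging twice, lands every replica on the same value; the proofs for Automerge-class CRDTs establish the same property for much richer data structures.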
If you're building a SaaS where team members edit structured and unstructured data and see each other's changes in real time (as one would expect of Notion or Figma), you can reach for CRDTs that give you actionable "collaborative deep data structures" today, without understanding the entire history of the space that the article walks through. All you need on the backend is key-value storage with range/prefix queries; all you need on the frontend is a library and a dream.
That's a great summary of CRDTs, starting from the basics and working up to the more advanced ones.
Speaking of Riak, it's still around, in the form of https://github.com/OpenRiak!
CRDTs are still something you have to write by hand. I finished building a custom sequence-based CRDT engine about two months ago (inspired by Diamond Types), and it was hilarious to ask AI for assistance.
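For flavor, here is roughly what the core of an insert-only sequence CRDT in the RGA family looks like. This is my own toy sketch, not the engine described above and not Diamond Types: every character gets a globally unique ID, inserts name the ID of the character they go after, and concurrent siblings are tie-broken deterministically by ID:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Insert:
    id: tuple      # (counter, site_id) -- globally unique
    parent: tuple  # id of the character this one goes after (None = document start)
    char: str


class RGA:
    """Toy insert-only sequence CRDT; merge is just a union of operations."""

    def __init__(self):
        self.ops = set()

    def apply(self, op):
        self.ops.add(op)

    def merge(self, other):
        self.ops |= other.ops

    def text(self):
        # Group inserts under their parent, then walk the tree; concurrent
        # siblings sort by ID (highest first), so every replica linearizes
        # the same set of ops into the same string.
        children = {}
        for op in self.ops:
            children.setdefault(op.parent, []).append(op)
        out = []

        def walk(parent):
            for op in sorted(children.get(parent, []), key=lambda o: o.id, reverse=True):
                out.append(op.char)
                walk(op.id)

        walk(None)
        return "".join(out)
```

Real engines add deletion (tombstones), compact storage, and far better asymptotics, which is where most of the hard work the comment alludes to actually lives.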
It's interesting, when you're working on something that:

1. is essentially a logic problem,
2. that LLMs aren't trained on, and
3. that produces dense character sequences when testing,

to see how completely useless an LLM is outside its pre-trained areas.
There needs to be some black-box benchmark built on pure but niche logic, to test whether an LLM is capable of understanding, or even noticing, exposure to new kinds of logic.