What are you working on? Any new ideas which you're thinking about?
Currently a one-man side project:
Last year PlasticList discovered that 86% of food products they tested contain plastic chemicals—including 100% of baby food tested. The EU just lowered their "safe" BPA limit by 20,000x. Meanwhile, the FDA allows levels 100x higher than what Europe considers safe.
This seemed like a solvable problem.
Laboratory.love lets you crowdfund independent testing of specific products you actually buy. Think Consumer Reports meets Kickstarter, but focused on detecting endocrine disruptors in your yogurt, your kid's snacks, whatever you're curious about.
Here's how it works: Find a product (or suggest one), contribute to its testing fund, get detailed lab results when testing completes. If a product doesn't reach its funding goal within 365 days, automatic refund. All results are published openly. Laboratory.love uses the same methodology as PlasticList.org, which found plastic chemicals in everything from prenatal vitamins to ice cream. But instead of researchers choosing what to test, you do.
The bigger picture: Companies respond to market pressure. Transparency creates that pressure. When consumers have data, supply chains get cleaner.
Technical details: Laboratory.love works with ISO 17025-accredited labs, tests three samples from different production lots, and detects chemicals down to parts per billion. The testing protocol is public.
You can browse products, add your own, or just follow specific items you're curious about: https://laboratory.love
This is really cool - it'd be great to test for other chemicals like heavy metals.
Specifically, rice seems to contain a good deal of arsenic (https://www.consumerreports.org/cro/magazine/2015/01/how-muc...) and I've been interested for a while in trying to find some that has the least, as I eat a lot of rice.
If you are concerned about heavy metals, look at herbs: https://www.consumerreports.org/health/food-safety/your-herb...
BTW I love Consumer Reports.
Rice is easy to solve by just buying California grown. California has the lowest regional levels in the world, and I expect the variance among those growers not to have a significant impact.
How do you find California grown in other states? Often it just says US
Some brands tell you. I think Nishiki is one of the big ones. There are family farms that sell online too.
IIRC, this was previously (recently) discussed wrt rice sold (or given) to Haiti. Because that rice came from the former Confederate states, it has more arsenic.
IIRC2, don't buy rice from land formerly used to grow cotton, because calcium arsenate was used there to kill the boll weevil.
https://en.wikipedia.org/wiki/Boll_Weevil_Eradication_Progra...
Are there any tests like this for rices imported from abroad?
> the FDA allows levels 100x higher than what Europe considers safe
I thought it was an exaggeration so I checked. It's actually even worse:
EU is 0.2 ng/kg body weight and US is 50 µg/kg body weight. So the US limit is 250,000 times higher.
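To spell out the conversion (same figures as quoted above):

    # Same figures as above, converted to a common unit (ng/kg body weight).
    eu_limit_ng = 0.2          # EU: 0.2 ng/kg body weight
    us_limit_ng = 50 * 1000    # US: 50 µg/kg body weight = 50,000 ng/kg
    print(us_limit_ng / eu_limit_ng)   # 250000.0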
>All results are published openly.
Where can I find the link? Do I need to submit my email to see the "openly published results"?
https://laboratory.love/plasticlist may work for you. If not, the input 'email@example.com' is what led me there.
> Powdered Milk from 1952 Korean War Rations: High in Phthalates
Wow, thanks for the heads up, website. I'll throw out my stock of these right away.
I don't understand? It would be useful to see how items from the past test for these materials. There are also plenty of current items.
Do you have an arbitrary date we should use to ignore items for testing?
Seems like a fair point, given OP's opening says “crowdfund independent testing of specific products you actually buy” - having the top products be more commonly bought items may be interesting.
I was really just making a joke
Cool idea. FYI, on an S21, each word (bisphenols etc) has its last letter wrapping to a second line.
- [deleted]
Seems odd that two different flavors of the same product would have different phthalate content? Would that mean that shelf life could have an impact?
Vanilla (high): https://laboratory.love/plasticlist/59
Strawberry (medium): https://laboratory.love/plasticlist/60
Nice observation ;-) If I'm reading the underlying data[0] correctly, it looks like the threshold for DEHT is significantly lower in the Vanilla tests (<4,500ng) vs the Strawberry tests (<22,500ng)
0: https://i.imgur.com/L1LVar1.png
Edit: I guess that should impact the Substitutes category, though, and not the Phthalates category.
Is the identity of those who make donations protected in any way? Could a company seek legal damages against all or some crowdfunders for what they might deem as libel (regardless of merit)? I doubt people who donate $1 here or $2 there have the capability of warding off a lawsuit.
Super compelling project. When I saw PlasticList, my first thought was how to get the results to create pressure on the food companies. The interactivity and investment of your project might do that. Best of luck.
This is a great idea! It could also expand to testing non-food items for dangerous chemicals (lead, heavy metals, etc). Many products keep a certification on-hand confirming that the product has been tested and found not to exceed the threshold, but I'm always suspicious of (1) how thorough the initial testing actually is and (2) how well these results hold up as manufacturing continues. I realize I'm just plugging my pet peeve though, not sure if others are as concerned about this.
Really cool, definitely donating to a few products!
How do you hold the money for up to 1 year? Does it go into escrow until the project is funded?
Any connection/collaboration with https://www.plasticlist.org/ ?
It looks like just a wrapper around the data from plasticlist for now. One can fund other products, but I searched and could not find any others that were funded as a result of this project. Some transparency about the cost seems critical for successfully running such a crowd-funding project.
I think this would integrate well with Yuka
Great initiative! Would it not be cheaper to produce home testing kits that consumers can purchase?
Incredible.
I've often wished for something like this, so thank you for making it happen.
this is cool, if you'd like some help on the web UI stuff, I'd love to contribute.
wow! all the best to you.
Repo: https://github.com/mochilang/mochi
I'm building Mochi, a small programming language with a custom VM and a focus on querying structured data (CSV, JSON, and eventually graph) in a unified and lightweight way.
It started as an experiment in writing LINQ-style queries over real datasets and grew into a full language with:
- declarative queries built into the language
- a register-based VM designed for analysis and optimization
- an intermediate representation with liveness analysis, constant folding, and dead code elimination
- static type inference, inline tests, and golden snapshot support
Example:
    type Person {
      name: string
      age: int
    }

    let people = load "people.yaml" as Person

    let adults = from p in people
                 where p.age >= 18
                 select { name: p.name, age: p.age }

    for a in adults {
      print(a.name, "is", a.age)
    }

    save adults to "adults.json"

The long-term goal is to make a small, expressive language for data pipelines, querying, and agent logic, without reaching for Python, SQL, and a half-dozen libraries.
Happy to chat if you're into VMs, query engines, or DSLs.
If one wished, could they do something like
`save adults to "adults.json" as XML`
The expected output would be a file with the name "adults.json" containing XML data. I don't see much benefit in this specific use case but I do have a 'code smell' in having the language automagically determine the output structure for me.
looks super cool for some quick data filtering and manipulation
It's been great for quickly filtering and transforming structured data like CSV and JSON. Optimizing the VM is fun too, though it sometimes comes at a cost: we once broke around 400 tests after adding peephole optimizations that changed how the IR handled control flow.
Interesting project. I'm quite interested in developing a small programming language myself, but am not sure where to start. What resources do you recommend?
Crafting Interpreters https://craftinginterpreters.com is a super friendly, step-by-step guide to building your own language and VM. Looking forward to seeing what kind of language you come up with too!
I'll second this. It's fantastic.
The concepts that the OP talks about (liveness analysis, constant folding, dead code elimination), and similar stuff revolving around IR optimization, can be found explained in great detail in Nora Sandler's "Writing a C compiler".
This is awesome. I often start to reach the limits of my patience trying to figure out how to do things in `jq` DSL. This seems way more friendly.
Very cool!
This is exactly the kind of thing I've had in mind as one of the offshoots for PRQL for processing data beyond just generating SQL.
I'd love to chat some time.
Still working on: an enclosure-compatible open-source version of the 2nd gen Nest thermostat. It reuses the enclosure, encoder ring, display, and mounts of the Nest but replaces the "thinking" part with an open-source PCB that can interact with Home Assistant.
- The encoder ring, which works like an LED mouse but in reverse: fully reverse-engineered and on its own demo PCB
- The faceplate PCB, which does the actual control of the thermostat wires, has been laid out, but the first version missed a really obvious problem involving the power-on behavior of certain ESP32 GPIO pins, so I've got rev 3 on order from the PCB manufacturer.
Nest Thermostats of the 1st and 2nd generation will no longer be supported by Google starting October 25, 2025. You will still be able to access temperature, mode, schedules, and settings directly on the thermostat – and existing schedules should continue to work uninterrupted. However, these thermostats will no longer receive software or security updates, will not have any Nest app or Home app controls, and Google will end support for other connected features like Home/Away Assist. It has been pretty badly supported in Home Assistant for over a year anyway, missing important connected features.
M5 Stack sells a nice controller knob if you don't have a used nest handy
https://shop.m5stack.com/products/m5stack-dial-esp32-s3-smar...
> As a versatile embedded development board, M5Dial integrates the necessary features and sensors for various smart home control applications. It features a 1.28-inch round TFT touchscreen, a rotary encoder, an RFID detection module, an RTC circuit, a buzzer, and under-screen buttons, enabling users to easily implement a wide range of creative projects.
> The main controller of M5Dial is M5StampS3, a micro module based on the ESP32-S3 chip known for its high performance and low power consumption. It supports Wi-Fi, as well as various peripheral interfaces such as SPI, I2C, UART, ADC, and more. M5StampS3 also comes with 8MB of built-in Flash, providing sufficient storage space for users.
I've built a few HA-compatible systems using M5Stack products; mostly the Atom-S3 Lite connected to various sensors and lights.
I really like the Nest encoder/button feel, so I was considering trying to hack mine into becoming a desktop volume control/button... but I'm probably lacking the skills to not make a mess of it. Would love to see how you interface with the existing hardware!
Vaguely related - two encoder wheel projects on YouTube that might interest you:
- "Wireless High Resolution Scrolling is Amazing": https://youtu.be/FSy9G6bNuKA
- "DIY haptic input knob: BLDC motor + round LCD": https://youtu.be/ip641WmY4pA
Clever.
Any ideas on how to source 2nd gen Nests? I just checked eBay and my local craigslist; nada.
Do recyclers accept requests? Like pulling all the Nest units from the waste stream?
Sounds very cool! Also interested in how to follow progress. Is it using ESPHome?
Is this project online anywhere yet that I can watch for it to be ready?
seconded, I have never wanted a HN "follow" feature before, but this project sounds great
Currently reading Tony Fadell's book, sounds interesting.
Wow! Useful work, if that’s true about them planning to remotely nerf everyone’s product.
Yet another example of why not to buy a product that needs to be tethered to its manufacturer to work. Good luck. I’d be willing to beta test (I’d have to check what rev mine is)
> if that’s true about them planning to remotely nerf everyone’s product
https://support.google.com/googlenest/answer/16233096?hl=en
> Upcoming end of support for Nest Learning Thermostats (1st and 2nd gen)
> Nest has announced the end of support for Nest Learning Thermostats (1st and 2nd gen). Your thermostat will no longer connect to or work in the Google Nest app or Google Home app starting on October 25, 2025.
I made a film called "Searching For Kurosawa". This short documentary chronicles the story of Kawamura, a man who worked with legendary Japanese director Akira Kurosawa on the set of his opus "Ran". Kawamura was working on the behind-the-scenes (BTS) crew, but his footage got confiscated. It took almost 40 years to recover the footage and present it as his feature film.
My film got screened at the Academy Award-qualifying Bali International Film Festival and the Marina Del Rey Film Festival in the past month. It will be screening next month in New York City at the Asian American International Film Festival.
funny, I was just dubbing some great edits of Kurosawa films in somebody else's film essay with some music I like.
- [deleted]
Awesome! I hope I can find a way to watch it in Barcelona.
I wonder if there's a nice film festival in Barcelona or nearby.
Otherwise, I'll let you know once it's widely available.
Wow :)
+++1
Wow congrats!
I've been working on a 3D voxel-based game engine for like 10 years in my spare time. The most recent big job has been to port the world gen and editor to the GPU, which has had some pretty cute knock-on effects. The most interesting is you can hot-reload the world gen shaders and out pop your changes on the screen, like a voxel version of shadertoy. https://github.com/scallyw4g/bonsai
I also wrote a metaprogramming language which generates a lot of the editor UI for the engine. It's a bespoke C parser that supports a small subset of C++, which is exposed to the user through a 'scripting-like' language you embed directly in your source files. I wrote it as a replacement for C++ templates and in my completely unbiased opinion it is WAY better.
10 years? Man, I envy you. Seriously. You say you work on it in your spare time, so it's not like it's your life passion or something like that, right? How do you keep momentum? I have hundreds of never-finished projects, and I really struggle to finish them or work on them enough to want to keep doing it. Teach me.
Hah, thanks for the kind words <3
In all seriousness, I think I have the same propensity to have a hundred unfinished projects and have a hard time finding motivation to complete them. The difference might be that I have this 'big' project called a 'game engine' that wraps them all up into some semblance of a cohesive whole. For example, projects that are incomplete, but mostly just good enough to be serviceable (sometimes barely):
1. Font rasterizer
2. Programming language
3. Imgui & layout engine
4. 3D renderer
5. Voxel editor
.. etc
Now, every one of those on their own is pretty boring and borderline useless .. there are (mostly) much better options out there for each in their specific domain. But, squash them all together and it's starting to become a useful thing.
It just happened that I enjoy working on engine tech and I picked a huge project I have no hope of ever finishing. Take from that what you will
"I hate to advocate drugs, alcohol, violence or insanity to anyone, but they've always worked for me. --Hunter S. Thompson
Hah. I've been working on my own engine for over a decade, and I completely relate to this. I've torn it down and rebuilt it a few times, I've got multiple branches of it that are built for specific things... but when I want to do something I know it can't do, that could be easily done in some other engine, it just puts a bug up my butt to try and make my own code do that thing. Then I dive into code I haven't looked at for a few years and I realize that so many things could be improved. And I lose a week of sleep yak-shaving this thing that will almost definitely never be seen or used by anybody else. But I see it as a kind of craftsmanship and sharpening my own tools. I don't know another, better way to do that.
> But I see it as a kind of craftsmanship and sharpening my own tools. I don't know another, better way to do that.
Toooootalllly. This project started out for me as a learning exercise, and for a long time an explicit non-goal of the project was to ship a game. It's just my own little land that I know every nook and cranny of for experimenting and, sharpening my tools, as it were. It's also the best way I've ever found.
To be clear, your work is next-level awesome and technically more sophisticated than anything I've done. Mine is Javascript based and essentially a sprite scene-graph rendering engine mixed with pixel-level effects on top of canvas, with an isometric endless-world platform for runners and RPGs built on top of that renderer. I did ship one game with it, back in 2016. Since then, I've relegated my commercial work in that vein to threejs and pixijs. It sounds counterintuitive, but using your own codebase like this to develop a finished commercial product inevitably forces you to pollute the purity of the code. You start to develop sets of features that are specific to the game's needs and which force compromises that shear it away from being a general-purpose green-field engine. Do I still dream of the day when I finish a new game built on my own codebase? Totally. But maintaining a game in production and not having its needs override the intent of keeping a general-purpose engine is like 4x the work.
>I've been working on my own engine for over a decade
Username checks out
Admirable perseverance!
I've always also had a side project or two in this domain but I've never managed to stick with one for more than 3-5 years.
This is pretty cool! I am also interested in game engine programming, but I am in the very beginning of the journey.
Do you have any recommendations on voxel engine learning materials (e.g. books, courses, etc)?
Voxel engines are interesting because they're very much an area of active research. People are often coming up with novel techniques, and adapting traditional techniques in interesting ways. There isn't any good, singular resource for learning about voxel engine development that I know of.
I'd recommend Handmade Hero for a more traditional resource on how to build a game engine. That's how I learned to program for real, and it worked great for me.
I think if you made this a plugin for an existing engine, say Godot, you could get a lot of use out of it. I'd use it!
Curious if you have started using LLMs to speed up any of your development yet?
No. The bottleneck for me is not the speed at which I type code, which is basically what LLMs accelerate. I've been thinking about trying again, but at this point the context windows are FAR too small to feed my entire project to, and it just seems like a giant pain to have to babysit that.
Additionally, the bottleneck for me is typically writing graphics code, and LLMs are hilariously bad at that.
It looks wonderful, well done on the design
It looks pretty awesome, great job!
After 2+ years of maintaining the FOSS lightweight Reddit frontend Redlib [0], I realized that my niche but extremely detailed knowledge and experience of using Reddit's endpoints might be useful. After reverse engineering the mobile app and writing code to emulate nearly every aspect of its behavior, plus writing a codegen framework that will auto-update my code from analyzing the behavior from an Android emulator, I can pretty easily replay common user flows from any IP around the world, collecting and extracting the data. Some use cases:
* OSINT (r00m101 just beat me to it by launching...)
* Research into recommendation algorithms, advertising placement algorithms, etc
* Marketing (ad libraries, detailed analysis of content given data not even exposed to the mobile app due to some interesting side channels, things like trend analysis, etc)
* Market research for products
* Sales teams can use it to find exact mentions of other products. Eg: selling crash reporting software? Look up your target accounts' brands and find examples of complaints.
Plus a few more with more imagination.
So I'm working on a site that allows user access to some of the read-only functions available here. Coming soon :tm:. Been really fun building it all in Rust, though :) If you're interested in anything here, email in profile.
~2 years ago, Reddit was cracking down on this type of usage. This led to a mass exodus of users to lemmynet and other decentralized platforms.
What makes you special in this aspect? You seem like a small fish now, but if your niche project picks up steam, nothing stops them from cutting you off or dragging you to court for an injunction and wasting your personal resources.
That crackdown was for regular API usage aka just regular content access, which definitely isn’t special. Most other “reddit data access” sites either use some sort of headless browser or just the JSON endpoints, which are brittle and limited, whereas I can access the private mobile API that the app uses for ad/recommendation distribution at a much larger scale. These things aren’t accessible via the API. Picture it as: an API where you can access just content, vs having programmatic access to every piece of data the mobile app can access, which unintuitively is not limited to what the mobile app displays (there’s other interesting fields available).
Is there any interest in factoring the Reddit parts out of the UI code? I've been thinking of taking a stab at that myself but figured this would be a good place to ask if you have plans :)
Do you mean a way to have the Reddit app render content from some generic social media provider, while keeping the UI? I haven't thought about that yet. I'm sure it would be possible, but that would require tearing out a lot of backend code and replacing it 1:1. Most of my work has been on the network side of the app, and not much modification; just introspection and inspecting behavior.
My main question: why, do you like the UI? I honestly really hate the reddit app, I haven't seriously used it for browsing since I fixed up Libreddit into Redlib :)
I don't like the Reddit app personally but I also do like something a bit more dynamic than what Redlib offers. Personally I'm fine with JS on the frontend and frameworks like React as long as they're implemented well.
I'd also just like to play around with different styles of frontend just as a way to hack on things.
Ah, I see. You can get pretty far with Redlib as a base + modifying html templates. They're very flexible and easy to read/extend. Though it relies on public methods to access Reddit, not my mobile app secret sauce :)
Oh I thought there was interesting user agent stuff going on in Redlib itself but sounds like not. I'll use the public methods then thanks!
[dead]
After 10 years in defense tech, watching missile attacks in Ukraine and the Middle East made it clear how little most people really get about air defense. So I'm building this simulator, which drops you into the operator's seat. You can test out different scenarios and build an air defense network against various types of threats (stats from real-world systems). It also has Ukraine and Israel-Iran scenarios.
Is this an attempt to give the decision-makers on your projects a way to develop a clue? My work is logistics-related and a lower priority than missile defense, but I'm surprised the people pulling my strings manage to get their pants on the right end of their bodies most of the time. Just curious if you folks have the same problem.
Poked at it for a few minutes. And yes, it's clear how very little I get about air defense.
I would consider adding a tutorial or a toy version that's simplified a bit.
Love it. What could be a good addition IMHO is to add approximate costs of the placed systems, and cost of the ammunition used during the simulation ( for both attack and defense ).
Like the Eisenhower speech. Every missile is 10 new schools, food to feed 100 families for a year, etc.
I don't think that's what they meant.
More along the lines of comparing $200 drones to $200,000 missiles. The economics of warfare and asymmetric warfare.
I think they were making a point about the kinds of things that a society can choose to spend its tax dollars on.
I find that hard to reconcile given what I was responding to:
"Love it. What could be a good addition IMHO is to add approximate costs of the placed systems, and cost of the ammunition used during the simulation ( for both attack and defense )."
I don't follow, sorry. The comment of yours that prompted my reply was in response to this comment:
> Like the Eisenhower speech. Every missile is 10 new schools, food to feed 100 families for a year, etc.
Greetings, Professor Falken. The only winning move is not to play.
Really cool. Wish I could see more of the system log messages, that's the most interesting part to me.
Tangential: do you have insights into viability of mini automated anti-drone turrets? Something you'd place on a truck or pull out of a trench when needed? We already have drones with shotguns. I guess it's the automatic acquisition and targeting that's the difficult part, but just how difficult is that?
really great, would make for a great tower defense style game as well. Start with few resources and learn what each capability can do. Defend against more complex/advanced threats over time.
Is the equipment efficiency meant to capture e.g. using a $1M missile to shoot down a $1k UAV/rocket?
Looks very interesting!
In your pre-made Estonia-scenario, some of the attacks come from Finland. What's that about?
I understand the launches from the Baltic Sea, but launching Kalibr next to a Finnish garrison seems a bit far-fetched.
Reminds me of a nuclear war simulator I had on my Amstrad many years ago, very cool
I tried the Israel-Iran scenario. So, any missile faster than 1000 km/h pretty much has a 0% chance of being intercepted? The data is obviously classified, but this simulation is pretty fun.
Does this take into account the new "drone attacks from within the country's borders" scenario?
Not sure you should use Leaflet for map usage this heavy; it's not really usable now. Maybe look at deck.gl.
I'm working on a new app for creating technical diagrams - https://vexlio.com. It's an area with some heavyweight incumbents (e.g. Visio, Lucid) but I think there's good opportunity here to differentiate in simplicity and overall experience. I'm still in the fairly early phase, and I suspect I haven't quite found the best match of features to customers yet.
From a dev perspective this area has a ton of super interesting algorithmic / math / data structure applications, and computational geometry has always been special to me. It's a lot of fun to work on.
If anyone here is interested in this as a user, I'd love for any feedback or comments, here or you can email me directly: tyler@vexlio.com.
Some pages the HN crowd might be interested in:
* https://vexlio.com/blog/making-diagrams-with-syntax-highligh...
* https://vexlio.com/solutions/state-diagram-maker/
* https://vexlio.com/blog/speed-up-your-overleaf-workflow-fast...
Years ago I was making a diagram editor with the intent of doing code generation from diagrams (like Simulink, not Stateflow). I started with splines for the connections and decided straight lines and junctions would be better for complex diagrams. I realized that a better way to internally define the connecting wires is via a set of lines and their connectivity (vs their endpoint coordinates). Imagine each line segment is defined by a direction (vertical or horizontal) and a position (perpendicular distance from the origin), like ax+by=d where a and b are either 0 or 1. You also need to define which other lines it connects to. Given the list of connections you can then calculate the intersections at rendering time. By sorting the list of connections you can render the line without features at the start and end, and then draw junctions for any intermediate connections. The beauty of this would be to allow dragging blocks around and having the lines follow, with the junctions passing through each other as needed. There is some housekeeping with this data structure (merging colinear segments that connect, breaking segments when needed), but the UI for dragging should be much better than anything out there.
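A rough sketch of what I mean (Python, purely illustrative):

    from dataclasses import dataclass, field

    # Purely illustrative: a wire segment is orientation + perpendicular
    # distance from the origin, plus which other segments it connects to.
    # No endpoint coordinates are stored.
    @dataclass
    class Segment:
        horizontal: bool                 # True: y = pos, False: x = pos
        pos: float                       # perpendicular distance from the origin
        connects: list = field(default_factory=list)   # indices of connected segments

    def intersection(a, b):
        # Where two perpendicular segments cross, computed at render time.
        if a.horizontal == b.horizontal:
            return None                  # parallel; colinear merging handled elsewhere
        h, v = (a, b) if a.horizontal else (b, a)
        return (v.pos, h.pos)            # (x, y)

    # To draw a segment: sort its intersections along its axis. The first and
    # last are its endpoints; anything in between gets a junction dot. Dragging
    # a block only changes 'pos' values, so junctions move while topology stays.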
Interesting - do you have a writeup or a demo available somewhere? What types of junctions were you envisioning?
In the links you posted (2 comments up) there is a PID controller. Out of the difference block the signal splits 3 ways. In Simulink that junction would just be a round dot where the lines meet. In most diagram editors that junction would probably have 5 lines connected to it. In my scheme there would be one horizontal line right through it and one vertical line. There would also be 2 short horizontal lines connecting the vertical line to the P and D blocks. In my scheme you could drag the D block up to the top above the P (D, P, I) and the junctions would change (because their locations are computed) even though the topology has not. I never did a full implementation. Email me at gmail if you want a better explanation or a quick drawing.
This looks really cool. An application I would use this for is to generate code for FPGAs, as finite state machines are very common.
This is an example, https://terostechnology.github.io/terosHDLdoc/docs/guides/st...
But it only outputs an SVG, and there are no tools (AFAIK) that go from diagram to code, which should be easy to set up.
So I'd consider extending this to both generate code and read in code and make these nice interactive diagrams.
Thank you for the feedback! This is a great idea and definitely fits into the vision.
Do you know if the FPGA and/or hardware communities use any type of formalism for design or documentation of state machines? One example of what I mean is Harel statecharts - essentially a formalized type of nested state diagram.
Actually right up my alley. I have many frustrations and reservations against the current offerings. Super excited to see a new player enter the field
Would love to hear those frustrations and reservations - drop me a line if you're interested in sharing: tyler@vexlio.com.
Looks pretty great! The free tier also looks reasonable. The pricing on the other tiers isn't outrageous either if you use it consistently. Unfortunately, I likely find myself in the big gap between the free tier and the Basic plan. I can't justify yet another subscription that I use only a couple of times a year. That said, I would happily pay the $6 on the months that I use the service. Given the churn issues, I'm surprised more SaaS offerings don't work that way.
Thank you for sharing this perspective! Your proposal is potentially a good middle ground, and I will certainly give this some thought.
It looks like a pretty interesting product so I really hate to be that guy but the FAQ page at https://vexlio.com/faq/ straight up doesn't work (whenever I click any of the questions, it does nothing). Also, wanted to know if there was anything in the pipeline to get a Desktop application which would work offline. In several places in the enterprise world especially, I do feel there would be scope for that. I would definitely pay for a desktop version which worked offline for example.
Whoops - FAQ issue should be fixed if you refresh (if it's still broken, give it some time for caches to be invalidated). Thanks for mentioning that!
Re: desktop version. The short answer is yes, probably, but I don't have a concrete timeline. I made tech and architecture choices from the beginning to make sure a cross-platform desktop version always remains possible. Frankly, the biggest obstacle for desktop is not the app itself, but distribution and figuring out a pricing model. The current solution for enterprise, business, and other interested people, is to self-host Vexlio, with separate licensing.
FAQ works fine for me now.
Looks really good and seems intuitive (from just browsing the landing page). Will look more deeply.
Diagram-as-code option?
i.e. a language syntax from which a diagram can be generated?
I find a lot of the time taken up in doing diagrams is laying them out properly and then having to rearrange them when it grows beyond a certain size.
This may, however, be an old-man Visio user problem that's been better solved by more recent options...
Some type of programmatic diagram creation is definitely something I'm interested in supporting. It's not clear to me how large the audience would be, so it's been hard to prioritize.
Gave it a quick try and it's really nice, the aesthetic defaults are great. One thing I found unintuitive: I should be able to connect objects without having to select a new tool (the anchor points on hover should be clickable in any tool mode so I can connect objects on the fly).
Overall amazing though, will be using!
Thanks for this feedback! This is one of those quality-of-life features that I think are really important for the overall experience - I will be adding this.
In the off chance you haven't seen Bret Victor, your app reminds me of him, https://www.youtube.com/watch?v=NGYGl_xxfXA
Super impressive! Your software product looks phenomenal and your website is also extremely nice! Best of luck to you.
Nice project, congratulations, it would be cool to see it integrated into LaTex, what do you think?
Visio and Lucid are trying to cover everything at the expense of practical convenience. Pick a lane and stick to it. Good luck!!!
Definitely seems to be the case from my observation as well. Appreciate it!
What's your long term revenue model?
Enterprise licensing? Donation based? Hosting fees with value-add mark up?
Super cool. Do you consider yourself to be a competitor with Mermaid?
Thanks! I would say no. Mermaid is strongly code-first diagramming, which is an excellent usecase and niche in its own right. I would be surprised if Mermaid ended up with a WYSIWYG editor on top of it, since that is pretty counter to its philosophy (as far as I understand anyway).
Looks great! Your editor design is beautiful.
Looks great, and smart differentiation!
Cheers, thank you!
oh cool! I want to try this soon.
Seamless LaTeX integration is a winner for me!! Will definitely spread the word about this.
Awesome, thank you! If you or your colleagues have other LaTeX-related goals or wishes, do let me know. There's a lot of untapped opportunity there as well (IMHO).
really nice work. I'm going to give it a roll!
Thank you! If you end up having any feedback, definitely feel free to drop it here, or email if you prefer.
Unless you intend to be acquired by Overleaf I don't really see a future for your business to be honest.
This weekend, my modified Android/mobile Point of Sale (POS) app was used to celebrate the 100th anniversary of our village's volunteer firefighting organization.
The standard fiscal POS app was adapted to support a sort of low-trust swarm of waiters who used the app to collect orders. These orders were then transferred to a few high-trust cashiers by scanning QR codes generated on the waiters' apps.
After receiving payments, the cashiers' apps printed invoices and multiple "order tickets" categorized by "food," "drinks", "sweets"... This allowed waiters to retrieve items and deliver them to customers.
The system was used by around 40 users, with new waiters joining or leaving throughout the event. They used their own phones, and the app functioned without internet or Wi-Fi and degraded gracefully (if a waiter didn't use the app, by choice or due to technical problems, they could manually relay orders to cashiers). Customers also had the option to approach cashiers directly, receive their order tickets, and pick up items themselves.
This is not that technically interesting, but I liked how the old manual system that the 70+ year old village firefighting org's main cashier had got digitalized in a non-centralized way. (And I took this chance to try to explain it, as I will have to anyway, to maybe find more users for it.)
> and the app functioned without internet or Wi-Fi
Just curious: How did it work without internet or wifi? Did it do something over bluetooth, NFC, QR code...?
The waiters (many, low-trust) were transferring orders to cashiers (few, high-trust) by showing them a QR code that transferred the data to the cashiers' apps.
Then the waiter paid the cashier (in advance), got the bill to give to the customer and order tickets (printed on a Bluetooth POS printer with a cutter, so they were already separated) to receive the goods (grouped by the stations that gave out the goods: food, drinks ...). The stations took the order tickets and gave them the goods. The waiters delivered them to customers and used the bill to get cash from the customer.
The waiters could use their own starting money and just stop selling at any point, or got it from the main cashier and had to return the same amount at the end.
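If it helps picture it, roughly how the QR hand-off works (a simplified Python sketch, not the actual Android app code; the qrcode library and the field names here are just for illustration):

    import json, qrcode

    # Simplified sketch of the waiter -> cashier hand-off (not the real app
    # code; item names and fields are made up). The waiter's app serializes
    # the order and shows it as a QR code; the cashier's app scans and parses it.
    order = {
        "waiter": "w17",
        "items": [
            {"sku": "drinks-beer", "qty": 2},
            {"sku": "food-sausage", "qty": 1},
        ],
    }
    payload = json.dumps(order, separators=(",", ":"))
    qrcode.make(payload).save("order.png")      # displayed on the waiter's screen

    # Cashier side, after scanning and payment: split into per-station tickets.
    scanned = json.loads(payload)
    stations = {}
    for item in scanned["items"]:
        station = item["sku"].split("-")[0]     # "drinks", "food", "sweets", ...
        stations.setdefault(station, []).append(item)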
It sounds like the Credit Card Processing system at the cashier had internet for processing credit cards etc. but the waiter app has no internet dependencies since it can transfer the order to the cashier system.
It was all cash only. We're not in the US :) ... btw: Slovenia (small country) has more than 1,300 voluntary fire departments, and approximately 8% of the entire Slovenian population are volunteer firefighters. It's the main way especially smaller communities here organize and meet.
Any way we could test this system? Is the code open source?
I'm cleaning up a 25-30 year old bicycle. First time I've stripped one almost right back to the frame.
Strongly recommend the rust remover described by Backyard Ballistics[0] on his second channel[1]; 1 liter water, 100g citric acid, 40g washing soda, generous squirt of dish soap. He claims the acid and alkali cancel out so there's nothing to attack the normal metal surface, but they leave citrate ions which dissolve rust by chelation, which makes it better than just citric acid, vinegar, or soda alone, which all pit and dissolve the clean metal surfaces, and easier/better than wire wool scratching. He also claims it's as effective as EvapoRust but much cheaper and can do more rust dissolving per litre than EvapoRust.
[0] https://www.youtube.com/@Backyard.Ballistics - restoration of old and very rusty guns
[1] https://www.youtube.com/watch?v=fVYZmeReKKY - "The Ultimate HOMEMADE Rust Remover (Better than EvapoRust)", Beyond Ballistics channel
Very cool. I imagine you'll also need a heavy duty degreaser for the drivetrain and bottom bracket unless you're just going to chuck those into the bin anyway.
I have a bottle of degreaser and used it on the derailleur. I was planning to replace the chain (stretched), cassette (broken teeth) and maybe front sprockets (worn) but they are riveted to the crank arms so maybe not those. Bottom bracket is a Shimano sealed cartridge bearing one so that can stay (previous owner upgrade I guess; it has v-brakes instead of the original cantilever). Undecided about using a new chain and cassette with worn front sprockets or temporizing by cleaning the old cassette and chain.
Apart from that, the tyres are too worn, the front wheel rims are so pitted I couldn't smooth them with sandpaper and they don't brake well, the bar grips are worn and torn, the plastic pedals are worn smooth - lots of replacements. And changing the handlebars for the fun of it. Lots of paint chips and dings on the frame being filled with car touch-up paint, which looks awful close up - but I had it and the colour match is close, so it's cheap and tidier. https://spray.bike/ is tempting ... but not this bike.
Inspirations: https://www.youtube.com/@bkefrmr - Bike Farmer, who is trying to be the 'Bob Ross' of tidying up 90s steel bikes and https://www.youtube.com/watch?v=C4n2S7-SC4Q - Spindatt "So I restored the original paint on my vintage MTB" and his followup vid on the process.
Mineral spirits for the grease. Paper towels. Brake cleaner or starting fluid if you want to get it completely oil free for paint or whatnot.
Ideas are coming way too fast to work on them all at the moment.
* Expect/snapshot testing library for F# is now seeing prod use but could do with more features: https://github.com/Smaug123/WoofWare.Expect
* A deterministic .NET runtime (https://github.com/Smaug123/WoofWare.PawPrint); been steaming towards `Console.WriteLine("Hello, world!")` for months, but good lord is that method complicated
* My F# source generators (https://github.com/Smaug123/WoofWare.Myriad) contain among other things a rather janky Swagger 2.0 REST client generator, but I'm currently writing a fully-compliant OpenAPI 3.0 version; it takes a .json file determining the spec, and outputs an `IMyApiClient` (or whatever) with one method per endpoint.
* Next-gen F# source generator framework (https://github.com/Smaug123/WoofWare.Whippet) is currently on the back burner; Myriad has more warts than I would like, and I think it's possible to write something much more powerful.
A few months ago I launched SpiesInDC - https://spiesindc.com, a mail-based (as in the real mail) subscription service about Cold War history. Subscribers, ahem secret agents, receive packages every few weeks containing reproductions of famous documents, stamps from the USSR, Cuba, and Czechoslovakia, coins, and other fun stuff. I keep refining the packages every week to make them better and it is so much fun.
Great, novel idea and great that you've been enjoying the process on your end. Is it possible to gift this? I couldn't tell from the Subscribe section where there's a shipping address field but no billing address information was needed. Sometimes the billing and shipping info have to be the same for payment to go through.
Yep it is possible to gift and in fact that is how most subscriptions come in. The latest round was because of Father’s Day. As for matching billing and shipping fields, not sure, everything has worked fine so far!
Wonderful. Thank you.
How are you handling the mailing? I love the idea of a mail-based project, but I worry that I would forget to go to the post office occasionally.
So the answer to this question is a funny one. I started using a Google spreadsheet to manage shipping dates and that quickly became a chore so like any good nerd would do I built a CRM which is now live if anyone wants to try it: https://6dollarcrm.com/
Wasn’t planning on announcing it here but what the hell.
If you don't mind answering, does this have any users besides you? I've got a few internal tools developed over the years that I don't have the bandwidth to turn into a proper SaaS (not much time for support, polish, new features, etc) but could potentially offer on an "as-is" basis for a token monthly sum but not sure if it would be worth the trouble.
Yep has several users, people I know personally have been beta testing it for a few months now. I haven't started marketing it yet because I have been dogfooding it since February in order to build exactly the CRM I personally want to use.
It also has > 800 automated feature tests and in-app documentation, and it has gone through security audits using tools like ZAP, etc. I've built a lot of SaaS products over the years, and I'm building 6DollarCRM from the standpoint of having learned a lot of things the hard way. I'm currently working on data importers and browser extensions for easily adding new contacts.
Give it a spin and let me know what you think.
Are you concerned about the possibility of 5 dollar crm?
In fact we have been laughing about this because it reminds us of the There's Something About Mary bit regarding 7 minute abs.
Something similar regarding American history by mail was pitched as a successful business on shark tank this season:
This seems a dangerous game to play in the era of Donald Trump. Imagine you or your subscribers get their houses searched...
I might have missed something but don't think nerdy stamp collectors are on any watch lists.
Working on a physical and digital archive of all American vintage print advertising. I've built the archival and database software on Lucee & MySQL to store images and automate the workflow, and I use OpenAI to analyze images and extract metadata. All of the full page ads are pushed to https://adretro.com.
I've gotten the process to fully catalog all of the advertisements in a magazine (about 150 on average) down from over a week to a few hours. I should be able to get through the material within my lifetime now :)
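Roughly what the analysis step looks like (a simplified Python sketch; the real pipeline runs on Lucee, and the prompt and field list here are made up for illustration):

    from openai import OpenAI

    client = OpenAI()   # assumes OPENAI_API_KEY is set in the environment

    # Illustrative only: the real pipeline runs on Lucee/MySQL and the prompt
    # and field list here are made up.
    def analyze_ad(image_url):
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Describe this vintage print ad: brand, product, "
                             "approximate decade, headline, and notable imagery."},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }],
        )
        return response.choices[0].message.content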
A category for politically incorrect ads would be cool.
Good idea! There are certainly some ads where I think "no way that would fly today." Though it's not necessarily about being PC or not (because a lot of the ads would be considered offensive today). It's more like "What were they thinking? This ad makes no sense."
It's funny...I absolutely despise being advertised to, yet I find vintage ads fascinating. I don't know what that says about me.
I feel the same about a lot of graffiti; if it's recent, it's an eyesore, but old graffiti can be extremely interesting. I guess both domains expose some elements of the zeitgeist seldom explored in other mediums. ¯\_(ツ)_/¯
Nice site, by the way!
I think it's more about how there's a lot more advertising now than in the past; and just how generally intrusive advertising has become overall.
Think about a newspaper / magazine: The ads didn't suddenly block the article, move the page around, or phone home to the advertiser. Likewise, the ads wouldn't slow the magazine down, flash, or make noise.
Those elements certainly amplify the awfulness, but I am old enough that I remember reading magazines, and I despised the ads there as well. I'm trying to read this OMNI article about colonizing Mars, and this stupid full-page ad for calculator wristwatches is getting in my way. *shakes fist*
I'm sure glad that the inline ads model never caught on in novels.
Thank you!
Yeah, there is a subtext to the advertising that changes over time that is very interesting. For example, early appliance ads are about saving household labor to spend time with the kids; later appliance ads become more about status and the allure of technology.
You should organize it both by industry as well as by brand and by year. For instance, if I want to look up vintage Rolex ads from the 1960s I could do that.
Okay thanks for the feedback!
[flagged]
Incidentally, I have come across few vintage ads containing or explicitly targeted at black people. Most of the vintage publications I come across are Life, Saturday Evening Post, and Look. I am on the lookout for regional and local publications, which may be different, but they're hard to find because they were not circulated enough to have survived. But there are so many publications I randomly find that it's sometimes daunting how much I feel I'm missing!
I'm finally getting my online presence in order...
This week, I'll set up a Hugo blog with the Ed theme. I love it; it looks like exactly what I'm looking for, and as a former LaTeX enthusiast, it's pretty close. It's readable, minimalist. I'll need to customize the theme, though. I plan to publish blog posts about anything I find interesting.
https://gohugo-theme-ed.netlify.app/
In parallel to this work, I'm setting up a simple system to keep my website + subdomains easy to build, rebuild, and deploy with Caddy on a cheap Scaleway compute server. In the past, I had some ideas I wanted to publish, but the system I went with made managing the sites dreadful.
Once that's ready, I'm back to learning Rust and crypto. It's fun, interesting, challenging, remote-friendly, and the salaries are usually 30-50% better. My current tech stack feels like a dead end: it has a low ceiling in terms of salary, the projects are generally not very interesting (I'm grateful for my current project, it's the best there is with this technology), and I believe the technology will see a slow and steady decline.
Apart from work, I'm building the playground for my 2 yo son, and planting blueberries, he loves them.
As a non-developer, I played with rust and various copilots over the last couple of months. I ended up with a backtesting engine.
Now I've figured out that I want to go all in on actually learning Rust and doing the deep dive into crypto. Enjoy the trip.
> This week, I'll set up a Hugo blog with the Ed theme.
Perhaps a first blog entry would be to show and tell how you setup the blog with Hugo+Ed on your domain in the first place.
As someone who is being told that they need to increase their non anonymous footprint online, I certainly would be interested in reading it.
Just thought about it jokingly yesterday that every developer's first blog post is how they set up the blog or how they wrote a blog engine... :)
Long story short:
- Sign up for Scaleway, get your account approved, and launch an instance. They have affordable "learning" instances that still feel "real" and can later run real services that need a backend. I don't expect a lot of traffic and I don't care if my stuff goes down from time to time; it's for fun.
- Set up SSH. Buy a domain and set up the DNS records to point to your instance.
- Run Caddy on the server to serve a dummy HTML file. Set up HTTPS. Verify you see your stuff in the browser.
- Now, create an actual site. Install Hugo, pick a theme, install it locally, and build locally.
- Set up a script that copies the build folder onto your server where Caddy is serving, then restart Caddy.
- Write some content, check the limits of the theme / your setup, and make sure everything works correctly. Even with the best of themes, you'll want to fix or change something; do that, and if it looks good and you still have energy to work on your blog, start writing posts and let the world know.
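The deploy script is basically just the "build, copy, reload" step; a sketch (Python, with placeholder host and paths):

    #!/usr/bin/env python3
    # Deploy sketch: placeholder host and paths. Builds the site locally,
    # copies the output to the directory Caddy serves, then reloads Caddy.
    import subprocess

    HOST = "user@my-instance"       # placeholder
    SITE_ROOT = "/var/www/blog"     # placeholder: the directory Caddy serves

    subprocess.run(["hugo", "--minify"], check=True)
    subprocess.run(["rsync", "-az", "--delete", "public/", f"{HOST}:{SITE_ROOT}/"], check=True)
    subprocess.run(["ssh", HOST, "sudo systemctl reload caddy"], check=True)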
This could still definitely be a blog post but great and well summarized walkthrough, thanks a lot!
> as a former LaTeX enthusiast, it's pretty close. It's readable, minimalist.
Really nice! But you still need at least basic justification if you want to relate it to LaTeX. ;)
I've been having the thought that I should curate my mostly-anonymous online presence for my career. Is that why you're doing it? Curious what inspired you to do this and what steps you're taking
What tech stack are you currently using that you see as a dead end?
Flutter and Dart. It's not that bad, I'm not saying it's dead, I'm saying it's a dead end for me.
I don't see many opportunities that pay well, are interesting, and are available remotely. I'm happy at my current position, but if they were to ever "right-size" the team, I'd be fckd, so I spend my nights learning other stuff.
I started with Flutter in 2018. Back then it felt "magical" for mobile development; now all the competitors have caught up. They also (IMO) waste their time reimplementing Flash on the web, which is horrible for 99% of the cases. The community is also off-putting: if you observe obvious flaws, 10 GDEs come at you like you're a POS.
In general, mobile has a lower ceiling than backend, frontend, systems, etc... Mobile is also usually a lower priority for the business than web.
Curious what projects you use rust on for crypto?
I'm still in the "learning Rust and discovering crypto" phase.
As I have a web+mobile background, I'll probably start with some simple mobile or web apps, a wallet, price alerts, seed phrase gen, ens explorer, etc, basically anything that's crypto / defi / blockchain adjacent to understand the field better and ease into it.
Then, I'll also build stuff from the ground up (build your own blockchain, smart contracts, etc) so that I have a deeper understanding of the basics, not just "hand-wavy" ideas like "freedom, sovereignty, decentralized, store of value, trustless, permissionless", etc.
In parallel, I also plan to do non-crypto stuff to practice Rust and to have an escape route to web Rust in case I don't like crypto all that much or can't get a job right away due to lack of Rust + crypto experience..
Then, I hope, as I have a better understanding of the field, I'll have more interesting project ideas, too.
If you find something solid behind the hand-wavy stuff, I’d really love an email with details.
I am a PhD student, and for a while now I've been designing and developing a distributed network protocol that enables dynamic resource allocation across heterogeneous nodes, which I've called Rank. It's designed to handle computational, network, and temporal resources in fully distributed environments without central controllers, but it can also handle a centralized environment.

Rank implements four core functions: discovery (finding paths between nodes), estimation (evaluating resource availability), allocation (reserving resources), and sharing (allowing multiple services to use the same resources). What I think makes it unique is its ability to operate in completely decentralized environments with heterogeneous nodes, making it particularly valuable for edge computing, cloud gaming, distributed content delivery, vehicular communications, and grid computing scenarios. The protocol uses a bidding system where nodes evaluate their capability to fulfill resource requests on a scale from 0-1, enabling dynamic path selection based on current resource availability.

I've implemented it in C++ and then also created a testing framework to validate its performance across different network topologies. This is still a work in progress and I am eager to publish results someday!
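To make the bidding part concrete, a toy sketch of how a node might score a request (Python, illustrative only; the real estimator in Rank is more involved):

    # Toy sketch of the bidding step (illustrative only; the real estimator in
    # Rank is more involved). A node scores a request on a 0-1 scale based on
    # how much headroom it has; the path is chosen from the neighbors' bids.
    def bid(request, capabilities):
        scores = []
        for resource, needed in request.items():
            if needed <= 0:
                continue
            available = capabilities.get(resource, 0.0)
            scores.append(min(available / needed, 1.0))   # 1.0 = fully satisfiable
        return min(scores) if scores else 0.0             # limited by the scarcest resource

    def select_next_hop(request, neighbors):
        bids = {name: bid(request, caps) for name, caps in neighbors.items()}
        best = max(bids, key=bids.get)
        return best, bids[best]

    neighbors = {
        "edge-a": {"cpu": 4.0, "bandwidth": 100.0},
        "edge-b": {"cpu": 1.0, "bandwidth": 500.0},
    }
    print(select_next_hop({"cpu": 2.0, "bandwidth": 50.0}, neighbors))  # ('edge-a', 1.0)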
This sounds interesting. If you want to discuss it or just want a proofreader, please email me. I have done research in both distributed algorithms and economics. hackernews@mike.nahasmail.com
If you don’t know about these already, read about “self stabilizing algorithms”. They are fault-tolerant (to a certain definition) which is important in large distributed algorithms. I used one to build virtual networks with 10,000 nodes.
Thank you so much for your interest! I am working on publishing results and trying to create a proper webpage to reference Rank and all the documentation. My goal is to open-source this project as soon as I can so that everyone is able to build their solutions out of it and also contribute to the project. I'll keep you posted on that!
That sounds really interesting and I would also like a social media link or somewhere we can be kept abreast of updates.
This sounds promising. Keep us posted! If there's anywhere we can track progress, please link :)
Orleans would be good to checkout
Thanks! Actually I was not aware of Orleans as I never got close to .NET environments, but thank you for noticing it to me.
Hopefully it comes across as helpful and not condescending.
You're probably looking for "showing it to me" or "making me aware of it" rather than "noticing it to me" as noticing is usually used like "I noticed thing x" or "You have been noticed"
Oh, you're right, I am sorry! Yes, I meant "for showing it to me" or "making me aware of it"... I am not an English native speaker, and it was too early in the morning, I guess :)
https://sewerreport.com - I am a dev/sewer inspector; I've done over 20k inspections for real estate alone. I built the ultimate AI report generator based on my voice-to-text notes. Reports, email notifications, Stripe integration. Payments and invoices. Unlock reports when paid. Square appointments integration: it pulls all appointments and fills out new report fields for me. No copy-pasting anything ever again. Very niche, but it saves me 3 hours a day. Next.js; it's really been life-changing for me.
Dev/sewer inspector is an interesting combination. Were you a developer first, and took up sewer inspection as a side job, or vice-versa ?
Nicely done! Those time-saving features are really valuable. I'm working on something similar for home inspectors.
- looks like your output report is an HTML page of text and media. Do you generate any PDFs?
- how much time does a report take to complete?
- how long have you been developing sewerreport.com?
- how many customers do you have?
Nice. You make any coin on this yet?
https://inlovingmem.com/ - is a tribute to my recently deceased mom that I vibe coded over the last week. I felt her life deserved to be celebrated widely but wanted to be sensitive to her privacy. I've also built in a number of interactive features for participation in funeral services etc, before, during, and after.
Folks have reached out about having an 'In Loving Memory Of' site for their loved ones, so I'm turning this into a side business to help out more with my (now widowed) father's retirement and care.
My sincere condolences for your loss, she must have given you incredible peace and strength to be able to produce this so early!
Thank you. This didn't come from a place of peace or strength but from grief and a sense of need to honor her. One possibility for vibe coding is that it may turn app/web development into a form of therapy for more non-professional developers, and eventually all non-developers.
This is one lovely concept. What did you use to vibe code it?
Loveable
I'm sorry for your loss.
Thank you. I for one appreciate the courtesy of expressing sympathies. I don't question the motivation or whatever. It's just a kind gesture.
I will note that I'm trying not to think of her death as a loss. It certainly is in many ways for grandkids and others who were just starting to get to know her. But for the rest of us, I like to think we have a part of our deceased loved ones with us that we now have the responsibility to carry forward.
You killed her?
I'm cynical in general, but this type of stuff always sticks out. "I'm sorry for your loss" from one nameless headless stranger to a different nameless headless stranger feels as sincere as an AI bot, and that's to say it absolutely isn't.
Same as people saying things like "Don't say no one loves you, because I love you <3" but it's in a forum like this, or on Reddit. You don't know them. you don't love them.
You don't need to know them to empathize with them.
But is it real empathy? Did they actually pause and feel bad and convert their emotional response to some written message?
Or did they just short circuit. "Dead relative -> Say sorry for your loss". Like an AI bot.
It's the second one.
NotAnOtter smells like IsASkunk ... Why not just sit this one out instead of crushing the sentiment? I lost my mom in February and appreciate when people offer their condolences. And in this case, when I offer my condolences, I have at least some idea of what they're going through.
One of my younger brothers died a few weeks ago (he was 67; I'm 75). When people offer sympathy, I accept it and don't question their motives or involvement.
- [deleted]
Not as exciting or big as some of the projects on here, but just a small personal one I’ve been wanting to do for a while.
I recently impulse bought an Epson receipt printer, and I’ve started putting together a server in Go to print a morning update every day. Getting it to print the weather, my calendar and todos, news headlines, HN front page. Basically everything I pick up my phone for in the morning, to be on paper rather than looking at a screen first thing. Very early days but hacking away and learning escpos/go! (Vibecoding a lot of it)
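For anyone wanting to try the same thing, the core of it really is just ESC/POS bytes; a rough sketch of the idea in Python (not my actual Go code; the device path and the briefing contents are placeholders):

    # Minimal "morning briefing" printed as raw ESC/POS bytes.
    # /dev/usb/lp0 is a placeholder device path; adjust for your printer.
    ESC_INIT = b"\x1b\x40"        # ESC @  - initialize printer
    BOLD_ON, BOLD_OFF = b"\x1b\x45\x01", b"\x1b\x45\x00"
    CUT = b"\x1d\x56\x00"         # GS V 0 - full cut

    def briefing() -> bytes:
        lines = [
            (True,  "Good morning!"),
            (False, "Weather: 18C, light rain"),   # stand-in data
            (False, "Calendar: 10:00 standup"),
            (False, "Todo: water the plants"),
        ]
        out = ESC_INIT
        for bold, text in lines:
            out += (BOLD_ON if bold else b"") + text.encode("ascii") + b"\n" + BOLD_OFF
        return out + b"\n\n\n" + CUT

    with open("/dev/usb/lp0", "wb") as printer:
        printer.write(briefing())

The real server obviously pulls the weather, calendar, and HN front page from their APIs first; this is just the printing end.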
Wow this is a really interesting concept. I have had many ideas for how to loosen the grip of the digital maelstrom on my brain. You're right, not looking at the phone in the morning is critical, and reading a few things on a page seems a lot more weighty and important than flitting by things on a phone.
Did you see the recent submission "A receipt printer cured my procrastination [ADHD]" ( https://news.ycombinator.com/item?id=44256499 ) ?
I did. I'd had the idea for a while before seeing that article, but reading it was definitely a push to finally try it out.
Very cool. I’ve thought about a digital dashboard for something similar (wave / weather report mostly) but I love the printer aspect.
This reminds me of a project for using a receipt printer to print physical tickets of GitHub issues. https://aschmelyun.com/blog/i-built-a-receipt-printer-for-gi...
You have an interesting point. Screens are always changing and rarely taken seriously. Words on paper create a sense of weight and permanence. Make it work!
Watch out for the BPA in the receipt paper
Where I am, BPA receipts are banned, fortunately. Also making sure to buy BPA free alternatives.
Are there any keywords that help with identifying the most clean/neutral paper? Went down the rabbit hole briefly and had a tough time feeling confident in what was credibly Bpa free (at least on Amazon).
Which printer did you buy? Only gave it a quick glance but there seems to be a wide variety of printers...
Very different from all the magic mirror sort of solutions. Nice!
Love it, I should do the same. We can compare results :))
The last couple of weeks I've been building 'Recivo', a very simple way to receive emails programmatically. There are plenty of API-based services that can be used to send emails, but receiving them is harder. My service exposes a simple REST endpoint + event webhook that makes it a 5 min setup to start receiving. Attachments are included as well.
The main use-cases I'm thinking of right now is triggering agents using email or a very simple document upload flow to any SaaS (just forward an email to the SaaS).
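To give an idea of the integration, a receiving endpoint on the customer side might look roughly like this (illustrative sketch only; these field names are not the final schema):

    # Hypothetical webhook consumer for an inbound-email service.
    # Field names (sender, subject, attachments) are assumptions for illustration.
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/inbound-email", methods=["POST"])
    def inbound_email():
        event = request.get_json(force=True)
        print(f"From: {event.get('sender')}  Subject: {event.get('subject')}")
        for attachment in event.get("attachments", []):
            print("Attachment:", attachment.get("filename"))
        # Hand the body off to an agent, a parser, an import job, etc.
        return {"status": "ok"}, 200

    if __name__ == "__main__":
        app.run(port=8000)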
This is a good idea. I think email is poised to become a more influential user interface tool, due to the advent of LLMs.
this is awesome, good work!
https://spring-agriculture.com/
Autonomous robotics for sustainable agriculture. Based in the south of the UK. Prototypes of an autonomous mechanical farm-scale weeding robot currently beginning real-world testing. Still a huge amount of work to do though.
Hardware and software developed fairly much from scratch, not using ROS (for not entirely crazy reasons...); everything written in Rust which I find well suited to this application area.
The robot is built using off-the-shelf components and 3d-printed custom parts, so build cost is surprisingly low, and iterations are fast (well, for hardware dev).
On robot compute is a couple of Raspberry Pi 5s.
Currently using the RPi AI Kit for image recognition, ie Hailo 8[L] accelerators.
Not currently using any advanced robotics VLA-type AI models, but soon looking to experiment with some of it, initially in simulation.
Feel free to get in touch if you'd like to talk :) Contact details in my HN profile, and on our website.
I invested in Small Robots Co (https://smallrobotco.com/) who were doing something similar. They had a good product, good brand, raised some funding, and were starting to get traction, with robots in trials on a number of farms, but at the end of the day they ran out of funding before they reached default-alive.
It's a tough space - convincing farmers to give it a go and running trials takes time and the UK isn't a very startup-friendly environment - investors are too often looking for a quick return.
This is such an important area - it's only going to become more critical to be able to grow more food whilst using less fertiliser and weed killer - so I wish you the very best of luck!
Nice to meet an SRC investor :)
Yes, it was all very sad the way SRC ended.
Coincidentally, we're based fairly close to where they operated. We are in touch with some of the people that used to be involved with SRC, and have been able to learn from some of their experiences. There is agreement that the UK can be difficult for this kind of startup, but also about the importance of the product area.
Very interesting
I have seen a few of these, but only one (about a decade ago) that used legs not wheels
Wouldn't it be better if the robot walked rather than rolled?
You may be able to illuminate this for me...
Before we started building we considered many different designs, including legs. However, it introduces significant extra mechanical and control complexity, with more complex failure modes - e.g. one leg gets stuck in the mud; it also would be more expensive to build.
So we decided to stick with wheels, at least for this product iteration!
Very Cool!! I'm pretty new to the robotics world, why are you avoiding ROS?
My knowledge of ROS is a couple of years out of date, but primarily it was that reproducible testing and simulation, with precise time/clock management, which is essential for a reliable product, was very difficult in ROS.
I also felt the ROS build system was more convoluted than necessary, and it seemed rather brittle - it was too easy to break it with OS or other updates.
We found that many off-the-shelf ROS nodes didn't do quite what we wanted, and ended up spending much more time than expected rewriting code we had thought we wouldn't need to. It is quicker, and we end up with less and more maintainable code, by writing it ourselves.
I expect this could have been resolved, but when testing ROS we also ended up using more compute resources on-robot than we expected.
Using our own system allows us to build exactly what we require, which has become more important as our system gets larger and more complex; and means integrations into other systems (including testing) are easier.
You are already the second case in two weeks that has abandoned ROS for industrial purposes (and not university), preferring to build something of their own. I agree that the build system is more complicated than it should be, but I was unaware of the problems related to the resources used by the nodes.
Your comment gave me a lot to think about, thank you.
A homegrown Plex.
After a lot of grief trying to make Plex and jellyfish work with my collection, and then some more with the community [1], I decided to make my own.
There's no selling point or clear pathway to monetize, as other solutions are way more mature and feature complete, but this is my own and it serves my needs the best.
I've been working on it on and off for the last 8 years or so, and it's been my personal benchmark for the js ecosystem. The way it works, every now and then I come back to the project, look at the latest trends in the js world and ask myself a simple question - what should I change in the codebase to bring it in line with the latest trends. And every time it leads to a full rewrite. Kind of funny, kind of sad.
In a nutshell I have a huge movie collection - basically I'm preparing for armageddon where all online streaming services cease to exist and I need both backend to fetch me detailed information about movies in the collection as well as frontend to help to decide what to watch tonight.
My next major endeavor will be trying to integrate RAG to take a bite at my holy grail - being able to ask a question like "get me a good gangster flick" and get reasonable recommendations.
[1] I think it was jellyfish where I was asking on their forums how to manually create a collection, stating I'm a software engineer with 20+ years of experience, and they kept telling me that I shouldn't touch the code... while having an online campaign asking for volunteers to contribute to the codebase.
I'm trying to go the other way with my (simple) web apps- writing them so I don't have to rewrite them later. The whole UI is basically a form and a table, so I figured I should try.
For me that means Go + stdlib HTML templates (I want to try Gomponents at some point) to minimize dependencies. I copied the HTMX JS minified file into my source tree for some interactivity. I handwrote the CSS.
It looks very "barebones" (some would say ugly), but it's been solid as a rock. It's been a year and I haven't needed to update a thing!
I had my childhood heroes who were working on one of the first major apps in Elbonia and who helped me learn programming.
I remember asking them some 10-15 years later to help me with a project and they were like "sure, we'll do it in CakePHP". Initially I was like "you mean in Cobol?". But then I realized they were masters of that tech, it works, and there's no need to reinvent the wheel and learn some new trendy web framework that will be forgotten in a blink of an eye.
Jellyfin right?
yeah, I was commenting on a phone, and the autocorrection was harsh on me
I'm working on https://tickerfeed.net - a new kind of forum for stock market discussion.
After HashiCorp was acquired by IBM I decided to take time off from corporate life and build something for myself. For years I've also been a casual retail investor on the side.
Forums like /r/stocks and /r/wsb have been useful resources in the past for finding leads and interesting information. But meme-ification (among other factors) has substantially degraded sites like Reddit, to the point where interesting comments are much fewer and farther between. With TickerFeed I'm hoping to recapture what was lost - a platform where investors can discuss companies and all things stock market through meaningful long-form content.
It's also a chance to build something with my dream stack - Go + HTMX + SQLite, and that's been fun :)
Cool!
Bogleheads used to be a place with serious folks, but I haven't been there in a decade or more so no idea what it's like these days.
+1 on your tech stack
HTMX is so much fun, and the HATEOAS framework it encourages is a breath of fresh air in web development
Wonderful stack that. Site loads really quick too (except for some ads that took 3-4 seconds to load)
Working on two projects right now:
- LegalJoe: AI-powered contract reviews for startups, at the "tech demo" phase right now: https://www.legaljoe.ai/
- ClipMommy: A macOS tool to help (professionals who record a lot of videos | influencers) organize their raw video clips. Simply drag a folder of "disorganized" videos onto ClipMommy, and ClipMommy organizes the videos into folders / subfolders, adding tags, based upon some special statements that you can make at either the start or the end of your video (think audio-based "clapboard"). I'm expecting to release this within a week or two on the Mac App Store (Apple allowing...).
As an aside, I've been very impressed with Claude Code, it's (for me at least!) leading the way for how the next generation of business software might leverage AI. I plan to iterate on LegalJoe to make it more "agentic" as a result of what I've seen is possible in Claude Code.
Legal Joe looks great. Nice video. Don't need it now, but it seems very useful
Building this as a Word add-in is very clever. Good work!
Cheers!
I would have liked to also provide a Google Doc plugin, but the Google Docs APIs [1] don't provide the required capabilities (specifically: a way to create tracked changes). Word's Add-In APIs [2] are also limited in some regards, but since they let you manipulate raw OOXML, you can work around those limitations for the most part.
[1] https://developers.google.com/workspace/docs/api/how-tos/ove...
[2] https://learn.microsoft.com/en-us/javascript/api/word?view=w...
Working on RSOLV.ai - automated security vulnerability remediation. Currently a one-man shop.
The insight: Most security scanners find problems but don't fix them. Industry average time to fix critical vulnerabilities is 65+ days. We generate the actual fixes and create PRs automatically, including educational content on the nature of the vulnerability and the fix in the PR description.
Technical approach:
- AST-based pattern matching (moved from regex, dropped false positives from 40% to <5%)
- Multi-model AI for fix generation (Claude, GPT-4, local models)
- ~170 patterns across 8 languages + framework-specific patterns; can grow this easily but need more customer validation first.
Business model experiment: Success-based pricing - only charge when fixes get merged ($15/PR at the moment). No upfront costs. This forces us to generate production-quality fixes & hopefully reduces friction for onboarding.
Early observation: Slopsquatting (AI hallucinating package names that hackers pre-register) is becoming a real attack vector. It's pretty straightforward to nail and has a lot of telltales. Building detection & mitigation for that now.
Stack: Elixir/Phoenix, TypeScript, AST parsers
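To illustrate why AST matching beats regex for this, here is a small Python sketch of the general idea (not our actual patterns or stack): flag eval() calls on non-literal input, which a plain regex for "eval(" would also fire on inside strings and comments.

    # Toy AST-based check: report eval() calls whose argument is not a literal.
    import ast

    SOURCE = '''
    x = "eval(user_input)"      # just a string, not a call
    eval(user_input)            # real finding
    eval("1 + 1")               # literal argument, lower risk
    '''

    tree = ast.parse(SOURCE)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name) and node.func.id == "eval":
            if not (node.args and isinstance(node.args[0], ast.Constant)):
                print(f"line {node.lineno}: eval() on non-literal input")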
Building https://www.typequicker.com
Long-term, passion project of mine - I'm hoping to make this the best typing platform. Just launched the MVP last month.
The core idea of the app is focusing on using natural text. I don't think typing random words (like what some other apps do) is the most effective way to improve typing.
We offer many text topics to type (trivia, literature, etc) where you type text snippets. We offer drills (to help you nail down certain key sequences). We also offer:
- Real-time visual hand/keyboard guides (helps you to not look down at keyboard)
- Extremely detailed stats on bigrams, trigrams, per-finger performance, etc.
- SmartPractice mode using LLMs to create personalized exercises
- Topic-based practice (coding, literature, etc.)
I started this out of passion for typing. I went from 40wpm to ~120wpm (wrote about it here if you're interested: https://www.typequicker.com/blog/learn-touch-typing) and it completely changed my perspective and career trajectory. I became a better programmer and writer because I no longer had to think about the keyboard, nor look down at it.
Currently, we're doing a lot of analysis work on character frequencies and using that to constantly improve the SmartPractice feature. Also, exploring various LLM output testing/observability tools to improve the text generation features.
Approaching this project with a freemium model (have paid AI powered features; using AI to generate text that targets user weakpoints) while everything else in the app is completely free. No ads, no trackers, etc. (Hoping to have sufficient paid users so that we can run the site and never have to even think about running ads).
I've received a lot of feedback and am always looking for ways to improve the site.
So I've got some things that seem a little bit weird to me:
1. Typing uppercase characters counts as a mistake
I'm not sure how that got to be the case, but somehow typing an uppercase letter instead of the lowercase is a mistake, despite the fact that sentences start with a lowercase letter. This conflicts with my muscle memory of starting sentences with a capital letter.
2. WPM is not a useful metric on its own
WPM can rise and fall depending on the length of the word. The bigger the word the less likely you are to type that word correctly from muscle memory, so the speed drops. The speed also drops due to the word being longer. I believe having both metrics would yield more useful data, such as when do you slow down etc.
Speaking of which, there are some more statistic things that could help, like measuring how fast you are at fixing the mistakes, or measuring three-letter combinations instead of two-letter combinations, because the context of the third letter might help, but you do need more data to gain a statistically significant result. Maybe trying to classify mistakes by the side of keyboard they happen on -- i.e. are they simple typos or a miscoordination of your hands.
---
Also, as pointed out by another commenter, hands also threw me off. I've been observing them and it's interesting that I don't use my little finger for the left row -- it's used in case I need to press shift.
Hi, thanks for checking out the app and for the feedback!
> 1. Typing uppercase characters counts as a mistake. I'm not sure how that got to be the case, but somehow typing an uppercase letter instead of the lowercase is a mistake, despite the fact that sentences start with a lowercase letter. This conflicts with my muscle memory of starting sentences with a capital letter.
So if you click on the topics (or whatever mode you're on), you will see the Options menu on the side. Capitalization is off by default but you can flip that back if you prefer capitalization. I've had folks request that capitalization be off by default hence the current state but I might change the default settings.
> 2. WPM is not a useful metric on its own
All typing sites generally use the same formula to calculate WPM - the length of the word doesn't matter. Most sites (pretty much all I've tried) use this formula: https://www.speedtypingonline.com/typing-equations. By "all typed entries" it means characters in this case. So it always assumes a word length of 5 (the average word length) and that's how it's calculated across all typing sites.
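In code, that standard formula boils down to something like this (illustrative snippet, not the site's actual implementation):

    # Net WPM: all typed characters, normalized to 5-character "words",
    # minus uncorrected errors, per minute.
    def wpm(chars_typed: int, uncorrected_errors: int, seconds: float) -> float:
        minutes = seconds / 60
        gross = (chars_typed / 5) / minutes
        net = gross - (uncorrected_errors / minutes)
        return max(net, 0.0)

    print(wpm(chars_typed=450, uncorrected_errors=3, seconds=60))  # 87.0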
We have VERY detailed metrics. I may add a CPM toggle (toggle between both) but it seems most people prefer WPM as that's what they're used to on other sites.
> measuring three-letter combinations instead of two-letter combinations,
We measure both - see trigrams tab in the stats section.
> Also, as pointed out by another commenter, hands also threw me off. I've been observing them and it's interesting that I don't use my little finger for the left row -- it's used in case I need to press shift.
The hands are mostly there for folks learning correct touch typing practice - it's based on the most recommended general guidance for touch typing. It can be toggled off with the hand-icon button :)
What an incredibly interesting use of LLMs (generating text to practice typing). It leans in on what LLMs are good at. That said. I would love to see a middle tier pricing which had some features but avoided the AI use.
Why avoid AI use? Genuine question, I see this around and it seems usually based on a mental model of the environmental cost of AI that does not match impact in the real world.
Environmental cost is a concern, though for me not the main one. In this case it's two things.
1. AI interactions cost the service money, which is inevitably passed on to the consumer. If it's a feature I do not wish to use, I like to have options to avoid paying for that feature. So in this case, avoiding AI use is a purely economic decision.
2. I am concerned about the content LLMs are trained on. Every major AI has (in my opinion) stolen content as training material. I prefer not to support products which I believe are unethically built. In the future, if models can be trained solely on ethically sourced material where the authors have been properly compensated, I would rethink this position.
I'm active in the /r/localllama community and on the llama.cpp GitHub. For this use-case you absolutely do not need a big LLM. Even an 8B model will suffice, smaller models perform extremely well when the task is very clear and you provide a few shot prompt.
I've experimented in the past with running an LLM like this on a CPU-only VPS, and that actually just works.
If you host it on a server with a single GPU, you'll likely be able to easily fulfil all generation tasks for all customers. What many people don't know about inference is that it's _heavily_ memory bottlenecked, meaning that there is a lot of spare compute left over. What this means in practice is that even on a single GPU, you can serve many parallel chats at once. Think 10 "threads" of inference at 20 Tok/s.
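For example, a few-shot generation call against a locally hosted, OpenAI-compatible endpoint (llama.cpp's server exposes one) is only a handful of lines; the URL, model name, and prompt below are placeholders:

    # Few-shot request to a local OpenAI-compatible endpoint.
    import requests

    prompt = (
        "Write one 15-word practice sentence that uses the bigrams 'th' and 'ou' often.\n"
        "Example: though the thousands thought otherwise, the youth fought through the rough southern route.\n"
        "Now write a new one:"
    )

    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",
        json={
            "model": "local-8b",
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 60,
            "temperature": 0.8,
        },
        timeout=30,
    )
    print(resp.json()["choices"][0]["message"]["content"])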
Not only that, but there are also LLMs trained only on commons data.
Thanks!
Yeah, LLMs are indeed really good for this use case.
> That said. I would love to see a middle tier pricing which had some features but avoided the AI use.
Only paid features are AI features. Everything else is free and no ads :)
You can type anything and as much as you want, you have access to all the advanced stats, you can create a custom theme from a photo of your keyboard, etc.
Everything but AI features is free right now. (Might change in future as we’re adding a lot more features so we will definitely consider a mid tier price )
Got it. That makes complete sense. I'll definitely check it out.
Hah that's pretty fun. I got tossed about by the animated hands for a few, but grabbed a 194 after that.
Dunno about the trigrams though, mostly it's on the "token group" level for me - either the upcoming lookahead feels familiar or it doesn't, and I don't much get bothered by the specific letters as much as "oh I don't have muscle memory on that word, and it's sadly nestled between two easy words, so it's going to be a patchy bit of alternating speed".
Thank you - glad you liked it and thanks for sharing your impressions and feedback; helps me understand what the users like.
> Dunno about the trigrams though, mostly it's on the "token group" level for me - either the upcoming lookahead feels familiar or it doesn't, and I don't much get bothered by the specific letters as much as "oh I don't have muscle memory on that word, and it's sadly nestled between two easy words, so it's going to be a patchy bit of alternating speed".
Could you elaborate a bit on this part - not sure I fully follow.
The trigrams/bigrams is mostly to help the user discover if there are some patterns that really slow them down or have a lot of mistakes. This is something I wanted that I didn’t see in any other apps.
This is also what we use under the hood for SmartPractice weak point identification. We look at which character sequences are most relevant (for example, the "ta" sequence is way more common than "za") and which ones the user struggles with the most. This is just one of the signals we use in the user weakness profile.
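As a rough sketch of that idea (simplified, not the actual implementation): aggregate per-bigram timing and error counts from the keystroke log, then weight by how common the bigram is in normal text.

    # Toy weak-point scoring: slow or error-prone bigrams, weighted by frequency.
    from collections import defaultdict

    # (bigram, ms to type the second key, was it an error) - hypothetical log rows
    keystrokes = [("th", 95, False), ("th", 110, False), ("za", 300, True),
                  ("ou", 140, False), ("ou", 220, True), ("th", 90, False)]

    # Hypothetical corpus frequencies; "th" is far more common than "za".
    frequency = {"th": 0.027, "ou": 0.013, "za": 0.0002}

    stats = defaultdict(lambda: {"ms": 0, "errors": 0, "count": 0})
    for bigram, ms, err in keystrokes:
        stats[bigram]["ms"] += ms
        stats[bigram]["errors"] += err
        stats[bigram]["count"] += 1

    def weakness(bigram: str) -> float:
        s = stats[bigram]
        avg_ms = s["ms"] / s["count"]
        error_rate = s["errors"] / s["count"]
        return (avg_ms / 100 + 3 * error_rate) * frequency.get(bigram, 0)

    for bg in sorted(stats, key=weakness, reverse=True):
        print(bg, round(weakness(bg), 4))

The frequency weighting is what keeps a rare, awkward sequence like "za" from dominating the practice plan over a common one like "ou".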
This is very neat. One piece of feedback and a gripe I have with a lot of these is that missed or extra characters throw off the entire next sequence and essentially require backing up to deal with them, as opposed to wrong characters which are fine to just be mistakes you move on from. It'd be great to have some detection for when the user is continuing that re-aligns their string.
Thank you :)
> One piece of feedback and a gripe I have with a lot of these is that missed or extra characters throw off the entire next sequence and essentially require backing up to deal with them, as opposed to wrong characters which are fine to just be mistakes you move on from. It'd be great to have some detection for when the user is continuing that re-aligns their string.
Thank you for the feedback! I’m not entirely sure I can visualize exactly what you mean by this:
> It'd be great to have some detection for when the user is continuing that re-aligns their string.
Could you give an example of this?
I'm curious because I've been exploring alternative and unique UI ideas for typing practice, so this could lead me in a new direction.
I pulled up the first text I found from the site:
> according to its archive...
Let's say I mistype and don't double the first "c", but otherwise type entirely correctly.
> acording to its archive...
This would be counted as having everything wrong except the first 2 characters, which doesn't feel like a good reflection of my accuracy.
I know this is a hard problem because I don't think there's any simple guaranteed way to re-align the string to account for a possible deletion or insertion, particularly if there are more mistakes in the following text, but finding and using some sort of accuracy-maximizing alignment would be great to have.
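For what it's worth, a standard sequence alignment gets most of the way there on the scoring side (a sketch of the scoring only, leaving the UI question open):

    # Align typed text against the target so a single missed character only
    # costs one error instead of cascading through the rest of the word.
    from difflib import SequenceMatcher

    def aligned_accuracy(target: str, typed: str) -> float:
        matcher = SequenceMatcher(a=target, b=typed, autojunk=False)
        matched = sum(block.size for block in matcher.get_matching_blocks())
        return matched / len(target)

    target = "according to its archive"
    typed = "acording to its archive"   # one missing "c"
    print(aligned_accuracy(target, typed))   # ~0.96 instead of ~0.08 per-position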
Oh I see what you meant!
Yes - this is a very, very good point and something I actually spent so much time analyzing how I could implement a solution to this.
I think I spent over a week at one point. I refer to this issue as an off-by-one or off-by-two inaccuracy. Just as in the example you provided, the user only misses one character but types the rest of the word correctly (however, because they missed one, the whole rest of the word gets marked as mistyped).
This is indeed a very hard problem and in addition to the example you provided there are many other cases where this type of off-by one (or off-by two) mistyping can occur. At this time, I've put that problem on hold. I tried a few solutions but my friends said the UI was too confusing - the general initial feedback I received is to just keep typing as natural as possible; no stopping a user when they make a mistake, no guard-rails of any kind. Just mimic real typing as much as possible.
Issue is, it's one thing to implement the solution to this but another is how to correctly display this to the user. In essence, the text is just a collection with each character having an index. Per each character we measure everything; milliseconds taken to type, errors made for that character, whether it was corrected or not, etc. But if we're handling off-by one or off-by two, displaying this to the user in a non-confusing way is really hard. UX is hard haha
very cute. good luck!
Thank you very much! :)
[dead]
I'm still working on Habitat. It's a free and open source, self-hosted platform for communities to discover their local area. The plan is for it to be federated, but that's a while off yet. I've made some good progress recently. I've added the ability to temporarily freeze user accounts, custom WYSIWYG editing for sidebar content and functionality that allows the administrator to set site-wide announcements to appear and disappear at specific dates/times. I also got some great feedback from users of my instance of it for my local town and so fixed some bugs.
- The idea: https://carlnewton.github.io/posts/location-based-social-net...
- A build update and plan: https://carlnewton.github.io/posts/building-habitat/
- The repository: https://github.com/carlnewton/habitat
- The project board: https://github.com/users/carlnewton/projects/2
Nice idea.
I'd encourage you to embrace full decentralization via blockchain technology. Why? Any successful tech that empowers and connects local communities becomes very politically interesting very quickly.
Thanks. Decentralisation is the aim. I don't think blockchain will be required though, because lat/long coordinates will act as a route directly between instances. I describe this here: https://carlnewton.github.io/posts/location-based-social-net...
Not sure I understand your 'because' here. Blockchain is helpful in making data public and persistent, and not under control on any one individual.
You might also be interested in geohashing as a more user friendly way to group nearby posts, although lat/lon is easier for a cpu.
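Roughly, a geohash just interleaves lat/lon bisection bits into a base32 string, so nearby points share prefixes; a minimal sketch of the standard encoding (my own illustration):

    # Minimal geohash encoder: posts whose hashes share a prefix fall into the
    # same cell; longer prefixes mean smaller cells.
    BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

    def geohash(lat: float, lon: float, precision: int = 7) -> str:
        lat_lo, lat_hi = -90.0, 90.0
        lon_lo, lon_hi = -180.0, 180.0
        bits, even, chunk, out = 0, True, 0, []
        while len(out) < precision:
            if even:  # even bit positions refine longitude
                mid = (lon_lo + lon_hi) / 2
                if lon >= mid:
                    chunk, lon_lo = (chunk << 1) | 1, mid
                else:
                    chunk, lon_hi = chunk << 1, mid
            else:     # odd bit positions refine latitude
                mid = (lat_lo + lat_hi) / 2
                if lat >= mid:
                    chunk, lat_lo = (chunk << 1) | 1, mid
                else:
                    chunk, lat_hi = chunk << 1, mid
            even = not even
            bits += 1
            if bits == 5:
                out.append(BASE32[chunk])
                bits, chunk = 0, 0
        return "".join(out)

    print(geohash(51.5074, -0.1278, 6))  # central-London-ish cell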
Perhaps I'm not educated enough with blockchain to see how it can help with Habitat, but I'll investigate it, thanks. I'll also look into geohashing. I've never heard of it before, but I have been thinking about the issue of multiple posts for the same location and how to deal with that, so geohashing might certainly help with that. Thanks again.
Love the idea!
Thanks so much!
I've been working on Splice CAD – an in-browser cable-harness designer.
Building cables for multiple personal and professional projects, I was frustrated by having to cobble together harness diagrams in Illustrator or Visio, cut snippets from PDFs for connector outlines, map pin-outs, wire specs, cable constructions, and mating terminals, and manually update an Excel BOM.
Splice gives you:
An SVG canvas to drag-and-drop any connector or cable from your library to quickly route and bundle wires. Assign signal names to wires or cable cores.
Complete part data: connector outlines, pin-outs, terminal selections (by connector family & AWG), cable core colors & strand counts, wire AWG/color.
Automated BOM & exports: parts-ready diagrams, wiring drawings, and a clean BOM in SVG, PNG, or PDF.
Connector & Cable Creators: connectors or cables not in the existing library can be added with an optional outline and full specs (manufacturer, MPN, series, pitch, positions, IP-rating, operating temp, etc.), then published privately or shared publicly.
Demos & tutorials: Harness Builder → https://www.youtube.com/watch?v=JfQVB_iTD1I
Connector Creator → https://www.youtube.com/watch?v=zqDsCROhpy8
Cable Creator → https://www.youtube.com/watch?v=GFdQaXQxKzU
Full tutorials → https://splice-cad.com/#/tutorial/
No signup required to try—just jump in and start laying out your harness: https://splice-cad.com/#/harness. If you want to save, sign up with Google or email/password.
Continuing to make some additions and changes to ease use and improve the appearance of the harness design:
[1] Added many standard DSub, M8/M12, and USB connector outlines/pin arrangements to aid in the creation of connectors not currently in the library
[2] In concert with the addition of more connector outlines, added a Magic Button on the Connector and Cable Creator pages that pulls all fields for an entered MPN from Digikey or Mouser. Now, you can just enter your part number (like CDM806-04A-MP-F011-67 for this CUI connector, https://www.digikey.com/en/products/detail/same-sky-formerly...) and the fields will populate automatically. If a standard outline is recognized, you have the option to load the outline and pin arrangement too. For cables, this will pull the core count and other cable properties.
[3] The Harness Builder now features a grid and snap-to functionality to align items. You can also share a link (ie, https://splice-cad.com/#/shared/lgcy2htze9zlqs4igdgycfncy98f...) to a harness design that a collaborator can view or clone/edit).
omg, I wish there was a service like jlpcb / pcbway but for cable harnesses.. do you know of any? I'd love to take something like your tool and choose length and quantity and order it....
Thanks for the comment... yes, I've had that thought many times. There's on-demand fab for just about everything else, but no low-volume cable house with auto-quoting and a nice design interface.
Check out https://www.hi-harnesses.com/ - limited parts at this point but the closest thing I know of.
thanks i will check them out.
Just built a last-mile logistics management solution to replace a SaaS solution for a delivery company I used to be involved with.
Handles everything from real-time driver tracking, public order tracking links, finding suitable drivers for orders, batch push notifications for automatic order assignment, etc.
Backend: Feathers.JS, Postgres + TimescaleDB & PostGIS, BullMQ, Valhalla (for multi-stop route optimization although most of our deliveries are on-demand)
Frontend: SvelteKit
Mobile App (Android only for now): React Native/Expo, Zustand, Expo push notifications, and two custom native modules for secure token storage and efficient real-time GPS tracking. The tracking was probably the toughest to get right to find the best balance between battery/data efficiency and more frequent updates.
Been testing it for a couple weeks and as of last week, that company moved their operations over to it with 50+ drivers and thousands of orders processed through it so far (in a country with pretty unreliable connectivity/infrastructure).
I built it initially as a favor, but I'm open to other applications for it.
Would you be interested in sharing the code? I'm working a similar project and wouldn't mind exploring your code base.
Unfortunately its closed source at the moment but happy to discuss the problem space. E-mail in bio.
> I built it initially as a favor
That's a hell of a favor. Is this something you built by yourself or were you part of a larger team?
I guess it's not ENTIRELY a favor since I founded that company but stepped away a few years back and always felt a bit guilty ever since. They certainly weren't expecting me to build it though.
I built it all myself (including the integration with our ordering platform). It was sort of my white whale project that I've always wanted to do but didn't have the chops/time for.
The advancements in AI-assisted coding encouraged me to give it a shot though and the results turned out great. It was a heavily supervised vibe-coding project that turned into a production-ready system.
Sounds very interesting. I'd love to take a look if open source?
I just took a fortnight off work with the intent of getting away from my laptop, but accidentally ended up making a listings site for London's independent / arts cinemas. As far as I can tell no such thing currently exists, and I feel like it should.
Obviously the main thing is getting the listings data, which as far as I know (mostly) isn't readily available any other way than scraping the cinemas' websites, for which I set this up as a separate-ish project[1]
Hey I'm doing something similar for NYC! But focused on screenings with special appearances only. There's a lot going on here. Happy to share notes.
Cool! Yours is much prettier than mine.
Pretty can get in the way sometimes. I like your site, it's easy to ingest the information quickly. I might simplify the design of mine to make it more usable. There's a reason why hackernews is still looking like this!
You should do all kinds of events by event type, not just movies. Concerts, workshops, open lectures... london.eventhose.com. Then you should find volunteers for other cities to do the same.
Time Out should have done that long ago, but instead they stopped their print edition.
Just finishing the London indie cinemas is an ongoing challenge! But yeah it'd be great to branch out to other places, and cover other types of events.
Neat. Just noticed that the ICA has a Genesis P. Orridge bio through your site, so will likely go and see that. Thanks!
Nice! No Garden Cinema? It's the best cinema in London! (And their website is great, I would imagine easy to scrape)
It's on the quite long list[0] that I haven't got around to yet. I'll try to get it done today.
I've added a PR for this - https://github.com/Joeboy/cinescrapers/pull/4
Hope you didn't start on it!
(By the way, it wasn't too easy to scrape in the end…)
On the one hand that's awesome and it's really great to have a contribution! On the other hand, I unfortunately just added it myself. But it's there now, anyway :-)
More PRs very welcome if you're in the mood!
Ah no/great! I would have let you know before I tried but wasn't sure how far I'd get. Glad it's there and thanks for adding it.
I'll add the Peckhamplex now.
Thank you so much! Just merged and deployed.
Love these tiny, locally focused ideas!
Still on my sabbatical and continuing to build on things I enjoy rather than things that pay (for now).
Main focus is https://wheretodrink.beer, collecting and cataloging craft beer venues from around the world. No ambition of being exhaustive, but aiming for a curated and substantial list. After the last thread, a bunch of people added their suggestions, thanks! It helped add interesting new venues from cities I hadn’t covered yet.
I’m very slowly layering on features, and have a few spin-off ideas I’ll keep brewing on for later. The hardest problem thus far has been attempting to automate popularity rankings and automatic removal of defunct venues without breaching a bunch of ToS.
Also made https://drnk.beer, a small side project offering beer-related linkpages and @handles for Bluesky (AT Protocol). It's been on the backburner, but still very much live.
Probably looking for another small project for the next few months to focus on something else for a while. Always curious to see what others are building and doing. Thanks for sharing!
How did you populate it? The Berlin list was pretty decent. I added one that came to mind.
Appreciate it! In the end, a lot of manual work to be honest.
Think around 5% is from visitors, 10-15% from my own experience and the rest just procrastination research.
Started with the cities I know well, and after that adding on countries or cities close by, main focus has been Europe. At one point I tried to use ratebeer's dataset as a starting point, before they closed down, but it was so horribly outdated and irrelevant that it was more work than sourcing manually.
So I basically look for existing blog-ish top-lists for a city, then try to verify the information with search, social media, untappd, etc. Looking for social proof that the venue is operational and relevant.
To keep it updated I have some very rudimentary monthly tasks to ping a venue's website and notify me on things that signal they're closed. I also email myself a list of 10 random venues with all relevant links daily, so I can do a manual 5 min alive check.
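The alive-check side is nothing fancy; roughly this kind of thing (simplified sketch, the venue list and closure keywords are placeholders):

    # Rough "is this venue still alive?" check: flag dead sites and pages that
    # mention closure, for manual follow-up.
    import requests

    VENUES = {"Example Taproom": "https://example.com"}
    CLOSED_HINTS = ("permanently closed", "we have closed", "thank you for the years")

    for name, url in VENUES.items():
        try:
            resp = requests.get(url, timeout=10)
            text = resp.text.lower()
            if resp.status_code >= 400:
                print(f"{name}: HTTP {resp.status_code} - check manually")
            elif any(hint in text for hint in CLOSED_HINTS):
                print(f"{name}: closure wording found - check manually")
        except requests.RequestException as exc:
            print(f"{name}: unreachable ({exc}) - check manually")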
[flagged]
Please don't do this.
Just shared this with the r/beer discord. Really cool idea.
Cool, added a few for Buenos Aires ;)
Great, thanks! The first entries for South America!
I am developing a utility for viewing photos directly in the terminal. The image is rendered using the ASCII space character " " filled with different colors and shades, which allows you to open an image even on older computers.
Anyone interested can follow the link and star it on GitHub.
https://github.com/Ferki-git-creator/phono-in-terminal-image...
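The core trick is printing spaces with 24-bit ANSI background colors; a simplified Python sketch of the same idea (not the code in the repo), for anyone curious:

    # Render an image as colored spaces using 24-bit ANSI background escapes.
    # Requires Pillow; the terminal must support truecolor.
    import sys
    from PIL import Image

    def show(path: str, width: int = 80) -> None:
        img = Image.open(path).convert("RGB")
        # Halve the height because terminal cells are roughly twice as tall as wide.
        height = max(1, int(img.height * width / img.width / 2))
        img = img.resize((width, height))
        for y in range(height):
            for x in range(width):
                r, g, b = img.getpixel((x, y))
                sys.stdout.write(f"\x1b[48;2;{r};{g};{b}m ")
            sys.stdout.write("\x1b[0m\n")

    show(sys.argv[1])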
I'm building (and have been for the last few years) an open source high-performance Wordpress alternative in Elixir. It aims to achieve 1:1 feature parity. One thing that Wordpress has built up over the years that will take a little longer for me is the plugin ecosystem. But, other than that, I think everything else should be on par. If you're an enterprise, you should easily see over 30-40% savings in server costs just by switching from Wordpress. This has been tested and proven with one of our enterprise clients, who just recorded 500 million requests on a fork of the CMS.
But, I'm determined to see its completion even if there is just one user. I didn't take the Wordpress fiasco, and how they handled it, lightly at all, and it only fueled my motivation even more. ETA is by the end of this year, right on time for Christmas.
If you'd like to read more, here's an article about my CMS: https://medium.com/creativefoundry/what-i-learned-as-an-arti...
If you'd like to get Beta access, my email is listed in my profile.
I don't know if Wordpress has any kind of customizability or scripting, but it's now possible to add Lua scripting, natively, to an Elixir application. If that's handy it's something to consider.
Still working on my books site https://thegreatestbooks.org that I started in 2008. It's been a 1 man team the entire time. I recently made some major algorithm changes that I think greatly improves the rankings. My algorithm code is open source https://github.com/ssherman/weighted_list_rank
I do plan on open sourcing more of the code over time. I also have started working on other sites using the same algorithm implementation (music, movies, video games)
This has just been a side project over the years, generating passive income. I get around 250,000 page views a day, and with ads, memberships, and affiliate links I make around $2,500 a month.
Tech stack is ruby on rails 8, postgresql 17, opensearch, redis, bootstrap 5.3 hosting on 3 servers on linode.
Oh, so you're the one that created that website. When I wanted to start reading more and I did not know where to begin, I found this site and started reading stuff from the top. It was incredibly useful to me, thanks!
I like the idea of a books list. This gives me new inspiration for books that I could read. Other languages like Spanish and French would also be great. :)
Nice! I've been looking for a reliable book ranking site. The main rankings skew to the "classics" that don't always hold up (looking at you Moby Dick) but the books in the genre filters look more interesting.
A couple questions:
* Is this primarily intended for discovering new reads, or for people who've already read the books to debate which is greatest? I found the book descriptions sometimes give away too much, to the point where I stopped reading them for any book I might be interested in reading for pleasure. Examples include The Great Gatsby and Madame Bovary. Perhaps you could have a concise description that stays far away from plot points, and a more expanded description behind a "more" link.
* What dictates whether a series has one place on the list or separate places? Narnia has one for the whole series but Harry Potter has individual listings per book.
* Are ratings and reviews from your own site taken into account in the rankings?
I think it's most useful for discovering new reads, especially with the advanced search and recommendations functionality. I do agree I could do a better job of non-spoiler summaries. Good idea.
- Series have always been a problem. Some book lists will include the entire series, and then some will have individual books. If the series is sold as a single book I'll often just include that. Like Lord of the Rings. Sometimes I will include only the first book in the series on a list, to prevent always adding every single book in a series when a list mentions "harry potter series".
Basically I don't have a perfect way of handling series.
for the last point, kind of. If you add a book to the default "My Favorite Books" user list, it gets aggregated and used for this book list which is included in the rankings. https://thegreatestbooks.org/lists/463
Great idea, great site.
I went Yak shaving.
For my 3D audio project I need an affordable way to make plastic cases. I felt like injection molding services are way overpriced, so I decided to make the molds in-house. Turns out, CNC milling is overpriced, too. As are 5 axis CNC mills. So in the end, we built our own CNC machine.
And like these things always go, I found an EMI issue with my power supply and a USB compliance bug in the off-the-shelf stepper control board. But it all turned out OK in the end so we now have the first mold tool that was designed and machined fully in-house. And I learned so much about tool paths and drill bits. Plus it feels like now that everyone has experienced hands-on how stuff is milled, my team got a lot better at designing things for cheap manufacturing.
Great to get experience in CNC! I've been working on how to market my GatorCAM for CNC. So I'll give you a copy! 2 birds!
It is easy to select multiple holes/pockets at once so if you iterate, you don't spend time redoing CAM! It does traveling salesman to solve for efficient paths which even the expensive packages don't get right. Calculates v-bit paths too.
That looks great for 2D engraving setups. Or for cutting parts out of sheet wood/plastic/metal. But based on the videos, I don't think this is suitable for 5-axis milling yet.
Also, in the video where you cut out the circular logo coin:
https://youtu.be/mGd7EIkCK3g?feature=shared&t=108
it looks to me like you're using a metal cutter bit (with corncob grinding surface) on wood. You might get a lot less burr (that furry stuff on the sides of cuts) by using an asymmetric one-blade bit. They look weird, but they'll cut the wood fibers with a carbide blade instead of trying to rub them off with diamond fragments. You usually want separate tiny chips coming off the material. If it starts stringing - like in the video - then usually it's either the wrong bit or too fast spindle speed (and material melting rather than cutting).
Correct. This is only for 3 axis milling.
I'll have to keep an eye out for a bit like that. I usually use my 30 cent eBay bits with blue tape to avoid the frizzies. But i didn't want to hide the workpiece with tape in the video.
Wood be good to not have frizzies in the demo videos!
Btw fxtentacle...go grab a copy on me! Thanks for the kind word!
That's a pretty big yak to shave! Building a 5 axis that gives good results is a big task. How long did it take you to get that working?
Why do you need to make so many molds?
There was something in this video about not being able to get moulds made in America:
Yes, it's difficult to find someone with both the skills and the machinery. And contrary to software, bad g-code in a CNC mill can easily cause $100k+ in damages if, for example, the spindle head crashes into a solid steel block ... Which is why they will not let outsiders do the programming, even if you think you could do it.
Got a link or blog we can check out?
Yeah! I would absolutely love to see a write up about this too!
Would love to see your machine! Any pics or write up?
* https://cijene.dev (HR, open source) - recently, Croatian retail chains were mandated to start publishing grocery prices online, but not how, so they made a mess of it; I've been building a crawler + unified API to avoid people duplicating the crawl/parse/cleanup effort (open source)
* https://trosko.hr (HR, Android/iOS app) - super-simple receipt/bill tracker (snap a photo of the receipt, reads it using Gemini, categorizes and stores locally - no accounts, no data gathering)
* https://github.com/senko/think (open source) - Python client library for LLMs (multiple providers, RAG, etc). I dislike the usual suspects (LangChain, LlamaIndex) but also don't want to tie myself to a specific provider, so I'm chugging along on my own lib for this.
Have you tried BAML? https://github.com/BoundaryML/baml
I haven't, thanks for the recommendation, will check it out!
I just quit my "day job" to work on a business I've built with some good friends! We make stingray-resistant booties -- ie, if you encounter stingrays in the shallows, these greatly reduce the chance you get stung (https://mydragonskin.com/). I'll be in charge of a couple marketing efforts, helping with Youtube, and other odd things that come up!
My day job required me to go into office frequently, and I'm really feeling the reduced social connection of being fully remote in a small company. Any suggestions how to deal with this? I'm planning to reconnect with old friends, surf a lot, go rock climbing, and maybe take dance / music / other classes. Would also love if anyone wants to work together in the same place (library, coffee shop, etc). I'm in Escondido California, but happy to drive ~30 min to meet folks.
Classes and workshops, something with the same people that occurs over several weeks. But it’s important that the content is something you’re personally interested in.
Would this work for Weever fish? My father in law was stung while walking the beach last year in Portugal, and I've been looking for some sort of sea shoes to bring with us since
This is my first time hearing about weever fish! Reading about them online, my best guess is that yes, our booties probably would work for them! The mechanism is very similar to stingrays, and they are a similar size to the ones we test with (round rays, about dinner plate size). To be clear, we haven’t explicitly tested with weever fish, so it is just my best guess.
Portuguese here, I know people that wear Crocs for that reason.
I have been lucky and haven't been stung in 30yrs of going to the beach here. As far as I know it's a bigger danger when the water is warm, but this might be just a myth.
Legend!!! My buddy just got stung the other week.
Check out Eventship. Hussein is local to SD. You should also meet Fred for press.
I’ll try and remember about these in the winter. I need new booties anyways. How many mm? 2 plus 2 so 4?
Oooh thanks, will check it out!
Ya exactly, 2 layers of 2mm each, for a total of 4mm. They’re less warm than most 4mm booties would be though, because they’re intended for the protection. If you’re in SoCal that’s a feature — your feet should stay warm but not overheat :)
But you could use this boot anywhere you see sharp objects, right? Need not be stingray. Assuming this is the first use case, wish you all the best!
It will help, but the bootie really is fine-tuned to stingrays, in some ways that might not be obvious. Stingrays strike with limited strength, so we measured tons of stingray strikes and designed to stop that. It won’t do much if you put all your weight onto a nail or something.
But if you want a balance of flexibility and stopping stingray stings, we really are the best. Nobody else is even trying, lol, the other options pretty much do nothing, or are encased in steel and not flexible at all.
I'm writing a decompiler for Turbo Pascal 3.0, to reverse engineer an educational game from the 80s.
Since TP 3.0 does no optimisations, and looking at the progress so far (~25% decompiled), it seems like matching decompilation should be achievable.
If/when I get to 100%, I hope to make the process of annotating the result (Func13_var_2_2 is hardly an informative variable name) into a community project.
Neat! I sometimes play around with the idea of reverse engineering and transcompiling a tiny game that I think was probably written in Turbo Pascal 4.0. Maybe 4.0 supported optimizations, but this program seems to have been compiled in a debug mode. (At least, it seems to have no optimization, and has the default {$S+} stack overflow checking at the start of every function.) The lack of optimization makes it (and perhaps other programs written in Turbo Pascal) a really attractive artifact to experiment with transcompiling. When I realized that only the first segment was the actual game, and the other three segments corresponded to standard units used for I/O (etc.), which could be harder to analyze, I realized I could just omit those segments and replace them with new functions suitable for the transcompilation target. Maybe some day I'll get around to finishing it.
Good luck!
Thank you!
It's similar with Turbo Pascal 3.0, but there's only one segment since it's a good old COM file. The compiler just copies its own first ~10000 bytes, comprising the standard library, and splices the compiled result to the end.
I can see how this makes transcompilation relatively straightforward, although the real mode 16-bit code is a bit unpleasant with all the segment stuff going on, so you might as well just decompile :D. It's very possible that similar instructions will be emitted in 3.0 and 4.0 for the same source input.
My program also has the stack checking calls everywhere before calling functions. I think that people using Pascal weren't worried about performance that much to begin with, so they didn't bother disabling it.
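If you want to poke at one of these binaries yourself, the first step is just slicing past the runtime and disassembling the rest (a rough sketch using the capstone Python bindings; the runtime size and filename are placeholders, per the "~10000 bytes" above, and offsets may need tuning to hit an instruction boundary):

    # Sketch: disassemble the user-code portion of a Turbo Pascal 3.0 .COM file.
    # COM files load at offset 0x100; the runtime-library size is approximate.
    from capstone import Cs, CS_ARCH_X86, CS_MODE_16

    RUNTIME_SIZE = 10000          # rough placeholder for the spliced TP runtime

    with open("game.com", "rb") as f:   # placeholder filename
        image = f.read()

    md = Cs(CS_ARCH_X86, CS_MODE_16)
    start = RUNTIME_SIZE
    for insn in md.disasm(image[start:], 0x100 + start):
        print(f"{insn.address:04x}: {insn.mnemonic} {insn.op_str}")
        if insn.mnemonic == "ret":      # stop at the first return, just for the demo
            break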
Sounds cool, what game?
A bit of a niche game: https://www.mobygames.com/game/63804/socher-hayam/
Although it has cult status in Israel for some reason.
I built a simple web app that helped make me more present during a family tragedy:
Brief backstory: While visiting us overseas, my in-laws were in a very bad car accident. Everyone involved is alive and going to be okay. But what followed was a series of emotional, physical and logistical challenges that pushed my wife and her parents to their limits.
During this time I found myself (shamefully) hiding on my phone. I was obsessively refreshing for updates from insurance/hospital teams, sending empty messages, and mindlessly scrolling feeds. My screen time was averaging 12 hours a day. Time I could have spent being fully present with my wife and her parents.
I finally accepted I have a serious phone addiction. I tried Apple Screen Time and a few popular screen time management apps, but found the blocks were too easy to bypass, and some apps were as useful as they were distracting depending on the context (e.g. YouTube). I didn’t necessarily want to use my phone less: it’s an incredibly useful tool, and the distractions were sometimes helpful.
What I really needed was intentional stretches of time spent away from my phone. I built touchgrass.fm as a simple way to record and incentivize those stretches of time. It’s not quite finished, but it’s been helping me stay present for hospital visits, meals and important conversations.
That's a really cool idea!
I am implementing a single Rust process to which you can connect a zero-knowledge proof of identity, such as can be created with ZKPassword from a physical passport. Each user ends up with a keypair which is:
1) Highly Sybil resistant. Neither the keypair owner nor anyone else can re-use the same underlying ID to link to another keypair.
2) Very high anonymity. While the Sybil resistance requires a nullifier representing the underlying ID to be present in a database (or stored in a public, decentralized form for blockchain use), there is no way to connect that nullifier with the keypair. Even if someone were to use brute force to successfully connect the nullifier with a specific underlying ID, such as a passport, there is no way to connect that ID with the keypair. (In the passport case, even merely brute-forcing the nullifier could only be done by the issuing government, someone who has hacked the government database, or someone with physical access to the passport. This is due to the fact that other passport information than the passport number is included in generating the underlying zero-knowledge proof.)
I understand that other technologies may have similar end-functionality, but this has the advantage that most of the functionality is encapsulated in a single Rust executable that could be easily used in any context, whether distributed or decentralized. (If anyone would like to know more, my contact info is at garyrobinson.net.)
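To make the bookkeeping concrete, here is a toy illustration of the nullifier/keypair separation (this is not the zero-knowledge part; hashing the passport fields directly just stands in for the proof, and nothing here should be read as the actual construction):

    # Toy illustration of Sybil-resistant registration bookkeeping.
    # A real system proves knowledge of the ID in zero knowledge; here we only
    # show that the stored nullifier blocks reuse and is not linked to the keypair.
    import hashlib, secrets

    registered_nullifiers = set()   # the only thing the registry persists

    def register(passport_fields: str):
        # Stand-in for the ZK step: derive a nullifier from the underlying ID.
        nullifier = hashlib.sha256(passport_fields.encode()).hexdigest()
        if nullifier in registered_nullifiers:
            return None                          # same ID, second keypair: rejected
        registered_nullifiers.add(nullifier)
        return {"private_key": secrets.token_hex(32)}   # fresh, unlinked keypair

    print(register("P<UTOEXAMPLE<<DOC123..."))   # ok
    print(register("P<UTOEXAMPLE<<DOC123..."))   # None - Sybil attempt blocked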
The rust binary is great, but the underlying zk technology itself desperately needs to be sold to those dealing with things like passports.
In fact, now that I think about it, zk-proof identity will be required in the near future since so many poorly run organizations are leaking ID documents.
I made the same little Roguelike game with Raylib in Odin, C3, and FreeBASIC over the last few weeks. [0] [1] [2]
I started on a Zig one and nope'd right on out of that after a few hours of fighting the compiler.
I'm currently working on porting a bunch of my Rust mini-games to other languages. [3]
[0] https://github.com/Syn-Nine/odin-mini-games/tree/main/2d-gam...
[1] https://github.com/Syn-Nine/c3-mini-games/tree/main/2d-games...
[2] https://github.com/Syn-Nine/freebasic-mini-games/tree/main/2...
[3] https://github.com/Syn-Nine/rust-mini-games/tree/main/2d-gam...
why were you not satisfied with rust for game programming?
I probably put down at least 100k lines of Rust and made 15 games of varying sizes from small jam games to much larger ones [0], [1].
It seems like everyone just wants to make the next big popular engine with Rust because it's "safe", and few people really want to make actual games.
I also felt like prototyping ideas was too slow because of all the manual casting between types (mixing lots of ints and floats is very common in game code, especially in procedural generation).
In the end... it just wasn't fun, and was hard to tune game-feel and mechanics because the ideation iteration loop was slow and painful.
Don't get me wrong, I love the language syntax and the concept. It's just really not enjoyable to write games in it for me...
I've been working on a fully electric last-mile delivery company: https://hudsonshipping.co
Beyond the landing page (built with Astro), I've been building all of the route optimization, the delivery and warehouse management systems. A combination of go and java has allowed me to write a few microservices in the past 6 months to handle all of my logistical processes, and I'm just testing the mobile app in the field as we speak! I hope to make some of the code open-source one day!
This is interesting! Have you considered leveraging Google OR Tools[1] for route optimization? At a previous hyper-local eCommerce startup I worked for, we used it to solve similar problems. The setup and integration are not super easy, but the results far outweighed the effort.
I have considered it! I've opted for a more specialized optimization library that deals specifically in the Traveling Salesman Problem (https://github.com/graphhopper/jsprit). I will revisit this though, might come in handy pretty soon - thank you!
This is a super cool intersection of real world problems and software. How hard has it been to get customers? I assume trust is a big hurdle here. How are you approaching this problem?
Thank you! You've definitely identified the trickiest part, especially when you come in with a track record of, well...0 deliveries (I was working in tech teams before this). Luckily, there are quite a few freight brokers in the NYC metro area, and they are willing to give you a trial period. Another way to approach it is to work with smaller companies and offer discounts during the startup phase. (We're starting deliveries in August)
Sounds really great. Good luck
Thank you, appreciate it!
I'm working on a video game called Astroloot[1], a mix of bullet-heaven and sci-fi space ARPG. After two years, I've finally completed the main campaign and am now starting on the endgame. Ever since playing Diablo 2, I've wanted to create an ARPG. Have to say, this project brought back the joy of programming for me.
How is it on SteamDeck? I see on Proton there is one review so far that the linux experience is good (and they call it Path of Exile in space which is about the best compliment).
Thank you!
I have a SteamDeck myself and the game constantly runs at 90fps. The game has full controller support, so it is very comfortable to play on Deck.
Well heck yea! I'll be sure to pick it up when I get home this evening! I just moved counties and I don't have my desktop with me yet, so I'm looking for an ARPG to play while I miss the latest Path of Exile season :(
Oh, nice!
If you like PoE, you should feel right at home!
My girlfriend recently got into making sourdough and wanted to keep a log of all her recipes. She really wanted to explore the relationships between recipe water percentage and crumb density, or proof time and oven spring, for example. I built her https://sourdoughchronicle.com - a local-first bread journal that allows peer-to-peer recipe and results sharing. Claude + aider had an MVP built in an hour and she's loving it! Oddly enough the comparison charts haven't made it in yet, but that's the next feature on the to-do list.
nice I'm gonna use this!
Lately, I’ve been exploring a few interconnected ideas:
Local-first web applications with a compiled backend – After eight years working on web platforms, the conventional stack feels bloated to me. The client already defines what it wants to fetch or insert, usually through queries. So why not parse those queries and generate the backend automatically (or at least, the parts that can be)?
Triple stores as a core abstraction – I’ve been thinking about using a triple-based model instead of traditional in-memory data structures, especially in local-first apps. Facts could power both state and logic, and make syncing a lot simpler.
Lower-level systems programming – I’ve mostly worked in high-level languages, but lately I’ve been writing C libraries (like hash maps) and built a minimal 32-bit bare-metal RISC-V OS.
It’s all still brewing, but I think these ideas tie together nicely. What if the OS didn’t have a file system and just a fact store? Everything could be queried and updated live, like a Lisp machine but built on facts.
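To make the fact-store idea concrete, here's a minimal sketch (Python just for brevity; the names are illustrative, not from any of my actual experiments):

```
# All state lives as (entity, attribute, value) facts; both app state and
# logic are answered by pattern queries over the same set.
facts = set()

def add(e, a, v):
    facts.add((e, a, v))

def query(e=None, a=None, v=None):
    # None acts as a wildcard in the pattern
    return [f for f in facts
            if (e is None or f[0] == e)
            and (a is None or f[1] == a)
            and (v is None or f[2] == v)]

add("note-1", "title", "Groceries")
add("note-1", "done", False)
add("note-2", "title", "Call mum")

print(query(a="title"))   # every title in the system
print(query(e="note-1"))  # everything known about note-1
```

Syncing then reduces to exchanging facts, which is part of what makes this model attractive for local-first apps.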
Some other things I’ve been playing with:
A jQuery-like framework and element factory - You can pass signals that automatically update the DOM.
A Datomic-like database on top of OPFS - where queries become signals that react to new triples as they enter the system. Pairs well with the framework above.
Isn't this kind of a thing already, with the front end being able to write the SQL queries?
It’s getting there, but it does not handle permissions so you either have to add a bunch of rules through the database (such as RLS on Postgres) or define a permission schema.
Trying to see how far inference can go given that queries usually specify this information (ex: where(r => r.author == $SESSION.AUTHOR_ID)).
What hardware are you testing/running your RISC-V OS on ??
I’m using QEMU virt machine, so no hardware for the time being.
Would love to boot on a physical machine eventually though! If you have suggestions, happy to hear them :)
While taking care of my newborn, I had a lot of time to think about what annoys me most about being a software engineer. For me that is interfacing with databases.
So, I embarked a couple of weeks ago on my journey to build a relational database, which checks the boxes for me personally and I hope that this will be useful for other developers as well.
Project priorities (very early stage):
- run code where the data is: inside the database, with user-defined functions (most likely Rust and Wasm directly)
- a frontend to directly query the database without the risk of injection attacks (no REST, GraphQL, ORMs, models, and all the boilerplate in between)
- can be embedded into the application or run as a standalone server; I hope this will be the killer feature to enable full integration tests in milliseconds
- an imperative query language, which puts the developer back in control: instead of thinking in terms of relational algebra, it's centered around the idea of transforming a dataframe (sketched below)
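For that last point, the mental model is step-by-step transformation of a dataframe rather than a single declarative expression; roughly like this, using pandas purely as an analogy (hypothetical data, and not the project's actual syntax):

```
import pandas as pd

# each line below is one explicit transformation of the frame
orders = pd.DataFrame({
    "customer": ["ada", "bob", "ada", "eve"],
    "total":    [120.0, 35.5, 80.0, 12.0],
    "year":     [2024, 2024, 2025, 2025],
})

recent = orders[orders["year"] == 2025]
by_customer = recent.groupby("customer", as_index=False)["total"].sum()
top = by_customer.sort_values("total", ascending=False).head(10)
print(top)
```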
Or in other words, I want to enable single developers or small teams to move fast, by giving them an opensource embeddable relational firebase.
If you have any thoughts on that, I would love to talk to you.
This reminds me of SpacetimeDB a bit
Yeah, I think those folks have some very interesting ideas
Numpad: https://numpad.io/
It's a web-based notepad calculator, which means it's a notes app but it can evaluate inline calculations like
```
£300 in USD + 20%
09:00 to 18:30 - 45 minutes
```
I wrote the core of the calculator a few years ago, and I've just launched a big rewrite that supports
* document syncing
* offline editing
* markdown formatting
* PDF and HTML exports
* autocomplete
* vim mode
Happy to hear feedback :)
Been working on https://github.com/amterp/rad for almost a year now. It's a programming language designed for writing good CLI scripts, so it's aiming to replace Bash but is much more Python-like, and offers unique syntax and a bunch of in-built support for scripting.
Please check it out if it sounds at all interesting! Keen for feedback :) I've written some docs, including a "getting started" guide, linked in the GitHub page.
Lately I’ve been working on two things:
An iOS client for Cloudflare. Surprisingly, there’s none out there, maybe because nobody needs it? I do, so I’ve created one and it’s now available on TestFlight [0].
Another interesting thing I’ve recently discovered is that LLMs are pretty great at vetting tenancy agreements, so I’m working on a website that reads tenancy agreements and will return a list of unfair clauses that might be present in the contract along with a detailed explanation of how you should follow up with the landlord/agency. I still need to finish it but if you’re interested it’s here [1].
http://axcas.net is an online computer algebra system I've been working on. I'm working to finish the programming language which is based on C, and I'm adding an ode solver which I plan to use to evaluate special functions.
I release code into the public domain hoping it will be useful. There's some fast code for Groebner basis computations using the F4 algorithm (parallelized - article to follow), and some routines for machine integers e.g. discrete logarithm, factoring, and prime counting.
I've been building tooling for better debugger support for Rust types using debuginfo: https://github.com/samscott89/rudy
I'm planning on doing a proper writeup/release of this soon, but here's the short version: https://gist.github.com/samscott89/e819dcd35e387f99eb7ede156...
- Uses lldb's Python scripting extensions to register commands, and handle memory access. Talks to the Rust process over TCP.
- Supports pretty printing for custom structs + types from standard library (including Vec + HashMap).
- Some simple expression handling, like field access, array indexing, and map lookups.
- Can locate + call methods from binary.
https://github.com/srv1n/kurpod
Lets you create encrypted containers disguised as normal files. 1000s of images, pdfs, videos, secrets, keys all stuffed into an innocent-looking "Vacation_Summer_2024.mp4".
I've almost got true steganography working, i.e. getting the carrier file to actually open in any file system (currently with mp4, pdf, png and jpeg).
Things like this have existed in the past, but nothing with a simple UI and recent encryption standards.
Damn how is the docker image only 4 MB. Even with the docker slim images they are typically at least double digits. Nice!
I'm just stuffing the binary into a scratch container. I had to port over the openssl certs, but it works like a charm after!
https://DocCheetah.com - aiming to help accountants chase clients for their documentation. Launched, but no traction yet; spent a little bit on advertising through LinkedIn. Probably need to execute more targeted marketing and more problem validation.
https://Full.CX - still hums along in the background. Couple of customers. Just added MCP which has been amazing to use with AI coding agents. Updating the UI/UX to ShadCN to improve usability and make future changes easier, replacing NextUI and Daisy.
https://Toolnames.com - no changes this month.
https://Risks.io - little bit of work on the new platform, yet to be released.
https://dalehurley.com - little facelift
FYI, Your personal site seems to have some styling issues: https://imgur.com/0pDKc4l
Same thing in firefox and chrome on mac.
Thank you, I will have to look into it
I am working on building a prototype for a simple 4-track recorder. It would be a cross between a Yak Back [0] voice recorder and a Tascam DP-004 [1] mixer.
My 7 year-old has gotten into music and is trying to record his own ideas. We have found the existing tools to be either too simple (Yak Back) or way too complex (Tascam). I want to make him something that has a simple interface, few buttons, and simple recording/mixing. The idea is to avoid the software programs like Garage Band and Logic.
I'm still working on hcker.news, which first started as a more configurable hacker news frontpage, but has turned into a thing that I've found to be quite helpful at content discovery.
I recently by request[0] added a cohesive timeline view for hn's /bestcomments. The comments are grouped by story and presented in the order that they were added to the /bestcomments page. It's a great way to see popular comments on active topics. I'm going to add other frills like sorting and filtering, but this seems to be as good a time as any to get some of your thoughts!
You can check it out here: https://hcker.news/?view=bestcomments
[0] https://news.ycombinator.com/item?id=44076987 (thx adrianwaj)
I'm working on a rhythm game for original NES: https://zeta0134.itch.io/tactus
This is written entirely in 6502 assembly, and uses a fun new mapper that helps a little bit with the music, so I can have extra channels you can actually hear on an unmodded system. It's been really fun to push the hardware in unusual ways.
Currently the first Zone of the game is rather polished, and I'm doing a big giant pixel art drawing push to produce new enemies, items, and level artwork to fill out the remainder of the game. It's coming along slowly, but steadily. I'm trying to have it in "trailer ready" / "demo" state by the end of this calendar year. Just this weekend I added new chest types and the classic Mimic enemy to spice things up.
Nice! What’s the new mapper you’re using? Is it available as an IC or does it use FPGA or something?
It's an FPGA mapper made by Broke Studio, detailed here if you're curious:
https://github.com/BrokeStudio/rainbow-net/blob/master/NES/m...
In terms of capabilities, graphically it's something like MMC5 (8x8 attributes and a bunch of tile memory) while sound wise it's almost exactly VRC6. The real nifty feature though is ipcm: it can make the audio available for reading at $4011
It turns out the APU inside the NES listens to writes to $4011 to set the DPCM level, which many games use to play samples. By having the cartridge drive it for reading, I can very efficiently stream one sample of audio with the following code:

    inc $4011

So I just make sure to run that regularly and hey presto, working expansion audio on the model that doesn't normally support it. It aliases a little bit, but if I'm clever about how I compose the music I can easily work around that.
Currently working on a project for the EU region to compare supermarket prices across certain countries. It's called https://compears.shop/nl
I'm working on LLM translation research for my tool that teaches you a language while you browse by translating sentences at your level into the language you're learning (https://nuenki.app)
I've had some breakthroughs with LLM translation, and I can now translate (slowly, unfortunately) at a far far higher quality than Opus, and well above DeepL. So I'm considering offering that as an API, though I don't know how much people actually care about translation quality.
DeepL's customers clearly don't care - their website is all about enterprise features, and they appear to get plenty of business despite their core product being mediocre.
Would people here be interested in that?
Still working on https://periplus.app, and recently started to see some traction.
It's an environment for open-ended learning with LLMs. Something like a personalized, generative Wikipedia. Has generated courses, documents, exams and flashcards.
Each document links to more documents, which are all stored in a graph you grow over time.
This is great. I love this concept. Built something similar myself a few months back (just the course generation part): https://quickguide.site/
A few courses I generated using above:
- https://dev.to/freakynit/network-security-cdn-technologies-a...
- https://dev.to/freakynit/aws-networking-tutorial-38c1
- https://dev.to/freakynit/building-a-minimum-viable-product-m...
Supremely impressive, and I lean a bit towards the more AI-hesitant side.
I tried to get it to generate a foreign language reading comprehension course (and even included custom instructions to make the course generate reading comprehension passages to emulate a test), but it just generated a course about _how_ to effectively read different kinds of texts, without actually generating the foreign-language passages themselves.
Yeah, doesn't work for generating language-learning content yet. Something more aligned to what you'd find on Wikipedia tends to work best.
I'm thinking you could have it in the same interface eventually, but right now all the machinery & prompts assume it's decomposable declarative knowledge.
wow I just tried this, absolutely fantastic. I really hope you take this all the way, I will be sharing with friends!
Edit: upgrading my review from fantastic to probably one of the best first experiences I've had with an LLM app. You got my money!
Do you have any socials? Would love to keep up with updates about this project
Thanks for the positive feedback (and the sub)!! Means a lot.
No socials so far as I've mostly been posting updates on the Anthropic discord. But I made an X account for it just now (@periplus_app) where I'll mirror the updates.
You can also reach me any time by email for bug reports, feature reqs etc.
Building an app that extracts key information from PDFs + highlights citations. You provide a PDF and a JSON schema defining what to extract, and it returns the extracted values, the citations and their precise locations in the document.
This is especially valuable in workflows where verification of LLM extracted information is critical (e.g. legal and finance). It can handle complex layouts like multiple columns, tables and also scanned documents.
Planning to offer this both as an API and a self-hosted option for organizations with strict data privacy requirements.
Screenshot: https://superdocs.io/highlight.png
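To give a sense of the shape of the input and output (purely illustrative field names, not the actual API):

```
# hypothetical request: a document plus a JSON schema of what to extract
extraction_request = {
    "document": "contract.pdf",
    "schema": {
        "type": "object",
        "properties": {
            "effective_date": {"type": "string"},
            "termination_notice_days": {"type": "integer"},
        },
    },
}

# hypothetical response: extracted values plus citations with locations,
# so a human can jump straight to the highlighted source text to verify
extraction_response = {
    "values": {"effective_date": "2024-03-01", "termination_notice_days": 30},
    "citations": [
        {
            "field": "termination_notice_days",
            "quote": "either party may terminate with thirty (30) days notice",
            "page": 4,
            "bbox": [72.0, 310.5, 523.0, 324.0],  # x0, y0, x1, y1 in PDF points
        }
    ],
}
```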
I've been building small programs in Zig, C and ARM64 assembly without relying on libc and only using Linux syscalls directly.
Some examples:
- A minimal C shell with built-ins like cd, pwd, type: https://gist.github.com/rrampage/5046b60ca2d040bcffb49ee38e8...
- Terminal Snake game which fits in a QR code using Linux syscalls for drawing: https://gist.github.com/rrampage/2a781662645dc2fcba45784eb58...
- HTTP server with sendfile support in ARM64 assembly: https://gist.github.com/rrampage/d31e75647a77badb3586ebae1e4...
I learned to handcraft a static ELF binary using just GNU assembler (no linker): https://gist.github.com/rrampage/74586d0a0a451f43b546b169d46... . Trying to see if I can craft a small assembler in ARM64
Kudos! Especially for the asm.
http.S is something I wanted to do myself; I ended up generating data in asm and reusing Go for an HTTP server.
I finally compiled and expanded on all my various blog posts, tutorials and other Python goodness into a book: Working with Python. It is available as a free pdf download at: https://mkaz.blog/working-with-python/
It's grown over a dozen or so years, and now that I've finally decided to compile it into a book, everyone uses AI and no longer reads and learns from books, but through LLMs instead.
Fantastic. I wish I'd started on writing something like this years ago (although I'd wanted to teach explicitly rather than having a collection of how-tos).
> when I finally decide to compile into a book, everyone now uses AI
This is part of what discourages me from starting now, sadly. That, and having more concepts for actual Python projects than I know what to do with.
Great book! I already use python for some simple projects and your book is in the perfect level of practicality that I need. Thank you! Suggestion: create an epub version as well. It would be awesome to read it on a kindle or other e-ink devices.
> everyone now uses AI and no longer read and learn from books
Not me, I read the shit out of documentation and also books like yours which distill knowledge from professionals down to a bunch of useful points. I have never not learned something (even if I knew and forgot it) from reading a good book about "Working with X".
Thanks for your hard work, and for giving it away to others gratis.
Edit: the string formatting cookbook has a ton of useful info that I always forget how to use, I'm going to bookmark your site by this page: https://mkaz.blog/working-with-python/string-formatting
The string formatting article definitely has been my most popular post for years. I'm glad you found it useful, and thanks for the kind words
My friend and I are working on Workback[1], a tool that can fix a11y issues end-to-end.
First we built it as a tool to fix any bug. After talking to a few folks, we realized that was too broad. From my own personal experience, I know how messy it is within organizations to address accessibility issues. Everybody scrambles around last minute. Nobody loves the work - developers, PMs, TPMs etc. And often external contractors or auditors are involved.
Now with Workback, we are hoping to solve the issues using the agentic loop.
If you personally experienced this problem, would love to chat and learn from your experience.
Still building. https://tosreview.org/
Reading through the terms of service on websites is a pain. Most users skip reading them and click accept. The risk is that they enter into a legally binding contract with a corporation without any idea what they are getting themselves into.
How it started: I read news about Disney blocking a wrongful death lawsuit, since the victim had agreed to an arbitration clause when they signed up for a Disney+ trial.
I started looking into available options for services that can mitigate this and found the amazing https://tosdr.org/en project.
That project relies on the work of volunteers who have been diligently reading the TOS and providing information in understandable terms.
Light bulb moment: LLMs are good at reading and summarizing text. Why not use them for this? That's when I started building tosreview.org. I am also sending it to the bolt.new hackathon.
Existing features:
- Input for user-entered URLs or text
- Translation available for 30+ languages
Planned features:
- Chrome/Firefox extension
- Structured extraction of key information (arbitration enforced, jurisdiction enforced, etc.)
Let me know if you have any feedback
Thats interesting!
How does your product do in the age of AI?
I could imagine this could be sold to a whatever-legal-tech company, or maybe to a compliance company or similar.
Thanks for your comment!
AI and specifically the summarization capabilities of the LLMs is what made this product feasible.
This is still a side project with no plans of monetization. There is no moat (yet) that an internal team in a legal tech company cannot replicate. There are still a few interesting problems to solve in the roadmap, which I am eager to work on. Then will let life take its course
I'm not actively working on it daily, as I have shortage of free time and helping hands, but the HTTP Spec Test Suite is my Moby-Dick. I wrote about it here: https://www.caffeinatedwonders.com/2024/12/18/towards-valida..., I also discussed it on the HTTP WG mailing list and presented it at the HTTP WG Workshop last year.
Another Moby-Dick of mine is Kadessh, the SSH server plugin of Caddy, formerly known as caddy-ssh. This one is an itch. I wrote about it here https://www.caffeinatedwonders.com/2022/03/28/new-ssh-server..., and the repo is here: https://github.com/kadeessh/kadeessh. Similar to the other one, feedback and helping hands are sorely needed.
They are both sort of an obsession and itches of mine, but between dayjob and school, I barely have a chance to have the clear mind to give them the attention they require.
Myself.
Been a freelance dev for years, now beginning a "sabbatical" (love that word).
Planning to do a lot of learning, self-improvement, and projects. Tech-related and not. Preparing for the next volume (not chapter) of life. Refactoring, if you like, among other things.
I'm excited.
I’m building an e-book reader for the web and PWA platforms:
The library of public domain classics is courtesy of Standard Ebooks. I publish a book every Saturday, and refine the EPUB parser and styler whenever they choke on a book. I’m currently putting the finishing touches to endnote rendering (pop-up or margin notes depending on screen width) so that next Saturday’s publication of “The Federalist Papers” does justice to the punctilious Publius.
Obligatory landing page for the paid product:
I am working on building a custom PDF Web Component. With this web component, you can
- Create your own PDF editor with custom UI with the help of public methods which are exposed in the web component.
- You can add dynamic variables/data to the templates. What this means is you create one template, for example, a certificate template with name and date as variables and all you have to do is upload your CSV / JSON of names and dates, and it will generate the dynamic PDFs for you.
- It's framework-agnostic. You can use this library in any front-end framework.
It's still in early development, and I would love to connect with people who have some use cases around it.
I have integrated this library in one of our projects, Formester. You can see the details here https://formester.com/features/pdf-editor/
I have posted this demo video for reference https://www.youtube.com/watch?v=jorWjTOMjfs
Note: Right now it has very limited capabilities like only adding text and image elements. Will be adding more features going forward.
Working on https://vide.dev, the Cursor for Flutter devs.
While Cursor stops after writing great code, Vide goes the extra mile with full runtime integration. It makes sure the UI looks on point, works on all screen configurations and behaves correctly. It does this by being deeply integrated into Flutter's tooling: it's able to take screenshots, place widgets on a Figma-like canvas, and even interact with everything in an isolated and reproducible environment.
I currently have a web version of the IDE live but I'm going to launch a full native desktop IDE very soon.
Any reason to not use flutter flow with all the AI stuff?
I'd say it depends on where you are coming from. With Vide, I'm approaching this problem from the code side. In my opinion, any application that is supposed to go into production and scale should be built on a solid code foundation.
My value proposition is to make developers more productive by skipping the boring stuff, while FlutterFlow is more of an "all-in-one" app platform.
I'm building a platform to help people—especially students and young adults—design meaningful, intentional lives with balance, courage, and an entrepreneurial mindset.
In Singapore, the system is heavily academic. You're expected to follow a rigid path (PSLE → JC → Uni → job), but no one teaches you how to think about what kind of life you want to live—or how to create it. That leaves many people feeling lost, even if they’re “on track.”
This platform flips that. It starts with the big picture: *“When you’re 90, what do you want your life to have looked like?”*
From there, users create a personal timeline of milestones across life domains: health, relationships, learning, impact—and now, *financial freedom.*
The app helps users:
1. Set long-term visions, then break them into clear, visual milestones
2. Use an AI assistant to suggest weekly actions and recalibrate as life evolves
3. Voice journal instead of typing; the AI transcribes and flags patterns (“You mentioned burnout 5x this week. Want to add a rest week or revise your work goals?”)
4. Track basic finances and align spending/saving to long-term goals (“You want to take a year off at 30. At this pace, you’ll have the runway by 32. Want to adjust?”)
5. Get matched with mentors or peer circles for guidance and accountability
The goal is not to “optimize” life like a spreadsheet. It’s to help people reflect, take control, and become someone they’re proud of.
If you’ve worked on anything in this space—journaling, goal tracking, financial wellness, coaching—I’d love to learn:
A. What made your tool stick long-term?
B. How did you balance simplicity with depth?
C. Any design or product traps I should avoid?
Appreciate any thoughts, questions, or brutal feedback.
Mostly writing for myself; I should really convert some drafts into proper blog posts because I'm really interested in discussing my ideas with others.
I've been thinking a lot about the current field of AI research and wondering if we're asking the right questions? I've watched some videos from Yann LeCun where he highlights some of the key limitations of current approaches, but I haven't seen anyone discussing or specifying all major key pieces that are believed to be currently missing. In general I feel like there's tons of events and presentations about AI-related topics but the questions are disappointingly shallow / entry-level. So you have all these major key figures repeating the same basic talking points over and over to different audiences. Where is the deeper content? Are all the interesting conversations just happening behind closed doors inside of companies and research centers?
Recently I was watching a presentation from John Carmack where he talks about what Keen is up to, but I was a bit frustrated with where he finished. One of the key insights he mentions is that we need to be training models in real-time environments that operate independently from the agent, and the agent needs to be able to adapt. It seems like some of the work that he's doing is operating at too low of an abstraction level or that it's missing some key component for the model to reflect on what it's doing, but then there's no exploration of what that thing might be. Although maybe a presentation is the wrong place for this kind of question.
I keep thinking that we're formulating a lot of incoherent questions or failing to clearly state what key questions we are looking to answer, across multiple domains and socially.
True. I believe the most important question right now is… how to solve for memory.
RAG and/or Fine-tuning is not the way.
Another topic is security, which would consist of using Ollama + Proxmox for example, but of course, right now, as emergent intelligence is still early, we would have to wait 2-3 years for ~8 B parameter local models to be as good as ChatGPT o3 pro or Claude Opus 4.
I do believe that we are close to discovering a new interface. What is now presenting itself through IDE’s and the command line (terminal)… I strongly believe we are 1-2 years away from a new kind of interface, that is not meant for developers only.
That feels like an IDE, works like a CLI, but is intuitive as Chrome is for browsing the web.
Watch François Chollet on Machine Learning Street Talk
Working on tail calls for TXR Lisp. Current release provides self tail calls only; and certain cases don't work, like applying in tail position. Plus there is a shadowing bug. These issues are addressed already.
Tail calls between different VM functions are the next challenge. I'm going to somehow have it allocate the VM instance in the same space (if the frame size of the target is larger than the source, "alloca" the difference). The arguments have to be smuggled somehow while we are reinitializing the frame in-place.
I might have a prefix instruction called tail which immediately precedes a call, apply, gcall or gapply. The vm dispatch loop will terminate when it encounters tail similarly to the end instructions. The caller will notice that a tail instruction had been executed, and then precipitate into the tail call logic which will interpret the prefixed instruction in a special way. The calling instruction has to pull out the argument values from whatever registers it refers to. They have to survive the in-place execution somehow.
I'm currently building a suite of free AI tools for the beauty niche:
https://www.nose-shapes.com Upload a selfie and an AI model classifies over a dozen nose types (Greek, button, fleshy, flat, etc.), plus tips on contouring and finding complementary glasses.
https://www.foundation-shade-finder.com Snap a photo and our AI analyzes your skin tone + undertone, then recommends exact foundation shades from brands like MAC, Fenty, L’Oréal, Estée Lauder.
https://www.golden-ratio-face.com Measures facial proportions against the "golden ratio" to reveal symmetry and aesthetic balance—perfect for beauty enthusiasts and content creators seeking visual harmony.
All of them are no‑login, instant‑result web tools. I'd love to get any feedback, from accuracy and UI to feature ideas or niche extensions.
A job feed for remote jobs - https://tangerinefeed.net/
This is something I’ve needed myself over the last few years as jobs become shorter and shorter lived. Keep on improving it as some kind of compulsion.
Looks good! Seems to not be bringing in the requirements section of the JDs?
Thanks! Will take a look.
I'm building https://prijm.com which is a minimalist link sharing and post creation platform with custom feed and notification support for your activities. Here are some of the features:
- Supports markdown everywhere, even in your comments and replies.
- Get notified.
- Personalized feeds.
- Lightning fast & mobile first.
I’ve been working on an app called Lång. It’s a calm daily spending guide – shows you what’s okay to spend today, based on how much needs to last how long.
The idea came from noticing how most people manage money day to day: checking their balance, adjusting by feel, trying not to drift. There are tons of tools for planning or categorising, but not much that fits that kind of improvised pacing.
Still early, but trying to shape it around those habits – to make something simple and steady, that supports how people already do things.
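At its simplest, the calculation behind it is something like this (a simplified sketch of the basic idea, not necessarily the app's exact algorithm):

```
from datetime import date

def okay_to_spend_today(remaining, last_day, today):
    # what remains, spread over the days it still has to last (incl. today)
    days_left = max((last_day - today).days + 1, 1)
    return remaining / days_left

print(okay_to_spend_today(640.0, date(2025, 7, 31), date(2025, 7, 12)))  # 32.0
```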
Been working on https://www.stainedglassatlas.com.
Trying to document and map as much of the publicly accessible stained glass as possible. The goal being the next time you visit a new city or town, you'll know where all the beautiful stained glass is to go see. Just recently added support for countries outside of North America. No exciting tech (vanilla HTML/CSS/JS). But excited for folks to check it out!
I've been working on implementing @mpweiher's "Storage Combinators" [0] and "Polymorphic Identifiers" [1] in Eiffel [2].
Currently I'm stuck implementing a storage combinator with EiffelWebFramework[4]
[0] https://dl.acm.org/doi/abs/10.1145/3359591.3359729
[1] https://scholar.google.com/citations?view_op=view_citation&h...
[2] https://en.wikipedia.org/wiki/Eiffel_(programming_language)
I'm building an app to help users find free and paid street parking in Vancouver: https://instaparkr.com/
While apps like Parkopedia and SpotAngels tackle the same problem, their one-size-fits-all approach often results in incomplete, missing, or outdated data. My approach is different: go deep on one city at a time by combining multiple publicly available datasets. This doesn't scale horizontally since each city has different data sources and formats, but the goal is to become the definitive parking resource for one city, build automation to keep it current, then methodically expand city by city.
If you are based in Vancouver, do give it a go. Your feedback would be awesome!
Hej, I made FisherLoop[1] to learn Swedish. FisherLoop offers interactive audiobooks where I use TTS with word-level timestamps to highlight the words as they are spoken. This helps me pick up on pronunciation and grammar in a way that feels natural to me. Additionally, I added flashcards from the books + word lookup. I am adding new books right now. If you have any requests for public domain books which are around one hour of reading time, let me know :)
I am using Cerebras for book translations, verb extraction and all LLM-related tasks. For TTS I am using Cartesia. I have played around with ElevenLabs and they have slightly more natural-sounding TTS, but their pricing is too steep for this project. Books would cost a couple of hundred euros to process.
Is there such a thing for Spanish?
Maybe I should have clarified, in addition to Swedish, I have added Spanish, Italian, German, and French.
I'm interested but I'm not getting the confirmation email.
Did you use the hi@... email? I am seeing a hard bounce for that email. Not sure how to debug that right now. All my emails I have tested have worked. Could you try a different email while I debug?
I am working on an app to detect tooth problems. I envision it as something you can use for a quick check for the large majority who don't have regular dental care. It will be late detection but a good alternative to doing nothing.
I am experimenting with the current SOTA multimodal LLMs, but performance is still not there: they still hallucinate non-existent teeth. (As an aside, I have found a simple but very telling test: I have an image with only 4 teeth visible up and 10 down, so I prompt the model to count them. None have been able to, but Gemini 2.5 Pro is the closest of the lot; the description is also worse when the counting test fails.)
I am going to try segmenting the image to see if I will have better results by prompting to describe segment by segment.
I'm working on an MCP to give your coding agent the ability to generate on-demand Mermaid diagrams about anything in your codebase. Among other benefits, it is very helpful for spotting unnecessary code or architecture that can accumulate while vibe coding.
https://www.npmjs.com/package/@mindpilot/mcp
Claude Code Quickstart:
```
claude mcp add mindpilot -- npx @mindpilot/mcp
```
Sounds useful!
Does this only work with JS code?
Writing SFF novels!
I need to put it up on the ol' blog-thing, but I've signed a contract with a small press for a debut novel, which is highly exciting. That one's urban fantasy from the point of view of the wizard's magic cloak. (You better believe it has opinions.)
Meanwhile, I've been working on a novel about a group of time travelers who accidentally get stuck in the Permian, well before the dinosaurs. Surprise! There are still big animals that can eat you, they're just more weird (and not as big). The research for that one has been wild.
The ol' blog thing, where I post story-related tidbits and such: https://rznicolet.com
I'm tinkering with relative positional encoding by trying to integrate acoustic features directly into it.
More specifically, I'm trying to use pitch (F0) to dynamically adjust the theta parameter in rotary positional embeddings, so the frequency of the positional encoding reflects the underlying pitch contour of the speech. And instead of using a fixed unit circle (radius=1.0) for complex rotations, I'm trying to work out how to use variable radii derived from the pitch. The idea is to create acoustically-weighted positional encodings, where the position reflects the acoustic salience in the original audio. https://github.com/sine2pi/asr_model
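Roughly the shape of what I'm experimenting with (a simplified PyTorch sketch; the exact scaling and radius formulas here are placeholders, not the repo's actual code):

```
import torch

def pitch_adaptive_rope(x, f0, base=10000.0, radius_scale=0.1):
    # x:  (batch, seq, dim) hidden states, dim even
    # f0: (batch, seq) pitch contour in Hz, 0.0 for unvoiced frames
    b, t, d = x.shape
    half = d // 2

    # standard RoPE inverse frequencies
    inv_freq = 1.0 / (base ** (torch.arange(half).float() / half))

    # pitch rescales the rotation frequency; unvoiced frames keep scale 1.0
    f0_norm = (f0 / 300.0).clamp(0.0, 2.0)
    scale = torch.where(f0 > 0, 1.0 + f0_norm, torch.ones_like(f0))

    pos = torch.arange(t).float()
    angles = pos[None, :, None] * inv_freq[None, None, :] * scale[..., None]

    # variable radius instead of the fixed unit circle
    radius = 1.0 + radius_scale * (f0_norm[..., None] - 0.5)
    cos, sin = radius * angles.cos(), radius * angles.sin()

    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

x, f0 = torch.randn(1, 50, 64), torch.rand(1, 50) * 220.0
out = pitch_adaptive_rope(x, f0)  # (1, 50, 64)
```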
having a really tough time wrapping my head around it but it sounds really interesting
I'm working on MedAngle, the world's first Super App for current + future doctors. It's an invite-only platform with everything people in med school/dental school and recent graduates need: analytics, quizzes, summaries, x-rays and videos, plus tens of thousands of questions. ~100k+ students/doctors have solved over 100m questions and spent tens of billions of seconds on the platform, and it's growing!
We're also working on the Premed Super App, same thing for people taking medical school entry exams like the MCAT or MDCAT.
I get to work with a bunch of top notch students and doctors, and I myself am the first ever full-stack technologist who also is a doctor in Pakistan, a country of 250 million people.
I'm working on a desktop app called With Audio (https://desktop.with.audio).
It turns ebooks, articles, and documents into synchronized audio with real-time text highlighting. It's great for people who prefer listening while reading (or want to stay focused), and it works fully offline with a one-time purchase, no subscriptions.
I’m bootstrapping it and trying to figure out how to market it effectively. So far, I’ve had some traction and early sales just by posting on Reddit, but I’m still learning the marketing side — especially how to reach people who’d benefit from it most.
Would love to hear how others approached early growth for similar bootstrapped tools.
Does it work with languages other than English?
Sadly not at the moment. I need some help to confirm other languages as I only understand English.
Same(-ish). :-)
I wouldn't need / want this for reading English, but it'd be killer for improving my Spanish vocab and speech-recognition. It's a great idea, and lots of people could get a lot of value out of it. Well done!
I'm working on a simple, local-storage budgeting app called "Wasa Budget". I wrote it because I got tired of tracking my budget in Excel sheets. It's written in Flutter, and it works well enough that I've been able to entirely ditch the Excel sheets now.
I want to publish it on Google play, but I need testers. If anyone cares about budgeting, I'd love to get some feedback.
Here's the app link: https://play.google.com/apps/testing/dev.selfreliant.wasa_bu...
I don't think you can download it without being added to my testers list though. Send me your Gmail address if you're interested!
'A testing version of this app hasn't been published yet or isn't available for this account.'
> Send me your Gmail address if you're interested!
Where? nleschov at gmail
I just added you to the testers list. The link should work now.
Word of warning: Google is pretty dumb and even requires testers to pay for the app. It's going for $3, but I can reimburse everyone who helps me test once the testing phase is finished.
Ah, looks like I can create discount codes too, so I could also send you a code so the payment is skipped. Let me know if you need that
I am trying to quit IT and become a full-time games/books/gamebooks publisher. I tried crowdfunding once and it turned out very well, so I am working on the next projects now :) https://gamefound.com/en/creators/arispen
https://klutz.co.in is my main project. It started with the desire to create powerful AI free of cost, for me (the dev) and for you (the user)! This is how it works: 1. You log in or sign up using puter. 2. That's it! You can now chat with any website, use a lot of LLMs and 19 other AI tools, entirely for free.
But here's my problem: I'm not able to monetize Klutz, because puter Auth doesn't let you track the email ID of users. They can just delete their puter account from puter.com and create a new one with the same email to use the free trial again. This would make the pricing pointless and let people abuse the system!
Any ideas other than ads? Anything would help!
Just published my Chrome extension that makes finding new content on YouTube easier!
https://chromewebstore.google.com/detail/relevant/fdhnccpldk...
Two months ago I posted an update that I had begun work on my Chrome extension [1] for Relevant. Relevant is a crowdsourcing website where users can categorize the channels they watch into a defined hierarchy of categories ranging from broad topics like "Science" and "Gaming" to more specific ones like "Phone Reviews" or "Speedrunning".
Although I had a little bit of engagement on the website, I found myself looking for something that could bring the experience onto YouTube, so I began work on a Chrome extension. It turns out there's a lot more complexity in building a Chrome extension than I realised. It's basically like building a website for the popup window, a javascript server for the background service workers and a message bus for the service worker.
After 2 months of working weekends, I finally released a version that lets users see the categories of the content on the page, discover more channels matching those categories and contribute to the categorisation effort!
Nothing actually. Feels nice.
I've been building an interactive nuclear reactor scoping tool to help people build intuition about how different types of nuclear reactors work and cost at different sizes. I ran a bunch of simple reactor simulations and this basically interpolates between them. https://whatisnuclear.com/neutronics-scoping-tool.html
I did a screenshare demo of it yesterday: https://www.youtube.com/watch?v=GQzDfrdf71Y
I wrote a simple app last year that put all my Apple Watch workout routes on a simple map, so I can see how much of the city I've covered (all existing options were paid, and I was too cheap for it). Now I have some time, so I'm rewriting it properly, based on neighbourhoods, completion %s, achievements, etc. It's weirdly fun, because I'm not a mobile engineer, but satisfying to see hundreds of users per month using my app.
Also, every region has different ways of representing a “neighbourhood”, so I get to learn how to extract viable data from each city. Lots of map stuff, I’m genuinely enjoying it!
HealthFit is pretty good, one-time fee, shows interesting heatmaps on all of your Apple Health workouts: https://apps.apple.com/de/app/healthfit/id1202650514
Did you look at the squadrats app? It’s compatible with strava also. It sounds quite similar to what you describe.
Not Squadrats, but I've checked out some others, like CityStrides. There were a few problems though:
- It felt like what I wanted to achieve is pretty simple (GPS coordinates -> display all on the same map), so didn't want to subscribe for a monthly fee. I couldn't actually find an app that would dump all my HealthKit data directly onto the map, which was surprising.
- Last year when I wrote my app, I wanted to see how fast I can learn simple mobile development loop
- Now, I couldn't really find anything that divides the coverage areas into real-world neighbourhoods. So, think of West Village of NYC, or Yorkville in Toronto, or Yoyogi in Shibuya and etc. Back when I used to live in Vancouver, I would look at my own app, and kinda say in my head "aight, I've walked through every street in West End, Vancouver". Figured it would be cool to have a proper way of tracking it. So working on it currently.
- It's kinda fun to work on an app for my own needs
I'll take a look at the squadrats though! Looks pretty cool.
Is your app on TestFlight?
There's wandrer.earth as well, though it's based on roads, not neighborhoods or squares
This sounds wonderful. Do you have some writeup about it or screenshots?
Hm, the new version is very rough right now, as I've been focusing on API/Data side of things. But generally the idea is something like these: - https://uc792df8aab8345f71952cc54569.previews.dropboxusercon...
- https://uceed957a657be57d7d53af97504.previews.dropboxusercon...
It felt good when I was able to figure out how to generate all the neighbourhood data for any given city. A bunch of fun OSM data manipulation though.
If you meant the app that I wrote last year, it's here - https://apps.apple.com/us/app/mapcut/id6478268682. The idea is much simpler though, as I mentioned.
https://github.com/nixiesearch/nixiesearch
A Lucene-based search engine on top of S3 block storage.
Index schema is immutable (but supports migrations), so you cannot just screw up your index.
Separate indexer/searcher tiers, so heavy indexing load does not affect search latency.
And embedding/reranker local inference, so you can run the whole AI search within a single docker container.
Looks nice! What did you use to make the first diagram (one with documents on left and 3 searcher boxes on right)?
Project Frottage: An automated constant stream of high quality AI wallpapers for mobile and desktop. We manually curate a list of ~150 prompts. Every 6h a random prompt is selected, a picture generated and uploaded to static hosting, free to use:
https://frottage.app/static/wallpaper-mobile-latest.jpg
https://frottage.app/static/wallpaper-desktop-latest.jpg
https://frottage.app/static/wallpaper-desktop-light-latest.j...
We just finished an android app, to set the wallpaper automatically on mobile: https://play.google.com/store/apps/details?id=com.frottage
Working on Cursor for Excel: https://www.tryalphaexcel.com/
As there is no open-source version of Excel except LibreOffice, I'm working to build the core Excel functionality with other open-source packages, then bringing in agentic editing functionality for real-world data.
What has also been interesting is introducing banker/consultant formatting guidelines to the agent and making it beautify its work, whether in tables or models.
Can't it be a VBA script interfaced to an external app? That way you could use standard Excel, no?
Just emailed you
Just responded - nice meeting!
I’m working on https://finbodhi.com — a double-entry personal finance tool where you own your data. It’s local-first, syncs across devices, and everything’s encrypted in transit.
It helps you track, understand, and plan your personal finances — with a proper accounting foundation.
It's interesting in many ways: using double-entry (it's a perspective shift), the technical challenges of building a local-first app, UI/UX & visualizations, privacy, and more.
> Q: Where is my financial data stored?
> A: Your financial data is stored locally on your device. ...
Good stuff! This was the first thing I checked, and it means I am now reading more about the app. Really nice to see this approach.
I know this is still WIP, but is feedback ok? The plan buttons say "Get starterd" which is a funny typo :) Also, I was not sure, but is this a website app, or a local app? For local data, I would strongly prefer an actual local app. Some screenshots of how it looks on multiple devices (directly comparable, as in, this is the same view and same data on iOS/Mac) would be great. Finally, do you have bank links? _The_ killer app I want in a personal finance app, and you'd be surprised how many make this really difficult, is to track my actual income and spending.
I signed up for your newsletter. Rare for me to do. Looking forward to hearing more!
Thanks @vintagedate, unfortunately, the things you asked are not done yet :)
The app is a PWA (https://en.wikipedia.org/wiki/Progressive_web_app). Eventually, we would be able to build other clients (e.g. a mobile/native app) on top of the synced DB, but not in the near future. The data is stored in SQLite in the browser, which is synced with other devices, with backup to the local filesystem and soon Dropbox.
The app supports multiple devices, but the UI for mobile has not been our priority. We hope to fix it soon, at least for data entry.
And bank linking, again, we don't support. This is partly a limitation of FinBodhi being a web app.
As and when we have support for things you are looking for, we will write to you (via newsletter :)
Thanks for catching the typo, and the feedback.
This looks very interesting. Do you support double-entry bookkeeping to avoid errors? Is there support for transactions with more than one currency?
Ya, the product is built on double-entry. In case you are interested: https://finbodhi.com/docs/understanding-double-entry
On multiple currencies, we have support for them as commodities. We are planning to make this easier in the future. There is support for split transactions, so transactions with more than one currency will work by associating them with accounts in different currencies.
This looks super cool. Do you mind if I ask what your tech stack is?
Much of it is the usual stack: React, TypeScript, Tailwind, d3 for viz, Vite for packaging, SQLite for storage, Evolu for schema & sync, Firebase for auth, and many React libraries. There is a small sync server (which handles syncing of encrypted data), but apart from that, the rest is front-end code.
Also, if you are looking for help, I would love to chip in. This is something that has personally interested me too :)
Thanks for the offer :) We don't plan to involve others anytime soon. But the offer means a lot. Will keep in mind.
I was trying out an MCP tool and hit a few issues, even though there is the MCP Inspector, which sets up a web server etc. I wanted a much simpler tool I could use in an SSH environment, so I built (with Claude Code) a terminal tool that proxies any stdio MCP server; you then use the monitor TUI to see the flow of calls between MCP client and server. It's been helpful for learning things about MCP, as you see the calls happening and can inspect them.
I'm making a kind of "Tinder for hiking trails".
I live in Switzerland and am (like many people here) an avid hiker. There are a lot of great hiking websites but they all suffer from the same problem: they are ultimately just a list of hiking routes that you need to plan around. Because I do a hike almost every week, the extra planning has become an overhead that takes time out of my life: how far away is it, what train should I take, what's the weather situation like, do I need to bring snowshoes, etc. The 65,000km of trails in this country also gives me decision paralysis!
So I'm building an app (React native/django) which takes a users current situation and preferences and then algorithmically suggests a few best options for them that they can quickly give a yes/no to. It's integrated with a lot of data like the train timetables, snow data, weather forecast etc.
I was able to reduce an hour of planning down to 5 minutes last week, so it's definitely working for me. What I am currently trying to do is figure out if other people have this problem and there's interest in the app concept.
This sounds great! My partner and I also spend a lot of time just scrolling around the Swiss hiking maps looking for potential routes. I had an idea for better filters (e.g. roundtrip hikes with >1000m elevation <2h by transit) and got as far as displaying hiking and transit data. Are you looking for testers? :)
Absolutely! Send me an email. Address in my profile
Interested...from Italy! Some "hike with kids" filters could be interesting too. I don't have much time at the moment, but if you need any help let me know (even just for brainstorming).
Pretty sure people would want to play with it here in Geneva but it would need to expand to covering the French, Italian and Austrian Alps too over time. Keep us posted.
I moved my blog over to Jekyll hosted via Github: https://blog.andrewrondeau.com/
The site itself isn't anything "special." I've had a personal website for about 25 years; the past few years I finally moved from making HTML by hand to using various CMSes. I tried a "no database" CMS that my hosting page had, then I wrote my own CMS, https://github.com/GWBasic/z3, to learn node.js, but then I had to go back because Heroku dropped the free tier.
Jekyll is interesting. As a Mac user, I'm surprised there isn't a push-button app, like MAMP, to just run it. Instead, I got exposed to some weirdness with Ruby versioning that, because I don't have any Ruby experience, was frustrating.
The default Jekyll template has warnings, but when I tried to fix them, I ended up jumping into a rabbit hole of sass versioning.
I also ended up jumping into a rabbit hole with setting up redirects from old urls on my blog to their new locations. I don't touch Apache / cpanel that often, so there was a bit of a learning curve for me.
One funny thing was that I set up two redirects, in cpanel, from the same url to two different urls. (It was a mistake!) I couldn't delete them, so I had to submit a service request with my host.
Two interesting things that I do not have time to do:
- Set up GitHub Actions to deploy on my original host (andrewrondeau.com)
- Set up redirects from blog.andrewrondeau.com -> andrewrondeau.com
Nice website. However, when I click on the site header (your name), it redirects to a 404 instead of your home page.
Sometimes you wish you had the best of HN summarised somewhere, but that 'best' may be different for different people based on their interests. I'm therefore collecting links I find important, from HN and beyond.
Thinking about:
How will human-computer interaction change as many current apps (which are screen-based UIs with some background code) simply get replaced with chat/voice/gesture-based requests to an LLM?
Lately I've been trying to detect/mitigate prompt injection attacks. Wrote a blog post about why it's hard: https://alexcbecker.net/blog/prompt-injection.html
An AI-native DocuSign
I've been working on it for around a month. Struggling with getting people to actually use it - this week I've set the ambitious goal of 10 new contracts sent *and completed* by people I don't know (last week's was 10...by people I do know).
It's hard because I feel I'm in a weird hole - in order to have a good product I need people to use it and give me feedback, but in order for people to use it and give me feedback I need a good product. It's like wth!
Another thing I'm struggling with - enjoying the process. I get daydreams like mad. I feel I'm always living in the future in some way, especially with this software, and it's taking away from being present in this work. Which sucks, because I want to be excited to *work* on this and NOT fake my own excitement towards this as a manifestation of my greed to get rich off it.
But MAN am I greedy. It's ugly sometimes, to myself.
But god how I love to work on software also. How I love making stupid bash commands on my terminal. How I love to feel like the old gods, who conquered the infant digital world.
Interesting, nice landing page. But I wonder if users care that it is "AI-native". As in, do users look for contract generation or eSignature that is infused with AI? Or rather, are users interested in their own "job to be done" - whether that be creating a contract or agreement, or getting it signed efficiently?
Haha thanks!
I'm still trying to understand what users want. The origin of this site was a friend's issue - every time he wanted to make a contract and send it to someone he would (1) generate it w/ GPT, (2) paste it into Google Docs, (3) export as PDF, (4) drop it into DocuSign and drag signature fields into blanks, (5) sign + send.
After I talked to another person who recounted the same story, I thought there could be something here.
I did learn that people have their own existing contract templates they want to use instead of generating new ones each time (though sometimes that's nice), and that feature is in dev.
But all my data on what users want is from very low sample sizes :(
1. A general-purpose Bitemporal Data Schema using SQLite (for storage) and Clojure (for data processing).
I'm trying to see if I can "get away with it": no schema migration, no fixed views, one tenant per DB, local-first-friendliness.
The general approach is "Datomic meets XTDB meets redplanetlabs/Rama meets Local First". Conceptually, the lynchpin "WORLD FACTs" table looks like this:
2. "Writing for Nerds"| tx_id | valid_id | tx_t | valid_t | origin_t | entity | attribute | value | assert | namespace | user | role | |--------+----------+---------+---------+----------+--------+-----------+-------+--------+---------------+------+------| | uuidv7 | uuidv7 | unix ms | unix ms | uuid7 | adi | problems | sql | 1 | org.evalapply | adi | boss |
A workshop I've been experimenting with, using willing friends as guinea pigs. To help people remove friction from being able to "spool brain to disk". The sales-y part is here, with more context / explanation about what it is about and what it is not about: https://www.evalapply.org/index.html#writing-for-nerds
A JS framework called Torpor: https://torpor.dev (https://github.com/andrewjk/torpor)
Been working on this for a while, with the aim of making something simple and immediately usable. Components are JS functions, containing UI that is (mostly) HTML, with reactivity only done through proxied objects.
To test it out I built a distributed social media/microblog site called Redraft: https://redraft.social (https://github.com/andrewjk/redraft)
Self-hostable or hosted for a fee, all your posts stored in an SQLite file, comment/like/react on other people's Redraft posts via a Web extension (pending approval, but you can use the unpacked version from source).
Both are in a pretty rough state, but usable for the intrepid.
I'm working on a system to help people write their Family History.
You upload interviews with family members (text, audio or video all work) and the system automatically transcribes the text, finds key people or events, and puts it together with other information you may have gathered about those events or people before. Like building a genealogical tree but with the actual details about people's lives.
In the works to also attach pictures of said people and events to give it some life.
It would be great to use AI to surface topics from family members (given all the interviews are fed into it) that they can't discuss face to face.
I am working on Community Notes for Bluesky: https://github.com/johnwarden/open-community-notes
Is bluesky becoming more or less decentralized over time?
Will I ever be able to host an instance where controversial views can be published?
Is that not possible today with the self hosting option? https://atproto.com/guides/self-hosting
That's my question! If I run that, can I show posts that bluesky has banned? Can I allow people to respond to any post they choose?
The stuff I'm working on never feels worth sharing, but I am doing a lot of computer stuff lately. It's kind of the year of moving towards declarative setups for me.
- Migrating to Niri on my laptop and re-evaluating my literate config approach, switching from xkb configs to kanata and a few other QOL changes to make my tooling more composable and expressive
- Shoring up my blog / media sharing infrastructure (migrated to a landing page on an s3 bucket, with different prefixes for several different hugo deployments for different purposes, still need to get better about actually posting content)
- Preparing to migrate a bunch of my self-hosted services to a k8s cluster which can be fully deployed locally for testing and defined in code. All of this is managed through argo and testable with localstack, with crossplane for some non-local resources
- Attempting (somewhat unsuccessfully) to setup a nixos config for a bunch of services that just don't feel right to run in containerized stack that I want to live in ec2 and have as close to 100% uptime as possible (uptime kuma, soju/weechat relay/bitlbee, conduit, radicale, agate, whatever else I think of that is small and has a built-in nixOS service module. Thinking about some kind of RSS aggregating solution here as well)
- Experimenting with vibecoding by trying to get an LLM to do the legwork to build a TUI interface to ynab using rust (which I don't know how to write)
I'm hoping that by the end of this summer most of the tooling I use for most things will be way more concrete and seamless. I also want to get my workflows down and get on top of converting at least a few of the ~100 draft blog posts I have lying around into something I can actually post. Ditto for my photography albums, which are not yet organized into coherent groupings or exported for web.
Building a TUI for ynab is pretty compelling, so I wouldn't say that's not worth sharing. If you could pull the data on terminal startup and get a budget snapshot each time you open a terminal-- cool.
So far that experiment is going pretty well! I haven't worked much on it but the tooling I'm using has made a great base for the project. My goals (in order of priority) are:
- Get it so that you can categorize transactions quickly in a keyboard-driven way
- Similarly have a quick, improved option for dealing with overspending / underfunding
- Add some additional reporting that I'd like to see (as well as the ability to drill down in a more fuzzy way than currently supported in ynab)
- Finally (and most importantly but also most ambitiously) develop a view with some simple tools that helps users figure out WTF is wrong when a reconciliation isn't working out. This is much harder than the other things I'm trying to do here
Luckily YNAB's API is very open and I think I can do all the things I'm looking to achieve here. If I'm successful, I plan to spin off a sister TUI project for making handling import edgecases easier in beancount, which I also use but for different reasons
Edit: but your idea of having CLI command options for printing reports on a regular basis / on opening the shell is also neat, I do plan to have some CLI options that don't require you to open the full TUI
Having worked on various products and startups before, I want to make it easier for solo founders and small teams to understand website traffic, conversions, product usage, and the errors their users are having, and to provide support to their users, without having to integrate and maintain multiple disjointed tools.
That's why I am building Overcentric - a simple and affordable toolkit that combines web & product analytics, session replays, error reporting, chat support and help center - all in one place.
Been building it and testing with several startups and improving based on their feedback. I am also using Overcentric for Overcentric itself, so I always get ideas for improvement.
What's next: more tools that are useful for startups are on the roadmap. I am also exploring how LLMs can be further utilised (beyond support, session replay summaries, and aiding in writing help center articles), and refining pricing.
Check it out at https://overcentric.com/
Would love to connect with other SaaS founders and have Overcentric help them grow their startups.
Syntax highlighting caught my interest after I created a data format. I stumbled upon TextMate grammar bundles which are supported in some editors and created a bundle that works with three of them. The gnome text editor uses a different language definition format for which I created a grammar file as well. [1]
To highlight the syntax in the browser I checked out the CodeMirror project that uses Lezer grammars. It is very flexible and allowed me to implement additional features like custom folding. [2]
I would also like to create a grammar for tree-sitter, finish the Java implementation and documentation of the ESON parser before I try to implement it in other languages.
[1] https://gitlab.com/marc-bruenisholz/eson-textmate-bundle [2] https://gitlab.com/marc-bruenisholz/eson-lezer-grammar
Newly Registered Domain block lists for PiHole[0]. This has worked hands-off for the last two days after a couple of weeks of tweaking. My PiHole blocklist is currently a bit over 14 million domains, of which around 8 million are NRDs.
My next item is to add AbuseIPDB IP addresses to my "Uninvited Activity"[1] IP address blocking system, implementing xRuffKez's script here: https://github.com/xRuffKez/AbuseIPDB-to-Blackhole
Unfortunately, but also understandably, AbuseIPDB limits their free-access (account required) API to 10,000 IP address records. So I might put the results into a database to aggregate multiple 10k pulls, assuming they're not always the same 10k.
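For what it's worth, the aggregation side can be tiny: a SQLite table keyed on the IP, upserting each pull so successive 10k-record responses accumulate into one growing set. The endpoint and JSON field names below are from memory of the AbuseIPDB docs, so treat them as assumptions rather than verified API details:

```python
import sqlite3, time, requests

# Sketch: merge repeated AbuseIPDB blacklist pulls into one growing set of
# distinct IPs. Endpoint and field names are assumptions, not verified.
API_KEY = "..."  # free-tier key

db = sqlite3.connect("abuseipdb.db")
db.execute("""CREATE TABLE IF NOT EXISTS bad_ips (
    ip TEXT PRIMARY KEY,
    confidence INTEGER,
    last_seen INTEGER
)""")

resp = requests.get(
    "https://api.abuseipdb.com/api/v2/blacklist",
    headers={"Key": API_KEY, "Accept": "application/json"},
    params={"confidenceMinimum": 90},
    timeout=30,
)
now = int(time.time())
for row in resp.json().get("data", []):
    db.execute(
        "INSERT INTO bad_ips (ip, confidence, last_seen) VALUES (?, ?, ?) "
        "ON CONFLICT(ip) DO UPDATE SET confidence=excluded.confidence, last_seen=excluded.last_seen",
        (row["ipAddress"], row.get("abuseConfidenceScore"), now),
    )
db.commit()

# Export the accumulated set for the blocklist:
ips = [r[0] for r in db.execute("SELECT ip FROM bad_ips ORDER BY ip")]
```

Run it on a schedule and the table only ever grows (or ages out, if you later prune on last_seen), regardless of which 10k the API happens to return each time.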
Not a technical project in the typical sense, but I finally started working on a satirical faux-exposé series about my years working in the industry.
If anyone's interested in that kind of thing:
https://massiveimpassivity.substack.com/p/softcore-how-nobod...
I'm building a budget app for my wife and myself.
Basic goals:
- Web based for zero update latency
- Have it work offline
- Automatically import transactions from my banks
- No running/hosting cost
- Secure
Tools used so far:
- InstantDB for the datastore, providing the offline capability too
- A gmail account that automatically gets forwarded bank alerts for purchases
- Gitlab.com w/ scheduled pipelines for cron-based email syncing
- Netlify for the free hosting
- InstantDB magic codes / email links for securing the data
I'm at the point where I can track and categorize purchases, including split transactions.
Next steps:
- Add in date ranges for reporting / data views; e.g. show expenses incurred in a one month period instead of for all time.
- Add in planned / project transactions for month forecasting
- Statement import & import reconciliation and statement reconciliation
- Scrape company-specific digital receipt emails (like Amazon) to autopopulate more transaction data
And that'll be the end of the stuff I can do for free. I think I will add features that require money and/or dedicated hardware though:
- OCRing receipts -> autopopulated transaction data / description
- Using chatgpt to suggest categorizations
- Scrape extra data from my bank sites, like physical addresses of entities involved in charges.
Been building my YouTube channel where I cover things like Apple Vision Pro development — as well as some new storytelling directions for me, like my newest short on the Voyager mission's camera problems when it was 2 billion miles from earth:
http://youtube.com/@dreamwieber
In parallel I'm working on a bunch of apps for Vision Pro -- my most well-known at the moment being Vibescape which was featured recently by Apple: https://youtu.be/QcTiDBtCafg
To round this out, my wife and I are converting a historic farm in the Pacific Northwest to regenerative agriculture practices. So far we've restored over 20 acres of native ecosystems.
If that's interesting to you there's a channel here:
https://github.com/patched-network/vue-skuilder https://patched.network
FOSS toolkit for SRS and adaptive tutoring systems. Inching closer to proper demos and inviting usage.
In essence, I'm looking to decouple ed-tech content authoring (eg, a flash card, an exercise, a text) from content navigation (eg, personalizing paths and priorities given individual goals and demonstrated competencies), allowing for something like a multi-sided marketplace or general A/B engine over content that can greatly diminish the need to "build your own deck" for SRS to be effective.
Project became my main focus recently after ~8 years of tiny dabbling, and I've largely succeeded at pulling spaghetti monolith into a sensible assembly of packages and abstractions. EG, the web UI can now pull from either a 'live' couchdb datalayer or from statically served JSON (with converters between), and I'm 75% through an MVP tui interface to the same system as well.
My team and I are building a web app that enables any business to chat with any other business in any language. Details:
- It's B2B only - you can't register with a free email provider, you've gotta own a real domain
  - Therefore identities are collective - companies, not company employees
  - Therefore all interactions are persisted at the org level rather than assigned to individual inboxes
- It allows you not just to talk but also to work together on contracts. We built a contract parser that turns contract clauses into smart, plain-language objects
We're calling it Geneva and doing a friends/family/acquaintances exploratory release as I type this.
http://genevabm.com http://x.com/genevab2b https://www.linkedin.com/company/genevabm/
My Carnatic Raga classifier is progressing very well. I am now training a classifier to identify 142 ragas.
A bit of background: I have been working on a Raga classifier since November of last year - I started with just 2 ragas and a couple of megabytes of audio. After experimenting with a lot of different ideas and neural net architectures, I finally landed on one that could scale. I increased to 4 ragas, then 12, then 25 and then to 65.
All the training is done locally on my desktop (RTX4080, AMD 7950X, 64G RAM). My goal is to make an app for fast inferencing (preferably CPU) and to get this app in the hands of enthusiasts so that I can get some real data on its efficacy. If that goal is hit, then my plan is to iterate and keep increasing the raga count on the model and eventually release to the public. As long as I can get the model to either run locally or for very cheap on server, I hope to not charge for this.
It has been an amazing learning experience. The first time I got a carnatic singer to sing and the model nailed almost all ragas was the highest high I've felt in a while.
I’d love a pointer to this when it’s shareable!
Absolutely!
Wow! I would love to try it out whenever a demo is available.
I posted a couple of months ago:
https://github.com/dahlend/kete
Research grade orbit calculations for asteroids and comets (rust/python).
I began working on this when I worked at Caltech on the Near Earth Object Surveyor telescope project. It was originally designed to predict the location of asteroids in images. I have since moved to Germany for a PhD, and I am actively extending this code for my PhD research (comet dust dynamics).
It's made to compute the entire asteroid catalog at once on a laptop. There is always a tradeoff between accuracy and speed; this is tuned to be accurate to <10 km over a decade for basically the entire catalog, but giving up that small amount of accuracy gained a lot of speed.
Example, here is the close approach of Apophis in 2029:
https://dahlend.github.io/kete/auto_examples/plot_close_appr...
A Parquet file compactor. I have a client whose data lakes are partitioned by date, and obviously they end up with thousands of files all containing single/dozens/thousands of rows.
I’d estimate 30-40% of their S3 bill could be eliminated just by properly compacting and sorting the data. I took it as an opportunity to learn DuckDB, and decided to build a tool that does this. I’ll release it tomorrow or Tuesday as FOSS.
Load the data into MergeTree instead? https://clickhouse.com/docs/engines/table-engines/mergetree-...
Published here: https://codeberg.org/unticks/comparqter
We're building Zigpoll (https://www.zigpoll.com), a survey platform focused on zero-party data collection — think post-purchase attribution, customer feedback, and segmentation — all done directly on your site without relying on third-party cookies or offsite links.
We initially built it for Shopify, but now it’s fully embeddable, supports headless implementations, and integrates with tools like Klaviyo, Zapier, n8n, and Snowflake. One thing we’re especially proud of is how fast and unobtrusive it is: polls load async, don’t block rendering, and are optimized for mobile and low-latency responses.
From a tech angle:
Frontend is all React, optionally SSR-safe.
Backend is Node.js + Postgres, with a heavy focus on queueing + caching for real-time response pipelines.
API-first design (public API just launched: apidocs.zigpoll.com).
We recently open-sourced our n8n integration too.
If you're a dev working on ecom, SaaS, or even internal tooling and need a non-annoying way to collect structured feedback, happy to chat or get you set up. Feedback welcome — especially critical stuff. Always looking to improve.
The premise is that when I read social spaces like Reddit or X, if the government has done anything contentious you get nothing more than strident left takes or strident right takes on the topic, neither of which is informative or helpful.
So I have set up a site which uses AI, specifically guided to be neutral and non-partisan, to analyse government actions from the source documents. It then gives a summary, the expected effect, benefits and disadvantages, and ranks the action against 19 "things people care about" (e.g. defence, environment, civil liberties, religious protection, etc.)
The end result is quite compelling. For example here's the page that summarises all the actions which are extremely beneficial or disadvantageous to individual liberties: https://sivic.life/tyca/tyca_individual_liberties/
Looks like the site is down.
Hi, thanks for taking an interest in our site :-)
The site is up now as far as I can tell. We were doing some updates a couple of hours ago which might have been when you tried it. Please have another go.
Insurance is negative NPV. Trying to make it NPV neutral by giving people tools to self-insure. Starting with an app that lets you self-insure your phone with friends and family.
https://apps.apple.com/us/app/open-insure-self-insurance/id6...
This is cool. I'm interested in reading more on this concept. Any chance you have a write up of experiences so far? Or can you point to other resources?
This is interesting. Is this an experiment or are you planning to make it real? What markets is it targeting?
It’s real, my friends and I all pay premiums every month, we’ve put aside $1100 so far. Work on it nights and weekends with one of my fellow policy holders. Feedback would be super appreciated.
Interesting approach. How would you make money?
Love this, signing my family up now. Absolutely hate apple care
I'm working on a little website to summarize discussion trends across the podcast ecosystem. I wrote about an early prototype here[1] and also gave a presentation about it a few months ago[2] and now I'm working on an expanded "daily pulse" view across hundreds of episodes of top news podcasts from the last few days.
My secret agenda is to explore how the "information supply chain" can be tracked across the data-processing stack all the way from the original audio through transcription, the processing pipeline, and UI. I'm using language models for multi-stage summarization and want to be able to follow the provenance of summaries all the way back to the transcripts and original audio.
This is a super neat concept. I would find it really cool to be able to see a "map" of podcast topics, where I can click through to specific segments in specific podcasts. Even cooler would eventually be the ability to stitch together clips about the same topics from separate podcasts.
This is such a good visualization idea. I'd like to see some of the webinars and work calls I am on represented this way in the after-call summary
Thanks!
Yes, you could try making one using Observable Plot (which is what I used for these): https://observablehq.com/plot/transforms/dodge
One of the slides in my presentation has the full prompt I used, in case that's useful. I ran it on chunks of the podcast transcript and then merged/deduplicated the results to get the data that's visualized here.
Been working on a small team exploring what the intersection of coaching, employee engagement, and AI looks like. (https://engage.myemmaai.com/)
Most employee engagement software is just placation for HR. When it's common that the lowest-scoring question on feedback cycles is "I believe that action will be taken based on the results of this feedback," there's something fundamentally broken with how companies handle feedback, and with how the tools they're given enable them to react to it.
Our end goal is to help leaders and managers identify problems with trust and communication within a team. The reality is, 90% of the time, the problem lies with the leadership itself. We're trying to provide both the tools to diagnose what the problems are, and frameworks for managers to fix them.
Looks neat! Keen to see where you go with this. I cofounded a small training company and we had something similar in mind for a while: a digital tool giving managers and orgs insight into team engagement, with diagnosis and tools based on our leadership framework. In the end we focused on consulting and in-person engagements but I think there's a lot of potential in what you're building.
Coaching is still a big part of what we're building, and I think it will be an ongoing option for orgs to have the option to bring in coaches to dive into their leadership and communication issues.
Along with what feels like billions of others right now, I'm building a pet project as a learning exercise involving RAG agents, MCP and locally hosted LLMs to work with a 15-year-old pile of proprietary wiki data and a large 20-year-old codebase.
What kind of hardware are you using to run the LLMs locally?
Any pointers?
I am building a Hardware Design Language for FPGA accelerators.
The big trick of the language is that it doesn't hide the pipelining you have to do to raise your FMax; instead, you can manually add register stages in the places where they're important, and the compiler will synchronize the other paths.
A really neat trick with this pipelining system is that submodules can respond to the amount of pipelining around them (through inferring template parameters). This way the programmer really doesn't have to think about the pipelining they do add. Examples are a FIFO's almost_full threshold, inferring how many simultaneous states there need to be for a pipelined loop, inferring the depth of BRAM shift regs, etc.
I built an IPC/RPC shim for a Chrome extension so I can send strongly-typed messages between isolated JS contexts that otherwise expose wildly inconsistent messaging APIs.
I discovered that VSCode has a very nice solution, so I pulled the core VSCode libraries and injected them into a Chrome extension, using their dependency injection, IPC/RPC, and eventing to bridge the gap between all of these isolated JS contexts and expose a single, strongly-typed messaging API. My IPC/RPC shim sits on top of each of the native environments and communication mechanisms.
Yesterday, Microsoft released the source code for the Copilot chat. Apparently, since the basis of my Chrome extension is the same core libraries I can drop the VSCode chat UI into the side panel without much friction. Although, I might continue to use Microsoft's FluentUI chat currently implemented in the extension.
Because Copilot chat has a lot of code that runs in Node in Electron, I'm now working on porting all the agent capabilities for browser automation from the Copilot chat, including the code for intent, prompt creation, tools, disambiguation, chunking, embedding, etc. I'm 4 to 6 weeks away from having feature parity with Playwright for automation from a Chrome extension side panel that can do most of the inference locally using Hugging Face's Transformers.js. There are also heuristics exposed as tools, such that if the intent is playing a video, all that is required is a tool that collects all the video tags and related elements with metadata. No need to spend $10 in tokens to figure out which video element to play.
Yeah, I think I'm 4 to 6 weeks away from having a Copilot chat in a browser doing agent automation.
If you want to see where I'm at today, https://github.com/adam-s/doomberg-terminal.
> AI-Powered News Intelligence
When I did Grub the crawler back in the day, that's what I was shooting for!
If you want a jumpstart on the Playwright stuff: https://github.com/kordless/gnosis-wraith. Runs on Google Cloud Run. The UI is still in progress but you can test it here: https://wraith.nuts.services. Uses tokens to email for login.
The extension stuff is the way to go, IMHO! You can capture any page, even automatically.
That is awesome! Thank you for sharing!
I have a nice garden going right now. TAM Jalapeños have taken the longest to flower, almost thought they wouldn’t. Sweet cherry peppers have been plentiful. Lost my zucchini crop to squash vine borers.
Vine borers got mine too. First time in a long time. I'm in central Texas.
But no hornworms or caterpillars this year. Very strange!
Currently working on a side project to keep track of pricing in the supermarkets, since prices have risen, even skyrocketed in some cases, without any real explanation since the start of the war in Ukraine. In some cases I've seen a product go up by as much as 1€ over a few months. The way I track pricing changes is to take a picture of a supermarket receipt, store the information, and calculate the price diff for each article. For extracting the information I use Textract from AWS, as it fits my needs; however, it could be replaced with any other OCR that can extract structured information. My purpose is to improve the project gradually and add statistical information, like charts showing price changes over time, the most-bought products in a period of time, etc.
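For anyone curious about the Textract side, boto3's analyze_expense call is roughly what you would reach for with receipts; the response parsing below is my assumption about the ExpenseDocuments shape, not this project's actual code:

```python
import boto3

# Rough sketch of pulling line items from a receipt photo with Textract.
# Assumes the AnalyzeExpense response layout (ExpenseDocuments ->
# LineItemGroups -> LineItems); adjust to whichever fields you need.
textract = boto3.client("textract", region_name="eu-west-1")

with open("receipt.jpg", "rb") as f:
    resp = textract.analyze_expense(Document={"Bytes": f.read()})

for doc in resp["ExpenseDocuments"]:
    for group in doc.get("LineItemGroups", []):
        for item in group.get("LineItems", []):
            fields = {
                f["Type"]["Text"]: f["ValueDetection"]["Text"]
                for f in item.get("LineItemExpenseFields", [])
                if "Type" in f and "ValueDetection" in f
            }
            # Typical keys are ITEM and PRICE; store these per receipt and
            # diff against earlier receipts to compute the price changes.
            print(fields.get("ITEM"), fields.get("PRICE"))
```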
As the project relies on AWS services, I have decided not to publish a free demo, as it would be exposed to abuse from people sending all kinds of images, not to mention crazy billing. However, here is a screenshot of how it displays the receipt details:
There is a price increase on one article, and a decrease on some others.
The project is built using Ruby on Rails and runs on docker. It is light enough to run on a cheap VPS.
Feel free to take a look at it: https://gitlab.com/cbolanos79/sbt_rails
Any constructive opinion/collaboration is welcome :)
hey there, I spent some time working on something very similar, but I am based in the US! I was able to import receipts from Safeway & Costco here, using private APIs. But since stores here don't really support OAuth, and using a private API may not be the best long term approach, I paused my effort. I would love to chat if you are interested. I used react to build a prototype.
I'm working on a free high performance address matching (geocoding) library. As it turns out I blogged about it just today: https://www.robinlinacre.com/address_matching/
Repo: https://github.com/ilmoraunio/conjtest
Conjtest is a policy-as-code CLI tool which allows you to run tests against common configuration file formats using Clojure. You can write policies using Clojure functions or declarative schemas against many common configuration file formats such as YAML, JSON, HCL, and many others (full list in repo).
Under the hood, it uses Babashka and SCI (Small Clojure Interpreter) to run the policies and Conftest/Go parsers for compatibility with Conftest (https://www.conftest.dev/). It’s also possible to bring your own parser or reporting engine using Babashka scripting.
The initial big pieces are in place now. I'm planning to spend the end of the year talking about Conjtest and gathering some feedback/issues to work on.
Hey HN – I'm working on OneBliq https://onebliq.com, a lightweight tool to help teams plan and track Azure costs collaboratively, without the usual enterprise overhead.
We built it because managing cloud budgets often turns into a spreadsheet mess, or worse, a never-ending consulting engagement. OneBliq lets you:
* Split and allocate Azure costs by cost centers, teams, or projects
* Visualize current spend and attention areas at a glance
* Experiment with plans and projections without complex tooling
* Skip sales calls and long onboarding – just install and kick the tires
It's still early, but we're seeing traction with teams who want clarity without complexity. Happy to answer questions, share more, or get feedback.
Would love your thoughts – what would make a tool like this useful (or useless) for you?
I just started on an old-school forum software. Go + Sqlite. Good old server-side HTML templating.
Why? I don’t like Discourse and Flarum that much. I want an even simpler solution with fewer bells and whistles.
But I guess the market is dead anyways for forums. I might replace my phpBB instance that has been running for 15 years.
I imagine this is really fun to make.
I can't remember a time where it's felt more fun to decide "I'm just going to make this web thing the way we used to make web things."
Is it ready enough to share a link to your work?
Thanks for asking, but not yet.
I've been working on my own version of a literate programming system (https://github.com/adam-ard/organic-markdown). It's kind of a mix of emacs org-mode, jupyter, and Zettelkasten. But, because it's based on standard pandoc-style markdown, you can use it with a much wider range of tools. Any markdown editor will do.
Even though I made it as a toy/proof of concept, it's turned out to be pretty useful for small to medium size projects. As I've used it, I've found all kinds of interesting benefits and helpful usage patterns. I've tried to document some; I hope to do more soon.
--https://rethinkingsoftware.substack.com/p/the-joy-of-literat...
--https://rethinkingsoftware.substack.com/p/organic-markdown-i...
--https://rethinkingsoftware.substack.com/p/dry-on-steroids-wi...
--https://rethinkingsoftware.substack.com/p/literate-testing
--https://www.youtube.com/@adam-ard/videos
The project is at a very early stage, but is finally stable enough that I thought it'd be fun to throw out here and see what people think. It's definitely my own unique spin on literate programming and it's been a lot of fun. See what you think!
For fun, I am building a little tool called 'domain-manager'. Basically just a binary that automates configuring a Linux host to run a bunch of WordPress/Laravel/PHP sites.
It creates all the necessary boilerplate to generate PHP Docker containers, creates all of the MySQL users, and sets up all of the directory structures to get a new website up and running. It even helps set up SFTP users and gets letsencrypt certificates set up with certbot.
It's still very early days, but I appreciate that what used to be a bunch of commands that I would run by hand and slightly change every few months is now pretty much just all self contained. Should mean the next migration to a different server is easier.
Created in frustration because I was too cheap to pay the $50/month for a cPanel license.
Still building: https://www.saner.ai/
The ADHD-friendly AI personal assistant for notes, email, and calendar.
Where you can just chat to search notes, manage emails, and schedule tasks. It proactively plans your day every morning and checks in to help you stay on top of everything.
Continuing to plug away at Trilogy, a better SQL for data consumption and analytics. Getting closer to core feature completeness, at which point can pivot to focus on integrated visualizations + pre-processing/ETL.
Most recently have been focused on better geographic visualizations in the public studio for people to experiment - getting decent automatic lat/long, want to have easy path visualizations (start/end, etc). More AI-accelerated options as well, especially around model authoring.
Repo: https://github.com/trilogy-data/pytrilogy Studio: https://trilogydata.dev/studio-core/
I built a hardware server monitor with LED display based on the ESP8266. I needed 8 fewer things to think about in the morning. If you want, you can build one yourself, I released the hardware and firmware: https://github.com/seanboyce/servermon
Next up is a small lamp for migraines. I noticed that dim red light is much more tolerable to me than anything else. I mean obviously, darkness is ideal, but you need to do other stuff like eat and drink eventually if it's a persistent one.
So I designed a quick circuit to use fast PWM (a few MHz, so no flicker) to control a big red LED. I'd like it to be sturdy and still functional in 50-100 years, so I made some design choices for long-term durability. No capacitors, replaceable LED and so on.
A simple project, but it's a busy month and I need something easy this time.
I'm putting the finishing touches on my free daily word game, Omiword[1][2]. I had it basically finished, with the option for players to make a one-time $5 payment to unlock access to the archives, but then Stripe shut down my account, claiming it was a "restricted business".[3] I'm now reworking it to try to fund it through Patreon, we'll see how that goes.
A better paint-by-numbers generator than what I found online.
Examples wiki: https://github.com/scottvr/pbngen/wiki
The code: https://github.com/scottvr/pbngen
A tool for creating WCAG/ADA accessible Tailwind-like color palettes. :)
https://www.inclusivecolors.com/
The idea is it helps you create palettes that have predictable color contrast built-in, so when you're picking color pairs for your UI/web design later, it's easy to know which pairs have accessible color contrast.
For example, you can design your palette so that green-600, red-600, blue-600, all contrast against grey-50, and the same for any other 600 grade vs 50 grade color, like green-600 vs green-50.
That way you won't run into failing color contrast surprises later when you need e.g. an orange warning alert box (with different variations of orange for the background, border, heading text and body text), a red danger alert box, a green success alert box etc. against different color backgrounds.
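The nice thing is that "predictable contrast" falls straight out of the WCAG 2.x relative-luminance formula, so a whole palette can be verified programmatically. A minimal sketch of that check (the hex values are hypothetical placeholders, not output from inclusivecolors):

```python
def _linearize(channel: float) -> float:
    """sRGB channel (0-1) -> linear value, per the WCAG 2.x definition."""
    return channel / 12.92 if channel <= 0.03928 else ((channel + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color: str) -> float:
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))
    return 0.2126 * _linearize(r) + 0.7152 * _linearize(g) + 0.0722 * _linearize(b)

def contrast_ratio(fg: str, bg: str) -> float:
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Hypothetical palette entries: check every 600 grade against every 50 grade.
palette_600 = {"green-600": "#16a34a", "red-600": "#dc2626", "blue-600": "#2563eb"}
palette_50 = {"grey-50": "#f9fafb", "green-50": "#f0fdf4"}

for name6, c6 in palette_600.items():
    for name0, c0 in palette_50.items():
        ratio = contrast_ratio(c6, c0)
        print(f"{name6} on {name0}: {ratio:.2f}  {'AA pass' if ratio >= 4.5 else 'fail'}")
```

If every 600-vs-50 pair clears 4.5:1 at design time, any later mix-and-match of those grades (alert boxes, badges, borders) inherits the guarantee for free.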
Updating a treatment of a finite difference approach to Schrodinger's equation from WebGL to WebGPU, using WebGPU compute shaders. Having actual arrays for data storage is so much cleaner than the older approach with textures for data storage and fragment shaders for computations. https://www.vizitsolutions.com/portfolio/webgpu/compute/ Once this is caught up with the earlier version, I'll be extending it in terms of additional numerical issues and techniques and use it to build explorable educational content in 1-D quantum mechanics. Eventually, on to 2-D quantum mechanics.
I welcome feedback, just keep in mind that this is a work in progress, and I haven't even reviewed it for clarity and typos.
Currently building a SaaS for product management to make up for some of my own defects. Looking for beta testers if anyone is interested in trying it and providing feedback. There will definitely be bugs but I plan to start using it for myself to help me keep up on some projects I'm working on. The big thing I'm excited about is AI Risk Analysis, which will review the dependencies and hopefully catch bottlenecks and issues before they cause misses. Current URL is below. Name is pending. Just had Google tell me something that sounded cool.
Open source quiz creator to create quizzes by pasting in text or selecting from a large range of historical categories.
Started as a very simple app for me to play around with OpenAI’s API last year then morphed into a portfolio project during my job search earlier this year. Now happily employed but still hacking on it.
Right now, a user can create a quiz, take a quiz, save it and share the quiz with other people using a URL.
Demo: You can try out the full working application at https://quizknit.com
Github Links: Frontend: https://github.com/jibolash/quizknit-react , Backend: https://github.com/jibolash/quizknit-api
I'm building an AI for Customer Support.
Here's the summary:
- reads all your sources - public websites, docs, video
- answers questions with a confidence score, citations, and no hallucinations
- cuts support time and even integrates directly into your customer-facing chatbots like Intercom
Still deliberating on the business model. If anyone would be interested in taking a look, I would love to show you.
I think if you allow a set of YouTube videos as input, it'll be quite powerful coupled with transcription ability of LLMs. Lots of people consume content that way. As an added bonus, you can show the performance summary about the sections the user did well or not so well on with video links to those timestamps for them to go back and review.
I've been hacking on an Icecast-compatible server in Erlang. You can feed an FFmpeg Icecast stream into the server and listen to it with any Icecast-compatible player. I think it's kind of neat; I do some extra things that the official Icecast server doesn't give you.
I store the chunks in a custom-built database (on top of riak_core and Bitcask), and I have it automatically make an HLS stream as well. This involved remuxing the AAC chunks into MPEG-TS and dynamically creating the playlist.
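The playlist side of HLS is pleasantly simple: a live media playlist is just a rolling text file over the last few segments. The project itself is Erlang, but here is a quick Python sketch of what "dynamically creating the playlist" amounts to (segment names and durations are made up):

```python
def render_live_playlist(media_sequence: int, segments: list[tuple[str, float]]) -> str:
    """Render a rolling HLS media playlist for the most recent segments.

    `segments` is a list of (uri, duration_seconds) for MPEG-TS chunks,
    e.g. [("chunk-0042.ts", 10.0), ("chunk-0043.ts", 10.0), ...].
    """
    target = max(int(round(d)) for _, d in segments)  # must cover the longest segment
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{target}",
        f"#EXT-X-MEDIA-SEQUENCE:{media_sequence}",
    ]
    for uri, duration in segments:
        lines.append(f"#EXTINF:{duration:.3f},")
        lines.append(uri)
    return "\n".join(lines) + "\n"

# A live stream re-serves this file as new chunks land, bumping MEDIA-SEQUENCE
# each time the oldest segment drops off the front of the window.
print(render_live_playlist(42, [("chunk-0042.ts", 10.0),
                                ("chunk-0043.ts", 10.0),
                                ("chunk-0044.ts", 9.98)]))
```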
It's also horizontally scalable, almost completely linearly. Everything is done with Erlang's internal messaging and riak_core, and I've done a few (I think) clever things to make sure everything stays fast no matter how many nodes you have and no matter how many concurrent streams are running.
This sounds super cool! Any public code you can share?
I’m afraid not; I have become a tiny bit disillusioned with open source and I’m keeping some of my projects to myself now.
I’ll probably release the code I wrote for the input radio station but that’s just a glorified script written in Rust and calling FFmpeg. The only fun part of that is I call OpenAI to get AI commercials and DJ chatter.
I'm working on a new kind of encyclopedia and reference website that is way more engaging and rich than a single page of text on every subject like Wikipedia. Back in my childhood we had Microsoft Encarta on CD-ROM and it was a very compelling multimedia experience; I want to emulate something like that on the modern web by combining video, sound, images and text in a more compelling user experience with great discoverability. I've been working on the first version for about eight months and I hope to launch it next week at https://reference.org
I've been quite obsessed about ramping up (technically complex, not basic crud/wrappers) SaaS development with Gen AI tools, speeding things from months to weeks to days. But then I hit a snag: operations are the new bottleneck. How can I support all of these products, let alone promote them or find customers? My focus shifted to agents, and I realized that access for these AI bots was a major hurdle, despite all the MCPs available.
The thing is, we’ve been retrofitting software made for humans for machines, which creates unnecessary complications. It’s not about model capability, which is already there for most processes I have tested; it’s that systems designed for people are confusing to AI and don't fit their mental model, making the proposition of relying on agents to operate them a pipe dream from a reliability or success-rate perspective.
This led me to a realization: as agentic AI improves, companies need to be fully AI-native or lose to their more innovative competitors. Their edge will be granting AI agents access to their systems, or rather, leveraging systems that make life easy for their agents. So, focusing on greenfield SaaS projects/companies, I've been spending the last few weeks crafting building blocks for small to medium-sized businesses who want to be AI-native from the get-go. What began as an API-friendly ERP evolved into something much bigger, for example, cursor-like capabilities over multiple types of data (think semantic search on your codebase, but for any business data), or custom deep-search into the documentation of a product to answer a user question.
Now, an early version is powering my products, slashing implementation time by over 90%. I can launch a new product in hours supported by several internal agents, and my next focus is to possibly ship the first user-facing batch of agents this month to support these SaaS operations. A bit early to share something more concrete, but I hope by the next HN thread I will!
Happy to jam about these topics and the future of the agentic-driven economy, so feel free to hit me up!
My wife and I recently started sharing our passion for cooking at https://soulfulsabor.com. Got WordPress set up to get things running and focus on the cooking and photos. WordPress turned out to be hyper complex for my taste and needs plugins for a lot of things, so I'm starting to develop my own static site specifically for food blogs - not wanting to turn it into a product, just to add simplicity to our own workflow. The cooking side of the project is really fulfilling; after a long day at the computer, it feels great to do something tangible with quick results. Got a bunch of bread recipes to upload soon.
I've been working on a macOS app for the last couple of weeks. Got it approved by Apple today, YAY!
It's called Heap. It's a macOS app for creating full-page local offline archives of webpages in various formats with a single click.
Creates image screenshot, pdf, markdown, html, and webarchive.
It can also be configured to archive videos, zip files etc using AppleScript. It can do things like run JavaScript on the website before archiving, signing in with user accounts before archiving, and running an Apple Shortcut post archiving.
I feel like people who are into data hoarding and self host would find this very helpful. If anyone wants to try it out:
https://apps.apple.com/ca/app/heap-website-full-page-image/i...
I've built a Reddit-like community platform in Go. Users can create their own sub-communities, and within them, set up different categories and boards. Posts can be voted on, and board types can include regular posts, Q&A, or live chat. It's like a hybrid of Reddit and Discord but leans more towards a traditional web community. It also supports server-side rendering, making it SEO-friendly. This project is an extension of my previous Hacker News clone, dizkaz (https://news.ycombinator.com/item?id=43885998). I'm currently working on implementing submission rate limiting and content moderation, which is a bit challenging, but it should be ready for launch soon.
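The project is in Go, but on the rate-limiting part: a simple per-user sliding window is often enough before reaching for anything fancier. Roughly this idea (constants are arbitrary, shown here as a Python sketch):

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `limit` submissions per `window` seconds per key (e.g. user id)."""

    def __init__(self, limit: int = 5, window: float = 60.0):
        self.limit = limit
        self.window = window
        self.events: dict[str, deque] = {}

    def allow(self, key: str) -> bool:
        now = time.monotonic()
        q = self.events.setdefault(key, deque())
        # Drop timestamps that have fallen out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True

limiter = SlidingWindowLimiter(limit=5, window=60)
print(limiter.allow("user-123"))  # True until 5 submissions land within the last minute
```

The same shape works per-IP for anonymous endpoints; the harder part, as the comment says, is moderation rather than the limiter itself.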
Just like many others, I also have this problem: it’s getting harder and harder to find the articles I’ve read later. There’s so much great content I come across every day that, in my experience, it’s rather difficult, if not impossible, to find it again by searching Google. So I started building a personal solution to keep the stuff I read searchable.
There are existing tools that try to solve this, but I have some unique requirements that pushed me to create my own. Things like Telegram integration, weekly summaries, and indexing YouTube content. More importantly, I felt the pain enough that I was motivated to work on it as a side project.
A good example is the frustration and disappointment I had with Pocket: the search functionality was terrible, and each update seemed to make the product less usable.
I’ve now got a working prototype, and I’m putting on the final touches. Let me know if you have any thoughts, or email me if you’d like to try it out.
I'm building https://zenquery.app — a tool for querying large CSV, JSON, Parquet and Excel files using plain English. No SQL or coding required.
As a data engineer, I regularly have to dig through massive files to debug issues or validate assumptions — things like missing column values, abnormally large timestamps, inconsistent types, or duplicate records. It’s tedious and time-consuming, and that’s what led me to build this.
ZenQuery makes it quick and easy to explore data locally, without needing to spin up notebooks, write scripts, or upload anything to the cloud. It’s also useful for doing lightweight analytical QA if you're working with business data.
Happy to answer any questions.
Hey.. we have been using this one at our company for the past week. It's working great.
But, can you please add gdrive connection support to it? Our company mainly uses gdrive for all collaboration and would help to have a direct integration with it. As of now I first have to download the files (they are small files, but still).
Great product otherwise. Best wishes..
Glad to know it's working great.
Regarding gdrive integration, it's already in my todo. Have received the same request from one other person.
Will bump up this feature's priority.
Thanks..
Looks pretty cool. You should add a pricing section though. I thought the only cost one would have with this would be the LLM api costs.
Hey, thanks for checking it out. I have put pricing above the Download section, but I guess I could do better to make it more visible.
Will update.. thanks again..
Also, for marketing teams it would be helpful to have a subscription-based team bundle plan instead of a one-time purchase per device! For example, at my company I know that our marketing team would benefit a lot from using a tool like this. Anyways, great tool. Good job
Noted.
Will think on implementing this correctly since this will also need SSO integration for auth along with auditing and rbac controls.
Thanks for the suggestion :)
For SSO, RBAC, etc, check out https://workos.com
I’m the founder :) Happy to help!
Hey hi...
Will check it out. Thanks for mentioning :)
a Slack and Discord app to help take turns (i.e. queue) with your teammates, overwhelmingly used for sharing tech resources like staging servers. It's crazy something that started so tiny (almost as a joke for my old workplace) has grown into my main "thing".
an infrastructure configuration monitoring solution for terraform/opentofu managed stacks. I am unsure how to proceed with this tbh. It's sort of the underdog in this space - it's much cheaper than the competitors. But really, it's yet to make a dent.
(I am maybe prowling around for something new to build)
I have been writing a few technical posts about how ML is used to show ads: https://satyagupte.github.io/posts/how-ads-work/
I was hoping to make a piano practice assistant for my kids, that would take sheet music in MusicXML format, listen to the microphone stream, and check for things they frequently miss like rests, dynamics, consistent tempos.
Surprisingly the blocker has been identifying notes from the microphone input. I assumed that'd have been a long-solved problem; just do an FFT and find the peaks of the spectrogram? But apparently that doesn't work well when there's harmonics and reverb and such, and you have to use AI models (google and spotify have some) to do it. And so far it still seems to fail if there are more than three notes played simultaneously.
Now I'm baffled how song identification can work, if even identifying notes is so unreliable! Maybe I'm doing something wrong.
Here's an algorithm I cooked up for my (never completed) master's thesis:
It's based on the assumption that the most common frequency difference in all pairs of spectrum peaks is the base frequency of the sound.
- For the FFT, use the Gaussian window because then your peaks look like Gaussians - the logarithm of a Gaussian is a parabola, so you only need three samples around the peak to calculate the exact frequency.
- Gather all the peaks along with their amplitudes. Pair all combinations.
- Create a histogram of frequency differences in those pairs, weighted by the product of the amplitudes of the peaks.
When you recognise a frequency, you can attenuate it with a comb filter and run the algorithm again to find another one.
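If it helps anyone, here is the gist of that recipe as a small NumPy sketch (my own reading of the steps above, with arbitrary constants - not code from the thesis):

```python
import numpy as np

def estimate_f0(frame: np.ndarray, sr: int, n_fft: int = 8192, max_peaks: int = 20) -> float:
    """Guess the fundamental as the most common (amplitude-weighted) spacing
    between spectral peaks. All constants are arbitrary, not tuned values."""
    n = len(frame)
    # Gaussian window: the log of a Gaussian-windowed peak is ~parabolic,
    # so three bins around a maximum give a good frequency estimate.
    window = np.exp(-0.5 * ((np.arange(n) - n / 2) / (n / 6)) ** 2)
    spec = np.abs(np.fft.rfft(frame * window, n_fft))
    log_spec = np.log(spec + 1e-12)

    # Local maxima, strongest first.
    idx = np.where((spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:]))[0] + 1
    idx = idx[np.argsort(spec[idx])[::-1][:max_peaks]]

    peaks = []
    for i in idx:
        # Parabolic interpolation on the log spectrum around the peak bin.
        a, b, c = log_spec[i - 1], log_spec[i], log_spec[i + 1]
        denom = a - 2 * b + c
        offset = 0.5 * (a - c) / denom if denom != 0 else 0.0
        peaks.append(((i + offset) * sr / n_fft, spec[i]))

    # Histogram of pairwise frequency differences, weighted by amplitude products.
    bins = np.zeros(2000)  # 1 Hz bins up to 2 kHz, arbitrary range
    for f1, a1 in peaks:
        for f2, a2 in peaks:
            d = abs(f1 - f2)
            if 20 <= d < len(bins):
                bins[int(d)] += a1 * a2
    return float(np.argmax(bins)) if bins.any() else 0.0
```

For polyphony you would then notch out the winning frequency with a comb filter and rerun it on the residual, as described above.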
Note detection works ok if you ignore the octave. Otherwise, you need to know the relative strength of overtones, which is instrument dependent. Some years ago I built a piano training app with FFT+Kalman filter.
Cool, I'll give it a shot. So far I've just been blindly feeding into the AI and crossing my fingers. I'll try displaying the spectrogram graphically, and I imagine that'll help figure out what the next step needs to be.
I was thinking this would be a good project to learn AI stuff, but it seems like most of the work is better off being fully deterministic. Which is maybe the best AI lesson there is. (Though I do still think there's an opportunity to use AI in translating a teacher's notes, e.g. "pay attention to the rest in measure 19", into a deterministic ruleset to monitor when practicing.)
I always wanted to do a keyboard/tablet combo (maybe they make these, I don't know).
The idea is a fully weighted hammer action keyboard with nothing else, such as the Arturia KeyLab 88 MkII, and add to that tiny LED lights above each key. And have a tablet computer which has a tutor, and it shows the notes but also a guitar hero like display of the coming notes, where the LED lights shine for where to press, and correction for timing and heaviness of press, etc.
Just writing posts for my blog on personal experiences with startups https://developerwithacat.com . Am taking a break from any serious building, bit tired of failing. Using the blog as a form of self therapy.
Building TenantFit: https://tenantfit.mtxvp.com/, a lightweight tool to help small landlords pre-screen rental applicants.
When you post a listing (e.g., on Facebook, Kijiji), you get tons of “Is this still available?” messages — but no useful info. TenantFit lets landlords collect basic answers (income range, pets, lifestyle) via a public link, then ranks responses to highlight promising leads.
No accounts or sensitive info collected from tenants (landlord does not even see candidate email until they reply), just a quick pre-screen before deeper screening to save time.
I’ve been building my wife a budget tracking dashboard for reporting on PPC ad campaigns.
At any given time, she’s working with any number of clients (directly or subcontracted, solo or as part of a team) who each have multiple, simultaneous marketing campaigns across any number of channels (google/meta/yelp/etc), each of which is running with different parameters. She spends a good amount of time simply aggregating data in spreadsheets for herself and for her clients.
Surprisingly we haven’t been able to find an existing service that fits her needs, so here I am.
It’s been fun for me to branch out a bit with my technology selections, focusing more on learning new things I want to learn over what would otherwise be the most practical (within reason) or familiar.
Wut.Dev (https://wut.dev) - a fast, client-side, privacy-focused, alternative to the AWS console.
I got tired of using the AWS console for simple tasks, like looking up resource details, so I built a fast, privacy-focused, no-signup-required, read-only, multi-region, auto-paginating alternative using the client-side AWS JavaScript SDKs where every page has a consistent UI/UX and resources are displayed as a searchable, filterable table with one-click CSV exports. You can try a demo here[1]
[1] https://app.wut.dev/?service=acm&type=certificates&demo=true
Unsolicited feedback (and take with grain salt since I’m probably not your target buyer)
- the subheading is describing the “how” not the “what”. Meaning, what would you use this product for?
- in general, all the headlines could be phrased from the "what a user would do" scenario. E.g. instead of saying "Resource Relationship Diagrams" … say "See Resource Relationships with Ease"
- if I’m understanding the tool correctly, this seems like a “lookup” tool. In which case lookup.dev is for sale … just fyi.
Much appreciated! I just put this homepage together recently, so this is really helpful feedback.
Great idea, i'm tired of aws console too
My background is in NLP, research, and startups. I joined a power company where I saw a clear opportunity to use AI for automating equipment inspection from drone images.
But the environment made it hard to move fast. The systems were outdated, and there wasn’t much support for building AI tools in-house. That experience made me realize I needed to grow beyond the modeling layer. There were things I wanted to build, but I didn’t yet have the full skill set to do it on my own.
So I’ve been learning full stack development. I had built a small chatbot app before, but this time I’m applying what I’m learning toward a focused MVP for the inspection work. It’s been a practical way to connect what I know with what I want to make real.
Funny, I actually have a very similar story, where the plan was also to use drones/AI for inspection of power equipment. For the same reasons as you I quit to work on my own projects, but I discarded the drone project and went another way. Best of luck!
Thanks. Would love to chat if you're available - my Twitter is @taha_moji
Went down a bitcoin rabbit hole and got really interested in how wallets are derived with seed phrases. I wanted a simple and secure way to generate them from the terminal but could not find anything so I built my own: https://github.com/rittikbasu/s33d
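For readers who haven't gone down that rabbit hole: the mnemonic-to-seed step is surprisingly small. BIP-39 is just PBKDF2-HMAC-SHA512 over the sentence, and BIP-32 derives the master key from that seed with one HMAC. A minimal sketch (no wordlist or checksum validation, which a real tool like s33d has to handle):

```python
import hashlib
import hmac
import unicodedata

def bip39_seed(mnemonic: str, passphrase: str = "") -> bytes:
    """BIP-39: seed = PBKDF2-HMAC-SHA512(mnemonic, 'mnemonic' + passphrase, 2048 rounds)."""
    mnemonic = unicodedata.normalize("NFKD", mnemonic)
    salt = unicodedata.normalize("NFKD", "mnemonic" + passphrase)
    return hashlib.pbkdf2_hmac("sha512", mnemonic.encode(), salt.encode(), 2048, dklen=64)

def bip32_master_key(seed: bytes) -> tuple[bytes, bytes]:
    """BIP-32: master private key and chain code from the seed."""
    digest = hmac.new(b"Bitcoin seed", seed, hashlib.sha512).digest()
    return digest[:32], digest[32:]

# Example with the well-known "abandon ... about" test mnemonic.
m = ("abandon abandon abandon abandon abandon abandon "
     "abandon abandon abandon abandon abandon about")
seed = bip39_seed(m)
key, chain = bip32_master_key(seed)
print(seed.hex())
```

Generating the mnemonic itself (entropy plus a SHA-256 checksum mapped onto the 2048-word list) is where a tool like s33d earns its keep, especially around doing it securely offline.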
A screen reader for linux. My aim is to carry around my Raspberry Pi 500 or some other mini keyboard with a tiny computer embedded in it and have it serve as a fully functioning computer.
My hope is to make it easier to use a computer blind than with my usual workflow with a monitor.
I'm working on Fro (https://fro.app)
Haven't released properly yet - not sure if it's stable but oh well.
I don't like using my personal email to sign up for things. But there are definitely things that I do want to sign up for - newsletters, try out some services.
I know there are temporary email services, but I actually want to use these services. Of course there is Apple email that forwards to your real email.
But, I also don't want to flood my inbox.
Anyway, I wanted to receive these transactional emails in my personal Slack.
So, that's what Fro is for (https://fro.app)
- Sign up
- Get an email address
- Link it to your Slack channel
And you can now catch up on those newsletters via Slack.
It's not working. When I try to connect with Slack I get this error: "Something went wrong when authorizing this app. Try going back and authorizing again. If problems persist, contact support for help."
Error details: invalid_team_for_non_distributed_app
I'm currently developing a middleware that connects Nvidia PhysX to GameMaker. There's still a lot of work left but I have most features working in some capacity. Dynamic and static actors, primitive/convex/triangulated shapes, joints, character controllers, GPU accelerated PBD particles and deformables, etc. GameMaker is primarily a 2D engine and offers limited options for 3D, but it is possible if you know how to use vertex buffers. I'll probably post it here once it's a little farther along, but I'm pretty proud of my progress so far. I'm hoping I can use it to support myself in some way, but there's a lot of anxiety in selling a niche project like this.
On a whim, I bought a pack of playing cards at the supermarket. Now I'm learning how to play card games.
The card maker has its own web site with the rules for playing all kinds of card games, and it's filterable by number of players, including many games for one person.
What's the name of the game?
I wanted to learn a bit about backend development, so I've been building my own version of soundcloud with supabase. Main thing I've learnt so far, auth is flipping complicated. But it's been really fun! The audio compression is done clientside with ffmpeg and WASM, I'm pretty pleased with that approach. Everything is pretty busted atm, but I'm trying to get to a 'walking skeleton' then polish. I've been devlogging the process as I go for fun.
This month I released USBSID-Pico v1.3 pcb via PCBWay and Retro8BitStore and yesterday firmware version v0.5.0-BETA. The new pcb now supports mixed MOS6581 / MOS8580 chips (voltage) at the same time and new firmware brings a lot of tweaks and improvements making Commodore64 digitunes play better on Windows.
USBSID-Pico is a RPi Pico (RP2040/W RP2350/W) based board for interfacing one or two MOS SID chips and/or hardware SID emulators over (WEB)USB with your computer, phone, ASID supporting player or USB midi controller.
More info at https://github.com/LouDnl/USBSID-Pico
I just watched https://www.youtube.com/watch?v=nh0SxO1y6I0
Well done. This is really cool.
Thank you!
I'm putting the finishing touches on VT[0] - a minimal AI chat client focused on privacy. No tracking, clean interface, with support for deep research, web search grounding, tool calls, and RAG... and more.
The code is all open source on GitHub[1]. Really close to shipping now - hope to share launch details soon.
These monthly HN threads have been great motivation for me to keep building consistently. Thanks everyone!
The VT app is now live at https://vtchat.io.vn/
I'm working on an online photo gallery platform for professional photographers. The MVP is ready, but I'm also using the opportunity to learn more about SEO, marketing, and communication. This is the URL: https://picstack.com
One interesting lesson is to see the effort involved in acquiring new customers and setting up funnels, especially when bootstrapping with a small budget. Sometimes, as developers, we are in our bubbles and don't realize how much skill one needs to figure out the customer acquisition domain.
Congratulations on making an excellent project. It seems to be exactly what professional photographers need. I wish you good luck!
https://pickyskincare.com - a tool that lets you find skin care products based on the ingredients you want and don't want in it. The main use case is for finding cheaper versions of a product you already like, or one without things you're allergic or sensitive to.
It's written in Elixir using Phoenix LiveView. There's almost no custom JavaScript beyond what the framework provides. The first load may take a while because it's on the cheapest fly.io tier and loads all known ingredients and products into memory at boot.
https://figma-to-react-native.com
Plugin to convert Figma designs to React Native code fully client-side.
And a complementary service that syncs the code directly to your filesystem in real time, plus an optional MCP server that adapts the generated code to your codebase so it fits your framework/libraries.
Source: https://github.com/kat-tax/figma-to-react-native
(includes cool tech like lightningcss-wasm for styles conversion and esbuild-wasm for client-side previews)
I’ve built the best way to learn over 120 languages to advanced levels (optimized for studying multiple languages in parallel): https://phrasing.app
I got the demo video produced, and a blog set up and seeded. You can see some of the science behind learning multiple languages at https://phrasing.app/blog/multiple-languages or follow my progress using Phrasing to learn 18+ languages at https://phrasing.app/blog/language-log-000
Now I’m working on the onboarding process, which I’m very excited about on both a product and a technical level. On the product level, it dovetails nicely into most of the shortcomings of the app. One solution to a dozen problems.
On the technical level, I’m starting to migrate away from reagent (ClojureScript react wrapper). The first step was adapting preact/signals-react to support r/atom, r/cursor, and r/reaction. This has worked beautifully so far and the whole module, with helpers, is less than 100 LoC. I’m irrationally excited about it, and every time I use any method it brings me a stupid amount of joy… especially since it’s exactly the same API as reagent.
For those curious, the next steps in the migration will be: upgrading to React 19 support once reagent ships with it (in alpha currently), then replacing the leaf components with hsx and working my way up the tree. No real code changes, just a lot of testing needed. Maybe at the end of it all, I can switch the whole app over to preact — will be interesting to test the performance differences.
As far as ideas I’m thinking about, I’m currently planning the next task in my head. This will be an (internal) clojure library that will hopefully have ClojErl (erlang), ClojureScript (js), and jank (C) interfaces, which means I’ll be able to write clojure once, and run on the server, browser, and mobile — all in their native environment. Needless to say, being able to write isomorphic clojure without running JavaScript everywhere has me almost as excited as my signals wrapper :D
I'm building the Enact Protocol: https://enactprotocol.com
Turn any command into an AI-discoverable MCP tool with a few lines of YAML:

    name: hello-world
    description: "Greets the world"
    command: "echo 'Hello, World'"

Any AI agent can search for "greeting" and use your tool. I'm also building the first registry at https://enact.tools
Still working on Alzo [1], my services startup for French architecture companies. These days I spend most of my time writing client-specific apps [2] that are hot-loaded on demand into my Elixir monolith. So far I like this architecture a lot because it is hard to break anything in it.
[2] https://lucassifoni.info/blog/leveraging-hot-code-loading-fo...
I've been building a satirical t-shirt brand for the miserably employed: https://www.miserablyemployed.com/
This is a rising creator's channel on happily unemployed. I loved every bit of it.
My 5th gap year (unemployed)
Must be nice. I have kids and a wife and a house to support.
https://readworks.app/ is an app to do research within PDF collections mainly for scientists or in the legal field. It’s an oss project I’ve worked on the last two years.
I’ve figured out that I'm lacking on the marketing/sales side and in developing successful strategies to gain visibility. So I'm actually enjoying the summer rather than coding at night and on weekends, but I still have plenty of ideas for how to develop it further and better assist analytical reading.
I like it. Looks really useful.
Thank you very much. In case you have ideas feel free to leave an issue on GitHub https://github.com/read-works/readworks
I'm building an Electron app that integrates with Shopify and QuickBooks Online to streamline operations for artist-led product businesses. As a jewelry maker, I'm automating the spreadsheets and photo archives I rely on to run my studio. I want a single source of truth for all the data scattered locally and across platforms: tracking material prices, calculating costs and labor, analyzing marketing ROI, and accurately updating cost of goods sold when a piece sells.
Working on a silly side project called SinkedIn — a parody of LinkedIn but just for posting failures, screwups, and embarrassing moments. Staging is live here: https://sinkedin-staging.vercel.app/ and GitHub repo is: https://github.com/Preet-Sojitra/sinkedin. Pushing to production soon. UI is rough, I’m not a frontend person — bear with me! All sorts of contributions are welcome.
similar to thedailywtf.com ?
Maybe. I actually wasn't aware of that site when I started working on this. I'm thinking of giving SinkedIn pure anti-LinkedIn vibes, but I still don't really have a proper vision of which direction to take it.
https://askcrystal.info/dashboard
We aggregated half a dozen plus disparate data sources to create a comprehensive infrastructure map of the PNW power grid. Our goal is to be able to query for and provide informed answers for grid operators, investors, and other energy adjacent businesses in the space.
(For reference): The PNW has the most abundant clean power in the US and is one of the markets with most opportunity as our consumption increases with AI.
Working on my startup: ProtoMatter
Automating Clean-room plant propagation using robots
There are about 2-3+ billion plants cloned in laboratory conditions per year, all of it done by hand. I am in the process of developing an MVP to automate this task while also having customer conversations to line up early adopters.
What I am struggling with is that I don't know if I should focus on developing the MVP which will cost 20k-40k & 4-6 months to develop or put in place a pilot program to get customers willing to buy the machine / pay up front before I start developing. Hardware startups are rough usually because their MVP takes so long to develop.
I am currently bootstrapping while pushing for more conversations, trying to do both at once. I could personally finance the venture, but it seems like a poor move to just take on all the risk myself. I am setting up conversations with a few VCs, but that is a month out.
I'm working on this full time at the moment. I have a couple people who I have talked to who could be co-founders but nothing has materialized yet. So I am just all over the place at this stage in the process.
I spoke to 4-5 potential customers, and 2-3 of them are 'interested' in what I have, but only in the 'validation' stage, which only comes after the huge personal investment on my end.
I recommend doing the pilot program. Are you sure there are customers willing to sign up for it?
Customers want a solution, but without actual proof of customers willing to pay or commit to anything, how can you truly know? Because of companies' awkward financing, the situation is very similar to:
- Assuming tractors don't exist: "Would people go through the effort to ship a tractor to their farm, learn how to use it, and either pay you to repair it or figure it out themselves... while also paying you per field plowed?"
Like, it seems like an obvious 'yes' but this is obviously my framing, not theirs, and maybe I am completely off base and what I am offering to them is truly a truck with a plow and not a tractor that they think will constantly get stuck in the mud.
Can you build a cheaper-shittier proof of concept? Software only digital proof of concept?
I can make a POC within 15-30 days.
The project is a bit rough because of the clean-room requirement: there needs to be substantial build and testing work to ensure a near-zero contamination rate.
I could build a POC or show some video of how the product would work. What would you want to do with this?
I'm working on my first hide-and-seek game[1], built using my own 2D game engine[2]. The game's theme is dark and mysterious, with various subtle references.
Aside from that, I’ve also made some sillier little games/demos[3].
There’s a computer with a classic BASIC interpreter written in Lua after the first level.
I've been working on https://stacks.camera - it's an idea about overlaying the previous picture when you're taking a photo so you can create a timelapse or animation.
For example, you can scroll through 60 pictures from my window https://stacks.camera/u/ben/89n1HJNT
Most of the challenges are around handling images & rendering, but I've also been playing with Passkey-only authentication which I'm finding really interesting.
Integrating molecular dynamics into my protein viewer: https://github.com/David-OConnor/daedalus
I'm working on inq - a real ink pen that writes on real paper while simultaneously digitizing everything you write. Specifically working on the software for our mobile and web apps.
Among other things, my team has implemented access-based sharing using web links, like Google Docs for real paper handwriting. And we've just launched Quin, our AI assistant for real paper handwriting. Super useful for getting help with math, language learning, looking up relevant facts, generating ideas, etc.
How does it work? Some IMU / accelerometer sensing?
I don't see any ink refills, when I run out do I have to buy a new $165 pen?
No, the pens take standard D1 refills and are easy to change - they'll be available soon in the shop there.
Oh that's quite a feature! I think a lot of people would love to customize their pen with different colors/types of ink that way, you should definitely add it somewhere in the description or to the FAQ
Broadly: Still forging ahead building a game server framework in Erlang. I've shifted my attention away from Godot integration (which AFAIK is still working) and toward LÖVE and Lua. Godot is great, but having to write GDScript on the client and Erlang on the backend has caused me many headaches in my game logic. My current goal is to have a beautiful, concurrent, Erlang-based control plane with Lua-based game logic running on both the server and the client.
To that end, I've most recently been hacking on Robert Virding's Luerl (https://github.com/rvirding/luerl), working to adapt the Lua test suite to chase down some small compatibility issues between PUC Lua and Luerl. While Lua is a lovely language, it would also be swell to get Fennel working under Luerl. I wrote a game for the LÖVE jam a few months ago in Fennel and it was a pleasant way to dip my toes into lisp-likes.
I've also been adding things to control plane software, Overworld, here and there: https://github.com/saltysystems/overworld Happily all of the Protobuf and ENet stuff that I've already built nicely carries over into the LÖVE world.
An audience-driven GenAI rom-com w/ Daily Episodes.
How We Met – https://how-we-met.c47.studio/
Each day, I create a new 30-second episode based on the plot direction voted on by the audience the day before.
I'm trying to see how far the latest Video GenAI can go with narrative content, especially episodics. I'm also curious what community-driven narratives look like!
For the past week, I've been tinkering mostly with Runway, Midjourney, and Suno for the video content. My co-creator vibe coded the platform on Lovable.
SynoPosture is a posture monitoring app that utilizes your AirPods Pro, AirPods Max, or Powerbeats Pro to track your head orientation in real-time.
Whenever your head tilts past a set threshold for too long, SynoPosture reminds you to sit up and gently protect your neck, intelligently and without spamming.
## Core Features
- Real-time pitch, roll, yaw tracking via `CMHeadphoneMotionManager`
- Smart posture alerts after 5–60 s in a poor position
- Fully customizable sensitivity (10°–100° thresholds)
- Live Activities & Dynamic Island: countdown ring when posture timer is active
- Apple Watch companion app: real-time sync, haptics, Digital Crown refresh
- Enterprise-grade Anti-Flood System: 30s cooldowns, freshness filtering, pending update queue
- Full Accessibility Support: VoiceOver, Dynamic Type, high contrast, motion reduction
- Fully localized: English, Spanish, Arabic, Japanese, and more …
- Runs perfectly in the background or after force quit (via `BGTaskScheduler` + state persistence)
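Under the hood the alerting boils down to a dwell-time threshold plus a cooldown. A simplified sketch of that logic (the app itself is Swift on top of CMHeadphoneMotionManager; the numbers below are illustrative defaults, not the shipped ones):

    import time

    PITCH_THRESHOLD_DEG = 30   # forward tilt beyond this counts as poor posture
    DWELL_SECONDS = 20         # how long you must stay there before an alert
    COOLDOWN_SECONDS = 30      # minimum gap between alerts (anti-flood)

    bad_since = None
    last_alert = 0.0

    def on_motion_sample(pitch_deg, now=None):
        """Feed in every head-orientation sample; returns True when an alert should fire."""
        global bad_since, last_alert
        now = time.monotonic() if now is None else now
        if pitch_deg < PITCH_THRESHOLD_DEG:
            bad_since = None        # posture recovered, reset the dwell timer
            return False
        if bad_since is None:
            bad_since = now
        if now - bad_since >= DWELL_SECONDS and now - last_alert >= COOLDOWN_SECONDS:
            last_alert = now
            return True
        return False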
TestFlight Invite Link:
More and more people are becoming cognizant of their alcohol consumption, and like calories, it can be hard to eyeball a drink against the "standard unit" (14 g of alcohol).
I built an iOS app (https://quenchai.app) that uses a carefully constructed multimodal LLM workflow to convert photos into standard drinks and track consumption over time.
Did you know a standard margarita is ~2.5 standard drinks and a light beer is ~0.8?
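The LLM's job is estimating volumes and ABVs from the photo; the arithmetic after that is simple. A sketch (the margarita pour below is an illustrative bar recipe, not app output):

    # standard drinks = grams of ethanol / 14; ethanol is ~0.789 g/mL
    def standard_drinks(pours):
        """pours: list of (volume_ml, abv) for everything in the glass."""
        grams = sum(ml * abv * 0.789 for ml, abv in pours)
        return grams / 14

    light_beer = [(355, 0.042)]              # 12 oz can at 4.2% ABV
    margarita = [(90, 0.40), (30, 0.30)]     # ~3 oz tequila + 1 oz orange liqueur
    print(round(standard_drinks(light_beer), 1))   # ~0.8
    print(round(standard_drinks(margarita), 1))    # ~2.5 with this pour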
Why not just a barcode scanner? And a standard lookup for mixed drinks.
I don't feel like this needed an AI solution.
Why not click on their link and see they are talking about drinks served in bars?
Sooooooo mixed drinks..?
It's wild to me how you jumped down my throat about not looking into his product enough when you didn't even read my entire comment.
I've been digging into WebRTC to understand how it works under the hood. My goal is to eventually build a lightweight media server with SFU capabilities — but focusing on the protocol level, not just using libraries.
To get there, I'm breaking it down into smaller projects. The first one: a basic WebSocket signaling server written in Rust (based on RFC 6455). I'm also learning Rust, so this was a good way to build something real while figuring things out.
On the frontend, I used Angular and just the RTCPeerConnection API — no external WebRTC libs. The focus for now is just signaling: how peers connect, exchange offers/answers, and so on. No media or security handling yet — that's the next step.
Here’s a short demo video on YouTube https://www.youtube.com/watch?v=V_qdW2JchbU It’s not production-ready yet. Right now, each WebSocket connection spins up a new thread, but I plan to rework that using something like Tokio for better performance.
Curious to hear thoughts or suggestions.
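For anyone unfamiliar with the signaling part: stripped of the Rust and RFC 6455 details, it's essentially a relay that forwards offers, answers, and ICE candidates between peers in the same room. A rough sketch of that idea (in Python with the third-party websockets package, purely to show the shape, not my implementation):

    import asyncio
    import json
    import websockets  # third-party package; recent versions accept a single-argument handler

    rooms = {}  # room id -> set of connected sockets

    async def handler(ws):
        room = None
        try:
            async for raw in ws:
                msg = json.loads(raw)
                if msg.get("type") == "join":
                    room = msg["room"]
                    rooms.setdefault(room, set()).add(ws)
                elif room:
                    # relay offers, answers, and ICE candidates to the other peer(s)
                    for peer in rooms[room]:
                        if peer is not ws:
                            await peer.send(raw)
        finally:
            if room:
                rooms[room].discard(ws)

    async def main():
        async with websockets.serve(handler, "localhost", 8765):
            await asyncio.Future()  # run forever

    asyncio.run(main())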
Compiler back-end for WASM and more, but with the core at a slightly lower abstraction level than WASM and with a somewhat novel ABI.
To abstract around register file differences in different ISAs, I'm using SSA-form with spilling to a separate "safe stack". Enforces code-pointer integrity for security's sake (not unlike WASM) but extended also to virtual method tables.
"Partial-ISA migration" allows a program to run on multiple cores with slightly different ISA extensions. "Build-migration" is migration to another build of the same program in the same address space: Instead of trying to debug an optimised program, you would migrate it to a "debug-build" to attach a debugger. Or you could run a profiling build, compile a new build using the result and then migrate the running program to the optimised build: something that previously only JIT-compilers have done AFAIK.
I'm out of the research stage and at the stage of writing the first iteration of the main passes of the compiler, but now and then I've had to back-track and reread a paper on a compiler algorithm or refine the spec. It has taken a few years, and I expect it to take a few years more.
https://github.com/stravu/crystal
Crystal is a re-imagining of what an IDE means when AI drives development. Traditional IDEs are designed for deep focus on one task at a time, but that falls apart when you have to wait 10-20 minutes for an agentic task to run. Crystal lets you manage multiple instances of Claude Code so you can inspect/test the results of one while waiting for the others to finish.
Among many other things, I'm formatting many of my recipes and working on generating LaTeX to make a physical recipe book.
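The generation side is mostly templating: each recipe is structured data that gets rendered into a LaTeX section. Something in this spirit (a toy sketch; the field names and sample recipe are placeholders, not my actual format):

    RECIPE_TEMPLATE = r"""
    \section*{{{title}}}
    \begin{{itemize}}
    {ingredients}
    \end{{itemize}}
    {steps}
    """

    def recipe_to_latex(recipe):
        ingredients = "\n".join(r"  \item " + item for item in recipe["ingredients"])
        steps = "\n\n".join(recipe["steps"])
        return RECIPE_TEMPLATE.format(title=recipe["title"], ingredients=ingredients, steps=steps)

    print(recipe_to_latex({
        "title": "Sugar Cookies",
        "ingredients": ["2 cups flour", "1 cup sugar", "1/2 cup butter", "1 egg"],
        "steps": ["Cream the butter and sugar, then mix in the egg.",
                  "Fold in the flour, roll out, cut, and bake at 350F until golden."],
    }))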
My son has inherited my love of cooking and baking, so we'll refine the book, add comments and photos, and eventually print and bind copies for our family and friends.
I also am hoping to laser engrave some old cookie sheets with one of my grandma's hand-written recipes. The problem I have is that it's rather faded, and I don't know yet how to make it pop for a good contrast.
A few things!
1. After hearing Cell by Pannotia, I became obsessed with trying my hand at making a bit of electronic music. I have an Arturia KeyStep 32, a Korg NTS-1, and a Korg SQD-1 to mess around with, but I'd really love to learn how to capture the sound of Pannotia's album, since it speaks to me on a visceral level (album link for the curious: https://pannotia.bandcamp.com/album/cell)
2. Turning some old telephones into fun "audio guestbooks", have some additional features lined up that I am going to add (just waiting on parts to arrive), trying to improve a bit on the ones shown in this excellent video: https://www.youtube.com/watch?v=dI6ielrP1SE
3. Managed to get a blog post up recently. My work is not exactly what I would call "HN worthy" but if you need a laugh or some decent toilet reading, it probably qualifies (my blog: https://futz.tech/)
I love these threads. So many people working on so many different and interesting things. Renews my hope for the future, a bit.
I’ve been working on Lexa ( https://cerevox.ai/ ) — a document parser that reduces token count in embedding chunks by up to 50% while preserving meaning, making it easier to build high-performing RAG agents.
The idea came from building a personal finance chatbot and hitting token limits fast — especially with PDFs and earnings reports full of dense, noisy tables. Our chunks were bloated with irrelevant data, which killed retrieval quality. So we built Lexa to clean and restructure documents: it clusters context intelligently and removes any spacing, delimiters, or artifacts that don’t add semantic value. The result is cleaner, more compact embeddings — and better answers from your LLM.
Next up, I’m exploring ways to let users fine-tune the parsing logic — using templates, heuristics, or custom rules through UI — to adapt Lexa to their domain-specific needs.
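To make the "removes spacing, delimiters, or artifacts" part concrete, the simplest version of that idea looks roughly like this (an illustrative sketch, not Lexa's actual pipeline; token counting here uses OpenAI's tiktoken):

    import re
    import tiktoken  # pip install tiktoken

    enc = tiktoken.get_encoding("cl100k_base")

    def clean_chunk(text):
        text = re.sub(r"[ \t]+", " ", text)                 # collapse runs of spaces/tabs
        text = re.sub(r"\n{3,}", "\n\n", text)               # cap consecutive blank lines
        text = re.sub(r"([.\-_=|]){4,}", r"\1\1\1", text)    # shrink dot leaders / table rules
        return text.strip()

    raw = "Revenue ....................................   $1,200\n\n\n\n|------|------|------|"
    cleaned = clean_chunk(raw)
    print(len(enc.encode(raw)), "->", len(enc.encode(cleaned)))

The real pipeline does much more (clustering related context, restructuring tables), but even naive normalization like this cuts a surprising number of wasted tokens from financial PDFs.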
I'm trying to create the best A/B test sample size & duration calculator: https://calculator.osc.garden/
It's free (https://github.com/welpo/ab-test-calculator), and it has no dependencies (vanilla JS + HTML + CSS).
Right now it only supports binary outcomes. Even with the current limitations, I feel it's way above many/most online calculators/planners.
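For binary outcomes the heart of it is the standard two-proportion power calculation. Roughly this, in Python rather than the calculator's vanilla JS (simplified relative to what the calculator actually does):

    from math import ceil
    from statistics import NormalDist

    def per_group_sample_size(baseline, relative_mde, alpha=0.05, power=0.80):
        """Per-group n for a two-sided two-proportion z-test."""
        p1 = baseline
        p2 = baseline * (1 + relative_mde)
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
        z_beta = NormalDist().inv_cdf(power)
        p_bar = (p1 + p2) / 2
        n = ((z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
              + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
             / (p2 - p1) ** 2)
        return ceil(n)

    # e.g. 5% baseline conversion rate, detect a +10% relative lift
    print(per_group_sample_size(0.05, 0.10))

Duration then falls out of dividing the total sample size by your expected daily traffic, at least in the simplest case.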
I'm writing tests, fixing bugs, and adding features to improve the quality of a piece of financial software that transfers certain financial data on a special private network. It's way less fancy than it sounds, but I'm enjoying improving the tests and adding important security and legal compliance features. Knowing that others will depend on my hard work to keep their business financial records straight is a great reward, and I am taking my responsibility seriously.
I'm also working on learning about building software with LLMs, specifically I am building a small personal project that will allow me to experiment with them using measurable hypotheses and theories, rather than just tweaking a prompt a bunch and guessing when it is working the best. I know others have done this, but I am building it from the ground up because I'm using it as a learning experience.
I plan to take my experimentation platform and build a small "personal agent" software package to run on my own computer, again building from scratch for my own learning process, that will do small things for me like researching something and writing a report. I don't expect anything too useful to come out of it, since I am using 1.7B/4B models on a MacBook Air M2 (later I might use my 3080 but that won't be much improvement), but it will be interesting to build the architectural stuff even if the agents are effectively just useless cycle-wasters.
I’m exploring two different applications of AI for education and skill-building:
1. Open-Source AI Curriculum Generator (an OSS MathAcademy alternative for other subjects). Think MathAcademy meets GitHub: an AI system that generates complete computer science curricula with prerequisites, interactive lessons, quizzes, and progression paths. The twist: everything is human-reviewed and open-sourced for community auditing. Starting with an undergrad CS foundation, then branching into specializations (web dev, mobile, backend, AI, systems programming).
The goal is serving self-learners who want structured, rigorous CS education outside traditional institutions. AI handles the heavy lifting of curriculum design and personalization, while human experts ensure quality and accuracy.
2. Computational Astrology as an AI Agent Testbed. For learning production-grade AI agents, I’m building a system that handles Indian astrology calculations. Despite the domain’s questionable validity, it’s surprisingly well-suited for AI: complex rule systems, computational algorithms from classical texts, and intricate overlapping interpretations - perfect for testing RAG + MCP tool architectures.
It’s purely a technical exercise to understand agent orchestration, knowledge retrieval, and multi-step reasoning in a domain with well-defined (if arcane) computational rules.
- Has anyone tackled AI-generated curricula? What are the gotchas?
- Any interest in either as open-source projects?
> everything is human-reviewed and open-sourced for community auditing
2 projects worth checking out here: https://github.com/kamranahmedse/developer-roadmap (open-sourced roadmaps, no course content) and also https://github.com/ossu for more college curricula level (with references to outside courses).
I've been personally working on AI-generated courses for a couple of months (I'll probably open-source it in 1–3 months). The trickiest part, which I haven't figured out yet, is how to build a map of someone's existing knowledge so I can branch out from it. Things like "has a CS degree" or "worked as a frontend dev" are a good starting point, but how do you go from there?
I really like how Squirrel AI (EdTech Company) breaks things down — they split subjects into thousands of tiny “knowledge points.” Each one is basically a simple yes/no check: Do I know this or not? The idea makes sense, but actually pulling it off is tough. Mapping out all those knowledge points is a huge task. I’m working on it now, but this part MUST be open source
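The data model for knowledge points is tiny; the hard part is filling it in and placing a learner on it. A toy sketch of the idea (all names made up, nothing to do with Squirrel AI's actual system):

    from dataclasses import dataclass, field

    @dataclass
    class KnowledgePoint:
        id: str
        check: str                      # the yes/no question: "Can you ... ?"
        prerequisites: list = field(default_factory=list)

    points = {
        "variables": KnowledgePoint("variables", "Can you declare and assign a variable?"),
        "loops": KnowledgePoint("loops", "Can you write a for loop?", ["variables"]),
        "functions": KnowledgePoint("functions", "Can you define and call a function?", ["variables"]),
        "recursion": KnowledgePoint("recursion", "Can you write a recursive function?", ["functions", "loops"]),
    }

    def next_to_learn(points, known):
        """Points the learner doesn't know yet but whose prerequisites are all known."""
        return [p.id for p in points.values()
                if p.id not in known and all(pre in known for pre in p.prerequisites)]

    print(next_to_learn(points, known={"variables"}))  # -> ['loops', 'functions']

Scaling that from four toy nodes to thousands of audited knowledge points is exactly the part that needs to be open source.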
btw, feel free to email me to bounce ideas or such (it's in my bio)
As a European missing a managed hosting solution, a buddy of mine and I are building an alternative: https://ploi.cloud
The goal is quite simple: allow developers to host their applications with easy, straightforward pricing. We are about to launch very soon. Everything is built on Laravel/PHP.
We are open to beta testers, so if you'd like to test this, please drop me an email (it's in my profile).
Had a similar idea. Wishing you both good luck. Europe needs more hosting solutions.
I've been off work for two weeks to recover from surgery, and have been playing with a couple projects throughout the day between rest and physical therapy:
- A home-rolled router/firewall: Using yocto to create a distribution for a router/firewall for my home network. It started as an exercise in wanting to have more control over the security of my home network, as well as see how nice of a UI/UX I can tease out of an LLM. It's also part of a (seemingly never ending) consolidation of homelab services.
- A SNES reverse-engineering setup: A nephew of mine is starting to get into video games, beginning with a SNES, but his system broke. I'm helping repair the console, but I'm also trying to set up an effective "LLM + Ghidra + SNES emulator + image-generation AI + Aseprite plugin" workflow so he can swap sprites and text in games, adding some creativity and learning to the experience.
- A personal assistant system: Experimenting with agents to create a personal assistant for our house, and seeing to what extent the agents can be helpful and how much hardware is required to run something like that in-house.
- aztui: A TUI for exploring and interacting with Azure resources. I'd like to add some caching/pre-fetching logic to make the interaction with the interface snappier (one of the main motivators to create it).
I've been using GPT pretty heavily throughout, and it has been a lot of fun both using it, and spending some dedicated time looking at the models themselves along with the frameworks that support running and integrating them.
Finally cleaned up a free online Python course: https://proficientpython.com/
I wrote the articles/exercises/projects a few years ago, but now I've made interactive coding and quiz widgets, using Pyodide, Lit, web workers, etc. All open source: https://github.com/pamelafox/proficient-python
JSON API to integrate with QuickBooks Enterprise / QuickBooks Desktop https://qubesync.com
Spent 14 years slogging through a custom implementation with my previous company, and didn't want my pain and suffering to go to waste. Just spent a few hours yesterday to replace that app's integration with my new api and got a pretty good diff:
117 files changed, 258 insertions(+), 10032 deletions(-)
I am working on building Flexprice (https://flexprice.io/), an open-source monetization platform for AI and agentic companies.
This week, we’re doing a 5-day launch week, where we’re shipping a new set of billing features every day. Github link: https://github.com/flexprice/flexprice
I just finished playing with my Shimano Di2 groupset and the e-Tube app. Last year researchers revealed that a simple replay attack made it possible to shift gears on someone else's bicycle. My bike was delivered with updated firmware that is no longer vulnerable, so I had to find a way to downgrade it. The e-Tube app only allows updating the bike, but it detects root, emulators, frida-server, or a modified APK, and then crashes. I had to find a way to circumvent that and use an SDR to do the actual attack.
Would love to see a write up on this
You can find the writeup of how I downgraded the firmware here: https://grell.dev/blog/di2_downgrade
The actual attack is described here: https://grell.dev/blog/di2_attack