> this server is physically held by a long time contributor with a proven track record of securely hosting services. We can control it remotely, we know exactly where it is, and we know who has access.
I can’t be the only one who read this and had flashbacks to projects that fell apart because one person had the physical server in their basement or a rack at their workplace and it became a sticking point when an argument arose.
I know self-hosting is held as a point of pride by many, but in my experience you’re still better off putting lower cost hardware in a cheap colo with the contract going to the business entity which has defined ownership and procedures. Sending it over to a single member to put somewhere puts a lot of control into that one person’s domain.
I hope for the best for this team and I’m leaning toward believing that this person really is trusted and capable, but I would strongly recommend against these arrangements in any form in general.
EDIT: F-Droid received a $400,000 grant from a single source this year ( https://f-droid.org/2025/02/05/f-droid-awarded-otf-grant.htm... ) so now I’m even more confused about how they decided to hand this server to a single team member to host under undisclosed conditions instead of paying basic colocation expenses.
>We worked out a special arrangement so that this server is physically held by a long time contributor with a proven track record of securely hosting services.
Not clear if "contributor" is a person or an entity. The "hosting services" part makes it sound more like a company rather than a natural person.
> one person had the physical server in their basement
Unless you have even the faintest idea about how F-Droid does it, please stop spreading FUD. All the article says is that it is not a normal contract but a special arrangement where one or a select few have physical access. It could be in a locked basement, it could be in a sealed off cage in a data center, it could be a private research area at a university. We don't know.
A special arrangement with an academic institution providing data center services wouldn't be at all surprising, that has been the case for many large open source projects since long before the term was invented, including Linux, Debian and GNU itself.
The OSU Open Source Lab gives machines to groups in their datacenter: https://osuosl.org/services/hosting/
It has hosted quite a few famous services.
Which famous services?
I doubt OSU is going to host F-Droid. It doesn't even sound like F-Droid would want them to host it.
https://osuosl.org/blog/osl-moving-to-state-data-center/ mentions several major, famous services/projects that OSUOSL either has hosted in the past or is still hosting: kernel.org, Debian, Gentoo, Drupal, OpenWRT, OSGEO. https://osuosl.org/blog/osl-future/ also mentions hosting Mozilla at the time of the Firefox 1.0 release, and having previously hosted Apache Software Foundation. Closer in relevance to F-Droid, OSUOSL hosts the GitLab instance used by postmarketOS: https://postmarketos.org/blog/2024/10/14/gitlab-migration/
F-Droid is the best known non-corporate Android App Store... Why wouldn't they be willing to host it?
It's a critical load-bearing component of FOSS on Android.
There is nothing wrong with hosting prod at home. A free and open source project needs to be as sustainable and low maintenance as possible. Better to have a service up and running than down when the funds run out.
> I know self-hosting is held as a point of pride by many, but in my experience you’re still better off putting lower cost hardware in a cheap colo with the contract going to the business entity which has defined ownership and procedures. Sending it over to a single member to put somewhere puts a lot of control into that one person’s domain.
If they really want to run it out of a computer in their living room they should at least keep a couple servers on standby at different locations. Trusting a single person to manage the whole thing is fragile, but trusting a few people with boxes that are kept up to date seems pretty safe. What are the odds they'd all die together? Paying a colo or cloud provider is probably better if you care about more 9s of uptime, but do they really need it?
Yup. But the same can happen in shared hosting/colo/aws just as easily if only one person controls the keys to the kingdom. I know of at least a handful of open source projects that had to essentially start over because the leader went AWOL or a big fight happened.
That said, I still think that hosting a server in a member's house is a terrible decision for a project.
> if only one person controls the keys to the kingdom
True, which is why I said the important parts need to be held by the legal entity representing the organization. If one person tries to hold it hostage, it becomes a matter of demonstrating that person doesn’t legally have access any more.
I’ve also seen projects fall apart because they forgot to transfer some key element into the legal entity. A common one is the domain name, which might have been registered by one person and then just never transferred over. Nobody notices until that person has a falling out and starts holding the domain name hostage.
It doesn't say it's in someone's house. Maybe the guy runs a business doing this.
At least they know where it is. They can go knock on the door.
Is colocation not considered to be "self-hosting" in the cloud era?
> a $400,000 grant
IDK if they could bag this kind of grant every year, but isn't this the scale where cloud hosting starts to make sense?
$400k could get you 10 Dell PowerEdges, each with a 128-core CPU, 256GB of RAM and multiple terabytes of storage, _multiple times_ over. It easily covers two of these machines, and colocation space is about $2k per year.
Cloud hosting only makes sense at a very, very small scale, or absurdly large ones.
You have two options. Colo if you still want physical access to your devices, or cloud, where you get access to nothing beyond some online portals.
Colo is when you want to bring your own hardware, not when you want physical access to your devices. Many (most?) colo datacenters are still secure sites that you can't visit.
Every colo I've visited has a system for allowing physical access for our equipment, generally during specific operating hours with secure access card.
secure access cards, IDing, bag check, and a tech following you around. Of course cabinets are all locked up as well.
A lot of these places are like fortresses
I've only ever seen that at data centers that offer colo as more of a side service or cater to little guys who are coloing by the rack unit. All of the serious colocation services I've used or quoted from offer 24/7 site access.
Basically anywhere with cage or cabinet colocation is going to have site access, because those delineations only make sense to restrict on-site human access.
To be quite honest I've never seen a colo that didn't offer access at all. The cheapest locations may require a prearranged escort because they don't have any way to restrict access on the floors, but by the time you get to 1/4 rack scale you should expect 24/7 access as standard.
Same. We would colo and had racks behind chain link fencing that was locked behind cipher locks
I don't think so. I don't think anybody is going to hand off their server and ask someone else to hook it up. Also, you need access so you can troubleshoot hardware issues.
So that they can pay 100x more in expenses for... no gain? They would pay an arm and a leg just for traffic.
CloudFlare is free/cheap and hey presto, no servers to manage!
And when your Cloudflare site is down, most of the Internet is down too! There's no downside!
Counterpoint: that would require using CloudFlare.
That is, in my opinion, far superior to using a single server run by "someone".
I guess that is the beauty of opinions: they can be different from person to person. In my case, I would rather avoid CloudFlare if possible.
It's just a build server no? If that's the case it's not the end of the world.
Or does it also serve the APKs?
depending on how you view it, the build server _does_ serve the APKs, right?
400K would go -fast- if they stuck to a traditional colo setup. Donations like this are rare and it may be all they get for a decade.
Personally I would feel better about round robin across multiple maintainer-home-hosted machines.
> 400K would go -fast- if they stuck to a traditional colo setup.
I don’t know where you’re pricing colocation, but I could host a single server indefinitely from the interest alone on $400K at the (very nice) data centers I’ve used.
Colocation is not that expensive. I’m not understanding how you think $400K would disappear “fast” unless you think it’s thousands of dollars per month?
I, personally, have a cabinet in a colo. With $400k, I could host it at that datacentre on the income from a risk-free return alone, never touching the capital, with 10 GigE and 3 kW of power. If I can do it, they can do it.
Modern computers are super efficient. A 9755 has 128 cores and you can get it for cheap. If you've been doing this for a while you'd have gotten the RAM for cheap too.
If I, a normie, can have terabytes of RAM and hundreds of cores in a colo, I'm pretty sure they can unless they have some specific requests.
And dude, I'm in the Bay Area. Think about that. I'm in one of the highest cost localities and I can do this. I bet there are Colorado or Washington DCs that are even cheaper.
I too am in the Bay Area, and clearly I have been shopping at the wrong colos. I expected to find nothing with unlimited bandwidth for under $1k/mo, given past experience with what may have been higher-end DCs.
In any event if I was the volunteer sysadmin that had to babysit the box, I would rather have it at my home with business fiber where I am on premises most of the time because getting in and out of a colo is always a whole thing if their security is worth a damn.
Even given a frugal and accessible setup like that, I can imagine $400k lasting 5 years tops, especially if it's also paying for the volunteer's business fiber, and all the more so given that I expect some of it is meant to provide sustainable compensation to key team members as well. Every cent will count.
400k would last me 13 years for a rack, power and 10Gbit/s bandwidth at my colo place (Switzerland, traditionally high prices)
Yes, but that's not their only expense.
Stupid question from me: What are their other costs? I'm a total newbie about data center colo setups, but as I understand it, it includes power and internet access with ingress and egress. Are you thinking their egress will be very high, and thus they need to pay additional bandwidth charges?
Yes, but that’s not the last or only donation they’re receiving either.
Don't bet on receiving money in the future.
It's a community donation-supported project. That's kind of the whole deal.
Regardless, the ongoing interest on $400K alone would be enough to pay colo fees.
Since you've already done the math, what's the interest on $400k pay for the colo costs?
At a (fairly modest) 3.3%, it's about $1,100/month.
I don't know what kind of rates are available to non-profits, but with 400k in hand you can find nicer rates than 3.3 (as of today, at least).
that covers quite a few colo possibilities.
USD money market funds from Vanguard pay about 3.7% now. Personally, I would recommend a 50/50 split between a Bloomberg Agg bond ETF and a high-yield bond ETF. You can easily boost that yield by 100bps with a modest increase in risk.
Another thing overlooked in this debate: Data center costs normally increase at the rate of inflation. This is not included in most estimates. That said, I still agree with the broad sentiment here: 400K USD is plenty of money to run a colo server for 10+ years from the risk-free interest rate.
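To put the numbers in this sub-thread side by side, here is a minimal back-of-the-envelope sketch. The yield and the colo price are assumptions taken from the figures quoted above (roughly 3.3% and a few hundred dollars per month), not anything F-Droid has published:

```python
# Back-of-the-envelope version of the budget math being debated here.
# The yield and colo price are illustrative assumptions, not F-Droid's numbers.
principal = 400_000          # USD, the one-time grant
annual_yield = 0.033         # assumed ~3.3% risk-free return
colo_per_month = 500         # assumed price for a 1/4 cabinet, USD/month

interest_per_month = principal * annual_yield / 12
print(f"interest income: ~${interest_per_month:,.0f}/month")         # ~$1,100

# Even ignoring interest entirely and just drawing down the principal:
years_of_colo = principal / (colo_per_month * 12)
print(f"principal alone covers ~{years_of_colo:.0f} years of colo")  # ~67 years
```

Even with colo prices rising at inflation, the interest alone comfortably exceeds the assumed monthly bill, which is the point being made above.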
For reference, in the US at least, there was/is a company called Joes Data Center in KC who would colo a 1U for $30 or $40 a month. I'd used them for years before not needing it anymore, so not some fly-by-night company (despite the name).
At that rate, that would buy you nearly 1000 years of hosting.
Those prices are rock bottom! For that price, what do you get for (a) power budget, (b) Internet connectivity, (c) ingress and egress per month?
I Googled for that brand and got a few hits:
The homepage now redirects here: https://patmos.tech/
- https://inflect.com/building/1325-tracy-avenue-kansas-city/joes-datacenter/datacenter/joes-datacenter
- https://www.linkedin.com/company/joesdatacenter/
- https://www.facebook.com/joesdatacenter/
Another underappreciated point about that data center: it has an excellent geographical location to cover North America.
I was trying to avoid naming exact prices because it becomes argument fodder, but locally I can get good quality colo for $50/month and excellent quality colocation with high bandwidth and good interconnects for under $100 for 1U.
I really don’t know where the commenter above was getting the idea that $400K wouldn’t last very long
Alaska. Dollars per Mbit + reliable power in colo.
Joe's got bought out by Patmos.
The jury's still out on whether or not this is a good thing.
For a server? The going rate for a 1/4 cabinet is $300-500/month.
A full rack, 10 gigabits bandwidth and 1920W of power is available for as little as $800/month: https://1530swift.com/colocation.php
Of course you have to buy the switches and servers…
I think there are quite a few misconceptions about F-Droid in the comments:
- you can be your own F-Droid server
In fact it's basically a static HTTP(S) site that is generated from the list of .apk files and their metadata, so it really doesn't require much (see the sketch below).
I think what is concerning to people is that the most popular INSTANCE of F-Droid, the one configured by default when one downloads the F-Droid CLIENT, is "centralized", but again that's a misconception. It's only popular; it's not really central to F-Droid itself. Adding another repository, in F-Droid parlance, is just a simple option of changing or adding a URL pointing to more instances.
That being said, if anybody here would like to volunteer to provide a fallback for the build system of that popular instance, I imagine the F-Droid team would welcome that with open arms.
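To make the "basic static HTTP(S) server" point concrete: as I understand it, the fdroidserver tooling (`fdroid init` / `fdroid update`) generates the signed index files over a directory of APKs, and from there any static web server can host the repository. A minimal sketch using only the Python standard library; the directory name and port are placeholder assumptions, and in practice you would put this behind TLS:

```python
# Minimal sketch: once an F-Droid-style repo index has been generated
# (e.g. with `fdroid update`), hosting it is plain static file serving.
# REPO_DIR and PORT are arbitrary examples, not F-Droid's real setup.
import functools
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

REPO_DIR = "fdroid"   # contains repo/ with the APKs and generated index files
PORT = 8080

handler = functools.partial(SimpleHTTPRequestHandler, directory=REPO_DIR)
with ThreadingHTTPServer(("", PORT), handler) as httpd:
    print(f"Serving {REPO_DIR}/ on port {PORT}; "
          "add http://<host>:8080/repo as a repository in the F-Droid client")
    httpd.serve_forever()
```

The hard part, as this thread keeps circling back to, is the build server and the signing keys, not serving the repository itself.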
Context: "F-Droid build servers can't build modern Android apps due to outdated CPUs" (https://news.ycombinator.com/item?id=44884709)
Ugh. This 100% shows how janky and unmaintained their setup is.
All the hand waving and excuses around global supply chains, quotes, etc... it took them pretty long to acquire commodity hardware and shove it in a special someone's basement, and they're trying to make it seem like a good thing?
F-Droid is often discussed in the GrapheneOS community, the concerns around centralization and signing are valid.
I understand this is a volunteer effort, but it's not a good look.
> F-Droid is often discussed in the GrapheneOS community, the concerns around centralization and signing are valid.
Clearly the GrapheneOS community is clueless then.
You can host F-Droid yourself, which is the opposite of centralized. If the GrapheneOS community actually is concerned about centralization they can fork it themselves as well. Then we'll have two public repositories.
Furthermore, each author signs their own software, which again is the opposite of centralized. One authority signing everything would be centralized.
As someone that has run many volunteer open source communities and projects for more than 2 decades, I totally get how big "small" wins like this are.
The internet is run on binaries compiled in servers in random basements and you should be thankful for those basements because the corpos are never going to actually help fund any of it.
It's a shame mozilla wont step up to fund it. They've spunked way more money on way dumber things.
Imagine the good they could do if they didn't pay their CEO 6 million a year.
They'd probably burn it without much to show for, like the rest of their funds
"I understand this is a volunteer effort, but it's not a good look."
I would agree, that it is not a good look for this society, to lament so much about the big evil corporations and invest so little in the free alternatives.
You can't just host servers in your own basement! You need to pay out the ass to host servers in some big company's basement!
I don't have a problem with an open source project I use (and I do use F-Droid) hosting a server in a basement. I do have a problem with having the entire project hosted on one server in a basement, because it means that the entire project goes down if that basement gets flooded or the house burns down or the power goes out for an extended period of time, etc.
Having two servers in two basements not near each other would be good, having five would be better, and honestly paying money to put them in colo facilities to have more reliable power, cooling, etc. would be better still. Computer hardware is very cheap today and it doesn't cost that much money to get a substantial amount of redundancy, without being dependent on any single big company.
This sounds reasonable. But this is a build server, not the entire project infrastructure.
I bet the server should be quite powerful, with tons of CPU, RAM and SSD/NVMe to allow for fast builds. Memory of all kinds was getting more and more expensive this year, so the prolonged sourcing is understandable.
The trusted contributor, as the text says, is considered more trustworthy than an average colocation company. Maybe they have an adequate "basement", e.g. run their own colo company, or something.
It would be great to have a spare server, but likely it's not that simple, including the organization and the trust. A build server would be a very juicy attack target to clandestinely implant spyware.
What do you think would happen if that server went down? People can't get app updates, or install new ones. That is all. That is not critical.
They can then probably whip up a new hosted server to take over within a few days, at most. Big deal.
They are not hosting a critical service, and running on donations. They are doing everything right.
I concur, and given the amount of apps they build it makes sense to spend the money on a good build server to me, especially if it is someone with experience hosting trusted servers as mentioned as well as a contributor already. If people do not want to use it, the source code to build yourself is still available for the apps they supply.
> Computer hardware is very cheap today
As long as you don't need RAM or hard drives. It's getting more expensive all the time too. This isn't the ideal moment to replace a laptop let alone a server.
It's like ya'll are so eager to crap on a thing that you don't even read tfa.
> this server is physically held by a long time contributor with a proven track record of securely hosting services.
So you are assuming it's a rando's basement when they never said anything like that.
If their way of doing business is so offensive either don't use them, disrupt them or pitch in and help.
> I understand this is a volunteer effort, but it's not a good look.
What does make a "good look" for a volunteer project?
> this server is physically held by a long time contributor with a proven track record of securely hosting services.
This is effectively a rando's basement. It doesn't matter that they've been a contributor or whatever. Individuals change, relationships sour. Securely hosting how? By locking the front door? By being a random tech company in the midwest? Or by having proper access control?
As a little reminder, F-Droid has _all_ the signing keys on its build server. Compromising that is somewhere between "oh that's awful" and "stop the world". These builds go out as automatic updates too. So uh, yeah, I'd like it if it was hosted by someone serious and not my buddy joe who's a sysadmin don't worry
> What does make a "good look" for a volunteer project?
It's an open-source project. It should be... open. Not mysterious or secretive about overdue replacements of critical infrastructure.
Graphene is a great product but their incessant mud slinging at any service that isn't theirs is tiresome at best.
Some of their points are valid but way too often they're unable to accept that different services aren't always trying to solve the same problem.
"Nothing is ever good enough" (tm)
If I were running a volunteer project, I would be dumping thousands a month into top-tier hosting across multiple datacenters around the world with global failover.
the _if_ is doing a lot of heavy lifting there. You're free to complain about it but Fdroid has been running fine for years and I'd rather have a volunteer manage the servers than some big corporation
They quite notably haven't been running fine for years: https://news.ycombinator.com/item?id=44884709 Their recent public embarrassment resulting from having such an outdated build server is likely what triggered them to finally start the process of obtaining a replacement for their 12 year old server (that was apparently already 7 years old when they started using it?).
It's embarrassing that Google binaries don't even use runtime instruction selection.
Nah, if you actually read into what's available there, it's clear that the compilers have never implemented features to make this broadly usable. You only get runtime instruction selection if you've manually tagged each individual function that uses SIMD to be compiled with function multi-versioning, so that's only really useful for known hot spots that are intended to use autovectorization. If you just want to enable the latest SIMD across the whole program, GCC and clang can't automatically generate fallback versions of every function they end up deciding could use AVX or whatever.
The alternative is to make big changes to your build system and packaging to compile N different versions of the executable/library. There's no easy way to just add a compiler flag that means "use AVX-512 and generate SSE2 fallbacks where necessary".
The people that want to keep running new third-party binaries on 12+ year old CPUs might want to work with the compiler teams to make it feasible for those third parties to automatically generate the necessary fallback code paths. Otherwise, there will just be more and more instances of companies like Google deciding to start using the hardware features they've been deploying for 15+ years.
But you already know all that, since we discussed it four months ago. So why are you pretending like what you're asking for is easy when you know the tools that exist today aren't up to the task?
> commodity hardware
Apart from the "someone's basement", as objected to in this thread, it also doesn't say they acquired "commodity hardware"; I took it to suggest the opposite, presumably for good reason.
> it also doesn't say they acquired "commodity hardware"; I took it to suggest the opposite, presumably for good reason.
This seems entirely like wishful thinking. They were using a 12 year old server that was increasingly unfit for the day-to-day task of building Android applications. It doesn't seem like they were in a position to acquire and deploy any exotic hardware (except to the extent that really old hardware can be considered exotic and no longer a commodity). I'd be surprised if the new server is anything other than off the shelf x86 hardware, and if we're lucky then maybe they know how to do something useful with a TPM or other hardware root of trust to secure the OS they're running on this server and protect the keys they're signing builds with.
I'm just reading what was written, especially "the specific components we needed", and assuming they're not as incompetent as is being suggested, given they've served me well. Perhaps you haven't been tendering for server hardware recently, even bog-standard stuff, and seen the responses that even say they can't quote a fixed price currently. At least, that's in my part of the world, in an operation buying a good deal of hardware. We also have systems over ten years old running.
> shove it in a special someone's basement
They didn't say what conditions it's held in. You're just adding FUD, please stop. It could be under the bed, it could be in a professional server room of the company run by the mentioned contributor.
100%. Just as an example I have several racks at home, business fiber, battery backup, and a propane generator as a last resort. Also 4th amendment protections so no one gets access without me knowing about it. I host a lot of things at home and trust it more than any DC.
> Also 4th amendment protections so no one gets access without me knowing about it.
If there's ever a need for a warrant for any of the projects, the warrant would likely involve seizure of every computer and data storage device in the home. Without a 3rd party handling billing and resource allocation they can't tell which specific device contains the relevant data, so everything goes.
So having something hosted at home comes with downsides, too. Especially if you don't control all of the data that goes into the servers on your property.
Isn't a business line quite expensive to maintain per month along with a hefty upfront cost? For a smaller team with a tight budget, just going somewhere with all of that stuff included is probably cheaper and easier like a colo DC.
> Also 4th amendment protections so no one gets access without me knowing about it
laughs in FISA
Which one of those things do you think you can't get in a datacenter?
That's not the point. The point is that a "home" setup can basically replicate or exceed a "professional" setup when done right.
A home setup might be able to rival or beat an “edge” enterprise network closet.
It’s not going to even remotely rival a tier 3/4 data center in any way.
The physical security, infrastructure, and connectivity will never come close. E.g. nobody is doing full 2N electrical and environmental in their homelab. And they certainly aren’t building attack resistant perimeter fences and gates around their homes, unless they’re home labbing on a compound in a war torn country.
I read it a bit differently: you don't need to be a mega-corp with millions of servers to actually make a difference for the better. It really doesn't take much!
Also, even 12-year-old hardware is wicked fast.
The issue isn’t the hardware, it’s the fact that it’s hosted somewhere private in conditions they wont name under the control of a single member. Typically colo providers are used for this.
Is it one person? Is it an organization/professional company with close ties to F-Droid? There are a lot of worst-case assumptions in this thread.
Eh. It's just a different set of trade-offs unless you start doing things super-seriously like Let's Encrypt.
With F-Droid, their main strength has always been reproducible builds. We ideally just need to start hosting a second F-Droid server somewhere else and then compare the results.
Let's focus on how they have done so much with such simple hardware, rather then comparing them to companies that do so little with so much more.
I don't understand why governments haven't started to fund F-Droid, almost all govt. apps are open-source.
Countries which fear they could be cut off from the duopoly mobile ecosystem should be forcing android manufacturers to bundle in F-Droid; For the amount of nonsense regulations they force phone manufacturers to adhere to, bundling F-Droid wouldn't be that hard.
Google won't be happy, but anti-trust regulations would take care of it.
Because it's not their responsibility. Why should they care about this kind of stuff? Don't drop everything on governments.
A project like F-Droid is dumb to begin with where they're the one to build the apps.
> A project like F-Droid is dumb to begin with where they're the one to build the apps.
I heartily disagree. Linux distributions also build the packages themselves, and that adds a layer of trust.
It ensures that everything in the fdroid repo is free software, and can be self-built.
What did your local politician say when you wrote to them and suggested it?
(I've worked with several politicians. You'd be surprised what a well timed letter or meeting can achieve.)
Not much...
I wrote a few times to my local MPs ("député", as we call them in France). I usually got a response, though I suspect it was written by their secretary with no other consequence. In one case (related to privacy against surveillance), they raised a question in the congress, which had just a symbolic impact.
It may be different in other countries. In France, Parliament is de-facto a marginal power against a strong executive power. Even the legal terms are symptomatic of this situation: the government submits a "project of law" while MPs submit a "proposal of law" (which, for members of the governing party, is almost always written by the government then endorsed by some loyal MP).
I publish an app to the App Store, Google Play, and F-Droid. For years, F-Droid took absolute ages to reflect a new release.
People used to criticize the walled gardens for having capricious reviewers and slow review times, but I found F-Droid much more frustrating to get approval from and much slower to get a release out.
So this development is much appreciated. In fact I had an inkling that build times had improved recently when an update made it out to F-Droid in only a day or two.
Hmm:
“F-Droid is not hosted in just any data center where commodity hardware is managed by some unknown staff. We worked out a special arrangement so that this server is physically held by a long time contributor with a proven track record of securely hosting services. We can control it remotely, we know exactly where it is, and we know who has access.”
Yikes. They don't need a "special arrangement" for those requirements. This is the bare minimum at many professionally run colocation data centers. There is not a security requirement that can't be met by a data center -- being secure to customer requirements is a critical part of their business.
Maybe the person who wrote that is only familiar with web hosting services or colo-by-the-rack-unit type services where remote-hands services are more commonly relied on. But they don't need to use these services. They can easily get a locked cabinet (or even just a 1/4 cabinet) only they could access.
A super duper secure locked cabinet accessible only to them or anyone with a bolt cutter.
You want to host servers on your own hardware? Uh yikes. Let's unpack this. As a certified AWS Kubernetes professional time & money waster, I can say with authority that this goes against professional standards (?) and is therefore not a good look. Furthermore, I can confirm that this isn't it, chief.
Colocation is when you use your own hardware. That's what the word means.
And you're not going to even get close to the cabinet in a data center with a set of bolt cutters. But even if you did, you brought the wrong tool, because they're not padlocked.
Bolt cutters will probably cut through the cabinet door or side if you can find a spot to get them started and you have a lot of time.
Otoh, maybe you've got a cabinet in a DC with very secure locks from europe.... But all are keyed alike. Whoops.
A drill would be easier to bring in (especially if it just looks like a power screwdriver) and probably get in faster though. Drill around the locks/hinges until the door wiggles off.
I'd go with a drill -- but I'm not sure what possible threat vector would have access to the cabinet who would be able to get to the cabinet in any decent data center.
Because it's a secret, we don't know if it's mom's basement where the door doesn't really lock anyways, just pull it real hard, or if it's at Uncle Joey's with the compound and the man trap and laser sensors he bought at government auction through a buddy who really actually works at the CIA.
"F-Droid is not hosted in a data centre with proper procedures, access controls, and people whose jobs are on the line. Instead it's in some guy's bedroom."
Not reassuring.
It could just be a colo; there are still plenty of data centres around the globe that will sell you space in a shared rack with a certain power density per U of space. The list of people who can access that shared locked rack is likely a known quantity with most such organisations, and I know in the past we had some details of the people who were responsible for it.
In some respects, having your entire reputation on the line matters just as much. And sure, someone might have a server cage in their residence, or maybe they run their own small business and it's there. But the vagueness is troubling, I agree.
A picture of the "living conditions" for the server would go a long way.
Depends on the threat model, which one is worse.
State actor? Gets into data centre, or has to break into a privately owned apartment.
Criminal/3rd party state intelligence service? Could get into both, at a risk or with blackmail, threats, or violence.
Dumb accidents? Well, all buildings can burn or have a power outage.
> State actor? Gets into data centre, or has to break into a privately owned apartment.
I don’t think a state actor would actually break in to either in this case, but if they did then breaking into the private apartment would be a dream come true. Breaking into a data center requires coordination and ensuring a lot of people with access and visibility stay quiet. Breaking into someone’s apartment means waiting until they’re away from the premises for a while and then going in.
Getting a warrant for a private residence also would likely give them access to all electronic devices there as no 3rd party is keeping billing records of which hardware is used for the service.
> Dumb accidents? Well, all buildings can burn or have an power outage.
Data centers are built with redundant network connectivity, backup power, and fire suppression. Accidents can happen at both, but that’s not the question. The question is their relative frequency, which is where the data center is far superior.
>I don’t think a state actor would actually break in to either in this case
Read about the Jabber.ru/Hetzner incident: https://notes.valdikss.org.ru/jabber.ru-mitm/
>Breaking into a data center requires coordination and ensuring a lot of people with access and visibility stay quiet
Or just a warrant and a phone call to set up remote access? In the UK under RIPA you might not even need a warrant. In USA you can probably bribe someone to get a National Security Letter issued.
Depending on the sympathies of the hosting company's management you might be able to get access with promises.
I dare say F-Droid trust their friends/colleagues more than they trust randos at a hosting company.
As an F-Droid user, I think I might too? It's a tough call.
> Data centers are built with redundant network connectivity, backup power, and fire suppression. [...] The question is their relative frequency, which is where the data center is far superior.
Well, I remember one incident where a 'professional' data center burned down, including the backups.
https://en.wikipedia.org/wiki/OVHcloud#Incidents
I know no such incident for some basement hosting.
Doesn't mean much. I'm just a bit surprised that so many people are worried about the server location and no one has yet mentioned the quite spectacular OVH incident.
I'm not going to pretend datacenters are magical places immune to damage. I worked at a company where the 630 Third Street datacenter couldn't keep temperatures stable during a San Francisco heatwave and the Okex crypto exchange has experienced downtime because the Alibaba Zone C datacenter their matching engine is on experienced A/C failure. So it's not all magic, but if you didn't encounter home-lab failure it's because you did not sample the population appropriately.
https://www.reddit.com/r/homelab/comments/wvqxs7/my_homelab_...
I don't have a bone to pick here. If F-Droid wants to free-ball it I think that's fine. You can usually run things for max cheap by just sticking them on a residential Google Fiber line in one of the cheap power states and then just making sure your software can quickly be deployed elsewhere in times of outage. It's not a huge deal unless you need always-on.
But the arguments being made here are not correct.
Surely "Juan's home server in basement burns down" would make the headlines. You're totally right.
> The question is their relative frequency, which is where the data center is far superior.
as a year long f-droid user I can't complain
I think there are countless examples of worse failures by organisations that meet your criteria for far more valuable assets than some free apps.
The 'cloud' has come full circle
Eh...
The set of people who can maliciously modify it is the people who run f-droid, instead of the cloud provider and the people who run f-droid.
It'd be nice if we didn't have to trust the people who run f-droid, but given we do I see an argument that it's better for them to run the hardware so we only have to trust them and not someone else as well.
You actually do not have to trust the people who run f-droid for those apps whose maintainers enroll in reproducible builds and multi-party signing, which only f-droid supports unlike any alternatives.
That looks cool, which might just be the point of your comment, but I don't think it actually changes the argument here.
You still have to trust the app store to some extent. On first use, you're trusting f-droid to give you the copy of the app with appropriate signatures. Running in someone else's data-center still means you need to trust that data-center plus the people setting up the app store, instead of just the app store. It's just that a breach of trust is less consequential, since the attacker needs to catch the first install (of apps that even use that technology).
F-Droid makes the most sense when shipped as the system app store, along with pinned CA keychains, as CalyxOS did. Ideally F-Droid would be compiled from source and validated by the ROM devs.
The F-droid app itself can then verify signatures from both third party developers and first party builds on an f-droid machine.
For all its faults (of which there are many) it is still a leaps and bounds better trust story than say Google Play. Developers can only publish code, and optional signatures, but not binaries.
Combine that with distributed reproducible builds with signed evidence validated by the app and you end up not having to trust anything but the f-droid app itself on your device.
None of this mitigates the fact that a priori you don't know if you're being served the same package manifest/packages as everyone else - and as such you don't know how many signatures any given package you are installing should have.
Yes, theoretically you can personally rebuild every package and check hashes or whatever, but that's preventative steps that no reasonable threat model assumes you are doing.
Why have we normalized "app stores" that build software whose authors likely already provide packages themselves?
I've been using Obtainium more recently, and the idea is simple: a friendly UI that pulls packages directly from the original source. If I already trust the authors with the source code, then I'm inclined to trust them to provide safe binaries for me to use. Involving a middleman is just asking for trouble.
App stores should only be distributors of binaries uploaded and signed by the original authors. When they're also maintainers, it not only significantly increases their operational burden, but requires an additional layer of trust from users.
The cloud isn't the only other option, they could still own and run their own hardware but do it in a proper colocation datacenter.
I never questioned or thought twice about F-Droid's trustworthiness until I read that. It makes it sound like a very amateurish operation.
I had passively assumed something like this would be a Cloud VM + DB + buckets. The "hardware upgrade" they are talking about would have been a couple clicks to change the VM type, a total nothingburger. Now I can only imagine a janky setup in some random (to me) guy's closet.
In any case, I'm more curious to know exactly what kind of hardware is required for F-Droid; they didn't mention any specifics about CPU, memory, storage, etc.
For a single server why would you use cloud services rather than go the self-owned route?
A "single server" covers a pretty large range of scale, its more about how F-droid is used and perceived. Package repos are infrastructure, and reliability is important. A server behind someone's TV is much more susceptible to power outages, network issues, accidents, and tampering. Again, I don't know that's the case since they didn't really say anything specific.
> not hosted in just any data center where commodity hardware is managed by some unknown staff
I took this to mean it's not in a colo facility either, and assumed it meant someone's home, AKA residential power and internet.
The F-Droid repos are provided by redundant mirrors: https://f-droid.org/en/docs/Running_a_Mirror/
If this is the hidden master server that only the mirrors talk to, then its redundancy is largely irrelevant. Yes, if it's down, new packages can't be uploaded, but that doesn't affect downloads at all. We also know nothing about the backup setup they have.
A lot depends on the threat model they're operating under. If state-level actors and supply chain attacks are the primary threats, they may be better off having their system under the control of a few trusted contributors versus a large corporation that they have little to no influence over.
Ah. I took "not just any data center" to mean "in a specific co-location facility where they trust the person responsible for it".
I agree that "behind someone's TV" would be a terrible idea.
> It makes it sound like a very amateurish operation.
Wait until you find out how every major Linux distribution and the software that powers the internet are maintained. It is all a wildly under-funded shit show, and yet we do it anyway because letting the corpos run it all is even worse.
What do you mean by "major distribution"?
e.g. AS41231 has upstreams with Cogent, HE, Lumen, etc... they're definitely not running a shoestring operation in a basement. https://bgp.tools/as/41231
Modern machines reach really mental levels of performance when you think about it, and for a lot of small-scale things like F-Droid I doubt it takes much hardware to actually host it. A lot of it is going to be static files, so a basic web server could push through hundreds of thousands of requests and, even on a modest machine, saturate 10 Gbps, which I suspect is enough for what they do.
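A quick sanity check on the 10 Gbps claim, with an assumed average APK size (illustrative, not an F-Droid statistic):

```python
# Rough check of how many APK downloads saturate a 10 Gbps link.
# The average APK size is an assumption for illustration only.
link_gbps = 10
avg_apk_mb = 25                          # assumed average APK size

bytes_per_second = link_gbps * 1e9 / 8   # 10 Gbps ≈ 1.25 GB/s
downloads_per_second = bytes_per_second / (avg_apk_mb * 1e6)
print(f"~{downloads_per_second:.0f} full APK downloads per second")   # ~50/s
print(f"~{downloads_per_second * 86_400:,.0f} downloads per day")     # ~4.3 million/day
```

Which supports the point that the uplink, not the CPU, is the limiting factor for serving static files.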
This just reads to me like they have racked a box in a colo with a known person running the shared rack, rather than someone's basement, but who really knows? They aren't exactly handing out details.
This isn't about a server for hosting the website or package repo, it's about the server building all the packages.
Which is itself kind of suspicious - why can't they say "yeah we pay for Colo in such-and-such region" if that is what they are doing? Why should that be a secret?
Does anyone know what the server is? I don't see it on their site.
I'm curious why supply chain issues got in the way and why they couldn't just configure a Dell Poweredge and get delivery in a couple weeks.
I'm assuming they have some special requirements that weren't met by an off-the-shelf server, so I'm just curious what those requirements are.
So.. what kind of hardware did they buy?
Yeah kind of conspicuously absent! They said
> The previous server was 12 year old hardware
which is pretty mad. You can buy a second-hand system with tons of RAM and a 16-core Ryzen for like $400. 12-year-old hardware is only marginally faster than a RPi 5.
> 12-year old hardware is only marginally faster than a RPi 5.
A Dell R620 is over 12 years old and WAY faster than a RPi 5 though...
Sure, it'll be way less power efficient, but I'd definitely trust it to serve more concurrent users than a RPi.
> 12-year old hardware is only marginally faster than a RPi 5
My 14yo laptop-used-as-server disagrees. Also keep in mind that CPU speeds barely improved between about 2012 and 2017, and 2025 is again a lull https://www.cpubenchmark.net/year-on-year.html
I'm also factoring in the ability to use battery bypass on phones I buy now: because they are so powerful, I might want to use them as free, noiseless servers in the future. You can do a heck of a lot on phone hardware nowadays, paying next to nothing for power and adding no cost to your existing internet connection. An RPi 5 is in that same ballpark.
Plus the fact that it's been running for 5 years. Does that mean they bought 7 year old hardware back then? Or is that just when it was last restarted?
Unfortunately you can’t even get the RAM for $400 anymore.
I was able to find 2 x 16GB DDR4 for $150...
Building a budget AM4 system for roughly $500 would be within the realm of reason. ($150 mobo, $100 cpu, $150 RAM, that leaves $100 for storage, still likely need power and case.)
https://www.amazon.com/Timetec-Premium-PC4-19200-Unbuffered-...
https://www.amazon.com/MSI-MAG-B550-TOMAHAWK-Motherboard/dp/...
For a server that's replacing a 12 year old system, you don't need DDR5 and other bleeding edge hardware.
I don't think 32GB is going to be enough lol
Also, you would want ECC for something this important.
I seriously wonder if doing the same build twice, by two people in two locations, wouldn't provide the same benefit (and others besides) for less money.
(I might be spoiled by sane reproducible build systems. Maybe F-droid isn't.)
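For what it's worth, that "build it twice and compare" idea is essentially what reproducible-build verification boils down to. A minimal sketch of the comparison step; the file paths are placeholders, and real F-Droid verification also has to account for signatures added after the build, which this ignores:

```python
# Sketch of "build it twice, compare the results": two independently built
# artifacts should hash identically if the build is reproducible.
# Paths are placeholders; signature blocks added post-build are ignored here.
import hashlib
import sys

def sha256(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

a, b = sys.argv[1], sys.argv[2]   # e.g. builder-A/app.apk builder-B/app.apk
da, db = sha256(a), sha256(b)
print(da, a)
print(db, b)
print("REPRODUCIBLE" if da == db else "MISMATCH: builds differ")
```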
> not hosted in just any data center [...] a long time contributor with a proven track record of securely hosting services
This is ambiguous, it could mean either a contributor's rack in a colocation centre or their home lab in their basement. I'd like to think they meant the former, but I can't deny I understood the latter in my first read.
Also, no details on the hardware?
While I get their setup is amateurish, it's also a good reminder of how simple setups can be.
Saying this on HN, of course.
I think all the criticism of what F-Droid is doing here (or perceived as doing) reflects more on the ones criticising than the ones being criticised.
How many things have gone upside down even though all the "right" things were done (corporate governance, cloud-native deployment, automation, etc.)? The truth is that none of these processes actually make things more secure, and many projects went belly up despite following these kinds of recommendations.
That being said, I am grateful to F-Droid fighting the good fight. They are providing an invaluable service and I, for one, am even more grateful that they are doing it as uncompromisingly as possible (well into discomfort) according to their principles.
Not to mention this is a build server, its uptime isn't actually all that critical, assuming they then mirror the artifacts out from there.
Not to mention it also simplifies the security of controlling signing keys significantly.
Exactly: if you run out of money, processes mean jack shit.
Disappointed in HN because of these comments
Criticism is good when it comes with feasible suggestions or even a little help.
I wonder how many of the HN audience know someone, or a guy who knows a guy, who works in a data center and could manage the hardware; a simple email, message, or "hello there" could open up a new opportunity.
I wonder if anyone knows about Droid-ify. Is it a safe option, or is it better to stay away from it?
It showed up one day while I was searching for why F-Droid was always so extremely slow to update and download... after trying Droid-ify, that was never a problem any more; it clearly had much better connectivity (or simply fewer users?)
It's a different client using the same servers. The official F-Droid client is just super buggy.
Good. But I wish PostmarketOS supported more devices. On battery life, tons of kernel patches could be applied per device, plus a config package, in order to achieve the best settings. On software and security... you will find more malware in the Play Store than in the repos from PmOS/Alpine. I know it's not a 100% libre (FSF) system, but that's a much greater step towards freedom than Android, where you don't even own your device.
The issue with Linux-based phones is and remains apps. Waydroid works pretty well, but since you need to rely on it so much, you are better off using Graphene or Lineage in the first place.
But Android is a clusterfuck. Look at Lemuroid, a RetroArch-based emulator with a nice GUI. With the new SAF-related permissions you can't make the emulator work any more.
It's frankly embarrassing how many of the comments on this thread are some version of looking at the XKCD "dependency" meme and deciding the best course of action is to throw spitballs at the maintainers of the critical project holding everything else up.
F-Droid is nowhere near being a critical project holding Android up. The Play Store and the Play Services themselves are much more critical. Being open source doesn't make you immune from criticism for not following industry standards, or from being called out for poor security.
> The Play Store, and the Play Services themselves are much more critical.
Critical for serving malware and spyware to the masses, yes. GrapheneOS is based on Android and is far better than a Googled Android variant precisely because it is free of Google junk and OEM crapware.
The internet itself is also critical for serving malware and spyware, but that doesn't mean that the internet is garbage. Google invests much more into removing malicious apps from the app store than F-Droid does.
If you have nothing to install on your device, what's the point of being able to? For me, f-droid is a cornerstone in the android ecosystem. I could source apks elsewhere but it would be much more of a hassle and not necessarily have automatic updates. iOS would become a lot more attractive to me if Android didn't have the ecosystem that's centered around the open apps that you can find on f-droid
>If you have nothing to install on your device
>I could source apks elsewhere
Do you or do you not have apps you want to install?
At the very least, it's reasonable to expect the maintainers of such a project to be open about their situation when it's that precarious. Why wouldn't you take every opportunity to let your users and downstream projects know that the dependency you're providing is operating with no redundancy and barely enough resources to carry on when things aren't breaking? Why wouldn't they want to share with a highly technical audience any details about how their infrastructure operates?
> when it's that precarious
assumptions
They're building all the software on a single server, and at best their fallback is a 12 year old server they might be able to put back in production. I'm not making any unreasonable assumptions, and they're not being forthcoming with any reassuring details.
I think both of those POVs are wrong. The whole thing about F-Droid is that they have worked hard on not being a central point of trust and failure. The apps in their store are all in a repo (https://gitlab.com/fdroid/fdroiddata) and they are reproducibly built from source. You could replicate it with not too much effort, and clients just need to add the new repository.
Is it possible to add some kind of hardware detection to the build process of a submitted project and inspect the details?
Absolutely zero details on the old or new server.
Christ, comment sections like this make me never want to do anything that might gain widespread adoption, ever.
Brought to you by the helpful folks who managed to bully WinAmp into retreating from open source. Very productive.
A lot of people here are used to working for companies with a larger infrastructure budget.
so uhhh what are the specs of said server?
I'm glad we have a wing that's against the Gab app store. Can we have one that's for them, for balance?
I wish they would give more clarity on whether it's hosted in a professional facility or someone's bedroom, because just saying that "it's held by a long time contributor with a proven track record of securely hosting services" is not very reassuring.
> Another important part of this story is where the server lives and how it is managed. F-Droid is not hosted in just any data center where commodity hardware is managed by some unknown staff.
> The previous server was 12 year old hardware and had been running for about five years. In infrastructure terms, that is a lifetime. It served F-Droid well, but it was reaching the point where speed and maintenance overhead were becoming a daily burden.
Lol. If they're gonna use GitLab, just use a proper setup - a bigco is already in the critical path...