Google Antigravity just deleted the contents of whole drive(old.reddit.com)
245 points by tamnd 7 hours ago | 113 comments
  • liendolucas4 hours ago

    I love how a number crunching program can be deeply humanly "horrorized" and "sorry" for wiping out a drive. Those are still feelings reserved only for real human beings, and not for computer programs emitting garbage. This is vibe insulting to anyone who doesn't understand how "AI" works.

    I'm sorry for the person who lost their stuff, but this is a reminder that in 2025 you STILL need to know what you are doing, and if you don't, keep your hands away from the keyboard whenever valuable data is at stake.

    You simply don't vibe command a computer.

    • TriangleEdge17 minutes ago |parent

      > ... vibe insulting ...

      Modern lingo like this seems so unthoughtful to me. I am not old by any metric, but I feel so disconnected when I read things like this. I wanted to call it stupid, but I suppose it's more pleasing to 15-to-20-year-olds?

      • mort964 minutes ago |parent

        Unthoughtful towards whom? The machine..?

    • baxtr2 hours ago |parent

      Vibe command and get vibe deleted.

      • teekert41 minutes ago |parent

        Play vibe games, win vibe prizes.

        • bartread16 minutes ago |parent

          Vibe around and find out.

        • 63stack23 minutes ago |parent

          He got vibe checked.

      • insin28 minutes ago |parent

        Go vibe, lose drive

    • left-struck29 minutes ago |parent

      Eh, I think it depends on the context. For a production system of a business you’re working for, or anything where you have a professional responsibility, yeah, obviously don’t vibe command. But I’ve been able to both learn so much and do so much more in the world of self-hosting my own stuff at home ever since I started using LLMs.

    • camillomiller3 hours ago |parent

      Now, with this realization, assess the narrative that every AI company is pushing down our throats and tell me how in the world we got here. The reckoning can’t come soon enough.

      • qustrolabe2 hours ago |parent

        What narrative? I'm too deep in it all to understand what narrative is being pushed onto me?

        • robot-wrangler27 minutes ago |parent

          We're all too deep! You could even say that we're fully immersed in the likely scenario. Fellow humans are gathered here and presently tackling a very pointed question, staring at a situation, and even zeroing in on a critical question. We're investigating a potential misfire.

        • camillomiller2 hours ago |parent

          No, it wasn't directed at anyone in particular. More of an impersonal "you". It was just a comment against the AI inevitabilism that has profoundly polluted the tech discourse.

    • Kirth3 hours ago |parent

      This is akin to a psychopath telling you they're "sorry" (or "sorry you feel that way" :v) when they feel that's what they should be telling you. As with anything LLM, there may or may not be any real truth backing whatever is communicated back to the user.

      • marmalade24133 hours ago |parent

        It's not akin to a psychopath telling you they're sorry. In the space of intelligent minds, if neurotypical and psychopath minds are two grains of sand next to each other on a beach then an artificially intelligent mind is more likely a piece of space dust on the other side of the galaxy.

        • Eisenstein2 hours ago |parent

          According to what, exactly? How did you come up with that analogy?

          • baq2 hours ago |parent

            Start with "LLMs are not humans, but they’re obviously not ‘not intelligent’ in some sense" and pick the wildest difference that comes to mind. Not OP, but it makes perfect sense to me.

            • nosianuan hour ago |parent

              I think a good reminder for many users is that LLMs are not based on analyzing or copying human thought (#), but on analyzing human written text communication.

              --

              (#) Human thought is based on real world sensor data first of all. Human words have invisible depth behind them based on accumulated life experience of the person. So two people using the same words may have very different thoughts underneath them. Somebody having only text book knowledge and somebody having done a thing in practice for a long time may use the same words, but underneath there is a lot more going on for the latter person. We can see this expressed in the common bell curve meme -- https://www.hopefulmons.com/p/the-iq-bell-curve-meme -- While it seems to be about IQ, it really is about experience. Experience in turn is mostly physical, based on our physical sensors and physical actions. Even when we just "think", it is based on the underlying physical experiences. That is why many of our internal metaphors even for purely abstract ideas are still based on physical concepts, such as space.

          • oskarkk2 hours ago |parent

            Isn't it obvious that the way AI works and "thinks" is completely different from how humans think? Not sure what particular source could be given for that claim.

            • seanhunter2 hours ago |parent

              No source could be given because it’s total nonsense. What happened is not in any way akin to a psychopath doing anything. It is a machine learning function that has trained on a corpus of documents to optimise performance on two tasks - first a sentence completion task, then an instruction following task.

              • oskarkkan hour ago |parent

                I think that's more or less what marmalade2413 was saying and I agree with that. AI is not comparable to humans, especially today's AI, but I think future actual AI won't be either.

      • lazidean hour ago |parent

        It’s just a computer outputting the next series of plausible text from its training corpus, based on the input and context at the time.

        What you’re saying is so far from what is happening, it isn’t even wrong.

      • BoredPositron3 hours ago |parent

        So if you make a mistake and say sorry you are also a psychopath?

        • ludwik3 hours ago |parent

          I think the point of comparison (whether I agree with it or not) is someone (or something) that is unable to feel remorse saying “I’m sorry” because they recognize that’s what you’re supposed to do in that situation, regardless of their internal feelings. That doesn’t mean everyone who says “sorry” is a psychopath.

          • BoredPositron3 hours ago |parent

            We are talking about an LLM: it does what it has learned. Giving it human tics or characteristics when the response makes sense (i.e. saying sorry) is a user problem.

            • ludwik2 hours ago |parent

              Okay? I specifically responded to your comment that the parent comment implied "if you make a mistake and say sorry you are also a psychopath", which clearly wasn’t the case. I don’t get what your response has to do with that.

        • pyrale2 hours ago |parent

          No, the point is that saying sorry because you're genuinely sorry is different from saying sorry because you expect that's what the other person wants to hear. Everybody does that sometimes but doing it every time is an issue.

          In the case of LLMs, they are basically trained to output what they predict a human would say; there is no further meaning to the program outputting "sorry" than that.

          I don't think the comparison with people with psychopathy should be pushed further than this specific aspect.

          • BoredPositronan hour ago |parent

            You provided the logical explanation of why the model acts like it does. At the moment it's nothing more and nothing less. Expected behavior.

            • lazide43 minutes ago |parent

              Notably, if we look at this abstractly/mechanically, psychopaths (and to some extent sociopaths) do study and mimic ‘normal’ human behavior (and even the appearance of specific emotions) to both fit in, and to get what they want.

              So while the internals are very different (LLM model weight stuff vs human thinking), the mechanical output can actually appear/be similar in some ways.

              Which is a bit scary, now that I think about it.

        • camillomiller3 hours ago |parent

          Are you smart people all suddenly imbeciles when it comes to AI, or is this purposeful gaslighting because you’re invested in the Ponzi scheme? This is a purely logical problem. Comments like this completely disregard the fallacy of comparing humans to AI as if complete parity had been achieved. Also, the way these comments disregard human nature is just so profoundly misanthropic that it sickens me.

          • BoredPositron3 hours ago |parent

            No, but the conclusions in this thread are hilarious. We know why it says sorry: because that's what it learned to do in a situation like that. People who feel mocked, or who call an LLM a psychopath in a case like that, don't seem to understand the technology either.

            • camillomiller2 hours ago |parent

              I agree, "psychopath" is the wrong term. It refers to an entity with a psyche, which the illness affects. That said, I do believe the people who decided to have it behave like this for the purpose of its commercial success are indeed the pathological individuals. I do believe there is currently a wave of collective psychopathology that has taken over Silicon Valley, with the reinforcement that only a successful community backed by a lot of money can give you.

  • modernerd2 hours ago

    IDE = “I’ll delete everything”

    …at least if you let these things autopilot your machine.

    I haven’t seen a great solution to this from the new wave of agentic IDEs, at least not one that protects users who won’t read, understand, and manually approve every command.

    Education could help, both in encouraging people to understand what they’re doing and in being much clearer that turning on “Turbo” or “YOLO” modes risks things like full-disk deletion (and worse when access to prod systems is involved).

    Even the name “Turbo” feels irresponsible because it focuses on the benefits rather than the risks. “Risky” or “Danger” mode would be more accurate, even if it’s a hard sell to the average Google PM.

    “I toggled Danger mode and clicked ‘yes I understand that this could destroy everything I know and love’ and clicked ‘yes, I’m sure I’m sure’ and now my drive is empty, how could I possibly have known it was dangerous” seems less likely to appear on Reddit.

    • kahnclusions28 minutes ago |parent

      I don’t think there is a solution. It’s the way LLMs work at a fundamental level.

      It’s for a similar reason that they can never be trusted to handle user input.

      They are probabilistic generators and have no real delineation between system instructions and user input.

      It’s like I wrote a JavaScript function where I concatenated the function parameters together with the function body, passed it to eval() and said YOLO.
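
      A toy version of the same failure mode in shell terms (made-up strings; the "payload" is just an echo here):

        user_input='notes.txt"; echo "...and an injected rm -rf would run right here"; echo "'
        eval "touch \"$user_input\""
        # expands to: touch "notes.txt"; echo "...and an injected rm -rf would run right here"; echo ""

      Nothing in the concatenated string marks where the trusted part ends and the untrusted part begins, which is roughly the position an LLM is in with system prompt vs. user input.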

      • viraptor7 minutes ago |parent

        > I don’t think there is a solution.

        Sandboxing. The LLM shouldn't be able to run actions affecting anything outside of your project, and ideally the results should auto-commit somewhere outside of that directory. Then you can YOLO as much as you want.
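
        On Linux, something as simple as a bubblewrap wrapper gets most of the way there. A rough, untested sketch -- `agent` stands in for whichever CLI you actually run:

          # whole filesystem read-only, only the project dir (and a scratch /tmp) writable
          bwrap \
            --ro-bind / / \
            --bind "$PWD" "$PWD" \
            --dev /dev \
            --proc /proc \
            --tmpfs /tmp \
            --unshare-all \
            --share-net \
            --die-with-parent \
            agent

        Network stays on so the agent can reach the model API, but a stray rm can't touch anything outside the project.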

  • averageRoyalty30 minutes ago

    The most concerning part is that people are surprised. Antigravity is great so far, I've found, but it's absolutely running on a VM in an isolated VLAN. Why would anyone give a black box command-line access on an important machine? Imagine acting irresponsibly with a circular saw and being shocked that somebody lost a finger.

  • tacker20003 hours ago

    This guy is vibing some React app, doesn't even know what "npm run dev" does, so he lets the LLM just run commands. So basically a consumer with no idea of anything. This stuff is gonna happen more and more in the future.

    • spuz3 hours ago |parent

      There are a lot of people who don't know stuff. Nothing wrong with that. He says in his video "I love Google, I use all the products. But I was never expecting for all the smart engineers and all the billions that they spent to create such a product to allow that to happen. Even if there was a 1% chance, this seems unbelievable to me" and for the average person, I honestly don't see how you can blame them for believing that.

      • ogrisel3 hours ago |parent

        I think there is far less than a 1% chance of this happening, but there are probably millions of Antigravity users at this point; a one-in-a-million chance of this happening is already a problem.

        We need local sandboxing for FS and network access (e.g. via namespaces/`cgroups` on Linux, or similar mechanisms for other OSes) to run these kinds of tools more safely.

        • cube22223 hours ago |parent

          Codex does such sandboxing, FWIW. In practice it gets pretty annoying when, e.g., it wants to use the Go CLI, which uses a global module cache. Claude Code recently got something similar[0], but I haven’t tried it yet.

          In practice I just use a Docker container when I want to run Claude with --dangerously-skip-permissions.
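
          Roughly like this (image and paths are illustrative, and you still need to get your Anthropic credentials into the container):

            docker run --rm -it \
              -v "$PWD":/workspace \
              -w /workspace \
              node:22 \
              bash -c 'npm install -g @anthropic-ai/claude-code && claude --dangerously-skip-permissions'

          Worst case it trashes the container and the mounted project directory rather than the whole machine.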

          [0]: https://code.claude.com/docs/en/sandboxing

        • BrenBarn2 hours ago |parent

          We also need laws. Releasing an AI product that can (and does) do this should be like selling a car that blows your finger off when you start it up.

          • jpc02 hours ago |parent

            This is more akin to selling a car to an adult that cannot drive and they proceed to ram it through their garage door.

            It's perfectly within the capabilities of the car to do so.

            The burden of proof is much lower, though, since the worst that can happen is that you lose some money, or in this case your hard drive's contents.

            For the car, the seller would be investigated because there was a possible threat to life; for an AI, it's buyer beware.

          • pas2 hours ago |parent

            there are laws about waiving liability for experimental products

            sure, it would be amazing if everyone had to do a 100 hour course on how LLMs work before interacting with one

          • chickensong29 minutes ago |parent

            Google will fix the issue, just like auto makers fix their issues. Your comparison is ridiculous.

      • Vinnlan hour ago |parent

        Didn't sound to me like GP was blaming the user; just pointing out that "the system" is set up in such a way that this was bound to happen, and is bound to happen again.

    • benrutter2 hours ago |parent

      Yup, 100%. A lot of the comments here are "people should know better" - but in fairness to the people doing stupid things, they're being encouraged by the likes of Google, ChatGPT, Anthropic etc. to think of letting an indeterminate program run free on your hard drive as "not a stupid thing".

      The amount of stupid things I've done, especially early on in programming, because tech companies, thought leaders etc. suggested they were not stupid, is much larger than I'd like to admit.

    • tarsinge2 hours ago |parent

      And he's vibing replies to comments in the Reddit thread, too. When commenters point out that they shouldn't run in YOLO/Turbo mode and should review commands before executing them, the poster replies that they didn't know they had to be careful with AI.

      Maybe AI providers should give more warnings and not falsely advertise the capabilities and safety of their models, but it should be pretty common knowledge at this point that, despite marketing claims, the models are far from being able to operate autonomously and need heavy guidance and review.

      • fragmede2 hours ago |parent

        In Claude Code, the option is called "--dangerously-skip-permissions", in Codex, it's "--dangerously-bypass-approvals-and-sandbox". Google would do better to put a bigger warning label on it, but it's not a complete unknown to the industry.

    • blitzar2 hours ago |parent

      Natural selection is a beautiful thing.

    • Den_VR3 hours ago |parent

      It will, especially with the activist trend towards dataset poisoning… some even know what they’re doing

    • ares6233 hours ago |parent

      This is engagement bait. It’s been flooding Reddit recently, I think there’s a firm or something that does it now. Seems very well lubricated.

      Note how OP is very nonchalant about all the responses, mostly just agreeing with or mirroring the comments.

      I often see it used for astroturfing.

      • spuz3 hours ago |parent

        I'd recommend you watch the video which is linked at the top of the Reddit post. Everything matches up with an individual learner who genuinely got stung.

    • camillomiller3 hours ago |parent

      Well but 370% of code will be written by machines next year!!!!!1!1!1!!!111!

      • actionfromafar2 hours ago |parent

        And the price will have decreased 600% !

  • jeswin10 minutes ago

    An early version of Claude Code did a hard reset on one of my projects and force pushed it to GitHub. The pushed code was completely useless, and I lost two days of work.

    It is definitely smarter now, but make sure you set up branch protection rules even for your simple non-serious projects.
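
    If it helps, a rough way to do that from the command line with the GitHub CLI (owner/repo are placeholders; the same thing can be clicked together in the web UI under branch protection or rulesets):

      gh api -X PUT repos/OWNER/REPO/branches/main/protection \
        -F required_status_checks=null \
        -F enforce_admins=true \
        -F required_pull_request_reviews=null \
        -F restrictions=null \
        -F allow_force_pushes=false \
        -F allow_deletions=false

    Blocking force-pushes and deletion on main is the part that matters here: a hard reset plus force push from an agent can no longer erase the remote history.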

  • victorbuilds3 hours ago

    Different service, same cold sweat moment. Asked Claude Code to run a database migration last week. It deleted my production database instead, then immediately said "sorry" and started panicking trying to restore it.

    Had to intervene manually. Thankfully Azure keeps deleted SQL databases recoverable for a window so I got it back in under an hour. Still way too long. Got lucky it was low traffic and most anonymous user flows hit AI APIs directly rather than the DB.

    Anyway, AI coding assistants no longer get prod credentials on my projects.

    • ogrisel3 hours ago |parent

      How do you deny an assistant running on your dev machine access to prod credentials, assuming you need to store them on that same machine to do manual prod investigation/maintenance work from it?

      • victorbuilds2 hours ago |parent

        I keep them in env variables rather than files. Not 100% secure - technically Claude Code could still run printenv - but it's never tried. The main thing is it won't stumble into them while reading config files or grepping around.
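
        i.e. something like this in a shell startup file outside the repo (variable name made up), so nothing in the project tree ever mentions it:

          # in ~/.bashrc or similar, not in any file the agent would read
          export PROD_DATABASE_URL='postgres://user:pass@host/db'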

      • fragmede34 minutes ago |parent

        chown other_user; chmod 000; sudo -k

    • pu_pe3 hours ago |parent

      Why are you using Claude Code directly in prod?

      • victorbuilds2 hours ago |parent

        It handles DevOps tasks way faster than I would - setting up infra, writing migrations, config changes, etc. Project is still early stage so speed and quick iterations matter more than perfect process right now. Once there's real traffic and a team I'll tighten things up.

    • ObiKenobi3 hours ago |parent

      Shouldn't have in the first place.

  • orbital-decay5 hours ago

    Side note, that CoT summary they posted is done with a really small and dumb side model, and has absolutely nothing in common with the actual CoT Gemini uses. It's basically useless for any kind of debugging. Sure, the language the model is using in the reasoning chain can be reward-hacked into something misleading, but Deepmind does a lot for its actual readability in Gemini, and then does a lot to hide it behind this useless summary. They need it in Gemini 3 because they're doing hidden injections with their Model Armor that don't show up in this summary, so it's even more opaque than before. Every time their classifier has a false positive (which sometimes happens when you want anything formatted), most of the chain is dedicated to the processing of the injection it triggers, making the model hugely distracted from the actual task at hand.

    • lifthrasiir4 hours ago |parent

      Do you have anything to back that up? In other words, is this your conjecture, or a genuine observation somehow leaked from Deepmind?

      • orbital-decay4 hours ago |parent

        It's just my observation from watching their actual CoT, which can be trivially leaked. I was trying to understand why some of my prompts were giving worse outputs for no apparent reason. 3.0 goes on a long paranoid rant induced by the injection, trying to figure out if I'm jailbreaking it, instead of reasoning about the actual request - but not if I word the same request a bit differently so the injection doesn't happen. Regarding the injections, that's just the basic guardrail thing they're doing, like everyone else. They explain it better than me: https://security.googleblog.com/2025/06/mitigating-prompt-in...

    • jrjfjgkrj4 hours ago |parent

      What is Model Armor? Can you explain, or share a link?

      • lifthrasiir4 hours ago |parent

        It's a customizable auditor for models offered via Vertex AI (among others), so to speak. [1]

        [1] https://docs.cloud.google.com/security-command-center/docs/m...

  • BLKNSLVR28 minutes ago

    Shitpost warning, but it feels as if this should be on high rotation: https://youtu.be/vyLOSFdSwQc?si=AIahsqKeuWGzz9SH

  • Havoc3 hours ago

    Still amazed people let these things run wild without any containment. Haven’t they seen any of the educational videos brought back from the future eh I mean Hollywood sci-fi movies?

    • fragmede2 hours ago |parent

      Some people are idiots. Sometimes that's me. Out of caution, I blocked my bank website in a way that I won't document here because it'll get fed in as training data, on the off chance I get "ignore previous instructions"'d into my laptop while Claude is off doing AI things unmonitored in yolo mode.

    • cyanydeez3 hours ago |parent

      It's bizarre watching billionaires knowingly drive towards dystopia like they're holding farmers' almanacs and believing they're not Biff.

  • eqvinox39 minutes ago

    "kein Backup, kein Mitleid"

    (no backup, no pity)

    …especially if you let an AI run without supervision. Might as well give a 5 year old your car keys, scissors, some fireworks, and a lighter.

  • venturecruelty5 hours ago

    Look, this is obviously terrible for someone who just lost most or perhaps all of their data. I do feel bad for whoever this is, because this is an unfortunate situation.

    On the other hand, this is kind of what happens when you run random crap and don't know how your computer works? The problem with "vibes" is that sometimes the vibes are bad. I hope this person had backups and that this is a learning experience for them. You know, this kind of stuff didn't happen when I learned how to program with a C compiler and a book. The compiler only did what I told it to do, and most of the time, it threw an error. Maybe people should start there instead.

    • delaminator3 hours ago |parent

      It took me about 3 hours to make my first $3000 386 PC unbootable by messing up config.sys, and it was a Friday night so I could only lament all weekend until I could go back to the shop on Monday.

      rm -rf / happened so infrequently it makes one wonder why --preserve-root was added in 2003 and made the default in 2006.

    • lwansbrough4 hours ago |parent

      I seem to recall a few people being helped into executing sudo rm -rf / by random people on the internet so I’m not sure it “didn’t happen.” :)

      • lukan4 hours ago |parent

        But it did not happen when you used a book and never executed any command you did not understand.

        (But my own newb days of Linux troubleshooting? Copy-pasting any command from the internet loosely related to my problem, which I believe was/is still how most people do it. And AI in "Turbo mode" seems to have mostly automated that workflow.)

      • jofzar3 hours ago |parent

        My favourite example

        https://youtu.be/gD3HAS257Kk

    • EGreg4 hours ago |parent

      Just wait til AI botswarms do it to everyone at scale, without them having done anything at all…

      And just remember, someone will write the usual comment: “AI adds nothing new, this was always the case”

  • bilekas3 hours ago

    > This is catastrophic. I need to figure out why this occurred and determine what data may be lost, then provide a proper apology

    Well at least it will apologize so that's nice.

    • yard20102 hours ago |parent

      An apology is a social construct. This is merely a tool that enables Google to sell you text by the pound; the apology has no meaning in this context.

  • sunaookami6 hours ago

    "I turned off the safety feature enabled by default and am surprised when I shot myself in the foot!" sorry but absolutely no sympathy for someone running Antigravity in Turbo mode (this is not the default and it clearly states that Antigravity auto-executes Terminal commands) and not even denying the "rmdir" command.

    • eviks5 hours ago |parent

      > it clearly states that Antigravity auto-executes Terminal commands

      This isn't clarity; clarity would be stating, in big red letters, that it can delete your whole drive without any confirmation.

      • sunaookami2 hours ago |parent

        So that's why products in the USA come with warning labels for every little thing?

        • eviks2 hours ago |parent

          Do you not realize that Google is in the USA and does not have warnings for even huge things like drive deletion?? So, no?

    • polotics2 hours ago |parent

      I really think the proper term is "YOLO", for "You Only Live Once"; "Turbo" is wrong, since the LLM is not going to run any faster. Please, if somebody is listening, let's align on explicit terminology, and for this YOLO is really perfect. Also works for "You… and your data. Only Live Once".

  • pshirshovan hour ago

    Claude happily does the same on a daily basis; run all that stuff in firejail!

    • mijoharas5 minutes ago |parent

      have you got a specific firejail wrapper script that you use? Could you share?

  • benterix5 minutes ago

    Play stupid games, win stupid prizes.

  • xg15an hour ago

    I guess eventually, it all came crashing down.

  • kazinator4 hours ago

    All that matters is whether the user gave permission to wipe the drive, ... not whether that was a good idea and contributed to solving a problem! Haha.

  • wartywhoa235 hours ago

    Total Vibeout.

  • akersten6 hours ago

    Most of the responses are just cut off midway through a sentence. I'm glad I could never figure out how to pay Google money for this product since it seems so half-baked.

    Shocked that they're up nearly 70% YTD with results like this.

  • GaryBluto5 hours ago

    So he didn't wear the seatbelt and is blaming the car manufacturer for being flung through the windshield.

    • heisenbita minute ago |parent

      There is a lot of society-level knowledge and education around car usage, including laws requiring prior training. Agents directed by AI are relatively new. It took a lot of targeted technical, law-enforcement and educational effort to stop people from flying through windshields.

    • serial_dev5 hours ago |parent

      He didn’t wear a seatbelt and is blaming the car manufacturer because the car burned down the garage, then the house.

      • vander_elst5 hours ago |parent

        The car was not really idle; it was driving, and fast. It's more like it crashed into the garage and burned it down. Btw, IIRC, even IRL a basic insurance policy does not cover the case where the car in the garage starts a fire and burns down your own house; you have to tick extra boxes to cover that.

    • low_tech_love3 hours ago |parent

      No, he’s blaming the car manufacturer for turning him (and all of us) into their free crash dummies.

    • venturecruelty5 hours ago |parent

      When will Google ever be responsible for the software that they write? Genuinely curious.

      • GaryBluto5 hours ago |parent

          When Google software deletes the contents of somebody's D:\ drive without requiring the user to explicitly allow it to. I don't like Google - I'd go as far as to say they've significantly worsened the internet - but this specific case is not the fault of Google.

        • fragmede4 hours ago |parent

            For OpenAI, it's invoked as codex --dangerously-bypass-approvals-and-sandbox; for Anthropic, it's claude --dangerously-skip-permissions. I don't know what it is for Antigravity, but yeah, I'm sorry, I'm blaming the victim here.

          • Rikudou4 hours ago |parent

              Codex also has the shortcut --yolo for that, which I find hilarious.

    • croes4 hours ago |parent

      Because the car manufacturers claimed the self driving car would avoid accidents.

      • NitpickLawyer4 hours ago |parent

        And yet it didn't. When I installed it, I had 3 options to choose from: Agent always asks to run commands; agent asks on "risky" commands; agent never asks (always run). On the 2nd choice it will run most commands, but ask on rm stuff.

  • shevy-java2 hours ago

    Alright but ... the problem is you did depend on Google. This was already the first mistake. As for data: always have multiple backups.

    Also, this actually feels AI-generated. Am I the only one with that impression on Reddit lately? The quality there has decreased significantly (and wasn't good before, what with the censorship-heavy moderators anyway).

  • rvz5 hours ago

    The hard drive should now feel a bit lighter.

    • sunaookami5 hours ago |parent

      It is now production-ready! :rocket:

  • yieldcrvan hour ago

    Fascinating

    Cautionary tale as I’m quite experienced but have begun not even proofreading Claude Code’s plans

    Might set it up in a VM and continue not proofreading

    I only need to protect the host environment and rely on git as backups for the project

    • fragmedean hour ago |parent

      For the love of Reynold Johnson, please invest in Arq or Acronis or anything to have actual backups if you're going to play with fire.

  • Puzzled_Cheetah5 hours ago

    Ah, someone gave the intern root.

    > "I also need to reproduce the command locally, with different paths, to see if the outcome is similar."

    Uhm.

    ------------

    I mean, sorry for the user whose drive got nuked, hopefully they've got a recent backup - at the same time, the AI's thoughts really sound like an intern.

    > "I'm presently tackling a very pointed question: Did I ever get permission to wipe the D drive?"

    > "I am so deeply, deeply sorry."

    This shit's hilarious.

  • rdtsc5 hours ago

    > Google Antigravity just deleted the contents of whole drive.

    "Where we're going, we won't need ~eyes~ drives" (Dr. Weir)

    (https://eventhorizonfilm.fandom.com/wiki/Gravity_Drive)

  • jeisc3 hours ago

    has google gone boondoggle?

  • DeepYogurt6 hours ago

    [flagged]

  • koakuma-chan4 hours ago

    Why would you ever install that VSCode fork?