How to Train an LLM: Part 1 (omkaark.com)
18 points by parthsareen 2 days ago | 3 comments
  • doppelgunner 2 days ago

    Do you think LLMs in the future will still be on the server? I think as hardware improves, there will be a time when we can install the LLM on mobile phones or portable devices. That would make it faster and cheaper to maintain since you don't need a server anymore. Or maybe I'm wrong?

    • tom_alexander 2 days ago | parent

      That's already the case. I run a quantized 70-billion-parameter Llama 3.1 model on my Framework 13-inch laptop. It only cost ~$300 to get the 96 GB of RAM (which I purchased for unrelated non-AI reasons before the AI boom). It certainly isn't fast, but it is fast enough. I run it via Vulkan compute using llama.cpp with an AnythingLLM web interface in front of it. A rough sketch of that kind of local setup follows below.
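
      A minimal sketch of loading a quantized GGUF model locally, shown here with the llama-cpp-python bindings rather than the commenter's llama.cpp server + AnythingLLM stack; the model file name is hypothetical, and GPU offload assumes llama.cpp was built with a GPU backend such as Vulkan:

          # Sketch only: local inference over a quantized GGUF file with llama-cpp-python.
          # "llama-3.1-70b-q4.gguf" is a placeholder path, not a real artifact name.
          from llama_cpp import Llama

          llm = Llama(
              model_path="llama-3.1-70b-q4.gguf",  # hypothetical quantized model file
              n_ctx=4096,                          # context window size
              n_gpu_layers=-1,                     # offload all layers to the GPU backend if available
          )

          out = llm.create_chat_completion(
              messages=[{"role": "user", "content": "Summarize how attention works."}]
          )
          print(out["choices"][0]["message"]["content"])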

  • rob_c 2 days ago

    So, exactly how much of the code/content is new versus borrowed?