I can’t decide if it’s hype, FOMO, or whatever I would call it. I see all the AI talk, I hear about new tools coming out daily, I read about startups pivoting just to include AI (otherwise there’s no money for them)… And now I fear that if I don’t try out or even start using AI tools, I’ll get overrun.
Do I even need to know what MCP is? Do I need to have agents to do some tasks on my behalf? Do I need to start creating apps and sites using Vibe coding (I’m not a developer)?
> And now I fear that if I don't try out or even start using AI tools, I'll get overrun.
You can do this quite easily. Ask ChatGPT to come up with a nice cooking recipe and just take it from there. Ask it all kinds of questions and ask yourself what you find useful.
> Do I even need to know what MCP is?
No. And this comes from someone who uses it, as I'm a developer. You might want a high-level understanding of what tool use is and what different LLMs are capable of doing. If you find LLMs useful, then you might even want to equip Claude Desktop with some tools that have been created for everyone to use.
For perspective: MCP enables tool use. There are other ways, but MCP standardizes it.
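To make "tool use" concrete, here's a rough, protocol-agnostic sketch of what happens under the hood: the model emits a structured tool call, and a wrapper program dispatches it to a real function. (The tool names and JSON shape here are illustrative assumptions, not the actual MCP wire format.)

```python
import json

# A toy "tool registry". In MCP, servers advertise tools like this,
# each with a name, a description, and a schema for the arguments.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
    "add": lambda a, b: a + b,
}

def dispatch(tool_call_json):
    """Parse a model-emitted tool call and run the matching function."""
    call = json.loads(tool_call_json)
    fn = TOOLS[call["name"]]
    # The result gets sent back to the model as context for its next turn.
    return fn(**call["arguments"])

# Example: the LLM's reply contains a structured call like this.
# dispatch('{"name": "get_weather", "arguments": {"city": "Oslo"}}')
# -> "Sunny in Oslo"
```

The point of MCP is that the registry and the call format are standardized, so any client can use any server's tools without bespoke glue code like the above.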
> Do I need to start creating apps and sites using Vibe coding (I’m not a developer)?
At best, vibe coded apps allow you to create high fidelity prototypes, which is amazing but you can't create a fully functional product with just vibe coding. There may be the odd exception that made it work, but I suspect that would take a lot of effort with the current state of the art. I'd recommend lovable.dev if you want to create a prototype.
No.
Read real books. Talk to real people. Do real things.
You can catch up when everyone else actually gets somewhere, having spent the whole meanwhile on self-exploration.
Depends on what your goal is.
Every few months LLMs get markedly better. In a few days GPT-5 will come out, which by early reports (and anecdotal evidence on LMArena) is substantively better than the best current coding models. Will it surpass humans? It is a possibility, unless there is some yet-unforeseen impassable intelligence gap below human intelligence and above SOTA AI intelligence.
Coding as a hobby seems to be more fun for people without AI tools. You get the experience of more completely holding the system in your head, your interaction with the codebase is more intuitive and connected deeply to your natural reasoning faculties. However, I believe the purism of eschewing AI tools is a trap that had few consequences in 2023, or even 2024, but by the end of 2025, the standard for what a "productive software developer" can accomplish in a week will be set by the boon AI tools give to mid and senior developers.
That's it, there's no getting around the reality of technological progress. Soon after that, there will be few if any humans in the loop at all for all except the highest level executive decisions. Coding will be an artisan task done for personal enjoyment, like knitting or cooking.
Against the grain here: Yes, you will.
You should try to understand what it is useful for and what it isn't. You don't need to vibe code at all; in fact, it's the exact opposite. You should read the code and be able to figure out what LLMs are bad at and how they make inhuman mistakes.
I personally like the prompt completion of functions. The act of manually writing out in detail what needs to happen and how helps me think about what I want to have done.
Anyway, the main argument I have that you should be using it is that the first-hand experience will teach you what to look for in broken LLM code and also help you guide others into producing less broken LLM code.
They're not a developer.
Somehow, I completely missed that...
Perhaps it'll be interesting to see where AI and crypto intersect.
People are getting dumber, sicker and poorer, are they not? There'll be a need to change that. Make me healthy, wealthy and wise. And Vibing means more apps and ideas can be tested. Deployed is another story. Voice recognition will be more important I suspect as people stop reading and writing. How do people avoid the robots overall?
With AI, you can take an old app, and update it and modernize it, right? I was using this web app a long time ago. Would be cool to get it right for 2025. https://uipublish.sourceforge.net/ - who else would want to do that?
I'm an engineer, and my general attitude is that when major new developments enter the space, it's worthwhile to understand how they work, why they were created, what they could potentially be good for, what they won't be good for, and why.
I applied this methodology to mobile, cryptocurrencies, Web3, VR, and AI. I ended up being bearish about cryptocurrencies and Web3 and bullish about mobile, VR and AI. VR is having a fair amount of trouble taking off, but I think AI is not facing the same challenges. As such, I'm betting very strongly that it's going to be part of our future, and that means now is a good time to start understanding how it works and what people are saying about it and what might come next. I do this by putting my hands on the technology and running lots of different models locally and also experimenting with cloud models, as well as exploring what tools companies are building with these technologies. Simply reading doesn't quite do it for me.
I don't do this because I'm worried about being left behind, nearly as much as I want to be able to deploy these tools effectively in the future to solve problems. It's also fun!
Probably, but not for the reasons you think.
There are segments of the software industry where competence matters: security, finance, healthcare, games, and some more. But most of the rest of software is bodies filling seats mashing buttons.
One way to be a rising star as a button masher, framework junky, or other benchwarmer is merchandising your most valuable product, yourself. At this time nothing screams hot trend as loudly as AI. Like all hot air this bubble will eventually pop. You can choose to hop on the AI fashion train while people are throwing money to the wind or you can go back to mindless button mashing.
You don't have to jump. Not all progress goes forwards. As anyone who's familiar with web frameworks and crypto knows, most of the paths lead to dead ends.
Play with things. Treat them as a toy, not as homework. Eventually you'll get a feel for what is useful and what is someone trying to sell you something.
The first time I actually got something from an MCP was when I just asked Claude to check my emails for anything I needed to act on. I had 8 emails in the inbox and didn't expect it to find anything. But it found an important request from accounting that had been cc'ed and forwarded multiple times, to the point that I didn't see the actual ask. It was also some tedious bureaucratic stuff, so Claude wrote the response in 2 minutes and I didn't have to procrastinate on it the whole morning.
Learning how to use AI in the upcoming years will be equivalent to knowing how to use the internet and Microsoft Office in the 2010s. You don't absolutely need to know it, but not knowing it will put you at a disadvantage.
The thing with AI is that it's pretty easy to use - 80% of it is just prompt engineering. For example, agents are nothing more than a big system prompt to really any LLM, with instructions like "if you want to create a code file, surround the code with <code></code> tags", and then the wrapper program that runs the chat detects those tags and actually does the thing.
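A minimal sketch of that wrapper idea in Python - the tag format with a filename attribute and the file handling are my own illustrative assumptions, not any particular product's format:

```python
import re

SYSTEM_PROMPT = (
    'If you want to create a code file, surround the code with '
    '<code filename="..."> and </code> tags.'
)

def extract_code_blocks(llm_reply):
    """Find the <code filename="..."> blocks the model emitted."""
    pattern = r'<code filename="([^"]+)">(.*?)</code>'
    return re.findall(pattern, llm_reply, flags=re.DOTALL)

def act_on_reply(llm_reply):
    # The "agent" part: the wrapper actually does the thing the tags describe.
    for filename, code in extract_code_blocks(llm_reply):
        with open(filename, "w") as f:
            f.write(code.strip())

reply = '<code filename="hello.py">print("hi")</code>'
# extract_code_blocks(reply) -> [('hello.py', 'print("hi")')]
```

Everything an agent framework adds on top - retries, tool schemas, sandboxing - is elaboration on this detect-and-act loop.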
I'm not sure if that's an answer to your question, but there are at least companies today that do not allow their programmers to use GenAI, e.g. OpenAI's, including tools running locally.
You could (for now) work for such a company.
That doesn't mean they'll be around in the future, maybe they will get left behind. But the future is difficult to predict, and it's at least the case that such companies exist today.
Then again it might be difficult to identify such companies. I tried to find any public information that the company I'm thinking of doesn't use AI (large US company, 10k+ employees) and I couldn't. So you probably wouldn't know, before working there, that they don't allow AI usage. So maybe this advice isn't as actionable as I'd hoped.
If someone never moved into web development and kept coding C++ on Windows, did they survive? Maybe they did, holding some job somewhere in a legacy company, or maybe they're doing some real cool stuff at Google etc. But for the majority of people, web is the default now. IMO, it will be the same for AI. You can probably survive and hold some legacy jobs in some legacy companies, but the majority of people will have moved to AI (and it will be faster than the desktop -> web migration). But why even worry about the hype? Leave the hype aside (stop browsing LinkedIn if that helps) and start learning a little bit every day/week.
I think there is something happening here, and what's needed is your own independent opinion of the technology. I think a lot of people don't like the idiots hyping the tech up and the incessant nonsense. That's what turns them away from AI. But it's a very useful bit of technology, and your relationship to it should be direct, with no middlemen.
So if you want to work with and adopt AI, make your own workflows. Interact directly with ChatGPT or even the API, as opposed to learning Cursor shortcuts or Lovable prompts.
I use AI very frequently, and I love to spell out my thinking and even have AI critique it. Once done, I generate code in phases to save typing. I don't think anything will replace software engineers, but engineers with more knowledge will replace those with less. You have to snatch that knowledge. Moronic VCs and tool vendors won't help; direct involvement will.
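Interacting directly with the API can be as simple as one HTTP request. Here's a rough sketch assuming an OpenAI-style chat endpoint - the model name, system prompt, and key are placeholders you'd swap for your own:

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt, api_key, model="gpt-4o"):
    """Construct the raw HTTP request for a chat completion, no SDK needed."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": "Critique my plan before agreeing."},
            {"role": "user", "content": prompt},
        ],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

# Sending it (requires a real key):
# req = build_request("Here is my design doc...", api_key="sk-...")
# resp = json.loads(urllib.request.urlopen(req).read())
# print(resp["choices"][0]["message"]["content"])
```

Once you see that a "workflow" is just messages in, text out, building your own on top of this is much less intimidating than it sounds.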
You don't _NEED_ to know it, but it's probably good for your career to have some exposure in 2025 to LLMs.
Download ollama and mess with a model locally, implement your own RAG system so you get a feel for what that entails and what good and bad use cases look like. I use LLMs every day for random stuff, but not really because I need the output - more because I need to know and understand where its strengths and weaknesses are.
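For a feel of what a bare-bones RAG system entails, here's a toy sketch. Real systems use a neural embedding model and a vector store, but the retrieve-then-prompt shape is the same; the bag-of-words "embedding" and sample documents below are simplifications of my own for illustration:

```python
import math
from collections import Counter

DOCS = [
    "Ollama runs large language models locally on your machine.",
    "RAG retrieves relevant documents and feeds them to the model.",
    "Cats sleep for most of the day.",
]

def embed(text):
    """Toy 'embedding': a bag-of-words Counter. Real RAG uses a neural model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    """Rank documents by similarity to the query; return the top k."""
    q = embed(query)
    ranked = sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query):
    # The retrieved context gets pasted into the prompt --
    # that's the "augmented" in retrieval-augmented generation.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

Building even this much makes the failure modes obvious: if retrieval surfaces the wrong document, the model confidently answers from the wrong context - which is exactly the kind of strength/weakness intuition worth having.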
You're not trying to become an overnight expert here. Nobody expects that. But there's this gap right now between what stakeholders think AI can do (basically everything) and what it actually can do. And guess who needs to bridge that gap?
As a software engineer part of your responsibility is to advise non-software engineers on what is just now possible with technology, because when a stakeholder comes to you with some wacky idea, you need to be able to judge it and decide on the investment of time the idea might require, or if it's even possible today/now.
The op's not talking about running a local AI.
That's pretty much a complete waste of time as the op is not a developer.
The op's talking about using AI in their normal work workflows.
>Download ollama and mess with a model locally,
Ollama locally is very slow (or low quality). I feel a good middle ground is renting GPU or TPU per minute and running a local model there.
Not if you have a gaming GPU or a recent Mac.
IMVHO it's worthwhile to invest some time playing with the new technologies, as you may appreciate them and find them useful. Give Claude Code, Gemini CLI, or Copilot a try (in VS Code with agent mode, or on GitHub).
But I don't think you need to stress, there is a lot of hype and there will be time to learn how to use the winners later.
Not knowing your profession (we know you are not a developer, but you could be anything, even an AI professor), it is hard to answer the second part of your question.
For the first part, consider this the smartphone of this age. You will need to know and understand how to prompt the AI. You will need to be comfortable with it. If not, expect to be asking your children to help you do it, as they will grow up with it and be very familiar.
No, you don't need to try to keep up with new tools. I would recommend you try the models though, even for a short time every few months. Send them questions or things you're working on, and see how they do. Provide sufficient context.
It's a good approximation to say that all tools are thin wrappers on top of the models, and having a good grasp of what the models can/can't do right now gets you 80% of the way there.
You'll be fine. Some people want you to never use it, others want you to give it a try right now... but generative AI is not a stock, you'll be free to experiment tomorrow, or in a week, or in a year.
When companies are changing their names to include buzzwords, it’s hype. This happened during the dotcom bubble, with crypto, and now AI.
Whatever your job is will dictate if, or how much, you may care. How things shake out at the end of the day will likely look much different than they do right now.
If you’re not someone who likes to try and ride hype waves with various get rich quick schemes, and not in the space, I wouldn’t worry about it. Sit back with your popcorn and watch these people fight for a few years and let the bubble pop. Then we’ll see who is left and what it’s actually good for. You can then adjust from there if needed.
If you’re not a developer you might as well use AI to become one?
Back in my day we used learn x in y minutes to get up to speed on a new language and then a week to get comfortable.
Now I can do that for anything in like 15 minutes.
Of course, coding is the easy part and anyone can do it.
Software engineering (key word: engineering) is the hard part and requires that you understand the computer, design patterns, and how to build robust systems.
Yes, you should.
As you're not a developer, but haven't told us your field, here are some general reasons why.
My non-developer friends (we're in our 40s) are using it to improve their productivity. They have already realized that the AI is bad at some tasks but great at others. For example, a person in PR now no longer writes the day-to-day press releases. They're a complete waste of their time, really just a formality anyway, and an AI can do them. But they still write all the important ones.
It can release you from routine tasks, or make them trivial.
I've also heard of someone who's using it to find jobs to tender for that they previously didn't have time to figure out. Filter + summarize. So tasks and opportunities that were too time consuming are now viable.
People are making their own mini apps for personal use that are specific for their field.
You don't necessarily have to have the ideas, but you do need to talk to people in your field and find out what they're doing.
MCP/Agents/etc. are pretty cutting edge, but you will start hearing non-techies using them soon.
Agents/MCP mean the LLM can do things. You can ask it to make a plan of action, and then DO the actions. It's still a bit ropey.
But soon you might be able to ask your AI to go into HubSpot, find 10 leads that look like they've gone stale and send them a special offer. (It's not quite there yet, and a bit unreliable on something as high level as that, code is less fuzzy and so it performs better).