I’m somewhat indifferent to artifacts. They can occasionally produce useful code when the codebase is relatively small, but they become excessively cumbersome once it reaches around 500 lines, making it difficult to even add or remove code. If you want to change a few colors on a webpage, the model has to rewrite the whole artifact. It frustrates me that you can’t edit a single line in an artifact; you have to rely on the LLM to do it.
I have run into a few situations where there's some console error that only gets fixed by rewriting the thing from scratch. Artifacts are awesome for displaying some sort of interactive tutoring tool I've made on the fly, but realistically I could just keep that HTML/CSS/JS file myself and push it to my own site relatively easily. For others, though, having it hosted there is convenient.
I think that's a good decision. They know their market, and it's intended mostly for small projects and demos from non-technical people. And they didn't build a half-baked editor that people would have further complaints about. AI-assisted coding is a whole different thing, and there are many players in that space.
> OpenAI appears to have basically abandoned Custom GPTs since their Spring ‘24 update, and I’m a bit stumped as to why.
It simplifies the product, reducing the number of hurdles the user has to jump through. "Hmm, which gpt should I use for this task?" That should be OpenAI's problem, not mine!
False all around.
Custom GPTs are not abandoned, they see heavy usage, and choosing one is not a dichotomy between the user's problem and OpenAI's problem. Custom GPTs exist so chats can benefit from custom prompts, which are highly relevant. Unless you're asserting that custom prompts are useless, which would be absurd, you can't conclude that Custom GPTs are useless. And no, this is not something OpenAI is going to select for you, because the customization is a personal one.
Custom GPTs don’t support a bunch of newer ChatGPT features like chat history and projects and they can’t be edited from mobile. There is no real advantage to using a custom GPT over adding a custom prompt to a project at this point, given that the latter doesn’t isolate you from the rest of ChatGPT’s feature set. It really seems like they stopped working on custom GPTs and just expect users to use projects instead.
> There is no real advantage to using a custom GPT over adding a custom prompt to a project
The two features, namely Custom GPTs and Projects, are orthogonal. This is because a Project is for related explorations of a theme, whereas a Custom GPT is for unrelated explorations of a theme.
> chat history
What chat history? Each chat is in the user's history by default, which is how it's supposed to work for Custom GPTs. I don't need a filtered chat history for a Custom GPT like I do for a Project.
> It really seems like they stopped working on custom GPTs and just expect users to use projects instead.
That's more a personal belief than a conclusion; there's been no formal statement from OpenAI to that effect.
The chat history point is probably referring to the referencing-across-chats feature.
Hilariously, today I unsubscribed from Claude due to the increased API timeouts and expensive usage costs, when I can use other models that are way cheaper and perform just as well.
Yeah, their efforts seem to show no care for their consumer-level subscribers. Everything they do seems to be geared toward enterprise customers.
How stable are these now?
I turned off artifacts months ago because it would:
- frequently update code incorrectly / bad edit diff
- act like it updated / created an artifact when it just did nothing
- slowly / painfully delete every single line one by one before rewriting
- use artifacts for things that shouldn't have had any code written at all
The hassle just wasn't worth the value it provided. This was before Claude Code.