ChatGPT and Codex Down

Posted by bakigul 1 day ago


Comments

Comment by kilroy123 1 day ago

Both are down for me. :-/ I'm currently in Eastern Europe.

Comment by AustinDev 1 day ago

Both currently working in US.

Comment by lrvick 1 day ago

Burn baby burn.

Meanwhile, you can always buy hardware like a Strix Halo and have local LLMs that no third party can take away from you.

Comment by virgildotcodes 23 hours ago

I really wish local models could compete with Codex, but they are miles apart for now. I'm not sure how that gap ever closes, unless local models at some point in the future catch up to the current state of 5.4 High.

Even then, the frontier models would likely have improved by an equivalent degree, so you'd again be faced with the same choice of deciding between a dramatically less effective local tool and a far more capable, closed remote model.

I guess there's going to be some point of "good enough" for most people.

I feel like the closed frontier models really got there around 8 months ago, and even more so ~4-6 months ago with the release of the Codex series and then Opus 4.6. It finally feels like you can get reliably good implementations of features that follow repo patterns and best practices, and, at least with 5.4 High/Xhigh Codex, code reviews that don't mostly surface hallucinated or superficial bullshit.

While I'm rambling, I feel like when/if local models ever do catch up to this point, the frontier models are going to be so damn good that software devs are truly fucked.

Comment by lrvick 10 hours ago

I do Linux kernel, compiler, and operating system dev with Qwen3.5 122b running locally on a Strix Halo 128G at 35 t/s. Those are pretty much the most complex software problems one can work on.

I think a lot of people just want to put in a credit card and press an easy button.

Comment by virgildotcodes 5 hours ago

Yeah, the easy button is of course the point, if it translates to a more capable model that requires less hand-holding and manual correction and consistently produces better-quality code. You wouldn't want to go from Qwen3.5 122b back to GPT 3.5 for coding assistance.

People can definitely be productive with less powerful models. Supermaven or Cursor's tab autocomplete models from a year ago were already a huge boost over the pre-AI days. They just don't have the same capabilities as the leading models.

Curious if you've tried GPT 5.4 High through Codex to compare for your use case?

Comment by andyfilms1 23 hours ago

Sure, but unless you're training them yourself they can still be compromised with poisoning or bias. They're still black boxes even if you're running them locally.

Comment by lrvick 10 hours ago

Obviously, and that is no different from remote models. You do not and should not ever trust an LLM, but with proper handling they can still be super useful.

You give LLMs a dedicated OS to work in, let them do research or debugging and commit to branches, review and clean up those branches as you like from a trusted OS, then sign the commits and mark a PR as ready for review.
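A minimal sketch of that branch-isolation workflow in git, assuming a hypothetical repo layout and branch name (the agent's sandboxed OS, the actual review, and your signing key setup are all outside this snippet):

```shell
set -eu
# Stand in for an existing project (hypothetical temp repo for illustration).
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email "dev@example.com"
git config user.name "Dev"
echo "fn main() {}" > main.rs
git add main.rs
git commit -qm "initial commit"

# 1. The LLM works only on a throwaway branch, never on main.
git switch -qc llm/proposed-fix
echo "// agent-proposed change" >> main.rs
git commit -qam "agent: proposed fix"

# 2. From a trusted OS, inspect the agent's diff before accepting anything.
git diff main..llm/proposed-fix

# 3. Clean up as needed, then merge only what you reviewed.
git switch -q main
git merge -q --no-ff -m "reviewed: agent fix" llm/proposed-fix
# With a configured signing key you would sign here, e.g.:
#   git commit --amend --no-edit -S
git log --oneline
```

The point of the `--no-ff` merge is that your trusted machine, not the agent, authors the commit that lands on main; signing that commit then attests to your review rather than to the agent's output.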

Comment by Archit3ch 22 hours ago

What's the alternative to frontier models? Disk-streamed GLM 5.1? By the time you get a single response back, the API will be back up.

Comment by lrvick 10 hours ago

35 t/s with Qwen3.5 122b on a Strix Halo. The local stuff works great now. Stop giving the corpo monopolists money.


Comment by rvz 23 hours ago

I would have expected Claude to take time off first. It turns out that both ChatGPT and Codex decided to take some time off on vacation today.

Comment by ChrisArchitect 23 hours ago