Trusted access for the next era of cyber defense
Posted by surprisetalk 6 days ago
Comments
Comment by alopha 6 days ago
Comment by guzfip 6 days ago
The money is in enterprise and government. The consumer market doesn’t remotely pay enough. It’s the same story as Microsoft deliberately letting Windows become an unusable mess because that’s not where they make their money. The consumer market was good for establishing themselves, but now it’s getting dumped.
Comment by flyinglizard 6 days ago
Comment by NitpickLawyer 6 days ago
Comment by everlier 6 days ago
Comment by Avicebron 6 days ago
Comment by Jedd 6 days ago
Having grown up reading cyberpunk novels about life in cyberspace, and with a passing interest in cybernetics (though not of the Sirius Cybernetics Corporation variety), it's frustrating to lose 'cyber' as a prefix meaning computer- or internet-related.
Comment by bee_rider 6 days ago
Comment by SturgeonsLaw 5 days ago
I don't know any techies who use the term like that, unless they're in a role that interfaces with the suits.
Comment by ofjcihen 6 days ago
No no, best to have them distribute the cyber to us responsibly.
Comment by SoftTalker 6 days ago
Comment by TeMPOraL 6 days ago
Comment by swyx 6 days ago
Comment by cshimmin 6 days ago
Comment by twoodfin 6 days ago
Comment by TeMPOraL 5 days ago
Comment by Melatonic 5 days ago
Comment by chickensong 6 days ago
Comment by atoav 6 days ago
Comment by zarzavat 6 days ago
Comment by FacelessJim 6 days ago
Comment by tb0ne1521 5 days ago
Comment by ofjcihen 6 days ago
This feels more or less like a way to get in the news after Anthropic's Mythos announcement by removing some guardrails. I’m still signing up, though.
Comment by bunnywantspluto 6 days ago
Comment by alephnerd 6 days ago
Comment by gavinray 6 days ago
Just FYI for others.
Comment by hoss1474489 6 days ago
Direct link: https://chatgpt.com/codex/cloud/security
Comment by gavinray 6 days ago
Is anyone else who hasn't verified able to access it?
Comment by Kye 5 days ago
Comment by alphabettsy 6 days ago
Comment by ofjcihen 6 days ago
Comment by mikewarot 6 days ago
I used to think we were 20 years away from a shift to capability-based operating systems, which were ----> this <---- close to being widely adopted when the PC revolution swept them aside.
Unfortunately, I think we're about to repeat history, and we're now 20+ years out from actually solving things, AGAIN. 8(
Comment by NoahZuniga 5 days ago
Comment by mikewarot 4 days ago
Comment by TeMPOraL 5 days ago
If anything, maybe the security community can finally be arsed to consider ad-hoc delegation of authority as a core concept and a basic use case, because that's arguably the primary source of persistent user-level security issues in computing.
In real life, it's absolutely normal to ask random people on the fly to do something in your name, with your credentials - whether that's sending your kid with your credit card for a grocery run, asking your spouse to do some bank transfers or set up a new computer for you, or asking a co-worker to operate some system. It's the other reason people write passwords on post-its: even without bullshit password-strength rules (see xkcd://936), there's still a frequent need to share passwords with people.
Meanwhile, for the past decades, the security community has insisted on tying authority to individuals, doing everything possible - technologically and socially - to prevent delegation of authority (except in top-tier corporate systems, where it's technically supported, but in ways so convoluted, complex and broken that it may as well not exist - people still resort to post-its in drawers).
Until this basic concept is recognized, I fear more broad security improvements will only result in more useful work being prevented from happening, and more people-years wasted as users figure out ways to defeat security measures so they can do their actual jobs.
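To make the "delegation as a first-class concept" idea concrete, here's a minimal sketch of what it could look like in code: a signed capability token scoped to one action, with a spend limit and an expiry, handed to a delegate instead of the owner's own credentials. All names here (the key, the `delegate`/`check` helpers, the "groceries" scope) are hypothetical illustration, not any real system's API.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"owner-signing-key"  # hypothetical: the owner's private signing key

def delegate(scope, limit, ttl_s):
    """Mint a capability token: one allowed action, a spend cap, an expiry."""
    claims = {"scope": scope, "limit": limit, "exp": time.time() + ttl_s}
    body = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(body).decode() + "." + sig

def check(token, scope, amount):
    """Verify signature, scope, limit, and expiry before honoring a request."""
    body_b64, sig = token.rsplit(".", 1)
    body = base64.urlsafe_b64decode(body_b64)
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # token was tampered with or forged
    claims = json.loads(body)
    return (claims["scope"] == scope
            and amount <= claims["limit"]
            and time.time() < claims["exp"])

# "Send the kid on a grocery run": groceries only, $20 cap, one hour.
token = delegate("groceries", 20, 3600)
```

The point of the sketch is that the delegate never learns the owner's credentials, and the token self-limits in exactly the dimensions real-life trust does: what, how much, and for how long.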
Comment by mikewarot 5 days ago
Giving $20 to an AI is far safer than giving it your credit card. The effects are limited to $20 of loss.
Comment by TeMPOraL 4 days ago
I.e. even if your mom handed you her credit card, she was still there in a car nearby (spatial proximity), and was waiting for you there (temporal limit), and she was your mom (persistent trust-based relationship), which is sufficient protection from the risk of you running away and spending her money on hookers.
(How you managed to buy cigarettes as a 15yo is beyond me - or maybe there were no age checks in 1970s yet?)
Coming back: in real life, we don't bother with restricting the access tool; everyone transiently grants far more access than needed and expects it not to be abused. Meanwhile, cybersecurity is mostly stuck in the mindset of passwords being your identity and being like underwear (change frequently, don't share), and the concept of delegation of authority doesn't exist outside some enterprise systems. Which is why, in the real world, everyone says "fuck it" and just shares passwords as needed.
Comment by Melatonic 5 days ago
Comment by TeMPOraL 5 days ago
But yes, me and my siblings would often do grocery runs for our mom, with her card in hands, and I also think nothing of handing my own card to my wife (who already knows the PIN), or mine or her siblings, or even some acquaintances, because I trust them.
The larger point (even larger than my previous comment) is that, contrary to what the cybersecurity community (and especially cryptocurrency aficionados) believes, the real world runs on trust. Trust is not a bug, it's a feature - an optimization that makes societies and civilizations scale. Trust has its own limits and structural complexities, its ebbs and flows, but it's absolutely vital, and systems that don't support it (or try to eliminate it) simply get worked around. Not out of spite, but out of necessity - otherwise nothing would ever get done.
Comment by iammjm 6 days ago
Comment by greatgib 6 days ago
Comment by onoesworkacct 6 days ago
Comment by keyle 6 days ago
Comment by mmooss 6 days ago
Another solution is to make software makers responsible and liable for the output of their products. It's long been a problem that there is little legal responsibility, but we shouldn't just accept it. If Ford makes exploding cars, they are liable. If OpenAI makes software that endangers people, it should be the same.
> Democratized access: Our goal is to make these tools as widely available as possible while preventing misuse. We design mechanisms which avoid arbitrarily deciding who gets access for legitimate use and who doesn’t. That means using clear, objective criteria and methods – such as strong KYC and identity verification – to guide who can access more advanced capabilities and automating these processes over time.
KYC isn't democratic and doesn't prevent arbitrary favoritism, it's the opposite: It's used to control people and to favor friends and exclude enemies.
Comment by sureMan6 6 days ago
That kind of thinking is exactly why LLMs are so censored: people think OAI should be liable if someone uses ChatGPT to commit cybercrimes.
How about this: cybercrimes are already illegal, so we just punish whoever uses the new tools to commit crimes instead of holding the toolmaker liable.
This gets complicated if LLMs enable children to commit complex crimes, but that's different from outright restricting the tool for everyone because someone might misuse it.
Comment by 0x3f 6 days ago
And once the wedge is in and the concept of maker responsibility is planted, it expands to people's pet issues, obviously.
The actual line of who gets punished just ends up at some equilibrium in the middle. Largely arbitrarily.
Comment by kaashif 6 days ago
If someone uses ChatGPT to create child porn or worse, to get help tracking down and meeting children, there is NO way in hell the public will accept "don't punish the toolmaker" as a principle.
Comment by marshray 6 days ago
Yes, pentesting tools, even automated ones, are often legal. But they commonly do run up against legal restrictions and risks. They're marketed very differently from ChatGPT.
Comment by luma 6 days ago
I don't see how OpenAI is Ford in your analogy as OpenAI didn't make the software that blew up.
Comment by Havoc 6 days ago
>partner with a limited set of organizations for more cyber-permissive models.
I get where they're going with this, but it's still rather hilarious that they had to get a corporate-speak expert to pull off the mental gymnastics needed for the announcement.
Comment by nullc 6 days ago
Comment by 2001zhaozhao 6 days ago
Comment by striking 6 days ago
Comment by zb3 6 days ago
Translation: we aim to make defensive capabilities available to the US and its vassals so they can protect critical infrastructure, while ensuring independent countries can't protect theirs against US attacks.
Fortunately, this plan will backfire - the model capability is exaggerated and these "safeguards" don't reliably work.
Comment by CompoundEyes 6 days ago
Comment by rishabhaiover 6 days ago
Comment by Phelinofist 6 days ago
Comment by realisticid 5 days ago
Comment by spacebacon 6 days ago
Comment by ACCount37 6 days ago
Second, it does not look relevant to the discussion in any way, shape, or form.
Comment by spacebacon 6 days ago
Comment by ACCount37 5 days ago
Further than most "AI psychosis" papers go, but still not in any way far.
And "makes these treasured black boxes irrelevant"?
With wild claims like this, either demo a generational improvement on a live model or GTFO.
Comment by spacebacon 5 days ago
Comment by spacebacon 5 days ago
Comment by ACCount37 6 days ago
ChatGPT 5.x just tries to deny everything remotely cybersecurity-related - to the point that it would at times rather deny vulnerabilities exist than go poke at them. Unless you get real creative with prompting and basically jailbreak it. And it was this bad BEFORE they started messing around with 5.4 access specifically.
And that was ChatGPT 5.4. A model that, by all metrics and all vibes, doesn't even have a decisive advantage over Opus 4.6 - which just does whatever the fuck you want out of the box.
What I'm most afraid of is that Anthropic will snort whatever it is that OpenAI is high on, and lock down Mythos the way OpenAI is locking down everything.
Comment by jruz 6 days ago
Comment by ACCount37 6 days ago
Comment by lebovic 6 days ago
Privacy concerns aside, the KYC process for OpenAI was self-serve and took about a minute.
Comment by jiggawatts 6 days ago
Pepperidge Farm remembers.
Comment by alephnerd 6 days ago
Plenty of AI for Cybersecurity companies use a mixture of models depending on iteration and testing, including OpenAI's.