Scan your website to see how ready it is for AI agents
Posted by WesSouza 13 hours ago
Comments
Comment by Mordisquitos 11 hours ago
Also AI industry: "Please make sure your website is adapted so that AI agents are able to use it."
Comment by shimman 10 hours ago
Comment by dpkirchner 10 hours ago
Comment by cj 8 hours ago
No one does SEO because they're trying to help Google.
You do it because you're trying to help the people using google. (Edit: or trying to make money by driving traffic for ads)
Whether or not companies spend time on AEO is directly tied to whether LLM/agents/AI/etc end up becoming a lead channel that buyers use to research products to buy.
Comment by themafia 8 hours ago
Who are all _super_ interested in "Top 10 Ways to make a summer Mojito."
Comment by i_love_retros 8 hours ago
Haha, no, people do it to try and get ranked higher and thus make more money. They're not trying to help anyone.
Comment by staticshock 7 hours ago
Comment by ryandrake 6 hours ago
Comment by staticshock 2 hours ago
Comment by zombot 9 hours ago
Comment by Bombthecat 6 hours ago
Thank you
Comment by pickleglitch 12 hours ago
Comment by tkmcc 9 hours ago
Comment by Bender 7 hours ago
ip route add blackhole "${CIDR}" 2>/dev/null
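Extended to whole list files, that one-liner might look something like this sketch (the filenames and the one-CIDR-per-line format are my assumptions; needs root to run for real):

```shell
# Sketch only: blackhole every prefix listed in per-provider CIDR files.
# Assumes one CIDR per line; duplicate-route errors are silenced with
# 2>/dev/null, matching the one-liner above.
blackhole_clouds() {
  for f in "$@"; do                  # e.g. aws.txt gcp.txt azure.txt
    while IFS= read -r CIDR; do
      [ -n "$CIDR" ] && ip route add blackhole "${CIDR}" 2>/dev/null
    done < "$f"
  done
}
```

Calling it as `blackhole_clouds aws.txt gcp.txt azure.txt` (with lists pulled from each provider's published IP ranges) drops the traffic at the routing layer, before it ever reaches the web server.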
Do a for loop reading through files of all the cloud IP CIDR blocks and that will curtail all the AI, search engine bots and more.
Comment by solenoid0937 6 hours ago
Comment by Bender 6 hours ago
I could totally imagine Joe Rogan saying, "Hey Jamie, what was that site? Oh yeah go to ai dash sucks dash bfdd dot newsdump dot org to get your copy of an SSH banner today."
I've had traffic sent to me long ago from paying into Google's program but it was mostly bots. This was in the 2003-2009 time-frame. I imagine by now it's not much better.
Comment by jamiek88 5 hours ago
Comment by celso 11 hours ago
Comment by pickleglitch 10 hours ago
Comment by dormento 9 hours ago
I woke up with such a bad feeling..
Comment by KetoManx64 7 hours ago
Comment by accrual 5 hours ago
Comment by gosub100 10 hours ago
Comment by ablob 10 hours ago
Comment by snailmailman 9 hours ago
The web is becoming more and more unusable every day. If your data is easy to access, it gets stolen and scraped, your site effectively DDOSed. If your site is hard to access nobody will visit.
Comment by throw-the-towel 9 hours ago
Comment by a34729t 7 hours ago
Comment by KetoManx64 7 hours ago
Comment by ryandrake 6 hours ago
Comment by saintfire 9 hours ago
The latency while browsing the web these days is brutal as a result, between Anubis and Cloudflare and the like.
Our prize for it will be the impending super intelligence our benevolent future overlords allow us to exploit, I suppose. /s
Comment by oynqr 5 hours ago
Comment by cyanydeez 6 hours ago
So perhaps you can point your LLM at this and ask it to invert the rules and make sure user design remains consistent.
Comment by stackghost 7 hours ago
Comment by Urgo 12 hours ago
403 Forbidden
error code: 1106
The site is blocking our scanner. This may be due to WAF rules, bot detection, or IP-based restrictions.
Perfect :)
Comment by dawnerd 12 hours ago
Comment by progbits 11 hours ago
Comment by ChrisArchitect 9 hours ago
Comment by Alifatisk 10 hours ago
Comment by kitsune1 11 hours ago
Comment by xnx 11 hours ago
Comment by hombre_fatal 11 hours ago
I published a free macOS app three years ago to the app store and abandoned it. Over the last six months I received multiple emails per week from people asking where they can find it since it only shows up on the app store for older macOS.
I finally asked people how they found out about my app, and 100% of the time it was because they asked ChatGPT how to do something and it found my crappy website.
I had also written aspirational but nonexistent features on my website at the time (like a personal TODO), and ChatGPT told people my app had this feature they wanted.
So I took the time to put a 2.0 release together years later.
There's clearly a lot of power here, like how you can make claims on your website that LLM agents take at face value. It's like keyword stuffing all over again since LLMs are not hardened against it.
For ecommerce it's even more obvious. I asked an LLM why it thought Product A was better than Product B and it clearly just regurgitated a paragraph from Product A's website about how it's better than Product B. We've all probably hit this with Google Search's AI summary where it's regurgitating some nonsense someone wrote in a blog post or reddit comment.
Comment by cyanydeez 6 hours ago
There's no evidence that agent traffic follows the same pathway.
Comment by ToucanLoucan 10 hours ago
* You describe your website as "crappy" yet ChatGPT was able to figure it out enough to get you traffic for an app you didn't maintain
* ... with the caveat that it presented made-up, theoretical features as actual ones
So unless your website was "GEO"d by sheer accident, I really don't think this is a good example to cite as the demonstration of what you're saying.
Comment by hombre_fatal 6 hours ago
It doesn't mean you can't deliberately game the bot. It means you can analyze how and then replicate it (aka SEO).
If I can unintentionally sway the LLM agent, then I can figure out how and do it intentionally (aka GEO).
Either way, if you've used LLMs, then you know it's trivially possible to sway them. That's the only proposition you need to accept for GEO to be possible. Though it's far worse than possible: I'm sure it's widespread and ubiquitous.
Comment by tehjoker 11 hours ago
Comment by xnx 10 hours ago
For 30 years marketers have been doing everything they can to avoid making sites useful for people, despite that being what Google rewarded from the start (e.g. relevant link text, page titles, and headings).
Comment by 11101010010001 10 hours ago
Comment by snailmailman 9 hours ago
I searched for a specific niche product the other day. Second result down was AI blogspam “what to buy now that product X has been discontinued. We reviewed these 9 alternatives now that the company shut down.”
The company didn’t shut down. The 9 alternatives were the same product by the same company in different sizes and quantity counts. How kind of them to hallucinate so many glowing reviews for me after they hallucinated a problem into existence first.
At least the search engine can summarize all the slop for me. It even cites sources! The sources directly contradict the summary almost every time, but why would you click through?
Comment by leros 13 hours ago
I've redesigned my site to have enough content so that AI knows what I have but they have to send the user to my site to use an interactive JavaScript widget to get the final answer they need. So far so good, but not sure how long that will work for.
Comment by sroussey 11 hours ago
Comment by leros 11 hours ago
I can tell they're not using it because the page is getting hit by their user agents but my API is not.
Comment by subscribed 10 hours ago
Your site, your choices.
But also: hostile design? My choice.
Comment by leros 9 hours ago
Comment by zb3 11 hours ago
So:
- are you certain this "revenue" doesn't come from ads promoting scams? or you simply don't care?
- what do you think about LLMs "licensing" the content so you get royalties instead of putting these artificial obstacles?
Comment by leros 11 hours ago
Comment by throwaway290 10 hours ago
which LLMs are doing this?
Comment by indigodaddy 12 hours ago
Comment by carlosjobim 11 hours ago
Comment by Mordisquitos 11 hours ago
Comment by unsungNovelty 12 hours ago
Comment by CPLX 11 hours ago
Why do you have a website in the first place?
Comment by Mordisquitos 11 hours ago
Comment by CPLX 5 hours ago
If I tell Claude to go search the web and find me a bunch of links to the websites of restaurants in my neighborhood because I want to try something new do you think the restaurant wants to be on that list?
Comment by burntpineapple 11 hours ago
Comment by Mordisquitos 10 hours ago
Comment by ac29 7 hours ago
We need to meet the customer where they are and that means making our site more accessible to search engines, mobile devices, LLMs, or whatever comes next.
Comment by PenguinCoder 10 hours ago
Comment by themafia 8 hours ago
Comment by rgilton 13 hours ago
(Hint: no)
Comment by subscribed 9 hours ago
If the "AI" I was talking with couldn't see your offer, it naturally didn't exist for me in the assessment and choice phase I then did.
So I don't think it's universally a "no". Like it or not, LLMs are useful.
Comment by Mordisquitos 9 hours ago
Comment by CrzyLngPwd 9 hours ago
We couldn't scan this site isitagentready.com returned 522 <none>
The site appears to be experiencing server errors. This is not an agent-readiness issue. Try scanning again later.
Oops.
Comment by ajesus 6 hours ago
Comment by cdrnsf 12 hours ago
Comment by bob1029 12 hours ago
How much CPU time an average request takes is probably the most important factor in the real world. No one running a frontier AI lab is going to honor any of the metadata described here.
Comment by xg15 13 hours ago
It will hit exactly the same walls too, namely that the technical details are completely irrelevant - if adopting a standard is actually a negative for websites, because it will separate the site from its users, sites will obviously not do it.
You can lead the horse to water but you cannot make it drink, especially if the water is obvious poison.
Comment by embedding-shape 12 hours ago
Not that I believe this will be how the future turns out, but what if the main users of websites end up being agents? Then adopting the standard ends up being a requirement for survival instead of something negative.
Hopefully and ideally we don't end up there, because then the internet will surely suck for us humans. But I'm not so sure the whole "make platforms/websites open up for the machines" push will necessarily fail yet again for the same reasons; it can very well be different this time.
Comment by cyanydeez 6 hours ago
Comment by c7b 12 hours ago
Comment by bigfishrunning 11 hours ago
Comment by c7b 11 hours ago
Comment by themafia 8 hours ago
I love it when the people who just want to use technology to benefit humanity as a whole are dimly regarded as "starry-eyed idealists."
> because it will separate the site from its users, sites will obviously not do it.
Sites don't generate their own users. Users must discover sites. This allows a third party to dictate terms to them. Which we already know happens.
> especially if the water is obvious poison.
Alcohol exists. I think you might want to put away the "perfectly rational" assumptions about humanity.
Comment by XCSme 13 hours ago
We couldn't scan this site isitagentready.com returned 522 <none>
The site appears to be experiencing server errors. This is not an agent-readiness issue. Try scanning again later.
Comment by ajesus 6 hours ago
Comment by firefoxd 13 hours ago
Comment by swingboy 13 hours ago
Comment by egypturnash 11 hours ago
Comment by Boss0565 5 hours ago
Comment by throwaway290 10 hours ago
Comment by p4bl0 13 hours ago
[1] https://www.w3.org/community/reports/tdmrep/CG-FINAL-tdmrep-...
Comment by fabiensanglard 13 hours ago
Comment by sodapopcan 12 hours ago
Comment by frizlab 12 hours ago
Comment by bikelang 13 hours ago
Comment by bookofjoe 9 hours ago
Comment by p4bl0 13 hours ago
Comment by acedTrex 13 hours ago
Comment by Manfred 12 hours ago
This has always struck me as an example of the pinnacle of collective investment delusion that seems to exist in certain circles: the idea that you can shape the world to your product instead of improving the world with your product. You just have to try hard enough.
Comment by billfor 9 hours ago
Comment by WesSouza 13 hours ago
Good.
Comment by daft_pink 13 hours ago
I’m not really interested in my website being AI-ready, but it’s particularly fascinating to me that they are suggesting an interface for AI agents to make payments to secure access to an API.
Generally, when I want to pay for an API, it would be really wonderful to be able to just direct an AI to set up the account and get me some credentials.
Comment by loloquwowndueo 9 hours ago
A lot of the misses are for stuff a blog doesn’t need, like MCP or API catalogs. It’s a damn blog, I have no API. Unless the RSS feed counts.
Comment by deathanatos 9 hours ago
> isitagentready.com returned 522 <none>
Ironic perfection.
Comment by postalcoder 13 hours ago
Comment by embedding-shape 13 hours ago
Comment by frizlab 12 hours ago
Comment by Hamuko 11 hours ago
The announcement is so full of AI shit that I'm not even going to consider it as a competitor.
Comment by _verandaguy 13 hours ago
I have reduced my online presence to much less than it once was partly because I don't want to feed this machine training data that I've worked hard to make for a human audience.
Comment by gwerbin 13 hours ago
Comment by embedding-shape 12 hours ago
Comment by gwerbin 3 hours ago
Comment by jacquesm 12 hours ago
Comment by _verandaguy 11 hours ago
Like... yeah, no shit; I didn't build it for your regex. It's not the target audience.
Plus, isn't the appeal of LLMs broadly that they can do somewhat-useful things with mostly-arbitrary input (if you ignore the risk of prompt injection)?
Comment by gwerbin 3 hours ago
They can definitely read HTML, but they do better with more structure. I proposed in a sibling comment for example that the "reader mode" feature in browsers might be a great LLM-compatibility feature to reduce all the HTML token noise. Or exposing an HTTP API with an OpenAPI schema and a proper sitemap and an RSS feed. For example fetching from an RSS feed can be exposed to the LLM as a "tool" that it can call.
Comment by bradleyankrom 12 hours ago
Comment by _verandaguy 10 hours ago
Though this is undermined somewhat by stories like this one[0], where an AI runs a "slow life" store catering to a lifestyle that specifically tries to disconnect from technology.
It's incredibly perverse.
[0] https://andonlabs.com/blog/andon-market-launch
Comment by Bender 9 hours ago
We couldn't establish a connection to this site. Check that the URL is correct and the site is online.
Working as designed. Would have to come from a non-cloud IP.
Comment by thunderfork 12 hours ago
"Now, make sure your websites are rigorously structured in such a way that allows the technology to work..."
Comment by jsharkey 13 hours ago
Comment by zombot 12 hours ago
Comment by nicbou 13 hours ago
Comment by LocalH 9 hours ago
Comment by remywang 13 hours ago
Comment by k4rli 13 hours ago
Comment by cousin_it 13 hours ago
Unless of course you want to expose some functionality only to AIs, not humans. Then sure. But why would you want to do that?
Comment by fhd2 13 hours ago
Comment by binaryturtle 12 hours ago
Comment by fragmede 12 hours ago
Comment by bhaney 13 hours ago
Comment by embedding-shape 13 hours ago
Seems like this belongs squarely in the fun and ever-growing collection of "Cloudflare throws vibe-slop into the world and see what sticks".
Comment by davidedicillo 11 hours ago
Comment by totalwebtool 9 hours ago
Comment by danlitt 12 hours ago
Comment by Hamuko 13 hours ago
Comment by ndiddy 13 hours ago
Comment by deckar01 12 hours ago
Comment by greenavocado 13 hours ago
Comment by ErroneousBosh 7 hours ago
Does anything legitimate use this?
If I see a request for my page as markdown, does that mean an AI scraper is poking at it? Sounds like a good time to return a zipbomb.
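On the zipbomb idea: the usual trick is to pre-compress a huge run of zeros and serve it with `Content-Encoding: gzip`, so it's the scraper's client that pays the inflation cost. A sketch (sizes chosen arbitrarily; wiring it into the web server's config is left out):

```shell
# Sketch: build a roughly 1 MB gzip file that inflates to 1 GB of zeros.
# A server could return this with "Content-Encoding: gzip" to requests
# that look like markdown-hungry scrapers.
dd if=/dev/zero bs=1M count=1024 2>/dev/null | gzip -9 > bomb.gz
```

Since gzip compresses a constant byte stream at roughly 1000:1, the file on disk stays tiny while any client that naively decompresses the response ends up allocating the full gigabyte.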
Comment by stackghost 7 hours ago
Comment by ge96 11 hours ago
Comment by doublerabbit 10 hours ago
Comment by fragmede 11 hours ago
Comment by fnoef 13 hours ago
What the F is going on? Has the world gone mad or something?
Comment by gwerbin 13 hours ago
Yes, it's madness but it doesn't matter that it's mad because you can't stop it. It's a technological gold rush, with all of the mixed connotations that "gold rush" should imply.
Comment by SunshineTheCat 12 hours ago
We are, after all, talking about some metadata here you are more than welcome to leave off your site.
Comment by lpcvoid 12 hours ago
Comment by reaperducer 13 hours ago
What the F is going on? Has the world gone mad or something?
E-something
I-something
Cyber-something
Crypto-something
AI-something
This, too, will pass. Like Blackberries and car bras.
Comment by fragmede 11 hours ago
Comment by dwb 12 hours ago
Comment by sync 13 hours ago
Comment by pgporada 12 hours ago
Comment by bookofjoe 8 hours ago
Comment by zombot 12 hours ago
Short answer: Yes.
Although it's not the world proper, but a very loud and well-paid cohort of shills, astroturfers and spin doctors. Plus the occasional useful idiot and me-too hitchhikers, no doubt.
Comment by giancarlostoro 13 hours ago
Comment by droidjj 12 hours ago
Comment by ChrisArchitect 9 hours ago
Comment by gegtik 12 hours ago
Comment by julienreszka 12 hours ago
Fix: Implement the WebMCP API by calling navigator.modelContext.provideContext()
but I already do that. the extension detects them https://chromewebstore.google.com/detail/webmcp-model-contex...
Comment by celso 11 hours ago
Comment by julienreszka 8 hours ago
Comment by julienreszka 10 hours ago
Comment by ajesus 6 hours ago
Comment by i_love_retros 8 hours ago