AI on Australian travel company website sent tourists to nonexistent hot springs
Posted by breve 9 hours ago
Comments
Comment by 0xC0ncord 6 hours ago
To me this is the real takeaway for a lot of these uses of AI. You can put in practically zero effort and get a product. Then, when that product flops or even actively screws over your customers, just blame the AI!
No one is admitting it but AI is one of the easiest ways to shift blame. Companies have been doing this ever since they went digital. Ever heard of "a glitch in the system"? Well, now with AI you can have as many of those as you want, STILL never accept responsibility, and if you look to your left and right, everyone is doing it, and no one is paying the price.
Comment by benjedwards 4 hours ago
Treating AI models as autonomous minds lets companies shift responsibility for tech failures.
Comment by clarkmoody 3 hours ago
Comment by flakeoil 4 hours ago
Similar to what Facebook, Google, Twitter/X, Tiktok etc have been doing for a long time using the platform-excuse. "We are just a platform. We are not to blame for all this illegal or repugnant content. We do not have resources to remove it."
Comment by pjc50 5 hours ago
Comment by yojo 4 hours ago
> “We’re not a scam,” he continued. “We’re a married couple trying to do the right thing by people … We are legit, we are real people, we employ sales staff.”
> Australian Tours and Cruises told CNN Tuesday that “the online hate and damage to our business reputation has been absolutely soul-destroying.”
This might just be BS, but at face value, this is a mom-and-pop shop that screwed up playing the SEO game and is getting raked over the internet coals.
Your broader point about blame-washing stands though.
Comment by ambicapter 4 hours ago
Comment by scblock 3 hours ago
Comment by stuaxo 4 hours ago
Comment by vivzkestrel 2 hours ago
Comment by ehnto 6 hours ago
An on-the-nose example: if your CEO asked you for a report and you delivered fake data, do you think he would be satisfied with the excuse that the AI got it wrong? Customers are going to feel the same way. AI or human, you (the company, the employee) messed up.
Comment by caminante 5 hours ago
You're not already numb to data breaches and token $0.72 class action payouts that require additional paperwork to claim?
In this article, these people did zero confirmatory diligence and got an afternoon side trip out of it. There are worse outcomes.
Comment by add-sub-mul-div 5 hours ago
He was likely the one who ordered the use of the AI. He won't fire you for mistakes in using it because it's a step on the path towards obsoleting your position altogether or replacing you with fungible minimum wage labor to babysit the AI. These mistakes are an investment in that process.
He doesn't have to worry about consequences in the short term because all the other companies are making the same mistakes and customers are accepting the slop labor because they have no choice.
Comment by nicbou 5 hours ago
Comment by pjc50 6 hours ago
It is however fraud on the part of the travel company to advertise something that doesn't exist. Another form of externalized cost of AI.
Comment by buran77 6 hours ago
Just here to point out that from a legal perspective, fraud is deliberate deception.
In this case a tourist agency outsourced the creation of their marketing material to a company who used AI to produce it, with hallucinations. From the article it doesn't look like either of the two companies advertised the details knowing they're wrong, or had the intent to deceive.
Posting wrong details on a blog out of carelessness and without deliberate ill intention is no more fraud than using a wrong definition of fraud is fraud.
Comment by tantalor 5 hours ago
Everybody knows AI makes stuff up. It's common knowledge.
To omit that disclaimer, the author needs to take responsibility for fact checking anything they post.
Skipping that step, or leaving out the disclaimer, is not carelessness, it is willful misrepresentation.
Comment by buran77 3 hours ago
> Skipping that step, or leaving out the disclaimer, is not carelessness, it is willful misrepresentation.
Couldn't help but notice you gave some very convincing legal advice without any disclaimer that you are not a lawyer, a judge, or an expert on Australian law. By your own litmus test, that makes you a fraudster. The other mandatory components of fraud (knowledge, intention, damages) don't even apply here; you said so yourself.
Australian law isn't at all weird about this. Their definition (simplified) pivots on intentional deception, to obtain gains or to cause loss to others, knowing the outcome.
Comment by tantalor 3 hours ago
Comment by f33d5173 5 hours ago
Comment by buran77 3 hours ago
This is a matter of contract law between the two companies, but the people who randomly read an internet blog, took everything for granted, and more importantly didn't use that travel agency's services can't really claim fraud.
Just being wrong or making mistakes isn't fraud. Otherwise 99% of people saying something on the internet would be on the hook for damages again and again.
Comment by direwolf20 4 hours ago
Comment by Lerc 6 hours ago
I doubt they commissioned articles on things that don't exist. If you use AI to perform a task that someone has asked you to do, it should be your responsibility to ensure that it has actually done that thing properly.
Comment by alpinisme 6 hours ago
Comment by doodpants 5 hours ago
No, it worked as designed. Generative AI simply creates content of the type that you specify, but has no concept of truth or facts.
Comment by idopmstuff 3 hours ago
The design of it is based on the intention of the people creating it, not the actual outcome, and it's pretty clear from all available information, plus a general understanding of incentives, that it's designed to be as accurate as possible, even if it does make errors.
Comment by simianwords 3 hours ago
Comment by usefulcat 3 hours ago
Comment by simianwords 2 hours ago
Comment by usefulcat 2 hours ago
ETA:
To elaborate a bit: based on your response, it seems like you don't think my question is a valid one.
If you don't think it's a valid question, I'm curious to know why not.
If you do think it's a valid question, I'm curious to know your answer.
Comment by simianwords 43 minutes ago
Comment by merelysounds 4 hours ago
> Yeah—roughly, from general local knowledge (no web searching, promise ). I’ll flag where my memory might be fuzzy.
> Weldborough Hot Springs are in northeast Tasmania, near Weldborough Pass on the Tasman Highway (A3) between Scottsdale and St Helens.
Screenshot with more: https://postimg.cc/14TqgfN4
Comment by sh3rl0ck 2 hours ago
Comment by voidUpdate 5 hours ago
Comment by lm28469 4 hours ago
You'll still get an AI-generated answer at the top, followed by 3 AI-generated sponsored blog scams, etc.
Comment by zwog 5 hours ago
Comment by nicbou 4 hours ago
If you actually take pride in your work, it's a double whammy of competing with AI slop and losing over half of your traffic to AI summaries.
Useful independent websites are so cooked.
Comment by verytrivial 4 hours ago
There needs to be a more meta, layered approach to reasoning. Different personalities viewing the output with different hats on: "That's a bold claim, champ. Search required." But I guess the current real-time, interactive nature of these systems makes that difficult to justify.
Comment by mettamage 3 hours ago
Not with the current state of technology. I haven't seen that it works yet. It requires supervision.
It's funny, back in the day computer calculations were checked with human computers. But now? Just trust it bro.
Comment by metalman 5 hours ago
Comment by testing22321 4 hours ago
Comment by jmyeet 4 hours ago
At the end of the day, LLMs are a statistical approximation or projection.
A good example of this is how LLMs struggle with multiplication, particularly multiplication of large numbers. It's not just that they make mistakes but the nature of the results.
Tell ChatGPT to multiply 129348723423 and 2987892342424 and it'll probably get it wrong because nowhere on Reddit is that exact question for it to copy. But what's interesting is it'll tend to get the first and last digits correct (more often than not) while the middle is just noise.
Someone will probably say "this is a solved problem" because somebody, somewhere has added this capability to a given LLM, but I think these kinds of edge cases will constantly expose the fundamental limits of transformers, just like the famous "how many r's in strawberry?" example that did the rounds.
All this comes up when you tell LLMs to write legal briefs. They completely make up a precedent because they learn what a precedent looks like and generate something similar. Lawyers have been caught submitting fake precedents in court filings due to this.
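(For contrast: the exact product of the two numbers in the comment above is trivial for conventional software, since Python integers are arbitrary-precision. A quick sketch of the check, using only the numbers already quoted:)

```python
# Exact multiplication with Python's arbitrary-precision integers:
# no statistical approximation, every digit is correct by construction.
a = 129348723423
b = 2987892342424
product = a * b

print(product)

# Sanity checks an LLM's answer would have to pass:
assert product // b == a   # division recovers the original factor
assert product % b == 0    # no remainder
```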
Comment by simianwords 3 hours ago
People have no idea how capable LLMs are and confidently write these kinds of things.
Comment by jmyeet 3 hours ago
[1]: https://arxiv.org/html/2505.15623v1
[2]: https://medium.com/@adnanmasood/why-large-language-models-st...
[3]: https://www.reachcapital.com/resources/thought-leadership/wh...
[4]: https://mathoverflow.net/questions/502120/examples-for-the-u...
Comment by simianwords 2 hours ago
Take 10,000 such multiplications. I'm sure not even a single one would be incorrect with GPT 5.2 (thinking). Want a wager?
Comment by ceejayoz 3 hours ago
ChatGPT appears to get this correct.
Comment by nephihaha 8 hours ago
Comment by NedF 6 hours ago
Comment by re-thc 6 hours ago
Seems par for the course.