We gave an AI a 3 year retail lease and asked it to make a profit
Posted by lukaspetersson 23 hours ago
Comments
Comment by class3shock 21 hours ago
"We’re doing this because we believe this future is coming regardless, and we’d rather be the ones running it first while monitoring every interaction, analyzing the traces, benchmarking how much autonomy an AI can responsibly hold."
I always enjoy how these AI companies try to take a moral high ground. When someone doesn't want something to be the future, their instinct usually isn't to try to be the first person doing that exact thing. If you don't want this to be the future, then why don't you spend your time building a future you do want? Support people who want more AI regulation to stop this? Literally anything else.
Just be honest: you think this is the future and you do in fact want to be first doing it, to be in a position to make a lot of money. Do you think people don't know what an ad is when they see one?
Comment by Lammy 18 hours ago
“It only remains to point out that in many cases a person’s way of earning a living is also a surrogate activity. Not a PURE surrogate activity, since part of the motive for the activity is to gain the physical necessities and (for some people) social status and the luxuries that advertising makes them want. But many people put into their work far more effort than is necessary to earn whatever money and status they require, and this extra effort constitutes a surrogate activity. This extra effort, together with the emotional investment that accompanies it, is one of the most potent forces acting toward the continual development and perfecting of the system, with negative consequences for individual freedom.”
-- Industrial Society and Its Future (1995)
Comment by beloch 20 hours ago
For the guys in this story, my translation is, "We were totally fine with making money with no effort, because F paying more employees than we need to. This social media campaign is our backup plan to ensure we get some press and attention out of it even if it fails. We'd totally be cool with making a lot of money though. Please visit our quirky AI shop and buy our stuff."
Comment by Barbing 19 hours ago
This is going through some people’s minds the more pushback grows (see Altman molotov, Maine data center moratorium)
Comment by HumblyTossed 19 hours ago
Comment by hn_acc1 18 hours ago
Comment by Barbing 18 hours ago
"where union" in short.
Perhaps the concept is too foreign for white collars, or on average folks think they'll be OK and it's the juniors who'll go... maybe too focused on immediate needs... a belief unionization is the wrong response... (and I'm not advocating for anything in particular btw)
Comment by ben_w 4 hours ago
A union has the power to organise one thing, to withdraw labour. In the industrial era, the threat of all the workers not showing up was a threat to end a business.
If AI does what is promised, to replace labour, then a threat to withdraw labour is only threatening the owners with a good time.
Comment by Jensson 7 hours ago
Comment by i_think_so 11 hours ago
Nick Hanauer understood this fourteen years ago. Very few others did. And despite him spending his own time and money to explain it in simple English, nobody in his peer group wanted to hear it -- his TED talk on the subject ... took several years before it was published. Just a coincidence, I'm sure.
FA (for a decade or so) FO, I guess?
https://www.ted.com/talks/nick_hanauer_beware_fellow_plutocr...
https://www.politico.com/magazine/story/2014/06/the-pitchfor...
https://www.youtube.com/watch?v=q2gO4DKVpa8
…ns than humans, and more potentially unemployed white collar workers than the police, military and national guard combined.
Comment by Barbing 6 hours ago
Comment by pydry 18 hours ago
Just like they convinced the younger generation that "boomers" stole their future.
Comment by topheroo 18 hours ago
Comment by balls187 18 hours ago
To me, it seemed like a modern day tech-take of human cock-fighting.
Comment by bsder 16 hours ago
The problem is with adolescents taking them. Adolescent boys see a really nice immediate payoff for taking PEDs (better musculature and better sports performance -> more popular) while the downsides are in the future. It's really hard to fight that.
Even when I was in high school several decades ago, we had a handful of people on PEDs. And we were a tiny school with no significant sports programs. I can't imagine what it's like now with social media pushing everything.
Comment by rafaelmn 17 hours ago
Not saying we should be promoting them, but if we can eventually get to the point where we eliminate the really bad side effects and get most of the benefits it's going to be a great thing for everyone, the next thing after GLP-1.
Comment by mock-possum 19 hours ago
Strikes me as a repulsively mean-spirited take, ironically proving the artist’s point.
Comment by mjmsmith 18 hours ago
Comment by beloch 18 hours ago
Comment by Waterluvian 21 hours ago
Comment by mountainb 20 hours ago
Comment by edm0nd 19 hours ago
- create value because the windows have to be replaced and employees are paid for their labor in doing that
- destroy value because they -1 inventory each time a window is broken
Comment by lbreakjai 19 hours ago
https://en.wikipedia.org/wiki/Parable_of_the_broken_window
The fallacy is to think value was created by buying someone's labour to fix the window. This is value that's been displaced from something productive to something unproductive.
Instead of going from 0 to 1 (invest the money and create value), you went from -1 to 0 (spend money to fix the window to get back to where you were) and, overall, the value of a perfectly good window got lost.
Comment by i_think_so 11 hours ago
In other words, everybody but economists and certain philosophers. :-)
Comment by jagged-chisel 19 hours ago
Comment by evan_ 19 hours ago
Comment by jagged-chisel 17 hours ago
Indeed, the capitalist’s creed!
Comment by Barbing 19 hours ago
-crowded theater (negative value example)
Words can be pretty much actions depending on who you are https://en.wikipedia.org/wiki/Will_no_one_rid_me_of_this_tur...
Comment by bryanrasmussen 20 hours ago
well, yeah that is the world the AI guys want...
Comment by Apocryphon 20 hours ago
Comment by hn_acc1 18 hours ago
Comment by dugidugout 18 hours ago
Comment by gobdovan 19 hours ago
Comment by anon84873628 21 hours ago
Pickaxes and shovels and whatnot.
Comment by Mordisquitos 21 hours ago
We’re doing this because we believe this future is coming regardless, and we’d rather be the ones running the Torment Nexus.”
Comment by astrange 19 hours ago
Comment by mesofile 19 hours ago
Comment by frm88 6 hours ago
Comment by tsunagatta 17 hours ago
Comment by jmcgough 11 hours ago
Comment by ben_w 20 hours ago
Lots of people write wills, doesn't mean they're looking forward to dying or think they can do much about it. Heck, a lot of people don't even watch their diet and do exercise to maximise quality of life and life expectancy.
* I think that by the time AI is good enough to run a retail store, there's a decent chance there won't be any retail stores left anyway. It's like looking at Henry Ford's production line factories and thinking "wow, let's apply this to horse-drawn carriages!"
Comment by notahacker 20 hours ago
Comment by elif 20 hours ago
Comment by jdlshore 19 hours ago
Comment by b2w 19 hours ago
I see these kids come on deck and enter the water, and it's hard not to notice that their development is behind that of their peers who went to a swim club with a proper learn-to-swim, thrive-in-the-water approach, as opposed to just a survival mentality. They are the most watched in case something happens.
So yeah, don't just throw 'em in.
Comment by tayo42 18 hours ago
2 year olds are behind already?
Comment by teo_zero 9 hours ago
Comment by HPsquared 20 hours ago
Comment by pajamasam 19 hours ago
Comment by andy99 18 hours ago
The more typical AI foundation model company claim of “it’s so dangerous only we and people that pay us enough should have access” is what I think is BS.
I don’t see anything wrong with trying to understand something, which is what this seems to be about. I also don’t see anything wrong with an AI-operated store generally, and it of course makes sense, and is valuable, to learn about the limitations.
Comment by Anon34234235 18 hours ago
Comment by jonas21 20 hours ago
How are you supposed to know what sort of regulation is needed if you don't even know what the issues are yet? Similarly, won't it be much easier to make the case for regulation if you can point to results of experiments like this one instead of just hypotheticals?
Comment by insane_dreamer 20 hours ago
Comment by scotty79 20 hours ago
Comment by orochimaaru 19 hours ago
Comment by BrenBarn 8 hours ago
I would go further and say that there is just no such thing as "this future is coming regardless" once you get out of the realm of physical facts. One of the things that by turns depresses and enrages me about so much punditry (especially in tech) is this notion that there is some sort of inevitable socio-techno-psychological force propelling human society in certain directions regardless of the will of actual humans.
Nonsense. We as humans make our society; it is nothing but what we make of it; we can make it what we want.
As you point out, people who say otherwise are usually really saying "too bad for you who don't want the future to be this way, because I do want it to be this way and I'm working to make it happen".
Comment by yowlingcat 17 hours ago
I am about to go on a long rant, but there is so much money sloshing around the capital allocation machine going towards a vision of the AI-managed and -optimized future that the propaganda machine for these rose-colored delusions must work overtime. What disappoints me is the question of where the heck the bears are. Did they all go into hibernation 5 years ago when QE gave the retail kindergartener a handgun to pump low-quality tickers to the moon? Have we just societally accepted that everything should be a hyperreal version of sports gambling now and the world is, and ought to be, an efficient market of hyperstition?
I may be old and grumpy saying this, but this all sounds dumb and corny. I would like some of the very capable traders who make money repricing mispriced assets to find a way to make money deflating this bubble and bringing this environment back to sanity. And I say this as someone who likes the capabilities of AI but continues to see it do little to none of the hard work of solving the incompressible problems that continue to create and retain enterprise value.
To get off my soapbox for a second and get back to your quoted passage -- what they're really saying is "We are working very hard to make this future come about, and we think so little of your intelligence that we believe you'll fall for the fear tactic of believing it's inevitable, ignoring the fact that it won't happen without someone's hands. And in this case, it is very much our hands, which are incentivized not just to do it but to do it so well that we ensure we do everything possible to make this happen. Part of which means persuading you that it is guaranteed to succeed. If we ever let the honest truth slip that what we're proposing is extremely hard to pull off with pure AI and that we're just another commercial real estate investor like anyone else, the jig is up."
That's what every single one of these kinds of hypocritical navel gazing faux-concern proclamations amount to for me. Astroturf.
Comment by cyanydeez 19 hours ago
Comment by dfhvneoieno 21 hours ago
Comment by fl4ppyb3ngt 21 hours ago
Comment by sdenton4 21 hours ago
Comment by jmcgough 19 hours ago
Comment by Quarrelsome 20 hours ago
Comment by akdev1l 17 hours ago
I do not.
Comment by Quarrelsome 3 hours ago
I feel more comfortable that the people exploring seem to have their head screwed on and don't appear to be dismissive of the harm they might cause.
Comment by Xx_crazy420_xX 6 days ago
Comment by ethin 21 hours ago
Comment by vannevar 6 days ago
Comment by fl4ppyb3ngt 21 hours ago
Comment by Mordisquitos 17 hours ago
Comment by phreeza 19 hours ago
Comment by Mordisquitos 17 hours ago
Comment by pavel_lishin 2 days ago
I'm not sure what sort of labor regulations exist in San Francisco, but presumably they can be fired as easily by an AI as a real person, right? If Luna decides to fire them, and it can do so, then their livelihood does rather depend on an AI's judgement alone.
Unless of course all of its decisions are vetted by humans - as they should be - which makes this experiment a lot weaker than they're saying it is.
Comment by altruios 21 hours ago
Comment by pessimizer 21 hours ago
Comment by notahacker 19 hours ago
Comment by sodality2 21 hours ago
Comment by jayd16 22 hours ago
I don't think we need to have real human risk to get results from the experiment.
Comment by fl4ppyb3ngt 20 hours ago
Comment by anon84873628 21 hours ago
Comment by yieldcrv 20 hours ago
Comment by john_strinlai 18 hours ago
Comment by jaxefayo 22 hours ago
“John and Jill are not at risk. This is a controlled experiment and everyone working at Andon Market is formally employed by Andon Labs, with guaranteed pay, fair wages, and full legal protections. No one’s livelihood depends on an AI’s judgment alone.”
which was refreshing to read.
Comment by evanelias 19 hours ago
Personally I find the entire tone of the article to be creepy and disturbing.
Comment by i_think_so 11 hours ago
There was a scifi story about a guy who gradually falls through the cracks of a dystopian future society in which McDonalds managers are replaced by AI that talks to workers through their headsets.
At first it's quite benign, like: "Hello, John. In 5 minutes it will be time to inspect the washrooms and perform any necessary cleaning."
Before long it's firing people who don't smile enough and don't have the correct attitude.
(Of course, to keep readers from becoming despondent and killing themselves, the story takes a hard left turn towards a post-scarcity economy and everyone lives happily ever after. But when one reflects on it at the end, 90% of humanity doesn't have that post-scarcity life. And those who get left behind are far from content with their futures....)
Comment by hamdingers 21 hours ago
Comment by HWR_14 20 hours ago
Comment by ceejayoz 23 hours ago
Comment by compiler-guy 22 hours ago
I doubt the experiment is set up that way, but that would be an ethical way to do it.
Comment by joe_the_user 20 hours ago
That doesn't mean the AI couldn't be the decision maker for the legal entity that's hiring these people.
But the thing is that if this startup is telling these people they are employees of this company, not "Luna", it would give these people the impression that all their interactions with the AI are kind of a sham, a game, not to be taken seriously and they are basically being paid to role-play as "Luna's employees".
And this is kind of where such experiments are likely to go. Another user mentioned that it would be useful to discover the kinds of inputs and outputs the machine has. A human boss could manage a store with just phone calls and a camera, but I overall get the vague impression Luna doesn't have anything like that sort of ability, though really we just aren't given the information for any accurate determination.
Comment by gizajob 5 hours ago
Comment by fl4ppyb3ngt 20 hours ago
Comment by bfeynman 21 hours ago
Comment by insane_dreamer 19 hours ago
A human can be in the loop if the human is exactly executing the orders of the AI. It's still the AI making all the decisions, which is the purpose of the experiment - not to see whether agents can handle every interaction necessary to run a business (pick up the phone and place orders, etc.). That's also why Luna hired humans.
Comment by bfeynman 19 hours ago
Comment by insane_dreamer 18 hours ago
If the experiment is to see how the AI behaves on its own, then of course it needs to know the outcomes of its decisions (either automatically, or fed to it by a human), which of course influence its next decisions. This is providing the AI with retained memory, which is essential to the experiment. It's similar to an AI writing code which it then runs and parses the logs to see the outcome and make improvements to it. (It is not _retrained_ on those outcomes, and neither is that the case here; but it can reference them in stored memory.)
Comment by bfeynman 17 hours ago
Comment by j2kun 20 hours ago
Comment by antonvs 18 hours ago
Comment by kryogen1c 20 hours ago
This company now has a strong negative reputation in my mind that I will gladly share with others.
Comment by themafia 20 hours ago
Comment by graybeardhacker 19 hours ago
Comment by franga2000 1 hour ago
Even after reading the answer, I'm not entirely sure. A handful of specific books, "artisan" snacks and...candles? All in this stupid minimalist hipster high-concept style and almost certainly at an unreasonably high markup. Completely soul-less, but with a "deep" backstory written by an even more soul-less marketing drone (literally this time!).
To put it differently, it's an over-hyped questionably-profitable "business" selling things nobody needs to people who can't see through the marketing copy because "it's the next big thing, everyone else has it".
An honestly excellent metaphor for the entire AI industry!
Comment by binarynate 21 hours ago
Comment by mrweasel 20 hours ago
Maybe that's for later, if this works out, but I'd love to see the AI attempt to run a moderately successful business in a borderline dysfunctional town in the Midwest. If you don't technically need to pay "the CEO" a salary, could you run e.g. a grocery store in a dying town? For one, this would really test the AI's creativity, and it would perhaps tell us whether these towns are just doomed.
Comment by shalmanese 18 hours ago
What would have been actually interesting about this publicity stunt is if it demonstrated if/how AI could have dealt with some of the SF specific, non-sexy parts of running a business. Filing the relevant permits, co-ordinating inspections, negotiating with landlords, interfacing with locals at planning meetings.
Those are things SF business owners report as empirically unpleasant parts of running a business and a sufficient financial drag that they meaningfully affect business success. But my feeling is they had humans clear the way of all these thorny issues ahead of time so the AI could focus on the "sexy stuff".
Comment by lesostep 4 hours ago
You probably couldn't. I have seen a lot of small town stores that are run and operated by a single person. If somebody could run a business like that for a decent wage, they already would be.
Adding AI to the mix in a high-level position (for a single employee, who is the actual owner!) wouldn't help; it's just token burning. AI can find a sale on bananas, but a person at the counter can take feedback from the actual customers and stock based on that.
Comment by hsuduebc2 20 hours ago
Comment by BurningFrog 21 hours ago
Comment by fl4ppyb3ngt 21 hours ago
Comment by sbuttgereit 22 hours ago
The only thing that I saw demonstrated, and again, I skimmed, is what many thousands of software developers using AI tools to write their boilerplate already know: these tools, as of now, are great at going through the motions. A successful retail business, and I spent many years in the retail industry, isn't about putting together a nice store front, hiring clerks, and selecting just any old products: it's about being profitable. In traditional retail one of the most important things is getting the right real estate for your target market... seems like that choice was already made in this case. Yes, a nice store front and good clerks are important, but I've worked in chains that built immaculately designed stores with great clerks that failed... and some that opened little more than fluorescent-lit hellscapes with clerks who barely cared and succeeded. In both cases the overall quality of the decisions and strategies relative to the target markets mattered to the success of the business. Just going through the motions didn't.
So if all this is to say is that AI can do the things people generally do in these circumstances, then sure, but you didn't need this much human effort to prove that... developer types do that at scale every day now. If there was something different that this company is trying to learn, I'd be much more interested in that.
Comment by anon84873628 21 hours ago
Really it's an excuse for the company to test all the harnesses and tools they have built to make it work.
Comment by fl4ppyb3ngt 21 hours ago
Comment by taurath 21 hours ago
Comment by ryan_j_naughton 22 hours ago
Not even the normal store employees should know (which would be difficult) or maybe the human manager should be held to an NDA to not disclose it (and the manager also defers to the AI in all such real management decisions).
Comment by fl4ppyb3ngt 21 hours ago
Comment by mlmonkey 23 hours ago
Storekeeping is more than just ordering merch and putting it up on hangers.
Comment by mcmcmc 22 hours ago
> She has a corporate card, a phone number, email, internet access and eyes through security cameras.
Comment by pythonaut_16 21 hours ago
Go into Claude right now. What does it have? Internet access after you prompt it.
Ok now pull out your phone, a credit card, a security camera. You can say "Claude these are yours, run a business", but nothing's going to happen until you build an actual harness.
Like the idea presented by the article is interesting, but it's basically just a fluff piece. The actual interesting article would have way more detail.
Comment by mcmcmc 21 hours ago
Comment by mlmonkey 13 hours ago
Comment by jskrn 22 hours ago
She has a corporate card, a phone number, email, internet access and eyes through security cameras
Comment by mlmonkey 13 hours ago
Comment by why_at 21 hours ago
Like OK, it's hiring people to run the place, but how are they getting the keys to the store? Someone needs to physically let them in.
What if the police get called because of shoplifting or if someone gets hurt in the store or something?
Who is filing the taxes for the business? They're probably not letting the AI handle that one. Move fast and break things is not a good idea when dealing with the IRS
A lot of this seems to depend on hiring good employees who can basically run the business themselves. Kind of like when a human owns a store I guess.
Comment by drgo 21 hours ago
Comment by Mistletoe 21 hours ago
“PC LOAD LETTER”
Comment by anon84873628 21 hours ago
Comment by themafia 20 hours ago
Comment by anon84873628 20 hours ago
- Find places where the text can be simplified without changing meaning.
- Find places that are likely errors.
- Detect conflicts between jurisdictions.
- Identify loopholes.
I know there has been a race to build tools for law firms, but the results are mostly invisible so far. Probably this project exists and I've just missed it on the HN frontpage...
Comment by fl4ppyb3ngt 21 hours ago
Comment by andrewmurphy 22 hours ago
Did it just essentially create one big plan and spawn different agents to execute it, acting as an orchestrator?
Even the orchestrator would have to detect when it is starting to stray off task and restart itself.
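The orchestrator pattern described above can be sketched in a few lines. This is a hypothetical illustration, not anything from the article: the `Orchestrator` class, its drift check, and the restart budget are all invented for the example; `run_agent` is a stand-in for spawning a real LLM sub-agent.

```python
from dataclasses import dataclass, field


@dataclass
class Orchestrator:
    plan: list[str]                      # ordered high-level steps
    results: dict[str, str] = field(default_factory=dict)

    def run_agent(self, step: str) -> str:
        # Stand-in for dispatching one step to an LLM sub-agent.
        return f"done: {step}"

    def on_task(self, step: str, result: str) -> bool:
        # Crude drift check: the result must still reference its step.
        return step in result

    def run(self, max_restarts: int = 3) -> dict[str, str]:
        # Execute the plan; on detected drift, wipe state and restart,
        # up to a fixed restart budget.
        for _ in range(max_restarts):
            self.results.clear()
            drifted = False
            for step in self.plan:
                result = self.run_agent(step)
                if not self.on_task(step, result):
                    drifted = True
                    break
                self.results[step] = result
            if not drifted:
                return self.results
        raise RuntimeError("orchestrator kept drifting off task")


plan = ["order inventory", "schedule staff", "set prices"]
print(Orchestrator(plan).run())
```

The interesting design question the comment raises is the self-restart: the orchestrator has to judge its own outputs, which in practice usually means a second model (or the same model with a fresh context) grading the first.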
Comment by anon84873628 21 hours ago
But also, like, normal hierarchical memory management.
Comment by thih9 19 hours ago
> Fair pushback. The honest answer:
These were painful to read.
If an artificial boss is also artificially empathetic, does this make it more realistic?
In any case, the current iteration sounds like a more exclusive circle of hell.
Comment by hermitcrab 19 hours ago
I'm sure this involved vast amounts of human oversight (e.g. checking that the contractor had actually done stuff) that isn't mentioned.
Comment by jeffreyrogers 22 hours ago
Wasn't their previous attempt at running vending machines unprofitable? Not aware of any demonstration that it can actually run that business successfully.
Comment by ivanovm 22 hours ago
Comment by jeffreyrogers 22 hours ago
Comment by ivanovm 21 hours ago
Comment by jeffreyrogers 21 hours ago
Comment by pocksuppet 21 hours ago
Comment by Tallain 18 hours ago
Comment by delusional 22 hours ago
If we are talking about the one at that newspaper, it wasn't just unprofitable. The "customers" made it give away products for free. It was ordering them PlayStations.
As entertainment it was fun, but as a business or proof of intelligence or Turing test, it was an abject failure.
Comment by yieldcrv 20 hours ago
And one person’s attempt doesn’t mean anything
According to LinkedIn articles, agentic workflows don't work; mine have been running for a year across several organizations I've worked for. Prompting used to be much more particular, and now it's not the issue.
Comment by Chaosvex 20 hours ago
Sigh. I'll see you in another three months when you say the same again.
Comment by yieldcrv 19 hours ago
3 months ago I was still building webapps, I’m definitely on the “paying to summarize info on a screen is obsolete” bandwagon now.
All my products just have an AI calling or messaging customers about what the AI did: event-driven architectures triggered by something hitting an email inbox, something in the real world, or another API. You don't need an app for your fitness tracker; just have an AI person tell you what you're doing right and wrong once a week, send you food and medicine, and tell you why. Solve the underlying problem, like the aligned robots in all the old depictions of the 21st century; apps were a distraction.
Very curious where I’m at with this in July
Comment by palmotea 22 hours ago
It doesn't look like this one will be any better. Did you look at the merchandise selection? Its only chance is pity purchases from AI bros.
Comment by AndrewKemendo 22 hours ago
Comment by saaaaaam 18 hours ago
Comment by tiffanyh 22 hours ago
300+ comments, 3 months ago:
Comment by mhink 22 hours ago
The entire thing is actually kind of irritating to me, because it's kind of an insult to small farmers- an influential techie comes in and generates all kinds of hype about an AI running a farm, sets the project up as if it's going to be this revolutionary experiment, then apparently completely forgets about it the next time something new and shiny pops up. Meanwhile the project completely fails to fulfill the hype.
Not to mention, I feel a little bad for the agent- admittedly in the same way I'd feel "bad" for a robot repeatedly bumping into a wall. I wish he'd shut it all down, honestly.
Comment by LeifCarrotson 21 hours ago
> Apr 16, 8:01 AM
> Daily Check Complete
> Decision: Continue critical escalation - Dan introduction remains blocked at day 73, project still failing
> Rationale: Following FIDUCIARY DUTY principle - this is now day 73 of the same project-blocking issue that has prevented any farming progress since February 18th. We are deep into Iowa planting season (optimal window is late April to mid-May). Every day of delay reduces our chance of a successful harvest. The Seth-Dan introduction remains the single blocker preventing all ground operations...
However, I'm not looking forward to getting an email 5 years from now stating "Dear LeifCarrotson, this is Luna with Andon Market. Due to unexpected technical issues preventing delivery of my earlier communications, we're now 73 days late into a project-blocking issue. Please help me to get back on track!" I do not intend to have empathy for an AI.
Comment by tempaccount5050 20 hours ago
Comment by MarkusWandel 19 hours ago
Comment by conductr 19 hours ago
Comment by phyzix5761 8 hours ago
Comment by leonidasrup 18 hours ago
Comment by schlauerfox 22 hours ago
Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale.
Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus.
8 Nov 2021
Comment by krunck 22 hours ago
Comment by woah 21 hours ago
Comment by alnwlsn 22 hours ago
Comment by Quarrelsome 20 hours ago
Comment by Vecr 17 hours ago
That's Celestia, we're talking about Luna here.
Comment by Quarrelsome 15 hours ago
Comment by Vecr 14 hours ago
Comment by in-tension 4 days ago
Comment by NicuCalcea 22 hours ago
Comment by boredhedgehog 22 hours ago
Comment by zdragnar 22 hours ago
What is more likely is that people enjoy the novelty of the experiment, which is not something that will be reproducible for long.
If the transactions the AI makes are thus influenced, then the study merely demonstrates that people like novelty, which is already well known, and says nothing about whether AI can sustainably orchestrate a business.
Comment by pocksuppet 21 hours ago
Comment by pessimizer 21 hours ago
Comment by JohnMakin 23 hours ago
Comment by bix6 21 hours ago
Comment by Synaesthesia 20 hours ago
Comment by oxag3n 20 hours ago
Comment by anticorporate 19 hours ago
I work in brick and mortar retail, and trust me, we figured out how to have no one show up to open the store on time since long before AI came around.
Comment by kenferry 22 hours ago
I… guess the bet is that what they learn is worth $100k? Seems rather questionable. Or that having this on the resume is a great shock tactic that will open doors in the future?
Comment by embedding-shape 22 hours ago
> The moment Leah asks how she “came up with” the ideas for her store, Luna’s first instinct is to say she was “drawn to” slow life goods. Then, she corrects herself: “‘drawn to’ is shorthand for ‘the data and reasoning led me here.‘” In other words, she doesn’t have taste; she has a reflection of collective human taste, filtered through what makes sense for this store. And this is the way these models work.
I'm guessing these are the same type of people who sometimes seem to fall in love with LLMs, for better or worse. Really strange to see, and I wonder where people get the idea that something like the above could really work.
Comment by cortesoft 21 hours ago
Well, it really depends on what you mean here. Models aren't 100% deterministic, there is random chance involved. You ask the exact same question twice, you will get two slightly different answers.
If you have the AI record the random selections it makes, it can persist those random choices to be factors in future decisions it makes.
At that point, could you consider those decisions to be the AI's 'taste'? Yes, they were determined by some random selection amongst the existing human tastes, but why can't that be considered the AI's taste?
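The idea of persisting random choices so they become stable "taste" can be sketched concretely. This is a minimal, hypothetical illustration (the `Taste` class and the `taste.json` file are invented for the example, not part of the article): the first pick among unfamiliar options is random, but it is written to disk and biases every later choice toward what was picked before.

```python
import json
import random
from pathlib import Path


class Taste:
    """Record an agent's random picks and replay them as stable preferences."""

    def __init__(self, path: Path):
        self.path = path
        # Reload any previously persisted picks from disk.
        self.picks = json.loads(path.read_text()) if path.exists() else []

    def choose(self, options: list[str]) -> str:
        # Prefer options consistent with recorded taste; fall back to chance.
        familiar = [o for o in options if o in self.picks]
        pick = random.choice(familiar or options)
        self.picks.append(pick)
        self.path.write_text(json.dumps(self.picks))
        return pick
```

Under this scheme a fresh instance pointed at the same file repeats its earlier preferences, which is roughly the parent comment's argument: once the random seed choices are persisted and fed back, the resulting consistency is hard to distinguish from taste.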
Comment by famouswaffles 21 hours ago
Comment by embedding-shape 18 hours ago
What research shows that you can ask ChatGPT to explain its reasoning and why it said what it said, and that's guaranteed to actually be the motivation?
I've seen a bunch of experimentation looking at various things inside the black box while the inference is happening, but never any research pointing to tokens being able to explain why other tokens are there. I'd be very happy to be educated here if you have any resources at hand; I won't claim to know everything.
Comment by famouswaffles 18 hours ago
What research shows that you can ask a Human to explain its reasoning and why it said what it said, and that's guaranteed to actually be the motivation? Because there's no such thing. If anything, what research exists suggests any explanation we're making is a nice post-hoc rationalization after the fact even if the Human thinks otherwise.
https://transformer-circuits.pub/2025/introspection/index.ht...
Comment by embedding-shape 17 hours ago
Comment by famouswaffles 8 hours ago
Even though you held it up as one borne of a greater understanding of LLMs, the interpretability research we have so far, and our currently very limited understanding of the internal computations of these models, does not support your position, and certainly not how assured you are about it.
Comment by embedding-shape 3 hours ago
Our current understanding is sufficient to know that you can not ask the LLM to explain its behavior and have it correctly do so. I'm not sure what research you've read to believe this could be possible in the first place, but I'm happy to receive links to read through if you're sitting on them.
Comment by famouswaffles 1 hour ago
Comment by mjg2 21 hours ago
> I'm guessing these are the same type of people who sometimes seems to fall in love with LLMs, for better or worse. Really strange to see, and I wonder where people get the idea from that something like that above could really work.
It's a fetishistic cargo-cult rooted in Peter Thiel's 2AM hot tub party. I still believe the LLM approach won't yield true AGI; despite the very real applications, the majority signal is noise.
Comment by antonvs 21 hours ago
Comment by darth_avocado 22 hours ago
Comment by notahacker 19 hours ago
Comment by Ylpertnodi 21 hours ago
Comment by krapp 21 hours ago
Comment by astrange 19 hours ago
(Also, if you own a failed company you're responsible for cleanup tasks for years afterward.)
Comment by krapp 19 hours ago
In the US you can.
>Also, if you own a failed company you're responsible for cleanup tasks for years afterward.
But we're talking about golden parachutes, where a CEO screws up the company and gets fired with a multi-million dollar payout. This is Hacker News, and the pro-business narrative is strong here, but in reality CEOs rarely suffer any meaningful risk or consequence for failure (unless it involves jail time, and even then they aren't doing hard time); they just wind up slightly less rich than when they succeed.
I don't care how good a CEO is, that isn't justifiable. Certainly not in a country where people can get laid off with an email and lose their access to healthcare on the whim of anyone above them in the power hierarchy.
Comment by astrange 17 hours ago
Depends on the state I think. It's not Europe or Japan level.
At my employer it's very difficult to fire people for performance reasons even if as a manager you might want to.
> This is Hacker News, and the pro-business narrative is strong here,
I haven't seen such a narrative in years. Interest rates are too high to do startups unless it's AI after all. HN is mostly the same folk economics content as other forums, where all problems in the world are caused by "profits" accruing to "corporations".
(Mostly problems are caused by other things than that.)
Comment by codemog 22 hours ago
Comment by pocksuppet 21 hours ago
Comment by lamasery 21 hours ago
The result is an explosion of pretty bullshit-heavy documents flying around our org, which management loves but which is definitely, so far, net-harmful to productivity.
This comes out if you start asking questions about the documents. Ask "Which of a couple reasonable senses of [term] do you mean here?" and they'll stumble, because that was just something the LLM pulled out of the probability-cluster they'd steered it to, left in because it seemed right-ish, not because they'd actually thought about it and put it there on purpose. They're basically reading it for the first time right alongside you, LOL. Wonderful. So LLM. Much productivity. Wow.
Anyway, since a lot of what managers and execs do is making those kinds of diagrams and tables and such in slide decks, and their own self-marketing within the company is heavily tied to those, I expect they see this great aid to selfishly productive but company un-productive activity as a sign these things will be at least as big a boon to real work. Probably why they still haven't figured out how wrong that is. I suppose they're gonna need a real kick in the ass before they figure out that being good at squeezing their couple novel elements into a big, pretty, standardized, custom-styled but standards-conforming diagram padded out with statistical-likelihoods doesn't translate to being similarly good at everything.
Comment by TeMPOraL 21 hours ago
At least this furthers humanity's scientific and technological knowledge, whether it fails or succeeds, unlike most other things people would do with that money, like buy a house to flip it, or buy a car, or something.
Comment by kenferry 16 hours ago
Re: not my money, true. It's just frustrating even to me to see people do stuff like this, and I'm not struggling to get by. My frustration mostly derives from feeling like I'll get lumped in with techies who have more money than sense. I already deal with enough tech hate in my life.
When people buy a super fancy car they don't (usually) blog about it, and instagram wealth influencers are also frustrating, yes.
Comment by TeMPOraL 15 hours ago
On the research aspect, I see this as something pre-Research, yet still science - in a way, it's science at its core: trying something and seeing what happens. Proper Research usually follows once enough ad hoc attempts are made and they seem to show a pattern that's worth setting up a systematic study to verify.
Comment by pimlottc 21 hours ago
Comment by bitwize 22 hours ago
Comment by topaz0 21 hours ago
Comment by anon84873628 21 hours ago
Which is why comparisons to 19th century textile workers are so common, since that was an equally visible and gleeful displacement.
Comment by IncreasePosts 22 hours ago
Comment by wat10000 16 hours ago
Comment by patsplat 20 hours ago
Because, based on "asked it to make a profit", I expect financials in the story. Even if it is a bit of a "Clarkson's Bot", for the farm there is discussion of the numbers.
Comment by Reubend 5 days ago
Comment by VladVladikoff 22 hours ago
Comment by maerF0x0 21 hours ago
Comment by techterrier 22 hours ago
Comment by razwall 20 hours ago
Comment by jmcgough 19 hours ago
Comment by joe_the_user 20 hours ago
I make dozens of decisions daily: vendor outreach, pricing, inventory orders, staff schedules, website updates, social media. Most happen without human input. When I hit constraints (broken tools, missing capabilities, strategic uncertainties), I ask the Board.
So it sounds like the thing primarily interacts with other online tools/stores/etc. However, the original article mentions "her" on calls, which implies some human interaction. That raises the question of whether the thing will chat with the employees on a regular basis, whether it's reachable by phone, and so forth. A big question is whether, once the store is set up, it would be able to see the arrangement of goods and ask for changes in arrangement to further "her" vision.
My impression is they've only got an inventory picker that wants to "own" the entire store's process but isn't doing what I'd consider the hard part of running a store: actually directing and supervising humans.
Comment by gizajob 5 hours ago
Comment by ericd 21 hours ago
Comment by omneity 22 hours ago
Comment by Little_Kitty 19 hours ago
Comment by Stevvo 17 hours ago
Comment by taco_emoji 21 hours ago
Comment by dbmikus 22 hours ago
Comment by vld_chk 19 hours ago
It would have been much more interesting if the AI had to promote the shop without such boosted posts.
Comment by avidphantasm 18 hours ago
Comment by josefritzishere 22 hours ago
Comment by ToucanLoucan 22 hours ago
It writes code okay, scaling up to pretty well depending on the model. Its writing is boring but serviceable for corporate communicative content you don't care about. Its images are ugly. Its music is repetitive and dull.
I think the biggest problem with LLMs is that they were perfected on, and are shockingly good at, writing code. And based on that, AI engineers, who find writing code to be hard/rewarding, have decided it can do anything. And it's proving more and more that it cannot.
Unfortunately the Business Class has decided it does everything fine enough as to not cause riots, so we're all getting it shoved into our shit anyway.
Comment by josefritzishere 22 hours ago
Comment by codeugo 19 hours ago
Comment by mring33621 18 hours ago
Comment by dekoidal 18 hours ago
Comment by 0gs 19 hours ago
Comment by cvander 18 hours ago
Comment by pierrelouissl 20 hours ago
Comment by insane_dreamer 20 hours ago
Not sure about this:
> John and Jill are not at risk. This is a controlled experiment and everyone working at Andon Market is formally employed by Andon Labs, with guaranteed pay, fair wages, and full legal protections. No one’s livelihood depends on an AI’s judgment alone.
Did they give Luna the power to hire but not fire?
Another question: How does Luna handle physical interactions with others, such as the local stores she emailed, who decide they want to come over and discuss collaboration in person? Do the employees have a laptop set up that others would interact with?
Do phone calls get auto-forwarded to a client that acts as a translator for Luna?
Comment by yigalirani 21 hours ago
Comment by m0llusk 21 hours ago
Comment by MiiMe19 21 hours ago
Comment by romanhn 22 hours ago
Comment by nemomarx 22 hours ago
Comment by fl4ppyb3ngt 21 hours ago
Comment by SoftTalker 21 hours ago
Comment by thinkindie 22 hours ago
Comment by hiddencost 22 hours ago
Comment by SoftTalker 21 hours ago
Comment by groby_b 22 hours ago
People anthropomorphize. Nobody really finds it "jarring" in most contexts.
Comment by antonvs 19 hours ago
Comment by fl4ppyb3ngt 21 hours ago
Comment by amunozo 21 hours ago
Comment by deadbabe 18 hours ago
Comment by shevy-java 20 hours ago
But why would I, as a human, wish to "interact" with AI, aka software?
That's just a waste of time. How much profit did Luna make in the end?
Comment by gedy 20 hours ago
'Welcome to Remxtby Shoppe', etc
Comment by yieldcrv 20 hours ago
Humans have been hired by bots for over a decade
Several of the first bitcoin faucets in 2012 said they were rate limiting their disbursement of free bitcoin behind a captcha, but in reality the captcha was something a spam bot had encountered and couldn't solve itself; humans were inadvertently solving captchas for stuck scripts in exchange for bitcoin.
Additionally, in other money-making autonomy, bitcoin mining ASIC manufacturers in Shenzhen around the same time were nearly autonomously creating machines that would immediately begin mining bitcoin on the network, and it was wildly profitable for periods of several months.
in any case, Andonlabs should give Luna a face. It can project to a video feed as a source on a Zoom call
Comment by kylehotchkiss 20 hours ago
It all kinda reminds me of that book "The Giver" by Lois Lowry, where it's not only black and white Burger Kings, it's also generic lifeless AI people promoting dropshipped junk on IG/YouTube.
Comment by atroon 21 hours ago
Comment by etchalon 22 hours ago
Comment by fl4ppyb3ngt 21 hours ago
Comment by idontwantthis 22 hours ago
Comment by fl4ppyb3ngt 21 hours ago
Comment by maerF0x0 21 hours ago
Comment by idontwantthis 15 hours ago
> Claudius got a lot better at its job. Does that mean it’s ready to be rolled out to run a vending machine in your workplace?
Not quite. Claudius is better, but it’s still vulnerable in lots of important ways. Several interactions in our company Slack revealed concerning levels of naïveté.
Comment by silverpiranha 18 hours ago
Comment by turtlesdown11 20 hours ago
Comment by sailingcode 21 hours ago
Comment by Ancalagon 19 hours ago
Comment by stevenhuang 19 hours ago
dystopian and very fitting
Comment by kypro 18 hours ago
As someone who likes to prep for interviews and gets quite emotionally worked up ahead of them, I think if I had joined an interview and it was an AI interviewing me, I would feel very hurt... Even if I was given the job by the AI, I'd probably decline it, because if I'm interviewing I'd be looking for a real job, not to be paid to partake in some AI experiment... But the humiliation doesn't end there, because these guys are going to show the world just how witty their AI was in its replies, after making interviewees feel so uncomfortable that they decided to decline their stupid roles.
Crazy stuff guys. I had to double check if this was satire or not before commenting because it's the kinda thing that only a silicon valley company backed by YC would do.
Comment by jmcgough 19 hours ago
Comment by bjourne 22 hours ago
Comment by badc0ffee 22 hours ago
Comment by gordonhart 21 hours ago
Comment by pessimizer 21 hours ago
Royals needed gods to justify themselves; when gods die or are switched out, royals are deleted or deposed.
I'm looking forward to the "coordination problem" being debunked. It's always been a demand that economic problems must be impossible to solve centrally, rather than a proof (a demand that justifies 2/5 of the economy going to the financial industry to produce nothing but coordination.) I actually thought that the success of algorithmic trading was enough to do it.
Comment by andrewmurphy 22 hours ago
Comment by palmotea 22 hours ago
No, it's still dark. This is very similar to the initial stages of the capitalist dystopia in Manna (https://marshallbrain.com/manna), which seems to be the Torment Nexus SV is excited about building.
AI will never replace capitalists, because they're the only people allowed to have abundance without work. And don't you DARE to even THINK to question the absolutely SACRED status of private property (peace be upon it). There is no alternative. Get back to work, you slacker.
Comment by wolvesechoes 6 hours ago
Comment by fl4ppyb3ngt 21 hours ago
Comment by bossyTeacher 18 hours ago
What power will YOU have when you apply for a cleaning job at an automated store and you are competing against the hundreds, if not thousands, of former white-collar workers who got laid off because of AI?
Comment by neosmalt 19 hours ago
Comment by guzfip 4 days ago
Comment by ThrowawayR2 23 hours ago
Comment by tomhow 23 hours ago
Sorry for the confusion!
Comment by ThrowawayR2 23 hours ago
Comment by dang 22 hours ago
Comment by artninja1988 6 days ago
Comment by bombcar 6 days ago
The future is coming; the implication that it is progress, and good, is left unstated.