Notice of collective action lawsuit against Workday, Inc.

Posted by mooreds 3 days ago


Comments

Comment by edoceo 3 days ago

Key part is that AI is suspected of down-ranking folks by age (ADEA = Age Discrimination in Employment Act)

> The Court has provisionally certified an ADEA collective, which includes: “All individuals aged 40 and over who, from September 24, 2020, through the present, applied for job opportunities using Workday, Inc.’s job application platform and were denied employment recommendations.” In this context, being “denied” an “employment recommendation” means that (i) the individual’s application was scored, sorted, ranked, or screened by Workday’s AI; (ii) the result of the AI scoring, sorting, ranking, or screening was not a recommendation to hire; and (iii) that result was communicated to the prospective employer, or the result was an automatic rejection by Workday.

Comment by baggachipz 3 days ago

Age discrimination is a huge issue and I've experienced it firsthand. Places want to hire younger people because they're more apt to work longer hours for less pay. It's going to get worse as people who got into the web tech industry early on are still in the workforce, yet more and more young people are entering the workforce because "learning to code" was the perceived path to prosperity half a decade ago.

Comment by unyttigfjelltol 3 days ago

> Places want to hire younger people because they're more apt to work longer hours for less pay.

This is the best light you can shine on the discrimination. Most often it really is managers taking their “seniority” literally. As in, they don’t want to take the risk their reports are smarter, more experienced or capable of replacing them, so they discriminate on the basis of age. It’s counterintuitive, but this feels truest from my historical observation.

Comment by rtp4me 3 days ago

I wonder if this is really true or just an off-the-cuff comment (no disrespect). I have worked for a number of major tech companies and have never seen this in practice. Some managers were older, some were younger, some the same age. In most/all cases, people were hired based on their technical skills, not because they fell into some magical age bracket.

Comment by unyttigfjelltol 2 days ago

Idk, it’s what I observed over many years in many scenarios. It’s the better fit for explaining why some clearly motivated candidates run into headwinds as “overqualified.”

Big organizations have done better, and maybe the view is a bit stale. Who knows.

Comment by hn_throwaway_99 2 days ago

> Places want to hire younger people because they're more apt to work longer hours for less pay.

If you take age out of the equation, is there supposed to be something wrong with preferring to hire people who are willing to work longer for less pay over people who aren't willing to work as long who want more pay?

Comment by baggachipz 2 days ago

Single people usually have more time on their hands, so they may be willing to work longer too. Is there something wrong with only wanting to hire single people?

Comment by array_key_first 2 days ago

Depending on why you do it, yes. Believe it or not, intent matters.

And I also generally believe it's bad, because it breeds a toxic and self-eating culture.

Comment by port11 1 day ago

Experienced people might produce better results? Have a greater understanding of the larger picture that makes a business? And people with kids were shown to have higher retention. If the youngsters move on with the tribal knowledge, what was the point? My .02, no proof that we should respect our elders.

Comment by eightys3v3n 3 days ago

I've experienced the opposite where some smaller companies won't even look at a resume for someone under 30. One of the owners admitted it to me later on. :(

Comment by ottah 2 days ago

This should also be illegal in the US.

Comment by eightys3v3n 2 days ago

It's illegal in Canada, but how do you prove it if they don't admit it to you?

Comment by cardiffspaceman 2 days ago

40 and over is a “protected group” in US employment law. 30 year old people are not protected.

Comment by eightys3v3n 2 days ago

Really? The US only protects against age discrimination for some ages, not all?

Comment by ottah 2 days ago

Only older people are deserving of equal protection, because they reliably vote. Young people get screwed over for everything else.

Comment by boscillator 3 days ago

It will be fascinating to see the facts of this case, but if it is proven their algorithms are discriminatory, even by accident, I hope Workday is held accountable. Making sure your AI doesn't violate obvious discrimination laws should be basic engineering practice, and the courts should help remind people of that.

Comment by zugi 3 days ago

An AI class I took decades ago had just a one-day session on "AI ethics". Somehow, despite being short, it was memorable (or maybe because it was short...)

They said ethics demands that any AI that is going to pass judgment on humans must be able to explain its reasoning. An if-then rule, or even a statistical correlation between A and B, would be fine. Fundamental fairness requires that if an automated system denies you a loan, a house, or a job, it be able to explain something you can challenge, fix, or at least understand.

LLMs may be able to provide that, but it would have to be carefully built into the system.
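For illustration, a toy sketch of what that older, explainable style of screening looks like: a rule-based screen where every decision carries the specific rules that produced it, so a rejected applicant has something concrete to challenge or fix. The rule names and fields here are invented for the example, not anything a real vendor uses.

    from dataclasses import dataclass

    @dataclass
    class Decision:
        recommend: bool
        reasons: list[str]  # human-readable, each tied to a specific rule

    def screen(applicant: dict) -> Decision:
        # Each rule that fires contributes a reason the applicant can see and dispute.
        reasons = []
        if applicant["years_experience"] < 2:
            reasons.append("Rule R1: fewer than 2 years of relevant experience")
        if not applicant["has_required_certification"]:
            reasons.append("Rule R2: missing required certification")
        return Decision(recommend=not reasons,
                        reasons=reasons or ["All screening rules passed"])

    print(screen({"years_experience": 1, "has_required_certification": False}))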

Comment by nemomarx 3 days ago

I'm sure you could get an LLM to create a plausible-sounding justification for every decision? It might not be related to the real reason, but coming up with text isn't the hard part there, surely.

Comment by zugi 3 days ago

> I'm sure you could get an LLM to create a plausible-sounding justification for every decision.

That's a great point: funny, sad, and true.

My AI class predated LLMs. The implicit assumption was that the explanation had to be correct and verifiable, which may not be achievable with LLMs.

Comment by storystarling 3 days ago

It seems solvable if you treat it as an architecture problem. I've been using LangGraph to force the model to extract and cite evidence before it runs any scoring logic. That creates an audit trail based on the flow rather than just opaque model outputs.
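Roughly, a minimal sketch of that flow, assuming LangGraph's StateGraph and plain-Python stand-ins where a real pipeline would call the model: the graph wiring guarantees scoring cannot run before an evidence-extraction step has populated (and logged) its citations.

    # Minimal sketch: the graph forces evidence extraction before scoring,
    # and every node appends to a flow-level audit log that exists
    # independently of whatever the model says about itself.
    from typing import TypedDict
    from langgraph.graph import StateGraph, END

    class ScreenState(TypedDict):
        resume: str
        evidence: list[str]   # cited snippets the score must be based on
        score: float
        audit_log: list[str]

    def extract_evidence(state: ScreenState) -> dict:
        # Stand-in for an LLM call that quotes relevant lines from the resume.
        evidence = [line for line in state["resume"].splitlines() if "Python" in line]
        return {"evidence": evidence,
                "audit_log": state["audit_log"] + [f"extracted {len(evidence)} evidence snippet(s)"]}

    def score_candidate(state: ScreenState) -> dict:
        # Scoring sees only the cited evidence, never the raw resume.
        score = min(1.0, 0.5 * len(state["evidence"]))
        return {"score": score,
                "audit_log": state["audit_log"] + [f"scored {score} from {len(state['evidence'])} citation(s)"]}

    builder = StateGraph(ScreenState)
    builder.add_node("extract_evidence", extract_evidence)
    builder.add_node("score_candidate", score_candidate)
    builder.set_entry_point("extract_evidence")
    builder.add_edge("extract_evidence", "score_candidate")
    builder.add_edge("score_candidate", END)
    graph = builder.compile()

    result = graph.invoke({"resume": "5 years Python\nTeam lead",
                           "evidence": [], "score": 0.0, "audit_log": []})
    print(result["score"], result["audit_log"])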

Comment by fwip 2 days ago

It's not. If you actually look at any chain-of-thought stuff long enough, you'll see instances where what it delivers directly contradicts the "thoughts."

If your AI is *ist in effect but told not to be, it will just manifest as highlighting negative things more often for the people it has bad vibes for. Just like people will do.

Comment by nullc 3 days ago

Yes, they will; they'll rationalize whatever. This is most obvious w/ transcript editing, where you make the LLM 'say' things it wouldn't say and then ask it why.

Comment by SpaceNoodled 3 days ago

It sounds like you're saying we should generate more bullshit to justify bullshit.

Comment by teraflop 3 days ago

They said "could", not "should".

I believe the point is that it's much easier to create a plausible justification than an accurate justification. So simply requiring that the system produce some kind of explanation doesn't help, unless there are rigorous controls to make sure it's accurate.

Comment by rilindo 3 days ago

> Fundamental fairness requires that if an automated system denies you a loan, a house, or a job, it be able to explain something you can challenge, fix, or at least understand.

That could get interesting, as most companies will not provide feedback if you are denied employment.

Comment by zugi 3 days ago

Fair point. Maybe the requirement should be that the automated system provide an explanation that some human could review for fairness and correctness. While who receives the explanation may be a separate question, the drawback of LLMs judging people is that said explanation may not even exist.

Comment by direwolf20 3 days ago

This is the law in the EU, I think

Comment by em-bee 3 days ago

the way i understand it is that the law says decisions must be reviewed by a human (and i am guessing should also be overridable), but this still leaves the question of how the review is done and what information the human has to make it.

Comment by ottah 2 days ago

I hate this. An explanation is only meaningful if it comes with accountability; knowing why I was denied does me no good if I have no avenue for effective recourse outside of a lawsuit.

Comment by candiddevmike 3 days ago

Would love to see some of the liability transfer to the companies using Workday too...

Comment by mh- 3 days ago

As someone over 40, I couldn't help but laugh at the font size on the site... I guess they know their audience.

Comment by ryandrake 3 days ago

The first HN link I visited in a long time where I didn't have to hit ctrl-+ 3 or more times to increase the font size and make it readable.

Comment by mh- 3 days ago

lol, exactly my response. I have HN itself at 150% (ctrl-+ 3x).

Comment by davio 3 days ago

Regardless of merit, this seems like appropriate payback for having to make a new Workday account for every single company you apply to.

Comment by michaelteter 3 days ago

Having done a lot of online applications in the last two years, I often just pass up companies that only have their application on Workday.

Workday is without question the most abominable of the online application systems available today.

Comment by shakkhar 3 days ago

Surely you haven't seen Taleo.

Comment by parliament32 3 days ago

> allegations include that Workday, Inc., through its use of certain Artificial Intelligence (“AI”) features on its job application platform, violated the Age Discrimination in Employment Act (“ADEA”)

I'm interested to see Workday's defense in this case. Will it be "we can't be held liable for our AI", and will it work against a law as "strong" as ADEA?

Comment by advisedwang 3 days ago

ADEA says:

> It shall be unlawful for an employment agency to fail or refuse to refer for employment, or otherwise to discriminate against, any individual because of such individual's age, or to classify or refer for employment any individual on the basis of such individual's age.

I don't see any wiggle room for outsourced decision making to remove the responsibility for the outcome.

Comment by Hamuko 3 days ago

Hopefully they have something better prepared because "we can't be held liable for the black box we made" doesn't sound like a recipe for success. In Canada, they've already ruled that a website operator is liable for information that their LLM chatbot gives out.

https://www.bbc.com/travel/article/20240222-air-canada-chatb...

Comment by joshcsimmons 2 days ago

I covered this for my channel yesterday and was surprised how widespread this is. Like most people are likely affected.

https://youtu.be/7MA7xEgkGvY

Comment by sakesun 3 days ago

As I remember, Workday was founded when one of its founders was in his 60s.

Comment by akersten 3 days ago

Feel bad for the next guy who wants to sue them but has to settle for workdaycase2.com

I never liked these "trust me bro we're court authorized, give us all your PII to join the class action" setups on random domains. Makes phishing seem inevitable. Why can't we have a .gov that hosts all these as subdomains?

Comment by Aeolun 3 days ago

Yeah, that’s always my first reaction as well.

Comment by jkhall81 3 days ago

Oh snap.