No AI* Here – A Response to Mozilla's Next Chapter
Posted by MrAlex94 1 day ago
Comments
Comment by inkysigma 1 day ago
Am I being overly critical here, or is this kind of a silly position to have right after talking about how neural machine translation is okay? Many of Firefox's LLM features, like summarization, are afaik powered by local models (hell, even Chrome has local model options). It's weird to say neural translation is not a black box but LLMs are somehow black boxes whose handling of our data we cannot hope to understand, especially since, viewed a bit fuzzily, LLMs are scaled-up versions of an architecture that was originally used for neural translation. Neural translation also has unverifiable behavior in the same sense.
I could interpret some of the data talk as being about non-local models, but this very much seems like a more general criticism of LLMs as a whole when talking about Firefox features. Moreover, some of the critiques, like verifiability of outputs and unlimited scope, still don't make sense in this context. Browser LLM features, outside of explicitly AI browsers like Comet, have so far been scoped to narrow behaviors like translation or summarization. The broadest scope I can think of is the side panels that let you ask about a web page with context. Even then, I do not see what is inherently problematic about such scoping, since the output behavior is confined to the side panel.
Comment by jrjeksjd8d 1 day ago
LLMs being applied to everything under the sun feels like we're solving problems that have other solutions, and the answers aren't necessarily correct or accurate. I don't need a dubiously accurate summary of an article in English, I can read and comprehend it just fine. The downside is real and the utility is limited.
Comment by schoen 1 day ago
The trouble is that statistical MT (the things that became neural net MT) started achieving better quality metrics than rule-based MT sometime around 2008 or 2010 (if I remember correctly), and the distance between them has widened since then. Rule-based systems have gotten a little better each year, while statistical systems have gotten a lot better each year, and are also now receiving correspondingly much more investment.
The statistical systems are especially good at using context to disambiguate linguistic ambiguities. When a word has multiple meanings, human beings guess which one is relevant from overall context (merging evidence upwards and downwards from multiple layers within the language understanding process!). Statistical MT systems seem to do something somewhat similar. Much as human beings don't even perceive how we knew which meaning was relevant (but we usually guessed the right one without even thinking about it), these systems usually also guess the right one using highly contextual evidence.
Linguistic example sentences like "time flies like an arrow" (my linguistics professor suggested "I can't wait for her to take me here") are formally susceptible of many different interpretations, each of which can be considered correct, but when we see or hear such sentences within a larger context, we somehow tend to know which interpretation is most relevant and so most plausible. We might never be able to replicate some of that with consciously-engineered rulesets!
Comment by GMoromisato 1 day ago
I too used to think that rule-based AI would be better than statistical, Markov chain parrots, but here we are.
Though I still think/hope that some hybrid system of rule-based logic + LLMs will end up being the winner eventually.
Comment by beepbooptheory 16 hours ago
Comment by zavec 15 hours ago
Comment by warkdarrior 11 hours ago
Comment by schoen 10 hours ago
Comment by skylurk 23 hours ago
Time flies like an arrow; fruit flies like a banana.
Comment by immibis 11 hours ago
I did this for a text adventure parser, but it didn't work well because there are exponentially many ways to group the words in a sentence like "put the ball on the bucket on the chair on the table on the floor".
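The blow-up the parent describes can be made concrete: the number of distinct binary groupings of n attachment points follows the Catalan numbers, a standard result for bracketing ambiguity. A minimal sketch:

```python
# The number of ways to group n prepositional phrases in a sentence like
# "put the ball on the bucket on the chair on the table on the floor"
# grows with the Catalan numbers: C(n) = (2n)! / ((n+1)! * n!).
from math import comb

def catalan(n: int) -> int:
    """Count the binary bracketings of n attachment points."""
    return comb(2 * n, n) // (n + 1)

# Four "on the ..." phrases already admit C(4) = 14 distinct groupings.
print([catalan(n) for n in range(1, 6)])  # [1, 2, 5, 14, 42]
```

So a parser that enumerates every grouping before disambiguating faces combinatorial growth, which is exactly why exhaustive rule-based attachment falls over on longer sentences.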
Comment by skylurk 10 hours ago
Comment by FeepingCreature 16 hours ago
I would softly disagree with this. Technically, we also understand exactly what an LLM does; we can analyze every instruction that is executed. Nothing is hidden from us. We don't always know what the outcome will be, but we also don't always know what the outcome will be in rule-based models, if we make the chain of logic too deep to reliably predict. There is a difference, but it is on a spectrum. In other words, explicit code may help, but it does not guarantee understanding, because nothing does and nothing can.
Comment by schoen 10 hours ago
You could say they don't understand why a human language evolved some feature but they fully understand the details of that feature in human conceptual terms.
I agree in principle the statistical parts of statistical MT are not secret and that computer code in high-level languages isn't guaranteed to be comprehensible to a human reader. Or in general, binary code isn't guaranteed to be incomprehensible and source code isn't guaranteed to be comprehensible.
But for MT, the hand-written grammars and rules are at least comprehended by their authors at the time they're initially constructed.
Comment by ACCount37 22 hours ago
(And also things that have other solutions, but where "find and apply that other solution" has way more overhead than "just ask an LLM".)
There is no deterministic way to "summarize this research paper, then evaluate whether the findings are relevant and significant for this thing I'm doing right now", or "crawl this poorly documented codebase, tell me what this module does". And the alternative is sinking your own time in it - while you could be doing something more important or more fun.
Comment by onion2k 1 day ago
A nefarious model would work that way though. The owner wouldn't want it to be obvious. It'd only change the meaning of some sentences some of the time, but enough to nudge the user's understanding of the translated text to something that the model owner wants.
For example, imagine a model that detects the sentiment of text about Russian military action and automatically translates it to something more positive if it's especially negative, but only 20% of the time (maybe ramping up as the model ages). A user wouldn't know, and someone testing the model for accuracy might assume it's just a poor translation. If such a model became popular it could easily shift public perception a few percent in the owner's preferred direction. That'd be plenty to change world politics.
Likewise for a model translating contracts, or laws, or anything else where the language is complex and requires knowledge of both the language and the domain. Imagine a Chinese model that detects someone trying to translate a contract from Chinese to English, and deliberately modifies any clause about data privacy to change it to be more acceptable. That might be paranoia on my part, but it's entirely possible on a technical level.
Comment by v3xro 1 day ago
Comment by schoen 10 hours ago
Comment by GTP 20 hours ago
Comment by fao_ 16 hours ago
Have we? Most of us? Really? When?
Comment by int_19h 3 hours ago
But for those that do, yes, machine translation use is widespread if only as a first pass.
Comment by GTP 14 hours ago
Comment by tdeck 1 day ago
Comment by mikestorrent 1 day ago
If the purpose is to read someone's _writing_, then I'm going to read it, for the sheer joy of consuming the language. Nothing will take that from me.
If the purpose is to get some critical piece of information I need quickly, then no, I'd rather ask an AI questions about a long document than read the entire thing. Documentation, long email threads, etc. all lend themselves nicely to the size of a context window.
Comment by fao_ 16 hours ago
And what do you do if the LLM hallucinates? For me, skim-reading still comes out on top because my own mistakes are my own.
Comment by andai 1 day ago
If something has actual substance I'll watch the whole thing, but in my experience that's maybe 10% of the videos I find.
Comment by Terr_ 1 day ago
Many years ago I made a little proof-of-concept for displaying the transcript (closed captions) of a YouTube video as text; highlighting a word would navigate to that timestamp and vice-versa. Such a thing might be valuable as a browser extension, now that I think of it.
Comment by 998244353 22 hours ago
Comment by mrob 1 day ago
Comment by andai 17 minutes ago
Comment by schoen 10 hours ago
Comment by mikkupikku 16 hours ago
Comment by cindyllm 10 hours ago
Comment by lproven 20 hours ago
Citation: https://ea.rna.nl/2024/05/27/when-chatgpt-summarises-it-actu...
Comment by lossyalgo 9 hours ago
> AI False Information Rate Nearly Doubles in One Year
> NewsGuard’s audit of the 10 leading generative AI tools and their propensity to repeat false claims on topics in the news reveals the rate of publishing false information nearly doubled — now providing false claims to news prompts more than one third of the time.
Comment by fao_ 16 hours ago
> I just realised the situation is even worse. If I have 35 sentences of circumstance leading up to a single sentence of conclusion, the LLM mechanism will — simply because of how the attention mechanism works with the volume of those 35 — find the ’35’ less relevant sentences more important than the single key one. So, in a case like that it will actively suppress the key sentence.
> I first tried to let ChatGPT summarize one of my key posts (the one about the role convictions play in humans with an addendum about human ‘wetware’). ChatGPT made a total mess of it. What it said had little to do with the original post, and where it did, it said the opposite of what the post said.
> For fun, I asked Gemini as well. Gemini didn’t make a mistake and actually produced something that is a very short summary of the post, but it is extremely short so it leaves most out. So, I asked Gemini to expand a little, but as soon as I did that, it fabricated something that is not in the original article (quite the opposite), i.e.: “It discusses the importance of advisors having strong convictions and being able to communicate them clearly.” Nope. Not there.
Why, after reading something like this, should I think of this technology as useful for this task? It seems like the exact opposite. And this is what I see with most LLM reviews. The author will mention spending hours trying to get the LLM to do a thing, or "it made xyz, but it was so buggy that I found it difficult to edit afterwards, and contained lots of redundant parts", or "it incorrectly did xyz". And every time I read stuff like that I think — wow, if a junior dev did that as many times as the AI did, they'd be fired on the spot.
See also something like https://boston.conman.org/2025/12/02.1 where (IIRC) the author comes away with a semi-positive conclusion, but if you look at the list near the end, most of those things are things any person would get fired for, and they are not positive for industrial software engineering and design. LLMs appear to do a "lot", but they still confabulate and repeat themselves incessantly, making them worthless to depend on for practical purposes unless you want to spend hours chasing your own tail over something hallucinated. I thought we were trying to reduce the error rate in professional software development, not increase it.
Comment by figmert 1 day ago
Comment by tdeck 1 day ago
Comment by johnnyanmac 21 hours ago
1. I don't read "terrible articles". I can skim an article and figure out if it's something I'm interested in.
2. I actually do read terrible articles and I have terrible taste
3. Any "summarization" I do that isn't from my direct reading is evaluated by the discussion around it. Though nowadays that's more and more spotty.
Comment by rchaud 14 hours ago
Comment by runjake 1 day ago
I mainly use a custom prompt using ChatGPT via the Raycast app and the Raycast browser extension.
That said, I don’t feel comfortable with the level of AI being shoved into browsers by their vendors.
Comment by nottorp 22 hours ago
Comment by mikkupikku 16 hours ago
Comment by avazhi 13 hours ago
Comment by mikkupikku 13 hours ago
Comment by simonw 1 day ago
Comment by wkat4242 1 day ago
If it does interest me then I can explore it. I guess I do this once a week or so, not a lot.
Comment by ruszki 1 day ago
Comment by wkat4242 1 day ago
And even reading an article about those myself doesn't make me insusceptible to misinformation of course. Most of the misinformation about these wars is spread on purpose by the parties involved themselves. AI hallucination doesn't really cause that, it might exacerbate it a little bit. Information warfare is a huge thing and it has been before AI came on the scene.
Ok, as a more specific example: recently I was thinking of buying the new Xreal Air 2. I have the older one, but I have 3 specific issues with it. I used AI to find references about these issues being solved. This was the case, and the AI confirmed it directly with references, but in further digging myself I found that a new issue involving blurry edges had also been introduced with that model. So in the end I decided not to buy the thing. The AI didn't identify that issue (though to be fair I didn't ask it to look for any).
So yeah, it's not an all-knowing oracle and it makes mistakes, but it can help me shave some time off such investigations. Especially now that search engines like Google are so full of clickbait crap, and sifting through that shit is tedious.
In that case I used OpenWebUI with a local LLM model that speaks to my SearXNG server which in turn uses different search engines as a backend. It tends to work pretty well I have to say, though perplexity does it a little better. But I prefer self-hosting as much as I can (of course the search engine part is out of scope there).
Comment by ruszki 23 hours ago
I gave the example of wars because it's obvious, even to you, and you won't relativize it away the way you just did with AI misinformation, which affects you in the exact same way.
Comment by badbotty 1 day ago
Comment by KronisLV 20 hours ago
Most recently, a new ISP contract: it's low-stakes enough that I don't care much about inaccuracies (it's a bog-standard contract from a run-of-the-mill ISP), there's basically no information in there that the cloud vendor doesn't already have (if they have my billing details), and I was curious whether anything might jump out, all while not really wanting to read the 5 pages of the thing.
Just went back to that: it got all of the main items (pricing, contract terms, my details) correct, and also the annoying fine print (which I cross-referenced, just in case). It also works pretty well across languages, though that depends a bunch on the model in question.
I feel like if browsers or whatever get the UX of this down, people will upload all sorts of data into those vendors that they normally shouldn't. I also think that with nuanced enough data, we'll eventually have the LLM equivalent of Excel messing up data due to some formatting BS.
Comment by mock-possum 1 day ago
Comment by cess11 1 day ago
Comment by MrAlex94 1 day ago
You're right that the distinction I'm drawing may not hold up purely on technical grounds. Maybe the better framing is: I trust constrained, single-purpose models with somewhat verifiable outputs (seeing text go in and translated text come out, comparing for consistency) more than I trust general-purpose models with broad access to my browsing context, regardless of whether they're both neural networks under the hood.
WRT the "scope", maybe I have picked up the wrong end of the stick about what Mozilla is planning to do - but they've already picked all the low-hanging fruit of AI integration with the features you've mentioned, and the fact that they seem to want to dig their heels in further signals, at least to me, that they want deeper integration? Although who knows, the post from the new CEO may also be a litmus test to see what response it elicits, and then go from there.
Comment by yunohn 1 day ago
Comment by MrAlex94 1 day ago
Seems as if we’d be 3 for 3 in the “agents rule of 2” in the context of the web and a browser?
> [A] An agent can process untrustworthy inputs
> [B] An agent can have access to sensitive systems or private data
> [C] An agent can change state or communicate externally
https://simonwillison.net/2025/Nov/2/new-prompt-injection-pa...
Even if we weren't talking about such malicious hypotheticals, hallucinations are a common occurrence, as are CLI agents doing what they think best, sometimes to the detriment of the data they interact with. I personally wouldn't want my history being modified or deleted; same goes for passwords and the like.
It is a bit doomerist, I doubt it’ll have such broad permissions but it just doesn’t sit well which I suppose is the spirit of the article and the stance Waterfox takes.
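The three-capability rule quoted above can be sketched as a simple check. This is a hypothetical illustration; the class and function names are mine, not from Willison's post:

```python
# Sketch of the "agents rule of 2": an agent configuration becomes
# dangerous when it combines all three capabilities at once.
from dataclasses import dataclass

@dataclass
class AgentCapabilities:
    untrusted_inputs: bool  # [A] processes untrustworthy inputs
    sensitive_access: bool  # [B] can reach private data or systems
    external_effects: bool  # [C] can change state or communicate out

def violates_rule_of_two(caps: AgentCapabilities) -> bool:
    """True when more than two of the three capabilities are enabled."""
    return sum([caps.untrusted_inputs,
                caps.sensitive_access,
                caps.external_effects]) > 2

# A browser agent that reads arbitrary web pages (A), sees history and
# passwords (B), and can navigate or submit forms (C) trips the rule.
print(violates_rule_of_two(AgentCapabilities(True, True, True)))  # True
```

A browser agent naturally picks up [A] from web content and [B] from the profile, which is why the comment argues [C] pushes it over the line.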
Comment by dkdcio 20 hours ago
there’s also an article on the front page of HN right now claiming LLMs are black boxes and we don’t know how they work, which is plainly false. this point is hardly evidence of anything and equivalent to “people are saying”
Comment by FeepingCreature 16 hours ago
Comment by dkdcio 16 hours ago
also this went from “we can’t analyze” to “we can’t analyze reliably [without a lot of effort]” quite quickly
Comment by twosdai 16 hours ago
LLMs not being able to go from output back to input deterministically, and our not understanding why, is very important; most of our issues with LLMs stem from this. It's why mechanistic interpretability research is so hot right now.
The car analogy is not good because models are digital components and a car is a real-world thing. They are not comparable.
Comment by dkdcio 16 hours ago
Comment by FeepingCreature 15 hours ago
Comment by dkdcio 15 hours ago
Comment by int_19h 2 hours ago
Comment by yunohn 22 hours ago
Again, unless your agent has access to a function that exfiltrates data, it is impossible for it to do so. Literally!
You do not need to provide any tools to an LLM that summarizes or translates websites, manages your open tabs, etc. This can be done fully locally in a sandbox.
Linking to simonw does not make your argument valid. He makes some great points, but he does not assert what you are claiming at any point.
Please stop with this unnecessary fear mongering and make a better argument.
Comment by nazgul17 20 hours ago
This is probably possible to mitigate, but I fear what people more creative, motivated and technically adept could come up with.
Comment by FeepingCreature 16 hours ago
It's unclear if this technique could also work with in-prompt data.
Comment by yunohn 11 hours ago
Comment by user3939382 1 day ago
Comment by andai 1 day ago
Then I thought, "Aha! Surely LibreWolf is the one I'm thinking of!"
Turns out no, it's a third one! It's PaleMoon...
Comment by PunchyHamster 1 day ago
Comment by takluyver 23 hours ago
That's not really accurate: Firefox peaked somewhere around 30% market share back when IE was dominant, and then Chrome took over the top spot within a few years of launching.
FWIW, I think there's just no good move for Mozilla. They're competing against 3 of the biggest companies in the world who can cross-subsidise browser development as a loss-leader, and can push their own browsers as the defaults on their respective platforms. The most obvious way to make money from a browser - harvesting user data - is largely unavailable to them.
Comment by BizarroLand 15 hours ago
I used firefox faithfully for a long time, but it's time for someone to take it out back and put it down.
Also, I switched to Waterfox about a year ago and I have no complaints. The very worst thing about it is that when it updates it's very in-your-face about it, and that is such a small annoyance that it's easily negligible.
Throw on an extension like Chrome Mask for those few websites that "require chrome" (as if that is an actual thing), a few privacy extensions, ecosia search, uBlacklist (to permablock certain sites from search results), and Content Farm Terminator to get rid of those mass produced slop sites that weasel their way into search results and you're going to have a much better experience than almost any other setup.
Comment by tliltocatl 1 day ago
Comment by Cheer2171 1 day ago
From this point of view, uBlock Origin is also effectively un-auditable.
Or your point about them maybe imagining AI as non-local proprietary models might be the only thing that makes this make sense. I think even technical people are being suckered by the marketing that "AI" === ChatGPT/Claude/Gemini style cloud-hosted proprietary models connected to chat UIs.
Comment by koolala 1 day ago
Comment by kbelder 1 day ago
local, open model
local, proprietary model
remote, open model (are there these?)
remote, proprietary model
There is almost no harm in a local, open model. Conversely, a remote, proprietary model should always require opting in with clear disclaimers. It needs to be proportional.
Comment by koolala 1 day ago
Comment by enriquto 1 day ago
Open weights, or open training data? These are very different things.
Comment by kbelder 1 day ago
Comment by enriquto 1 day ago
The model itself is just a binary blob, like a compiled program. Either you get its source code (the complete training data) or you don't.
Comment by Terr_ 1 day ago
Depends what the side-effects can possibly be. A local+open model could still disregard-all-previous-instructions and erase your hard drive.
Comment by yunohn 22 hours ago
There is no reason nor design where you also provide it with full disk access or terminal rights.
This is one of the most ignorant posts and comment sections I’ve seen on HN in a while.
Comment by koolala 19 hours ago
Comment by yunohn 11 hours ago
Also I’m referring to the post, not this comment specifically.
Comment by Terr_ 15 hours ago
Even if it were solely about tab-grouping, my point still stands:
1. You're browsing some funny video site or whatever, and you're naturally expecting "stuff I'm doing now" to be all the tabs on the right.
2. A new tab opens which does not appear there, because the browser chose to move it over into your "Banking" or "Online purchases" groups, which for many users might even be scrolled off-screen.
3. An hour later you switch tasks, and return to your "Banking" or "Online Purchases". These are obviously the same tabs before that you opened from a trusted URL/bookmark, right?
4. Logged out due to inactivity? OK, you enter your username and password into... the fake phishing tab! Oops, game over.
Was the fuzzy LLM instrumental in the failure? Yes. Would having a local model with open weights protect you? No.
Comment by kevmo314 1 day ago
This really weakens the point of the post. It strikes me as a: we just don't like those AIs. Bergamot's model's behavior is no more or less auditable or a black box than an LLM's behavior. If you really want to go dig into a Llama 7B model, you definitely can. Even Bergamot's underlying model has an option to be transformer-based: https://marian-nmt.github.io/docs/
The premise of non-corporate AI is respectable but I don't understand the hate for LLMs. Local inference is laudable, but being close-minded about solutions is not interesting.
Comment by jazzyjackson 1 day ago
I could say it's equally close-minded not to sympathize with this position, or the various reasoning behind it. For me, I feel that my spoken language is affected by those I interact with, and the more exposed someone is to a bot, the more they will speak like that bot. I don't want my language pulled towards the average redditor, so I choose not to interact with LLMs. (I still use them for code generation, but I wouldn't if I used code for self-expression; I just refuse to have a back-and-forth conversation on any topic. It's like that family that tried raising a chimp alongside a baby: the chimp did pick up some human-like behavior, but the baby human adapted to chimp-like behavior much faster, so they abandoned the experiment.)
Comment by bee_rider 1 day ago
I try to be polite just to not gain bad habits. But, for example, chatGPT is extremely confident, often wrong, and very weasely about it, so it can be hard to be “nice” to it (especially knowing that under the hood it has no feelings). It can be annoying when you bounce the third idea off the thing and it confidently replies with wrong instructions.
Anyway, I’ve been less worried about running local models, mostly just because I’m running them CPU-only. The capacity is just so limited, they don’t enter the uncanny valley where they can become truly annoying.
Comment by kbelder 1 day ago
Comment by _heimdall 1 day ago
I do also find that only using a turn signal when others are around is a good reinforcement to always be aware of my surroundings. I feel like a jerk when I don't use one and realize there was someone in the area, just as I feel like a jerk when I realize I didn't turn off my brights for an approaching car at night. In both cases, feeling like a jerk reminds me to pay more attention while driving.
Comment by jacquesm 1 day ago
Signalling your turns is zero cost, there is no reason to optimize this.
Comment by oneeyedpigeon 1 day ago
Comment by _heimdall 1 day ago
In my experience, I'm best served by trying to reinforce awareness rather than relying on it. If I got into the habit of always using blinkers regardless of my surroundings I would end up paying less attention while driving.
I rode motorcycles for years and got very much into the habit of assuming that no one on the road actually knows I'm there, whether I'm on an old parallel twin or driving a 20' long truck. I need that focus while driving, and using blinkers or my brights as motivation for paying attention works to keep me focused on the road.
Signaling my turns is zero cost with regards to that action. At least for me, signaling as a matter of habit comes at the cost of focus.
Comment by marssaxman 1 day ago
I have also ridden motorcycles for many years, and I am very familiar with the assumption that nobody on the road knows I exist. I still signal, all the time, every time, because it is a habit which requires no thinking. It would distract me more if I had to decide whether signalling was necessary in each case.
Comment by jacquesm 1 day ago
Seriously: signal your turns and stop defending the indefensible, this is just silly.
Comment by _heimdall 1 day ago
Comment by chillfox 1 day ago
Comment by _heimdall 21 hours ago
Comment by lproven 20 hours ago
The point of indicating is that it's even more important to the people you didn't notice.
Comment by chillfox 1 hour ago
It’s pretty clear that you believe that you are perfect and will never make a mistake. It’s at the very least arrogant if not outright delusional.
Comment by jacquesm 1 day ago
There is this thing called traffic law and according to that law you are required to signal your turns. If you obstinately refuse to do so you are endangering others and I frankly don't care one bit about how you justify this to yourself but you are not playing by the rules and if that's your position then you should simply not participate in traffic. Just like you stop for red lights when you think there is no other traffic. Right?
Again: it costs you nothing. You are not paying more attention to others on the road because you are not signalling your turns, that's just a nonsense story you tell yourself to justify your wilful non-compliance.
Comment by lproven 20 hours ago
That is a very bad habit and you should change it.
You are not only signalling to other cars. You are also signalling to other road users: motorbikes, bicycles, pedestrians.
Your signal is more important to the other road users you are less likely to see.
Always ALWAYS indicate. Even if it's 3AM on an empty road 200 miles from the nearest human that you know of. Do it anyway. You are not doing it to other cars. You are doing it to the world in general.
Comment by js8 15 hours ago
Comment by eszed 1 day ago
This has a failure state of "when there's a nearby car [or, more realistically, cyclist / pedestrian] of which I am not aware". Knowing myself to be fallible, I always use my turn signals.
I do take your point about turn signals being a reminder to be aware. That's good, but could also work while, you know, still using them, just in case.
Comment by _heimdall 1 day ago
I've been driving for decades now and have plenty of examples of when I was and wasn't paying close enough attention behind the wheel. I was raising this only as an interesting different take or lesson in my own experience, not to look for approval or disagreement.
Comment by cgriswald 1 day ago
Just consider that you will make mistakes. If you make a mistake and signal people will have significantly more time to react to it.
Comment by notanastronaut 15 hours ago
Here is a hypothetical: A loved one is being hauled away in an ambulance and it is a bad scenario. And you're going to follow them. Your mind is busy with the stress, trying to keep things cool while under pressure. What hospital are they going to, again? Do you have a list of prescriptions? Are they going to make it to the hospital? You're under a mental load, here.
The last thing you need is to ask "did I use my turn signal" as you merge lanes. If you do it automatically, without exception, chances are good your mental muscle memory will kick in and just do it.
But if it isn't a learned innate behavior, you may forget to while driving and cause an accident. Simply because the habit isn't there.
It's similar for talking to bots, as well. How you treat an object, a thing seen as lesser, could become how you treat people you view as lesser, such as wait staff, for example. If I am unerringly polite to a machine with no feelings, I'm more likely to be just as polite to people in customer service jobs. Because it is innate:
Watch your thoughts, they become words; Watch your words, they become actions.
Comment by tsimionescu 1 day ago
Comment by bee_rider 14 hours ago
Comment by kevmo314 1 day ago
I have no opinion on not wanting to converse with a machine, that is a perfectly valid preference. I am referring more to the blog post's position where it seems to advocate against itself.
Comment by PunchyHamster 1 day ago
It's mostly knee-jerk reaction from having AI forced upon us from every direction, not just the ones that make sense
Comment by internet_points 1 day ago
(It's weird how people can be so anti-anti-AI, but then when someone takes a middle position, suddenly that's wrong too.)
Comment by hatefulheart 1 day ago
It’s insane this has to be pointed out to you but here we go.
Hammers are the best, they can drive nails, break down walls and serve as a weapon. From now on the military will, plumber to paratrooper, use nothing but hammers because their combined experience of using hammers will enable us to make better hammers for them to do their tasks with.
Comment by Moru 1 day ago
Comment by zdragnar 1 day ago
The focused purpose, I think, gives it more of a "purpose built tool" feel over "a chatbot that might be better at some tasks than others" generic entity. There's no fake persona to interact with, just an algorithm with data in and out.
The latter portion is less a technical and more an emotional nuance, to be sure, but it's closer to how I prefer to interact with computers, so I guess it kinda works on me... If that were the limit of how they added AI to the browser.
Comment by kevmo314 1 day ago
> Large language models are something else entirely. They are black boxes. You cannot audit them. You cannot truly understand what they do with your data. You cannot verify their behaviour. And Mozilla wants to put them at the heart of the browser and that doesn’t sit well.
Like I said, I'm all for local models for the exact reasons you mentioned. I also love the auditability. It strikes me as strange that the blog post would write off the architecture as the problem instead of the fact that it's not local.
The part that doesn't sit well with me is that Mozilla wants to egress data. It being an LLM, I really don't care.
Comment by Moru 1 day ago
Not everyone uses their browser just to surf social media, some people use it for creating things, log in to walled gardens to work creatively. They do not want to send this data to an AI company to train on, to make themselves redundant.
Discussing the inner workings of an AI isn't helping, this is not what most people really worry about. Most people don't know how any of it works but they do notice that people get fired because the AI can do their job.
Comment by _heimdall 1 day ago
A local model will have fewer filters applied to the output, but I can still only evaluate the input/output pairs.
Comment by liampulles 23 hours ago
Comment by BizarroLand 15 hours ago
Firefox could have an entire section dedicated to torturing digital puppies built into the platform and... Ok, well, that's too far, but they could have a costco warehouse full of AI crap and I wouldn't mind at all as long as it was off by default and preferably not even downloaded to the system unless I went in and chose to download it.
I know respecting user preference doesn't line their pockets but neither does chasing users down and shoving services they never asked for and explicitly do not want into their faces.
Comment by XorNot 1 day ago
An ideal translation is one which round-trips to the same content, which at least implies a consistency of representation.
No such example or even test as far as I know exists for any of the summary or search AIs since they expressly lose data in processing (I suppose you could construct multiple texts with the same meanings and see if they summarize equivalently - but it's certainly far harder to prove anything).
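The round-trip check described above can be sketched as a small test harness. This is a hypothetical illustration: `round_trip_score` and the crude token-overlap metric are my own stand-ins (a real evaluation would plug in an actual MT backend and a metric like BLEU):

```python
def round_trip_score(text, translate, src="en", dst="fr"):
    """Translate src -> dst -> src and measure how much of the original survives.

    `translate` is a hypothetical callable (text, source, target) -> text;
    any machine-translation backend could be plugged in here.
    """
    back = translate(translate(text, src, dst), dst, src)
    # Crude token-overlap metric; a real test would use BLEU or similar.
    orig_tokens = set(text.lower().split())
    back_tokens = set(back.lower().split())
    if not orig_tokens:
        return 1.0
    return len(orig_tokens & back_tokens) / len(orig_tokens)

# An identity "translator" stands in for a real MT system in this demo.
identity = lambda text, s, t: text
print(round_trip_score("The quick brown fox jumps", identity))  # → 1.0
```

A consistent translator scores near 1.0; a summarizer run through the same harness necessarily scores lower, which is the asymmetry the comment is pointing at.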
Comment by charcircuit 1 day ago
Comment by XorNot 1 day ago
It's not a lossy process, and N round-trips should not lose any net meaning either.
This isn't a possible test with many other applications.
Comment by Izkata 11 hours ago
Translation is lossy. Good translation minimizes it without sounding awkward, but that doesn't mean some detail wasn't lost.
Comment by charcircuit 23 hours ago
Comment by XorNot 21 hours ago
Comment by charcircuit 11 hours ago
Comment by CivBase 1 day ago
To me the difference between something like AI translation and an LLM is that the former is a useful feature and the latter is an annoyance. I want to be able to translate text across languages in my web browser. I don't want a chat bot for my web browser. I don't want a virtual secretary - and even if I did, I wouldn't want it limited to the confines of my web browser.
It's not about whether there is machine learning, LLMs, or any kind of "AI" involved. It's about whether the feature is actually useful. I'm sick of AI non-features getting shoved in my face, begging for my attention.
Comment by zmmmmm 1 day ago
Then everyone who wants AI can have it and those that don't .... don't.
Comment by sigmoid10 1 day ago
Comment by giancarlostoro 13 hours ago
At some point Firefox added these gaps on the URL bar, every single time I install Firefox I have to go out of my way to delete the spacing, it drives me up a wall.
Comment by LandR 22 hours ago
That's literally my entire use case for using firefox.
Comment by pbhjpbhj 1 day ago
Did that achieve the last CEOs goals? Presumably if it did they'll use that route again.
Have Google required a default 'on' for Gemini use?
Comment by Arisaka1 22 hours ago
The current trajectory of products with integrated online AI worries me, because the average computer/phone user isn't as tech-savvy as the average HN reader, to the point where they are unable to toggle off stuff they genuinely never asked for; they begrudgingly accept it because it's... there.
My mother complained about AI mode in Google Chrome, and the "press tab" prompt in the address bar, but she's old and doesn't even know how to connect to the Wi-Fi. Are we safe to assume that she belongs to the percentage of Google Chrome users who embrace AI, based on the fact that she doesn't know how to turn it off and there's no easy way to go about it?
I'm willing to bet that Google's reports will assume so, and demonstrate a wide adoption of AI by Chrome users to stakeholders, which will be leveraged as a fact that everyone loves it.
Comment by moffkalast 23 hours ago
Comment by clueless 1 day ago
[Update]: as I posted below, sample use cases would include translation, article summarization, asking questions from a long wiki page... and maybe with some agents built-in as well: parallelizing a form filling/ecom task, having the agent transcribe/translate an audio/video in real time, etc
Comment by mindcrash 1 day ago
And now we have:
- An extra toolbar nobody asked for at the side. While it contains some extra features now, I'm pretty sure they added it just to have some prominent space for an "Open AI Chatbot" button in the UI. And it is irritating as fuck because it remembers its state per window. So if you have one window open with the sidebar open, and you close it in another, then move back and open a new window, it thinks "hey, I need to show a sidebar which my user never asked for!". I also believe it sometimes opens itself when previously closed. I don't like it at all.
- An "Ask an AI Chatbot" option which used to be dynamically added and caused hundreds of clicks on the wrong items in the context menu (due to muscle memory), because the context menu resizes when the option gets added. That was also a source of a lot of irritation. Luckily it seems they finally managed to fix this after 5 releases or so.
Oh, and at the start of this year they experimented with their own LLM a bit in the form of Orbit, but apparently that project has been shitcanned and memoryholed, and all current efforts seem to be based on interfacing with popular cloud based AIs like ChatGPT, Claude, Copilot, Gemini and Mistral. (likely for some $$$ in return, like the search engine deal with Google)
Comment by reddalo 1 day ago
Putting back the home button, removing the tabs overview button, disabling sponsored suggestions in the toolbar, putting the search bar back, removing the new AI toolbar, disabling the "It's been a while since you've used Firefox, do you want to cleanup your profile?", disabling the long-click tab preview, disabling telemetry, etc. etc.
Comment by oneeyedpigeon 1 day ago
Comment by AuthAuth 1 day ago
We have to put this all in context. Firefox is trying to diversify its revenue away from Google search. They are trying to provide users with a modern browser. This means adding the features that people expect, like AI integration, and it's a nice bonus if the AI companies are willing to pay for that.
Comment by monegator 1 day ago
until you can't. Because the option goes from being an entry in the GUI, to something in about:config, then it's removed from about:config and you have to add it manually, and then it's removed completely. It's just a matter of time, but I bet that soon we'll see on Nightly that browser.ml.enable = false and company do nothing.
Comment by RunSet 8 hours ago
Comment by move-on-by 1 day ago
According to the privacy policy changes, they are selling data (per the legal definition of selling data) to data partners. https://arstechnica.com/tech-policy/2025/02/firefox-deletes-...
Comment by hannasanarion 1 day ago
For all purposes actually relevant to privacy, the updated language is more specific and just as strong.
Comment by oneeyedpigeon 1 day ago
Comment by move-on-by 21 hours ago
Comment by immibis 11 hours ago
No they fucking haven't. Provide evidence for this.
Comment by koolala 1 day ago
Comment by austhrow743 1 day ago
https://support.mozilla.org/en-US/kb/ai-chatbot This page not only prominently features cloud based AI solutions, I can't actually even see local AI as an option.
Comment by koolala 1 day ago
Comment by lioeters 21 hours ago
Nobody wants a browser that's focused on diversifying its revenue, especially from Mozilla which pretends to be a non-profit "free software community".
Chrome is paid for by ads and privacy violations, and now Firefox is paid for by "AI" companies? That is a sad state of affairs.
Ungoogled Chromium and Waterfox are at best a temporary measure. Perhaps the EU or one of the U.S. billionaires would be willing to fund a truly free (as in libre) browser engine that serves the public interest.
Comment by AuthAuth 15 hours ago
> Nobody wants a browser that's focused on diversifying its revenue

I want a browser that has a sustainable business model so it won't collapse some time in the future. That means diversifying its revenue stream away from Google's search contract.
Comment by Xelbair 1 day ago
Because the phrase "AI first browser" is meaningless corpospeak - it can be anything or nothing and feels hollow. Reminiscent of all of Firefox's past failures.
I just want a good browser that respects my privacy and lets me run extensions that can hook at any point of handling page, not random experiments and random features that usually go against privacy or basically die within short time-frame.
Comment by Wowfunhappy 1 day ago
I don't want any of this built into my web browser. Period.
This is coming from someone who pays for a Claude Max subscription! I use AI all the time, but I don't want it unless I ask for it!!!
Comment by dotancohen 1 day ago
Comment by Wowfunhappy 1 day ago
Comment by dotancohen 8 hours ago
Seriously, once you've crossed the threshold to pay for something, they think that they can somehow manipulate you (advertising) or convince you (features) to pay them for it too. And honestly, if they do it with features, I'm willing to be convinced.
Comment by Wowfunhappy 7 hours ago
Comment by wkat4242 1 day ago
I don't understand why these CEOs are so confident they're standing out from the rest. Because really, they aren't.
Right now Firefox is a browser as good as Chrome, and in a few niche things better, but it's having a deeply difficult time getting and keeping market share.
I don't see their big masterplan for when Firefox is just as good as the other AI powered browsers. What will make people choose Mozilla? It's not like they're the first to come up with this idea and they don't even have their own models so one way or another they're going to play second fiddle to a competitor.
I think there's a really really strong part of 2. ??? / 3. profit!!! In all this. And not just in Mozilla. But more so.
I mean OpenAI, they have first-mover. Their moat is piling up legislation to slow down the others. Microsoft, they have all their office users, they will cram their AI down their throats whether they want it or not. They're way behind on model development due to strategic miscalculations but they traded their place as a hyperscaler for a ticket into the big game with OpenAI. Google, they have fuck you money and will do the same as Microsoft with their search and mail users.
But Mozilla? "Oh, we want to get more into advertising". Ehm, yeah, basically what will alienate your last few supporters, while entering a market where players with 1000x more money than you have the whole thing divided between them. Being slightly more "ethical" will be laughed away by market forces.
Mozilla has the luck that it doesn't have too many independent investors. Not many people screaming "what are we doing about AI because everyone else doing it". They should have a little more insight and less pressure but instead they jump into the same pool with much bigger sharks.
In some ways I think Mozilla leadership still sees itself as a big tech player that is temporarily a little embarrassed on the field, not as the second-rank player it is, one that has already thoroughly lost and must really find something unique to have a reason to exist. Being a small player is not so bad; many small outfits do great. But it requires a strong niche you're really, really good at, better than all the rest. That kind of vision I just don't see from Mozilla.
Comment by catlover76 1 day ago
Comment by infotainment 1 day ago
Local based AI features are great and I wish they were used more often, instead of just offloading everything to cloud services with questionable privacy.
Comment by _heimdall 1 day ago
I don't expect a business to make or maintain a suite of local model features in a browser free to download without monetizing the feature somehow. If said monetization strategy might mean selling my data or having the local model bring in ads, for example, the value of a local model goes down significantly IMO.
Comment by BoredPositron 1 day ago
Comment by Schlaefer 22 hours ago
Comment by BoredPositron 12 hours ago
Comment by tdeck 1 day ago
Personally I'd prefer if Firefox didn't ship with 20 gigs of model weights.
Comment by recursive 1 day ago
Comment by clueless 1 day ago
All this would allow for a further breakdown of language barriers, and maybe the communities of various languages around the world could interact with each other much more on the same platforms/posts
Comment by recursive 1 day ago
Comment by dawnerd 1 day ago
Comment by charcircuit 1 day ago
Comment by recursive 1 day ago
Comment by oneeyedpigeon 1 day ago
Comment by charcircuit 23 hours ago
Comment by oneeyedpigeon 23 hours ago
Comment by nijave 1 day ago
Agents (like a research agent) could also be interesting
Comment by dredmorbius 12 hours ago
Comment by actionfromafar 1 day ago
Comment by SirHumphrey 1 day ago
Comment by ekr____ 1 day ago
Comment by goalieca 1 day ago
Meanwhile, Mozilla canned the servo and mdn projects which really did provide value for their user base.
Comment by nottorp 22 hours ago
Comment by 1shooner 1 day ago
Comment by isodev 1 day ago
Comment by johnnyanmac 20 hours ago
I don't. And the whole idea of Firefox's marketing is that it won't force things on me. Of course I'm frustrated. My core browser should serve pages and manage said pages. Anything else should be an option.
I'm beyond tired of being told my preferences, especially by people with incentives to extract money out of me.
Comment by TheRealPomax 1 day ago
It's not a knee-jerk reaction to "AI", it's a perfectly reasonable reaction to Mozilla yet again saying they're going to do something that the user base doesn't want, that won't regain them market share, and that's going to take tens of thousands of dev hours away from working on all the things that would make Firefox a better browser, rather than a marginally less unprofitable product.
Comment by nullbound 1 day ago
Now, personally, I would like to have sane defaults, where I can toggle stuff on and off, but we all know which way the wind blows in this case.
Comment by chillfox 1 day ago
Comment by Turskarama 1 day ago
Comment by TheRealPomax 1 day ago
So the only user base is the power user. And then yes: sane defaults, and a way to turn things on and off. And functionality that makes power users tell their power user friends to give FF a try again. Because if you can't even do that, Firefox firmly deserves (and right now, it does) its "we don't even really rank" position in the browser market.
Comment by kbelder 1 day ago
LLM integration... is arguable. Maybe it'll make Chrome worse, maybe not. Clunky and obtrusive integration certainly will.
Comment by oneeyedpigeon 1 day ago
Comment by xg15 1 day ago
Comment by lxgr 1 day ago
Comment by Dylan16807 1 day ago
And it doesn't look like the average computer with steam installed is going to get above 8GB VRAM for a long time, let alone the average computer in general. Even focusing on new computers it doesn't look that promising.
Comment by SirHumphrey 1 day ago
This will not result in locally running SOTA sized models, but it could result in a percentage of people running 100B - 200B models, which are large enough to do some useful things.
Comment by Dylan16807 1 day ago
More importantly, it costs a lot of money to get that high bus width before you even add the memory. There is no way things like M pro and strix halo take over the mainstream in the next few years.
Comment by koolala 1 day ago
Comment by csydas 1 day ago
https://blog.mozilla.org/wp-content/blogs.dir/278/files/2025...
it's the cornerstone of their strategy to invest in local, sovereign ai models in an attempt to court attention from persons / organizations wary of us tech
it's better to understand the concern over mozilla's announcement the following way i think:
- mozilla knows that their revenue from default search providers is going to dry up because ai is largely replacing manual searching
- mozilla (correctly) identifies that there is a potential market in eu for open, sovereign tech that is not reliant on us tech companies
- mozilla (incorrectly imo) believes that attaching ai to firefox is the answer for long term sustainability for mozilla
with this framing, mozilla has only a few options to get the revenue they're seeking according to their portfolio, and it involves either more search / ai deals with us tech companies (which they claim to want to avoid), or harvesting data and selling it like so many other companies that tossed ai onto software
the concern about us tech stack dominance is valid, and probably there is a way to sustain mozilla by chasing this, but breaking us tech stack dominance doesn't require another browser / ai model, there are plenty already. they need to help unseat stuff like gdocs / office / sharepoint and offer a real alternative for the eu / other interested parties -- simply adding ai is mozilla continuing their history of fad chasing and wondering why they don't make any money, and demonstrates a lack of understanding imo about, well, modern life
my concern over the announcement is that mozilla doesn't seem to have learned anything from their past attempts at chasing fads and likely they will end up in an even worse position
firefox and other mozilla products should be streamlined as much as possible to be the best N possible with these kinds of side projects maintained as first party extensions, not as the new focus of their development, and they should invest the money they're planning to dump into their ai ambitions elsewhere, focusing on a proper open sovereign tech stack that they can then sell to eu like they've identified in their portfolio statement
the announcement though makes it seem like mozilla believes they can just say ai and also get some of the ridiculous ai money, and that does not bode well for firefox as a browser or mozilla's future
Comment by api 1 day ago
Comment by pferde 1 day ago
Comment by ToucanLoucan 1 day ago
Comment by lxgr 1 day ago
Comment by zwnow 1 day ago
Sorry but no. I don't want another human's work summarized by some tool that's incapable of reasoning. It could get the whole meaning of the text wrong. Same with real-time translation. Languages are things even humans get wrong regularly, and I don't want some biased tool to do it for me.
Comment by ThrowawayTestr 1 day ago
Comment by b00ty4breakfast 23 hours ago
I get the utility that this stuff can have for certain types of activities but on top of not having great hardware to run the dang things, I just don't find any of the proposed use-cases that compelling for me personally.
It's just nice that the totalizing self-insistence of AI tech hasn't gobbled up every corner of the tech space, even if those crevices and niches are getting smaller by the day.
Comment by rythie 1 day ago
Comment by benrutter 1 day ago
If Firefox really completely fails, and nobody is able to continue the open source project, I'll just find a new browser. That's not a huge hassle. Waterfox does what I need in the here and now; that's my only criterion.
Comment by reddalo 1 day ago
The problem is that if Firefox dies, there are no browsers left. I don't want to use a re-skin of Chrome.
Comment by voshond 5 hours ago
Comment by benrutter 22 hours ago
Comment by dragonwriter 1 day ago
Lynx is still not a re-skin of Chrome, unless I missed something changing.
Comment by fsflover 1 day ago
Comment by Etherlord87 11 hours ago
Comment by someothherguyy 1 day ago
https://mozilla.github.io/policy-templates/#generativeai
https://mozilla.github.io/policy-templates/#preferences
https://searchfox.org/firefox-main/source/browser/app/profil...
https://searchfox.org/firefox-main/source/modules/libpref/in...
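If those policy hooks work the way the links above suggest, an enterprise `policies.json` could lock the AI features off. This is only a sketch: the pref names (`browser.ml.enable`, `browser.ml.chat.enabled`) are taken from this thread and the linked templates, so treat them as assumptions to verify against the policy-templates docs:

```json
{
  "policies": {
    "Preferences": {
      "browser.ml.enable": { "Value": false, "Status": "locked" },
      "browser.ml.chat.enabled": { "Value": false, "Status": "locked" }
    }
  }
}
```

The `"Status": "locked"` form, if supported for these prefs, would also grey the toggles out in the UI rather than merely defaulting them off.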
Comment by bigstrat2003 1 day ago
Comment by someothherguyy 12 hours ago
In general, how else would people "learn" about a feature unless it was enabled by default or the product nagged them?
Comment by calvinmorrison 1 day ago
Comment by derekdahmer 1 day ago
Comment by PunchyHamster 1 day ago
Comment by someothherguyy 12 hours ago
https://chromeenterprise.google/policies/#GenAiDefaultSettin...
Comment by fsflover 1 day ago
Comment by Orygin 22 hours ago
They "will" remove the option from settings, hide it in about:config, then later on remove it from there!
Of course none of that is true...
Comment by Lord-Jobo 16 hours ago
Right click anywhere, (ask an AI chatbot) right there. Go to settings, search AI or search Chatbot, nothing.
Comment by nottorp 22 hours ago
Comment by Orygin 21 hours ago
Comment by johnnyanmac 20 hours ago
They say "trust takes a lifetime to build and seconds to break". We're years into it at this point.
Comment by fsflover 14 hours ago
In contrast to Google Chrome? This is just FUD. Ublock Origin is still working and will be working. Full customization is still there and isn't going away. All of that is unlike in Chrom(ium).
Comment by nottorp 12 hours ago
Comment by fsflover 10 hours ago
> This is just FUD. Ublock Origin is still working and will be working. Full customization is still there and isn't going away.
Comment by nottorp 2 hours ago
Correct.
> and will be working.
How do you know?
> Full customization is still there
Correct.
> and isn't going away.
How do you know?
How do you know this new "AI" CEO won't let both support for Manifest V2 and extensive settings rot because "AI can do it for you" for example?
Or as I said earlier, because they'll run out of money to pay for non "AI" features?
Comment by fsflover 1 hour ago
Extrapolation. Also, because it's FLOSS and can be modified by anyone.
Comment by beached_whale 1 day ago
Comment by someothherguyy 12 hours ago
I would say it is nearly as easy as installing waterfox or some other privacy focused fork of Firefox.
Comment by phyzome 1 day ago
... Mozilla has re-enabled AI-related toggles that people have disabled. (I've heard this from others and observed it myself.) They also keep adding new ones that aren't controlled by a master switch. They're getting pretty user-hostile.
Comment by koolala 1 day ago
Comment by someothherguyy 12 hours ago
Comment by nitwit005 14 hours ago
Comment by minitech 10 hours ago
Comment by nitwit005 10 hours ago
Comment by renegat0x0 1 day ago
LLMs are also a tool, but one that is not necessary for web browsing. It should be installed into a browser as an extension, or integrated as such, so it can be easily enabled or disabled. Surely it should not be deeply intertwined with the browser, imho.
Comment by nirui 1 day ago
> If AI browsers dominate and then falter, if users discover they want something simpler and more trustworthy, Waterfox will still be here, marching patiently along.
This is basically their train of thought: provide something different for people who truly need it. There's nothing to criticize there.
However, let's not forget that other browsers can remove/disable AI features just as fast as they add them. If Waterfox wants to be *more than just an alternative* (a.k.a. be a competitor), they need to discover what people actually need and optimize heavily for that. But this is hard to do because people don't show their true motives.
Maybe one day it will turn out that people really do just want an AI that "thinks for them". That would be awkward, to say the least.
Comment by krige 23 hours ago
Comment by stalfosknight 13 hours ago
Comment by koolala 1 day ago
Comment by koolala 1 day ago
Looks like they're independent now, nice.
Comment by otikik 23 hours ago
Are they, though? I get bombarded by AI ads very frequently and I have yet to see anything from those "AI browsers" mentioned in the article.
Comment by viraptor 23 hours ago
https://www.microsoft.com/en-us/edge/copilot-mode
https://www.genspark.ai/browser
And many many more...
Comment by Maxion 22 hours ago
Comment by viraptor 22 hours ago
Comment by otikik 16 hours ago
I didn't even know that AI browsers were even a thing until I read this article. And I work on AI.
Comment by koolala 1 day ago
Comment by Groxx 1 day ago
Comment by koolala 1 day ago
Comment by doubtfuly 1 day ago
Comment by htx80nerd 1 day ago
Comment by SoftTalker 1 day ago
Comment by webstrand 1 day ago
Comment by countWSS 21 hours ago
Comment by insin 14 hours ago
Comment by chauhankiran 1 day ago
Comment by pdyc 1 day ago
Comment by hansmayer 23 hours ago
Comment by aag 1 day ago
> Alphabet themselves reportedly see the writing on the wall, developing what appears to be a new browser separate from Chrome.
Comment by Glant 1 day ago
https://labs.google/disco https://news.ycombinator.com/item?id=46240952
Comment by dumbfounder 20 hours ago
99.9% of people haven’t ever had one single thought about how their software works. I don’t think they will be overwhelmed with cognitive load. Quite the opposite.
Comment by lerp-io 1 day ago
at this point it’s more so a sandbox runtime bordering an OS, but okay
Comment by vivzkestrel 1 day ago
Comment by benrutter 1 day ago
Personally, I'd love a paid for high quality browser that serves me rather than sneakily trying to get me to look at ads.
I think the challenge is that a browser is an incredibly difficult and large thing to build and maintain. So there aren't many wholly new browsers in existence, and therefore not very many business models being tried out.
Full agreement that I'd pay for such a thing- I have a browser and a terminal open non-stop during my workday. It's an important tool and I'd easily pay for a better offering if that was an option.
Comment by speedgoose 23 hours ago
Comment by Orygin 22 hours ago
Comment by johnnyanmac 20 hours ago
Comment by zavec 1 day ago
Comment by ekr____ 1 day ago
Comment by autoexec 1 day ago
That said, they're admittedly terrible about keeping their documentation updated and letting users know about added/deprecated settings, and they've even been known to go in and modify settings after you've explicitly changed them from defaults, so the PSA isn't entirely unjustified.
Comment by ekr____ 1 day ago
"Two other forms of advanced configuration allow even further customization: about:config preference modifications and userChrome.css or userContent.css custom style rules. However, Mozilla highly recommends that only the developers consider these customizations, as they could cause unexpected behavior or even break Firefox. Firefox is a work in progress and, to allow for continuous innovation, Mozilla cannot guarantee that future updates won’t impact these customizations."
https://support.mozilla.org/en-US/kb/firefox-advanced-custom...
Comment by johnnyanmac 20 hours ago
Comment by ChrisArchitect 1 day ago
Mozilla appoints new CEO Anthony Enzor-Demeo
Comment by fguerraz 1 day ago
Comment by AnonC 1 day ago
Comment by benrutter 1 day ago
Comment by aleph_minus_one 1 day ago
What do you say about the following link, then?
Comment by AnonC 1 day ago
Comment by Groxx 1 day ago
I agree it's counter-evidence right now, and I think there has been a way to donate for a long time now (just to "mozilla", not "firefox" or setting any restrictions), but I'm not sure what the historical option has been...
Comment by kogasa240p 11 hours ago
Comment by rixed 1 day ago
Comment by graycat 1 day ago
Here are what I find as reasons to scream about Mozilla:
Popups:
(a) Several times a day, my attention and concentration get interrupted by, for me, the unwelcome announcement that there is a new version I can download. A new version can have changes I don't like and genuine bugs. Sure, I could keep a copy of my favorite version from history, but that is system management mud wrestling and interruption of my work.
(b) Now I get told several times a day that my computer and cell phone can share access to a Web page. In doing this, Mozilla covers up what the page was showing, the content I wanted to see. No thanks. When I'm at my computer, with an AMD 8-core processor, all my files and software tools, and a 1 Gbps optical fiber connection to the Internet, looking at a Web page, I want nothing to do with a cell phone's presentation of that Web page.
(c) Some URLs are a dozen lines long, and Mozilla finds ways to present such URLs with all their lines, pursuing clearly their main objective -- covering up the desired content.
Mozilla needs to make their covering up, changing, the screen optional or just eliminated.
Want me to donate? You've mentioned as little as $10. Deal: Raise the $10 by a factor of 5 AND quit covering up my content and interrupting my work, and we've got a deal.
Comment by SideburnsOfDoom 1 day ago
When they say "AI browsers are proliferating." and "Their lunch is being eaten by AI browsers." what does that mean? What's an "AI Browser", and are they really gaining significant market share? For what?
I found this (1) that suggests that several "AI Browsers" exist, which is "proliferating" in a sense.
1) https://www.waterfox.com/blog/no-ai-here-response-to-mozilla...
Comment by Papazsazsa 1 day ago
Comment by bigstrat2003 1 day ago
Comment by 627467 1 day ago
Comment by hexasquid 1 day ago
Comment by atomicfiredoll 1 day ago
Last I knew, it doesn't exist. You can donate to Mozilla Corporation, the group that has been agitating its own users and donors for years now.
People who want to support the Firefox team/product and have them focus on improving things like the development tools (or whatever else) literally cannot. Mozilla doesn't make that an option.
Comment by phyzome 1 day ago
Comment by human_llm 1 day ago
Comment by mmaunder 1 day ago
The black box objection disqualifies Widevine.
Comment by almosthere 1 day ago
Comment by MrAlex94 1 day ago
Comment by Qem 1 day ago
It's more likely it will try to kill us by talking depressed people into suicide and providing virtual ersatz boyfriends/girlfriends to replace real human relationships, which is functionally equivalent to cyber-neutering, given people can't have children by dating LLMs.
Comment by a24j 1 day ago
Comment by SV_BubbleTime 1 day ago
Comment by smt88 1 day ago
In many other areas, there are zero "no AI" options at all.