Are we stuck with the same Desktop UX forever? [video]
Posted by joelkesler 2 days ago
Comments
Comment by jhhh 2 days ago
A perfect pain point example was mentioned in the video: text selection on mobile is trash. But each app seems to have a different solution, even from the same developer. Google Messages doesn't allow selecting anything smaller than an entire message. Some other apps have opted into a 'smart' text select which, when you select text, guesses and seemingly randomly group-selects adjacent words. And lastly, some apps will only ever select a single word when you double tap, which seemed to be the standard on mobile for a long time. All of this is inconsistent, and often I'll want to do something like look up a word and realize, oh, I can't select the word at all (Google Messages), or the system 'smartly' selected 4 words instead, or it actually did what I wanted and just picked one word. Each application designer decided they wanted to make their own change and made the whole system fragmented and worse overall.
Comment by PunchyHamster 2 days ago
The inability to imagine that someone might have a different idea about what's useful is a general plague of the UI/UX industry. And there seems to be zero care given to usage by users who have to use the app for longer than 30 seconds a day. The productivity vs. learning-time curve is basically flat, and low, with the exception being pretty much "the tools made by X for X", like programming IDEs.
Comment by ryandrake 2 days ago
App designers need to understand that their opinions on how the app should look and work are just that: opinions. Opinions they should keep to themselves.
Comment by 3v1n0 1 day ago
Comment by array_key_first 1 day ago
But if you also want your product to be productive for a wide array of use cases, it's necessary. You need to think about your market.
Comment by eviks 1 day ago
Comment by rcxdude 1 day ago
Comment by hulitu 1 day ago
Thank god the RAM prices have risen. Maybe some people will start to program with their heads instead of their (AI) IDE.
Comment by stephenlf 1 day ago
Comment by eviks 1 day ago
Comment by seba_dos1 1 day ago
Comment by array_key_first 1 day ago
On desktop, I often see people waste inordinate amounts of time on workflows that don't suit their use case. Little do they know - there's a config for that!
For example, I'll see people holding Outlook like it's radioactive. They'll do the same busybody work of manually pruning their inbox and sorting stuff and deleting stuff. The config can really help them there, but I think they either don't know its capabilities or are scared of it.
Comment by hulitu 1 day ago
Most people also don't care about the mothers of programmers. Until, you know, they have to send an SMS using one particular of the two SIMs present in the phone and the 20-year-old app will not let them.
Comment by porkbrain 2 days ago
Comment by taskforcegemini 2 days ago
Comment by aoeusnth1 1 day ago
Comment by eastbound 2 days ago
Comment by throwaway894345 1 day ago
Comment by AlienRobot 2 days ago
Comment by xnx 2 days ago
Comment by hulitu 1 day ago
I'm only half joking.
Comment by doubled112 1 day ago
Comment by clearleaf 2 days ago
Comment by porkbrain 1 day ago
I'd actually compare screen OCR to screenshots. Instead of every app and every website implementing their own screenshot functionality, the system provides one for you.
Same goes for text selection. Instead of every context having to agree on tagging the text and directions, your phone has a quick way of letting you scan the screen for text.
To be fair, I still use the "hold the text to select it" approach when I want to continue with the "select all" action and have some confidence that it's going to do what I want.
Comment by zbentley 1 day ago
That correctly identifies the problem. Now why is that, and how can we fix it?
It seems fixable; native GUI apps have COM bindings that can fairly reliably produce the text present in certain controls in the vast majority of cases. Web apps (and "desktop" apps that are actually web apps) have accessibility attributes and at least nominally the notion of separating document data from presentation. Now why do so few applications support text extraction via those channels? If the answer is "it's hard/easier not to", how can we make the right way easier than the wrong way?
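To make the web-app side of that concrete, here's a minimal TypeScript sketch (purely illustrative, not any particular product's extraction code) of what "separating document data from presentation" buys you: if the text lives in the DOM, anything from the OS to assistive tech to an extension can walk the tree and get it back out; if the app paints text into a canvas, the same walk returns nothing.

    // Hypothetical helper: collect the visible text a DOM-based app exposes
    // "for free" through its document structure / accessibility tree.
    function extractVisibleText(root: HTMLElement): string[] {
      const walker = document.createTreeWalker(root, NodeFilter.SHOW_TEXT, {
        acceptNode(node: Node): number {
          const el = node.parentElement;
          // Skip text that is hidden from users and assistive technology.
          if (!el || el.closest('[hidden], [aria-hidden="true"]')) {
            return NodeFilter.FILTER_REJECT;
          }
          return node.textContent?.trim()
            ? NodeFilter.FILTER_ACCEPT
            : NodeFilter.FILTER_SKIP;
        },
      });
      const out: string[] = [];
      while (walker.nextNode()) {
        out.push(walker.currentNode.textContent!.trim());
      }
      return out;
    }

    // A canvas-rendered "web app" defeats this: the identical walk over a
    // <canvas> subtree yields an empty array, which is one reason screen
    // OCR ends up being the fallback of last resort.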
Comment by bathtub365 1 day ago
Comment by porkbrain 1 day ago
Comment by supportengineer 2 days ago
You can count on it, it is reliable, it always works.
Comment by throwaway894345 1 day ago
Comment by ahartmetz 1 day ago
Doesn't have to be - Blackberry BB10 had damn near solved it. I think they had some patents on it, but these should have expired, and I noticed some corresponding changes in Android. But it's still far from being as good as BB10. What BB10 had was a kind of combined cursor and magnifying glass that controlled really well, plus the ability to tap the thing left or right to move one letter at a time.
Comment by johanyc 1 day ago
it doesn't look very easy to use in the demo tbh
Comment by ahartmetz 4 hours ago
Comment by diziet_sma 2 days ago
Comment by jauntywundrkind 1 day ago
This is the trouble. It's been decades of the OS becoming less and less relevant. Apps have more power, more will to build their own thing.
And there's less and less personal computing left. There's the design challenges, the UX being totally different. But the OS used to be a common substrate that the user could use to do things. And the OS has just vanished, vanished, vanished, receded into the sea. Leaving these apps to totally dominate the experience, apps that are so often little more than thin clients to some far-off cloud system, to basically some corporation's mainframe.
The OS's relevance keeps shrinking, and it's awful for users. Why bother making new UX for the desktop, if the capabilities budget is still entirely on the side of the app? What actually needs to change isn't the UX of the desktop or other OS paradigm (mobile), it's a fundamental shift in taking power out of the mainframe and having a personal computer that's worth a damn, that again has more than a quantum of capability imbued in it that it can deliver to the user.
(My actual hope is that someday the web can do some of this, because apps have near always been a horrible thing for users that gives them no agency, no control, that's pre baked to be only what is delivered to the user.)
Comment by linguae 2 days ago
Personally, I wish there were a champion of desktop usability like how Apple was in the 1980s and 1990s. I feel that Microsoft, Apple, and Google lost the plot in the 2010s due to two factors: (1) the rise of mobile and Web computing, and (2) the realization that software platforms are excellent platforms for milking users for cash via pushing ads and services upon a captive audience. To elaborate on the first point, UI elements from mobile and Web computing have been applied to desktops even when they are not effective, probably to save development costs, and probably since mobile and Web UI elements are seen as “modern” compared to an “old-fashioned” desktop. The result is a degraded desktop experience in 2025 compared to 2009 when Windows 7 and Snow Leopard were released. It’s hamburger windows, title bars becoming toolbars (making it harder to identify areas to drag windows), hidden scroll bars, and memory-hungry Electron apps galore, plus pushy notifications, nag screens, and ads for services.
I don’t foresee any innovation from Microsoft, Apple, or Google in desktop computing that doesn’t have strings attached for monetization purposes.
The open-source world is better positioned to make productive desktops, but without coordinated efforts, it seems like herding cats, and it seems that one must cobble together a system instead of having a system that works as coherently as the Mac or Windows.
With that said, I won’t be too negative. KDE and GNOME are consistent when sticking to Qt/GTK applications, respectively, and there are good desktop Linux distributions out there.
Comment by gtowey 2 days ago
At Microsoft, Satya Nadella has an engineering background, but it seems like he didn't spend much time as an engineer before getting an MBA and playing the management advancement game.
Our industry isn't what it used to be and I'm not sure it ever could.
Comment by linguae 2 days ago
This also came at a time when tech went from being considered a nerdy obsession to tech being a prestigious career choice much like how law and medicine are viewed.
Tech went from being a sideshow to the main show. The problem is once tech became the main show, this attracts the money- and career-driven rather than the ones passionate about technology. It’s bad enough working with mercenary coworkers, but when mercenaries become managers and executives, they are now the boss, and if the passionate don’t meet their bosses’ expectations, they are fired.
I left the industry and I am now a tenure-track community college professor, though I do research during my winter and summer breaks. I think there are still niches where a deep love for computing without being overly concerned about “stock line go up” metrics can still lead to good products and sustainable, if small, businesses.
Comment by jack_tripper 2 days ago
When the hell was even that?
Comment by vjvjvjvjghv 2 days ago
Comment by andrekandre 2 days ago
> In the 80s and 90s there was much more idealism than now.
that idealism, which had started much earlier in the preceding decades (see memex/hypertext etc.), was already fading by then
> tech has devolved into a big money making scheme with only the minimum necessary actual technology and innovation
in the end, they are businesses, so it could be assumed that such an orientation would eventually take over, no? It's the system of incentives we all live under (make more money or die)
Comment by ryandrake 2 days ago
This is not true for the vast majority of people making these things. At some point, most businesses go from “make money or die” to financial security: “make line go up forever for no reason”.
Comment by lotsofpulp 1 day ago
Comment by throwaway894345 1 day ago
Comment by lo_zamoyski 2 days ago
> There were also more low hanging fruit to develop software that makes people’s lives better.
In principle, maybe. In practice, you had to pay for everything. Open source or free software was not widely available. So, the profit motive was there. The conditions didn’t exist yet for the profit model we have today to really take off, or for the appreciation of it to exist. Still, if there’s a lot of low-hanging fruit, that means the maturity of software was generally lower, so it’s a bit like pining for the days when people lived on the farm.
> There was also less investor money floating around so it was more important to appeal to end users.
I’m not so sure this appeal was so important (and investors do care about appeal!). If you had market dominance like Microsoft did, you could rest on your laurels quite a bit (and that they did). The software ecosystem you needed to use also determined your choices for you.
> To me it seems tech has devolved into a big money making scheme with only the minimum necessary actual technology and innovation.
As I said earlier, the profit motive was always there. It was just expressed differently. But I will grant you that the image is different. In a way, the mask has been dropped. When Facebook was new, no one thought of it as a vulgar engine for monetizing people either (I even recall offending a Facebook employee years ago when I mentioned this, which should frankly have been obvious), but it was just that. It was all just that, because the basic blueprint of the revenue model was there from day one.
Comment by gldrk 1 day ago
As a private individual, you didn't actually have to pay for anything once you got an Internet connection. Most countries never even tried enforcing copyright laws against small fish. DRM was barely a thing and was easily broken within days by l33t teenagers.
Comment by mc32 2 days ago
Comment by lo_zamoyski 2 days ago
I think you may be looking at history through rose-tinted glasses. Sure, social media today is not the same, so the comparison isn’t quite sensible, but IRC was an unpleasant place full of petty egos and nasty people.
Comment by hulitu 1 day ago
One should take a look at HN. /s
I find the discussions on the early Internet (until around 2010) more civilised than today.
Today, the internet is fully weaponized by and for big companies and 3 letter agencies.
Comment by corysama 2 days ago
The subtle running joke was that while the main characters' technobabble was fake, every other background SV startup was "Making the world a better place through Paxos-based distributed consensus" and other real-world serious tech.
Comment by vjvjvjvjghv 2 days ago
Comment by ryandrake 2 days ago
Comment by Telaneo 1 day ago
Comment by Anonyneko 1 day ago
Comment by hulitu 1 day ago
I tried to use my phone as a "computing device", but I can mostly only use it as a toy. Working with text and files on a phone is... how to say it nicely... interesting.
Comment by Telaneo 1 day ago
Comment by Normal_gaussian 2 days ago
Comment by XorNot 2 days ago
We now have giant title bars to accommodate the hamburger menu button, which opens a list of...standard menu bar sub menu options.
You could fit all the same information into the same screen real estate, using the original and tested paradigm.
Comment by Ekaros 1 day ago
On the other hand, the Vivaldi I am trying on my Android phone has this stupid thick bar at the bottom, with essentially bookmarks, back, home, forward and tabs buttons... taking up significantly more visual space...
I am really not sure what is going on overall...
Comment by scottjenson 2 days ago
I'm excited so many people are interested in desktop UX!
Comment by ChuckMcM 2 days ago
I think a fertile area for investigation would also be 'task specific' interactions. In XDE[1], the thing that got Steve Jobs all excited, the interaction models are different if you're writing code, debugging code, or running an application. There are key things that always work the same way (cut/paste for example) but other things that change based on context.
And echoing some of the sentiment I've read here as well, consistency is a bigger win for the end user than form. By that I mean even a crappy UX is okay if it is consistent in how it's crappy. I heard a great talk about Nintendo's design of the 'Mario world' games and how the secret sauce was that Mario physics are consistent, so as a game player, if you know how to use the game mechanics to do one thing, you can guess how to use them to do another thing you've not yet done. Similarly with UX, if the mechanics are consistent then they give you a stepping-off point for doing a new thing you haven't done, using mechanics you are already familiar with.
[1] Xerox Development Environment -- This was the environment everyone at Xerox Business Systems used when working on the Xerox Star desktop publishing workstation.
Comment by NetOpWibby 2 days ago
In my downtime I'm working on my future computing concept[1]. The direction I'm going for the UI is context awareness and the desktop being more of an endless canvas. I need to flesh out my ideas into code one of these days.
P.S. Just learned we're on the same Mastodon server, that's dope.
---
Comment by wiether 1 day ago
It was only at the end that I realized I'd just spent 40 minutes watching the video.
Thanks for sharing it with us!
Comment by calmbonsai 2 days ago
Comment by p_ing 1 day ago
Rightly, your talk was not about specific issues or specific solutions. But as a desktop user (macOS primarily, Windows secondary but historically, and KDE a distant third), beyond the mishmash of different UIs (i.e. Windows 11 presenting Windows 3.x) or just outright dumb decisions such as transparent everything, what is it that you want to solve for people in the desktop space to make them more /productive/ than they currently are? Especially now that our primary vehicle for information creation and sharing is not the desktop, but the web browser alone?
Comment by az09mugen 2 days ago
Will look into your other talks.
Comment by averynicepen 2 days ago
Comment by scottjenson 1 day ago
Comment by agumonkey 2 days ago
Comment by pjmlp 2 days ago
Comment by analogpixel 2 days ago
Comment by thaumaturgy 2 days ago
In the Trek universe, LCARS wasn't getting continuous UI updates because they would have advanced, culturally, to a point where they recognized that continuous UI updates are frustrating for users. They would have invested the time and research effort required to better understand the right kind of interface for the given devices, and then... just built that. And, sure, it probably would get updates from time to time, but nothing like the way we do things now.
Because the way we do things now is immature. It's driven often by individual developers' needs to leave their fingerprints on something, to be able to say, "this project is now MY project", to be able to use it as a portfolio item that helps them get a bigger paycheck in the future.
Likewise, Geordi was regularly shown to be making constant improvements to the ship's systems. If I remember right, some of his designs were picked up by Starfleet and integrated into other ships. He took risks, too, like experimental propulsion upgrades. But, each time, it was an upgrade in service of better meeting some present or future mission objective. Geordi might have rewritten some software modules in whatever counted as a "language" in that universe at some point, but if he had done so, he would have done extensive testing and tried very hard to do it in a way that wouldn't've disrupted ship operations, and he would only do so if it gained some kind of improvement that directly impacted the success or safety of the whole ship.
Really cool technology is a key component of the Trek universe, but Trek isn't about technology. It's about people. Technology is just a thing that's in the background, and, sometimes, becomes a part of the story -- when it impacts some people in the story.
Comment by cons0le 2 days ago
AKA resume-driven development. I personally know several people working on LLM products who in private admit they think LLMs are a scam.
Comment by jfengel 2 days ago
Stories which focus on them as technology are nearly always boring. "Oh no the transporter broke... Yay we fixed it".
Comment by PunchyHamster 2 days ago
Comment by amelius 2 days ago
(equivalent of people being glued to their smartphones today)
(Related) This is one explanation for the Fermi paradox: Alien species may isolate themselves in virtual worlds
Comment by d3Xt3r 2 days ago
The people we saw on screen most of the time also held important positions on the ship (especially the bridge, or engineering) and you can't expect them to just waste significant chunks of time.
Also, don't forget that these people actually like their jobs. They got there because they sincerely wanted to, out of personal interest and drive, and not because of societal pressures like in our present world. They already figured out universal basic income and are living in an advanced self-sufficient society, so they don't even need a job to earn money or live a decent life - these people are doing their jobs because of their pure, raw passion for that field.
Comment by Telaneo 1 day ago
Comment by RedNifre 2 days ago
Comment by XorNot 2 days ago
Similarly in Star Wars with droids: Obi-Wan is right, droids can't think and deserve no real moral consideration because they're just advanced language models in bodies (C3PO insisting on proper protocol because he's a protocol droid is the engineering attempt to keep the LLM on track).
Comment by Mistletoe 2 days ago
Comment by krapp 2 days ago
Comment by bena 2 days ago
Now, this is really because LCARS is "Stage Direction: Riker hits some buttons and stuff happens".
Comment by lo_zamoyski 2 days ago
Yes, although users also judge updates by what is apparent. Imagine if OS UIs didn’t change and you had to pay for new versions. So I’m sure UI updates are also partly motivated by a desire to signal improvements.
Comment by dragonwriter 2 days ago
In the Trek universe, LCARS was continuously generating UI updates for each user, because AI coding had reached the point that it no longer needs specific direction, and it responds autonomously to needs the system itself identifies.
Comment by krapp 2 days ago
Not to be "that guy", but LCARS wasn't getting continuous UI updates because that would have cost the production team money, and for TNG at least would have often required rebuilding physical sets. It does get updated between series, as part of setting the design language for that series.
And Geordi was shown constantly making improvements to the ship's systems because he had to be shown "doing engineer stuff."
Comment by calmbonsai 2 days ago
Things just need to "look futuristic". They don't actually need to have practical function outside whatever narrative constraints are imposed in order to provide pace and tension to the story.
I forget who said it first, but "Warp is really the speed of plot".
Comment by PunchyHamster 2 days ago
Comment by calmbonsai 1 day ago
In truth, that was due to having a fixed sight-line and focal distance to the camera so any post-production LCARS effects could be match-moved to the action and any possible alternative lighting conditions. Offhand, I can't think of any explicit digital match-moving shots, but I'm certain that's the reason.
As pointed out in that infamous Red Letter Media video, all the screens on the bridge ended up casting too much glare so they very blatantly used gaffer tape on them https://www.youtube.com/watch?v=yzJqarYU5Io . :)
Comment by Findecanor 2 days ago
It is left to the audience to imagine that those printed transparencies, back-lit with light bulbs behind coloured gel, are the most intuitive, easy-to-use, precise user interfaces that the actors pretend they are.
Comment by RedNifre 2 days ago
Complex tasks are done vibe coding style, like La Forge vibe video editing a recording to find an alien: https://www.youtube.com/watch?v=4Faiu360W7Q
I do wonder if conversational interfaces will put an end to our GUI churn eventually...
Comment by PunchyHamster 2 days ago
It might be a nice way of handling complex, one-off tasks for personnel unfamiliar with all the features of the system, but for fast day-to-day stuff, a button per function will always be king.
Comment by lo_zamoyski 2 days ago
Comment by TheOtherHobbes 1 day ago
The less obvious answer is how to make it work. That is a hard problem.
And the challenge is how to make it work ethically, especially given where Late Capitalism has ended up.
Otherwise we won't turn into Star Fleet, we'll turn into the Borg.
Comment by rzerowan 2 days ago
Conversely, recent versions have taken the view of foregrounding tech aided by flashy CGI to handwave through a lot. Basically using it as a plot device when the writing is weak.
Comment by JuniperMesos 2 days ago
On the other hand, if the writers of Star Trek The Next Generation were writing the show now, rather than 35-40 years ago - and therefore had a more expansive understanding of computer technology and were writing for an audience that could be relied upon to understand computers better than was actually the case - maybe there would've been more episodes involving dealing with the details of Future Sci-Fi Computer Systems in ways a programmer today might find recognizable.
Heck, maybe this is in fact the case for the recently-written episodes of Star Trek coming out in the past few years (that seem to be much less popular than TNG, probably because the entire media environment around broadcast television has changed drastically since TNG was made). Someone who writes for television today is more likely to have had the experience of taking a Python class in middle school than anyone writing for television decades ago (before Python existed), and maybe something of that experience might make it into an episode of television sci-fi.
As an additional point, my recollection is that the LCARS interface did in fact look slightly different over time - in early TNG seasons it was more orange-y, and in later seasons/Voyager/the TNG movies it generally had more of a purple tinge. Maybe we can attribute this in-universe to a Federation-wide UX redesign (imagine throwing in a scene where Barclay and La Forge are walking down a corridor having a friendly argument about whether the new redesign is better or worse immediately before a Red Alert that starts the main plot of the episode!). From a television production standpoint, we can attribute this to things like "the set designers were actually trying to suggest the passage of time and technology changing in the context of the show", or "the set designers wanted to have fun making a new thing" or "over the period of time that the 80s/90s incarnations of Star Trek were being made, television VFX technology itself was advancing rapidly and people wanted to try out new things that were not previously possible" - all of which have implications for real-world technology as well as fake television sci-fi technology.
Comment by bigstrat2003 2 days ago
That's probably part of it. But the larger part is that new Star Trek is very poorly written, so why is anyone going to bother watching it?
Comment by AndrewKemendo 2 days ago
Comment by sprash 2 days ago
GUI elements were easily distinguishable from content and there was 100% consistency down to the last little detail (e.g. right click always gave you a meaningful context menu). The innovations after that are tiny in comparison and more opinionated (things like macos making the taskbar obsolete with the introduction of Exposé).
Comment by fragmede 2 days ago
Comment by kvemkon 2 days ago
Recently some UI ignored my click on an entry in a list from a drop-down button. It turned out this drop-down button was additionally a normal button if you press it in the center. Awful.
> UI creation compared to MFC
Here I'd prefer Borland with (Pascal) Delphi / C++ Builder.
> relative resizable layout that's required today.
While it should be beneficial, the reality is awful. E.g. why is the URL input field on [1] so narrow? But if you shrink the browser window width, the text field eventually becomes wide! That's completely against expectations.
Comment by SoftTalker 2 days ago
Comment by Telaneo 1 day ago
Meanwhile, WinXP started to fiddle with the foundation of that framework, sometimes maybe for the better, sometimes maybe for the worse. Vista did the same. 7 mostly didn't and instead mostly fixed what Vista broke, while 8 tried to throw the whole thing out.
Comment by throaway45425 1 day ago
Comment by porise 1 day ago
I can immediately swap to the exact windows I want without tabbing, I can rebind everything to pull up whatever application I want, and I can even switch a window to floating.
Comment by rustcleaner 1 day ago
Great UI/UX will foster emergence in habits and workflows, AND AVOID BREAKING MUSCLE MEMORY AT ALMOST ANY COST! Terrible UI/UX will create hard but beautiful chutes to push cattle through and into the money fleecing machine.
Comment by joelkesler 2 days ago
“…Scott Jenson gives examples of how focusing on UX -- instead of UI -- frees us to think bigger. This is especially true for the desktop, where the user experience has so much potential to grow well beyond its current interaction models. The desktop UX is certainly not dead, and this talk suggests some future directions we could take.”
“Scott Jenson has been a leader in UX design and strategic planning for over 35 years. He was the first member of Apple’s Human Interface group in the late '80s, and has since held key roles at several major tech companies. He served as Director of Product Design for Symbian in London, managed Mobile UX design at Google, and was Creative Director at frog design in San Francisco. He returned to Google to do UX research for Android and is now a UX strategist in the open-source community for Mastodon and Home Assistant.”
Comment by mattkevan 2 days ago
I’m in the process of designing an os interface that tries to move beyond the current desktop metaphor or the mobile grid of apps.
Instead it’s going to use ‘frames’ of content that are acted on by capabilities that provide functionality. Very much inspired by Newton OS, HyperCard and the early, pre-Web thinking around hypermedia.
A newton-like content soup combined with a persistent LLM intelligence layer, RAG and knowledge graphs could provide a powerful way to create, connect and manage content that breaks out of the standard document model.
Comment by compressedgas 1 day ago
Comment by __d 2 days ago
Comment by DonHopkins 2 days ago
>A lot of my work is about trying to get away from this. This a photograph of the desktop of a student of mine. And when I say desktop, I don't just mean the actual desk where his mouse has worn away the surface of the desk. If you look carefully, you can even see a hint of the Apple menu, up here in the upper left, where the virtual world has literally punched through to the physical. So this is, as Joy Mountford once said, "The mouse is probably the narrowest straw you could try to suck all of human expression through." (Laughter)
https://flong.com/archive/texts/lectures/lecture_ted_09/inde...
https://en.wikipedia.org/wiki/Golan_Levin
Comment by reirob 1 day ago
I hope this project will produce some usable new UX at some point.
Comment by AndrewKemendo 2 days ago
it’s just all gotten miniaturized
Humans have outright rejected all other possible computer form factors presented to them to date including:
Purely NLP with no screen
Head-worn augmented reality
Contact lenses
Head-worn virtual reality
Implanted touch sensors
etc…
Every other possible form factor gets shit on, on this website and in every other technology newspaper.
This is despite almost a century of attempts at doing all of those and making zero progress in sustained consumer penetration.
Had people liked those form factors they would’ve been invested in them early on, such that they would develop the same way the laptops and iPads and iPhones and desktops have evolved.
However nobody’s even interested at any type of scale in the early days of AR for example.
I have a litany of augmented and virtual reality devices scattered around my home and work that are incredibly compelling technology - but are totally seen as straight up dogshit from the consumer perspective.
Like everything it’s not a machine problem, it’s a human people in society problem
Comment by nkrisc 2 days ago
> Purely NLP with no screen
Cumbersome and slow with horrible failure recovery. Great if it works, huge pain in the ass if it doesn't. Useless for any visual task.
> head worn augmented reality
Completely useless if what you're doing doesn't involve "augmenting reality" (editing a text document), which probably describes most tasks that the average person is using a computer for.
> contact lenses
Effectively impossible to use for some portion of the population.
> head worn virtual reality
Completely isolates you from your surroundings (most people don't like that) and difficult to use for people who wear glasses. Nevermind that currently they're heavy, expensive, and not particularly portable.
> implanted sensors
That's going to be a very hard sell for the vast majority of people. Also pretty useless for what most people want to do with computers.
The reason these different form factors haven't caught on is because they're pretty shit right now and not even useful to most people.
The standard desktop environment isn't perfect, but it's good and versatile enough for what most people need to do with a computer.
Comment by AndrewKemendo 2 days ago
yet here we are today
You must've missed the point: people invested in desktop computers when they were shitty vacuum tubes that blew up.
That still hasn’t happened for any other user experience or interface.
> it's good and versatile enough for what most people need to do with a computer
Exactly correct! Like I said, it's a limitation of human society: the capabilities and expectations of regular people are so low and diffuse that there is not enough collective intelligence to manage a complex interface that would measurably improve your abilities.
Said another way, it’s the same as if a baby could never “graduate” from Duplo blocks to Lego because lego blocks are too complicated
Comment by mcswell 2 days ago
Comment by AnimalMuppet 2 days ago
Even more, I don't see phones as the same form factor as mainframes.
Comment by immibis 2 days ago
Comment by AndrewKemendo 2 days ago
Comment by chrsw 1 day ago
Comment by fortyseven 2 days ago
Comment by 7thaccount 2 days ago
Comment by pdonis 2 days ago
Comment by array_key_first 1 day ago
And the taskbar is also not optimal. Having text next to the icons is great, but it means you can only really have, like, 4 or 5 applications open and see all their titles and stuff. Which is why modern Windows switched to just icons - which is much worse, because now you can't tell which app window is which!
The optimal taskbar, imo, is a vertical one. I basically take the KDE panel and just make it vertical. I can easily have 20+ apps open and read all their titles. Also, I generally think vertical space is more valuable for applications, and you get more of it this way.
It also allows me to ungroup apps, so that each window is its own entry in the taskbar, so one less click. And it works because I can read the window title.
Comment by pdonis 1 day ago
More or less, yes; Trinity Desktop is basically KDE 3. But KDE has added on a lot of other cruft since then that has no value to me.
> Having text next to the icons is great, but it means you can only really have, like, 4 or 5 applications open and see all their titles and stuff.
That's what multiple virtual desktops are for. My usual desktop configuration has 8. Each one has only a few apps open in it.
> The optimal taskbar, imo, is a vertical one.
I do this for toolbars in applications like LibreOffice; on an HD aspect ratio screen it makes a lot more sense to have all that stuff off to the side, where there's more than enough screen real estate anyway, than taking up precious vertical space at the top.
But for my overall desktop taskbar, I've tried vertical and it doesn't work well for me--because to show titles it would have to be way too wide for me. The horizontal taskbar does take up some vertical space at the bottom of the screen, but I can make that pretty small by downsizing it to either "Small" or "Tiny".
Comment by christophilus 2 days ago
Comment by Findecanor 2 days ago
By Don Norman's original definition [0], it is not merely another term for "UI" but applies specifically when you have that wider scope and are not working on a user interface specifically.
So, the term "UX/UI" would refer to being able to both work with the wider scope, and to go deeper to work with user interface design.
Comment by xnx 2 days ago
Comment by array_key_first 1 day ago
1. Burning the planet on your servers is expensive, offloading it to a client-side LLM is not.
2. Ethics means risk means you won't be SOC compliant, your legal department will be mad, your users will be mad, etc.
The current status-quo of a few giant LLMs on supercomputers operated by OpenAI and Google is basically destined to fail, in my eyes. At least from a business standpoint. Consumer stuff might be different.
Comment by GaryBluto 2 days ago
It's really strange how he spins off on this mini-rant about AI ethics towards the end. I clicked on a video about UI design.
Comment by xnx 2 days ago
Comment by immibis 2 days ago
Comment by rolph 2 days ago
MS is a prime example. Don't do what MS has been doing; remember whose hardware it actually is; remain aware that what a developer and a board room understand as improvement is not experienced in the same way by average retail consumers.
Comment by gherkinnn 2 days ago
Comment by SoftTalker 2 days ago
Touch screens, voice commands, and other specialized interfaces have and will continue to make sense for some use cases. But for sitting down and working, same as it ever was.
Comment by sounds 1 day ago
"The QWERTY layout became popular with the success of the Remington No. 2 of 1878...
"The 0 key was added and standardized in its modern position early in the history of the typewriter, but the 1 and exclamation point were left off some typewriter keyboards into the 1970s."
There's always a few oddball variations. But desk work will probably use a qwerty keyboard in the year 2100
Comment by throwaboat 1 day ago
I'll set the scene that I think most of us have experienced: you're working on a project. You start down the rabbit hole of research to find a solution to something. Maybe you find it quickly somehow. But in this case, you don't. The problem is too big for an easy answer and instead requires synthesis and reflection.
Eventually, after opening 50 tabs and only closing the immediately useless stuff, you find that you need to circle back up the problem-solving chain. The problem is that you have 45 tabs open and no clearly visible method to the madness.
This further compounds if you're trying to solve a new problem with an existing set of tabs that haven't been cleaned out from the last problem.
Nowhere in this process is the UX leading you to solving a problem.
My half-baked solution is to allow the user to enter "research mode". When a new tab is opened, the browser halts the user and prompts for what they found on the last tab that led them to opening this new tab. When the user leaves research mode, any leftover leaf tabs should also prompt for a summary or be omitted as irrelevant. Then, once all the tabs have been accounted for, a report can be generated which shows all the URLs and the user's notes. Bonus points if it allows generation of MLA / APA citations automagically. Further bonus points if I can highlight sections of text / images while in research mode to fill in my new-tab questionnaire as I go.
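Roughly, as a browser-extension sketch in TypeScript (WebExtensions tabs API; the askUser() prompt and the report format are just placeholders, not a real extension, and it assumes the "tabs" permission so URLs are visible):

    // Placeholder for whatever UI actually collects the note
    // (a popup, side panel, etc.) - purely hypothetical here.
    declare function askUser(question: string): Promise<string>;

    type TabNote = { url?: string; openerUrl?: string; note: string };
    const notes: TabNote[] = [];

    // When a tab is opened from another tab while in research mode,
    // ask what on the opener page led here.
    chrome.tabs.onCreated.addListener(async (tab) => {
      if (tab.openerTabId === undefined) return;
      const opener = await chrome.tabs.get(tab.openerTabId);
      const note = await askUser(`What on ${opener.url} led you to open this tab?`);
      notes.push({ url: tab.pendingUrl ?? tab.url, openerUrl: opener.url, note });
    });

    // On leaving research mode, dump the URLs plus the user's notes.
    function buildReport(): string {
      return notes
        .map((n) => `- ${n.url}\n  from: ${n.openerUrl}\n  note: ${n.note}`)
        .join('\n');
    }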
Comment by gausswho 1 day ago
It's best just to ignore tabs anyway. After reopening a window they'll be inactive until you open them.
Comment by throwaboat 1 day ago
Comment by calmbonsai 2 days ago
Take any other praxis that's reached the 'appliance' stage that you use in your daily life from washing machines, ovens, coffee makers, cars, smartphones, flip-phones, televisions, toilets, vacuums, microwaves, refrigerators, ranges, etc.
It takes ~30 years to optimize the UX to make it "appliance-worthy" and then everything afterwards consists of edge-case features, personalization, or regulatory compliance.
Desktop Computers are no exception.
Comment by mrob 2 days ago
1. Incremental narrowing for all selection tasks like the Helm [0] extension for Emacs.
Whenever there is a list of choices, all choices should be displayed, and this list should be filterable in real time by typing. This should go further than what Helm provides, e.g. you should be able to filter a partially filtered list in a different way (see the sketch after point 2). No matter how complex your filtering, all results should appear within 10 ms or so. This should include things like full text search of all local documents on the machine. This will probably require extensive indexing, so it needs to be tightly integrated with all software so the indexes stay in sync with the data.
2. Pervasive support for mouse gestures.
This effectively increases the number of mouse buttons. Some tasks are fastest with keyboard, and some are fastest with mouse, but switching between the two costs time. Increasing the effective number of buttons increases the number of tasks that are fastest with mouse and reduces need for switching.
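To make point 1's stacking idea concrete, a toy TypeScript sketch (not Helm's actual implementation, just an illustration): each query narrows the current candidate set, and the narrowed set can be narrowed again with a different kind of filter.

    // Each filter is just a predicate; "narrowing" applies them in order,
    // so a partially filtered list can be filtered again in a different way.
    type Filter = (item: string) => boolean;

    function narrow(items: string[], filters: Filter[]): string[] {
      return filters.reduce((candidates, f) => candidates.filter(f), items);
    }

    // Two example filter kinds: case-insensitive substring, and a regex match.
    const contains = (q: string): Filter =>
      (item) => item.toLowerCase().includes(q.toLowerCase());
    const matches = (re: RegExp): Filter => (item) => re.test(item);

    // Stand-in for the indexed document list a real system would maintain.
    const allDocuments = ['report-2023.pdf', 'report-2024.pdf', 'notes-2024.txt'];

    // First narrow by "report", then narrow that result again by year.
    const hits = narrow(allDocuments, [contains('report'), matches(/2024/)]);
    console.log(hits); // ['report-2024.pdf']

A real implementation would of course sit on top of the index described above rather than a literal array scan, to hit the 10 ms budget.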
Comment by calmbonsai 1 day ago
I see "mouse gestures" as merely an incremental evolution for desktops.
Low latency capacitive touch-screens with gesture controls were, however, revolutionary for mobile devices and dashboards in vehicles.
Comment by Hammershaft 2 days ago
Comment by calmbonsai 2 days ago
For example, we're not remotely close to having a standardized "watch form-factor" appliance interface.
Physical reality is always a constraint. In this case, keyboard+display+speaker+mouse+arms-length-proximity+stationary. If you add/remove/alter _any_ of those 6 constraints, then there's plenty of room for innovation, but those constraints _define_ a desktop computer.
Comment by pegasus 2 days ago
Comment by calmbonsai 1 day ago
One classic example is the "Bloomberg Box": https://en.wikipedia.org/wiki/Bloomberg_Terminal which has been around since the late '80s.
You can also see this from the reverse (analog -> digital) in the evolution of hospital patient life-sign monitors and the classic "6 pack" of gauges used in both aviation and automobiles.
Comment by pegasus 1 day ago
Comment by calmbonsai 21 hours ago
Now with performant hypervisors, I just run a bunch of Linux VMs locally to minimize splash-zone and do cloud for performance computing.
I'll likely migrate fully to a Framework laptop next year, but I don't have time (atm) to do it. Ah, the good 'ole glory days of native Linux on Thinkpads.
Comment by danans 2 days ago
I wish the same could be said of car UX these days but clearly that has regressed away from optimal.
Comment by calmbonsai 21 hours ago
Comment by bgbntty2 2 days ago
I think the state of the current Desktop UX is great. Maybe it's a local maximum we've reached, but I love it. I mostly use XFCE and there are just a few small things I'd like changed or fixed. Nothing that I even notice frequently.
I've used tiling window managers before and they were fine, but it was a bit of a hassle to get used to them. And I didn't feel they gave me something I couldn't do with a stacking window manager. I can arrange windows to the sides or corners of the monitor easily with the mouse or the keyboard. On XFCE holding down alt before moving a window lets me select any part of the window, not just the title bar, so it's just "hold down ALT, point somewhere inside the window and flick the window into a corner or a side with the mouse". If I really needed to view 10 windows at the same time, I'd consider a tiling window manager, but virtual desktops on XFCE are enough for me. I have a desktop for my mails, shopping, several for various browsers, several for work, for media, and so on. And I instantly go to the ones I want either with Meta+<number> (for example, Meta+3 for emails), or by scrolling with my middle mouse on the far right on my taskbar where I see a visual representation of my virtual desktops - just white outlines of the windows relative to the monitors.
Another thing I've noticed about desktop UX is that application UX seems to follow the trends of website UX, where the UX is so dumbed down, even a drunken caveman who's never seen a computer can use it. Tools and options are hidden behind menus. Even the menus are hidden behind a hamburger icon. There's a lot of unnecessary white space everywhere. Sometimes there's even a linear progression through a set of steps, one step at a time, instead of having everything in view all the time - similar to how some registration forms work where you first enter your e-mail, then you click next to enter a password, then click next again, and so on. I always use "compact view" or "details view" where it's possible and hide thumbnails unless I need them. I wish more sites and apps were more like HN in design. If you're looking to convert (into money or into long-term users) as many people as possible, then it might make sense to target the technological toddlers, but then you might lose, or at least annoy, your power users.
At the beginning of the video I thought we'll likely only see foundational changes when we stop interacting with the computer mainly via monitors, keyboards and mice. Maybe when we start plugging USB ports into our heads directly, or something like that. Just like I don't expect any foundational changes or improvements on static books like paper or PDF. Sure, interactive tutorials are fundamentally different in UX, but they're also a fundamentally different medium. But at 28:00, his example of a combination of window manager + file manager + clipboard made me rethink my position. I have used clipboard visualizers long ago, but the integration between apps and being able to drag and otherwise interact with it would be really interesting.
Some more thoughts I jotted down while watching the video:
~~~~ 01:33 This UX of dragging files between windows is new to me. I just grab a file and ALT+TAB to wherever I want to drop it if I can't see it. I think this behavior, to raise windows only on mouse up, will annoy me. What if I have a split view of my file manager in one window, and another window above it? I want to drag a file from the left side of the split-view window to the right one, but the mouse-down won't be enough to show me the right side if the window that was above it covers it. Or if, in the lower window, I want to drag the file into a folder that's also in the lower window, but obscured by the upper window? It may be a specific scenario, but
~~~~ 05:15 I'd forgotten the "What's a computer?" ad. It really grinds my gears when people don't understand that mobile "devices" are computers. I've had non-techies look surprised when I mention it, usually in a sentence like "Well, smartphones are really just computers, so, of course, it should be possible to do X with them.". It's such a basic category.
Similarly, I remember Apple not using the word "tablet" to describe their iPad years ago. Not sure if that has changed. Even many third-party online stores had a separate section for the iPad.
I guess it's good marketing to make people think your product is so unique and different than others. That's why many people reference their iPhone as "my iPhone" instead of "my phone" or "my smartphone". People usually don't say "my Samsung" or "my $brand" for other brands, unless they want to specify it for clarity. Great marketing to make people do this.
~~~~ 24:50 I'm a bit surprised that someone acknowledges that the UX for typing and editing on mobile is awful. But I think that no matter how many improvements happen, using a keyboard will always be much, much faster and more pleasant. It's interesting to me that even programmers or other people who've used desktops professionally for years don't know basic things like SHIFT+left_arrow or SHIFT+right_arrow to select, or CTRL+left_arrow or CTRL+right_arrow to move between words, or combining them to select words - CTRL+SHIFT+left_arrow or CTRL+SHIFT+right_arrow. Or that they can hold their mouse button after double clicking on a word and move it around to select several words. Watching them try to select some text in a normal app (such as HN's comment field or a standard notepad app) using only arrow keys without modifiers, or tapping the backspace 30 times (not even holding it down), or trying to precisely select the word boundary with a mouse... it's like watching someone right-click and then select "Paste" instead of CTRL+V. I guess some users just don't learn. Maybe they don't care or are preoccupied with more important things, but it's weird to me. But, on the other hand, I never learned vi/vim or Emacs to the point where it would make me X times more productive. So maybe how those users look to me is how I look to someone well-versed in either of those tools.
~~~~ Forgot the timestamp, it was near the end, but the projects Ink & Switch make seem interesting. Looking at their site now.
Comment by rustcleaner 1 day ago
I know what you mean. XFCE today is reminiscent of what KDE 3.5 (TDE today) was in its era. XFCE seems to be arriving in a place somewhat similar to KDE 3.5 in relative customize-ability and feel. KDE 3.5 gave me wallpaper thumbnails on my desktop switcher, XFCE doesn't. It would be nice to open my 12 or 48 or whatever desktops drawer and see all the desktops' wallpapers collage into their master image, like I used to fifteen years ago!
Comment by ares623 2 days ago
Comment by TheAceOfHearts 1 day ago
A UX revolution in teeth-cleaning technology would probably look like some kind of bio-organism or colony that eats plaque and kills plaque-producing bacteria. In an ideal world you wouldn't have to brush your teeth at all, aside from an occasional floss or scrub.
Comment by ares623 1 day ago
I can have a personal dentist brush my teeth while I lie down.
There's a point where UX-that-works-at-acceptable-cost is good enough.
Maybe desktop UX is like the shark. An evolutionary dead-end, but it gets the job done extremely well. (Would be cool if they have lasers on their frickin' heads though)
Comment by calmbonsai 2 days ago
Comment by esafak 2 days ago
Maybe the experience has not changed for the average person, but alternatives are out there.
Comment by LeFantome 2 days ago
Comment by yearolinuxdsktp 2 days ago
On the positive side, my electronic toothbrush allows me to avoid excessive pressure via real-time green/red light.
On the negative side, it guilt trips me with a sad face emoji any time my brushing time is under 2 minutes.
Comment by AndrewKemendo 2 days ago
https://www.youtube.com/watch?v=zMuTG6fOMCg
The variety of form factors offered are the only difference
Comment by mrob 2 days ago
Comment by jrowen 2 days ago
I don't think most people would find this degree of reduction helpful.
Comment by AndrewKemendo 2 days ago
Correct? I agree with this precisely but assume you’re writing it sarcastically
From the point of view of the starting state of the mouth to the end state of the mouth the USER EXPERIENCE is the same: clean teeth
The FORM FACTOR is different: Electric version means ONLY that I don’t move my arm
“Most people” can’t do multiplication in their head so I’m not looking to them to understand
Comment by echoangle 2 days ago
Comment by AndrewKemendo 2 days ago
Now compare that variance to the variance options given with machine and computing UX options
you’ll see clearly that one (toothbrushing) is less than one stdev different in steps and components for the median use case and one (computing) is nearly infinite variance (no stable stdev) between median use case steps and components.
The fact that the latter state space manifold is available but the action space is constrained inside a local minima is an indictment on the capacity for action space traversal by humans.
This is reflected again with what is a point action space (physically ablate plaque with abrasive) in the possible state space of teeth cleaning for example: chemical only/non ablative, replace teeth entirely every month, remove teeth and eat paste, etc…
So yes I collapsed that complexity into calling it “UX” which classically can be described via UML
Comment by jrowen 2 days ago
Ask any person to go and find a stick and use it to brush their teeth, and then ask if that "experience" was the same as using their toothbrush. Invoking UML is absurd.
Comment by AndrewKemendo 1 day ago
Funny how we haven’t done anything on the scale of Hoover Dam, Three Gorges, ISS etc…since those got thrown away
User Experience also means something specific in information theory and UX and UML is designed to model that explicitly:
https://www.pst.ifi.lmu.de/~kochn/pUML2001-Hen-Koch.pdf
Good luck vibe architecting
Comment by jrowen 1 day ago
UML and functional definitions and iso standards are still important, it's just not UX.
Good luck never observing users using your product. Not everything is a space shuttle, recall that we are talking about toothbrushes here.
Comment by array_key_first 1 day ago
Comment by ErroneousBosh 2 days ago
Because we've been stuck with the same bicycle UX for like 150 years now.
Sometimes shit just works right, just about straight out of the gate.
Comment by DangitBobby 2 days ago
Comment by ErroneousBosh 1 day ago
Comment by esafak 2 days ago
Comment by ErroneousBosh 2 days ago
By the 1870s we'd pretty much standardised on the "Safety Bicycle", which had a couple of smallish wheels about two and a half feet (in olden-days measurements) in diameter, with a chain drive from a set of pedals mounted low in the frame to the rear wheel.
By the end of the 1880s, you had companies mass-producing bikes that wouldn't look unreasonable today. All we've done since is make them out of lighter metal, improve the brakes from pull rods to cables to hydraulic disc brakes, and give them more gears (it wouldn't be until the early 1900s that the first hub gears became available, with - perhaps surprisingly - derailleurs only coming along 100 years ago).
Comment by eek2121 2 days ago
Comment by ErroneousBosh 2 days ago
Are we stuck with the same brake pedal UX forever?
Comment by migueldeicaza 2 days ago
Comment by whatever1 2 days ago
Coders are the only ones who still should be interested in desktop UX, but even in that segment many just need a terminal window.
Comment by linguae 2 days ago
Whether intentional or not, it seems like the trend is increasingly locked-down devices running locked-down software, and I’m also disturbed by the prospect of Big Tech gobbling up hardware (see the RAM shortage, for example), making it unaffordable for regular people, and then renting this hardware back to us in the form of cloud services.
It’s disturbing and I wish we could stop this.
Comment by xnx 2 days ago
Comment by vjvjvjvjghv 2 days ago
Comment by rustcleaner 1 day ago
Comment by PunchyHamster 2 days ago
But outside of that, I doubt there will be many users actually doing stuff (as opposed to just ingesting content) who will abandon the desktop, and other ones like the Mac UI aren't getting worse
Comment by hulitu 1 day ago
... shitty.
Comment by sprash 2 days ago
This also means that I heavily disagree with one of the points of the presenter. We should not use the next gen hardware to develop for the future Desktop. This is the most nonsensical thing I heard all day. We need to focus on the basics.
Comment by silisili 2 days ago
Comment by WD-42 2 days ago
Comment by array_key_first 1 day ago
They basically never remove features, and just add on more customization. You can get your desktop to behave exactly like Windows 95, if you want.
And the apps are some of the most productive around. Dolphin is the best file manager across every operating system, and it's not even close. Basic things like reading metadata are overlooked in all other file managers, but Dolphin gives you a panel just for that. And then tabs, splits, thumbnails, and graph views.
Comment by rustcleaner 1 day ago
I use XFCE now.
Comment by vortext 2 days ago
Comment by sho_hn 2 days ago
For example, we intentionally optimized Plasma 5 for low-powered devices (we used to have stacks of the Pinebook at dev sprints, essentially a RaspPi-class board in a laptop shell), shedding more than half the memory and compute requirements in just that generational advance.
We also have a good half-decade of QA focus behind us, including community-elected goals like a consistency campaign, much like what you asked for.
I'm confident Plasma 5 and 6 have iteratively gotten better on all four points.
It's certainly not perfect yet, and we have many areas to still improve about the product, some of them greatly. But we're certainly not enshittifying, and the momentum remains very high. Nearly all modern, popular new distros default to KDE (e.g. Bazzite, CachyOS, Asahi, Valve SteamOS) and our donation totals from low-paying individual donors - a decent proxy for user satisfaction - have multiplied. I've been around the community for about 20 to 25 years and it's never been a more vibrant project than today.
Re the fantastic talk, thanks for the little KDE shout-out in the first two minutes!
Comment by kvemkon 2 days ago
I can't imagine what I'd be doing without MATE (GNOME 2 fork ported to GTK+ 3).
Recently I've stumbled upon:
> I suspect that distro maintainers may feel we've lost too many team members so are going with an older known quantity. [1]
This sounds disturbing.
[1] https://github.com/mate-desktop/caja/issues/1863#issuecommen...
Comment by snovv_crash 2 days ago
For content creation though, desktop still rules.
Comment by immibis 2 days ago
Comment by hollerith 2 days ago
When I need to get productive, sometimes I disable the browser to stop myself from wasting time on the web.
Comment by whatever1 2 days ago
Comment by hollerith 2 days ago
I guess the larger point is that you need a desktop to run vscode or Figma, so the desktop is not dead.
Comment by shmerl 2 days ago
Comment by johnea 2 days ago
B) However, even without watching the video, it must be describing corporate product UI, because in the free software world there is a huge variety of desktop (and phone) UI choices.
C) The big question I continue to come back to in HN comments: why does any technically astute person continue to run these monopolistic, and therefore beige, boring, bland, corporate UIs?
You can have free software with free choice, or you can have whatever goggle tells you...
Comment by virtualbluesky 2 days ago