SDF Public Access Unix System
Posted by neehao 2 days ago
Comments
Comment by somat 1 day ago
ssh to applicant@register.public.outband.net
instructions at https://www.public.outband.net; note that it's IPv6-only.
It is pretty pointless; nobody needs or wants a Unix shell account in this day and age. But I had fun setting it up. It started as an exercise to see what a shared multiuser Postgres install would look like and got a little out of control. My current project is getting a rack of Raspberry Pis (6 of them in a cute little case) hooked in as physical application nodes.
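For the curious, per-user provisioning on a shared multiuser Postgres install can be sketched roughly like this. This is a hypothetical sketch, not SDF's actual setup: the user name is a placeholder, and it assumes you run it as the `postgres` superuser.

```shell
# Hypothetical sketch: provision one user on a shared PostgreSQL host.
# "alice" is a placeholder; run as the postgres superuser.
user=alice
createuser --no-superuser --no-createdb --no-createrole "$user"
createdb --owner="$user" "$user"
# Keep other users from connecting to this user's database:
psql -d "$user" -c "REVOKE CONNECT ON DATABASE \"$user\" FROM PUBLIC;"
psql -d "$user" -c "GRANT CONNECT ON DATABASE \"$user\" TO \"$user\";"
```

The REVOKE/GRANT pair is the important part on a multiuser box: by default any role can connect to any database.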
Comment by hanslub42 1 day ago
I do. But I do not need just any Unix shell account, I need old and weird ones! I develop and maintain a portable utility (rlwrap) that is aimed at users of older software, who are often also using older or even obsolete systems.
For years, I used Polarhome (http://www.polarhome.com/) as a "dinosaur zoo" of obsolete systems (thanks, Zoltan!). For every new release, building it on a creaky Solaris or HP-UX machine would expose a few bugs.
Because older systems are being replaced by (much more uniform) newer ones, there is a diminishing need for such extreme portability. This is also the reason that Polarhome closed in 2022.
In spite of this, testing on many different systems improves general code quality, even for users of mainstream systems like Linux, BSD, or OSX.
Of course, I could set up a couple of virtual machines, but that is a lot of hassle, especially for machines with uncommon processor architectures.
Comment by sureglymop 1 day ago
Comment by rlonstein 22 hours ago
> I do. But I do not need just any Unix shell account, I need old and weird ones! I develop and maintain a portable utility (rlwrap) that is aimed at users of older software
Thank you, personally. I've used it in several contexts, not just on old systems; for example, rlwrap is recommended with Clojure (okay, perhaps that's a comparatively small audience).
Comment by marttt 22 hours ago
Comment by mghackerlady 21 hours ago
Comment by mghackerlady 21 hours ago
a powerpc xserve (running OSX server)
a sparc box (on solaris)
an alpha box (on either VMS or Digital Unix)
a pa-risc box (hp-ux)
a modern power box (Rocky or AIX)
an itanium box (running either VMS or NT depending on what the alpha is running)
a pi cluster (plan 9)
and a commodity x86 server (running OpenBSD, FreeBSD, Debian, Hurd, Redox, Serenity, reactos, and AROS).
and make a MOAP (mother of all pubnixes). If anyone has any hardware they'd like to donate, get in contact :)
Comment by icedchai 16 hours ago
I have a Sparc, Alpha, NextStation, and SGI in my collection. I'd like to add an AIX system, ideally with PowerVM/LPAR support. I used to work at a place that built everything on AIX (this was 20+ years ago) and the virtualization functionality was pretty neat.
Comment by mghackerlady 16 hours ago
Comment by zie 21 hours ago
Unless it's a super fun hobby for you, I wouldn't plan on this being very fun after the first dozen random crashes.
Comment by mghackerlady 21 hours ago
Comment by zie 13 hours ago
Comment by electroly 21 hours ago
Maybe in the modern age someone could make a "polarhome in a box" that offers a similar gamut of systems, but via preconfigured emulators that you can simply download and run.
Comment by hanslub42 20 hours ago
Until now, I have used qemu (or rather qemu-system-aarch64 in combination with binfmt-misc) on Linux to emulate e.g. a Raspberry Pi running on arm64. This works very well, but for e.g. Solaris or HP-UX there is the extra hurdle of getting hold of bootable media that will not freak out in the unfamiliar surroundings of a qemu virtual machine.
I have never tried, and it is possible that I overestimate the difficulty...
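For reference, the qemu-system side of that workflow can look roughly like the following. All file names here are assumptions, and exact flags vary by qemu version; treat it as a sketch rather than a recipe.

```shell
# Hypothetical sketch: boot a Raspberry Pi 3 image under qemu-system-aarch64.
# kernel8.img, the .dtb, and raspios.img are placeholders extracted from a
# real Raspberry Pi OS image; the raspi3b machine type needs a recent qemu.
qemu-system-aarch64 \
    -machine raspi3b \
    -kernel kernel8.img \
    -dtb bcm2710-rpi-3-b.dtb \
    -drive file=raspios.img,if=sd,format=raw \
    -append "root=/dev/mmcblk0p2 rootwait console=ttyAMA0" \
    -nographic
```

The raspi3b machine emulates the SoC but not the Pi's firmware boot chain, which is why the kernel and device tree are passed in explicitly.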
Comment by wmlavender 18 hours ago
KVM (x86 and x86_64): Linux, BSD, OSX, Hurd, Haiku, MSDOS, Minix, QNX, RTEMS, Xenix, Solaris, UnixWare, Windows 95 through 11.
QEMU (for non-x86): AIX 4, Linux (m68k, arm, sparc, powerpc, mips, riscv), OSX (ppc), Solaris 8 (sparc), SunOS 4.1.4 (sparc), Windows NT 4 (mips)
SIMH (for old DEC computers): NetBSD, VMS, Ultrix, RSX-11M, RT-11
Some of them can be quite finicky to get to work. Xenix was especially hard.
Solaris 11 is quite easy to get running in QEMU/KVM though. You can download the media from Oracle.
The only real hardware I routinely run has either Debian Linux, macOS, or Raspberry Pi.
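For anyone curious what the SIMH side of a setup like that looks like, here is a minimal sketch for the MicroVAX 3900 simulator. The disk image name and memory size are assumptions; `vax` is the name of SIMH's MicroVAX 3900 binary.

```shell
# Hypothetical sketch: boot an OS (e.g. NetBSD/vax or VMS) from a disk image
# under SIMH's MicroVAX 3900 simulator. netbsd-vax.dsk is a placeholder.
cat > vax.ini <<'EOF'
set cpu 64m
attach rq0 netbsd-vax.dsk
boot cpu
EOF
vax vax.ini
```

The ini file is just a list of SIMH console commands, so the same config can be replayed interactively at the `sim>` prompt.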
Comment by NoSalt 18 hours ago
This is not true at all. I have been a member of SDF for over 15 years now, and I use it all the time. Most recently, HostPapa tried to tell me my SFTP issue was on my end, and I told them that I was able to recreate the problem from both the west coast and the east coast: my home on the east coast and SDF on the west coast. Finally they listened and fixed my issue ... which was on THEIR side, not mine. I like having the ability to compute from different parts of the country, as it lets me do things like that.
Comment by assimpleaspossi 23 hours ago
Comment by asimovDev 1 day ago
If you don't feel like watching a long series, I recommend checking out Macross Plus, from the director of Cowboy Bebop and Samurai Champloo.
The series is known as Robotech in the USA. The original series is not legally available in the USA to my knowledge, but should be available on Japanese Blu-rays with English subtitles or on your favorite Linux ISO sharing website. The rest of the entries are on Disney+ or the aforementioned websites.
Comment by lizknope 1 day ago
Comment by 0x264 22 hours ago
Comment by CursedSilicon 1 day ago
He's an absolutely kind soul who is deeply interested in all kinds of retro projects. I wish there were more folks like him in tech generally
Comment by incanus77 19 hours ago
Comment by buildbot 21 hours ago
Comment by CursedSilicon 15 hours ago
(Disclaimer: I'm an exhibitor. So I'd love more attendees!)
Comment by iszomer 1 day ago
Comment by kristopolous 1 day ago
Somehow I still remembered most of the shell syntax from a book I read about it, probably in 2001. Don't ask me ... I don't know how either.
Got bored in about 10 minutes but still, another box checked off!
Comment by fleeno 23 hours ago
Comment by Suzuran 22 hours ago
Comment by ButlerianJihad 21 hours ago
https://www.pearsonhighered.com/assets/samplechapter/0/2/0/5...
Comment by Suzuran 21 hours ago
Comment by fleeno 20 hours ago
Comment by Suzuran 19 hours ago
The slowest would be the 11/725, which was a cost-reduced 11/730 that had a reduced clock speed and half of the bus slots filled with epoxy to limit expansion. The 11/725 was so slow that using it was an act of masochism; it was slower than your 11/23+.
Those models were pretty rare, though. Even though they were cheaper than an 11/750, the performance drop from the 750 to the 730 was too severe to justify even the reduced cost. If that were all, then maybe replacing PDP-11s in industrial applications might have saved it, but the 730 was still too expensive versus the existing PDP-11 products, and the 725's limited expansion made it less attractive than those same PDP-11 products. The PDP-11 thus outlived both the 725 and the 730.
Comment by icedchai 22 hours ago
Comment by mghackerlady 21 hours ago
Comment by kstrauser 20 hours ago
Comment by mghackerlady 19 hours ago
Comment by avhception 1 day ago
Comment by mghackerlady 21 hours ago
The PDP series brought us Unix and GNU, and the VAX was the only mainframe capable of competing with IBM. DEC was the largest terminal manufacturer (they made the VT100 and VT220; if you've ever run a terminal emulator, chances are it's emulating one of those or a machine that did). They created CP/M (and by extension DOS). DEC is very well known.
Comment by icedchai 15 hours ago
Comment by mghackerlady 15 hours ago
Comment by icedchai 15 hours ago
Comment by mghackerlady 13 hours ago
Comment by ratxue 19 hours ago
Comment by dharmatech 1 day ago
Side note: here's my workflow for running Plan 9 on Windows:
Comment by seblon 1 day ago
Just a question to HN: should I wait longer and try again? Or should I simply publish the vulnerabilities somewhere? If yes, where? It's the first time I've found a vulnerability on my own, and I'm not sure how to deal with it.
Comment by bayindirh 1 day ago
Their plate is already quite full and they operate a whole universe of services, so cut them some slack.
It's not an ordinary service exposed to the internet trying to turn a profit. They run SDF, two Mastodon instances, a mail server, a Git server, and are trying to salvage/keep alive a living computer museum (SDF Vintage Systems), etc. etc.
Comment by dwedge 23 hours ago
Comment by SyneRyder 22 hours ago
SDF welcomed everyone openly during the initial Mastodon waves, so it was all very Eternal September.
If you're joining to make a spare account to participate with SDF people, awesome! But if you want it as your identity for all of Fedi, I think that would be a bad experience. I ended up getting my own MastoHost account for a while and it was a vastly better experience, until I burned out on Fedi.
SDF is a super fun place to experiment with Gopher though. I absolutely recommend getting your own Gopherhole on SDF. It's like the old Geocities days but in ASCII. (And make sure you grab Lagrange as your GUI Gopher / Gemini client. I liked Phetch as my terminal Gopher client.)
Comment by bayindirh 21 hours ago
We've completed our first phase of database clean-up; thank you for your patience. The impact on performance was heavy, but it was a necessary step. All active users and their posts, profiles, connections, and media will be migrated to the new servers. Once that has been completed, any remaining data will stay online for further migration and clean-up. Our instance has had nearly 10 years of constant daily operation, but we ran into a migration wall which held us back on 4.1.x. Now that it is deprecated, we will do our best to jump to the latest version rather than migrate through. Your support and patience have been greatly appreciated.
Comment by bezier-curve 1 day ago
Comment by beej71 15 hours ago
I agree with you that the social downtime is bad. People just won't use the service.
Comment by zorked 1 day ago
Comment by TacticalCoder 1 day ago
You can't have it both ways: if it's not a big deal, then he can publish it.
If you say "Don't publish", then you acknowledge that it's a big deal.
I say to GP: "Congrats on finding a shell escape, it's always a big deal. But don't publish it... yet".
Give them a chance to fix it. But if they don't even answer the emails, even just to say "thx, we're busy, we can't fix it right now but will do", then at some point you just publish.
It doesn't take long to answer an email saying "thanks, we'll fix it eventually".
Comment by Suzuran 22 hours ago
If they can't commit to a hard timeline of less than a few days, then publish. What happens next is not your fault - it was inevitable anyway.
Edit for clarity: This is just in general, not specifically SDF or small orgs or large orgs. The internet does not care about the difference. The internet just does not care period. Nobody is going to give anyone else any breaks, and especially not a botnet.
Comment by nabogh 1 day ago
Comment by glitchc 16 hours ago
Comment by seblon 4 hours ago
But the whole thing is: if you can escape as a non-verified user, then you can mass-automate it to do DDoS etc...
Comment by pratyahava 20 hours ago
Comment by seblon 4 hours ago
Comment by aboardRat4 1 day ago
Perhaps just run "bash -c 'stress --cpu 64; echo fix your shell escape'" or something like that.
Comment by yashasolutions 1 day ago
Some security practices sometimes feel like someone stabbing you just to prove you could be stabbed. Then they point at the wound and say: "See? You should be more careful."
Yes, the risk is real, but creating harm to demonstrate it isn't the same as protecting people.
Comment by bayindirh 1 day ago
If I ever experienced something like that, I'd be banning the person (or limiting their resources drastically) for 60 to 90 days to bring the impact of this matter to their attention.
Anything affecting users on a system is not harmless.
Comment by justsomehnguy 18 hours ago
Comment by anthk 1 day ago
You can do a lot with S9 Scheme and the Unix API/syscalls it supports.
Comment by trashb 1 day ago
I regularly visit and enjoy reading the phlogs of their members as well.
Comment by mackeye 1 day ago
Comment by exitnode 1 day ago
Comment by tomhow 1 day ago
SDF Public Access Unix System - https://news.ycombinator.com/item?id=32340635 - Aug 2022 (29 comments)
SDF Public Access Unix System - https://news.ycombinator.com/item?id=31076886 - April 2022 (46 comments)
SDF Public Access Unix System - https://news.ycombinator.com/item?id=14940790 - Aug 2017 (29 comments)
SDF – Public Access Unix System - https://news.ycombinator.com/item?id=14134798 - April 2017 (51 comments)
Comment by jaypatelani 1 day ago
Comment by bombcar 21 hours ago
Comment by buildbot 21 hours ago
Very cool how they tried to move and preserve many of the Living Computer Museum's computers before Paul Allen's sister could sell them all off. https://wiki.sdf.org/doku.php?id=vintage_systems:lcml_collec...
I remember seeing the TOAD systems when I visited in 2016, long before they closed; it's very sad that people no longer get to experience computer history in person the same way.
Comment by jbaber 2 days ago
"this page was generated using ksh, sed and awk"
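In that spirit, here is a tiny sketch of what HTML generation with awk can look like. The input list is made up for illustration; SDF's actual generator isn't published here.

```shell
# Hypothetical sketch: turn a newline-separated list into an HTML <ul>,
# in the ksh/sed/awk spirit of SDF's page footer. Input items are made up.
printf 'SDF\nGopher\nNetBSD\n' | awk '
BEGIN { print "<ul>" }
      { printf "  <li>%s</li>\n", $0 }
END   { print "</ul>" }'
# Output:
# <ul>
#   <li>SDF</li>
#   <li>Gopher</li>
#   <li>NetBSD</li>
# </ul>
```

A real generator would add HTML-escaping with sed before the awk pass, but the pipeline shape is the same.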
Comment by jbaber 2 days ago
Comment by bombcar 21 hours ago
Comment by miggol 23 hours ago
> While we did initially start out on a single computer in 1987, the
> SDF is now a network of 8 64bit enterprise class servers running
> NetBSD realising a combined processing power of over 21.1 GFLOPS!
Which piqued my interest about how that compares to today's computers. Nvidia's venerable 1080 Ti from 2017 measures about 11300 GFLOPS, or 11.3 teraFLOPS: roughly a 535-fold increase.
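A quick sanity check of that ratio with awk, using the two figures quoted above:

```shell
# 1080 Ti (~11300 GFLOPS) versus SDF's stated 21.1 GFLOPS:
awk 'BEGIN { print int(11300 / 21.1) }'   # prints 535
```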
Comment by Narishma 23 hours ago
Comment by kls0e 19 hours ago
Comment by pestle 1 day ago
Comment by bombcar 21 hours ago
It’s much less needed now.
Comment by trbleclef 19 hours ago
Comment by pailingems 20 hours ago
Comment by hsnewman 1 day ago
Comment by vjay15 1 day ago
Comment by user3939382 2 days ago
Comment by anthk 23 hours ago
Comment by whalesalad 20 hours ago
Comment by adaptit 1 day ago