College instructor turns to typewriters to curb AI-written work
Posted by gnabgib 2 days ago
Comments
Comment by throwatdem12311 2 days ago
We already had AI-proof education.
Comment by Al-Khwarizmi 2 days ago
Then we did a university reform, partly with the excuse of aligning with the rest of the EU within the Bologna process (and I say "excuse" because that's what it was: the politicians introduced some things under that pretense that weren't like that in the rest of the EU at all, and it was perfectly possible to comply with Bologna without doing them), and partly to copy the US/UK ways. And one of the pillars of that reform was continuous assessment, and evaluating coursework.
As a consequence of this, first of all, working-class students were royally screwed. Suddenly it wasn't OK to just organize yourself to prepare for the exam; you had to attend lots of sessions to earn points, which put students who work at a disadvantage. And second, passing by cheating became possible, even before LLMs. People tend to forget that before everyone got access to ChatGPT, some people had access to experts (family members, or even paying someone to do the work).
Now that this kind of cheating has been democratized and everyone can do it, instead of just the most privileged with access to experts or money to pay them, people act all outraged. Yet pretty much nothing is being done, except for using snake-oil detectors, or sometimes increasing the difficulty of assignments to make them LLM-proof (which screws the students who actually want to learn without LLMs).
They spent years indoctrinating us (professors) in training courses on how the old exam-based ways were wrong (the "Napoleonic" model, they called it... none of them seems to entertain the thought that maybe if it had been working essentially unchanged since Napoleon it wasn't that bad, and you need solid reasons to change it beyond "this is old so let's change") and the new ways were the bee's knees. Like in the Milgram experiment, it's difficult for people to back down and acknowledge that they have been wrong, even when the solution is obvious.
Comment by cataphract 2 days ago
I definitely could tell the difference, though most of the time I just studied for a full 4-7 days before the exam.
Comment by wongarsu 2 days ago
It was easy to cheat on the assignments. Working on them in groups was common and sometimes encouraged. The only person you could really cheat was yourself (and a TA who had to grade one more exam)
Comment by mattmanser 2 days ago
So that was the justification used for switching to a less impactful final exam.
No idea how true that is.
We were also told learning a phonetic alphabet was better for young children learning to read than using the old ABC system.
As far as I have heard, that turned out to be based on one person's fantasy and zero evidence and has actually had negative impact on children learning to read.
Comment by cornholio 2 days ago
Comment by andai 1 day ago
Comment by inemesitaffia 1 day ago
Comment by kelipso 2 days ago
Comment by rgblambda 1 day ago
To my knowledge they still teach about auditory/visual/kinesthetic learners and how you should structure the way you learn around which one you are. This has been debunked for decades.
Comment by seanmcdirmid 1 day ago
Not just the UK, pedagogy/education is a very soft science, along with any other field that revolves around human behavior (psychology, sociology, etc...).
Using AIs in experiments and studies will be an improvement even if they do not accurately reflect human behavior, just because you don't need a harm review and you can repeat your experiments multiple times under different variables.
Comment by Al-Khwarizmi 2 days ago
Comment by Telemakhos 1 day ago
Comment by Eisenstein 1 day ago
Comment by necovek 1 day ago
Comment by elAhmo 1 day ago
Comment by schnitzelstoat 1 day ago
In the UK it's common for exams to have almost all of the weight. In my Physics degree almost all courses were entirely dependent on the exams.
Including a final exam, known as General Problems, which examined the entire four-year MPhys course and where even getting 30% was considered a good grade!
Comment by Al-Khwarizmi 1 day ago
Comment by nradov 1 day ago
Comment by Al-Khwarizmi 1 day ago
Comment by oersted 1 day ago
It is definitely a lot more work for the professors though; most of my family are teachers. It's a lot of assessments, and it's very rare to have funding for TAs. Some think that the extra work is worthwhile for the sake of transmitting the knowledge more effectively, but not all of them do.
Frankly, you sound a bit bitter about it from the professor's perspective, and somewhat rationalizing why it is bad for the students. But students do generally appreciate it, and yes good students too, not just cheaters. I think both good and bad students end up learning more and hating the process less.
Your comments on Bologna do resonate though; it was very confusing when I continued my studies in Germany and the Netherlands. The massive reforms were supposed to be for alignment with the rest of the EU, but if anything things got more misaligned. They unified all 3- and 5-year degrees into 4-year degrees, but in most of the EU all degrees are 3 years now, for instance.
Regarding the parent comment, indeed, my Computer Science degree was mostly hand-written exercises and exams, and it wasn't that long ago. The degree is about fundamentals, about understanding concepts and applying them, about the tools you need to learn anything in CS afterwards. You are expected to learn most of the practical skills for building software on your own, since they are ever-changing. And I have to say, that style of education has served me very well in my career.
PS: I was also surprised to learn that most of the undergrad exams in Germany, and some in NL, are oral. I can see how that might be a disadvantage to some, but writing is also a disadvantage to others. I quite liked it, less intense than a long written exam, and I think the professor can get a much clearer understanding of the student's grasp of the subject. But again, it's a ton of work for the professor, 20-30 mins per student one-on-one, giving them your full attention, adds up quickly.
Comment by CBarkleyU 1 day ago
Not sure when this was supposed to be the case, but at actual universities (not meant in a derogatory way; Germany has two types of higher education) in the hard sciences, most classes are graded on a single written exam. Both in undergrad/bachelors and masters.
Unless things have drastically changed in the last five years...
Comment by oersted 18 hours ago
I know that an oral exam might seem less serious and rigorous, but I do think the professor can get a better grasp of how much the student actually understands the subject through an interactive interview.
Comment by phatskat 2 days ago
The best was when she barely unscrewed one of those big DIN connectors, so that at a quick glance it looked fine, but wasn't fully connected.
Comment by onionisafruit 2 days ago
Comment by thunfischtoast 2 days ago
That's evil haha. It's the case where you unplug and plug again everything, changing seemingly nothing, but then it works
Comment by jazzyb 2 days ago
Comment by red-iron-pine 23 hours ago
Sounds like some of the technical exams I've taken, and/or one or two job interviews.
Comment by altmanaltman 2 days ago
Comment by Y-bar 2 days ago
If it is mostly a ”show your work”/”show your reasoning” kind of grading, where your breadth and depth of attempts are more important than success, then it seems OK.
Comment by ryandrake 1 day ago
Comment by onionisafruit 1 day ago
Comment by bitwize 1 day ago
Comment by zoom6628 2 days ago
Lots of skills from those old days have been lost or ignored in the name of productivity.
Comment by malux85 2 days ago
Comment by phatfish 2 days ago
The internet enabled all the complexity we have today. LLMs will have a similar effect, but instead of engineers actually having to understand the system (even in its complexity), they will just be querying the oracle to build things or solve problems.
When the oracle can't help (or maybe refuses to) is when it gets interesting.
Comment by synack 2 days ago
Comment by brunoarueira 1 day ago
Comment by jodrellblank 1 day ago
Comment by kmoser 1 day ago
How do you define "productive?" Lines of code written per day? Bugs fixed per man hour? Fewest reported bugs per end user?
The fastest compiler in the world won't help you find all the runtime bugs that simply wouldn't have existed in the days of punch cards, when code was written with more care and attentiveness since there wasn't a fast edit/compile/test development loop. YMMV, of course.
Comment by nsyne 2 days ago
It's a shame that they are also way more susceptible to cheating with AI.
Comment by Aurornis 2 days ago
They were more prone to cheating before AI, too.
Cheating has always existed at some level, but from talking to my couple of friends who teach undergrad level courses the attitudes of students toward cheating have been changing even before AI was everywhere. They would complain about cohorts coming through where cheating was obvious and rampant, combined with administrations who started going soft on cheating because they didn’t want to lose (paying) students.
AI has taken it further, with students justifying it not as cheating but as using tools at their disposal.
I was talking to my friend about this last week and he was frustrated that several of his students had submitted papers that had all the signs of ChatGPT output, so he asked them simple questions about their papers. Most of them “couldn’t remember” what they wrote about.
It’s strange to me because when I went to college getting caught cheating was a big problem that resulted in students getting put on probationary watch and being legitimately scared of the consequences. Now at many schools cheating is routine and students push the boundaries of what they can get their classes to accept because they have no fear of any punishment. YMMV depending on the institution
Comment by nradov 2 days ago
Comment by Aurornis 1 day ago
An interesting side effect of the AI gold rush is that companies are starting to look critically at these do-nothing email jobs where someone forwards emails around and makes slides and Notion pages.
I’ve worked with many who occupied jobs that didn’t contribute much other than organizing text and sharing it around, but they got a pass because it looked helpful enough. Now it’s a lot harder to justify those positions when management realizes that having the not-really-competent person summarize communications and documents isn’t much better than having ChatGPT do it.
Comment by DaSHacka 1 day ago
Comment by jimbokun 2 days ago
Comment by nradov 2 days ago
IBM used to hire software developers based on aptitude test scores regardless of formal education, then put them through an extensive internal training program. It worked fine.
Comment by ghaff 1 day ago
Comment by watwut 2 days ago
People who got through via cheating in college tend to be low performers in work for the exact same reasons.
Comment by LtWorf 2 days ago
Comment by razakel 1 day ago
Comment by kcexn 2 days ago
So a student who only understands the basics should be able to answer most of the easy questions and students who have a deeper understanding can answer the harder ones.
Well-written exams should feel pretty fair and leave students feeling like the result they got is proportional to the effort they put into studying the material (or at least how well they personally felt they understood the material).
Comment by CSSer 2 days ago
Comment by sudahtigabulan 2 days ago
Is this kind of test - many short questions - a standard thing for math in your country?
My university exams were pretty much all "2-question", in 90 minutes.
The first half was an essay where you have to reproduce a lesson from the curriculum, in your own words.
The second half was "the formulas" - you have to develop one or two formulas from first principles.
I once got an A- even though I got "the formulas" half very wrong. As the teacher explained later, I had simply placed the origin of the coordinate system somewhere different from where the textbook did. And this was supposed to be a bad teacher - he actually gave Ds to almost all of us (180 people). This was a makeup exam.
Comment by hawaiianbrah 2 days ago
Comment by CSSer 2 days ago
Comment by bawolff 2 days ago
Comment by ransom1538 2 days ago
Comment by bawolff 2 days ago
Comment by MengerSponge 1 day ago
Comment by thaumasiotes 2 days ago
You've never been a teacher.
Comment by lorenzhs 2 days ago
Comment by nullsanity 2 days ago
Comment by gpm 2 days ago
Assignments and projects are great for learning, but suck for evaluation.
Comment by lokar 2 days ago
Another example, lit classes where the grade is based on time limited, open book exams, hand written in "blue books"
Read the book, pay attention in class, spend 90 min writing an essay, and you are done.
Comment by jason_zig 2 days ago
Comment by musicale 2 days ago
However I suspect that there are many who 1) are more concerned about the short term outcome, 2) consider the degree/diploma to be little more than a meal ticket or arbitrary gatekeeping without any connection to learning, 3) view the work as a pointless barrier to being handed said diploma, and/or 4) don't see the value of human learning in a world where jobs are done by AI and AI systems routinely outperform humans on complex tasks.
Comment by TheOtherHobbes 2 days ago
A lot of Gen Z are ferociously anti-AI, but for tribal and emotional reasons, not because of a nuanced understanding - which is ironic, because the nuanced reasons for being wary of AI are much stronger than the usual talking points about "stealing art".
Being tribal and emotional is going to make Gen Z easier to replace, because nuanced strategic insight is less common and more useful.
Comment by antonymoose 2 days ago
Personally, I dropped out despite a full ride+, because why would I put in work for a no-name state school when I already had an FTE job as a developer out of high school anyway.
Turns out fraudulent action can still get the bag.
Comment by WoodenChair 2 days ago
Comment by nradov 2 days ago
Comment by lupire 2 days ago
Comment by II2II 2 days ago
The other thing that feedback feeds into is credentials. I realize that some people are dismissive of this aspect of the degree, but it is important to pursue further studies or secure a job. While you can argue that these people are only cheating themselves, and some of them are cheating themselves, a great many will continue to cheat as they advance in academia or the workforce. In other words, they are cheating others out of opportunities.
Comment by jimbokun 2 days ago
And for most students that’s all they really care about.
If the companies stop valuing the diplomas, students will stop paying tuition to attend, and the universities eventually collapse.
Comment by TheOtherHobbes 2 days ago
You can imagine a world where the Corporate or State AI handles education, tailors it to individual student levels and talents, and assigns work based on its own direct experience of a lifetime of interaction.
You can also imagine that in that world where most humans would be redundant - unless the AI was optimising for human-to-human jobs and for evidence of unusual insight and creativity, not managing bullshit work for corporate profit.
Comment by lupire 2 days ago
Comment by jmye 2 days ago
Comment by syntaxing 2 days ago
Comment by Cthulhu_ 2 days ago
Unfortunately a lot aren't, they feel like they have to be there or these courses are the only path for them to get a good job. And unfortunately they end up in the workforce, too. You'll often see teams with one good developer and a lot of hangers-on.
Comment by fma 2 days ago
Comment by zdragnar 2 days ago
Comment by nradov 2 days ago
Comment by jillesvangurp 2 days ago
Writing papers is a useful skill to have, and many students aren't very good at it. I taught some classes during my PhD and supervised some students with their master's and PhD thesis work. Many students get their degrees without that really being addressed. At least computer science degrees in the Netherlands spend very little time on writing skills: you get students with high-school levels of English and Dutch, and that's it.
I learned to write properly only when I started my PhD. My supervisor made me do it right before he allowed me to submit papers for publication.
AI might actually be good for education long term. It will result in a more personalized approach, which I think is good. There are plenty of ways to test students that are more engaging and interesting for both teachers and students than some of the old ways. You can't fake knowledge when you do a verbal test. Or test people with a good old written exam.
And of course, for teachers, you can automate a lot of the verification work, which can otherwise be substantial.
Comment by amarant 1 day ago
Which strikes me as a terrible way to teach and test programming skills. If you're teaching programming without so much as syntax highlighting, you're not preparing your students for anything that even remotely resembles the industry they aspire to work in.
Honestly, these days universities should probably find a way to incorporate AI into their teaching, rather than fight it. Anything else is betting that AI will not stick around, which strikes me as a hopelessly naïve bet. Especially for software development.
I don't pretend to have all the answers; I don't know how to teach systems thinking in an appropriate way either. But I'm pretty sure typewriters aren't it. Unless your students are hoping to get hired by Ada Lovelace, it's just not going to be relevant.
Comment by halJordan 1 day ago
Education isn't about churning out a cog fitted to a bigger gear. A college education should not be preparing you for IntelliJ or familiarizing you with VS Code.
It should be teaching the foundational skills of the discipline. And I just don't understand how you look at the amazing things we've accomplished since WW2 and say "the education system that taught this fucking sucks".
Comment by amarant 1 day ago
Students today aren't studying to repeat the progress made since the forties; that's been done by their grandparents. They're looking to drive the next 40 years of innovation, and if that involves typewriters I'll eat my hat.
Comment by Balooga 1 day ago
Comment by senbrow 1 day ago
The vast majority of students and employers treat them as vocational certificates in practice, and the profession would almost certainly benefit from adapting curricula to more closely match that reality.
Foundational concepts are still necessary, but I don't buy the argument that we should continue teaching like it's 1946.
Comment by Cthulhu_ 2 days ago
But there were already heaps of problems with tech in education before AI.
My CS projects were often pretty free-form, so in theory I could've just used AI - today, anyway. But a big part of the grade was a face-to-face interview where you actually had to talk about the code you wrote. Anyone coasting along with other people without actually doing any work themselves would fall through then.
Comment by stingraycharles 2 days ago
Comment by nradov 2 days ago
Comment by WoodenChair 2 days ago
Comment by hsbauauvhabzb 2 days ago
Comment by nradov 1 day ago
Comment by lupire 2 days ago
Comment by tehjoker 2 days ago
Comment by nradov 2 days ago
Comment by eudamoniac 2 days ago
Comment by robryan 2 days ago
Comment by icelancer 2 days ago
Comment by raincole 2 days ago
Comment by mbf1 2 days ago
Comment by burnto 1 day ago
The only exception is that when I got into grad-level classes we did have some big programming projects. But most of that programming happened on SPARCstations, and it was actually just easier and more productive to sit at the machine in person, with its nice big (at the time) display, alongside all the other folks doing programming projects. Those machines had the standard dev toolchains provisioned, which weren't easy (at the time) to set up on a dorm-room Mac or Windows computer.
I really think a lot of the ways we can reduce reliance on AI for thinking is to just set up systems where it’s not an inviting or rewarding option.
Comment by asimpletune 1 day ago
I don't want to be polemic, but I really miss those days.
Comment by burnto 1 day ago
One thing I recall is that the grading policy made it very clear that minor syntax issues were inconsequential in handwritten answers. And more advanced classes only wanted pseudocode. Which are exactly the right priorities.
Comment by pokstad 1 day ago
Comment by BobbyTables2 2 days ago
Comment by Izkata 2 days ago
Comment by rz2k 1 day ago
He concluded the class by talking about the importance of observing patients, and pointed out that he had tasted a different finger than the one he had put in the beaker.
Comment by yurishimo 2 days ago
Comment by lupire 2 days ago
Comment by eikenberry 1 day ago
Comment by giancarlostoro 1 day ago
If my college is doing this, I cannot imagine how many others are also undermining their entire goal: education.
Comment by andai 1 day ago
Apparently you learn to double check your work!
Comment by ghighi7878 2 days ago
Comment by cyberax 2 days ago
Comment by Cthulhu_ 2 days ago
Comment by cyberax 1 day ago
Comment by bitwize 1 day ago
But yeah, everything was hand-written. On sheets of paper with pencil. I even had to write x86 assembly out by hand for my CPU architecture class. Of course, laptops were available back then but not cellphones and certainly not LLMs, so cheating by electronic means probably presents a stickier wicket now than it did back then.
Comment by Eisenstein 1 day ago
Comment by wouldbecouldbe 2 days ago
Comment by casey2 1 day ago
Comment by j45 1 day ago
I'm sure they had some kind of submit-your-code assignment and used testing as a way to grade the submissions.
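As a rough sketch of what test-based grading like that might look like (the function names and cases here are entirely hypothetical, not from any actual course):

```python
# Hypothetical autograder sketch: run a submitted function against
# a set of (args, expected) test cases and tally the score.

def grade(func, cases):
    """Return (passed, total) for func run against each test case."""
    passed = 0
    for args, expected in cases:
        try:
            if func(*args) == expected:
                passed += 1
        except Exception:
            pass  # a crashing submission simply loses that point
    return passed, len(cases)

# Example: grading a student's Fibonacci implementation.
def student_fib(n):
    return n if n < 2 else student_fib(n - 1) + student_fib(n - 2)

cases = [((0,), 0), ((1,), 1), ((10,), 55)]
print(grade(student_fib, cases))  # (3, 3)
```

In practice this would be wrapped in sandboxing and timeouts, but the core idea, grade by running the code, is that simple.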
Comment by alanmercer 2 days ago
Comment by SamHenryCliff 2 days ago
Comment by recursivedoubts 2 days ago
I now do 50% project work, 50% in-person quizzes: pencil on paper, with one page of notes.
I'm increasingly going to paper-driven workflows as well, becoming an expert with the department printer, printing computer science papers for students to read and annotate in class, etc.
Ironically, the traditional bureaucratic lag in university might actually help: we still have a lot of infrastructure for this sort of thing, and university degrees may actually signal competence-beyond-ai-prompting in the future.
We'll see.
Comment by zamadatix 2 days ago
The reason was less for myself and more because anything group-related suddenly shot up in quality when the individual work classmates were graded on couldn't be fudged.
Comment by bee_rider 2 days ago
* It’s sort of unnecessarily high stakes for the students; a couple hours to determine your grade for many hours of studying.
* It’s pretty artificial in general; in “real life” you have the ability to go around online and look for sources. This puts a pretty low ceiling on the level of complexity you can actually throw at them.
Comment by acbart 2 days ago
Comment by kelnos 2 days ago
For presentations, usually you spend a lot of time preparing for them (similar to exams), building a slide deck or pages of notes that you refer to while giving the talk (not similar to exams). Sure, you do have to be able to think on your feet, but I don't think the comparison to a sit-down exam is all that apt.
For mundane work tasks, you have the internet and whatever reference materials you want (including LLMs, these days); this sort of thing is so different from a sit-down exam that it's almost comical that you'd try to equate the two.
I'm not saying I know of a better way to evaluate learning than proctored, in-person exams, but suggesting that sort of situation is particularly relevant to real life... no, no way.
Comment by mettamage 2 days ago
The software engineer one: here is a takehome assignment. One week later: finished!
To be fair, they both represented pretty well what work I'm going to do. The data analyst didn't show that well how much I'd also be data engineering, but whatever, I was a SWE before having a DA stint. Back to SWE again though.
Comment by fluoridation 1 day ago
Sounds like you're saying that it's acceptable to be a little foggy about the limits of your knowledge, as long as you remember the core foundations. For a first year medicine student, the edge of their knowledge will include things that are the core foundation of a practicing doctor. Why should such a student be tested as if he already had several years of familiarity with the subject when this is all relatively new material to him?
Comment by bee_rider 1 day ago
> Proctored, in-person exams are the only reliable mechanism we have for ascertaining if a specific individual has mastered key fundamentals and can answer relevant questions about them in a relatively timely fashion. Everything else is details and thresholds - how fast do you need to be able to recall, how deep, what details are fundamental.
I don't think this is how people actually engage with exams. I had a lot of folks in office hours who treated the exam as the ceiling of their competence, rather than the floor, and did things like cram or try to figure out exactly what topics would be on the test to study just those. If the goal is to establish a 100% solid foundation for things you have to know to be a professional (which I think is a great goal), I prefer something like Mastery Learning to the conventional exam process. (Maybe we could call Mastery Learning conventional exams with a different, unusual set of thresholds, if we want to look at it that way.)
> From there, I think it's fine to hate poorly made exams, and it's a given that many folks making exams have no idea what they're doing (or don't have the resources to do it right). But the premise of an exam is not completely divorced from reality.
I worked with some professors who I thought gave good exams, and some who gave less good ones, so I don't think the premise is completely divorced from reality. But it seems more like something the good instructors overcame, rather than a construct that was really helpful.
Comment by deepsun 2 days ago
Whether it's good or bad I don't know, I think US higher education focuses too much on ability to produce huge amounts of mediocre work, but that's the idea behind exams.
Comment by eichin 2 days ago
Comment by simpaticoder 2 days ago
Comment by acbart 2 days ago
Comment by kelnos 2 days ago
When I sit down to debug a complex application, I'm drawing on my prior 25+ years of experience. While I certainly would rather fix the problem faster rather than slower, I don't have a time limit, and taking my time (or even leaving the problem alone for hours or days) is usually more effective than trying to work quickly and get everything done immediately.
The last time I sat for an exam was in 2003, and I honestly have not experienced anything in life since then that feels like that. Even job interviews have not felt similar enough to me to evoke that same feeling. (Frankly, I've enjoyed most job interviews; I don't think I've ever enjoyed an exam.) That's just my experience, of course, but I don't feel like an outlier.
Comment by II2II 2 days ago
Sort of. In real life, you are expected to have immediate knowledge of your field and (in some environments) be able to perform under pressure. I'm not going to pretend the curriculum is a perfect match for what people should know, but it does provide a common baseline to be able to have a common point of reference when communicating with colleagues. I would suggest the most artificial thing about exams is the format.
> It’s sort of unnecessarily high stakes for the students; a couple hours to determine your grade for many hours of studying.
I don't like dismissing the ordeal of people who face test anxiety, but tests are not really high stakes. There is a potential that a person will have to repeat a course if it is a requirement for their degree. At least at the institutions I attended, the grade distribution across exams and assignments, combined with a late drop date, meant that failing a course was only an option if you chose it to be. A student may be forced to face some realities about their dedication/priorities, work habits, time management, interests, abilities, etc. It may force a student to make some hard decisions about where they want their life to lead, but it does not bar them from success in life. And those are the worst-case scenarios. A more typical scenario is that you end up with a lower GPA.
Comment by fluoridation 1 day ago
How is that not high stakes? The result of several months worth of effort hinges on what they do during a 2-3-hour window. If you had to build something and the last step involved a procedure that could potentially tear down everything you made, you would try to find a way to redesign it so that didn't happen, or to limit the scope of the damage.
Comment by Shorel 1 day ago
If you, on the other hand, attend meetings, you will need both deep knowledge (which requires many hours of studying) and fast thinking, and the questions will make you realize you know little about many things. You need to have general knowledge of many things, not just whatever you are building at the moment. If you are successful in these meetings, your salary can and will grow.
Also, some meetings will make you realize that college and/or university are life in easy mode. Very few things you consider hard in college will be work-level hard.
Comment by zamadatix 2 days ago
The point is more about whether the graded work is actively reviewed than which individual choice is ideal or not though. Whether it's electronic or written, remote or in person, weighted towards exams vs continuous are all orthogonal debates to the problem of cheating/falsely claiming work.
I had attended a few courses over a decade ago and just completed a degree recently. The methods of cheating have changed, but not because of pencils vs keyboards.
Comment by dublinstats 2 days ago
Comment by ssl-3 2 days ago
That's probably a good thing to filter on for, say, the navigation role on all kinds of crafts (from land to sea to space). There are naval roles where navigating with a sextant and memory is an important skill to have, and to test for.
But that operating-in-a-vacuum skill doesn't relate well to roles that don't need to exist in a vacuum. In most of the jobs in the real world, we get to use tools -- and when the tools go out to lunch, we don't revert to the Old Ways.
When an accountant's computer dies, they don't transition back to written arithmetic and paper ledgers. Instead, someone who fixes computers gets it going again, and they get back to work as soon as that's done.
Comment by dublinstats 2 days ago
Comment by lelanthran 2 days ago
I dunno how you work, but I'd be getting raised eyebrows from people watching me hit Google for any question required of my role.
I mean, we're not talking about using calculators here, and we're not talking about vocational training (How do I do $FOO, in docker? In K8s? How do I write a GH runner? Basically any question that involves some million-dollar company's product).
We're talking about college stuff; you absolutely should not be allowed to look up linked lists for the first time during an exam, copy the implementation from wikipedia, port it to your language and move on.
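For illustration, the kind of basic thing in question, a singly linked list, is only a few lines in any language; a minimal sketch in Python:

```python
class Node:
    """One cell of a singly linked list."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def push(head, value):
    """Prepend a value; returns the new head of the list."""
    return Node(value, head)

def to_list(head):
    """Walk the list from head and collect the values."""
    out = []
    while head is not None:
        out.append(head.value)
        head = head.next
    return out

head = None
for v in (3, 2, 1):
    head = push(head, v)  # each push prepends, so 1 ends up first
print(to_list(head))  # [1, 2, 3]
```

The point stands: a graduate should be able to produce something like this from understanding, not from a search result.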
In the real world, we want people who mostly know what to do. The real world is time-constrained (you could spend 2 hours learning to do what they thought you could do based on your diploma, but they'd be pissed to find out that you need to look up everything because that's how you coasted through college).
Exam situations are more like the real world than take-home assignments: High-stakes, high-pressure, timeboxed.
If your real world does not have high-stakes, high-pressure, timeboxed tasks, then you really haven't had much contact outside of your bubble.
Comment by Al-Khwarizmi 2 days ago
Comment by bartvk 2 days ago
My tests are almost 100% in person. Project work included: you can hand something in, but I'm going over it line by line and asking what you did there.
I can do this, because while my school hasn't updated the tests yet, my classes are small and I can do all of them in-person.
Comment by TychoCelchuuu 2 days ago
Comment by acbart 2 days ago
Comment by recursivedoubts 2 days ago
Comment by blharr 2 days ago
Comment by api 2 days ago
Comment by JoeJonathan 1 day ago
Comment by another-dave 23 hours ago
It was just expected that you had a grasp of the literature enough that you could argue off-the-cuff in the exam setting, and then you were given leeway if you didn't have exact Harvard style notations to exact date/titles of referenced material.
Comment by make3 1 day ago
Comment by jerf 1 day ago
The next thing that happens is a befuddled "Ask HN: Why Do All The Daisy Wheel Printers On Ebay Suddenly Cost Thousands Of Dollars?".
https://en.wikipedia.org/wiki/Daisy_wheel_printing
You can sometimes locally win an arms race by doing something really exotic that isn't worth the work to defeat, but this is definitely not a strategy that works if everyone adopts it.
Comment by joe_the_user 1 day ago
Students turn in a mid-term paper. The professor chooses a certain number of points from the paper and asks for long-hand explanations of them. Pulling questions this way doesn't seem like it would take more time than a thorough reading of the text.
Oh, but paper reading has been delegated to a drudge you wouldn't trust with pulling questions. How inconvenient. Which is to say, the problems AI introduces to education are strongly related to much of the work already having been made mindless before AI appeared.
Doctoral candidates do this kind of thing all the time in qualifying exams, but that's after years of graduate school and fresh off doing nothing but reading 100+ books over the course of a few months.
No, high school students can do this. Well, they can be impelled to do this. That they can't do it now is a testament to the current state of education.
Comment by curun1r 2 days ago
So I can't help but wonder whether schools are going about this all wrong. Rather than banning the use of AI and trying to catch students who are cheating, why aren't they creating schoolwork that requires AI? These tools are not going to cease to exist. The students they are preparing are going to live and work in a world where they exist. To my mind, you best prepare students by teaching them how to use the tools most effectively, not by teaching them how to work without the tools. Students should be learning how to prompt AI without hinting it towards a specific answer. They should be learning how to double check the answers AI gives them to ferret out hallucinations. They should be learning how to produce work that is a hundred times more complex than what us older folks had to do in school. We should be graduating students who are so much more capable than any generation before them. I think we're doing them a disservice by trying to give them the same education that was given to those from previous generations. The world they will inhabit has changed radically from the one we entered into following school.
Comment by PunchyHamster 2 days ago
Because using AI is the complete opposite of "I learned programming just to make tests easier".
By learning how to program a solver, you not only learned how to program but also learned the method well enough to write it.
By pawning it off on AI, you have learned nothing, not even how to prompt correctly, as test questions are usually formulated well enough that AI doesn't need prompt massaging to get them.
You can use AI to get some knowledge about the problem (assuming you don't hit a hallucination), but that's not what happens when you use it for a test.
And if you DO want to teach students how to use AI effectively, you can just have an AI class...
Comment by logicchains 2 days ago
If you got AI to produce a working solution, you solved the problem. In the real world nobody who's paying you cares about the method as long as you deliver results. Students taught to solve easy problems by themselves will be at a big disadvantage in the workforce compared to students taught to solve hard problems using AI.
Comment by bregma 2 days ago
Comment by logicchains 1 day ago
The university evaluator is not the one paying you, the one paying you is your boss or customer. It doesn't matter how highly your university professor thinks of you, if you can't solve difficult problems as fast because your university never taught you to solve hard problems with AI, you're going to be at a competitive disadvantage in the workforce when you graduate.
Comment by Gander5739 2 days ago
Comment by logicchains 2 days ago
Comment by mistrial9 1 day ago
Comment by chmod775 1 day ago
For now there's plenty of people who are significantly more capable than AI models. Someone who fully outsources to machines will never join that club.
You have to evaluate students on their own skills before you continue their education, because at some point AI models won't be able to help them. Anyone can use some LLM to pass the first few months of undergraduate engineering disciplines, but if you got through that and haven't learned a thing, you're completely fucked. Worse, you won't even notice the point at which AI starts to fail until you get your test results.
Once the above is not true anymore, education is pointless anyways. However for now AI can at best replace the worst performers and only in some areas.
Comment by logicchains 1 day ago
If at some point AI models won't be able to help them, then give them assignments that reach the point where AI alone isn't enough, so they'll only be able to solve them if they learn whatever is necessary. This is what's meant by "making assignments harder". Students who learn to solve harder problems with AI will be more competitive in the workforce than students who only learn to solve easier problems by themselves. Because AI already allows people to solve harder problems than they could unassisted, but it's a skill that needs to be learned.
As an example, with AI, it'd be a reasonable assignment to ask students to write a working C compiler from scratch. Without AI that'd be completely beyond the reach of the vast majority of students.
Comment by chmod775 1 day ago
Also what do you think is an appropriate assignment for first graders where "AI is not enough"? Are we supposed to give them problems meant for engineering majors?
The things you are saying at best apply to a few select areas of education and you are hyperfocusing on them. What you are neglecting is that a lot of education focuses on teaching tool use: reading and writing is a tool, CAD software is a tool, AI is a tool, even language is a tool. For many people the best way to learn to use tools is being taught by another human being. That human being has to evaluate their progress somehow. If a first grader uses their phone to have text read to them, this tells me very little, except maybe that they can at least understand spoken language to a degree.
Using LLMs effectively, especially without essentially becoming the LLM's meat-puppet, requires a set of skills many 10th graders still struggle with. Skills like putting what you mean into words, extracting meaning from text, and thinking critically about the information you are fed.
Finally there's the matter of philosophy, ethics, and politics, which also happen to be on the curriculum in some places. Are you going to let an LLM argue for you? If you have never learned to evaluate your own beliefs and turn them into something coherent that you can communicate to others, and instead let the LLM argue on your behalf, then congratulations: you have just un-personed yourself because you refused to let others help you become an actual individual in society. You're a sack of meat hooked up to a machine. ... It's probably obvious I feel strongly about this in particular.
At the end of the day, we can at least agree that people should learn to read and write? For now?
Comment by PunchyHamster 19 hours ago
And another question: why are you even at school if all you do is put questions into AI and pass on the answers?
Comment by justonceokay 1 day ago
Comment by Dumblydorr 2 days ago
Comment by sbuttgereit 1 day ago
1) That school is simply about landing a job.
2) That there is a value in students knowing how to have the AI do problems for them.
3) That follow-on effects of manually solving difficult problems is discountable compared to the direct output of the work.
I would say you're absolutely correct in that people pay for the result and they don't really care how you got there. But that's a pretty shallow rationale which overvalues the ability to be the conduit from the source of requirements to the final output and undervalues the individual ability to think for one's self when faced with the challenges of technological, geopolitical, or simply uncontrolled personal circumstances.
"The conduit", whom you seem to believe is the one with the marketplace advantage, is exactly the person I would say is most vulnerable. Not because getting the AI to produce what's demanded is without value, but because it's quickly becoming a task that doesn't need the intermediary at all. Those magicians who can prompt/agent/mcp/etc. their way to positive successes are actively being challenged by the very AI producers on which our conduit-people now depend. Removing the need for intermediaries would be a great competitive advantage for any AI vendor able to achieve it. But insofar as intermediaries create output from LLMs, they'll not be very well differentiated: the common wisdom tends to be the output, lest the AI be accused of hallucination or of being overly supportive. But when everyone is using AI for everything, the opportunities will be in arbitraging what is missed by common wisdom... filling in the cracks that any responsible AI would simply never venture to consider. Our conduit-person will be at a decided disadvantage, because it takes real thought to know when it's best to color within the lines and when it's best not to.
And that's really it. A good education is teaching you about the process of thought and becoming practiced at thinking. I would expect a better educated, thinking person to more easily adapt and make use of technology such as generative AI to solve problems more so than a person that just knows how to deal with today's prompting needs. The thinking person will be able to understand the bigger picture to better get a consistent and high quality series of results than the person just getting results as needed.
Ultimately, the output of a good education is you as a thoughtful and knowledgeable person: the output on the page is merely a means to that end. But if you treat the answer on the page as the only important thing... you're really evaluating the AI, not the person who acted as intermediary.
In other words, if the person following your advice comes to you for a job, simply ask them in the interview which AIs they used and then just sign contracts with those vendors instead... you'll get better bang for your buck by cutting out the middleman.
Comment by cubefox 1 day ago
What hard problems could students solve with AI that require the students to be especially trained? It seems you are thinking of GPT-3 style "prompt engineering". That's a thing of the past. Students can just copy the assignment into the LLM. They don't need to be taught to do that.
Comment by amtamt 2 days ago
That is way too high a recurring cost, one that many won't be able to afford. One could get a second-hand calculator or even a computer, and then the only additional resource needed was one's willingness. By mandating AI usage, we'd only widen the gap between haves and have-nots. I personally do not like the idea.
Comment by madrox 2 days ago
Comment by amtamt 2 days ago
Just to put things in context, https://www.bbc.com/news/articles/ce8444gex65o shares incomes for a good number of people nowadays (note that many of those workers are supporting a family of 2+ members most of the time).
Comment by encrux 2 days ago
I feel like at this point it’s an inevitability that given enough time, capable models will be cheap enough for everyone.
Comment by Al-Khwarizmi 2 days ago
For it to be fair, you would not only need good free models, but actual parity between free models and the highest subscription tier the big AI companies can offer. And I don't think that will happen in the short or mid term future.
Comment by neal_jones 2 days ago
Comment by oerdier 2 days ago
An LLM is a force multiplier only, not a replacement. It's a personal assistant to an expert. To use an LLM in an acceptable way, you still first have to learn how to do what it does yourself. I think your suggestion that people be taught how to use LLMs is justified, but they should do so only after first being taught a no-LLM curriculum. I think this should come entirely after what counted as an education in pre-LLM times. Don't incorporate LLMs into our current education; instead, teach the use of LLMs after it.
Comment by vincnetas 2 days ago
So I think this is applicable to AI also: pay for smarter-than-you AIs, pit them against each other, let them supervise each other, and measure the outcomes you need. Who cares how they achieve that? (Sounds clinical and scary.)
Comment by oerdier 2 days ago
Comment by prox 2 days ago
Comment by Al-Khwarizmi 2 days ago
It's an interesting debate, but I see several reasons not to.
1. As a human you need to learn gradually; e.g. in CS you need to learn the basics of programming before going into more complex stuff. If you embrace AI from the beginning, it can let you skip the basics (why would you code a simple 200-line program if the LLM can do it?) and then you don't have the fundamentals when you reach the more complex level where human thought is needed. It's a similar problem to firing the juniors because AI can do their work: who will become senior if you have no juniors?
2. If you evaluate coursework with the expectation of students using AI, those who pay the $200 subscription will have an advantage over those who pay $20, and in turn over those who use free LLMs. The only way to make it fair would be to provide all students with the best available LLM.
3. While I have heard the analogy with calculators many times, I feel that LLMs are at a different qualitative level. The calculator doesn't really replace human thought, or if it does, it's only some very specific form of it. LLMs replace human thought in a very broad way, so I think they are much more dangerous for learning.
Comment by bendergarcia 2 days ago
Comment by curun1r 1 day ago
It's an old book now, but Neal Stephenson's Diamond Age includes the vision that we should have for education. We literally have the tools today to build his fictional "Young Lady's Illustrated Primer." What he envisioned was not that far off from an iPad with a Claude subscription where Claude has specific goals for the conversation. It's not teachers lecturing a class, it's individualized education where an AI teaches students at their own pace using their own interests. And built into AI is the ability for precocious kids to go beyond the curriculum, either on tangents or to more advanced subjects. This is impossible in a world where a teacher is trying to shepherd dozens of students through a curriculum as a group.
In the 2010s, we got some of the way there with Khan Academy. It was genuinely new that a student could rewatch something until it clicked rather than having to digest a lecture and have any question that didn't immediately spring to mind go unanswered. AI offers the possibility to go a step beyond this. Instead of rewatching the exact same content, AI can present it to the student in multiple ways based on a student's confusion and keep explaining it until a topic clicks. It can find examples of things that a student finds interesting to show how what they're learning isn't just theoretical. If a student likes space, the AI can discuss how the trig concepts they're learning apply to the Artemis II mission. If they like sports, it could apply the same concepts to tennis. Students in literary classes could read different books according to their interests while AI ensures that they understand the same sorts of concepts while discussing them. By customizing based on the specific curiosity of the student, it can make learning far more engaging and actually fun.
To address your #2, schools should be working with Anthropic, OpenAI and Google to shape a new personalized paradigm of educating students. They should be working out deals that give access to AI to their entire district. If I were heading the Department of Education, I would go a step further and get companies to bid on a contract to put their AI in the hands of every public school child in America. A version of the AI where teachers input their curriculum, students work through it with the AI (alone or in small groups), and the AI reports back to teachers so they can intervene where they are most needed would allow school districts with staffing shortages to serve more students more efficiently and with better results.
Sometimes it feels like our current system of education is only secondarily concerned with students actually learning and the primary concern is testing students to sort them into different tiers to be absorbed into different strata of our workforce. AI does compromise this sorting process to some extent. But if we can get back to the true mission of education and think creatively to deploy AI to best educate students, we have the potential to transform education like never before. What if we don't need to test students? An AI can give an individualized assessment of how well a student has grasped what they're supposed to be learning based on weeks of individualized work. It's as if we can give every student their own private tutor who will report back to the teacher on the student's actual progress. When you have that, stress-inducing exams are a ridiculous substitute.
I've been pretty shocked at how closed-minded the responses to my comment have been. We're supposed to be a community that envisions radically better futures that can be built with technology. And here we have a revolutionary new technology that upends a staid and increasingly problematic part of our society and the majority of the responses are geared towards explaining why that staid and problematic institution should be maintained unchanged. AI is fundamentally a danger to our current education model, but that model can change radically for the better. And I would've hoped that more people here would have recognized that.
Comment by RugnirViking 1 day ago
I say this because most of the experiments I have seen in this space have failed. ChatGPT education was quietly removed not that long ago. Khan Academy recently said that their Khanmigo AI tutor was facing challenges because students don't want to use it. It's a long-standing observation in the field of education that the miracle of the internet, ubiquitous computers, phones, etc. hasn't clearly resulted in improvements to education (there's some minor evidence for or against it, but no blindingly obvious effect).
I worked at one point in ed-tech and the longer I was there the more I realised that nobody wanted this. Students only used it if they were made to, teachers only did it if admins wanted, admins only did it if they were sold on it, and the sales people seemed to be the only people who actually thought it was helping anyone
People often seem to think that teaching is the process of neutrally presenting facts to a learner. That the better and more clear the facts are presented, the better. But any book can do that. The entire business of education, as it were, is the management of motivation in students. Exams function just as much as a tool in this as the institution of coming to a classroom at all, rather than sitting at home remote learning. You need a clear mind, distraction free - just bored enough to find the content acceptably appealing (content cannot be made more interesting).
The best things for education seem to be:
- getting enough sleep
- getting a good diet
- good exercise
- being around other people you respect who are also learning the content
- being around other people you respect who have already learned the content, and who you want to emulate
- having someone who visibly cares if you learn the content or not, who expresses and reinforces that they expect you to learn it. (both "you need to focus, stop messing around" but also "you are capable of this, its hard but I know you personally can do it")
- having a good reason to want to learn the content
- having time pressure to learn the content
note that literally none of this regards the actual presentation of the content. Books have existed for centuries. A well motivated learner with all of the above will find the content. That is in no way the problem
An argument might be "but why, given all of this, are teachers actually teaching; why aren't they standing up and lecturing solely about the importance of learning the content", and to some extent that's fair (although lecturing does satisfy some of the above if you think about it), but they ALSO tend to assign reading, link to further resources, etc. Probably half or more of teachers' speaking time is given over to procedural work toward the above goals: explaining the process of upcoming exams, worksheets, and homework that needs to be done (time pressure), demonstrating their love and knowledge of the subject (people you respect who know the content), building rapport with students (people you respect), holding people to account and motivating them, explaining why the content is important, and trying to build good habits in their students (organising study groups, project work, dealing with problems, and creating distraction-free conditions).
AI is a poor facsimile of the above. Though it may try to replicate these things, its inherent lack of physicality and humanity robs much of them of their power. I don't care if a chatbot is disappointed in me. I'm not inspired by a chatbot that claims enthusiasm about a subject. I don't care if a chatbot tells me I absolutely must do this by next week or else I'll be left behind (behind whom?).
Comment by muzzleflash 2 days ago
When the scientific calculator was invented, people could easily know what went into its production. As in what circuitry appears in them. You knew that if you bought it, it is yours. Want to program it? Grab a book and do this. The whole package would be a fixed price. You are in control. With AI? You are not at all in control. You rely on a big tech giant (or just like 4 useful ones) who is riding what people controversially still call an economic disaster. You are relying on a technology that is designed to very likely bait-and-switch you. As soon as you get too comfortable with AI, the big tech companies can just bump the prices up and you will not be able to say no. You rely on a technology that you do not control.
The comparison of AI to a calculator or any other technological advancement for students is apples and oranges for that reason.
Imagine giving a student a personal AI datacenter to carry with them. This may be more of a fair comparison.
PS Training students on using AI, especially for free, is setting them up for reliance on the big tech companies and the subscription model.
Comment by mquander 2 days ago
Comment by latexr 2 days ago
Even if we assume that to be true, you severely underestimate how many people that condition excludes.
Comment by zozbot234 2 days ago
Comment by muzzleflash 2 days ago
Comment by mquander 2 days ago
Comment by mschuster91 2 days ago
Another issue is that it forces kids to stay in school for longer to do their homework, which can be a serious problem in rural areas where public transport is limited, so parents are forced to fetch their kids from school which may not be compatible with working hours.
Comment by spaqin 2 days ago
Why doesn't the essay class allow you to ask your parents to write it for you? The art class, why not ask your parents to paint something for you? Geography, why not ask your parents during a test?
Comment by ZiiS 2 days ago
Comment by yorwba 2 days ago
- made tasks easy that were a necessary prerequisite for advanced math (basic arithmetic), but not what the lesson was supposed to be about
- could in theory also let students skip over what they were supposed to be learning (applying the correct operations in the correct order to solve a problem) but doing so would require programming or getting a program from someone else, which the teachers probably figured was a high-enough hurdle to accept the risk
Hence, scientific calculators helped teachers by removing unnecessary friction.
Meanwhile, current LLMs
- will happily attempt to do the student's entire homework for them
- cannot reliably be restricted in functionality to leave the part the students are supposed to do themselves to the student
Hence, LLMs undermine teachers by removing necessary effort.
Sure, in theory LLMs could enable even more focused lessons by removing even bigger unnecessary frictions (e.g. in history class, have a LLM scour a large collection of primary sources to exhaustively list passages mentioning a certain topic), but students cannot be trusted to use them this way.
Hence, teachers are trying to use all kinds of tricks to ensure that what they wanted to teach actually passed through the student's brain at some point.
Comment by zozbot234 2 days ago
Small local LLMs are essentially that. If an LLM can tell you to eat rocks as a tasty snack or use glue to make the cheese stick to your pizza, imagine what it says when you ask it to analyze/explain complex academic subjects, or solve fiddly problems. But it will still reliably help you polish your language, like a subject-specific dictionary/thesaurus.
Comment by yorwba 2 days ago
A single model that can do many things, some less reliably than others, independently of what the requirements for the lesson are, is not a good tool to give students. You would need something that can do the ancillary tasks perfectly, but won't do the part students are supposed to practice doing themselves. And what exactly that is changes with every exercise.
If you want to give students access to thesaurus functionality and nothing else, you're better off with a thesaurus.
Comment by PunchyHamster 2 days ago
Comment by qwedaH 2 days ago
Comment by fho 2 days ago
I would say that definitely shaped me in a way where I rarely bother with the underlying details and tend to focus on how high-level abstractions interact. [2]
[1] German "Mathe-LK"; we could choose to specialize in two subjects, for me it was math and computer science, the latter being quite novel back in 2003. [2] I _do_ tend to specialize in things, but e.g. for LLMs or GLMMs, while I do have the capability to understand the technical details, I just don't bother.
Comment by binarypixel 1 day ago
With the calculator analogy, I think that the calculator automates executing certain algorithms (like multiplication, etc.), but using AI takes away some (most?) of the thinking.
Comment by latexr 2 days ago
They. Are not. The same.
Have you ever known people to commit suicide, kill, or give themselves rare diseases because of their calculators? How about people dating their calculator and going batshit for a software update?
Not to mention that learning to do things on your own is a useful skill to teach you to think, and an essential skill to (as you suggest) verify answers. People not understanding how things work is exactly why they take bullshit output from an LLM as gospel.
I also note that such arguments tend to be profoundly selfish and self-centred. Your anecdote happened to have an outcome you enjoyed and benefitted from, but I bet that wasn’t the reality for all your colleagues. Just like you are glad for the calculators in your class, some other student may be glad for the lack of them in theirs and it may be the reason they got into their field of study.
Comment by rwmj 2 days ago
Comment by galkk 2 days ago
Here's one possible scenario: after graduation, you (or somebody else) share the program with a friend, with a promise not to share it further. Soon enough, it's on everybody's calculator. What was a real educational exercise for you is just a cheat where one presses the right buttons and gets the right answer. This completely destroys the educational purpose, but a significant number of people just don't care and want to get a pass.
Yes, there is always a counter-weapon for teachers: for example, pointing to a random line and asking the student to explain it. But this is not (always) scalable.
I've seen this in reality in college: there was a CS/database course final project implementation, written in Delphi (very popular at the time in the xUSSR), that was passed from year to year. The professors and TAs were so fed up that I got an almost automatic pass because I wrote mine in C++...
——
To summarize: the ever-increasing amount of pure slop is seen everywhere. Regular multi-thousand-line PRs where the author didn't even bother to look at the AI-written code. Just prompt -> commit, push, PR. Nobody wants to deal with that.
The same is happening here: it's not to punish people who use the tool in a proper context, it's to filter out people who just don't give a fuck.
Comment by intended 2 days ago
The other analogy is taking a forklift to the gym. Sure you lift weights, but you don’t really do any exercise to develop your own muscles.
AI automates a significant chunk of the exercises. So you are left with people who didn’t build any mental muscles.
This would be bad enough, but it's worse because AI heavily benefits experts who have built mental reflexes/taste and can judge/verify output with minimal information.
Comment by exceptione 2 days ago
You would give the brains of the younger generation to American tech oligarchy, a class of people openly hostile to the principles of the democratic rule of law. If you want to see the damage actors like Fox News et alii alone can do, just take a look around in the US. Now imagine them taking over the parenting and teaching role; you wouldn't need gerrymandering if you can control people's beliefs.
Comment by sdevonoes 2 days ago
Comment by fph 2 days ago
Comment by Cthulhu_ 2 days ago
Comment by lpcvoid 2 days ago
Because this makes a subscription a requirement for education, and thus advances the grift that is subscriptions, rent-seeking and dependence on a service. This isn't something we should ingrain into our children from an early age.
Calculators were buy once, use forever. Subscriptions to slop generators are a long term dependency and I want my children to not be exposed to that until they can decide for themselves.
Comment by ilovecake1984 2 days ago
You need to be able to do both things. We don’t need to make it a choice.
Comment by shagie 1 day ago
Back in college, my assembly class was in MIPS (incidentally taught by Professor Larus of SPIM fame). I remember slogging through writing the assembly to compute factorial, saving registers, and then dealing with the frame pointer and the stack pointer.
One of the other students had access to a DECstation, wrote the program in C, and ran gcc -S to get the MIPS assembly from it. However, the compiler applied the for-loop (and tail-call) optimizations, and instead of producing a recursive function (which would have let us practice and understand $fp), it wrote it with a jump instruction instead.
Aside from getting a 0 on that homework, they struggled with the next assignment, which presupposed an understanding of how to write function calls.
---
You could argue that learning C makes needing to learn assembly irrelevant (and MIPS is even less relevant today than back in the 90s). But for learning in school, it's not about the assignment but rather the journey that one takes to get to that assignment and learn from it.
Being able to check the answers that are provided to someone requires the understanding of what goes before and beneath the answer itself.
Writing the assignment in C when you're learning the before and beneath of "this is how assembly works" means that when you later take the compiler class you won't be able to debug your code generation when it's incorrect.
Working with an AI as the primary tool for learning problem solving keeps the person at the higher level. There is some foundational level that a person needs to learn without relying on an AI to do it for them.
The AI and other abstractions of the underlying problem do allow us to work with more complex problems. Would you trust a bridge built by an engineer who built it with AI and didn't understand the underlying math and physics themselves?
This is especially at issue in college where some students are taking classes to get the requirement out of the way, and some are taking them as building blocks - an accountant takes the math class to do math with the numbers, while the engineer takes the math as a prerequisite for physics, which is a prerequisite for a materials science class, which is a prerequisite for a soil mechanics class.
If you don't understand the various foundational levels without using AI then trying to identify where the AI (or any other tool) got it wrong isn't something that you're necessarily able to do.
Comment by ksenzee 2 days ago
Comment by mold_aid 2 days ago
Finally, nobody seems to know what teachers are actually discussing. To your assertion:
>So I can't help but wonder whether schools are going about this all wrong. Rather than banning the use of AI and trying to catch students who are cheating, why aren't they creating schoolwork that requires AI?
Many teachers do create schoolwork that requires AI. Many teachers ban it. Everybody's trying to work out policy (to the detriment of other policy discussions, particularly the new ADA landscape). Many ED departments are captured by AI vendors - AI is a normal technological competency for ED majors at different levels. It's not that the discussion is not "are students going to work with AI?" The discussion is "how do we teach?" which is what the discussion always is.
But policy is a part of that. Admin will have guidance and policy statements, and each instructor will as well. Students, who get thrown off balance if they have two teachers with different nav bars in the CMS, want clarity of policy: "Can I use it?" is a different question than "Should I use it?" But "Should I use it?" is the much more relevant question for instructors. The instructors passed through the 90s/00s/10s blissfully unaware of anything that was happening in these fields.
>These tools are not going to cease to exist.
Which tools, precisely? Because I'd assert your 200-dollar-or-whatever tier that runs out of tokens on Monday does not functionally exist for most students. I don't know what happens at MIT but Penn State satellite campus students aren't whipping up agentic solutions to "I have a summer online course with discussion boards." They're just plugging that shit into whatever chatbot they have on their phone. Honestly, most online courses aren't even worth that effort, but: different discussion.
The only reason there is pervasive student use is because someone made it free. The CoPilot window that comes with the basic tier of 365 and all the other in-app copilots are what my students have; the Google Docs stuff exists, and the Grammarly stuff exists, and the best way to "ban" it (which is just fine as an approach) is to make it even slightly more expensive. If someone does that, yeah, I think some of these products might cease to exist.
>I think we're doing them a disservice by trying to give them the same education that was given to those from previous generations
Students graduate into a world where all kinds of stuff is going on and all sorts of ideological forces shape what they encounter in schools. "AI" is more a dominant element in that force than the calculator was, because calculators didn't have hundreds of billions of dollars of investments, and I'm not sure I ever knew what the prevailing political project of Texas Instruments was. Maybe TI's CEO had a manifesto; tbf, I do remember IBM's corporate culture being strong enough to drive cultural change. But it would be naive to not recognize that these products are made by people antagonistic to the idea of education as a compulsory and public good.
Comment by mystraline 2 days ago
So the solution is raise cost, thus making the richies have access but the poors not to?
I guess thats one way to academically bin students; just put yet another financial gatekeeping to the academic process?
Comment by mold_aid 1 day ago
>thus making the richies have access but the poors not to?
I don't know that you need "thus" to describe the status quo. My school has tiny little Dells that are ten years old inside our Prometheus units. You think we're gonna upgrade each unit in the school with costs as they are, or do you think MS is just going to suddenly "provide" a cloud computing solution?
Comment by shmichael 2 days ago
World war I was very much that.
Comment by whartung 2 days ago
My understanding is that the Google Doc is not a word processing document, it's an event recording of a word processor. So, in theory, you could just "play back" watching the document being typed in and built to "see" how it was done.
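A toy sketch of that event-log model (the operation format here is made up for illustration, not Google's actual protocol):

```python
# Hypothetical edit-event format: (operation, position, payload).
# Replaying the log reconstructs every intermediate state of the
# document, so a reviewer can "watch" it being written.

def replay(events):
    """Apply edit events in order, yielding the text after each one."""
    text = ""
    for op, pos, payload in events:
        if op == "insert":
            text = text[:pos] + payload + text[pos:]
        elif op == "delete":  # payload is the number of characters removed
            text = text[:pos] + text[pos + payload:]
        yield text

log = [
    ("insert", 0, "Helo wrld"),
    ("insert", 3, "l"),   # a human-looking correction...
    ("insert", 7, "o"),   # ...and another
]
print(list(replay(log))[-1])  # final text: "Hello world"
```

A paper pasted in wholesale shows up in such a log as one giant insert event, which is exactly the tell a teacher playing back the history would look for.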
I only mention this because given the AIs, I'm sure even with a typewriter, it's more efficient to have the AI do the work, and then just "type it in" to the typewriter, which kind of invalidates the entire purpose of it in the first place.
The typing in part is inevitable. May as well have a "perfect first draft" to type it in from in the first place.
And we won't mention the old retro interfaces that let you plug in a IBM Selectric as a printer for your computer. (My favorite was a bunch of solenoids mounted above the keys -- functional, but, boy, what a hack.)
TaaS -- Typing as a service. Send us your Markdown file and receive a typed up, double spaced copy via express shipping the next day!
Comment by Aurornis 2 days ago
Comment by nlawalker 2 days ago
Comment by ssl-3 2 days ago
Another way to automate this particular task is that some typewriters have (serial/parallel) ports to connect to a computer. It's not a daunting task at all for a student who is skilled in the art of using the bot to have one of these typewriters be the output target.
Like this: https://chatgpt.com/share/69e405db-1b44-83ea-baf3-6af41fe577...
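A minimal sketch of the output side, assuming a machine that accepts plain ASCII over a serial link (the device path, baud rate, and CR+LF line discipline are assumptions that vary by model):

```python
import textwrap

def to_typewriter_bytes(text, width=65):
    """Wrap paragraphs to the carriage width and end each line with
    CR+LF, which typewriter printer interfaces typically expect."""
    lines = []
    for para in text.split("\n\n"):
        lines.extend(textwrap.wrap(" ".join(para.split()), width=width) or [""])
        lines.append("")  # blank line between paragraphs
    return "".join(line + "\r\n" for line in lines).encode("ascii", "replace")

# Sending it to the machine would then be one call, e.g. with the
# third-party pyserial package:
#   import serial
#   with serial.Serial("/dev/ttyUSB0", 9600) as port:
#       port.write(to_typewriter_bytes(essay))
```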
Comment by vunderba 2 days ago
However, they didn’t remove the embedded revision history in the .docx file they submitted, so that went about as well as you can expect.
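WordprocessingML stores tracked insertions and deletions as `<w:ins>`/`<w:del>` elements inside word/document.xml, so you can spot those leftovers yourself. A rough regex-based sketch (a heuristic, not a forensic tool):

```python
import io
import re
import zipfile

def revision_markers(docx_bytes):
    """Count tracked-change elements still embedded in a .docx file.
    A .docx is a zip archive; the main body lives in word/document.xml."""
    with zipfile.ZipFile(io.BytesIO(docx_bytes)) as z:
        xml = z.read("word/document.xml").decode("utf-8")
    return {
        "insertions": len(re.findall(r"<w:ins\b", xml)),
        "deletions": len(re.findall(r"<w:del\b", xml)),
    }
```

An honestly finalized file should report zeros; anything else means the edit trail (author names and timestamps included) shipped with the submission.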
Comment by Dylan16807 2 days ago
Comment by vunderba 2 days ago
I also think that when track changes was first introduced in earlier versions of MS Word, there wasn’t as much concern about privacy/telemetry as there is now, so it wasn’t made as prominently obvious.
Comment by kelnos 2 days ago
I'd be surprised if copy/paste carries the revision history, though. Wouldn't they have had to start with the original document (from the other student) and make their edits directly, and then submit that file?
Comment by Gander5739 2 days ago
Comment by eichin 2 days ago
Comment by djmips 2 days ago
Comment by tejtm 2 days ago
Oh look, there's an LLM trained on keyloggers to spew slop at your personally predicted error rate; bonus if it identifies over USB as a keyboard.
Comment by vunderba 2 days ago
In some of the later Loebner competitions, when text was transmitted to the human character by character, the bot would even simulate typos followed by backspacing on screen to make it look more realistic.
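The trick can be sketched in a few lines (a toy illustration, not any actual competition entry's code):

```python
import random

def human_keystrokes(text, typo_rate=0.05, seed=None):
    """Yield the keys a 'human' typist would press: occasionally hit a
    wrong key, then backspace and type the right one."""
    rng = random.Random(seed)
    for ch in text:
        if ch.isalpha() and rng.random() < typo_rate:
            yield rng.choice("qwertyuiopasdfghjklzxcvbnm")  # wrong key
            yield "\b"                                      # backspace
        yield ch

def render(keys):
    """Apply the keystroke stream to get what an observer ends up seeing."""
    out = []
    for k in keys:
        if k == "\b":
            out.pop()
        else:
            out.append(k)
    return "".join(out)
```

Pair each yielded key with a randomized delay and the stream looks convincingly human to anything watching keystroke timing, even though the final text is always exactly what the bot intended.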
Comment by djmips 2 days ago
Comment by steveworswick 23 hours ago
Comment by vunderba 2 days ago
Participants spent more time polishing up the natural-language parsing aspects and pre-programming elaborate backstories for their chatbots' bios, among other psychological tricks. In the end, the whole competition was more impressive as a social engineering exercise, since the real goal kinda became: how can I trick people into thinking my chatbot is a human?
But reading through some of the previous competition chatbot transcripts still makes for fascinating reading.
Comment by leptons 2 days ago
Isn't that really what all these AI companies are doing too? It sure seems like it is.
Comment by artikae 2 days ago
Comment by djmips 2 days ago
Comment by Moonye666 2 days ago
Comment by RhysabOweyn 2 days ago
Comment by ryukoposting 2 days ago
I graduated in 2020, so I've only gotten to see the changes secondhand through friends and family who are teachers, and through my sibling who graduated a few years after me. But the difference is staggering.
Comment by bmitc 2 days ago
It's a shame that humans find a way to cheat ourselves out of things that benefit us by over "optimizing" the wrong things.
Comment by ghighi7878 2 days ago
Comment by bmitc 2 days ago
Maybe the medical profession is a counter example.
Comment by close04 2 days ago
I’d argue that dealing with any high criticality operational incident is like an in person exam (maybe even the most difficult kind, the open book one) if you are the one responsible for fixing it. Everyone is looking at you, you have time pressure to solve it ASAP and you can’t afford the time to dig through all the docs on the spot. So there’s at least some similarity with some real life situations.
Comment by bmitc 21 hours ago
Comment by beej71 2 days ago
Comment by dublinstats 2 days ago
Comment by phoronixrly 2 days ago
Comment by doctorpangloss 1 day ago
Comment by ninjahawk1 2 days ago
In a different one she just said so long as you say AI was used you’re fine to use it.
In the rest of them AI is considered cheating.
To say we have discrepancies in the rules is an understatement. No one seems to have the exact answer on how to do it. I personally feel like expecting Ph.D.-level work is the best method as of now; I’ve learned more by using AI to do things above my head than from hardcore studying for a semester.
Comment by tkgally 2 days ago
I teach at two universities in Japan and occasionally give lectures on AI issues at others, and the consensus I get from the faculty and students I talk with is that there is no consensus about what to do about AI in higher education.
Education in many subjects has been based around students producing some kind of complex output: a written paper, a computer program, a business plan, a musical composition. This has been a good method because, when done well, students could learn and retain more from the process of creating such output than they would from, say, studying for and taking in-class tests. Also, the product often mirrored what the students would be doing in their future lives, so they were learning useful skills as well.
AI throws a huge spanner into that product-based pedagogy, because it allows students to short-cut the creation process and thus learn little or nothing. Also, it is no longer clear how valuable some of those product-creation skills (writing, programming, planning) will be in the years ahead.
And while the fundamental assumptions behind some widely used teaching methods are being overthrown, many educators, students, and administrators remain attached to the traditional ways. That’s not surprising, as AI is so new and advancing so rapidly that it’s very difficult to say with any confidence how education needs to change. But, in my opinion at least, it does need to change at a very fundamental level. That change won’t be easy.
Comment by terrabitz 2 days ago
It's still a new tech so I'm not surprised a lot of teachers have different takes on it. But when it comes to education, I feel like different policies are reasonable. In some cases it's more likely to shortcut learning, and in other cases it's more likely to encourage learning. It's not entirely one or the other.
Comment by Izkata 2 days ago
Comment by osigurdson 2 days ago
Comment by cyberax 2 days ago
Comment by ninjahawk1 2 days ago
Comment by ninjahawk1 2 days ago
For example, the professor who’s leading me in this project had a fellowship at a certain university in England and said he exclusively coded using Claude Code for a month straight. Their goal was to develop a vaccine for a specific disease, and by using AI tools such as Claude Code they’re several months ahead of schedule.
Comment by Moonye666 2 days ago
Comment by pesus 2 days ago
Comment by ninjahawk1 2 days ago
Am I saying I’m as knowledgeable or capable as a Ph.D. right now? Absolutely not. There’s just not really a term that correctly describes accelerated learning and iteration by use of AI, since the technology is so new. I can’t speak for others, but as someone who’s a senior in my physics degree, I’ve been actually learning faster by using AI. It’s either a mental crutch or a mental accelerator. The difference is whether you want it to completely do the work for you or you try to learn and follow along.
It’s a very underexplored and new area right now, how higher learning is affected by using AI as a tool instead of as a cheating device, but historically, new tools like the calculator or the computer have done a lot to accelerate learning once new rules are in place.
Comment by osamagirl69 2 days ago
Sounds like a fun project, I wish you the best. I ran a similar program (independent study that encouraged freshman/sophomore undergraduates to explore using microprocessors, at the time the EE curriculum was completely focused on analog circuit theory and ended at boolean logic) and it went well enough that it eventually became part of the official undergraduate curriculum.
Comment by margalabargala 2 days ago
Undergrad research is pretty common and it's not all that hard to get your name on a paper as an undergrad. A lot of undergrads think that doing work that gets your name on a paper, equates to PhD level work.
Comment by raincole 2 days ago
Comment by an0malous 2 days ago
Nice idea. What class and what work are you doing then?
Comment by ninjahawk1 2 days ago
Comment by leptons 2 days ago
How do you know you actually learned, instead of being fed slop by the AI that isn't true at all? If you didn't study, then I doubt you'll really know if the AI is lying to you or not. I have to wonder if your teacher will too, sounds like they have kind of checked-out from actually teaching.
Comment by randoments 2 days ago
I had to do all the exams in person. 100% of the grade was decided at the exam. Millions of people graduated this way and they are fine. No students were harmed in the process.
Comment by lionkor 2 days ago
You still do all the same things, and they are graded, but this doesn't affect your final grade. Instead, you need to pass a threshold to enter the exam, which is then graded.
The US isn't so amazing at this; it simply can be done better. Recognizing where you can improve and from whom you can learn is a great first step to ACTUAL improvement.
Comment by meroes 2 days ago
What a narrow set of skills to send into your economy.
Comment by bugufu8f83 2 days ago
At Oxbridge, for CS we still had lab work. We still had problem sets assigned for CS and for math which were graded. We had one large CS group project in, I want to say, our second year. Humanities students were still assigned essays. It's just that none of this stuff contributed to your final degree classification which was based entirely on your exams (although if you didn't do your CS practicals you wouldn't be allowed to pass).
Obviously Oxbridge isn't exactly representative but certainly my experience showed me that the American style is not the only way of making education work.
Comment by cafebabbe 2 days ago
joking, of course everyone does 'projects, labs, teamwork and papers'. It's just not the main focus of the grading process.
Comment by ivankelly 2 days ago
Comment by lionkor 2 days ago
Comment by maplethorpe 2 days ago
Comment by lacy_tinpot 2 days ago
What is the "it" that AI does for you?
This is assuming you know how to get good work out of AI in the first place. But even that is turning out to be a skill in and of itself.
Comment by Levitz 2 days ago
Context helps immensely, for example. Think of what you can do that someone outside tech can't.
Comment by strogonoff 2 days ago
For example, take “X” to be “walking”. Do we have the technology that allows us to pretty much never have to walk? Sure. As far as I am aware, though, we do not generally favour a lifestyle of being bound to a mobility aid by choice, and in fact we have found that not walking when able in the long run creates substantial well-being issues for a human. (Now, we have found ways to alleviate some of those issues for those who aren’t able, but clearly it is not sufficient because we still walk.)
The problem is exacerbated immensely as the value of X approaches something as fundamental to one’s humanity as “thinking”.
Comment by maplethorpe 2 days ago
When running water replaced the need to pump water out of the ground yourself, were people urged to "learn faucets"? You kind of just need to twist a knob and water comes out, right?
Maybe there was an intermediary stage where running water was slightly more complicated and there were more steps to learn, but devoting time to learning those steps would have been a waste of time, since the end goal of the system was for it to function without much input.
Comment by Moonye666 2 days ago
Comment by ivankelly 2 days ago
Comment by mekael 2 days ago
Comment by ivankelly 2 days ago
Comment by doug_durham 2 days ago
Comment by theFco 2 days ago
Comment by ilovecake1984 2 days ago
Comment by make3 1 day ago
Comment by tyrust 2 days ago
Comment by raincole 2 days ago
Comment by Swizec 2 days ago
Not sure anyone even attempted to cheat in that scenario. And the conversations were usually great, although very stressful for us cramming types
Comment by mjlee 2 days ago
Comment by Swizec 2 days ago
If you don’t pass after 3 tries, commission is mandatory.
You also have a paper trail of written exams and midterms to back you up. If you keep getting good grades and failing the oral, people will find that obviously suspicious.
Honestly the only times I had any trouble in the orals were the exams where I baaaaarely passed the written. Usually oral feels like the chill easy part compared to written because you can have a back-n-forth with the professor.
Comment by Terr_ 2 days ago
Still concerning from a statistical/psych fairness aspect.
There's a famous example of the Boston Symphony trying to fairly judge unseen applicants in 1952, and their results kept getting gender-skewed until they adjusted for the fact judges were reacting to the sound of shoes (e.g. high heels) when the candidate moved around behind the divider.
Comment by ryukoposting 2 days ago
Ah yes, the classic "if you think the system is abusing you, you shall out yourself to the system that's abusing you if you want any chance of recourse." Because a tribunal run by the people you're lodging a complaint against can't possibly be biased.
Comment by jubilanti 2 days ago
Comment by gpm 2 days ago
If you don't get one job you should have - there are others - it's unfortunate but not life altering.
If 3 years into your marine biology program a professor who always teaches a mandatory course fails you because you're a woman who wears non traditional dress - you're not graduating and now there are no jobs. (And this is an example that actually happened to someone I know - not in a western country)
Comment by fl4regun 2 days ago
Comment by Swizec 2 days ago
Our first year class was about 250 people. It was fine.
By the 4th year, class sizes were a much more manageable 30 to 50.
You get maybe 10 to 15 minutes with the professor (usually more in later years), they ask 3 questions with some followup. That’s 1 work week for the professor. And less than half the students even make it that far for every exam season (3 per school year) so you’re looking at something like 3 days of work. It’s fine.
Comment by gentleman11 2 days ago
Imagine being able to do some writing without notifications going off every few seconds, and where you're not always one click away from a search engine and some website scientifically designed to drag your attention down a rabbit hole and keep it there
Comment by eichin 2 days ago
Comment by dlivingston 2 days ago
Comment by ryanSrich 2 days ago
I would make this the focus for 90% of the first 2 years of their degree.
I would then have them spend 75% of their last 2 years learning how to use and program with AI. Aside from knowing how things actually work, there's no more important skill now than mastering AI.
Comment by delis-thumbs-7e 2 days ago
I don’t know why you’d need to teach anyone to code with LLMs, though. If you are a CS major (or any reasonably intelligent person) and can’t figure it out on your own, well, the world still needs bakers and carpenters, you know?
What I am learning is mathematics, which is what all CS, including AI and machine learning, really is. ”Mastering AI” for me means building your own models and AI applications and understanding the linear algebra, multivariable calculus and probability theory behind it all.
Comment by jfengel 2 days ago
If you're doing it in class anyway, and providing typewriters, you might as well provide a locked down Chromebook. Cheaper and better for composition.
Comment by cvoss 1 day ago
Comment by tenacious_tuna 13 hours ago
Every English class I had in high school focused so strongly on how important revision was, or at a minimum having an outline to work from. While AP tests expected us to dash off essays by hand in a single go, I empathize with OP about how useful a tool revision is.
Comment by guerrilla 2 days ago
Comment by syngrog66 2 days ago
LLMs are also making a public repo code portfolio much less meaningful as a sign of legitimacy
Comment by pftburger 2 days ago
Comment by zozbot234 2 days ago
Comment by coffeefirst 1 day ago
> Everything slows down. It’s like back in the old days when you really did one thing at a time.
Why did we turn computers into frenetic, distracted multitasking machines? What would it take to reverse course?
Comment by kristjansson 1 day ago
When we had to add another core to keep up with Moore’s Law?
Comment by rootusrootus 1 day ago
Comment by paulorlando 2 days ago
I also use low-point bonus questions to test general knowledge (huge variation on subjects I thought everyone knew).
Comment by binarycrusader 2 days ago
Comment by paulorlando 2 days ago
Comment by 2b3a51 1 day ago
Comment by resident423 2 days ago
The only answer I can think of is that people must believe AI writing will stay below human level for many years, but if so why?
Comment by medbar 2 days ago
Comment by lombasihir 2 days ago
Comment by resident423 2 days ago
Comment by somewhereoutth 2 days ago
When I see 'cheat sheets' - designed to be hidden on the back of calculators or whatever - then I see true application of human ingenuity and intellect.
Comment by lizknope 2 days ago
We wrote assignments by hand using a pencil or pen.
Is that really complicated?
When I got to college and everything had to be typed I still wrote everything by hand on paper and edited with an eraser and a red pen to reorganize some sentences or paragraphs. Then I would go to the computer lab and type it in and print it out.
Comment by s0rce 1 day ago
Comment by pseingatl 1 day ago
Comment by chambored 1 day ago
Comment by fizlebit 2 days ago
If you're not interested in learning the course content, then what are you doing there? Pretty expensive waste of time.
I very fondly recall many of the courses I did at university. The exams were a helpful motivating factor even for the interesting courses.
Comment by emptybits 2 days ago
As a kid, before my family could afford a home computer, I was determined to do something that resembled programming. I borrowed "BASIC Computer Games" (1978) by David Ahl[1] from the library and typed in several programs on a manual Olympia typewriter. More than just reading code and maybe even more than being able to easily execute it, I'm convinced this typewriter exercise forced me to really study the flow and the how of the code.
[1] https://archive.org/details/ahl-1978-basic-computer-games/
Comment by zoom6628 2 days ago
Comment by SilentM68 2 days ago
Testing and instruction should be modified to account for AI. If a student uses an Agentic AI for work, learning, research, then when test time comes, the student should be required to stand in the front of the class and teach the class what they have learned, i.e. "Teach Back" all they learned to the entire class student body and teacher. The entire class, instructor included, will also be required to participate in a Q&A session to make sure that student's learning is not just made up of memorization, e.g. restate the information learned but using different words, different scenarios, etc.
Comment by alex_young 1 day ago
Comment by niek_pas 1 day ago
Comment by 14 2 days ago
I think AI should be treated the same. Who cares if it assists in a lot of the work? That is a good thing. BUT as we all know, AI has been incorrect on many things, so I think a much better learning practice would be to forget whether AI wrote the paper and focus heavily on students backing up their claims with sources. So if your paper says ABC is true and AI writes it up in a perfect paragraph, you would still need to confirm the facts as true and find a reputable source that shows it to be true.
Comment by eranation 2 days ago
Comment by 52h316g 1 day ago
Feels better to design assignments where students have to use and think about AI, not avoid it. Good experiment, just not a long-term fix.
Comment by opengrass 2 days ago
Comment by armchairhacker 2 days ago
Comment by tencentshill 2 days ago
Comment by zx8080 2 days ago
> The Sentinel not only cares deeply about bringing our readers accurate and critical news, we insist all of the crucial stories we provide are available for everyone — for free.
Thank you very much for interrupting and ruining my reading experience of your article.
Comment by breve 2 days ago
Comment by Our_Benefactors 2 days ago
Comment by breve 2 days ago
If you don't like the website, simply don't use it. Especially when you're making no contribution to it.
Comment by ethmarks 1 day ago
Comment by margalabargala 2 days ago
If someone gives away something free, they can and sometimes do wash their hands of it. That doesn't prevent you from expressing your opinion on what you think they should change about the work, but they're not under any obligation to do anything about it.
Someone made a thing available. You can take it as it is, you can make noise about what you don't like, you can make it better, or you can ignore it and move on.
If someone is providing a mix of useful and garbage information, well, take your pick from the above.
Comment by erickhill 2 days ago
At UT Arlington in the Stone Age we had a typewriter lab so folks without home computers with printers could still produce their papers typed, which was required. I had to get a roll of quarters ($10) to do a single paper. And the erase tape was always so used up it was useless.
It was one of the most sadistic things I remember about my college experience, trying to type on those crappy typewriters on a timer. With no errors. And I literally wrote it by hand before trying to transcribe it.
Good luck, we’re all counting on you.
Comment by tim-projects 2 days ago
But that would require the teacher to be good at AI too. I think that's the problem here.
Comment by medbar 2 days ago
No, it shouldn’t. I’m not bearish on AI but it shouldn’t replace any part of a classroom where the objective is to learn and communicate in a new language (German). The typewriter argument is memorable and interesting - the article points out the lack of editing forces kids to slow down and think about their writing, as well as iterate through multiple drafts. It’s not a nostalgia thing, they’re not old enough to have ever used one before.
I could see an argument for adding on a new class for GenAI, agents, context engineering or what have you, but considering how behind current US curriculums already are and how quickly the AI field moves, I can only see this ending in wasted time and money: even an up to date class will be stale by the time it’s over. Kids will end up learning this anyway outside of the classroom, no use lecturing them on something they’ll already know.
Comment by tim-projects 2 days ago
You can do all the same learning with AI tools, with the added benefit of developing extra skills on top.
This notion that AI automatically reduces learning seems more born out of fear than reality. There are also different kinds of learning.
People who really want to can still go back to typewriters. Forcing it into a mandatory curriculum though is a step backwards. Just like it would have been pre AI.
Comment by SirHumphrey 2 days ago
You don’t give first graders a calculator because they will always have one in their pocket- they end up just inputting numbers in a magic box and not learning how to do this manually which will destroy their future mathematical education. It’s about the same with AI.
Comment by tim-projects 2 days ago
AI is not a gun that you can't put into the hands of a child. It's a paint brush.
Comment by cafebabbe 2 days ago
Comment by azhenley 2 days ago
Comment by WillAdams 2 days ago
One of my best college professors would review such essays in-person, one-on-one twice each semester.
Comment by nullbyte808 1 day ago
Comment by singpolyma3 2 days ago
Comment by eszed 2 days ago
Former (second-generation) college professor, here. I find it almost impossible to be cynical enough about the US education industry.
Comment by bmitc 2 days ago
Comment by janalsncm 2 days ago
This statement is more defensible after removing “only”. If it “only” hurt the cheaters, there would be no need to police cheating at all.
Comment by paleotrope 2 days ago
Comment by delusional 2 days ago
Comment by michaelt 2 days ago
And they'll do it with all the 'unnecessarily high stakes' and 'risk of unconscious bias' and 'not truly representative' problems that written exams have; and a bunch of extra problems too.
Comment by ButlerianJihad 2 days ago
Comment by jubilanti 2 days ago
Comment by mcmcmc 2 days ago
Comment by isolli 2 days ago
Comment by LocalH 1 day ago
Give them a 5150 PC with WordStar and let them go to town.
Comment by erelong 2 days ago
Optional "side quests" would allow teachers to create some standard accepted "main quest" curriculum and then just create a bunch of (even possibly "fun") "side quests" students can work on in their spare time for extra skill development
Comment by cultofmetatron 2 days ago
Comment by tyingq 1 day ago
Comment by onesociety2022 2 days ago
Comment by Peritract 2 days ago
Gyms aren't redundant because tractors exist.
Comment by llbbdd 2 days ago
Comment by Peritract 2 days ago
Comment by llbbdd 2 days ago
Comment by cumshitpiss 2 days ago
Comment by onesociety2022 2 days ago
Comment by Peritract 2 days ago
Comment by ceejayoz 2 days ago
Comment by echelon 2 days ago
We're doing these students a major disservice making them live in the old world. It's our fault for being inflexible, but their world is going to be wholly different and we should just embrace that.
Comment by IshKebab 2 days ago
Comment by dyauspitr 2 days ago
Comment by ourmandave 2 days ago
Comment by amingilani 2 days ago
Comment by DeathArrow 2 days ago
Comment by mrweasel 1 day ago
Comment by gorgoiler 2 days ago
My mentor, a PhD in classics, told me it was never about outcomes and only about improvement. I suppose that answers my question: if your AI gets you an A at the start of the course and an A at the end, then, in the sense that you have not improved at all, you have failed.
Comment by PebblesRox 2 days ago
Comment by arcfour 2 days ago
Comment by casey2 1 day ago
We have amazing tools through screens, but insist on using them as if they were the old tools and act surprised when they lead to worse outcomes. Each student could easily have a friend in Germany they speak to in real time if people actually cared about breaking down cultural barriers. Schools have never cared about education, their main purpose is control. Control of the narrative and control over your life.
Comment by toonbit 2 days ago
Comment by pards 2 days ago
I remember having to split bar tabs by hand at the end of the night, without a calculator. The waiter would deliver a hand-written bill, and we'd work out how much each person owed: splitting drinks individually and food equally, and recalculating tax and tip to derive each person's total. We'd pool the cash in the middle of the table (with people taking their own change as they went, where possible), then count it all and make sure it still included a sufficient tip. All after many pints. I do NOT miss those days.
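The arithmetic in that ritual is easy to sketch. A minimal sketch, assuming drinks are charged individually, food is split equally, and tax and tip are flat percentages applied to each share (the 13% tax and 15% tip rates here are illustrative assumptions, not from the comment):

```python
# Split a bar tab: drinks charged individually, food shared equally,
# tax + tip applied proportionally to each person's pre-tax share.
# TAX_RATE and TIP_RATE are assumed values for illustration.

TAX_RATE = 0.13
TIP_RATE = 0.15

def split_tab(drinks_by_person, food_total):
    """drinks_by_person maps a name to that person's pre-tax drink total."""
    n = len(drinks_by_person)
    food_share = food_total / n
    owed = {}
    for name, drinks in drinks_by_person.items():
        pre_tax = drinks + food_share
        # Tax and tip scale linearly, so each share carries its fraction.
        owed[name] = round(pre_tax * (1 + TAX_RATE + TIP_RATE), 2)
    return owed

bill = split_tab({"ann": 18.0, "bob": 12.0, "cam": 30.0}, food_total=45.0)
```

Because tax and tip are linear in the pre-tax amount, per-person shares sum back to the grand total; the hard part at the table was only doing this in your head after many pints.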
Comment by clove 1 day ago
Comment by LoganDark 1 day ago
Huh. I'm not sure I ever use a pinky while touch-typing, except to hit right-backspace sometimes.
For that matter I don't home using F and J either -- I usually home with alt+tab / cmd+tab and right-ctrl / right-cmd.
Comment by varispeed 1 day ago
Comment by walrus01 1 day ago
Comment by CalChris 2 days ago
Comment by teeray 2 days ago
Comment by bombcar 2 days ago
Comment by vunderba 2 days ago
Comment by bombcar 2 days ago
Comment by vunderba 2 days ago
It reminds me of a family friend who's a bit older and did their scuba certification using dive tables, whereas when I did my PADI, I was able to use a dive computer.
Comment by toonbit 2 days ago
Comment by oblio 1 day ago
Comment by linsomniac 2 days ago
Comment by deadbabe 1 day ago
Then, for the final exam, drop the bomb: in person, handwritten, no outside references, mostly the same assignments we've done before. If you fail, it's over for you. If you stayed true and studied, it should be easy to pass. If you used AI all semester, you did it to yourself. Those who complain will have their past assignments audited, and if AI was used, they'll be reported for plagiarism. That will be the most valuable lesson.
Comment by sonzohan 2 days ago
My colleagues who teach hard-skills courses (like data structures and algorithms) either love AI and incorporate it into their teaching at every possible moment, or despise it the way high school math teachers despised graphing calculators when they were introduced nearly 30 years ago.
I teach soft skills classes to engineering students, and I'm unconcerned with students using AI. I write my problems in a way such that, if the student truly understands the assignment, prompting the AI to solve the problem and iterating on it takes a similar amount of time to doing the work themselves. AI is not very good at writing introspectively about the student. In other words, AI isn't going to be helpful when the homework question is "A fellow student comes to you asking for suggestions on how to maximize their chances at landing an internship. What advice do you give them that's immediately actionable?"
Try it: plug that into ChatGPT or your favorite LLM. It parrots the same generic tips everyone tells you, with very little on "how" to perform the action effectively. Read it, copy it into your advice document, get a poor grade. Try telling other students to take this advice. Note how they don't, because the advice isn't concrete enough for them to act on.
LLMs are also not very good at the follow-up question "In a previous assignment you gave specific and actionable advice to a peer on the job search. Which of these suggestions were so good you are now doing them?" A number of students write a "Mental Gymnastics" essay, claiming they are following all their suggestions (because they think that's what the professor wants to hear) while the evidence they provide demonstrates they are not. A student asking an LLM to write the essay for them consistently produces a digital 'pat on the back': a mental gymnastics essay that ultimately makes the student realize how unwilling they are to solve the #1 problem in their college career.
I've done away with exams wherever possible. I stick to project-heavy courses. What I've found to be far more concerning than AI use is the increasing loss of social skills and ability to cooperate within the younger generations. The number of students who would prefer to fail a class instead of talk to literally any human being is astounding.
The number of students who refuse to build soft skills and believe that tech is truly a meritocracy, where the only thing that matters is 'lines of code', there are no politics, and they'll never have to work on-call, crunch, or give code reviews, is also astounding.
Comment by pbgcp2026 2 days ago
Comment by banana_sandwich 2 days ago
Comment by oblio 1 day ago
Comment by pyalwin 2 days ago
Comment by EverMemory 2 days ago
Comment by Accountbar 16 hours ago
Comment by SamHenryCliff 2 days ago
Comment by llbbdd 2 days ago
Comment by hackable_sand 2 days ago
Oh
Comment by llbbdd 2 days ago
Comment by sarchertech 2 days ago
Comment by llbbdd 2 days ago
Comment by sarchertech 2 days ago
You just said that it was a waste of time. So was it or not?
> that option is also trivially available outside of college, it's called “email”.
How many experts have you cold emailed over the years and how much of their time have you taken?
Comment by llbbdd 2 days ago
To your second question: fewer than a hundred, but tens. Most people who are worth listening to publish their work and their thoughts. Email is free. Experts love to answer questions about their work; professors hate doing extra work for no extra pay. The incentives here are not confusing. How much time have I taken? Confusing question. These are real people with real passion, and they answer questions with that in mind. Professors are obligated to puke up an answer. I've gotten responses in most cases; in some I haven't. When I don't get answers it's because the targets are smart and busy. If I wanted more engagement with my random questions I'd offer money, and if I had offered money every time I'd still have spent less than I wasted on college. If I wanted to justify it, I'd say I learned enough to conclude that paying real money for another 3-6 years would have been less valuable than burning it for heat.
Comment by sarchertech 2 days ago
I think you completely misunderstood this interaction.
There are 2 possible explanations.
1. You are so smart/knowledgeable that the professor thinks you are beyond college.
2. You were acting like such an arrogant know-it-all that the professor was being sarcastic.
I’ve seen #1, but I’ve seen #2 many times.
You sound like you have a huge chip on your shoulder about not having a degree. I had the same issue at one point before I went back and finished (after working as a professional developer for a while), so I recognize it.
When I did go back, I asked questions in class, went to office hours to ask more, and did research projects with professors. Some back-of-the-envelope math says it would have cost me about twice what I ended up owing if I'd paid for an equal amount of time with whatever experts I could find.
My strong suspicion based on the few posts I’ve read is that your attitude is the reason you had such poor interactions with instructors.
Comment by llbbdd 2 days ago
Chip on my shoulder - no, and it's a silly label to begin with. Understanding that it's for other people who value the paper more than intrinsic understanding, yeah.
EDIT: I will concede in some way that I'm proud of not having a degree, and it does influence my thoughts on this topic. I've met some real idiots that do, and I don't consider it a serious differentiator.
Also looking up the thread - at my early jobs, I was surrounded by many people who were interested in educating me on any topic I could think of, because similarly we were all being paid for our time. The difference between that and school was the assumption that we were both motivated and capable.
Comment by sarchertech 2 days ago
2. These classes that you blew through weren’t upper level classes. They couldn’t have been because you wouldn’t have had the prereqs to take them. If you already had some knowledge of the field and didn’t need lower level classes, you could have talked to the department about testing out of some of them.
I know you didn’t walk into an upper level class on Automata theory and come up with the proofs on the spot.
No professor would in good faith tell you to go do your own thing based on what you’re describing.
If they thought you were very smart and sincere about learning, they’d encourage you to do independent study with them, do research, work with the department to move into higher level classes, or take cross listed graduate classes.
If they thought you were kinda smart, but a huge asshole, they’d tell you to go do your own thing because they didn’t want to deal with your crap.
This is all coming from experience as someone who came into school not needing the intro classes, and someone who used to be that arrogant.
Comment by tim-projects 2 days ago
Comment by sarchertech 2 days ago
Comment by balamatom 1 day ago
Why is that a resource of "once-in-a-lifetime" scarcity in the first place?
Comment by sarchertech 1 day ago
It’s really expensive, time-consuming, and difficult to gather experts together like that.
The closest anything comes to a research university is a national laboratory, or something like Bell Labs. But you’re unlikely to have access to those.
And you’re unlikely to ever have another time in your life when you can take 4 years to devote almost solely to learning.
Comment by balamatom 1 day ago
Comment by rvz 2 days ago
Comment by sarchertech 2 days ago
Comment by fl4regun 2 days ago
Comment by tekla 1 day ago