• Dumhuvud@programming.dev
    link
    fedilink
    English
    arrow-up
    25
    arrow-down
    1
    ·
    2 days ago

    Software Engineers

    Oftentimes I wonder what civil or mechanical engineers think about webdevs-turned-prompt-writers calling themselves “engineers”.

    • Holytimes@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      2
      ·
      2 days ago

      Every real engineer I have ever talked to gets pissed when a keyboard jockey calls themselves an engineer, AI or not.

      Coders aren't engineers, never will be, never have been. The engineer title was straight up stolen and misused by corpos and idiots to fluff up their egos. The entire term software engineer is a bullshit title for idiots who have zero respect for actual engineers, or are toadies to mega corpos who sold their self-respect for a bigger paycheck. Prompt engineers are even worse, and frankly fuck em all.

      They're as much engineers as a 3-year-old building with Lincoln Logs.

  • Amnesigenic@lemmy.ml
    link
    fedilink
    English
    arrow-up
    22
    arrow-down
    2
    ·
    2 days ago

    Loudly announcing your increasing incompetence to the world seems like a weird career move, maybe consider lying about that?

  • shoo@lemmy.world
    link
    fedilink
    English
    arrow-up
    20
    ·
    2 days ago

    Things I’ve realized while working with AI (Claude code):

    • It’s fantastic for very small macros and medium length scripts. Think dev ops stuff, pre-commit hooks, transforming data. Keep it small enough to manually review and something you can run without destroying anything important. This can massively boost your codebase QoL. [Double bonus for not wasting tokens to solve the same problem over and over]
    • It’s decent-to-good at debugging but not consistent with fixes. It can find some utf encoding edge case that might have taken you 1hr+ but suggest the dumbest bandaid fix you’ve ever seen. Also very good at spinning up unit test suites for basic edge cases.
    • Due to obvious training bias, it’s pretty good with common libraries and cloud platform infrastructure. It could probably help with writing a complex cron call, debugging regex or fixing an IaC config. On the flip side it won’t bother to use the latest package version or know your niche/new library.
    • It does better with greenfield because exploring your codebase introduces a ton of bias. It might try to fit in an ugly hack when a refactor to simplify everything is way easier.
    • It’s absolutely garbage with UI, just throws the most disorganized HTML together that isn’t reactive or reusable. OK enough for ugly internal stuff but God help anyone relying on it for that.
    • This is setting up to be the biggest rug pull in history. People that buy into it heavily just to save a couple bucks on engineer payroll are going to be fucked when they start ratcheting up the token price.

    All in all it can be useful when used with care but will never be a magic bullet.
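    For a sense of scale, the "small enough to manually review" class from the first bullet is something like this. A minimal sketch; the task and every name in it are made up for illustration:

```python
import csv
import io

def normalize_rows(text):
    """Tidy a CSV export: lower-case the headers, strip stray whitespace.

    The kind of one-off data transform that is easy to review line by line
    and safe to run on a copy of the data.
    """
    rows = list(csv.reader(io.StringIO(text)))
    if not rows:
        return []
    header = [h.strip().lower() for h in rows[0]]
    return [header] + [[cell.strip() for cell in row] for row in rows[1:]]

print(normalize_rows("Name , Age\n Bob , 42\n"))
# → [['name', 'age'], ['Bob', '42']]
```

    Small enough to eyeball the whole thing in one review pass, which is the point.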

    • Blackmist@feddit.uk
      link
      fedilink
      English
      arrow-up
      3
      ·
      2 days ago

      Yeah, fully agree with all that.

      I’ve got some godawful spaghetti code I don’t fully understand, and it’s pretty good at deciphering that and the bizarre labyrinth of code paths leading around it. But it’s absolutely no guarantee of working code, and in any project larger than a simple CRUD app, you are still going to need programmers who know about things like memory and databases.

      It often needs pointing at a solution you want, because as you pointed out, it’s fond of dumb band-aids. Like yesterday when it was trying to hook into mouse wheel events and create separate threads, when all it needed was an event on the dataset I was using to load a sub-dataset.
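      The fix described above (reacting to the data instead of UI events) is just the observer pattern; a toy sketch in Python, with every name hypothetical:

```python
class DataSet:
    """Toy dataset that notifies listeners when its rows change."""

    def __init__(self):
        self.rows = []
        self._listeners = []

    def on_change(self, callback):
        self._listeners.append(callback)

    def load(self, rows):
        self.rows = list(rows)
        for callback in self._listeners:
            callback(self.rows)

# Hook the sub-dataset's load to the parent dataset's change event,
# instead of to mouse-wheel handlers or background threads.
parent = DataSet()
detail = DataSet()
parent.on_change(lambda rows: detail.load(r["detail_id"] for r in rows))

parent.load([{"detail_id": 7}, {"detail_id": 9}])
# detail.rows is now [7, 9], refreshed purely by the data event.
```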

    • Archr@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      ·
      2 days ago

      This is basically what I discovered as well. I have found that AI writes code that is complex and “works” (at least most of the time), but it is heavily over-engineered and often contains design choices that make expanding functionality effectively impossible without a full refactor.

      When I tried having the AI fix a test failure, it would either fix the code, fix the test, or change both the test and the code, breaking everything else in the chain.

      I no longer use vibe coding because it is just faster/better for me to write the code.

      But for tiny scripts it is very good.

    • subtex@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      ·
      2 days ago

      This is pretty spot on from my experience as well. Also, the gap in quality between the Opus models and, say, GPT is vast.

      100% agree on UI code. Really awful output there regardless of model.

    • Davin@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      2 days ago

      Claude can do some medium-complicated sites from scratch relatively quickly. The problem is I’ve seen so many of these at work, not just from non-engineers but from peers too, that they’re easy to spot. AI sites/apps are going to be the new GeoCities.

      But when you want to move beyond the basic thing that impresses the c suites for some reason, it hits a pretty big wall in speed to output and needs a lot more hand holding.

      I fear that the c suites don’t really care about quality, just speed and saving money. So while I’m a much better developer than Claude (which is imo the best at the moment), I don’t think that makes my job secure. I have to use the AI, and it’s getting silly/scary religious here about it. We have to talk about how we used AI and how it’s making things better. And to make things worse, I don’t see a company that’s not drinking the Flavor Aid.

      It can be useful, and used right, you can do a lot of things faster. But the expectations from the top don’t align with the reality of the product, and us developers are being blamed for the gap.

    • Ledivin@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      arrow-down
      2
      ·
      2 days ago

      Man, I disagree with all of this. The frontier models are actually good, and basically everyone in my F500 company has been using them. The codebases I work on are super-legacy Java, where it does great despite us having like 75 different patterns for each task, and a massive front-end web repo, where it thrives because we’ve been extremely strict about typing and patterns leading up to this. It even does pretty well across repo boundaries, despite having significantly less context in those situations.

      I genuinely will never understand the people saying they suck. Are they worth the price? I have no idea; I’ve never used them for a personal project. But they are at least as good as a dev with 3-5 years of experience at this point. Our career is boned.

      • shoo@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        ·
        2 days ago

        I don’t doubt it’s possible to get better consistency but the juice is really not worth the squeeze for me. You end up churning through huge expensive models, orchestrating sub agents, writing out boilerplate hand-holding instructions (“please don’t break this, stop trying to commit to main, please lint ffs…”).

        I don’t use it for Java but that would make sense with rigid enterprise patterns and VeryVerboseNamesThatAreEasierForAModelThanAHumanFactoryClazz {...

        I don’t think our career is boned, more that all the juniors trying to get in are boned. Everyone who knows what’s going on transitions to a more hands-off architect role.

        But like I said, our tokens are heavily subsidized right now. When they pull the rug, code monkey jobs will start to get listed again (with lower salaries of course).

  • muusemuuse@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    4
    ·
    2 days ago

    That’s the problem. This is one of those things that you gain momentum in, not simply experience. You can lose that momentum.

    Tech bros are going to end up enslaving us to this shit.

  • Kaligalis@lemmy.world
    link
    fedilink
    English
    arrow-up
    21
    ·
    3 days ago

    Nah, AI isn’t that good. When you don’t properly review every single line twice, you get the most absurd bullshit you’ve ever seen.
    I use Claude Code Opus daily btw.

    • Nalivai@lemmy.world
      link
      fedilink
      English
      arrow-up
      15
      ·
      3 days ago

      That’s the funnest part. You lose your ability to code, and you do it by using a thing that isn’t even that good, so you don’t get anything out of it. Isn’t that great?

  • ferrule@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    10
    ·
    2 days ago

    We use it at work, and I have now disabled it for all the typeahead stuff. Far too many times it incorrectly guessed what I was doing, and it made using my TAB key (which inserts the proper two spaces) impossible.
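    If anyone wants to do the same: in VS Code, for example, the inline typeahead can be disabled separately from the chat features. A sketch of the relevant settings (assuming Copilot-style completions; key names vary by editor and version):

```jsonc
{
  // Turn off inline ghost-text suggestions so TAB indents again.
  "editor.inlineSuggest.enabled": false
}
```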

    The only place I still use it is for reading and identifying compiler errors. Even then it is only about 50% correct, as most times it falls into the “Oh, you are right, X isn’t the solution. Have you tried X?” loop. I have had a few bad interns, and even they were smart enough not to forget what they said in their previous sentence.

    • Smoogs@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      ·
      2 days ago

      This is why I’ve never tossed any of my developer bookmarks.

      I’ve been training new hires to look stuff up on Stack Overflow and in reference docs to fix code that went wrong after AI mucked it up. They aren’t even being taught to parachute into unfamiliar code in school.

      What a sad time line we are in.

  • ImgurRefugee114@reddthat.com
    link
    fedilink
    English
    arrow-up
    173
    arrow-down
    2
    ·
    4 days ago

    Lol! Losers. I’ve been programming for almost two decades and extensive use of AI hasn’t compromised my skills AT ALL! These slop machines can’t hope to compete with the quantity and magnitude of subtle bugs I write. My code was terrible long before I made bots have mental breakdowns trying to work with it.

    • Goodeye8@piefed.social
      link
      fedilink
      English
      arrow-up
      31
      ·
      3 days ago

      AI also gives you the benefits of a middle manager. If everything works as intended you take the credit but if something breaks that’s not your fault, AI made the mistake. If they try to put the blame on you just say you have 6 agents working on 6 different domains all cross-reviewing their commits and you can’t be expected to review every single line of code yourself. Time to play corporate like a damned fiddle!

  • Katherine 🪴@piefed.social
    link
    fedilink
    English
    arrow-up
    1
    ·
    2 days ago

    You still need to review and verify the code, actually implement it, and improve it if you use AI.

    If you just blindly accept it then you’re just lazy to begin with.

  • normalentrance@lemmy.zip
    link
    fedilink
    English
    arrow-up
    32
    ·
    edit-2
    3 days ago

    It feels like relying on GPS while driving around. If you know the roads well and just want some help with live traffic or somewhere you haven’t been before, it’s a decent tool.

    If you rely on it because you don’t want to think and just want to press the easy button, you’re going to have a bad time sooner or later.

    Back to software, I think there are a lot of people introducing concepts they don’t understand or can’t maintain (either from poor quality slop or it is just too advanced for their current level of understanding). You can do a few turns like this, until you’re stuck burning tokens in a loop without moving forward in a meaningful way.

    I try to avoid taking the easy route myself unless I’ve burnt too much time stuck on some small detail. Ultimately I feel it is super important to understand what you are delivering. Whether it is writing it yourself, copying a stack overflow post, or using an LLM. Once you commit and push to prod you’ve got to deal with that crap.

    • themaninblack@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      3 days ago

      Agree completely, but I wanted to add: you can also get into an incomprehensible mess without vibing. Just follow the serverless Flask tutorial, start writing raw SQL, and away you go!

      I asked Claude today about why a coworker was getting errors and it almost exploded.

  • dejected_warp_core@lemmy.world
    link
    fedilink
    English
    arrow-up
    45
    arrow-down
    6
    ·
    3 days ago

    (X) Doubt

    As a Sr. Engineer, I completely get that my situation may be wildly different from what’s cited in the article.

    Right now, I’m using AI “in the loop” rather than “as the loop”. That’s a big difference. And I’m getting my ass kicked routinely on review for dumb-ass things that I’m letting slide from AI-generated output. And rightly so. Plus, models routinely lead me down sub-optimal blind alleys while dreaming up really stupid ways to fix problems. The level of (re)prompting I have to provide to get decent-quality results suggests a post-grad with encyclopedic knowledge of software engineering as it exists online, but zero real-world experience. It’s both impressive and dangerous as a replacement for software engineering.

    In the mode I describe above, I’m not losing the ability to do anything. I can see how one could surrender some coding chops or familiarity with a whole language or stack, in favor of automation. But all you have to do is not do that.

    I will say that as a rapid-prototyping technology, it’s nothing short of miraculous. I’ve watched junior engineers knock together medium-weight applications, complete with browser UI/UX and decent workflow, in less than a week. This is great for showing value or putting something semi-functional in front of management and/or customers. But pivoting those prototypes into something maintainable is an utter nightmare. Depending on how beholden to AI and forever prompt-looping with “skills” and MCPs you want to be, I suppose it’s possible to just keep mashing the AI button. But at some point, you’re going to need to get inside there to fix security problems or bugs that elude this workflow. What then?

    • tinfoilhat@lemmy.ml
      link
      fedilink
      English
      arrow-up
      18
      ·
      3 days ago

      I joined a project that was forced to use some vibe-coded solution an intern cooked up, marketed as a “solution for data pipelining”.

      There are no tests, every semantic query recalculates embeddings every time, and it is held together with so much bubble gum and “glue code” that nobody feels confident in any of the data we’re showing our customer.

      It’s great for rapid prototyping, and then straight to the trash.

      • northface@lemmy.ml
        link
        fedilink
        English
        arrow-up
        12
        ·
        3 days ago

        Thing is, as we all know, prototypes rarely make it to the trash bin if managers and product owners have a stake in the project. Which becomes an even bigger problem now that so few humans are involved in producing said prototypes.

        I had a meeting with a customer who proudly proclaimed they do “full-on agentic coding” at their startup, and one of their developers mentioned their entire codebase had been rewritten three times in the week before the meeting took place. I do not have high hopes for their project ever being refactored by humans involved in anything other than light UAT before customer demo time.

        • Nalivai@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          edit-2
          6 hours ago

          prototypes rarely make it to the trash bin

          At one of my previous jobs, I maintained a product that people had prototyped for themselves to check whether the idea they were going to build actually worked under high load; it was full of parts added only and exclusively as stubs for the simulation. The idea turned out to be feasible, so they told the managers they could start working on the product, and got the answer that there was no need: the product was already sold to clients, and they just needed to package it and write documentation. Eventually they had to hire a whole department so we could actually build an app that was already sold and shipped.

    • Nalivai@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      ·
      3 days ago

      And I’m getting my ass kicked routinely on review for dumb-ass things that I’m letting slide from AI-generated output.

      Now imagine if you aren’t that experienced and the reviewers aren’t that thorough, or, and this is the most depressing part, the review process doesn’t exist. You get people, even senior engineers, who push that sub-optimal, barely working code, and because their project isn’t that complicated, it somehow works, so they continue with it; after some iterations they end up with code that nobody wrote, nobody knows how to maintain, and nobody reads. But because a lot of modern frameworks are made so that a monkey sitting on a keyboard can make something barely work, a lot of these projects haven’t collapsed in on themselves yet.
      And that’s how you get a generation of programmers who lost the ability to program.

  • vogi@piefed.social
    link
    fedilink
    English
    arrow-up
    49
    ·
    3 days ago

    It’s a silver lining of AI that you can easily tell who’s a big baby idiot and who’s actually worth engaging with.

    • very_well_lost@lemmy.world
      link
      fedilink
      English
      arrow-up
      49
      ·
      3 days ago

      Preach.

      The AI “revolution” is the thing that finally killed my imposter syndrome as a software engineer. Not because I can write better code than AI (that’s a very low bar), but from listening to all these breathless idiots talk about how they’re “10x-ing my productivity!” or how “AI has replaced search for me!” or how “In 6 months no one will have to manually write code anymore!”

      • Zagorath@quokk.au
        link
        fedilink
        English
        arrow-up
        16
        ·
        3 days ago

        In 6 months no one will have to manually write code anymore

        For the last 18 months

      • FosterMolasses@leminal.space
        link
        fedilink
        English
        arrow-up
        4
        ·
        2 days ago

        Fr, I’ve never felt more confident about my coding capabilities, and I’ve even picked back up some old projects I had shelved indefinitely due to that syndrome.

        Now every shitty line of code I produce feels like pole-vaulting over 1000 other future applicants on my career path lol

        • very_well_lost@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          ·
          2 days ago

          Hear hear.

          I never really had any interest in “personal projects” until AI came along. Not because AI finally gave me the tools to work on those projects, but because it completely changed my perspective on the craft of writing code and made me appreciate the value in actually creating things by myself, for myself.

      • schema@lemmy.world
        link
        fedilink
        English
        arrow-up
        14
        arrow-down
        1
        ·
        edit-2
        3 days ago

        Similar for me. What I find ironic is that AI has already run into a brick wall. Its inherent statelessness by design means that AI is unlikely to be suited for anything more than isolated, well-defined tasks in the near future. Still usable as a tool, but without someone who is actually experienced, it will result in disaster.

        And even in smaller tasks it can fuck up, especially if the person prompting it is incapable of writing the code themselves, since they don’t know how to properly design it and don’t spot the issues. Like everything with AI, it looks impressive at first glance, until you look at it for more than 10 seconds and spot the metaphorical 6th finger.

        What we see currently with AI getting “better” at coding is more or less duct tape to make it work. Basically, they create agents to bolt on state: more layers between user and model, iterative processes to make the answers better, and so on, plus “memory”, which in essence is just an ever-growing prompt managed by the agent. But in the end this won’t fix the inherent problem, so it will only do so much, and it’s already hitting another ceiling: it introduces state decay. With the agent method it’s not really possible to “take away” memory, so if you gave it multiple versions of the same code (as you would if you work with AI), the AI never really forgets about the old code. It can suppress it through agent instructions (more duct tape), but the more there is, the more it bleeds through, which can make the AI reintroduce old code or base assumptions on outdated things.

        There is no fix without changing the inherent way the models work, which would introduce complexity beyond what is currently feasible in computing (and current AI is already gobbling up all computing resources as is).
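        The "memory is just an ever-growing prompt" point above can be sketched in a dozen lines (purely illustrative, not any real agent framework's API). Note how the stale first version of the function never actually leaves the context the model sees:

```python
class AgentMemory:
    """Toy model of agent 'memory': a transcript replayed into every prompt."""

    def __init__(self):
        self.history = []

    def add(self, role, text):
        self.history.append((role, text))

    def build_prompt(self, new_request):
        # Every past turn, including superseded code, is replayed verbatim.
        lines = [f"{role}: {text}" for role, text in self.history]
        lines.append(f"user: {new_request}")
        return "\n".join(lines)

mem = AgentMemory()
mem.add("user", "write foo()")
mem.add("assistant", "def foo(): return 1")
mem.add("user", "rewrite foo()")
mem.add("assistant", "def foo(): return 2")

prompt = mem.build_prompt("fix the bug in foo()")
# Both 'return 1' and 'return 2' are still in the prompt: the old
# version can only be suppressed by instructions, never removed.
```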

      • foodandart@lemmy.zip
        link
        fedilink
        English
        arrow-up
        6
        ·
        3 days ago

        As someone who wanted to learn to write code but never had the time, what’s so bad about writing it manually?

        Seems like if you can learn to do it well, you will be fairly well set with that skill.

        • very_well_lost@lemmy.world
          link
          fedilink
          English
          arrow-up
          16
          ·
          3 days ago

          If you’re someone who cares at all about the quality and consistency of your craft, there’s absolutely nothing wrong with manually writing code.

          If you’re a misanthropic “techno-feudalist” who thinks of code as nothing more than an asset to sell, then pumping out as much code as quickly as possible without any human intervention is a very attractive proposition.

          Tech, sadly, is absolutely infested with these people at all levels.

        • FauxLiving@lemmy.world
          link
          fedilink
          English
          arrow-up
          8
          ·
          3 days ago

          You will still need to learn programming manually.

          The process of struggling to understand and synthesize working code is a critical part of learning. Skipping it feels easier, but you’re hurting your ability to understand coding.

          Sure, you can make an LLM generate code and if you’re inexperienced it can outperform you on the basic tasks that you’re given as exercises. This is a trap that a lot of students fall into. It’s very easy to let LLMs do the ‘hard work’ part of learning while you just read the textbook or watch a video. Unfortunately, the hard part is the part that builds your skillset.

          It’s just like how you can’t just watch a video about physical fitness and then use a robot to lift the weights for you. Sure, you get to the end of your sets faster and you’re not physically tired and sore but you won’t actually benefit in the ways that matter.

        • JackbyDev@programming.dev
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          1
          ·
          3 days ago

          There’s nothing wrong with writing code manually. Over the past few months LLMs have gotten a lot better at writing code than they were before, but they can still make weird mistakes.

    • TotalCourage007@lemmy.world
      link
      fedilink
      English
      arrow-up
      7
      ·
      3 days ago

      Honestly, yeah, it’s like wearing a huge red AI flag. Can’t imagine being stupid enough to fall in love with a not-secure CHATBOT.

  • collapse_already@lemmy.ml
    link
    fedilink
    English
    arrow-up
    54
    arrow-down
    1
    ·
    3 days ago

    We have been interviewing for entry-level positions, and the new grads know less than ever before. I don’t really care what they know; I am looking for evidence that they can think. But I usually ease them into thinking scenarios by asking easy foundational questions, like how many bits are in a byte. You would think I was asking them to explain the Schrödinger wave equation… One candidate was wavering between 13 and 17…

    • Kaligalis@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      ·
      3 days ago

      It’s called entry-level for a reason. Back in my day, you could start such a position without any formal education, as long as you were willing to acquire the required skills and knowledge without needing a nanny. We had to go to the library or actually buy books for the knowledge. Now they can just use the internet.
      The actual requirement for doing the job never changed. And it’s not knowledge.

      • collapse_already@lemmy.ml
        link
        fedilink
        English
        arrow-up
        3
        ·
        3 days ago

        My company probably doesn’t get the best candidates (defense contractor that pays somewhat less than market rate), but yeah.

    • foodandart@lemmy.zip
      link
      fedilink
      English
      arrow-up
      6
      ·
      3 days ago

      …easy foundational questions like how many bits in a byte…

      GTFO.

      I mean, yeah… perhaps it’s to be expected. https://www.theverge.com/22684730/students-file-folder-directory-structure-education-gen-z - if this is true, it’s because the methods of using computers and various devices have been infantilized and made too easy.

      Yeah… let’s obscure the inner working of computing and make the process as opaque to the user as possible. It’ll be fine… no negative consequences at all.

      Colleges do not matriculate anymore (that’s in the British sense of the word, where one has to show actual knowledge in the degree field one is seeking before enrolling; TBH, they haven’t done so for a very long time), so this is what we get.

      Higher ed in the US is just about da moneys…

      • mnemonicmonkeys@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        4
        ·
        3 days ago

        I can’t wrap my head around how the people in the article get anything done on the computer.

        Sure, I could have File Explorer search for a file in theory, but it’s ridiculously slow and often fails to find the files I actually want. It’s way faster to just keep things organized on a day-to-day basis.

        • foodandart@lemmy.zip
          link
          fedilink
          English
          arrow-up
          2
          ·
          3 days ago

          Oddly enough I’ve always sorted current working files by date.

          Then when backup time comes I’ll look at the last dated file in the archive, then go to that date in my current work folder and everything newer goes into the backup. Once it’s in the main backup folder, I then sort the files into year and project.

          Still, on my system (a MacPro from the Olden Times when Steve Jobs was still kicking) I have 4 drives, so it’s crucial to know what is where.

      • collapse_already@lemmy.ml
        link
        fedilink
        English
        arrow-up
        4
        ·
        3 days ago

        It is ridiculous. I am interviewing for embedded systems development where we frequently write to specific bits in a register. I am sure these kids have had to learn something, but I can’t figure out a polite way to ask them to give me some examples of what.
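        For reference, the foundations both questions probe (byte width, individual register bits) fit in a few lines; sketched here in Python rather than the C you’d use on the actual hardware:

```python
BITS_PER_BYTE = 8  # the interview question

def set_bit(reg, n):
    """Return reg with bit n set, e.g. enabling a peripheral flag."""
    return reg | (1 << n)

def clear_bit(reg, n):
    """Return reg with bit n cleared."""
    return reg & ~(1 << n)

def read_bit(reg, n):
    """Read bit n of reg as 0 or 1."""
    return (reg >> n) & 1

# Set bit 3 of an empty register: 0b0000 -> 0b1000 == 8
assert set_bit(0, 3) == 8
```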

        • foodandart@lemmy.zip
          link
          fedilink
          English
          arrow-up
          10
          ·
          3 days ago

          There was a series of questions I heard in a political discussion about whether or not any given politician understood what the internet was, and if they really had any idea of how to regulate it.

          They are… “Explain the differences between, the internet, the world wide web, a search engine and a browser.”

          If the person could not answer those 4 questions, well… they shouldn’t have been trying to write legislation about it. I think that still stands as a basic foundational step to start from.

                • foodandart@lemmy.zip
                  link
                  fedilink
                  English
                  arrow-up
                  2
                  ·
                  3 days ago

                  Well, I for one am delighted to have found Lemmy and, in a small way, to do my bit to resurrect a minuscule, tiny bit of it.

                  It’s Mandelbrot patterns, all the way down… right? Smaller iterations of the larger seed.

                  Best we can do…

    • Jako302@feddit.org
      link
      fedilink
      English
      arrow-up
      4
      ·
      3 days ago

      One candidate was wavering between 13 and 17…

      Please tell me that’s a joke. Or were they trying to switch fields and were a baker or something before? I just can’t accept that someone who would struggle with that question, even in a stressful situation, ever took a single comp sci class.

      • collapse_already@lemmy.ml
        link
        fedilink
        English
        arrow-up
        4
        ·
        3 days ago

        I wish it was a joke. Maybe they were deliberately getting the answer wrong to waste our time, but the body language was not consistent with someone fucking with me.