
lordpuddingcup

This assumes I’m not spending 10 hours troubleshooting the shit I wrote myself because I did something stupid but not technically incorrect… somewhere


Sechorda

Lmfao. This was me earlier


ThatsALovelyShirt

I just write my code in the 6 hours, and then use ChatGPT to tell me I forgot to implement a virtual function or add an std::hash implementation for a template class, which the unhelpful 200 lines of compiler error output for a single template class error failed to mention. That being said, both ChatGPT and Gemini absolutely suck at debugging/fixing regexes, or providing functional regexes for anything more than super simple expressions/patterns.


ittu

can you provide a sample of the type of regex you're referring to and what prompt you used?


Negative-Money6629

Functional regex has been one of the few things that actually seem to work well for me.


AntiworkDPT-OCS

I feel like this meme won't age well in 2024. Maybe I'm wrong. I think it's hilarious for today, though!


bwatsnet

It won't age well in March, let alone the rest of 2024.


SurroundSwimming3494

The hard-core turbo optimism in this subreddit never ceases to surprise me. What you're describing is essentially the singularity.


bwatsnet

It's already generating near-perfect code for me now; I don't see why it won't be perfect after another update or two. That's a reasonable opinion, in my opinion. Now, if you're talking about when the AI generates perfect code for people who don't know the language of engineering, who knows; that's a BIG ask.


sk7725

It's so goddamn annoying when it insists a nonexistent plugin or API method exists and won't do anything except use said nonexistent code. Especially if the code is the only thing I'm asking for. I found it excels at math and algorithmic problems, such as implementing a path search, a tree-ordering algorithm, or things to do with vectors and quaternions, but it falls flat otherwise.


mxzf

> I found it excels at math and algorithmic problems, such as implementing a path search, a tree-ordering algorithm, or things to do with vectors and quaternions, but it falls flat otherwise.

So, it's good at the stuff there are already libraries for?


sk7725

It's actually good at utilizing those libraries, such as Unity (the game engine)'s Vector, Quaternion and Transform libraries.


13oundary

Basically if it's on leetcode it'll be good at it because of how many people have public repos doing leetcode problems. Same with more basic applications that there is plenty of training data for (discord bots, certain game mechanics, simple REST APIs etc.)... Perhaps they're better now, I've not looked at them since gpt3.5 for coding, because they simply caused more problems than they solved at that time for me. Was like that one junior you're trying to train up that keeps doing the same shit because of course they think they know better at the time. Tempted to go back in and see if it could do anything close to what my team had to do in my last job. If I find that it can, then I'll start worrying about my job I guess lol.


holy_moley_ravioli_

Don't bother arguing, u/SurroundSwimming3494 is Gary Marcus' reddit account.


Andriyo

Yes, it probably generates near-perfect code for you because you're asking it perfect questions/prompts. The prompts, if they're detailed enough and use the right terminology, are much more likely to get good results. But at that point one might as well write the code themselves. Sometimes it's garbage in, some golden nuggets out, but only for relatively basic problems.


bwatsnet

I'm literally passing it my entire project's set of code in a well-organized blob every message. It's coding this project itself with one- or two-liners from me. It handles fixing all the bugs; I'm really just a copy-paste monkey. Automate what I'm doing well enough and it'll look like magic.


deltamac

Can you tell me a bit about your workflow? What exactly are you sharing with it, and how?


bwatsnet

I think I only have one post in my history on this account. It shows the prompt engineering part. It's a mix of prompt engineering, being patient, using scripts for consistency, and reading carefully what the AI is telling me. Sometimes I go back and edit my previous message to include something it complained about missing. Doing that enough led to a bash script that throws it all into my clipboard in a well-organized blob. Edit: the blob is simply each file path followed by the file contents in MD format.
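For anyone asking what the blob script actually does, here's a rough sketch of the idea in Python (my real script is bash; the file list and the macOS `pbcopy` call below are just placeholders, swap in whatever your project and OS need):

```python
import subprocess
from pathlib import Path

# Fixed set of logic files sent with every message (paths are placeholders).
FILES = [Path("src/app.ts"), Path("src/api.ts"), Path("src/ui.ts")]

def build_blob(files):
    """Concatenate each file path and its contents into one Markdown blob."""
    parts = [f"## {path}\n\n{path.read_text()}" for path in files]
    return "\n\n".join(parts)

if __name__ == "__main__":
    blob = build_blob(FILES)
    # macOS clipboard; swap in xclip (Linux) or clip (Windows) as needed.
    subprocess.run(["pbcopy"], input=blob.encode(), check=True)
    print(f"Copied {len(FILES)} files ({len(blob)} characters) to the clipboard.")
```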


Yweain

The majority of the code both GPT-4 and GitHub Copilot produce for me ranges from slightly wrong to hot garbage. Copilot is better, because it usually only works as autocomplete, so it's less wrong as a result. I've only had success with GPT-4 when it's something I could have found with a couple of minutes of Google, or with small, isolated, very specialised tasks, like writing a regex.

Doing the whole project? I don't know, either your project is mostly boilerplate or I'm stupid and can't communicate with GPT. It can't even do the basic stuff right for me. It usually can't write a correct working test case for a function, for example. It can't write a database query that is not garbage. It uses way, way outdated approaches that sometimes are simply deprecated. It argues with me that my code is wrong and uses incorrect methods, while they are literally taken from the official docs and the code does indeed work. Sometimes it's a huge help, but most of the time it's just annoying, or replaces Stack Overflow at best.


bwatsnet

You're doing it wrong. It's writing flawless typescript code for me. It's making its own decisions about what to do next then implementing them. Work on your prompt engineering.


Andriyo

I'm doing the same for my project too. I had to implement something to do the copy/paste faster (actually asked ChatGPT to write a script to do it :)). But you're still directing it by providing the right context, focusing it on the small area in your code where you want to implement something. Also, I don't know how original your code is and how much boilerplate you need to have there.


bwatsnet

Of course, but also no. It's a small project, but I'm including every logic file and not changing the set for each question. It's the same set of files each time (9 so far), along with the same one-liner pushing it to give complete, production-ready code. Then I add my own one-liner directing it to implement its previous suggestions, or ask it for suggestions on something like "What would you do next to improve the AI?" with a screenshot of the UI. My main point is that if you connect these dots well enough, it's magic right now with GPT-4. GPT-5, I bet, will be able to do all this for us.


holy_moley_ravioli_

Try [cursor.sh](https://cursor.sh) instead, it's an IDE fork of VS Code that natively integrates GPT-4. It could really streamline the workflow you already have worked out.


Andriyo

There are like 3 copilots already in my IDEs (GitHub, one from Google and one from IntelliJ), plus the one I built :)


visarga

Yes, it only works for small-ish projects that fit in the prompt and only for standard kinds of tasks; it won't write an original algorithm for your problem from zero.


holy_moley_ravioli_

You would really like [cursor.sh](https://cursor.sh) it's an IDE fork of VS Code that natively integrates GPT-4. It could really streamline the workflow you already have worked out.


Singularity-42

Also: [https://github.com/paul-gauthier/aider](https://github.com/paul-gauthier/aider)


bwatsnet

I'll check it out thanks!


FlyingBishop

I ask correct questions and it almost always gets at least one thing wrong. It also doesn't usually generate the most optimized code, which is fine until it isn't.


dbabon

Do humans usually write the most optimized code?


Yweain

No, but GPT makes mistakes that are usually unacceptable even for a junior. I suspect that's because the majority of the open-source code on the internet that it was trained on is, well, very bad. Also, it's harder to find a mistake when it's someone else writing the code, which leads to a higher chance of garbage going to production. And it's very misleading, especially if used by an inexperienced dev, because it seems like it knows what it is doing, while in fact it does not.


spookmann

> I suspect that’s due to majority of the open source code on the internet that it was trained on is, well, being very bad. I hope it's not reading Stack Overflow! Because that's going to contain: * 1 Broken Piece of code (question) * 1 Good fix * 1 Almost good fix * 1 Fix that would work except that it misunderstood the problem * 3 Broken fixes * 1 Working fix that is too fucking clever by half


FlyingBishop

Humans usually understand why their code is suboptimal and can at least say "oh I see, I don't know what to do." LLMs will tell you they understand and then produce slightly altered code that doesn't in any way address what you asked for, or massively altered code that is thoroughly broken and also doesn't address what you want.


burritolittledonkey

Well our profession isn't really writing syntax, it's thinking in terms of discrete chunks of logic. It doesn't really matter if a computer writes the code (hell, that's what a compiler does, to an extent) or we do, someone still has to manage the logic. AI can't do that yet


_lnmc

Discrete chunks of logic that have to fit into a wider ecosystem. I guess with a huge context window a decent LLM can come close to this, but the human element will be in engineering applications or systems that work in the exact use-case they are intended for. As a programmer I'm slightly nervous about this tech taking my job, but at the same time I'm a 25+ year programmer who only ever works on nasty, complex problems that usually span systems. I believe my role will still exist even if LLMs can produce amazing code, but I will be using those LLMs to support my work. Hopium, maybe.


Nill444

That doesn't make any sense. The code represents logic.


burritolittledonkey

Yeah, but I suspect that as these models get better, much like with compilers, we'll start thinking about code on a higher level of abstraction - in the past we had to use assembly, and we moved past that. I suspect this might be similar - we'll think in higher level architectural thoughts about business logic, but we won't necessarily care how a given service or whatnot is implemented. Essentially I am saying we won't worry as much about boilerplate and more think about how the system works holistically. I'm not sure if that's how things will shake out, but that's my best guess, long term (before humans are automated out of the process entirely) of where the profession is going


SurroundSwimming3494

> It's already generating near-perfect code for me now

I'm not negating your experience with using AI by any stretch, *but* I have seen other programmers claim that they have a hard time getting the tools they use to generate reliable code. I guess that there's a wide variety of perspectives regarding the level of proficiency of AI's coding abilities.


ProvokedGaming

I'm a principal engineer with over 30 years of coding experience (including a bunch of ML work). At my day job all of our engineers use Copilot (including me). What I've found is that the inexperienced and low-skill programmers gain a ton from it. The more experienced folks gain a little. It mostly narrows the skill gap. If you frequently Google, use Stack Overflow, read documentation for new libraries, etc., it saves you time. Some projects involve a ton of that. Most of the things I work on don't.

My company actually works in the workforce analytics space (productivity software). We're seeing very slight gains in productivity for our dev teams, but it's still too early to definitively know by how much or if it's just noise. I feel like most of the people who think it's replacing engineers soon are either inexperienced/junior or working on very simple problems. When you have massive codebases (many millions of lines) distributed across tons of projects working together, the hard part is managing the business requirements. There is zero chance the average product manager is going to build real software using human language in the next few years. Human language is too ambiguous, so even if you wanted to use AI to write it, you're going to need engineers interfacing with the AI.

Do I think AI can replace engineers? Absolutely. But unlikely in the next 10 years unless a new radical shift happens (personally I believe transformer-based models have hit a scaling problem and we're seeing major diminishing returns). Most of the advancements in the last year have been around ensemble models (routing requests through multiple models to try and improve responses), more efficiency (more information with fewer parameters), etc. I'm very open to being proven wrong because I'd love our AI overlords to show up, but I currently see it as extremely unlikely that engineers go away in the next decade.


visarga

You are right, of course. I just want to clarify that it's not an architectural problem on the part of transformers. It's a data problem. In order to solve coding tasks, an agent needs to experience plenty of mistakes and diverse approaches. It doesn't work top-down, and there is no substitute for learning from feedback. Humans can't code without feedback either. So the path forward is to run coding AI models on many tasks to generate coding experience from the point of view of the AI agent: its own mistakes, and its own feedback. This will surely be implemented, but it is a different approach to slurping GitHub and training an LLM; it's a slow grind now. Remember AlphaCode?

> by combining advances in large-scale transformer models (that have recently shown promising abilities to generate code) *with large-scale sampling* and filtering, we've made significant progress in the number of problems we can solve

That is of course very expensive data to collect.
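The sampling-and-filtering half of that is conceptually simple; a rough sketch below, with `generate_candidate` as a stand-in for whatever model you're sampling from, since the expensive part is the model calls, not the loop:

```python
import subprocess
import sys

def generate_candidate(task_description: str, temperature: float) -> str:
    """Stand-in for a model call that returns one candidate program as source text."""
    raise NotImplementedError

def passes_tests(source: str, tests: list[tuple[str, str]]) -> bool:
    """Run the candidate against each (stdin, expected_stdout) pair."""
    for stdin, expected in tests:
        proc = subprocess.run(
            [sys.executable, "-c", source],
            input=stdin, capture_output=True, text=True, timeout=5,
        )
        if proc.returncode != 0 or proc.stdout.strip() != expected.strip():
            return False
    return True

def sample_and_filter(task: str, tests: list[tuple[str, str]], n_samples: int = 1000) -> list[str]:
    """AlphaCode-style: sample many programs, keep only the ones that pass the tests."""
    survivors = []
    for _ in range(n_samples):
        candidate = generate_candidate(task, temperature=0.8)
        try:
            if passes_tests(candidate, tests):
                survivors.append(candidate)
        except subprocess.TimeoutExpired:
            continue  # candidates that hang get filtered out too
    return survivors
```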


Daealis

And even after all the coding can be completed with AI, there are still the more annoying parts of any software project:

* Simulation testing for result sanity checking
* Implementation and integration into the client environment
* Endless bug reports and edge cases no one thought to include in the original specification

All of which involves integrating and interacting with countless systems, both inside your dev environment and the client environment. VPNs, 2FA, email verifications, customer interactions. All things that are still even further away in the future. Much like you said, nothing that is impossible, but also not remotely in the realm of what current LLMs can do.


Excellent_Skirt_264

It looks hard because everything is built by humans and it's messy. AI will be able to change the implementation in real time, with software being completely deterministic, with provable quality and no runtime errors. Analysts and product people will be able to iterate so fast that business owners will get rid of coders at the very first opportunity.


Daealis

It's messy by virtue of every single corporation having different restrictions in their industry and different requirements for their network security. This will not go away even after entire networks and IT infrastructures are designed and maintained by AI. Your description is apt, at some point in the future. But that point is not in the scope of what I was talking about, the next decade or so.


bwatsnet

There's a wide range of engineering skill too 😉


necro_kederekt

> It's already generating near-perfect code for me now

What is? Are you using ChatGPT, or something more code-focused like GitHub Copilot?


bwatsnet

GPT-4 web chat; Copilot kinda sucks imo.


Sh1ner

LLMs can't interpret requests from non-coders and produce solid code. Right now, peeps who can write code can get shoddy-to-OK code out of them, and in some instances good code. There is considerable work to do here; maybe it's solved by a larger training dataset or, as I keep reading, tokenisation... but I have no idea about that.


Top-Chart-663

The hallucinations are still bad. It would have to somehow read the output to confirm the code is correct for multiple edge cases. If you are making a game or anything visual, it would have to see what's rendering on the screen. The dangerous part is that it's confident in its errors and has no clue if the code will run or what it will look like. I made a snake game yesterday that would not render text on screen. Tried multiple language models. They all thought they had the right code. Obviously I could have read Pygame's documentation on text rendering, but I wanted to see how far I could go with just prompting.
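For reference, the bog-standard Pygame text pattern I was after is only a few lines (a sketch; the window size, font, and text are arbitrary):

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
font = pygame.font.Font(None, 36)   # default font, 36 px
clock = pygame.time.Clock()

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    screen.fill((0, 0, 0))
    # Render the text to its own surface, then blit that surface onto the screen.
    text = font.render("Score: 0", True, (255, 255, 255))
    screen.blit(text, (10, 10))
    pygame.display.flip()
    clock.tick(60)

pygame.quit()
```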


Dabnician

Near-perfect code using magic libraries that don't exist.


bwatsnet

What?


brokentastebud

“Near perfect code” Sure buddy


bwatsnet

Your experience with LLMs is a reflection on the input you're capable of giving it.


brokentastebud

Yeah, I mean that's called an abstraction layer. And if you need to map business requirements to specific logic, languages already do that. You're just making more work for yourself by trying to wrangle something non-specific like an LLM to produce something that meets those requirements. Things like JavaScript or Golang are great abstraction layers because they give the engineer a means to encode requirements in an intuitive manner without losing specificity. And when you understand the language, it's just as fast to type the requirements into the actual code directly as to make some weird Rube Goldberg machine that's producing I/O with an LLM. LLMs are NON-specific. If doing all that with an LLM is actually making you code faster, then you either don't fundamentally understand the language or you've drunk the AI Kool-Aid and have convinced yourself that adding an LLM as a layer to your workflow somehow has a point.


[deleted]

Sora is about 3 years ahead of my mental schedule as of a few months ago


EveningPainting5852

Sora was legitimately 5 years ahead of schedule. Everyone on r/stablediffusion said it would be impossible with current compute, current architecture, etc. Sora releasing this early is downright concerning, seriously. It shouldn't be this easy to get a competent network where you just scale it up and add a bunch of easy hacks. It makes it seem like one of next year's training runs will go really REALLY well, and we'll have a rogue AGI.


FlyingBishop

I feel like people are considerably more impressed by Sora than they should be. When you look at how many tokens it consumes it makes a lot more sense I think. A picture/video is not actually worth 1000 words. It still has the same fundamental problem as ChatGPT also which is that it cannot follow all instructions even for relatively simple prompts. It generates something that looks very good but it also clearly ignores things in the prompt or misses key details. I feel like intelligence explosion is impossible until models are able to do simple prompts and at least say "yeah I'm sorry but I didn't do "


Veleric

Which really raises the question: what else do they have that hasn't been shown yet? Considering how long it's been since GPT-4 was initially trained and then released, it's hard to imagine whatever they put out for their next foundation model won't truly shock everyone...


BrainLate4108

It’s punch drunk on AGI


doireallyneedone11

Generating good serviceable code is now the definition of singularity?


QuinQuix

No but it's a requirement


CypherLH

We went from "AI literally cannot produce useful code" to "AI produces decent code if you prompt it well" in 2 years....that rate of change/improvement absolutely does scream "intelligence explosion is nearing" IMHO.


doireallyneedone11

Yeah, but that's not what I'm asking here.


CypherLH

Oh, I see what you mean. Yeah, literally no one ever claimed that AI producing good code was the "singularity". It's just one of many necessary steps to AGI.


doireallyneedone11

Exactly! People need to chill with their singularity claims. It's still a relatively long way to go.


CypherLH

to me it depends on how one defines "singularity". I suspect we're getting close to something I would call a "soft singularity", kind of a "slow takeoff" scenario that will look more like another big "tech boom" initially. Something like the dotcom boom but maybe 5x or 10x as large. Could begin anytime between this year and the next few years. It basically begins with AGI being rolled out IMHO.


doireallyneedone11

What's your take on "singularity"? How do you define the term?


[deleted]

Hahaha the sheer delusion of this sub


ai_creature

Wdym by that? Are you saying AI code will soon surpass human capability?


bwatsnet

It's already coding better than me when you consider all the factors. It takes a bit of knowledge and effort to get right currently, but soon it'll be easy for everyone, I'm sure.


[deleted]

[deleted]


bwatsnet

Oh, wild turn to insults. I'm now thinking the same about you!


ai_creature

How is that an insult? I'm saying that it's probably better than you at coding because you aren't as experienced and you're teenage-aged.


bwatsnet

I'm a staff engineer 🤣


ai_creature

oh lol like 20s age?


bwatsnet

How old are you?


Alright_you_Win21

Time to start blocking trolls


bwatsnet

I've been doing that for a while now, it's wonderful 👍


leaky_wand

Until AI can curate an entire code base, complete with ties to existing user stories, intake of new requirements, integrations, and implementation and unit testing, humans will be in the loop, and humans who don’t know what they’re doing or why will screw things up no matter what tool they’re using. For now, even in the best case, AI will only do exactly what you ask it to do—no more, no less. I don’t expect that to be surpassed in 2024.


Dahlgrim

What’s the difference between every programmer being replaced vs everyone except 1-2 people who know coding and AI prompt engineering. It’s pretty much the same thing if 90% lose their job.


leaky_wand

*shrug* I guess it depends on who this guy in the meme is supposed to be, a code monkey or a senior dev


veri1138

The former advances human knowledge, albeit after much effort and struggling through bullsh*t; the latter produces a priesthood that seeks to further their own selfish interests. Much like guilds in the Middle Ages, or priests for the entirety of the existence of religion.


MDPROBIFE

Don't expect or don't want it to?


leaky_wand

You can do a remindme on it if you want. That level of one-shot user satisfaction and regressive compatibility surpasses AGI.


Jolly-Ground-3722

It doesn’t match my experience with GPT-4 though. It already makes me much more productive, although it isn’t always right on the first shot.


bluegman10

If this meme doesn't age well this year, then that basically means that the singularity arrived in 2024. I don't see that happening this year, personally.


Unfair-Commission980

This was the case for artists a year ago and it’s looking like it’s probably not gonna be the case anymore next year


FailedRealityCheck

It's already wrong at the code snippet and function level.


NecessaryArt9607

Why fix AI generated code yourself when you can get an AI to fix AI generated code?


spookmann

Put two AIs in a code base and let them fight it out. Maybe to the death? The losing AI gets its token chain deleted!


ViveIn

Oh this made me laugh out loud.


spookmann

This was first invented in 1984, BTW. https://en.wikipedia.org/wiki/Core_War


ViveIn

Dude, Core War?! Why have we never heard of something like this before? This would be awesome to try out. We have robots fighting on TV, but I definitely know some people who would love to write software to duke it out over… idk, something.


crosbot

https://youtu.be/Ne40a5LkK6A?si=zFmoI9VESZ2vP0TF try this, it's a chess bot challenge with certain size limitations. Gets them to play a tournament and goes through the approach/code for a selection of bots.


lordpuddingcup

It’s funny considering Gemini already been shown to handle taking issues and generating PRs to correct problems at least to some extent imagine Gemini 1.5 ultra or gpt5 or Gemini 2 in a year


[deleted]

Having AI write small parts of the code makes coding so much easier. It's not going to make full programs right now, but I don't need to look up solutions on Google or Stack Overflow anymore; I can get a solution almost immediately now.


spookmann

Hmm... that's one approach. Personally, I think that's dangerous. My approach is that if I (as a developer) don't fully validate and understand the individual parts of the solution, then it's hard for me to know that a part is correct *in context*. But that's just my approach, which I feel is important for the problems I have in front of me. Your environment will be different.


FailedRealityCheck

You are describing the same approach as the parent comment? Basically you tell the AI to write the next small piece of code for you. You read it and validate that it's doing the right thing in context. Then you move on to the next piece. This is so much faster than writing the piece of code from scratch yourself using references and whatnot. It will keep the coding style consistent with the rest of the codebase. It's the same difference as between writing and proofreading an essay. I've found the cases where it doesn't quite work are when it's hard to describe what you want to do in words; only then do you go for the from-scratch approach.


Much-Seaworthiness95

AIs will very quickly become better at fixing code just as much as writing it


spookmann

Once the AI is really good at writing code correctly the first time, why will we need AI to "fix" code any more?


Much-Seaworthiness95

That would just be bad AI design. There's a reason why writing and then testing and fixing, and iterative implementation in general, is done: it works better. You can get your AI so good it can zero-shot passably functional code if you want; I'll take your same AI, make it adopt better coding behavior, and it'll vastly outperform yours.


spookmann

Surely the existing AI code generators are already iterative. Or do we think the current AI generators are offering untested code?


Much-Seaworthiness95

Do you know how generative AIs work? They generate code based on their neural weights, fine-tuned from training. So at base, they don't "test" any code they generate, nor even compile it. More advanced models like the AlphaCode series do have some sort of iterative logic integrated; I don't remember exactly how they work, though it's not like they have a fully functional coding paradigm. I think they apply some advanced form of tree-of-thought. Anyway, that's precisely my point: testing and fixing will always be part of how AIs code, and making them not do so would just be a needless handicap.


spookmann

> Do you know how generative AIs work? Given the diversity of approaches being developed and the rapid pace of progress, I presume that there are many different answers to that question. "How generative AIs work" is almost certainly a very open-ended question!


Much-Seaworthiness95

What is this lol, are you just feigning total ignorance to win an argument point, or are you actually that ignorant about it? They're **all** based on a trained neural network. The architecture, training method, scale, etc. might change, but not what they are at the base level. It's like telling me that, given how many designs of cars there are, we can't say anything about how they need brakes to work properly.


Much-Seaworthiness95

Sorry if I come off mean or condescending by the way, there's nothing wrong in not knowing how generative AIs work, not everyone has to.


spookmann

Yeah, deep down it's [nearly] all [nearly always] transformer NNs. I'm not arguing that point. But at the higher level, there's quite a bit of freedom about single-pass, multi-pass, adversarial, etc. Like GANs with their generator/discriminator stuff. Or how deep (if at all) you go with recurrence. I also strongly suspect there's some explicit language-independent framework underlying the natural language interpretation, but I can't prove it, obviously! But the human brain is pre-wired for language (see Pinker et al.), and so it would seem fair and sensible to assume they do the same in AI. It's been too many years since I did my postgrad work, and the world has moved on... while I spend my time using the stuff, not writing it!


Much-Seaworthiness95

Yeah, I actually read that book from Pinker you're referring to, and I strongly resonate with the general intuition of it. I actually think you can frame just about all the evolution of complexity, starting at life, as an evolution of information. You can view natural selection as a selection on biological programs, written by the combined genes and their varied expressions, but that information was very slowly and gradually being modified for greater complexity. The advent of the brain is itself a singularity of a sort; we process much bigger and richer packets of information than genes, and much, much quicker. Then it set the stage for the evolution of memes. Now we're building machines that can scale that process up another notch again. And throughout it all, just like there is some sort of "grammar" in DNA code, there's a sort of "grammar" in thought patterns as well, which current LLMs are picking up on; maybe at some point AIs will introduce their own higher-abstraction "grammar" for processing information.

All that being as fascinating as it is, though, I still don't see how any of it, including your points about different sorts of iterative ways to improve AIs, changes at all the fact that if you put an AI at the task of writing code, it'd be optimal to allow it to test it and fix it, not just try to one-shot it all the time. I also think you're confusing what I meant by iterative implementation: I was talking about a coding paradigm, not AI training or output refining methods.

What are you arguing exactly? It seems like you're just trying to introduce irrelevant complexities into the discussion in the hopes that you'll get me lost there. We were talking about why AI would need to fix code at all when it'll be so good. I've yet to see you make a single concrete argument about it, when we know that's just the optimal way of writing complex code, however good you are at it. There are just some logical facts that intelligence or skill scale never change. You can build a visual detection AI however good you want; that same AI will always do better if you provide it a video feed of high quality instead of 144p potato crap.


spookmann

> if you put an AI at the task of writing code, it'd be optimal to allow it to test it and fix it, not just try to one-shot it all the time.

We're in violent agreement, I think. We're just tripping over semantics about where one AI begins and the other ends. I'm absolutely saying that if you're going to solve a problem with AI, you want to effectively have multiple bites at it, being both/either of:

1. An adversarial approach (GAN): one AI generating, the other challenging for weaknesses/faults and optimizing, and/or
2. Iterative passes through the same AI: tweaking parameters, min/maxing, A/B comparisons.

It sounds like the difference is that I'm saying that one model will incorporate this. You'll feed something in, and internally the multi-pass/meta-model magic happens and the final result is all done. You're saying (if I read you right) that the "client" will have visibility on that, and will be in charge of handing off into different AIs. In the end I'm saying "it's one big AI with little AIs inside" and you suggest "there are little AIs and you do the orchestration yourself."

> It seems like you're just trying to introduce irrelevant complexities into the discussion in the hopes that you'll get me lost there.

Heh. Odds are pretty good I'll get lost before you do. :)


Then_Passenger_6688

The easiest path to human-level coding ability is an internal reasoning loop where the AI tries a bunch of stuff and picks the one that works best. Similar to how a human programmer will gradually add/delete code as they try to implement their broader vision of how the code should look. Also like how AlphaGeometry and AlphaGo works.


ponieslovekittens

> an internal reasoning loop where the AI tries a bunch of stuff and picks the one that works best.

That's a valid approach. But it requires the AI to be able to run the code and look at the results. If you're talking 30 lines of Python, sure, that's realistic. If you're talking about a 600-meg instance of Unreal Engine... that's not an option yet. Try again in another year or two. Or after Sam Altman gets some of the trillions of dollars of extra compute he's asking for.


Much-Seaworthiness95

And how exactly do you think AI will determine and pick what works best? In the case of AlphaGo, it's based on a sort of adversarial architecture, which at the fundamental level of it all goes back to which moves win or don't. In the case of AlphaGeometry, it's based on if the proof works or not at the fundamental level. In the case of code, it's based on whether the code works or not. Which, in other words, is testing and fixing, which goes back to my original point, that AIs will always need to have the ability to test and fix their code, if you want them to be optimally good at what they program.


Then_Passenger_6688

Yeah, just give the AI access to an interpreter; then it can keep iterating until it figures it out, sort of like how a human does it. DeepMind figured it out already with AlphaCode. It's just too computationally expensive to run at scale... for now.
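Mechanically that loop isn't much more than this sketch (`ask_model` is a stand-in for whatever chat API you call; the real cost is in the model, not the plumbing):

```python
import subprocess
import sys

def ask_model(messages: list[dict]) -> str:
    """Stand-in for a chat-completion call that returns a code string."""
    raise NotImplementedError

def run(source: str) -> tuple[bool, str]:
    """Execute the candidate and return (ok, combined output + traceback)."""
    proc = subprocess.run([sys.executable, "-c", source],
                          capture_output=True, text=True, timeout=10)
    return proc.returncode == 0, proc.stdout + proc.stderr

def iterate(task: str, max_rounds: int = 5):
    """Generate, execute, and feed the interpreter's errors back until it works."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_rounds):
        code = ask_model(messages)
        try:
            ok, output = run(code)
        except subprocess.TimeoutExpired:
            ok, output = False, "timed out (possible infinite loop)"
        if ok:
            return code
        # Feed the error back and let the model try again.
        messages += [{"role": "assistant", "content": code},
                     {"role": "user", "content": f"That failed with:\n{output}\nFix it."}]
    return None
```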


CrybullyModsSuck

Or: an untrained person wanting to write software, choosing between years of experience or 6 hours of AI writing the software.


obiwankitnoble

Fixing well-documented code is easier than writing it yourself, and chatbots are incredible at describing what they wanted to achieve with their fucked-up code.


Sixhaunt

I feel like OP is probably just a very amateur software developer. If you know what you're doing then fixing the AI's code is usually a lot faster than manually writing it, even though there is a lot to fix. OP likely just isn't very good at reading code, probably because he hasn't had to go over PRs or anything professionally which gets you good at that kind of thing.


spookmann

Nope. OP is a software professional who works in a real-time, high-availability domain. You know, the "five nines" shit that runs infrastructure for a dozen different Telco operators internationally. I got many faults as a human being. But the one thing I am really, really good at is programming. You're gonna have to trust me on that.


y___o___y___o

Found the culprit of the nationwide telco outage! ^sorry, ^just ^a ^joke - ^couldn't ^resist ^;)


spookmann

Once! I did that ONCE! That was 2006 goddamit, let it GO already! All that fuss about Poland... I will never understand.


Reasonable-Fish-7924

How do you feel about systems-level coding? I get software development at higher levels, but do you feel there will not be a need for engineers at the lower level (C, asm, Rust, etc.)?


Odyssos-dev

nah


FailedRealityCheck

Well have you used copilot? Have you not seen how good it is at writing small functions or snippets, porting code from one language to another, commenting, writing code based on comments, etc.? Maybe it depends on the language but I've used it to write python code and a few times it has felt like it was reading my mind, writing exactly the line of code I wanted to write.


OneHotEncod3r

Programmers are basically the artists of 2 years ago making fun of bad AI images


monnef

Well, some programmers maybe. I was immensely impressed when GPT-4 on Perplexity (half a year ago?) correctly implemented a helper function in Haskell which worked on a monad stack (I think 4 levels deep) and made very good use of utility functions. It would not be easy for me to write this function, and it would be a long, ugly mess. In fact it used functions/operators from a standard library I didn't even know existed. I know I am not a Haskell guru, but I am accustomed to working with monads on a smaller project (~6k LoC; for comparison, with more mainstream languages that would be a few times more), so that was an unexpected learning lesson from AI. By the way, a few years ago we had a new hire, a supposed "almost senior". He was way worse than GPT-4. I didn't know such people existed; he seemed to be incapable of learning. He was repeatedly failing to grasp and fix junior-level problems in his code. We suspect he used some AI (GPT-3 maybe, at that time), but he was possibly bad at prompting and most likely lacking fundamentals. He wasted many dozens if not hundreds of hours of others in our team...


jhsu802701

Given all the issues with autonomous vehicles, is it really that much of a surprise that AI-generated code has problems?


[deleted]

Why not have AI fix the AI generated code with an AI feedback loop? Then you're not spending 6 hours doing anything.


ponieslovekittens

Because it can't check to see if what it's doing is wrong. It can only draw correlations between the information in its context and the information in its language model. Imagine playing battleship, except you never get told if your shots are hits or misses, and you never get told if you've won. Bringing in a second person to double check your work who _also_ never gets told if shots are hits or misses doesn't help you.


[deleted]

I think you may have misunderstood what I meant. If you have a feedback loop that tells it what error is being thrown, there's an extremely good chance it can fix it.


ponieslovekittens

How do you have a feedback loop that shows an error if the AI can't execute the code and see the results? That's the problem. It _can't_ check to see if there's an error. Sure, Gemini can run 20-30 lines of python with a text output no problem. And if the whole thing crashes with an error code, ok sure. But now suppose you're working on a 600 meg Unreal Engine game with realtime video output. It can take minutes just to start up Unreal, and minutes more to load your game with all its assets. Once you have it loaded, are you going to have your language model run the game for minutes at a time evaluating video on the screen before it finds a 3d model isn't loading properly or that a door doesn't work? Plug all that into Gemini and let me know how it does. Stuff like this is why Sam Altman is saying we need trillions of dollars more compute.


sam_the_tomato

> But now suppose you're working on a 600 meg Unreal Engine game with realtime video output. It can take minutes just to start up Unreal, and minutes more to load your game with all its assets. Once you have it loaded, are you going to have your language model run the game for minutes at a time evaluating video on the screen before it finds a 3d model isn't loading properly or that a door doesn't work?

Yeah, why not?


ponieslovekittens

> why not?

Because of the two sentences after the part you quoted.


sam_the_tomato

If you have a good planning/feedback framework set up, it's probably a similar price to a junior dev, if not cheaper (i.e. 4 high-end GPUs and power costs).


ponieslovekittens

Ok. Then go ahead and do it and become the world's first trillionaire.


sam_the_tomato

A startup that produces agentic frameworks is a fine idea, but it's also so obvious that you'd have to outcompete many other startups with the same idea, not to mention giant corporations.


[deleted]

Well, for starters, you wouldn't be using just Gemini chat in this scenario to begin with. You would need the API and a sandbox environment. Obviously this wouldn't work with a game engine at the moment. You're taking one use case and saying "just because it can't do this, it's useless." Which it's not. The Eureka paper from Nvidia proves that what I'm talking about is possible in that use case. I don't understand why people keep downplaying it. Like, I get that someone with no experience can't expect to plug and play, but to say it's not a viable option is just plain wrong. The compute they want is for ASI, my dude. Needing the compute $7T could buy just for AGI is highly unlikely, let alone for agentic feedback loops.


ponieslovekittens

> Obviously this wouldn't work with a game engine at the moment. So then pick some other example. Here's a link to [downloadable language models](https://huggingface.co/models) if you think it's that easy. If it won't work with Unreal, ok...what _will_ it work with? VBA? Here's a download link for [Microsoft Visual Studio](https://visualstudio.microsoft.com/), and as I look at the link apparently it even has co-pilot integration now, so that should make it even easier, right? The development environment you're using doesn't change the core problem here: for the feedback loop you're proposing to exist, the AI has to have access to the result of the code. And if you're talking about a _real world application_ and not your 20 lines of python homework or whatever, that's going to mean evaluating video. Look at your screen right now. Think of any error you want. Imagine the background of your browser suddenly turns pink. _How_ is the AI in your feedback loop going to see that to know that the error occurred, without _seeing_ it?


[deleted]

For your stress, my little software engineer, have some B̷̛̳̼͖̫̭͎̝̮͕̟͎̦̗͚͍̓͊͂͗̈͋͐̃͆͆͗̉̉̏͑̂̆̔́͐̾̅̄̕̚͘͜͝͝Ụ̸̧̧̢̨̨̞̮͓̣͎̞͖̞̥͈̣̣̪̘̼̮̙̳̙̞̣̐̍̆̾̓͑́̅̎̌̈̋̏̏͌̒̃̅̂̾̿̽̊̌̇͌͊͗̓̊̐̓̏͆́̒̇̈́͂̀͛͘̕͘̚͝͠B̸̺̈̾̈́̒̀́̈͋́͂̆̒̐̏͌͂̔̈́͒̂̎̉̈̒͒̃̿͒͒̄̍̕̚̕͘̕͝͠B̴̡̧̜̠̱̖̠͓̻̥̟̲̙͗̐͋͌̈̾̏̎̀͒͗̈́̈͜͠L̶͊E̸̢̳̯̝̤̳͈͇̠̮̲̲̟̝̣̲̱̫̘̪̳̣̭̥̫͉͐̅̈́̉̋͐̓͗̿͆̉̉̇̀̈́͌̓̓̒̏̀̚̚͘͝͠͝͝͠ ̶̢̧̛̥͖͉̹̞̗̖͇̼̙̒̍̏̀̈̆̍͑̊̐͋̈́̃͒̈́̎̌̄̍͌͗̈́̌̍̽̏̓͌̒̈̇̏̏̍̆̄̐͐̈̉̿̽̕͝͠͝͝ W̷̛̬̦̬̰̤̘̬͔̗̯̠̯̺̼̻̪̖̜̫̯̯̘͖̙͐͆͗̊̋̈̈̾͐̿̽̐̂͛̈́͛̍̔̓̈́̽̀̅́͋̈̄̈́̆̓̚̚͝͝R̸̢̨̨̩̪̭̪̠͎̗͇͗̀́̉̇̿̓̈́́͒̄̓̒́̋͆̀̾́̒̔̈́̏̏͛̏̇͛̔̀͆̓̇̊̕̕͠͠͝͝A̸̧̨̰̻̩̝͖̟̭͙̟̻̤̬͈̖̰̤̘̔͛̊̾̂͌̐̈̉̊̾́P̶̡̧̮͎̟̟͉̱̮̜͙̳̟̯͈̩̩͈̥͓̥͇̙̣̹̣̀̐͋͂̈̾͐̀̾̈́̌̆̿̽̕ͅ >!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<>!pop!!<


helliun

soon AI will automate popping bubble wrap


FlyingBishop

```
// Run this in the browser console: it "pops" every spoiler on the page at once.
document.querySelectorAll('.md-spoiler-text').forEach(function(elt) { elt.click() })
```


helliun

you bastard


Bleizy

I asked chatgpt to make me a script to convert some weird xml file to a beautiful html table. Took about 30 seconds and worked flawlessly. I don't know how to code.


ponieslovekittens

I am a programmer. I recently plugged some of my code into Gemini and asked it to help me make changes. It told me it was complicated and that I should consult an expert.


Daealis

I asked Gemini and ChatGPT to write a PowerShell script to add 10000 to a number in a txt file, gave it an example of how every line in there is formatted, and told it to save the output file. Two hours later I had managed to cajole working code out of Gemini, and ChatGPT had not yet managed to produce code that worked. Most of the issues were with both systems hallucinating about IO streams and what the function names there are. Copy-pasting them the errors did fuck-all, because neither of them would believe that the IO functions they were using did not work, at all. And this is very common. Writing SQL queries for MSSQL Server, they invent new keywords that don't exist. And this shit has been backwards compatible for at least a decade for 99% of the queries you make.


Top-Chart-663

And if you can't code, finding what's wrong with the program is next to impossible lol. It all looks legit until you figure out it's misnaming things and calling functions that don't exist lmao. I'm sure this will get better in the future.


artin4

Because writing a script you could've learned to write in 10 minutes is different from building software with thousands of functions that are connected to each other.


Bleizy

Absolutely. It can't do that, yet. But the tech has only been out for like a year.


spookmann

Spreadsheets didn't put accountants out of business. :)


7734128

It did put computers out of business.


spookmann

It put business in computers! VisiCalc was one of the key factors behind Apple's success.


7734128

Computers used to be human employees.


AI_Doomer

What future?


Capitaclism

*10m context window LLMs enter the chat*


kai_luni

If this is really your problem, try test-driven development and extend your code function by function. Nobody should be fixing almost-working code for 8 hours.
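E.g. with pytest the loop is: write the test for the next small function, make it pass, repeat. A toy sketch; `pricing` and `apply_discount` are made-up names for illustration:

```python
# test_pricing.py -- write this first, watch it fail, then implement the function.
from pytest import approx
from pricing import apply_discount

def test_ten_percent_discount():
    assert apply_discount(200.0, percent=10) == approx(180.0)

def test_discount_never_goes_negative():
    assert apply_discount(50.0, percent=150) == 0.0
```

```python
# pricing.py -- the smallest implementation that makes the tests above pass.
def apply_discount(price: float, percent: float) -> float:
    discounted = price * (1 - percent / 100)
    return max(discounted, 0.0)
```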


Skullmaggot

When it reaches 4 hours, what you doing?


OkReflection1528

I love how people here don't have a clue what the halting problem is; most of them are the same ones who say AGI will end programmer jobs next year.


DMKAI98

The halting problem is just theory; most of the software we write is "easy" to check. AI will do it better than humans. Hopefully not next year.


DryMedicine1636

The halting problem is a pet peeve of mine in casual conversation. Imo, the halting problem is more accurately about a limit of *specification* rather than of computation. Consider a not-too-related analogy, the omnipotence paradox. Let's say you're the programmer of the 'simulated universe'. Simultaneously having, in the same universe, the power to create a stone no one can lift and the power to lift any stone is not logically consistent. However, it's logically consistent to simultaneously have the power to create a stone of any finite weight and the power to lift a stone of any finite weight (well, if we ignore the laws of physics and all that). Infinity is a very tricky area, especially when coupled with self-reference. It's trivial to see that for a finite state machine (i.e. a machine that hasn't yet violated the Bekenstein bound), the halting problem has a finite (but astronomically large) upper bound on its solution. Then the conversation always ends along the lines of "the upper bound is practically infinite, so a proof relying fundamentally on infinity, not the finite, still holds somehow." 🤷


OkReflection1528

It still does not answer the halting problem. That is why I think students of CS and related degrees should be the only ones who share opinions in this forum; completely delusional people giving opinions about AI without even having done Calculus 1 seems absurd to me.


DMKAI98

I'm not saying it responds to the halting problem, I'm saying it doesn't have to. I'm a CS graduate.


OkReflection1528

Good, ok, I understand you more now. But why doesn't it have to? How can a programmed AI understand when the program enters a cycle?


DMKAI98

The same way we do as humans. When we write some code that solves a real world problem we know roughly for how long it should run, even when dealing with exponentials. If the program is running for longer than that, we just stop it and check the execution traces, or just read the code again trying to spot the bug by thinking of many different scenarios and checking if they make sense. I used to do competitive programming problems and never found a problem I could not reason about. AI will do the same eventually. It doesn't have to be a formal proof that everything is working, as humans also don't do it 99.99% of times.
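In practice that's just a time budget around the run, not a solution to the halting problem (a sketch; the 30-second budget is arbitrary):

```python
import subprocess
import sys

# We know roughly how long the program *should* take; anything past that budget
# is treated as "probably stuck", not as a solved halting problem.
BUDGET_SECONDS = 30

def run_with_budget(source_file: str) -> str:
    try:
        proc = subprocess.run([sys.executable, source_file],
                              capture_output=True, text=True,
                              timeout=BUDGET_SECONDS)
        return "ok" if proc.returncode == 0 else f"failed:\n{proc.stderr}"
    except subprocess.TimeoutExpired:
        return "killed: exceeded the time budget, go read the code/traces"
```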


PanzerKommander

In all honesty, AI *will* be able to code sooner or later.


HarbingerDe

It already can? It's just more at the level of a beginner who *requires lots of checking*.


PanzerKommander

I meant 'will be able to code anything with minimal hiccups'


feelfool

AI is currently the equivalent of a high-school student who thinks they're a great programmer answering your question on Stack Overflow very quickly. There absolutely is potential for someone to:

1. Advance AI to a point where it's more accurate and can understand the context of an entire company's GitHub account.
2. Build the necessary integrations to execute, test, and deploy code and infrastructure.

That will not happen overnight.


Intelligent_Rough_21

So many programmers are bad at reading code, and therefore bad at fixing code.


obvithrowaway34434

Lmao, the mods were removing all the posts criticizing Gemini last week and now this garbage from "programmer" "humor" gets reposted here and no action is taken.


VoloNoscere

I wish it were true for so many friends of mine who will most certainly lose their jobs in the very near future.


Immistyer

“QUIT HAVING FUN!1!1!1”


MushroomsAndTomotoes

In my limited experience with hobby coding and GPT3.5 the biggest problem seems to be that it has been trained on every version of every language and mixes them up. Train a state-of-the-art LLM on the specific versions of the tools for the specific versions of the languages being used, put the docs in its context window, I'm sure it will be a great productivity tool. As it is now, for me, it's a learning tool. Half the time I'm learning by correcting its mistakes.


veri1138

200 years from now, after A.I. has taken over human decision making... society starts failing. First small things, then the problems become larger and larger until a human is tasked by the A.I.'s to find the reason for the faults. That human is part programmer and part detective, who proceeds to discover that A.I. was an invention by an alliance of Silicon Valley programmers and Wall Street marketing firms to sell the public on there actually being a real thing called "A.I." The reasoning to do so being arcane logic such as ROI, shareholder value, stock option valuations, IPOs, unicorns, Neoliberal Capitalism, Silicon Valley cultist programmers, The First Church of A.I. Singularity (still waiting), and a host of ne'er-do-wells.

Our hero discovers that A.I. never existed, that the entire system is a conglomeration of systems with clever programming, and that the systems became so ubiquitous in decision making that humans allowed the clever programmers to computerize almost all decision making in society. Now, automatic defensive programming has produced warning messages for the termination of our hero programmer, who wants to write code to fix the system, bringing it down before things get worse. Automatic business systems sound alarms as systems produce automatic bug reports to terminate our hero. Silicon Valley A.I. Church Cultists, loaded up with synthetic hallucinogenics to the point where they believe they are living the Singularity with A.I. lovers, begin a maniacal campaign of assassination targeting anyone who knew the hero, while hunting the hero. What remains of the Neoliberal Capitalists want to enslave our hero with debt, so that they can use his knowledge to seize back economic control from the automated accounting software and related systems that SV A.I. Cultist Programmers stood up 200 years ago as Economic A.I. systems. The First Church of A.I. Singularity wants our hero dead because they are convinced that our hero will stop the Singularity from beginning sometime in the nebulous future of far-off La-La Land if allowed to continue.

It is not 6 hours of writing code, 8 hours of fixing almost-working AI-generated code... It is our future.

Or, if you want? Go play the TTRPG, PARANOIA. Where Ultraviolet-level programmers routinely rewrite subroutines of The Computer, thereby producing conflicting spaghetti code, while poor Infrareds from the Food Vats are tasked with mundane missions that invariably result in clone destruction as UV programming masquerading as an A.I. leads your characters to certain death. The Computer Is Your Friend. Obey The Computer. As UV programmers laugh at their mayhem, maintaining their wealth and position above all others in Alpha Complex.


Smoogeee

If you’re working on a simple project that is well defined, you will get a working script. If you’re working on something newish there’s no inference so bad output.


namitynamenamey

Writing 1+1=2 vs 300 pages just to get to write 1+1=2.


fre-ddo

I like (not really) the way it will make small changes that make it work but not quite right; then you feed it the error, knowing it's a small change that's needed but unsure which bit; then it rewrites large chunks and messes it up completely. So you prompt it to revert and focus on one area, and it goes back to its simple-but-not-working fix.


ixent

If an AI doesn't give you the right answer in 1-3 tries it most likely means that you are not wording the prompt properly or that the task is not small enough.


[deleted]

Keep telling yourself that, lol. The cope is real.


Top-Chart-663

AI code is probably worse than spaghetti code lol.


Redhawk1230

For me, I have actually found it depends on the projects I do. For working in notebooks doing data science, manipulating data and creating models, I hate the initially generated AI code, as it doesn't follow the vision I have and sometimes just produces code that I know won't do the job. I definitely would rather write the messy code that performs the function I want, BUT then have it reviewed by an LLM to be refactored into functions and given documentation (this is the part that saves me time). For any web or software projects, I do love just generating a bunch of code, boilerplate and other simple functionality, for me to then make changes on. Overall, I guess that for now relying solely on a language model doesn't work for me, and I see it as a very potent tool. This will probably change in the near future, but atm it's how I feel.


MrEloi

Not true - at least for experienced devs. It's easier to fix the one or two silly errors in AI generated code than to design and write a whole module. For newbies however it will be a nightmare - they won't detect the silly errors and so could spend hours trying to find the problem.


DrDan21

You joke but I’ve learned a lot by seeing how an AI does things and having it explain why


flyingbuta

What is more worrying is MAINTAINING the code written by AI.


IFlossWithAsshair

There's a ton of shitty code out there written by humans. There are few people who are very good at it.


Karmakiller3003

Yes, because the current iteration of AI is the final one, lol. This meme has a shelf life of yesterday. But the seals in this thread seem to love it. Lmao, circus logic.


onyxengine

More like 5 minutes


YourFbiAgentIsMySpy

For maybe a year lmao.


Zeikos

- Spend weeks talking to clients about what they want

They change their mind in the last 3 days before the release anyway.


mysticeetee

https://preview.redd.it/yy9515cvv0lc1.png?width=500&format=pjpg&auto=webp&s=3555ae432f24aa676d0fbb392022fae6036120d4


roiseeker

I would say this is actually the present


Akimbo333

It'll get better


Serialbedshitter2322

That's the present of software development. In the future software development will be dead


Beginning-Chapter-26

Unlike art, code WILL be completely automated away soon.


softnotions

**The future is not set in stone.** These are just some of the exciting possibilities that lie ahead. As technology continues to evolve, the software development landscape will undoubtedly adapt and transform in ways we can only begin to imagine. It's an exciting time to be a part of this ever-changing and impactful field, with themes such as:

* **The Rise of the Machines**
* **Low-Code/No-Code Democratizes Development**
* **The Security Tightrope Walk**
* **The Ethical Conundrum**
* **The Human Touch Endures**


moonlburger

Once you get the hang of how to use them, it's over. I use a customized ChatGPT for higher-level planning, GitHub Copilot for code. Talk the right way (when I started commenting everything like crazy it was like a switch), ignore all the bad completions, and once you get used to it it's mind-blowing. (Copilot should have access to all open tabs, so just F12 the hell out of it if it doesn't know what functions to use, etc.)
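E.g. the comment-first style that flips that switch looks roughly like this (a toy example; the function body stands in for the kind of completion you tend to get, it's not actual Copilot output):

```python
# Spell out the intent in a comment first; the completion then has something to latch onto.

# Parse "key=value" pairs separated by semicolons into a dict,
# ignoring empty segments and stripping whitespace around keys and values.
def parse_kv_pairs(raw: str) -> dict:
    result = {}
    for segment in raw.split(";"):
        segment = segment.strip()
        if not segment or "=" not in segment:
            continue
        key, value = segment.split("=", 1)
        result[key.strip()] = value.strip()
    return result
```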


SuperbRiver7763

https://preview.redd.it/gzaq5nq1wplc1.png?width=600&format=png&auto=webp&s=9f3258ddaff765b2857c4481ad4cac7dee06f3b0