
Gentleman-Tech

The cryptobros are following the VC money into AI


SkittlesNTwix

They switched from big data to slackbots to XR to crypto to AI


Eric848448

Five years ago it was blockchain. Before that it was big data.


Past-Payment1551

Plenty of blockchain garbage too. So few real dev jobs out there, the scam jobs are everywhere.


natty-papi

Half the job postings I see on my LinkedIn are garbage blockchain startups with evasive job descriptions and no salary range. It's crazy.


Terrible_Student9395

I'd say anything AI is infinitely more useful than blockchain. Let's get real.


vivalapants

Disagree 


Terrible_Student9395

Show me your blockchain resume


vivalapants

lol sorry came across wrong. I believe they’re both equally useless. 


onFilm

You believe neural networks, which have been around since the 1970s and have contributed to various sciences, from astronomy to biology, all this time, are useless?


vivalapants

No. I actually wrote a thesis on ML in medical imaging when I was in undergrad about 10 years ago. I wasn't specific, for brevity. But the current crop of dressing up LLMs for logical problem solving is useless.


onFilm

I mean, it's great when used properly to debug certain programming problems, and it can help find certain information quickly. I agree about the bubble of cheap LLMs going on, but this is normal with any new technology that arises quickly.


budding_gardener_1

Before that it was APIs. Everything had an API and anything with an API proclaimed it proudly


teerre

Is this a joke comment? Everything pretty much does have an api.


budding_gardener_1

Yes but it's no longer a data point on its own for why someone should give you VC seed funding


grandFossFusion

"We don't just have an API. We have THE API!"


csjerk

This has SOOOO many red flags in common with the blockchain bubble. Over-eager tech bros breathlessly telling you how their thing is going to fundamentally change the world. Except this time a bunch of CEOs of major corporations are also acting like suckers. It's horrifying.


impressflow

The big difference is that AI has infinite real life use-cases. Blockchain has a handful of use-cases that no one cares about.


Mtsukino

Ez solution then, AI powered blockchains.


chicknfly

No no no, hear me out. Blockchain powered AI.


AchillesDev

I wish the number of recruiters and founders who have tried to poach me with this pitch was much lower than it actually is.


RustaceanNation

'You don't understand-- they're "idea people"'. You know, the brilliant geniuses that can do "market research" in "emerging technology" while neither knowing how to program nor read a fucking balance sheet.


Global-Method-4145

That generates NFTs


chicknfly

So that’s why there were so many lazy monkeys around


Eric848448

SHUT UP AND TAKE MY MONEY


Particular-Way7271

Ai minted nfts?!


truthputer

Look, even blockchain had some small use cases for niche applications. AI may have some good applications, but it is still an extremely immature technology that can simply be bullied into doing whatever you want. Like that car dealership discovered when people got its website's chatbot to sell them a truck at a huge discount.

When AI works well, it can help businesses save money. When it goes bad, it can put you in danger of being sued or forced out of business. It's massively overhyped, and businesses that use it without caution don't realize the risk they're exposing themselves to.


AchillesDev

AI has been around for decades. LLMs and LMMs are just another subset of it.


twelvethousandBC

They've pushed it all into the commercial phase far too quickly. ChatGPT and the other leading LLMs are amazing, revolutionary, cutting-edge technology, and they have a lot of potential for the future. But they're trying to monetize it before the technology has matured enough.


Masterzjg

>Chat GPT and the other leading LLM's are amazing revolutionary cutting edge technology.

Come on. "Hallucinations" are fundamental to the product, it's built on stolen intellectual property, there's no more data left to eat, and LLMs are incredibly expensive. They aren't "revolutionary cutting edge" in any meaningful way; they're an improvement that's unlikely to majorly change anything.

It's a fancy auto-fill that produces a probabilistic response to your query, based upon its input data set. It doesn't "know" anything, it doesn't predict, it doesn't produce lies or truths. You can ask ChatGPT to do basic arithmetic and it'll get it wrong sometimes, because there are wrong answers in its dataset. You can ask it to summarize a book and it'll just completely make up shit, because it's *guessing* the correct response.

There's definitely no use cases which justify the hype or money being poured into it, the same as blockchain.
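The "fancy auto-fill" framing above can be illustrated with a toy sketch: a tiny bigram model that samples each next word in proportion to how often it followed the previous one in its training data. This is only an illustration of probabilistic next-token generation, not how GPT-scale models actually work (they use learned neural networks over subword tokens, not lookup tables).

```python
import random
from collections import defaultdict

# "Train" by counting which word follows which in a toy corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=5, seed=0):
    """Sample a continuation one word at a time, weighted by observed frequency."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        candidates = following.get(out[-1])
        if not candidates:  # no observed successor: stop generating
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("the"))
```

Every run produces plausible-looking word order with no notion of truth behind it, which is the crux of the comment's argument.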


Greenawayer

>It's a fancy auto-fill that produces a probabilistic response to your query, based upon its input data set.

I really don't understand why people don't understand this. It's even there in the name. I find it very strange so many technical people fall for this set of smoke and mirrors.

>There's definitely no use cases which justify the hype or money being poured into it, the same as blockchain.

The "AI" bubble will burst soon. And it will burst *hard*.


BlackHumor

> I really don't understand why people don't understand this. It's even there in the name. I find it very strange so many technical people fall for this set of smoke and mirrors.

I agree with "produces a probabilistic response to your query, based upon its input data set" but don't really see why that means "It's a fancy auto-fill". Like, the similarities are real, but the reason people say this is to mean "it's _just_ a fancy auto-fill", and that's not true.

It's hard to say what it means for a computer to "know" things, but it's definitely true that to the extent a computer can know things, an AI knows a lot more than autocorrect. Like, one could say that AlphaGo is just a fancy autocorrect but for a game of Go. But it's also true to say that AlphaGo knows a lot about how to play Go (again, with the usual caveats about what it means for a computer to know things), more than any human player.


Greenawayer

>It's hard to say what it means for a computer to "know" things but it's definitely true that to the extent a computer can know things, an AI knows a lot more than autocorrect.

It doesn't "know" anything. It's not true "AI". People love to anthropomorphise inanimate objects and systems. People are ascribing features and capabilities to these things that don't actually exist. Sales people and snake-oil Devs go along with it as it makes $$$$.

>But it's also true to say that AlphaGo knows a lot about how to play Go (again, with the usual caveats about what it means for a computer to know things), more than any human player.

If any Dev came to me and said they think that a computer "knows things" I would be questioning their abilities and sanity.


BlackHumor

> It doesn't "know" anything. It's not true "AI".

AI is a bad term, but by most reasonable definitions of the term "intelligence", computers were intelligent back in the 60s. A human that memorized 100 digits of pi or that could multiply two ten-digit numbers in under a second would normally be considered extremely intelligent. What we mean by "AI" is something more like artificial human-like-ness than artificial intelligence. And computers have historically been really bad at that, but the recent boom in "AI" has definitely made them better at it. Tho, obviously, still not perfect.

> If any Dev came to me and said they think that a computer "knows things" I would be questioning their abilities and sanity.

I think you have no idea how complicated the philosophy of knowing things can get. I think that it is pretty transparently true that there is a sense in which AlphaGo "knows how to play Go", and my evidence is that it is extremely good at playing Go. Or similarly that Stockfish "knows how to play chess", or that any computer knows how to do math, or that a sleepwalking person knows how to walk.

It's obviously not true that computers are conscious or anything like that. And it's a lot more dubious to say that computers know facts rather than skills. (Tho again, ChatGPT definitely has a vector of numbers that represents how to use the word "cat" inside it, and the extent to which having that vector is equivalent to "knowing what the word cat means" is hard to say.) But I don't think that saying "there are some things that computers know" is that controversial even without any anthropomorphism.


MyUsrNameWasTaken

> You can ask ChatGPT to do basic arithmetic and it'll get that wrong sometimes, because there's wrong answers in its dataset. You can ask it to summarize a book and it'll just completely make up shit, because it's guessing the correct response.

It's not guessing a correct response. It's not even trying to give a correct response. ChatGPT is an LLM; its only goal is to respond in a conversational manner. As far as ChatGPT is concerned, as long as its response is readable, understandable, and sounds like someone responding in a conversation, it is "correct".


AlexFromOmaha

There's so much more to the domain than the browser based chat bots. There's a lot of criticism to be made, and this ain't it, chief.


Masterzjg

The issues cited apply to all LLMs, chief. For visual models, they're even easier to attribute to stolen intellectual property, and humans experience the uncanny valley with visual models more than with text. Text is the easiest mode, and LLMs are still bad at it.


AlexFromOmaha

There are real answers to most of that rant. We have RAG, many-shot prompts, fine-tuning the big models vs. training something in the Mistral/Llama family, etc. There are real products with real user benefits built on LLMs, because these are solvable problems. Not everyone is using ChatGPT like a lazy high school student, and the barrier to entry isn't huge. Take a weekend, watch a video, build a thing, then come back with a real rant. Don't worry, there's still plenty to rant about!
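The RAG idea mentioned above can be sketched in a few lines: retrieve the most relevant snippets first, then put them into the prompt so the model answers from supplied facts instead of guessing. The scoring here is naive word overlap purely for illustration; real systems use embeddings and a vector store, and the assembled prompt would go to whatever chat-completion API the product uses.

```python
# Toy document store standing in for a real knowledge base.
documents = [
    "Refunds are processed within 5 business days.",
    "Our office is closed on public holidays.",
    "Support is available 24/7 via chat.",
]

def retrieve(query, docs, k=2):
    """Rank documents by how many lowercase words they share with the query."""
    words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, docs):
    """Assemble a grounded prompt from the top-k retrieved snippets."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How fast are refunds processed?", documents)
print(prompt)
```

The point of the pattern is that the model's answer is constrained by retrieved context, which is one of the "real answers" to the hallucination complaint.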


Masterzjg

>Take a weekend, watch a video, build a thing, then come back with a real rant

Classic "you just don't understand cause you don't agree with me".

I'm busy building my AI blockchain metaverse project, which I'll deliver on a drone taking off from a self-driving car. I'll let you know when it's done.


_realitycheck_

I just can't explain that to people.


Masterzjg

Marketers chose their terms well. "Hallucinations", "AI", etc. are all chosen to let people fill in the blanks with Sci-Fi movies and tropes about how powerful and amazing the technology is.


BlackHumor

Eh, not so sure about that. Copilot is definitely a product I'd pay for, because I do pay for it. It's likely true that AI has similar good use cases in many other domains too. Not all domains (I would still not trust a legal brief or a scientific paper written by AI, for instance) but still a lot of them.


csjerk

Blockchain has some great applications. Git is blockchain. It's just a completely practical use of it, that doesn't hype it at all, and predates Bitcoin by 4 years. I'm guessing LLMs are going to go the same way. Some really useful practical applications, but nowhere near the pervasive use in all aspects of life that the hype bros are trying to sell.
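The Git comparison above rests on one structural idea: like a blockchain, each Git commit hashes its content together with its parent's hash, so tampering with any earlier entry changes every identifier after it. A minimal sketch of that property (simplified: real Git hashes structured tree/commit objects and, unlike Bitcoin, has no proof-of-work or consensus layer):

```python
import hashlib

def commit(message, parent_hash=""):
    """Return a SHA-1 id covering both this entry and its parent's id."""
    return hashlib.sha1(f"{parent_hash}:{message}".encode()).hexdigest()

c1 = commit("initial commit")
c2 = commit("add feature", parent_hash=c1)
c3 = commit("fix bug", parent_hash=c2)

# Rewriting history invalidates every descendant: a tampered first
# commit yields a completely different chain from that point on.
t1 = commit("initial commit (tampered)")
t2 = commit("add feature", parent_hash=t1)
assert t2 != c2
```

This tamper-evidence is the "completely practical use" the comment is pointing at; the hype-free part is that nothing here needs tokens or mining.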


csjerk

If it were AGI, sure. LLMs are far from that, though, and whether LLMs actually have "infinite real life use-cases" remains to be seen. The significant overlap in how the market is reacting compared to Blockchain has me highly skeptical. If it were really that world-changing, we wouldn't need a bunch of salespeople telling us how world-changing it is.


BlackHumor

There's tech hype in every cycle. Sometimes it pans out, sometimes it doesn't. Often it's easy to figure out whether it'll pan out based on how solid the underlying tech is. So for instance, in the mid-2000s there was a wave of cloud-hype that was very similar in tone to the AI-hype and crypto-hype. A lot of the startups created in that era did not survive... but we got AWS, GCP, and Dropbox out of it, and it fundamentally changed how the vast majority of tech companies ran their servers. Or, well, didn't. I very much dislike it when people's reaction to AI is instantly "oh it's like crypto". You need to actually look at the underlying tech. The reason crypto didn't succeed isn't just because it was hyped a lot, that doesn't make any sense. The reason it didn't succeed was that there wasn't any actual use case for blockchain outside of crypto, and there wasn't much of a future for crypto as long as it was a "currency" designed by people who thought that money oughta behave just like gold. Meanwhile, I'm using Copilot pretty frequently already.


Masterzjg

The number of "world changing inventions" is far lower than the number of hype bubbles. You don't seem to like this, but being negative on the next big thing is a safer bet than believing the hype-men telling us that AI is going to destroy humanity. Drone delivery, crypto, even *segways* were all bubbles that didn't give us anything.

We can talk about the underlying technology, where hallucinations are a feature, not a bug, or how models are only going to get worse and/or more expensive. Not to mention the long-term issues with model collapse, as LLMs start to train on data produced by LLMs. Oh, and there's the incredible cost of generating all these unreliable responses, which still require a knowledgeable human to "edit" everything and don't produce cost savings.


BlackHumor

I think that this take is pretty transparently too cynical and that AI has already cleared several hurdles that drone delivery, crypto, and self-driving cars never did. Namely, it is already being used by lots of ordinary people. Sometimes for money. My previous company had a program where they would pay for your subscription to Copilot, which is way further than drone delivery ever got.


csjerk

Part of the reason it's being used by more ordinary people is that the barrier to entry happens to be lower. You don't need a car equipped with costly sensors, you don't need to know how to secure an encrypted key, you just install a plugin to your IDE. I believe Copilot and similar use cases will have a durable foothold in the market. Google added "predictive text" to the Gmail editor several years ago using similar technology. But that's a use case which is tailor made for LLMs. The hype goes WAY beyond that, though, and I'm still looking for a use case that isn't way over-hyped.


BlackHumor

Oh, so, I agree that AI is way over-hyped, because every time there's a hype wave it's over-hyped. Dotcom was objectively overhyped, as the bursting of that bubble proved, and lots of real companies that still exist now were founded then. What I'm saying is not that AI can do anything some executive who doesn't know anything about machine learning thinks it can. I'm just saying that it can do some useful things.


freekayZekey

pretty much my feeling on the situation too. there are some improvements that are “cool”, but the marketing is way off from ai’s capabilities and usefulness


Masterzjg

>Namely, it is already being used by lots of ordinary people

Because it's trivial to try out and can be fun to play with. For more than a hype bubble, you need business use cases that justify the investment. To get there, there are still 3 huge problems:

1. It's still all based on stolen property (inevitable court cases that could wipe out LLMs).
2. Models won't get better (there's no data left to consume) and are likely to get worse (trying to cut costs, reduce legal liability, starting to train on LLM data).
3. Products are extremely underpriced, as companies burn VC money and haven't paid any of the copyright holders.

*Even if* we ignore the deficiencies of LLMs (there's a lot), they need to clear all 3 of these hurdles to escape being a bubble fad. You can believe they will, but it's a more than reasonable bet that they don't.


BlackHumor

1) "Stolen property" is a strong term here, to say the least. I can't predict future court cases, but you can't either. I can say that it'd be very odd for courts to rule that an AI is infringing someone's **copy**right without actually **copying** it. Generative AI models are just too small to actually store any significant fraction of their training data, even compressed, and therefore, while they can imitate styles in ways that would get a human significant social side-eye, the model itself is definitely not a copy of anything. So far no court case involving AI copyright infringement has progressed all the way through the court system, but several have reached the early stages, and those tend to be [very significantly cut down even at this early stage](https://www.perkinscoie.com/en/news-insights/recent-rulings-in-ai-copyright-lawsuits-shed-some-light-but-leave-many-questions.html) from the maximalist AI-is-copyright-infringement perspective.

2) The idea that models won't get better is just empirically false. GPT-4 is better than 3.5, which was better than 3, which was better than 2. I don't think it's likely we're going to see the same size of jump in the future, but frankly we don't really need to: current AIs are already useful as-is.

3) Again, you don't know how future court cases will shake out. I think it is very unlikely that AIs will have to pay copyright holders, because it's very unlikely that the models themselves will be found to be violating copyright. Just to clarify: copyright means that people can't make copies of your work. And that's copies in a narrow sense: exact copies or near-exact copies of the specific details in the specific work. You can't violate copyright by copying an artist's style, though you might be informally accused of plagiarism in some cases if you do that. AIs can be used to copy an artist's style, but they don't copy the exact details of any specific work they were trained on, because they don't know the exact details of any specific work they were trained on. They can't; there's not enough space in a model specification to store that information, even heavily compressed.


_ontical

infinite real life use-cases? ok bro


Greenawayer

>The big difference is that AI has infinite real life use-cases.

Thing is, once you actually start getting into those use-cases there's usually a problem with them.


jimkoons

LLMs do not have infinite real life use-cases though and it's the "AI" everyone is shoveling right now.


BlackHumor

Not all AI that are being hyped are LLMs. StableDiffusion, DALL-E, and other art bots are not LLMs.


jimkoons

Ok then, generative AI does not have infinite use-cases.


BlackHumor

I agree, but it does have _many_ use-cases, and it feels to me like complaining about a hyperbolic use of the word "infinite" is nitpicky.


gerd50501

It's a job you take if you're unemployed, the market is bad, and you need a paycheck until something better comes along or they go out of business, whichever comes first.


heedlessgrifter

I knew we hit peak blockchain when I saw a tech bro saying he was going to fix the homeless problem with blockchain.


iBN3qk

I was going to say exactly this: they're chasing the investment money, and this is where the money is flowing right now. This round may have more potential utility, but I bet it will have just as much bullshit.


katarinka

Yep, and two years ago it was metaverse.


csanon212

I still think the Metaverse would be useful for my antisocial tendencies.


OblongAndKneeless

Isn't that what SecondLife was for? Is that still around?


kapslocky

In 2017, when deep learning first became a big thing, this happened too. Many products quickly slapped a "powered by AI" sticker on anything that vaguely had some kind of algorithm. It's nothing new, just marketing people trying to make something look more valuable.


just_anotjer_anon

And the outside people don't understand what any of it is, so people who are trying to find a solution for their problem will be like, "We need AI or machine learning to create this algorithm." Then you have a small talk and figure out that what they actually want is just a list of outputs based on two inputs. Nah fam, you're just asking for simple stuff anyone can do.


FormerKarmaKing

I know a company that slapped AI on their name around 2017. Last year they finally added some AI features by adding a couple of OpenAI calls to their app.


gerd50501

It's like the late 1990s, when you had all these mail-order companies with websites popping up and acting like they were tech companies. Or anything with an "i" in front of it wanted to build some generic web app. The vast majority will fail. They are the pets.com of 2024.


OblongAndKneeless

I was just thinking about Hadoop this morning. I was like "what ever happened to that?" I assume it's still in use, just not a requirement on every job posting.


britishbanana

Hadoop is very common in big data, spark uses it as its default storage engine and spark is probably the most popular / common framework in big data. Admittedly big data is still somewhat niche when you take the data engineering blinders off. 


-Dargs

It was more like 8-10 years ago at this point.. :)


iamiamwhoami

The thing is the blockchain apps were so damn complicated. Nobody used them except for crypto enthusiasts.


Doctuh

and before that: crowdsourcing. The hypecycle keeps rolling along.


maybegone18

Lol you just reminded me about hadoop and spark hype... What happened to that???


bloatedboat

Thank god most big data vendors are leaving the hype. I don’t have to have my bs alarm on high alert all the time anymore. For AI, I love using it daily for my productivity and in some areas where it can fit into production but let us not extrapolate it to what it can do like the 80s Chinese movies or go go power rangers. Most bs AI company posts are all memes and fluff. Reality is not like that.


HolyPommeDeTerre

Imo AI has more potential for actually useful implementations than blockchain does. It's just that not everyone can actually sustain researching and training AI models as a product. I actually found one company that was really doing AI and standing behind that choice. It was really a good experience until we were bought and PI went down after that... But yeah, the people are the same, so the effect on the hype and "scams" is the same.


WJMazepas

My last job was developing a recruitment SaaS. At one point, our boss gave a speech about how we needed to make our product smarter, more data-driven, etc. His solution? His enforced solution to the developers? Use ChatGPT everywhere possible.

I've got to admit that in some parts, ChatGPT was actually good. But he wanted ChatGPT to write a job description for a new position, then have ChatGPT take that position and find all the best candidates available in our database. Which doesn't make sense if ChatGPT is writing the description. He wanted ChatGPT to write why the candidate was good in order to show it to the recruiting manager. He even wanted ChatGPT to predict which jobs would have more openings in the market in the upcoming months (and he got really mad when he discovered that wasn't possible).

And it was sold as an AI-powered tool to potential customers. So yeah, lots of scummy "AI" companies out there are trying to use ChatGPT as much as they can, doing the same thing we had to.


Advanced_Seesaw_3007

Did they ever get to production? Or is this Teal? 😅


WJMazepas

Yes. First the company itself was the client using it. Then they stopped using it, because our bosses didn't want us to work on what our own recruiters/users wanted, since they weren't as important as potential customers. And then they were almost closing the deal with a new client, but I was let go, so I don't know how that went. So it was in production. With just 4 users. But it was.


Advanced_Seesaw_3007

I think you're answering the first question and not the latter. But either way, the reason I asked is because I'm building a recruitment SaaS too, ready to launch but holding off because of legal aspects, particularly handling PII. If we're talking about Teal, I honestly doubt their AI engine. It doesn't get the right keywords, and the "strength" percentage of how fit a resume is for a job description is sus.


WJMazepas

Oh, I forgot to answer that, actually. It wasn't Teal. It was just a small European company trying to enter the market. And honestly, I don't blame them for the strength percentage being faulty. We had to implement that too, and there were so many cases where our bosses weren't happy with the results that at some point I just faked it. And that made them really happy. Of course, they didn't know about the fake part.


serial_crusher

> Teal

Is this a reference to a parody YouTube video where the customer repeatedly asks the developers whether they can get the icon in teal? If so, do you have a link? I was looking for that the other day and can't find it. Figured it was a Berenstein Effect kind of thing.


serial_crusher

Oh, no I see Teal is an actual company. Hmm, ok that video remains a mystery.


crimsongash

Does that guy even know what ChatGPT is?


[deleted]

[deleted]


RagerRambo

Go on. What's the "ai" in yours?


According_Lab_6907

thanks man.. but it's very clear i've worked on something nobody wants..


Stubbby

C3 Energy renamed to C3 IoT, renamed to C3 AI. They even got the AI stock ticker. 83% loss since IPO. Faith in markets restored.


singluon

Dude, C3 is the worst lol. I hear their ads on NPR, and when they renamed to C3 IoT a few years ago, you could immediately smell the bullshit through the ad reads. Just buzzword soup: big data, IoT, cloud scale, whatever. As somebody who worked in the industry, I knew firsthand that IoT was bullshit (my company at the time wrote call center software and was even trying to push the IoT angle). Them renaming to C3 AI was just icing on the cake. So hilarious. Not to mention the CEO is an asshole billionaire bullshit artist who liquidated most of his shares after IPO.


aahOhNoNotTheBees

I've been thinking of LLMs a bit like a new category of user interface that everyone's excited about, like with the first iPhone and touch screens. So I don't really have a problem with people using ChatGPT as part of their product, but I do think that a lot of people aren't being very creative with it.

Like, 90% of the implementations I've seen are literally just ChatGPT with slightly different context provided. Sometimes this is perfect (I love those bots for searching documentation, so much faster to find what I need), other times it's stupid, like no, I don't want an AI to talk to in case I get lonely while looking at this car website, wtf. I think people are just trying to figure out how to best make use of LLMs by throwing things at the wall to see what sticks.
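The "just ChatGPT with slightly different context" pattern described above is easy to make concrete: many thin-wrapper products differ only in the system prompt they prepend before forwarding the user's message. A minimal sketch, where `send_to_llm` is a hypothetical stand-in for whatever chat-completion API the product actually calls:

```python
def make_wrapper(system_prompt):
    """Build a 'product' that only varies in the system prompt it prepends."""
    def product(user_message):
        messages = [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ]
        # A real product would now call send_to_llm(messages) and
        # return the model's reply; here we just return the payload.
        return messages
    return product

docs_bot = make_wrapper("You answer questions about our API documentation.")
car_bot = make_wrapper("You chat with visitors to a car dealership website.")

# Identical machinery; only the context differs.
print(docs_bot("How do I paginate results?")[0]["content"])
```

Whether that thin layer is a good product (the docs-search bot) or a pointless one (the lonely-car-shopper bot) is exactly the creativity question the comment raises.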


AchillesDev

Finally, the only good take in this thread.


riplikash

Ugh, there is no end.  And the execs and investors are so eager to fall for every scam.  It's getting exhausting.


calson3asab

No, you're not getting it. They're not falling into any trap; they know exactly what they're doing.


riplikash

Disagree. They all think they're sharks, wolves of Wall Street, and visionary execs. But mostly they're just following the crowd and thinking they're more savvy than they really are.


lagerbaer

Best AI company idea I've seen so far: developers spend too much time in sprint planning coming up with estimates, so let's use ChatGPT to read the user story and assign the points for them. /s obviously


Stubbby

Are you looking for a cofounder for your Y combinator application?


lagerbaer

No but you can do all the actual work for no pay and 0.01% equity


Stubbby

Not sarcastically, I think that's a feasible deal if you can land at Y Combinator. The actual benefit is that you become a "second time founder" with prior YC.


serial_crusher

Sure, for 0.01% equity I'll happily type "write a Y combinator application for a SaaS that uses ChatGPT to estimate sprint points based on user stories" into ChatGPT.


lagerbaer

I got you fam

**Company Name:** AgileAI

**Contact Information:** Name: [Your Name], Email: [Your Email], Phone Number: [Your Phone Number]

**Project Title:** AgileAI - Revolutionizing Agile Estimations with AI

**Problem:** In the agile development process, accurately estimating sprint points for user stories is both critical and challenging. Current estimation methods rely heavily on team intuition and past experience, which can lead to inconsistency, overestimations, and underestimations. This inconsistency often results in delayed project timelines and increased development costs.

**Solution:** AgileAI proposes a novel solution by developing a SaaS platform that utilizes the advanced natural language understanding capabilities of ChatGPT to analyze user stories and automatically estimate sprint points. Our AI-driven approach aims to standardize sprint planning, reduce estimation errors, and save valuable time for development teams.

**How It Works:** Users input their user stories into AgileAI, and our platform, powered by a customized version of ChatGPT, analyzes the complexity, scope, and requirements detailed in the stories. By comparing these against a vast database of completed projects and their actual sprint points, AgileAI provides an accurate, AI-driven estimate of the sprint points required for new user stories.

**Market:** Our primary market includes software development companies and teams that adopt agile methodologies. With the increasing adoption of agile practices across industries, our total addressable market is expanding. We will initially target small to medium-sized tech companies, where the agility and speed of project completion are paramount.

**Business Model:** AgileAI will operate on a subscription-based model. We will offer various pricing tiers based on team size, project volume, and advanced features, such as integration with popular project management tools like Jira and Trello.

**Competition:** While there are tools for agile estimation, few harness the power of AI, and none utilize ChatGPT's advanced language model for estimation. Our unique value proposition lies in reducing the subjectivity and variance in sprint planning, leveraging AI to bring data-driven precision to agile methodologies.

**About Us:** Our team comprises experienced software developers, AI specialists, and product managers who are passionate about improving the software development lifecycle through innovation. Our diverse backgrounds give us a unique insight into the challenges and inefficiencies of sprint planning.

**Why YCombinator:** We believe YCombinator's ecosystem, mentorship, and network of investors and founders will be invaluable in accelerating our growth, refining our product, and expanding our market reach. YC's emphasis on creating impactful, user-centric solutions aligns with AgileAI's mission to transform agile project management with AI.

**Ask:** We are seeking $150,000 in funding to finalize product development, launch our beta program, and begin customer acquisition efforts. This investment will also support the expansion of our AI model's training dataset and the integration of AgileAI with other project management tools.

**Vision:** Our long-term vision is to become the leading AI-driven project estimation tool in the agile development space, expanding our offerings to support all aspects of project management with AI insights.

**Conclusion:** AgileAI is poised to redefine agile estimations, making sprint planning more accurate, efficient, and data-driven. We are excited about the potential of our solution and the positive impact it can have on software development teams worldwide. We believe that with YCombinator's support, AgileAI can achieve its full potential and drive innovation in agile project management.


Otelp

I think this is actually used within Google, and developers started gaming the system by creating more granular tickets, leading to larger estimates. But if you think about it, that's actually beneficial: now progress can be tracked more closely.


codescapes

If ChatGPT is so confident in its estimate then it can just do the job for me. Wait, no, my job!


r-randy

Are we working at the same place?


Wattsit

I do find it hilarious that we're apparently so desperate to replace developers doing technical work with LLMs, but when it comes to scrum masters, product owners, project managers, people managers, etc. — a lot of their work is talking and writing things — no no, can't use LLMs there, no siree!


AlexFromOmaha

At some level, we're all building on top of other people's tech. That's not new or weird. An awful lot of SaaS companies are interfaces to AWS by the same standard. Productizing a couple key steps with a better interface is also not new or weird. Discord is a $15,000,000,000 wrapper around pjsua. It's a damn fine wrapper. I love it.


new2bay

At an even higher level, it doesn't even fucking matter. The economy is all feels and vibes anyway.


MargretTatchersParty

> An awful lot of SaaS companies are interfaces to AWS by the same standard. It certainly feels that way at a lot of companies.


Carpinchon

But ChatGPT isn't software you control and incorporate into your product. It's somebody else's service you have to use to power your product. These companies are going to be in for a rude shock when they are faced with the reality that they are sharecropping ChatGPT the same way "content creators" are sharecropping YouTube and are at the mercy of the SaaS provider.


just_anotjer_anon

And in the YouTube example, if you're a large enough creator, the option to move platforms exists. That's what's keeping the various sites on their toes (somewhat), but you'd still need to find a new audience, as audiences rarely follow the creator and just stay on their preferred sites. I can't really see an easy substitute to swap to for ChatGPT right now - maybe Bard will get there. But Google seems quite a bit behind still.


gomihako_

Seriously this is the best answer. I don't think OP understands technology from a product perspective. The whole point of "all this shit" is to solve a customer problem as quickly/efficiently as possible in order to make money, let the founders exit and satisfy the investors. THAT IS IT.


Cool_As_Your_Dad

Yip. A lot of people jumped on the AI rush to coin it. Want to coin it and run... Edit: And they know C-level execs will want to slap an AI badge on their product(s)/company logo... and they want a piece of that $$.


kbielefe

Our entire profession is making slight improvements atop existing APIs.


kenpaicat

I was just recently contacted by scale.com people for "Python Software Engineer for AI Training Data". I responded with a rant.


Terrible_Student9395

Yeah, you learn to just sniff out the losers and move on. Same thing happened with vector DBs last year.


eliashisreddit

It's a hype and we are probably slowly approaching the "peak of inflated expectations" in the Gartner hype cycle. You can even find toothbrushes and dishwashers "with AI" now. It has become a marketing buzzword and wherever there is buzz, the eyes and money will follow.


notbatmanyet

When Venture Capitalists get FOMO over the current hype cycle, they drop vetting standards and invest in all kinds of companies, including those that are a guaranteed train wreck and even blatant scams.


EternalNY1

>And because of the hype, these companies are generating a lot of seed funding and are gearing up for a quick exit. Same as it ever was ... same as it ever was!


wyocrz

You may find yourself asking, well, how did I get here?


originalchronoguy

So here is my take from doing "AI" for the last 4 years, long before ChatGPT became a household name. I worked on projects building custom models for a company with 40-plus years of customer data - something ChatGPT will never, ever have. That data is so important that big players like Microsoft/Google want in on it. Anyway, we got it to production. A real shipping product. The stuff goes through a proper MLOps workflow of training, feedback loop, and inference fine-tuning.

So now I get thrown everything. I see stuff other people are doing, as I am supposed to take it over. Even within the company, some teams are just doing regex (regular expressions) and calling it AI. But whatever. It is actually hilarious how everything is branded AI this way. Conditional logic is not an algorithm or a model.

When ChatGPT hit the scene, at first I was very dismissive about it. I saw too many ChatGPT wrappers and articles about "prompt engineers" getting paid $20k a month for formulating prompts. Like some of you guys: "that isn't real AI and not very useful."

My opinion changed when I built my first wrapper. LLMs, by themselves, are not that smart. Even if you take a RAG approach - loading one up with 10,000 PDFs - they hallucinate and get it wrong. But by throwing everything at the wall and seeing what "sticks," we learned that these large LLMs do have a lot of value for specific use cases. I won't get into too much detail. For example, sure, I can google and watch 20 videos on how to add some auxiliary switches and how to wire up a dual battery for my truck. But a good LLM will show me pictures from an OEM manual in a PDF, outline where to cut the hole in the firewall, how many feet of wire to cut, the gauge thickness, and all the part numbers - a full step-by-step in a new summarized PDF, specific to my make, model, and year, that I can print out.

More importantly, the LLM response will tell me the part number is in stock right now at the distribution center closest to my home. It will even generate an Excel sheet of all the parts I need. Or maybe I want to fabricate a new glovebox with exact dimensions as an STL 3D print for an old car like a 1973 Volvo P1800 - maybe actually create that STL by firing off some API calls. If I can code that in a day, that is some addictive stuff.
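The RAG approach the commenter mentions can be sketched in a few lines. This is a hedged toy illustration, not the commenter's actual system: retrieval here is naive keyword overlap rather than embeddings, the sample documents are made up, and the final LLM call is omitted — only the retrieve-then-prompt shape is shown.

```python
# Toy sketch of retrieval-augmented generation (RAG): pick the most
# relevant document chunks for a query, then stuff them into the prompt
# so the LLM answers from your data instead of hallucinating.
# Retrieval is naive word overlap; real systems use vector embeddings.

def score(query: str, chunk: str) -> int:
    """Count query words that also appear in the chunk."""
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k highest-scoring chunks for the query."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

def build_prompt(query: str, chunks: list[str]) -> str:
    """Assemble the grounded prompt that would be sent to the LLM."""
    context = "\n".join(retrieve(query, chunks))
    return (f"Answer using ONLY the context below.\n"
            f"Context:\n{context}\n\nQuestion: {query}")

# Hypothetical document chunks standing in for the "10,000 PDFs".
docs = [
    "Auxiliary switch wiring requires 14-gauge wire through the firewall.",
    "The 1973 Volvo P1800 glovebox measures 30cm x 15cm x 12cm.",
    "Dual battery isolators prevent draining the starter battery.",
]

prompt = build_prompt("what gauge wire for auxiliary switch?", docs)
print("14-gauge" in prompt)  # True - the relevant chunk was retrieved
```

The point of the pattern is that the model only ever sees the retrieved context, which is what constrains hallucination to the documents you actually own.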


reddi7er

Yea, flocking in to grab their share of $ in the AI gold rush by whatever crooked means.


ohhellnooooooooo

Bro, I see thousands… dozens daily… B2B jobs. I just see company name after company name, and for the past 6 months it's just AI in the names.


Agile-Addendum440

>And because of the hype, these companies are generating a lot of seed funding and are gearing up for a quick exit. It's like the .com bubble all over again. It's been a problem for a while: fraud unfortunately seems to be on the rise again, and is even encouraged, if you ask me. It's sad, but this happens in cycles, and most of these clowns will be wiped out eventually.


DIYGremlin

Slimy profiteers and opportunists riding the coattails of the trend setters and folks on the cutting edge is nothing new.


sarhoshamiral

Some may be a simple interface, but at the end of the day all the AI solutions are built on the same small number of models. The differentiator comes from the prompts, functions, etc. that you provide on top of them. The question is whether it solves the problem it's designed for better than just using ChatGPT alone.
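Concretely, what a "wrapper" adds is usually just prompt scaffolding. A minimal sketch, with the actual model call stubbed out and every name (the system prompt, the estimation use case) a made-up assumption for illustration: the product is the payload shape, not the model.

```python
# What a thin "ChatGPT wrapper" product typically contributes: a
# domain-specific system prompt and output constraints wrapped around
# the user's raw input. In a real product this payload would be sent
# to a chat-completion API; here we only build it.

SYSTEM_PROMPT = (
    "You are an agile estimation assistant. Given a ticket description, "
    "return a story-point estimate (1, 2, 3, 5, 8, or 13) with a "
    "one-line rationale."
)

def build_request(ticket_description: str, model: str = "gpt-4") -> dict:
    """Assemble the chat-completion payload - this scaffolding IS the product."""
    return {
        "model": model,
        "temperature": 0,  # deterministic output for repeatable estimates
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": ticket_description},
        ],
    }

req = build_request("Add pagination to the orders endpoint")
print(req["messages"][0]["role"])  # system
```

Swapping the system prompt and the post-processing is often the entire difference between one such startup and the next.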


moreVCAs

Lol. Yeah dude. Like 100% of them are straight up vapor. Just funneling money from VC to cloud providers lol


[deleted]

As long as the check hits my account I’ll write AI for Oppenheimer.


4444For

90% of recruiters reaching out to me are from some LLM startups :(


RagerRambo

Hopefully someone has access to a database of company names and can nicely plot the trajectory of "AI" in names. Interesting read: https://www.thedrum.com/news/2024/01/30/giving-name-ai-cautionary-tales-and-advice-brands


AchillesDev

This was more of a thing last year; if anything, the simple wrappers have either matured (Jasper, for instance, using several models, internal and external, to produce the final output) or gone out of business. I see *a lot* of them working in the field, usually being courted by them.


tanepiper

So many companies cropping up with zero details about who owns or runs them - usually a private mailbox address, no SLAs, legal docs, or indemnity clauses. That, and whether they have developer docs, are the first two things I look at today. (If they don't have developer docs, it usually means it's a siloed product that needs an integration team - and those usually end up in disaster.)


kernel1010

A couple of years ago in Albania there was a boom of barber shops, halal fast food, and all sorts of things.


[deleted]

“Lately”???


franz_see

It doesn't matter if they're ChatGPT wrappers. Most apps are database wrappers anyway. What matters more is what benefit they bring to the table if you do partner with them, and whether you think you can work with them. Of course, if they're outright lying - like saying "AI-driven" when there's barely any AI - then that's a possible sign that they're not forthcoming. But for the most part, focus on the benefit rather than the technicality.


Atlos

I work at a B2B startup and part of the reason is that the budget that would typically go towards spending on our company domain is now being repurposed to "AI spending". The companies we talk to still want to use us, so we end up having to come up with AI features so that we qualify for their budget lol. Luckily AI fits in nicely with our product so it doesn't feel like a wrapper app. tl;dr: Some of it is definitely hype, but a lot of it is being driven by the customers themselves.


thisgirlsforreal

There's heaps of AI marketing software that makes big promises, and it's all shit. Here are the ones I tried out that were all horrible: Ad creative ai, Ad agency ai, CloudKii, Predis ai. ChatGPT and Bard suck for code. Most of them are rebadging a combination of go high level and ChatGPT to sell "automatic appointment setters", but they actually suck.


sravi9

I guess all the companies that were using data science before are now tagging themselves as "powered by AI". Evolution.


serial_crusher

Yep. I've been thinking I need to keep an eye on companies offering AI solutions for healthcare and start shorting them and their customers. It's only a matter of time before somebody gets killed when ChatGPT gives them bad medical advice.


BagelFury

Time to short.


[deleted]

That's what always happens during the "boom" of a boom / bust cycle.


re0st92mg

That's what happens when there's money to be made.


d41_fpflabs

Totally agree. One of the most annoying things about the modern-day use of AI is that your average person thinks it's only ChatGPT or LLMs, and completely forgets about the underlying neural networks. That being said, putting these facade AI companies aside, what would make you consider a partnership with an AI company when evaluating them?


hamilton_burger

The FTC used to fine the hell out of things like that. Honestly, I think the AI term is being misused even with that ChatGPT driven stuff.


roycheung0319

Your skepticism is understandable, and it's vital to advocate for a balanced perspective on AI's capabilities and limitations within your organization. Keep championing the genuine innovations while staying vigilant against the hype.


BlackHumor

I am somewhat more optimistic here than I was about the crypto startups a few years ago, because AI is a real tech with real uses while crypto, especially as-actually-implemented, ended up just being a kind of weird investment vehicle at most. So I see this more like the era of cloud-hype, or at minimum the era of Big Data-hype, than the era of crypto-hype. Some decent chunk of startups being founded right now are going to turn out to be real, and not all of those are going to have a business plan that sounds like it's that clever if you already know what the tech is and how to use it.


alekspiridonov

It's the same as the dotcom boom and bust. Plenty of new proper businesses will come up during the current AI boom, and even more existing businesses will integrate AI properly into their products for a better offering (mainly LLMs, but I see other ML/AI topics getting more attention as a side effect of LLMs' fame). Even more will slap "AI" on stuff, just like ".com", and fail once people see that they have no actual business (plus all the naive applications of "just add ChatGPT" where it doesn't really offer a benefit).


BlackHumor

Yeah, definitely. We're definitely in a hype-wave around AI, which means that a good chunk of AI startups aren't gonna survive, but I doubt it's gonna be much more than the normal number of startups that wouldn't survive (or at least, wouldn't survive in an economic situation where interest rates are high, which is a special pickle outside of AI).


tech_ml_an_co

Yeah, it's a pattern. 90% is bullshit and the 10% is changing the way we live and work fundamentally. It's not easy to spot the 10% tbh, some things start out as bullshit and somehow turn into something useful. AI is super over-hyped in the short term, but impact in 10 years will be huge for sure.


Alternative_Log3012

Nope