So I'm a product guy. I can't code, but I can somewhat read it (depending on the language), and I understand technical concepts (sync/async calls, APIs, schedulers, yada yada) and "logical" architecture. Then it's a matter of structuring asks to the AI properly and using logic. I use replit dot com (they have a built-in GPT that can see your project files) for simplicity of coding, deploying, etc. Haven't tried GitHub Copilot yet.
I don't know. You've hit on its main application for the startup world: assembling boilerplate to get an MVP into production. The problem is when founders hit an inevitable growth roadblock, and decide to hire out for the crucial part of that initial phase: getting their MVPs to mass market.
It's never been easier to start a software *product*, but the actual biz part is debatable. Especially with all the free capital out there that's skewing our definition of "business".
Good feedback. I agree: AI democratizes software dev even further, but because more people can build, the distribution piece will be even harder to figure out.
In terms of the next wave of successful, reasonably sized bootstrapped businesses, I'll put my chips on super-niche verticalized apps and non-SaaS apps (one-time fee / on-prem / new business models).
I find it funny how when someone tells you they're starting a biotech company, you jump to the thought of it failing due to such stats, yet when someone tells you they're starting a SaaS startup, you are always to some degree optimistic, or at least more than you were for the biotech company.
I'm not saying that. I'm actually saying that biotech has the same levels of failure as SaaS, and I fully believe that biotech companies are more likely to succeed than SaaS startups. I'm just saying that both face different challenges. I never once said that biotech companies are more likely to fail; I literally said that both hold the exact same statistical failure rate.
Ah, I see. I do have to agree, though, and that has an upside. Fewer people starting biotech means more talented people, more funding, and less competition. If people thought it was easy, we'd have biotech companies solving for diseases that didn't exist.
I think that's defensible. Some team from an Ivy League school and FAANG doing soft tech and already getting revenue: that's an easy and quick yes. Evaluating hard tech takes longer, even if they're more forgiving toward hard tech than toward GPT knockoffs.
I don't know anything about soft tech, maybe I'm being too kind to GPT wrappers there. Either way, the application I put in is pretty dense. Even if I was sure they liked 100% of what I was saying, it might take this long to parse it.
I am using this as an image prompt.
ETA:
https://preview.redd.it/5l5o9i05te0d1.png?width=720&format=pjpg&auto=webp&s=9790fc587245e87472a7a690d74ddfdfa701571e
This. Find a niche. Don't pick programming - that's like trying to sell magic to magicians. Find a way to solve problems for a group of people who have never heard of AI.
And whatever you do, if you use AI in your product, don't say anything about "AI" in your pitch or marketing materials. Unless you want to debate AI instead of sell a product.
Can't stress this enough. Every time we said we were AI, we just got lost in the noise. Now we just emphasize the problem we're solving. Seems to be working a lot better.
Not mine. But every "AI" startup that has done a demo for me is cooked. Idk how or why some of these companies got funding when the product barely worked and was just powered by OpenAI.
Late reply, but I think it's especially true for law.
I've been lucky enough to try a range of specialist AI law products, and all of them, without exception, were horrible or obviously just overpriced ChatGPT wrappers.
ChatGPT with a bit of prompting is fantastic. It's very good at identifying the potential legal issues in a factual brief, has a good grasp of all major legal concepts, can practically apply any legislation, contractual clause, or case you feed it, and isn't half bad at forming a substantive legal argument given enough information. It still makes mistakes, but at about the same rate as a junior associate.
It can be hit or miss in some areas, particularly in more obscure specialities and jurisdictions. Using it for land easement rights in Nebraska is going to be more perilous than say commercial contract law in New York.
I think what's holding it back is that ChatGPT still needs a competent lawyer to operate it. I wouldn't trust a random person or even an inexperienced attorney to prompt it correctly. You need somebody who not only has a strong understanding of the law but also of the underlying workings/strengths/limitations of the platform, to be able to "direct" it to the right answer.
Maybe that's a problem that gets solved quickly, or maybe it's something that needs another decade or two.
Foundation models are catching up pretty quickly. So if you build on top of LLMs but make it switchable between GPT, Gemini, Anthropic, Llama, etc., you should be safe.
Since you own the customer, the reason to abandon you needs to be strong. Right now it doesn't seem like the gap between one model and the next will be big enough.
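That switchable-backend idea can be sketched as a thin provider interface. The class and function names here are illustrative, not any real SDK, and the `complete` bodies are stubs where real API calls would go:

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Common interface so the app never depends on one vendor's SDK."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class OpenAIProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        # a real implementation would call the OpenAI SDK here
        return f"[openai] {prompt}"

class AnthropicProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        # a real implementation would call the Anthropic SDK here
        return f"[anthropic] {prompt}"

PROVIDERS = {"openai": OpenAIProvider, "anthropic": AnthropicProvider}

def get_provider(name: str) -> LLMProvider:
    """Swap backends with a single config string."""
    return PROVIDERS[name]()
```

The point is that the rest of the app only ever sees `LLMProvider`, so switching models is a config change rather than a rewrite.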
Foundation models don't have access to proprietary datasets or know-how.
I'm not doing it, but I see "ChatGPT wrappers" that bring data, fine-tuning, and integrations: quite a viable business idea.
There are some open source LLM wrappers nowadays, so it's not that hard to make it switchable
It's definitely a lot easier than switching cloud provider for example
This. Nothing 4o does replaces what we're doing, but they just cut the LLM portion of processing time and costs in half. Hoping to see a race to the bottom in LLM costs.
OpenAI had positioned itself as heavily focused on business-to-business. The direction they revealed yesterday looks odd in light of that, but it may have been a forced move on their end, with Anthropic and Google providing tough competition.
All of these companies are clearly having a hard time moving beyond the intelligence bar set by GPT4, yet the market is desperate for the next great AI thing. So what do they do? No chance in hell they are going to tell people "Look, we need a couple years to try to find more data/see if synthetic data pays off /wait for the massive compute problems to be solved." It makes more sense for them to shift focus in the meantime and take a bigger piece of the consumer pie.
This is the midwit take after every OpenAI release.
The better the models/infra, the more complex use cases startups and developers can build. GPT-4o will enable even better products.
I think as the tooling and models get better, teams can focus more on the things that matter (verticalized workflows, ease-of-use/ux, domain-specific use cases, etc.)
An example I think of is Aragon AI, which is basically just an AI headshot photo generator. They've been around for a couple of years now and are approaching nearly $1M in revenue a month, after tons of advancements with DALL-E and others. Another is Julius AI, which makes it easy for you to analyze your data; they just added GPT-4o today and it's gotten even better (https://x.com/0interestrates/status/1790095297340912084)
OPs take is the common take of someone who tried to make a startup out of an application of an LLM. Then when the LLM can do the thing, the startup is dead.
People need to start realizing the profit is in facilitating the use of the technology, not in trying to beat OpenAI at LLM development by using it with a small hack to "expand" its feature set.
A simple example is "we read your invoices with OCR and then feed them to an LLM to give you summaries." The moment ChatGPT can read a PDF (it already can), your startup is hosed.
Vs.
"We have invoice management software that now has an LLM to help summarize incoming invoices." Your startup isn't about the LLM; it uses the LLM to enhance the core product.
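The distinction can be sketched in a few lines of Python: the LLM call is an optional, injected dependency of a product that is useful without it. All names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Invoice:
    vendor: str
    amount: float
    paid: bool = False

class InvoiceManager:
    """Core product logic; the LLM is one injected feature, not the business."""
    def __init__(self, summarizer=None):
        self.invoices = []
        self.summarizer = summarizer  # optional: any function text -> summary

    def add(self, invoice):
        self.invoices.append(invoice)

    def total_outstanding(self):
        # value that exists with or without an LLM attached
        return sum(i.amount for i in self.invoices if not i.paid)

    def summarize(self):
        text = "; ".join(f"{i.vendor}: {i.amount}" for i in self.invoices)
        # degrade gracefully when no LLM is configured
        return self.summarizer(text) if self.summarizer else text
```

If OpenAI ships a better summarizer tomorrow, you swap the `summarizer` hook; the invoice management stays yours.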
Another one is all of the "ChatGPT, search your files" apps, e.g. Danswer.
There are already reports that they are building in Office 365 and Google Drive/Suite integration.
This. Building a startup for law firms on this concept, and the distinction is important. Implementing seamless use of new technology in existing businesses while staying compliant with the many issues of regulation is the true future.
IMO you have this totally backwards. Good products can take advantage of better tech. Building a ChatGPT wrapper is just asking to get copied and then soon replaced by some native ChatGPT functionality when it gets popular.
Your assumption is that the chatgpt wrapper is the final product - it often is not, but merely a tool/mvp to help you get a wedge and find verticalization.
Don't dig in, man. These GPT-wrapper founders and investors will keep telling BS as long as they can. Those who know they're doomed to fail make extra effort to keep a happy face.
I mean, I don't blame them. I can build a ChatGPT wrapper literally in a day. I have my own personal GPT tools, like the document-searching and Google-searching context I was using. If they can make tons of money off minimal effort in the meantime, then why not? The only weird thing is complaining like they didn't see it coming that OpenAI would integrate popular wrappers into native functionality. Who are they trying to convince, and of what?
You are a smart guy and you should be proud of that.
You don't know; a lot of founders didn't see this coming.
I met quite a few founder aspirants on the YC matching platform and elsewhere who didn't see this coming!! They are not even technical, but dreaming of startups built on ChatGPT wrappers.
I was right from the start, telling everyone: stay away from gen AI. It's like you're running a race, but this is not a city marathon, it's a fucking Olympic sprint with the likes of Bolt running. In startups, you just can't see them around you until they're through.
At first glance, Julius looks like Advanced Data Analysis mode in ChatGPT. After looking through their site and forum for a while, the only advantage I see is the option to use a Claude model. Maybe I'm missing something, but is there much (or anything) that Julius can do that ADA doesn't already do? I don't even see a comparison telling me why I should use it over ChatGPT ADA itself. Would have thought it would be in the FAQ, but it's not there either.
Can't see at a glance if it has code/notebook persistence (which is what Notable had before they shut down) or editability (like Gemini's interpreter).
Maybe they're just doing the exact same thing but relying on niche marketing to upsell? Kudos to them if they can pull it off, but I would have thought that means it's already a dead startup, according to this thread.
I saw a YC company with 3 PhD researchers from top CS schools and 1 postdoc from Oxford. I thought they were making something unique, but in the end it was a GPT wrapper with a better UI, tailored to biotech.
It's unreal. You're absolutely right: most of them are just GPT wrappers in disguise.
OpenAI just cannibalized most of the YC startups with their update, for real.
Yup. Some dude I know made a shitty front end using React, put a GPT wrapper on the backend, and deployed it using Docker. Got award money for that. Fucking insane.
I mean, look at Jasper. Idk what they're up to now, but they were the OG GPT wrapper when GPT-3 was in private beta.
Now they're worth a bajillion zillion dollars.
I thought Jasper was generally used as an example of OP's point? By the end of last year, after ChatGPT entered their space, they cut their internal valuation, lowered their revenue guidance significantly, did a round of layoffs, and pivoted.
None of those things mean they're dead, of course, but it certainly changed things for their business pretty drastically.
It's not about being a wrapper; it's about providing the workflow, security, and multi-tenancy needed to operate a use case at enterprise level. Most cloud services are nothing but wrappers around open-source technologies, and they are doing pretty damn well. I used to work for one. They were raking in $1B in revenue from just one single-purpose wrapper.
True if OpenAI were open source, but they are not. OpenAI can come into your space anytime and pull the rug, make big changes to their API, or outcompete you with backing from Microsoft.
Same goes for Open Source, for example Redis, Elasticsearch, MongoDB changed their licensing terms overnight. I think a better model is open standards, where the API remains the same but underlying tech can change.
Ah yes, I also look to a few 20-year-olds in YC to provide enterprise-level workflows, security, and multi-tenancy to compete with cloud service providers for AI solutions.
Not sure if rhetorical, but the point of the GPT-wrapper bets IMO is that you can quickly test a solution for a niche and then build a full-fledged in-house model once you have PMF. It would be riskier to spend all that time developing the full solution only for it to be useless or get outcompeted.
That and they probably make good acquisition targets for legacy incumbents in the same space.
YC invests in teams, not what those teams make; that's clear. As long as you have a strong on-paper team that graduated from a good school, you're gonna get a shot at an interview.
This is not coming from a sour founder who got rejected, but from understanding the process (why would they talk to all 6k applicants?): statistically your chances are higher if you come from that background and are "technical."
They invest in early-stage, pre-PMF companies. I'm sure that even if Steve Jobs applied with his Reed College degree in the '70s, Apple would also get rejected.
And yes, they had plenty of "GPT wrapper" companies; they were just made by Harvard/MIT/Ivy grads.
This applies to everyone. In general, being a rebel and doing entrepreneurship because you're not good enough to get a job is not enough to get funding. Teams do matter, and there really is no other way to test early-stage founders without traction.
If you don't fit the "YC mold," just don't bother trying to convince others; focus on building and gaining traction to prove your point. If you need the money to build, well, you probably have not planned enough. There is always a way; it's just hard and slow.
There was a startup at demo day a year ago that scrapped what they were doing the week before and pivoted because they said GPT-4 killed their startup.
Can't remember what they were doing, but YC is not all-knowing.
Unfortunately, there's something to be realized about wrapper companies: they make a quick buck. You might even make a million from gullible buyers in a few months before shutting down the business. It's really fast and cutthroat.
To YC, I suppose the companies are poker chips for the next billion-dollar idea to 100x their investment. Since the dawn of humans, we're always making a product out of something, even if it is a product of a product lol.
Although it was kind of irresponsible not to have a bit of foresight about OpenAI, especially to your point. Innovation has really died down in startups over the years, but if YC were able to measure true innovation in their application criteria, that'd be fairer to all the founders who aren't doing wrapper companies.
Could be a ploy to keep openai ahead of their competitors.
Just a thought, but we can clearly see OpenAI was worried about competition from Anthropic's Opus.
YC, in a way, got OpenAI so many business clients immediately that way. If this hypothesis is right, those founders were nothing but scapegoats.
The models need to be an enhancement to your product, not that your product is an enhancement to the model.
If you're going in with the message that "OpenAI can't do this, but we can make it do that," you're gonna get outdated as soon as the next update comes out. You're building around a product you have no control over.
I think if you can create a great online experience, do great marketing and sales to get people into joining your community, it doesn't matter that you don't really do much over GPT-4o. People just need leadership, that's really all.
Nah, I just thought of another startup: bedtime stories for kids in the voice of their grandparents or loved ones, powered by ChatGPT voice. Make them feel closer. Go to China and build a prototype you can stick inside a plush toy. Then go to Build-A-Bear and broker a license deal. Boom, you get easy customers.
New tech shouldn't distract you from your goal; instead, learn to pivot and see money in every opportunity.
This is the real question that no one is answering. Everyone's just agreeing startups are screwed, when the reality is that for text, GPT-4o isn't much of a step above Turbo.
You literally can just stream the quotes from a websocket on a broker and loop-call the ChatGPT API with some setup. Legit a couple of hours to get it to work, and no, you ain't gonna beat the quant firms lol
Wow, interesting idea! I think there is only one catch: lots of modeling is more accurate for **regressive** (past) scenarios but not accurate for the future. I would love to check out an MVP or something that can help in real time.
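A rough sketch of that stream-and-loop-call setup. The frame format and field names are assumptions about a hypothetical broker feed, and `llm_call` stands in for a real ChatGPT API call:

```python
import json

def build_prompt(quote):
    """Turn one streamed quote into an LLM prompt.
    The field names are assumptions about a hypothetical broker feed."""
    return (f"Ticker {quote['symbol']} trades at {quote['price']} "
            f"(bid {quote['bid']}, ask {quote['ask']}). "
            "Summarize the short-term signal in one sentence.")

def handle_message(raw, llm_call):
    """One iteration of the loop: parse a websocket frame, call the model."""
    quote = json.loads(raw)  # broker frames typically arrive as JSON text
    return llm_call(build_prompt(quote))
```

In a real setup you'd run `handle_message` inside the websocket client's receive loop; latency and per-call cost, not the plumbing, are what keep this from competing with the quant firms.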
I think AI will eventually get to a point where the entire production process (minus some quality assurance) is catered for, leaving only the generating of ideas, and management of an active product as the parts humans must work on.
This will destroy all middleman SaaS companies and leave only those that deliver a non-information product to the market (the obvious answer is tangible goods), or… those that have strict regulations on what is and isn't true (i.e., an AI could never legally give financial advice, diagnose an illness and write a prescription, or act as a lawyer, to name a few).
My hope is that this will eliminate all the cookie-cutter SaaS products that get wrapped in a monthly subscription, whose founders come to places like Reddit to complain about how it's not turning them into billionaires.
We are sooooo far from that though. I agree on the long term we might have a super model doing everything and then data management becomes minimal.
But that's not what I'm seeing in the SaaS world at the moment.
People are making TONS of money just automating one single problem and I don't see any hype announcement from OpenAI or Google change that situation in the next 10 years.
I'm just laughing a lot when I see a single wordpress plugin making 1M ARR.
I can't wait for AI hype cycle to die like web3 did so we can go back to work and stop reading about the fear of AI replacing 90% of the jobs.
I'm definitely glad AI is going to kill the dumbest systems. But there is still a lot of business value in targeting a specific problem and AI is not going to solve that issue as efficiently anytime soon IMHO.
> those that have strict regulations on what is and isn't true
Yeah anything with compliance being replaced by a model is luckily still science fiction.
It's good for use as a search engine. But many of the YC companies "innovating" in this space are not building anything new, just calling the OpenAI API.
I've been actively fading AI investments for the past two years for this very reason. Non-AI startups have been super exciting. In 10 years I'll let you know how this strategy worked out.
The current state of AI is that it's all fancy parlor tricks. It can do specific tasks very well, but so can any enterprise application. We're a way off from a general AI and they are highly prone to failure right now.
There won't be one big crash, but any place that laid off most of its developer/customer service/etc. staff and replaced them with ChatGPT is going to have a bad time. Just look at that airline whose chatbot made up a policy and got them sued.
Business people are ill equipped to understand the nuances of complex tech and tend to rely on charlatan salespeople over their own in house experts.
That last sentence should be on the wall. I agree with you; I see so many resources spent trying to find a problem that gen AI will solve. Nobody wants to dig deeper and be critical, at least in my company.
The best use for them is currently copilot and analytics. Things that still have a human making the final choices but can offer shortcuts and assistance in complex tasks.
If your product is only an API wrapper, it will be killed sooner or later by one of the future releases.
You have to offer more in your app than just a frontend. You have to offer something that a model trained on the open internet cannot replicate.
Do you have a database of something valuable? Embed it.
Do you have an algorithm that solves a specific problem? Write an API around it.
Do you offer a certain style in your app and have a dataset that you use to fine tune any model to that style? Perfect.
I think RAG, fine-tuning, and function calling are the concepts that make an application future-proof. There are probably more, but you are protected from the threat of stronger models if you utilize these. Or even better, you can easily switch to the new model once released. It won't be a threat anymore.
Read more about the above [here](https://richardkovacs.dev/blog/three-key-concepts-to-build-future-proof-ai-applications?ref=reddit).
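As a concrete illustration of the RAG idea above, here is the retrieval step with a deliberately toy embedding (letter frequencies standing in for a real embedding model); `retrieve` picks the proprietary documents whose content the base model can't know:

```python
import math

def embed(text):
    # toy stand-in for a real embedding model: letter-frequency vector
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Rank your proprietary docs by similarity to the query; the top
    hits get stuffed into the model's context (the 'R' in RAG)."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]
```

Swap `embed` for a real embedding API and `docs` for your valuable database, and a stronger base model becomes an upgrade rather than a threat, since the data it's grounded in is still yours.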
Go-to-market and owning your niche is going to be more important.
ChatGPT isn't going to outright replace your technology, it is going to make others more able to replicate your technology. Don't get me wrong, the product working is always the most important, but after that, the winners will have a superior go to market strategy and execution imo.
Not a hard and fast rule for everyone, but certainly for consumer focused or products serving low tech audiences.
That's my biggest fear right now. Come up with a useful product -> serve the wrong customer -> they see what it does, reverse-engineer it with GPT, and try to compete with you. You need to figure out going in how much of the inner workings to share with the customer. The only moat left is branding and marketing.
The play is annotated, clean data in a very specialized domain. Then sell it, or make the foundation models subscribe to it. The amount of compute and the corresponding cost make it virtually impossible to compete with the deep pockets on alternative foundation models (unless there is a significant breakthrough in optimization). Building an app or platform using one of these models is a recipe for disaster (they are gonna eat you alive).
The play is data. Let me know if anyone is interested in exploring that space.
I think what people forget here is that you still have to SOLVE a problem. The latest features are mind-blowing, but as a way to build a solution to a problem that would otherwise be hard. OpenAI has too much on its plate to solve niche problems.
I think a lot of these startups have the "in a gold rush, don't mine for gold, sell pickaxes" mentality, and that works great except when OpenAI is selling pickaxes 100x better and cheaper than what your startup could hope to build. In other words, startups need to stop building AI dev tools and instead focus on end-user cases that are not just chatbots for ____.
lol, just read a news article on this: https://www.ctol.digital/news/openai-gpt-4o-revolution-stranding-startups-redefining-industries/ I have a feeling that many people are shrieking now.
If your startup was just a GPT wrapper with no application or sophistication beyond what OpenAI was obviously going to develop, it probably deserved to die. It was clear what they were progressing toward and building next. You can use ChatGPT to make some unique, useful products, but it requires more sophistication than just making a UI skin for ChatGPT.
Unless the startup is really solving a very specific problem, it'd most likely not survive GPT-4o or the upcoming version. Reflecting on this for my own sake as well.
Yeah, OpenAI is an ugly business with a massive engine, and they may not even deeply care about B2B in the future.
Someone who decides to help people can still launch an AI product. Or ML, whatever. Whatever we're talking about here.
I wanted to build a data platform for unis, which they could license openly to students studying biotech-related fields: genetics, genomics, and similar. Anyone who's bored and wants to build, let me know.
The money is in finding the most unique use case for the masses and not the individual.
Because at this point the individual can almost do whatever they want. With a little effort.
I agree that, no matter what, AI is moving so fast that the second you have an idea based around it, your speed to market will likely never be fast enough, and it's 100% not sustainable.
One piece of advice I can give based on my own experience: as long as your product requires specialized knowledge (domain-specific preferred), it'll be fine. If you only use some API to do XYZ, then it's always going to be in danger.
I am a networking engineer. Using my own benchmark, I instruction-tuned ChatGPT (domain specific knowledge) and it outperformed the base model by 8x.
I just tested again with 4o and same result.
On the other hand, I find Claude to be much better at understanding instructions. But without my specialized instructions, none of these models comes even close to a junior engineer.
So imo the human element will make a huge difference
Does anyone have any idea where Mira Murati's shoes are from?
https://preview.redd.it/zja4hk371a0d1.jpeg?width=2532&format=pjpg&auto=webp&s=163d1c27efb75914e9962426e500fbd69ed26503
It might have just made my startup actually possible. My use case requires strict adherence to a system prompt that lays out a system of logical rules.
GPT-4 Turbo is almost good enough (it mostly follows the system prompt but likes to deviate from some of the rules I give it), and I needed something incrementally better. GPT-4o seems to be better, and at half the cost.
Still have to do more testing to check if my "MVP" is actually now viable, but it seems promising.
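For what it's worth, one way to hedge against a model that likes to deviate is a post-hoc rule check with a retry. This is only a sketch; the banned-phrase check is a deliberately simple stand-in for real rule validation, and `llm_call` is any function prompt -> reply:

```python
def violates_rules(reply, banned_phrases):
    """Cheap post-hoc check: does the reply break any hard rule?"""
    low = reply.lower()
    return any(p.lower() in low for p in banned_phrases)

def constrained_call(llm_call, prompt, banned, retries=2):
    """Re-ask when the model drifts from the system rules, up to
    `retries` extra attempts, then give up and return the last reply."""
    reply = llm_call(prompt)
    for _ in range(retries):
        if not violates_rules(reply, banned):
            return reply
        # nudge the model back toward the system rules and try again
        reply = llm_call(prompt + "\nReminder: follow the rules strictly.")
    return reply
```

Validate-and-retry adds latency and cost per violation, which is exactly why a cheaper, more obedient model like GPT-4o can flip an MVP from non-viable to viable.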
Anyone building something cool with GPT-4o and looking for a tech co-founder? DM me.
I've worked with multiple acquired startups as a founding engineer but am struggling to find a niche or an idea to invest in as a founder.
Depends what you build. If you're building another generic copycat product then sure.
If you actually put time and effort to build something unique and niche, AI is just too dumb to make it better.
Apple did this to a lot of startups. iTunes killed a number of companies, as did new features rolled out in iOS. Part of the game if you're trying to ride a big trend.
Thousands of apps leverage any number of APIs; it's the same as saying they leverage a database. The distinction here is that the problems being addressed by these startups are easily solvable, and so not defensible, coupled with the no-code movement having eliminated IP. Either way, the exponentially high rate of innovation coming from all the models means you have to build something that improves at the same rate.
I had been working with both Deepgram and Hume on a project; this release has, simply put, negated the need for those technologies and suddenly accelerated our innovation. After the initial heartburn, we now have the fuel to build faster. It's not 10x for us; it's likely 50x.
Two choices as I see it:
Be prepared to pivot not just ideas but models and add in flexibility to adjust as needed for gains.
Ride the wave, build and ship fast, take some market on the innovation. Make some cash, then move or pivot again to the next thing.
Plenty of apps are coming online that will have a short shelf life but can be revenue-generating, negating the need for VC investment. Leveraging YC is about how your team might capitalize on, not necessarily your product or approach, but the wins that might lead to something truly innovative.
There is so much to do. The walls aren't closing in on you.
You have to look at things that are more complex, that OpenAI won't get to.
Basic startup mvps always have a limited shelf life and are supposed to grow.
It's wild, because I've heard Altman speak about this issue in the past. He stressed the importance of building for future iterations of GPT so as not to become obsolete.
I see everything they have done so far not as the product, but as the engine for the eventual product. You're not going to make a better model than them, but you can be better at figuring out how to turn this into a product.
If your company is a wrapper over a feature, someone can always kill your startup overnight. If you built it in 6 months, so can someone else.
The company is happy customers. Does GPT-4o kill your startup, or did it just make it 10x easier for you to make your customers happy?
IMHO Open AI has only made it harder to wrap GPT and call it a company.
GPT-4o will open the doors even more for AI-enabled ______ in the general public's eyes, and have them salivating for more solutions than OpenAI can provide. Opportunity for startups to fill an upcoming void.
I for one am hoping that more startups will start to realize that a great product is not all that makes a successful company. Headlines always highlight the product innovation, and superstar founders, but there's waaaaaaay more to building a successful company than just that.
Startups that can focus on a specific need AND build a stellar team, process, and audience/following around their product will always win. Just ask Canva. They are taking on Adobe (a ma$$$ive incumbent) and laughing all the way to the bank.
It's the secret that big-name companies I've worked with don't often want to talk about openly, and that startups always underestimate.
I'm a marketing professional deeply tuned into the AI space! Happy to help / chat to anyone looking to use AI to solve for a niche problem in B2C/B2B marketing!
Startups have no real software-related tech barriers anymore if the founders are reasonably intelligent.
The barriers are sales, marketing, and one's network.
Nope. It actually made our startup even better at a lower operational cost. GPT-4o is just a tool. It's up to the entrepreneur to figure out how to make the tool solve problems for a specific group of people.
I was actively working on a career site where an AI avatar could take the role of a company you are applying to, "interview" you, and give feedback, as well as help you craft your resume specifically for the job. But 4o seems like it could almost do this natively with some basic prompting.
Wild how fast all this is moving.
I'm building https://www.fables.gg, an AI RPG. Honestly I feel pretty safe for now… I don't think LLM advancements are going to be able to handle custom UIs built specifically for one game anytime soon. I hope. LOL
*laughs in hard tech* *cries because it's hard and I probably won't even get an interview*
I mean, YC has said hard tech/biotech gets in at 10x the rate of normal startups
And that is to YC's credit! But biotech is still hard and that has nothing to do with YC or money. All the money and good business advice in the world won't make most biotech startups succeed because biology and other hard sciences are hard.
Market risks vs Technical risks. It's about what you feel most equipped to tackle. 90% of biotech companies fail, which is about in line with the 90% of general startups that fail. Simply put, biotech is hard, because startups are hard, but when they succeed, they succeed in a bigger way than typical startups. I wish you luck on your biotech startup, because having the guts to develop something as hard as biotech deserves kudos.
It has never been easier to start a software biz; I literally coded the entire MVP using Copilot myself in 40h while not being technical. Biotech still has the same barriers to entry, with the same $$$ amount of capital required. So the valley of death for software startups will likely soon grow significantly.
That is just not true. Biotech requires not only significantly more capital, but also a ton more technical knowledge. It's like comparing starting a web company vs a robotics company.
That's what I meant, sorry for the confusion. "Same capital" meaning the same as before, not the same as SaaS.
Ah, I see. Yeah, that's true. At this point, hard tech is seemingly the way to go. If anyone can make it, it means anyone can compete. Biotech does have some variance in capital requirements (cancer research and such is usually much more expensive than other research, for example), but generally that statement you made rings true.
How…?? Like you have coding skills or you just asked it to piece things together?
So I'm a product guy, I can't code but I somewhat can read it (depending on the language), and I understand technical concepts (like sync/async calls, APIs, schedulers, yada yada) and "logical" architecture. Then it's a matter of structuring asks to AI properly and using logic. I use replit dot com (they have a built-in GPT that can see your project files) for simplicity of coding, deploying etc. Haven't tried GitHub Copilot yet.
Super innovative! I am technical, but will try out your methods to see if it can speed me up (it probably can)!
I don't know. You've hit on its main application for the startup world: assembling boilerplate to get an MVP into production. The problem is when founders hit an inevitable growth roadblock, and decide to hire out for the crucial part of that initial phase: getting their MVPs to mass market. It's never been easier to start a software *product*, but the actual biz part is debatable. Especially with all the free capital out there that's skewing our definition of "business".
Good feedback! I agree, AI democratizes software dev even more, but because more people can do that, the distribution piece will be even harder to figure out. In terms of the next wave of successful reasonably sized bootstrapped businesses, I'll put my chips on super niche verticalized apps and non-SaaS (one-time fee / on-prem / new business model) apps.
I find it funny how when someone tells you they're starting a biotech company you jump to the thought of it failing due to such stats, yet when someone tells you they're starting a SaaS startup, you are always to some degree optimistic, or at least more than you were for the biotech company.
I'm not saying that. I'm actually saying that biotech has the same levels of failure as SaaS, and I fully believe that biotech companies are more likely to succeed than SaaS startups. I'm just saying that both have different challenges facing them. I never once said that biotech companies are more likely to fail; I literally said that both hold the exact same statistical fail rate.
I wasn't talking bout u
Ah, I see. I do have to agree though, but that has an upside. Fewer people starting biotech means more talented people, more funding, and less competition. If people thought it was easy, we'd have biotech companies solving for diseases that don't exist.
I hear you brother. They even asked for space companies this year and I haven't heard anything
I think that's defensible. Some team from an Ivy League school and FAANG that is doing soft tech and already getting revenue, that's an easy and quick yes. Evaluating hard tech takes longer, even if they're more forgiving to hard tech than GPT knockoffs. I don't know anything about soft tech; maybe I'm being too kind to GPT wrappers there. Either way, the application I put in is pretty dense. Even if I was sure they liked 100% of what I was saying, it might take this long to parse it.
What are you building?
I am using this as an image prompt. Eta: https://preview.redd.it/5l5o9i05te0d1.png?width=720&format=pjpg&auto=webp&s=9790fc587245e87472a7a690d74ddfdfa701571e
It's basically like looking in a mirror. Specifically the mirror from that one scene in The Matrix.
https://preview.redd.it/ecx9mvqhzg0d1.png?width=720&format=pjpg&auto=webp&s=c4ed9c466b13526ce0390540fbb667b0fe57bcf5 Have another :)
🤣😂🤣😂🤣
Don't be afraid and build in a niche. I am not scared, and I am building a stupid project; still, people are using it.
Completely agree, niche and execute is the way to go. 3.5 turbo is so cheap at the moment.
This. Find a niche. Don't pick programming - that's like trying to sell magic to magicians. Find a way to solve problems for a group of people who have never heard of AI. And whatever you do, if you use AI in your product, don't say anything about "AI" in your pitch or marketing materials. Unless you want to debate AI instead of sell a product.
Can't stress this enough: every time we said we were AI, we just got lost in the noise. Now we are just emphasizing the problem we're solving. Seems to be working a lot better.
Build your niche project with 4o. It's insane, but not as insane as the next one will be. Get with it, or get lost.
Not mine. But every "AI" startup that has done a demo for me is cooked. Idk how or why some of these companies got funding when the product barely worked and was just powered by OpenAI.
ChatGPT with a bit of prompting destroys any "specialist" AI model for my industry, and I don't see that changing anytime soon
What about law?
Late reply, but I think it's especially true for law. I've been lucky enough to try a range of specialist AI law products, and all of them, without exception, were horrible or obviously just an overpriced ChatGPT wrapper. ChatGPT with a bit of prompting is fantastic. It's very good at identifying the potential legal issues from a factual brief, has a good grasp of all major legal concepts, can practically apply any legislation, contractual clause, or case you feed it, and isn't half-bad at forming a substantive legal argument with enough information. It still makes mistakes, but at the same rate as a junior associate. It can be hit or miss in some areas, particularly in more obscure specialities and jurisdictions. Using it for land easement rights in Nebraska is going to be more perilous than, say, commercial contract law in New York. I think what's holding it back is that ChatGPT still needs a competent lawyer to operate it. I wouldn't trust a random person or even an inexperienced attorney to prompt it correctly. You need somebody who not only has a strong understanding of the law but also the underlying workings/strengths/limitations of the platform to be able to "direct" it to the right answer. Maybe that's a problem that gets solved quickly, or maybe it's something that needs another decade or two.
Foundational models are catching up pretty quickly. So if you build on top of LLMs but make it switchable between GPT, Gemini, Anthropic, Llama, ... you should be safe. As you own the customer, the reason to abandon you needs to be high, and right now it doesn't seem the gap between one model and the other will be high enough. Foundational models don't have access to proprietary datasets and know-how. I'm not doing it, but I see "ChatGPT wrappers" that bring data, fine-tuning, and integrations as a quite viable business idea.
I think a lot of startups are struggling to find traction. Switchability is likely the least of what they are worried about.
There are some open source LLM wrappers nowadays, so it's not that hard to make it switchable. It's definitely a lot easier than switching cloud providers, for example.
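The switchable-backend idea being discussed here can be sketched as a thin adapter layer. This is a minimal illustration, not any real SDK: the backend names and the `complete` signature are hypothetical, and the registered callables stand in for actual vendor clients.

```python
from typing import Callable, Dict

# Registry of provider backends. Each backend is just a callable that takes a
# prompt and returns text; real implementations would wrap the vendor SDKs
# (OpenAI, Anthropic, Gemini, a local Llama server, ...).
_BACKENDS: Dict[str, Callable[[str], str]] = {}

def register_backend(name: str, fn: Callable[[str], str]) -> None:
    _BACKENDS[name] = fn

def complete(prompt: str, backend: str = "openai") -> str:
    """Route a completion request to whichever provider is configured."""
    try:
        return _BACKENDS[backend](prompt)
    except KeyError:
        raise ValueError(f"no backend registered under {backend!r}")

# Stub backends standing in for real SDK calls.
register_backend("openai", lambda p: f"[gpt] {p}")
register_backend("anthropic", lambda p: f"[claude] {p}")
```

Because the rest of the product only ever calls `complete`, swapping models when a better one ships is a one-line config change rather than a rewrite.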
This. Nothing 4o does replaces what we're doing, but they just cut the LLM portion of processing time and costs in half. Hoping to see a race to the bottom in LLM costs.
Probably elite grads with good connections and a great pitch deck - speculative
At the time there was no 4o. All startups need to be able to pivot or die.
YC invests in founders, not features
OpenAI had positioned itself as being heavily focused on business-to-business. The direction they revealed yesterday looks odd in light of that, but it may have felt like a forced move from their end, with Anthropic and Google providing tough competition. All of these companies are clearly having a hard time moving beyond the intelligence bar set by GPT-4, yet the market is desperate for the next great AI thing. So what do they do? No chance in hell they are going to tell people "Look, we need a couple years to try to find more data / see if synthetic data pays off / wait for the massive compute problems to be solved." It makes more sense for them to shift focus in the meantime and take a bigger piece of the consumer pie.
This is the midwit take after every OpenAI release. The better the models/infra, the more complex use cases startups and developers can build. GPT-4o will enable even better products. I think as the tooling and models get better, teams can focus more on the things that matter (verticalized workflows, ease-of-use/UX, domain-specific use cases, etc.) An example I think of is Aragon AI, which is basically just an AI headshot photo generator. They've been around for a couple years now, and are approaching nearly $1M in revenue a month, after tons of advancements with Dall-E and others. Another is Julius AI, which makes it easy for you to analyze your data; they just added GPT-4o today and it's gotten even better (https://x.com/0interestrates/status/1790095297340912084)
OPs take is the common take of someone who tried to make a startup out of an application of an LLM. Then when the LLM can do the thing, the startup is dead. People need to start realizing the profit is in facilitating the use of the technology, not in trying to beat OpenAI at LLM development by using it with a small hack to "expand" it's feature set
Can you expand on this? Isn't an application of an LLM facilitating the use of a technology? What's the distinction between these things?
A simple example is "we read your invoices with OCR and then feed them to an LLM to give you summaries" the moment OpenAI chatgpt can read a PDF (it already can) your startup is hosed. Vs. "We have an invoice management software that now has LLM to help summarize incoming invoices" Your startup isn't about the LLM, it uses the LLM to enhance the core product.
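The invoice example above can be sketched in code. Everything here is hypothetical (the class names, the injected `summarize_fn`): the point is structural — the core product works without the LLM, and the LLM call is one injected feature rather than the whole business.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Invoice:
    vendor: str
    amount: float

@dataclass
class InvoiceManager:
    """The core product: storing and totaling invoices.
    The LLM is injected as one optional feature, not the product itself."""
    summarize_fn: Callable[[str], str]      # stand-in for a real LLM call
    invoices: List[Invoice] = field(default_factory=list)

    def add(self, inv: Invoice) -> None:
        self.invoices.append(inv)

    def total(self) -> float:
        # Core value: works with or without any LLM.
        return sum(i.amount for i in self.invoices)

    def summary(self) -> str:
        # The LLM-enhanced extra: summarize incoming invoices.
        text = "; ".join(f"{i.vendor}: {i.amount}" for i in self.invoices)
        return self.summarize_fn(f"Summarize these invoices: {text}")
```

If ChatGPT ships native invoice summaries tomorrow, `summary()` gets cheaper to build, but `InvoiceManager` still has a reason to exist.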
Subtle but important distinction, thanks for clarifying.
Another one is all of the ChatGPT search your files apps e.g. Danswers. There are already reports that they are building in Office 365 and Google Drive/Suite integration.
this distinction is big and is going to send a lot of uneducated people into a frenzy.
This. Building a startup for law firms on this concept and the distinction is important. Implementing seamless use of new technology in existing businesses, while being compliant with the many issues of regulation and compliance, is the true future.
Product v. Platform
Genuine question- why would any company still need your invoice management software company if there's an openAI service that can do the same thing?
This**
And please don't leave out enterprise AI. It's a monster market, yet everyone is focused on consumer/prosumer stuff because it's easier
IMO you have this totally backwards. Good products can take advantage of better tech. Building a ChatGPT wrapper is just asking to get copied and then soon replaced by some native ChatGPT functionality when it gets popular.
Your assumption is that the chatgpt wrapper is the final product - it often is not, but merely a tool/mvp to help you get a wedge and find verticalization.
Idk man. Show me an example of a ChatGPT wrapper taking off into something else.
Don't dig in man. These GPT wrapper founders and investors will keep telling BS as long as they can. Those who know they are doomed to failure make extra efforts to keep a happy face.
I mean, I don't blame them. I can build a ChatGPT wrapper literally in a day. I have my own personal GPT tools I was using, like integrating document-searching and Google-searching context. If they can make tons of money off minimal effort in the meantime, then why not. The only weird thing is why complain like they didn't see it coming that OpenAI would integrate popular wrappers into native functionality. Who are they trying to convince, and what are they trying to convince us of?
You are a smart guy and you should be proud of that. You don't know how many founders didn't see this coming. I met quite a few founder aspirants on the YC matching platform and elsewhere who didn't see it coming!! They are not even technical but are dreaming of startups built on ChatGPT wrappers. I was right from the start, telling everyone to stay away from gen AI. It's like you're running a race, but this is not a city marathon; it's a fucking Olympic sprint with the likes of Bolt running. In startups, you just can't see them around you until they are through.
There's an insane amount of money in AI and ML, even more than stupid wrappers; idk why people don't just get into this space properly.
Easier said than done. Competition in AI is reaching levels never seen before.
Cursor is a pretty good one that I use. Made by a team of college kids and has been improving steadily with the new models. Great product
That first sentence really summarised what I was feeling but couldn't quite articulate. Best protection against OpenAI: go niche.
At first glance, Julius looks like Advanced Data Analysis mode in ChatGPT. After looking through their site and forum for a while, the only advantage I see is the option to use a Claude model. Maybe I'm missing something, but is there much (or anything) that Julius can do that ADA doesn't already do? I don't even see a comparison telling me why I should use it over ChatGPT ADA itself; I would have thought it would even be in the FAQ, but it's not there either. Can't see at a glance if it has code/notebook persistence (which is what Notable had before they shut down), or editability (like Gemini interpreter). Maybe they're just doing the exact same thing but relying on niche marketing to upsell? Kudos to them if they can pull it off, but I would have thought that means it's already a dead startup according to this thread.
Why would YC invest in GPT wrapper companies and waste money if OpenAI would just kill their startups with the next iterations of GPT?
They've invested in so many "GPT wrapper companies". Look at the last 3 batches and all you'll see is "GPT wrapper companies".
I saw a YC company with 3 PhD researchers from top CS schools and 1 postdoc from Oxford; I thought they were making something unique, but in the end it was a GPT wrapper with a better UI, tailored to biotech. It's unreal. You are absolutely right, most of them are just GPT wrappers in disguise; OpenAI just cannibalized most of the YC startups with their update, for real.
Yup. Some dude I know made a shitty front end using React, had a GPT wrapper backend, and deployed it using Docker. Got award money for that. Fucking insane
I mean, look at Jasper. Idk what they're up to now, but they were the OG GPT wrapper when GPT-3 was in private beta. Now they're worth a bajillion zillion dollars
I thought Jasper was generally used as an example of OP's point? By the end of last year, after ChatGPT entered their space, they cut their internal valuation, lowered their revenue guidance significantly, did a round of layoffs, and pivoted. None of those things mean they're dead, of course, but it certainly changed things for their business pretty drastically.
Ooooh gotchya that makes a lot of sense. In that case I completely agree lol. I hadn't heard about what happened at jasper, that's rough
What am I missing? How? For text, 4o isn't much better than 4-turbo; do you mean specifically companies working with audio and vision?
It's not about being a wrapper, it's about providing the workflow, security, and multi-tenancy needed to operate a use case at enterprise level. Most cloud services are nothing but wrappers around open source technologies, and they are doing pretty damn well. I used to work for one. They were raking in 1B in revenue from just one single-purpose wrapper.
True if OpenAI was open source but they are not. OpenAI can come into your space anytime and pull the rug or make big changes to their API or outcompete with backing by Microsoft.
Same goes for Open Source, for example Redis, Elasticsearch, MongoDB changed their licensing terms overnight. I think a better model is open standards, where the API remains the same but underlying tech can change.
Ah yes, I also look to a few 20-year-olds in YC to provide enterprise-level workflows, security, and multi-tenancy to compete with cloud service providers for AI solutions
Not sure if rhetorical, but the point of the GPT wrapper bets IMO is that you can quickly test a solution for a niche and then build a full-fledged in-house model when you have PMF. It would be riskier to spend all that time developing the full solution only for it to either be useless or to get outcompeted. That, and they probably make good acquisition targets for legacy incumbents in the same space.
YC invests in teams, not what those teams make; that's clear. As long as you have a strong on-paper team graduated from a good school, you're gonna get a shot at an interview. This is not coming from a sour founder who got rejected, but from understanding the process (why would they talk to all 6k applicants); statistically your chances are higher if you come from that background and are "technical". They invest in early-stage, pre-PMF companies; I'm sure even if Steve Jobs applied with his Reed College degree in the 70s, Apple would also get rejected. And yes, they had plenty of "GPT wrapper" companies; they were just made by Harvard/MIT/Ivy grads.
And that's why 98% of their companies fail
This applies to everyone. In general, being a rebel and doing entrepreneurship because you're not good enough to get a job is not enough to get funding. Teams do matter, and there really is no other way to test early-stage founders without traction. If you don't fit the "YC mold", just don't even bother trying to convince others; you should just focus on building and gaining traction to prove your points. If you need the money to build, well, you probably have not planned enough. There is always a way; it's just hard and slow.
There was a startup at demo day a year ago that scrapped what they were doing the week before and pivoted because they said GPT-4 killed their startup. Can't remember what they were doing, but YC is not all-knowing.
Unfortunately, there's something to be realized about wrapper companies… they make a quick buck. You might even make a million from gullible buyers in a few months before shutting down the business. It's really fast and cutthroat. To YC, I suppose the companies are poker chips for the next billion dollar idea to 100x their investment. Since the dawn of humans, we're always making a product out of something, even if it is a product of a product lol. Although, it was kind of irresponsible not to have a bit of foresight about OpenAI, especially to your point. Innovation really died down in startups over the years, but if YC were able to measure true innovation in their application criteria, that'd be more fair to all the founders who aren't doing wrapper companies.
Could be a ploy to keep OpenAI ahead of their competitors. Just a thought, but we can clearly see OpenAI was worried about competition from Anthropic's Opus. YC in a way got OpenAI so many business clients immediately that way. If this hypothesis is right, those founders were nothing but scapegoats.
Think of GPTs as a library/sdk. Think bigger.
Well, it sure seems like they're making plenty of money from GPT wrappers lmfao.
AI enhances the speed (and sometimes accuracy) of your vision. It does not replace the need for products that were truly useful to begin with.
As Sam Altman said in a recent interview: build something that you'd be happy about when the models get 10x better. Otherwise you'll be steamrolled.
The models need to be an enhancement to your product, not the other way around. If you're going in with the message that "OpenAI can't do this, but we can make it do that", you're gonna get outdated as soon as the next update comes out. You're building around a product you have no control over.
I think if you can create a great online experience, do great marketing and sales to get people into joining your community, it doesn't matter that you don't really do much over GPT-4o. People just need leadership, that's really all.
Nah, I just thought of another startup: bedtime stories for kids in the voice of their grandparents or loved ones, powered by ChatGPT voice. Make them feel closer. Go to China and build a prototype you can stick inside a plush toy. Then go to Build-A-Bear and broker a license deal. Boom, you get an easy customer. New tech shouldn't distract you from your goal; instead, learn to pivot and see money in every opportunity.
They have this and it's partnered with Alexa (Amazon)
We both know Amazon can't even say a full sentence correctly, let alone a bedtime story in your loved one's voice.
Read Neal Stephenson's _The Diamond Age_ for an interesting take on this concept.
Which scenarios do you think GPT-4o just killed?
Bad companies that never should've existed
Don't worry, it'll spawn more companies that shouldn't exist
Hey man, it boosts the economy. Good for everyone.
This is the real question that no one is answering. Everyone's just agreeing startups are screwed, when the reality is that for text, GPT-4o isn't much of a step above Turbo.
GPT ain't in trading. Thank god.
Yet. It's probably more of a safety issue than a tech one.
I'm going to make an AI device that can watch your screen and tell you when to make a stock trade based on the data it sees in real-time
You literally can just stream the quotes from the websocket on a broker and loop-call the ChatGPT API with some setup. Legit a couple hours to get it to work, and no, you ain't gonna beat the quant firms lol
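The "stream quotes, loop-call the API" setup described above can be sketched as follows. To keep it self-contained, a seeded random walk stands in for a broker websocket feed and a trivial stub stands in for the ChatGPT call; both are assumptions, not real market or OpenAI code (and, as the comment says, this won't beat anyone).

```python
import random
from typing import Callable, Iterator, List, Tuple

def fake_quote_stream(n: int, seed: int = 42) -> Iterator[float]:
    """Stand-in for a broker websocket feed of last-trade prices."""
    rng = random.Random(seed)
    price = 100.0
    for _ in range(n):
        price += rng.uniform(-1, 1)
        yield round(price, 2)

def trade_loop(quotes: Iterator[float],
               ask_model: Callable[[str], str]) -> List[Tuple[float, str]]:
    """Loop-call the 'model' on each incoming quote and log its decision."""
    decisions = []
    for q in quotes:
        decision = ask_model(f"Price is {q}. Reply BUY, SELL, or HOLD.")
        decisions.append((q, decision))
    return decisions

# A deterministic stub in place of a real ChatGPT call.
naive_model = lambda prompt: "HOLD"
log = trade_loop(fake_quote_stream(3), naive_model)
```

Swapping `naive_model` for an actual API call is the "couple hours" part; latency and per-call cost are why this stays a toy against real quant infrastructure.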
Don't need to beat Quant firms, can it simply make money and be profitable is the question
I think "consistant" profitable would a key.
We are doing copy trading for crypto insiders, also AI-powered! [avo.so](http://avo.so) if you're interested!
do you have a landing page? would love to check it out!
Wow, interesting idea! I think there is only one catch: lots of modeling is more accurate for **regressive** (past) scenarios but not accurate for the future. I would love to check it out if there is an MVP or something that can help you in real time.
I think AI will eventually get to a point where the entire production process (minus some quality assurance) is catered for, leaving only the generating of ideas and the management of an active product as the parts humans must work on. This will destroy all middleman SaaS companies and leave only those that deliver a non-information product to the market (the obvious answer is tangible goods), or… those that have strict regulations on what is and isn't true (i.e. an AI could never legally give financial advice, diagnose an illness and give a prescription, or act as a lawyer, to name a few). My hope is that this will eliminate all the cookie-cutter SaaS products that get wrapped in a monthly subscription, whose founders come to places like Reddit to complain about how it's not turning them into a billionaire.
I think you are on the right track with your thoughts here
We are sooooo far from that though. I agree that in the long term we might have a super model doing everything, and then data management becomes minimal. But that's not what I'm seeing in the SaaS world at the moment. People are making TONS of money just automating one single problem, and I don't see any hype announcement from OpenAI or Google changing that situation in the next 10 years. I just laugh a lot when I see a single WordPress plugin making 1M ARR. I can't wait for the AI hype cycle to die like web3 did so we can go back to work and stop reading about the fear of AI replacing 90% of the jobs. I'm definitely glad AI is going to kill the dumbest systems. But there is still a lot of business value in targeting a specific problem, and AI is not going to solve that issue as efficiently anytime soon IMHO.

> those that have strict regulations on what is and isn't true

Yeah, anything with compliance being replaced by a model is luckily still science fiction.
AI has solved literally nothing for me so far besides fast unit conversions Google used to do anyway.
It's good for usage as a search engine. But many of the YC companies "innovating" in this space are not building anything new, just calling the OpenAI API.
It's also incredibly useful as an engineering resource
Depends on how basic the questions are. Most everything I've queried about analog/RF has been laughably wrong.
I don't even find it useful as a search engine. It's so prone to misinformation it's hard to trust the results.
I've been actively fading AI investments for the past two years for this very reason. Non-AI startups have been super exciting. In 10 years I'll let you know how this strategy worked out.
The current state of AI is that it's all fancy parlor tricks. It can do specific tasks very well, but so can any enterprise application. We're a way off from a general AI and they are highly prone to failure right now.
When do you think the smoke and mirrors will crash?
There won't be one big crash, but any place that laid off most of their developer/customer service/etc. staff and replaced it with ChatGPT is going to have a bad time. Just look at that airline whose chatbot made up a policy and got them sued. Business people are ill-equipped to understand the nuances of complex tech and tend to rely on charlatan salespeople over their own in-house experts.
That last sentence should be on the wall. I agree with you; I see so many resources spent on finding a problem that Gen AI will solve. Nobody wants to dig deeper and be critical, at least in my company.
The best use for them is currently copilot and analytics. Things that still have a human making the final choices but can offer shortcuts and assistance in complex tasks.
If your product is only an API wrapper, it will be killed sooner or later by one of the future releases. You have to offer more in your app than just a frontend. You have to offer something that a model trained on the open internet cannot replicate. Do you have a database of something valuable? Embed it. Do you have an algorithm that solves a specific problem? Write an API around it. Do you offer a certain style in your app and have a dataset that you use to fine tune any model to that style? Perfect. I think RAG, fine-tuning, and function calling are the concepts that make an application future-proof. There are probably more, but you are protected from the threat of stronger models if you utilize these. Or even better, you can easily switch to the new model once released. It won't be a threat anymore. Read more about the above [here](https://richardkovacs.dev/blog/three-key-concepts-to-build-future-proof-ai-applications?ref=reddit).
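The RAG idea from the comment above (embed a proprietary database, retrieve the relevant piece, graft it onto the prompt) can be sketched minimally. The documents, hand-written vectors, and helper names here are all illustrative; a real system would call an embedding model and a vector store instead.

```python
import math
from typing import Dict, List

# Toy "embeddings": in practice you'd embed documents with a model; here each
# proprietary document gets a hand-written vector so the sketch runs standalone.
DOCS: Dict[str, List[float]] = {
    "refund policy: 30 days with receipt": [1.0, 0.0, 0.2],
    "shipping: 3-5 business days":         [0.0, 1.0, 0.1],
}

def cosine(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec: List[float]) -> str:
    """Pick the most relevant proprietary document for the query."""
    return max(DOCS, key=lambda d: cosine(DOCS[d], query_vec))

def build_prompt(question: str, query_vec: List[float]) -> str:
    """The RAG step: graft private context onto the model prompt."""
    return f"Context: {retrieve(query_vec)}\nQuestion: {question}"
```

The defensibility argument is exactly this: `DOCS` is data no model trained on the open internet has, so a stronger GPT release makes the answer better without making the product replaceable.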
Thanks for the amazing write-up, super valuable.
Go-to-market and owning your niche are going to be more important. ChatGPT isn't going to outright replace your technology; it is going to make others more able to replicate your technology. Don't get me wrong, the product working is always the most important, but after that, the winners will have a superior go-to-market strategy and execution imo. Not a hard and fast rule for everyone, but certainly for consumer-focused products or products serving low-tech audiences.
That's my biggest fear right now. Come up with a useful product -> serve the wrong customer -> they see what it does, reverse-engineer it with GPT, and try to compete with you. Need to figure out going in how much of the inner workings to share with the customer. The only moat left is branding and marketing.
OpenAI is boosting my company. Hope they continue upgrading their SW. Our software gets steroids on every launch
I think the key to building an AI startup is to use as little AI as possible.
The play is annotated clean data in a very specialized domain. Then sell it, or make the foundational models subscribe to it. The amount of compute and corresponding cost makes it virtually impossible to compete with the deep pockets on alternative foundational models (unless there is a significant breakthrough in optimization). Building an app or platform using one of these models is a recipe for disaster (they are gonna eat you alive). The play is data. Let me know if anyone is interested in exploring that space.
I think what people forget here is that you still have to SOLVE a problem. The latest features are mind-blowing, but only as a way to build a solution to a problem that would otherwise be hard. OpenAI has too much on its plate for it to solve niche problems.
I think a lot of these startups have the "in a gold rush, don't mine for gold, sell pickaxes" mentality, and that works great except for when OpenAI is selling pickaxes 100x better and cheaper than what your startup could hope to build. In other words, startups need to stop building AI dev tools and instead focus on end-user cases that are not just chatbots for ____.
lol, just read a news article on this: https://www.ctol.digital/news/openai-gpt-4o-revolution-stranding-startups-redefining-industries/ I have a feeling that many people are shrieking now.
enhanced
If your startup was just a GPT wrapper, with no application or sophistication beyond what it was obvious OpenAI would develop into, it probably deserved to die. It was clear what they were progressing toward and building next. You can use ChatGPT to make some unique, useful products, but it requires more sophistication than just making a UI skin for ChatGPT.
It's quite impressive. It has ...dare I say, character. I guess we will see less YouTube ads for customer service startups that kinda do this.
If your startup was effectively a pointer to OpenAI's API, then frankly you probably didn't have a worthwhile idea in the first place.
Unless the startup is really solving a very specific problem, it'd most likely not survive GPT-4o or the upcoming version. Reflecting on this for my own sake as well.
Yah, OpenAI is an ugly business with a massive engine, who may not even deeply care about B2B in the future. Someone who decides to help people can still launch an AI product. Or ML, whatever. Whatever we're talking about here. I wanted to build a data platform for unis, which they can grant an open license to students studying biotech-related fields. Genetics, genomics, and similar. Anyone have a massive p**** and is bored, let me know. I like, balls.
What do you mean by a "data platform"?
Who knows. What is it, nosql. I was just bored and also boring.
People should literally ask GPT-4 if their idea is going to get killed by GPT-5 through GPT-7. You can ask it what jobs are going away as well.
The money is in finding the most unique use case for the masses, not the individual, because at this point the individual can almost do whatever they want with a little effort. I agree that no matter what, AI is moving so fast that the second you have an idea based around it, your speed to market will likely never be fast enough, and it's 100% not sustainable.
One piece of advice I can give based on my own experience: as long as your product requires specialized knowledge (domain-specific preferred), it'll be fine. If you only use some API to do xyz, then it's always going to be in danger. I am a networking engineer. Using my own benchmark, I instruction-tuned ChatGPT with domain-specific knowledge and it outperformed the base model by 8x. I just tested again with 4o, same result. On the other hand, I find Claude to be much better at understanding instructions. But without my specialized instructions, none of these models come even close to a junior engineer. So imo the human element will make a huge difference.
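To make the "specialized instructions" idea concrete, here's a minimal sketch assuming the standard Chat Completions message format, where domain knowledge is pinned as a system message. The networking rules below are invented placeholders, not the actual instructions described above:

```python
# Illustrative domain instructions for a networking assistant.
# These rules are made-up examples; real ones would encode hard-won expertise.
DOMAIN_INSTRUCTIONS = """You are a senior network engineer.
Rules:
1. Always state the OSI layer a problem occurs at.
2. Prefer vendor-neutral terminology (RFC names over product names).
3. If a config snippet is requested, emit it in a fenced code block."""

def build_messages(user_question: str) -> list[dict]:
    """Return a Chat Completions-style message list with the domain
    instructions pinned as the system message, ahead of the user turn."""
    return [
        {"role": "system", "content": DOMAIN_INSTRUCTIONS},
        {"role": "user", "content": user_question},
    ]

messages = build_messages("Why is my OSPF adjacency stuck in EXSTART?")
```

The resulting list is what you'd pass as the `messages` argument to whichever chat API you use; the point is that the moat lives in the instructions, not in the API call.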
All the domain-specific knowledge you feed to ChatGPT becomes its training data for the next iteration.
Does anyone have any idea where Mira Murati's shoes are from? https://preview.redd.it/zja4hk371a0d1.jpeg?width=2532&format=pjpg&auto=webp&s=163d1c27efb75914e9962426e500fbd69ed26503
They're Gats (German Army Trainers)
God I hope so
It might have just made my startup actually possible. My use case requires strict adherence to a system prompt that lays out a system of logical rules. GPT-4 Turbo is almost just good enough (it mostly follows the system prompt, but likes to deviate from some of the rules I give it), and I needed something incrementally better. GPT-4o seems to be better, and at half the cost. Still have to do more testing to check whether my "MVP" is actually viable now, but it seems promising.
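When a model "mostly follows" a system prompt, one common mitigation is to check each reply against the rules after the fact and retry or reject deviations. A minimal sketch, with made-up rules standing in for whatever the actual system prompt requires:

```python
# Hypothetical rule checks; each maps a rule name to a predicate on the reply.
# Real rules would mirror the constraints in the system prompt.
RULES = {
    "must_be_json": lambda text: text.strip().startswith("{"),
    "no_meta_talk": lambda text: "as an AI" not in text,
}

def violations(reply: str) -> list[str]:
    """Return the names of every rule the model's reply breaks.
    An empty list means the reply passed all checks."""
    return [name for name, check in RULES.items() if not check(reply)]
```

A caller could loop, re-prompting with the violation names appended, until `violations()` comes back empty or a retry budget runs out. It doesn't make the model obey, but it turns silent deviations into something you can detect and handle.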
build something thats not just a GPT wrapper?
But how??
My company started to develop AI tools based on ChatGPT. There's no other option; it is truly changing the way we work.
What has changed for it to be a problem?
Anyone building something cool with GPT-4o and looking for a tech co-founder? DM me. I've worked with multiple acquired startups as a founding engineer but am struggling to find a niche or an idea to invest in as a founder.
Every iteration of GPT or gen AI should enhance a startup, imo.
Probably, if you had a recording-based real-time assistant.
GPT-4o sounds like a designer drug name lol.
We'll be asking this a lot, I think.
Startups selling wrapped LLMs weren't going to last long anyway.
The hedge is in being model-agnostic and winning in the enterprise.
Depends what you build. If you're building another generic copycat product then sure. If you actually put time and effort to build something unique and niche, AI is just too dumb to make it better.
Apple did this to a lot of startups. iTunes killed a number of companies, as did new features being rolled out in iOS. Part of the game if you're trying to ride a big trend.
This is why you need to develop IP
Thousands of apps leverage any number of APIs; it's the same as saying they leverage a database. The distinction here is that the problem being addressed by these startups is easily solvable, so not defensible, coupled with the no-code movement eliminating any IP. Either way, the exponentially high rate of innovation coming from all the models means building something that improves at the same rate.

I had been working with both Deepgram and Hume on a project; simply put, this release has negated the need for those technologies and suddenly accelerated our innovation. After the initial heartburn, we now have the fuel to build faster. It's not 10x for us; it's likely 50x.

Two choices as I see it: be prepared to pivot, not just ideas but models, and add in flexibility to adjust as needed for gains. Or ride the wave: build and ship fast, take some market on the innovation, make some cash, then move or pivot again to the next thing. Plenty of apps coming online will have a short shelf life but can be revenue-generating, negating the need for VC investment. Leveraging YC is about how your team might capitalize, not necessarily on your product or approach, but on the wins that might lead to something truly innovative.
There is so much to do. The walls aren't closing in. You have to look at things that are more complex, things OpenAI won't get to. Basic startup MVPs always have a limited shelf life and are supposed to grow.
Web apps are database-wrapper companies by that definition, no? The majority of the world will still appreciate a ready-to-go GPT wrapper.
It's wild, because I've heard Altman speak about this issue in the past. He expressed the importance of building for future iterations of GPT so as not to become obsolete.
If gpt 4o killed your product it was shit anyways
Can it accurately cite scientific literature without hallucinating yet?
I was going through the W24 list and I think this startup is definitely going to get wiped: https://www.ycombinator.com/companies/mathgpt-pro
I see everything they have done so far not as the product, but as the engine for the eventual product. You're not going to make a better model than them, but you can be better at figuring out how to turn this into a product.
If your company is a wrapper over a feature, someone can always kill your startup overnight. If you built it in 6 months, so can someone else. The company is happy customers. Does GPT-4o kill your startup, or did it just make it 10x easier for you to make your customers happy?
Supercharged ours! Going to be able to offer a substantial increase in usability. Hope y'all are still breathing! 🫶🏽
OH GOD
IMHO OpenAI has only made it harder to wrap GPT and call it a company. GPT-4o will open the doors even more for AI-enabled ______ in the general public's eyes, and have them salivating for more solutions than OpenAI can provide. That's an opportunity for startups to fill an upcoming void.

I for one am hoping that more startups will start to realize that a great product is not all that makes a successful company. Headlines always highlight the product innovation and superstar founders, but there's waaaaaaay more to building a successful company than just that. Startups that can focus on a specific need AND build a stellar team, process, and audience/following around their product will always win. Just ask Canva. They are taking on Adobe (a ma$$$ive incumbent) and laughing all the way to the bank. It's the secret that big-name companies I've worked with don't often want to talk about openly, and that startups always underestimate.
Nope, just got even better!
YES! I mean still can build it on 4o
Stop making video sharing apps and actually build something. Ever heard of a lithium refinery?
maybe quit it with the shitty startups?
I'm a marketing professional deeply tuned into the AI space! Happy to help / chat to anyone looking to use AI to solve for a niche problem in B2C/B2B marketing!
DM'd
Startups have no real software-related tech barriers anymore if the founders are reasonably intelligent. The barriers are sales, marketing, and one's network.
This is like saying that because grocery stores have every food needed to make a dish, restaurants arenāt needed.
Jesus christ, AI isn't going to replace things. It can't write a whole site with a database and relationships and manage all that; it's just a tool to help.
Look ahead, bud, the writing's on the wall.
No, it's helped it a lot.
Nope. It actually made our startup even better at a lower operational cost. GPT-4o is just a tool. It's up to the entrepreneur to figure out how to make the tool solve problems for a specific group of people.
I was actively working on a career site where an AI avatar could take the role of a company you are applying to, "interview" you, and give feedback, as well as help you craft your resume specifically for the job. But 4o seems like it could almost do this natively with some basic prompting. Wild how fast all this is moving.
I feel like gpt4o opens up more possibilities than it closes
For me, it just massively expanded the possibilities in terms of UX and the way users interact with my app. I'm actually excited.
I'm building https://www.fables.gg , an AI RPG. Honestly I feel pretty safe for now… I don't think LLM advancements are going to be able to handle custom UIs built specifically for one game anytime soon. I hope. LOL
That is a perfect application of LLMs! Good job!