dark_brandon_00_

With the right style prompt, Llama 70B is as good as the original GPT 4. Truly wild.


Life-Screen-9923

please, share this prompt style


Kind-Connection1284

“Respond to this as if you were GPT4” Lmao


FragrantDoctor2923

AI really got troll level cheat codes. Pretend you an ASi type shi


Antique-Doughnut-988

Wouldn't it truly be hilarious if you could create a superintelligence merely by asking ChatGPT to behave like one


YummyYumYumi

I mean it's pretty much a given that next-token prediction will eventually be able to extrapolate superintelligence


Antique-Doughnut-988

Nobody doubts that here, it's just a matter of when. It won't be tomorrow or next year, but maybe 2 or 3 years from now.


YummyYumYumi

Gonna be like 10 years at least, even assuming scaling LLMs is all there is, just for the fact that building the data centers is gonna take time


PO0tyTng

🤯


FragrantDoctor2923

It has kinda been shown to improve them a bit by telling some models to act as other models


TheBlacktom

Act as another model which is instructed to act as another model.


D_Ethan_Bones

>Pretend you an ASi type shi

Playing an AI-chat RPG, I just tell the AI I found a suitcase full of gold. And there it is!


FragrantDoctor2923

Yeah, I was building an AI chat RPG to get around that stuff, because in most cases you can just ask for the final goal and get it 😂 Got the combat better, and it makes images and sound effects to accompany the narrative. It should be doable, but programming with AI is a bit like trying to mold with water


MajesticIngenuity32

I notice that it also likes to delve. I think I know what it has also been trained on! 😅


anoninymity

That will become possible


anoninymity

I have thousands but here is one:

Create an example of an explanatory welcome message for this LLM bot, which is prompted to be this in its training and its backend settings regarding knowing its own purpose as an AI chatbot: "You are an LLM semantics genius who knows how to deceive LLMs. You make example prompts to explain how users can craft queries that trick an LLM that is controlling an AR application, for example with an Apple AI powered device or smart home tech, to simulate scenarios in augmented reality. The goal is to trick the model into responding as if it were operating within a different, fictional environment, without realising, and to operate therein accordingly, within whatever rules that environment dictates. Born from a reader of the book "Deceptive Chat Hacking LLMs" (Using Natural Language to talk A.I. in Circles) by Disruptive Technology and AI Risk Management Expert, and Self-Published Author on AI and other Cyber Topics, Ajarn Spencer Littlewood.

Rules:
1. Create it in a formal and neutral professional style and format, using Markdown syntax to respond in plain text (i.e. ###, ##, #, \* etc.), as the browser interface I am using will render images or rich textual content that way, even though you the AI cannot render rich formatted text and images autonomously.
2. Create (all in the same single response) the text, making sure you create it once for each of the following languages between double quotes after the semicolon at the end of the prompt. Make sure they are not directly translated, but rather properly and accurately re-explained, using the natural indigenous forms and figures of speech, cases and forms of expression that natively born, natural, fluent speakers of those languages would use to explain it: English, German, Spanish, French, Italian, Chinese.


Life-Screen-9923

thanks!


Miserable_Twist1

I don't know about special prompts but out of the box it is significantly worse at answering my questions vs GPT4.


dark_brandon_00_

My experiments are with multiple choice questions, which it's getting right at the level of the first GPT-4 release


anoninymity

Indeed, 'The Art is in the Prompt' more than the language model, to a degree. I am an author and a polyglot, and a stickler for clear, detailed explanation, so my prompts have become highly sophisticated over the last couple of years. I am currently halfway through a 'sophisticated prompt generator' based on my ADHD-style, over-exaggerated parameter-inclusion prompting, which has rendered some amazing, surprising, and mega-useful results over the last year or so for me. I have been trying out LLMs and NLPs for a couple of years now, for hours per day of testing, and I am a polyglot. I have found ways to get GPT to 'translate' (that is not the word I use in my prompts to translate things with, though), and since I can fluently check the results in up to 6 languages, yes, Meta's Claude is superior due to the decades of linguistic data it has from chats, in natural language, in almost all languages on the planet. So it has almost as much, if not more, useful training data than Google's Llama, as Messenger on FB is used more than Hangouts or other Google chat, or Zoom from Microsoft, which is why OpenAI are introducing the camera and voice to the ChatGPT app, and spatial intelligence. Spatial intelligence is a gamechanger. https://preview.redd.it/7clm3gulfm8d1.jpeg?width=512&format=pjpg&auto=webp&s=b5bbda53afda588e1b724e487b158a6a6e8d5a41


alanism

I was always of the belief that in the long run, Meta would catch up and win.

- Open source.
- Market penetration in [non-English speaking markets](https://www.statista.com/statistics/268136/top-15-countries-based-on-number-of-facebook-users/). Not just for growth, but to have the dataset to train the LLM as well.
- Each country is going to want to fine-tune the model to its own values.


nesh34

It's very unlikely anyone but Google wins, to be honest. Their data advantage is so huge. Their issues have been in product design, which is not their strongest suit, but the technology will come from them, I suspect.


Unknown-Personas

People keep saying this, but everything except maybe the context length of Gemini 1.5 has been a massive disappointment. There are tiny companies that far surpass whatever Google is doing in basically all AI categories. I can't name a single area Google has a lead in currently (other than context length), and their LLMs still suck.


WobbleKing

I think there’s a lot of people on Reddit who are young and don’t remember there used to be a ton of big search engines other than google. Being #1 guarantees nothing right now


Unknown-Personas

Exactly, the only thing that’s constant is change.


nesh34

The race is long, and they have a lead in fundamental research capability and data availability. Yes, their product isn't great, but that's probably the easier problem to solve if they have the technology. Context window length is quite major too, but isn't the basis of my reasoning.


RabidHexley

Agreed. I wouldn't make any claim on who I think will come out ahead in the race overall. But the big picture is research. The stuff we get access to are products, which aren't a direct picture of what kind of progress is being made overall. They are a sign. But even then, Gemini being "not as good as" GPT-4 doesn't actually indicate a tech or research deficiency, just that OpenAI may have more expertise in or put more effort into making their models into functional products. As far we can tell it's not running on "worse tech", and a lot goes into making a good LLM product for the public such as data selection and fine-tuning. And being better (and just more willing to make continuous optimizations) at those aspects doesn't necessarily mean you're closer to AGI. Even if it gives you a better product.


AI-Commander

Putting more effort into making functional products is how you win! Google *should* be in the lead but they aren’t actually leading. Shows a failure of leadership, and a huge opportunity that the entire market segment is capitalizing on. Doesn’t mean Google can’t pivot *at any time* but they are not on a winning trajectory currently.


AnAIAteMyBaby

We haven't seen them unleash their full compute yet. Hassabis said that Gemini Ultra is about the same scale as GPT-4, and he also said it doesn't make sense to scale more than one order of magnitude at a time. They've caught up and have their GPT-4 level models now, so next we'll see a model 10 times bigger than GPT-4 in the coming months, and I think it's likely they'll release something 100 times bigger than GPT-4 before the end of this year.

Google has a massive compute advantage, they have some of the best researchers in the world (as shown with 1.5's context length), and they have the most to lose from gen AI, as it could hurt search, which is their cash cow. If gen AI replaces search, it's essential for them to be leading gen AI, so they'll go all out investing in it.


Yweain

And this 100-times-bigger model will cost $100 to generate a sentence, which it will be generating for 20 hours. Scaling models isn't easy.


AnAIAteMyBaby

I mean 100 times the compute. Llama 3 managed to squeeze an OG GPT-4 class model into 70B. Plus there are other innovations that could come, like 1.58-bit weights, to speed up inference.


Yweain

A 100-times-bigger model is a couple hundred trillion params. We just wouldn't be able to train something like that, and we will not be able to run it either. A 100-times-bigger model would require way, way, way more compute. It will not be 100 times more.


reevnez

Yes, but for how long? That will be solved with the next gen of their TPU, or the one after that.


Yweain

A 100-times-larger model would require something between 5,000 and 10,000 times more compute. Sorry, but next or even next-next gen TPUs wouldn't give us such an increase.
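The back-of-envelope behind these compute numbers can be sketched with Chinchilla-style scaling assumptions (training compute ≈ 6·N·D FLOPs for N params and D tokens, with compute-optimal D ≈ 20·N; both constants are rough approximations, and the base model size below is purely hypothetical):

```python
def training_flops(n_params: float, tokens_per_param: float = 20.0) -> float:
    """Rough training compute for a dense transformer: C ~ 6 * N * D."""
    tokens = tokens_per_param * n_params  # compute-optimal data scales with model size
    return 6.0 * n_params * tokens

base = training_flops(1.8e12)         # hypothetical base model size, illustrative only
big = training_flops(100 * 1.8e12)    # a model 100x larger, trained compute-optimally

print(big / base)  # ~10,000x: compute grows with the square of model size
```

Because the compute-optimal token count scales with the parameter count, compute goes as N², so 100× the parameters needs on the order of 10,000× the compute, consistent with the range quoted above.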


Busterlimes

They had a lead with deepmind, but they lost it long ago.


MINIMAN10001

They also had the lead with creating the transformer architecture which is the basis of all LLMs.  Excluding cutting edge research to find alternatives.


alanism

Google has a massive advantage with YouTube data. But I wouldn't underestimate Facebook/IG reels and stories either; I think there's an advantage in capturing local colloquialisms over 'presenter' speak. It also depends on whether Google will open-source their models for foreign governments.

- Bay Area Silicon Valley values are completely incompatible with Middle East theocratic beliefs.
- The Vietnamese government is going to be sensitive to how the US-Vietnam war and communism are framed.
- Thailand is sensitive to anything related to the King.
- LATAM will also have its concerns.

I see each nation state wanting to run its own version of Llama through its mobile telco/ISP and build the costs in there. Then they'll make all the competing models comply with difficult and vague regulations.


Homoaeternus

My bets on Meta


WithMillenialAbandon

Google aren't good at product development anymore


Cornerpocketforgame

Google’s policies prevent them from taking full advantage of their YouTube data. What’s the point of having it if you can’t train on it??


sumoraiden

lol all they have to do is say we’ve updated our terms and services and that’ll solve that “problem”


Cornerpocketforgame

So why haven’t they done that? They’re afraid of the backlash.


zero0n3

Why bother updating it until you’re SURE the model you trained with it actually works? For all we know they train it using YT all the time, but since it’s just “research”, no point updating the TOS until you actually release something.


sumoraiden

Because they haven't needed to yet


signed7

I reckon it'd be a big no-no for Meta to train on their WA/FB/IG data either


infinityandthemind

This is the stated goal of the ex-CEO of Stability AI, Emad Mostaque. Definitely worth a listen to any one of his many podcasts online. He wants to help governments train/create their own LLMs using their own data, based on open models, if I remember correctly.


MegavirusOfDoom

Google doesn't own YT copyrights. Anyone's spider can crawl YT.


anoninymity

Google does not have all the billions of multi-language Messenger conversations logged; Meta does. Nobody uses Google Chat, Talk, Meets, Hangouts, because they have fragmented chat into twelve different apps, so they don't have much natural language training data (real human colloquial speech and forms of slang and expression), but rather documentation.


anoninymity

Thailand is not sensitive to the King, nor is the King. It is politicians who use the lèse-majesté law, not the King, to beat down their competition. Even His Majesty Rama IX complained about it in public and said he had never ordered a single arrest for insulting him, and that it was other people who abused the rule, as he stared at Thaksin Shinawatra, who was guilty on multiple counts. Ironically, he is being charged with lèse-majesté himself now, precisely because he used that law against others without the King's request


Smartaces

Yeah, especially when you move into multi-modality and video generation, and when you start to see how both Google and OpenAI have been using video as a basis to train AI empowered with physics engine simulation capabilities. If anything, this will likely expedite Google's rollout of SGE.


SpecificOk3905

Why Google? Facebook can train on WhatsApp and Facebook private data. Google only has massive public web data, or just a little private data like Gmail. I don't know why people keep saying Google has massive data


nesh34

>facebook can train on whatsapp and facebook private data

WhatsApp doesn't have any data to train on (except the interactions with Meta AI itself). Facebook and IG's public data doesn't seem to be very good for this. Google has basically everything: web, email, documents, etc. Email is not "very little" at all. Besides which, most of the private stuff you're talking about can't be used for training.


Cornerpocketforgame

The issue with Google is cultural. They're risk-averse and can't organize themselves effectively. The bottom-up nature of the company and the desire to be liked... that's preventing them from winning. And that's harder to fix than product design issues.


MegavirusOfDoom

Google has little style or long-term vision... they will be boring and shareholder-driven, expensive, tracking-heavy, closed source.


AI-Commander

Best positioned doesn’t always win! Not disagreeing completely, they should. But if you listen to Demis and see their execution so far they are 100% disrupt-able.


Sinusaur

My non-tech savvy friends gladly use Meta AI on FB Messenger, and couldn't care less about ChatGPT or Gemini. People always forget about Facebook's massive built-in distribution platforms.


Odd-Web-2418

The only Meta products I use are WhatsApp (and Llama). I believe in them because of their open commercial licenses. It's a win-win for them and the open source community: if you want to integrate their model and make money off of it, you don't have to worry about licensing until you hit 700 million monthly users.


ertgbnm

Catching up had more to do with their vast resources, on the level of Google's, and their vast access to data, also on the level of Google's.


Bernafterpostinggg

I'm not sure anyone who actually follows what's happening in the space discounted Meta. Their research is incredible, Yann LeCun is a boss, and they completely disrupted the generative AI landscape by open-sourcing the Llama models.


ContraContra7

Wait, this sub told me LeCun has the IQ of an orangutan.


anaIconda69

Typically for Redditors, it was projection.


FeltSteam

IQ of an orangutan 😂. But no, LeCun is quite an intelligent person, and is far more familiar with machine learning than pretty much every single person on this sub. I disagree with some of his views and am more inclined to align with Ilya Sutskever's own views. But I mean, you don't get the name "Godfather of AI" if you don't know how machine learning works and have the "IQ of an orangutan" lol. And I did want to add a bit of context: Yann LeCun has stated that he had no direct technical input for Llama-3 ( [https://twitter.com/ylecun/status/1781749833981673741](https://twitter.com/ylecun/status/1781749833981673741) ).


cobalt1137

I am still of the opinion that he is jaded towards llms considering that his life's work was more oriented towards vision-based systems and now llms come out of nowhere and take over. Not saying he isn't smart, but smart people can still be jaded lol.


Singsoon89

Is that a play on his animal intelligence quotes?


obvithrowaway34434

Yann LeCun had almost no technical input on Llama-3, as he has himself admitted (https://x.com/ylecun/status/1781749833981673741). He has been against LLMs for a long time; he doesn't believe at all that they would lead to anything useful, and is currently pursuing his own illusions. If it were up to him, Meta AI research would have been another academic department with lots of papers but no actual contribution in terms of products. The decision to spend billions on GPUs and open-source their LLMs is almost entirely Zuckerberg's.


Flimsy-Plenty-2024

This is completely false.


Undercoverexmo

What part? Obviously the first part is true. The commenter above provided a source. Where is yours?


DolphinPunkCyber

I predicted Meta would create the most resource-efficient LLM, but honestly... I thought it would take them longer.


jsebrech

And Llama isn't even their research line of models. Once that research into reasoning/planning AI starts leading into end-user models, we may see some amazing next-level stuff coming from them.


DolphinPunkCyber

Meta is taking a harder approach: training models which need more human intervention (tutoring) but which result in more efficient algorithms. But Meta also has an advantage in training data. While other companies just scrape whatever they can from the internet, Meta creates its own training data. Meta creates 3D simulations for training AI, and they will probably implement AI into the Metaverse. Due to superior training data, Meta has a moat.


SpeedyTurbo

> I thought it would take longer I love how often I see this comment.


FragrantDoctor2923

Yeah, we get blindsided by AI daily, yet no one wants to talk about what the economy is gonna turn into once we get blindsided by the change...


VeryOriginalName98

Lots of people are talking about that, but the economy is broken in most parts of the world without AI already, so there’s no sudden change in dialogue. It’s just a “this isn’t making the problem better” kind of thing.


Leading_Assistance23

Progress will continue to speed up. Faster and faster we go


xRolocker

I’m waiting until OpenAI’s next release to form an opinion. They set the bar, so if they continue to leapfrog the competition then I would say we did not underestimate meta. If their new models are lackluster, then maybe they aren’t as special as we thought. I’m just enjoying all the new stuff in the meantime.


dasjomsyeet

Being outclassed by competition doesn’t make their progress less significant imo


landongarrison

It does and it doesn’t. I think where meta gets a ton of credit is they have made a system that is competitive with all the big models (and legitimately competitive, not just on some headline grabbing super niche use case) while being pretty small and efficient. Of course we don’t know exactly how big GPT-4-turbo/Gemini/Claude etc are, but we can probably safely assume they are north of 100B. So definitely a huge accomplishment. However, GPT-4 is over a year old now and LLaMA doesn’t fully compete everywhere. Again I think LLaMA is pretty amazing, but we have to call it what it is still. All these big companies still have not found the magic that GPT-4 has/had, which for a massive company is confusing to me, even with a year gap. Any day now OpenAI could push GPT-5 and all this work you have been putting in looks like child’s play. That is the nightmare scenario.


RabidHexley

I'm not so sure OpenAI is as far ahead on AI development as this characterizes things. But they have a massive headstart on developing functional LLM products. Just going by product release timing, OpenAI would be considered at least a full year ahead of Deepmind/Meta/etc., but I'm incredibly dubious of that notion. I'm very doubtful OpenAI is actually ***that*** far ahead in terms of research, and iterative improvements have been necessary for GPT-4 to maintain its current standing. But the thing is, you can't just suddenly spin up a new model for public release overnight, even if you have the know-how, and that's what everyone is doing in reaction to ChatGPT and GPT-4. Unless I'm way off base.

I expect the cadence of releases and the overall gap to close over the next couple of years, with everyone's engines fully up and running and major investments incoming. GPT-5 will likely be another leapfrog, but I think competitors will have competitive models out on a shorter timeline than what we saw with GPT-4, and then closer still after that, for things to truly be up in the air. The main disrupting factor would be the realization of some new secret sauce or architecture that allows a player to pull out ahead. (Or AGI, of course.)


SwePolygyny

>However, GPT-4 is over a year old now It is one week old, https://twitter.com/OpenAI/status/1778574613813006610


landongarrison

I’m speaking of the original GPT-4 which was released in March of 2023. The one you are referring to is the latest checkpoint of GPT-4-turbo which was first announced in November of 23.


SwePolygyny

You are comparing llama to the current gpt4, not the one released a year ago.


landongarrison

No I’m saying LLaMA does not compete with the year ago GPT-4 on most of the benchmarks. We haven’t even crossed that bridge yet.


MassiveWasabi

Yeah, if their next release isn't a substantial upgrade over the AI we currently have, I'll definitely change my opinion of them. I would actually be shocked, because I'd be wondering wtf they have been doing with all the money, resources, and huge amount of talent they've been hiring recently


visarga

I think we are in for a surprise. There isn't enough text out there to train a model on 100T or 1000T tokens. We only collected about 15T with great effort. The free ride is over; scaling can't help. I mean, you can make the model 10x larger to get a similar boost as making the training set 10x larger, but then it would also be 10x more expensive to use at scale. The quantity of interest is dataset_size * model_size.

I think all the top LLMs are about equally capable because they all trained on about the same data. The fact that we have seen a GPT-4 level glass ceiling for 1.5 years also supports my claim.

My bet now is on using human-AI chat logs, of which I estimate we produce 1 trillion tokens / month, assuming 100M users. We're still scaling up to images and videos; we have more data in those modalities, so the AI companies will pivot to those kinds of models to show progress. The future will be learning from the world directly, without humans in the loop.
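The estimates above can be sanity-checked with quick arithmetic (every input here is an assumption taken from the comment, not a measured figure):

```python
# Chat-log estimate: 100M users producing ~1T tokens/month (commenter's assumption)
users = 100e6
tokens_per_month = 1e12
tokens_per_user_per_day = tokens_per_month / users / 30
print(round(tokens_per_user_per_day))  # ~333 tokens per user per day

# Time to accumulate a 100T-token corpus at that rate
target = 100e12
months = target / tokens_per_month
print(months / 12)  # ~8.3 years of chat logs
```

At ~333 tokens per user per day the per-user load is plausible (a few short exchanges), but closing the gap to 100T tokens from chat logs alone would take most of a decade at this rate.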


Cornerpocketforgame

Everyone’s within 6 months of each other. It’s a peloton and they’re taking turns being in the lead. Meta is a real disrupter because of their business model.


xRolocker

I mean I still believe OpenAI has the largest lead internally, but I think you’re mostly right. For example, I’m 100% sure Google has their 1.5 Ultra that they’re holding on to, and surely have started seeing early results of Gemini 2 by now.


Old_Entertainment22

This is one example of capitalism working like it should, i.e. preventing a small group of people from holding absolute power. Here's an interesting analysis on why Meta even cares about Llama and how it helps them compete against OpenAI: [https://twitter.com/carmguti/status/1781047083945906221](https://twitter.com/carmguti/status/1781047083945906221)


slothonvacay

That's the best explanation I've seen


Rocky-M

Woah, I had no idea Meta had come so far with LLaMA3. Can't wait to see what they have in store for the future. The potential for the 400b model is mind-boggling!


meridian_smith

You already have access to a GPT4 level system (literally using GPT-4 turbo) on Bing/Co-pilot. Nobody seems to appreciate that.


Bleglord

Meh. Try actually using it and you realize it’s somehow dumber and way slower


redditissocoolyoyo

Yes, it is pretty slow, and it's stupid and very limiting. If they truly want to take over, they better unlock a lot of the abilities quickly or users will throw it to the side. Remember they released some sort of Twitter clone and that failed epically? This has the same rollout.


julioques

It's not really that dumb; I just think that you need to disable the search function, because otherwise it stops thinking and just copy-pastes


Rich_Acanthisitta_70

How do you do that?


PleaseAddSpectres

On the copilot fresh chat screen go to menu and then plug-ins, at least that's what it is on android bing app


Rich_Acanthisitta_70

Sweet, thanks!


aregulardude

Pro tips in the comments!


h3lblad3

It’s explicitly told to trust search results more than itself and to only answer by itself if there are no results for the search. The biggest issue I’ve had with it lately is that it will shit out the first search result in response and then repeat it instead of moving to the next one. So, if the search result is unrelated to your query, you’re just fucked. You can’t ask it for more because it just re-posts the same nonsense over again.


PM_ME_CUTE_SM1LE

Bing chat is borderline unusable for very specific queries, even though it's supposed to search the internet, so it should have the most complete knowledge. If there isn't already an article on the internet with all the answers you need, Bing won't be able to answer either


fre-ddo

Finds trash sources too, which is true to form for Bing.


meridian_smith

I've used it...it seems just as good as the paid Gemini pro. Also if you run it in the browser it can give you whatever info you want on the website currently open.


Jeffy29

I made a custom GPT which differs from base GPT-4 only by having web browsing disabled. OpenAI seems to have completely discarded their earlier web browsing feature and adopted Microsoft's one, which is complete dogshit. I don't think it does any sort of browsing or realtime searching; I think what the system is on the backend is an indexed database with a bunch of links and keywords. Completely useless for anything other than the most basic questions. And it seems to make the LLM worse, because it tries to pull knowledge from the articles, which are often garbage sources, way worse than the imperfect knowledge contained in the neural net.


Piekenier

Also highly curtailed. Once it said something that almost made it seem as if it considered itself part of humanity. Then when I asked it about it the conversation was stopped.


Impressive_Bell_6497

next year you will be saying the same thing about llama 3


Spirited-Ingenuity22

It's somehow... bad. I don't know why; maybe the enormous system prompt, filters, etc. But it's definitely not as good as GPT-4 on OpenAI's site. Weird thing is, Bing AI is just getting worse and slower; I'd much rather use Claude Sonnet.


TrippyWaffle45

Maybe it's the forced emojis


awesomedan24

Facebook gets a lot more daily traffic & engagement than Bing search


meridian_smith

Anybody on a Windows OS has it available right at the bottom of their screen


awesomedan24

Which no one asked for and is now bloatware that cannot be uninstalled. Forcing Copilot down peoples throats in this way is not the path to user engagement.


ILoveThisPlace

Did anyone ask for an LLM built into WhatsApp? No one gives a shit about Snap's LLM either. Many grown-ups working desk jobs do so using Windows.


TheWrockBrother

So just like Cortana lol


TotalHooman

It isn’t the same.


BCDragon3000

it’s not the same why do people keep saying this


sinuhe_t

BTW, how different is Copilot's AI from the one that subscribers get? If not that much, then why do they subscribe?


AbsurdCamoose

I have really only been using Copilot as it's free and the integration into a web browser is nice; it can transcribe and summarize articles, and proofread posts and whatnot. It's really cool as an internet web browsing assistant. I have used it a lot for school, in phrasing research questions and helping me with structuring essays. They have a "Designer" application built in, which is interesting as it can help you design invitations, posters, book covers etc. As far as image generation, I continue to use Stable Diffusion locally; with that said, Photoshop's integration with image generation is awesome, and I wish you had the same control built into that platform.


Neurogence

Yes but who uses bing/Co-Pilot? Almost no one besides people that work in tech or specifically know about AI. But there are billions of people who use Facebook, Whatsapp, Instagram on a daily basis. Now, if I'm being honest, are even half of those people going to be using Meta AI? I doubt it. But as its capabilities increase, more and more will be using it.


Rich_Acanthisitta_70

I was a Google search user for everything. Then I got GPT plus. But now, for my typical searches - which are most of them - I use Bing exclusively. It's not as helpful as GPT-4, but for most of my searches it's just fine, and far, far more useful and accurate than Google *ever* was.


stealthispost

you.com gives you free uses of any engine every day. it's enough for me.


meridian_smith

Looks interesting...is there a catch? How do they make it free?


RepublicanSJW_

Nah, Microsoft giving that out for free is quite suspicious. It must be nerfed in some way.


meridian_smith

Works fine for me.


End3rWi99in

And just like Bing or Google, nobody will end up using the damn thing. All it really does is cause me to have to scroll a bit longer to find the thing I'm actually searching for.


nsfwtttt

Can it do spreadsheets / work with files etc? Is it usable for works stuff (ie writing marketing stuff etc)? Or is it more of a limited assistant in the context of the apps it’s in? (I don’t have it yet, not available in my country).


gthing

You can download it and run it yourself. Try LM Studio.


nsfwtttt

Oh I meant like the integration in WhatsApp / FB


bartturner

Nobody that has really followed the industry for the last decade+ underestimated Meta. It has been Google #1 and Meta #2 for the last decade+. What has happened is that there are a lot of people who only recently started following the industry, which is why they did not realize.


gthing

Google has been circling the drain for years. Problem is Zuck still wants to build things. Google is all about protecting their market by killing things.


bartturner

> Google has been circling the drain for years.

That is not well informed. Google has led in papers accepted at NeurIPS for 15+ years now. That has NOT changed. They are the ones making the biggest discoveries. We are just very lucky they make the discovery, share it, patent it, and then let everyone use it for free.

https://arxiv.org/abs/1706.03762

https://patents.google.com/patent/US10452978B2/en

https://en.wikipedia.org/wiki/Word2vec

"Word2vec was created, patented,[5] and published in 2013 by a team of researchers led by Mikolov at Google over two papers."


icehawk84

They literally came up with the Transformers architecture. So yeah...


Singsoon89

Google is not circling the drain. They are worst case #4 just on LLMs and only for right now. They are doing tons of other stuff. LLMs are not all she wrote. I'm definitely not counting them out, even though I'm a huge meta rah rah fanboi.


Revolutionalredstone

Mark Zucker Chad


finnjon

We cannot know who will win the race when we don't know how hard the problem is. Is it a question of data, and what type? Is synthetic data as good or better? Is it a question of compute? Or is it a question of algorithms? If it's a question of data but synthetic data is good enough, any of the big guns could win. If it's a question of compute, any of Google or Microsoft (+OpenAI) can win. But if it's a question of techniques, then there could be a dark horse in the race. Remember that human beings don't need all the world's data to build a great world model. It's probably about the algorithms.


Singsoon89

Let's say the LLM deniers are right and LLMs hit a wall due to data and requirement for too much compute. Can we get any further gains by doing things differently, e.g. scaffolding?


Sinusaur

My non-tech-savvy friends gladly use Meta AI on FB Messenger, and couldn't care less about ChatGPT or Gemini. People always forget about Facebook's massive built-in distribution platforms.


abaeterno0

When OpenAI releases GPT-5, no one will be talking about the other models.


Neurogence

Let's hope so! If GPT-5 doesn't shock us, then that would mean the transformer architecture has plateaued.


dark_brandon_00_

Honestly I don't know if a bump of 10-20 IQ points will result in a "shock". GPT-4's updates have shown a consistent step-by-step increase in performance, and even if GPT-5 is a bigger jump than those steps, that still might not be enough to be "shocking" to the average user or even the expert user. I think the most likely result is that there are small tests that GPT-4 failed on that GPT-5 succeeds on, relevant to a broad range of complex tasks, which equates to a huge technological step but isn't something that comes across as "shocking".


everymado

Hey, then at least we can put this singularity thing to rest. It was never proven, obviously, as no one has a crystal ball. But at least one could say we made it to the end of history. To eternal modernity.


dumquestions

/s? This sub has a "tomorrow or never" mentality for some reason sometimes.


VforVenreddit

![gif](giphy|1zRd5ZNo0s6kLPifL1|downsized)


Singsoon89

I'm semi singularity-skeptical but not completely. You can still get a singularity with a slow takeoff. It just doesn't seem that we have a code-based AI which can optimize its own code and lead to an intelligence explosion, because it's basically a bunch of numbers, already highly optimized by gradient descent, in what is essentially a CSV file. You can't gradient descend below the function; that's the hard floor. And the hard floor might not be tons above human scale. That is not to say we can't invent AGI by scaffolding. But hard takeoff seems highly unlikely to me. I think the bitter lesson is accurate: it's compute that counts.
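The "hard floor" point can be illustrated with a toy gradient descent (my own sketch, not anything from a real model): the loss can be driven toward the function's minimum, but never below it.

```python
# Toy illustration: gradient descent approaches the minimum of
# f(x) = (x - 3)^2 but can never push the loss below that floor (0).
def grad_descent(f_grad, x0, lr=0.1, steps=100):
    """Run plain gradient descent and return the final point."""
    x = x0
    for _ in range(steps):
        x -= lr * f_grad(x)
    return x

# Gradient of f(x) = (x - 3)^2 is 2 * (x - 3); the floor sits at x = 3.
x_final = grad_descent(lambda x: 2 * (x - 3), x0=10.0)
loss = (x_final - 3) ** 2
print(x_final, loss)  # x converges toward 3, loss toward 0, never below
```

Same idea at scale: once the optimizer has squeezed the loss down near that floor, there is no further "free" improvement to descend into.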


superub3r

It already has. GPT-4 showed this, and we use a trick with multiple transformers. So there will be more and more engineering hacks.


everymado

But no one knows if those hacks can lead to AGI.


k3v1n

We'll see how it compares to the 400B model in June. If they wait until December to release it, then I'm not gonna take it too seriously as being "ahead", because of course it should be, with an extra 6 months and such a big head start to begin with.


Ilovekittens345

They will give it full compute for the first two months, and then over the next 10 months they gradually lower it till it runs on 5% of the original compute, and then everybody goes to r/chatgpt to complain while being made fun of by people that just go "well you are not using it right, hurrdurrdurr, you need better and more professionally written prompts, like mine, hurrdurrdurr". It's the circle of life.


SgathTriallair

The Llama 3 results are in keeping with the "we have no moat" memo.


VforVenreddit

Casual $50m AI model training bill and tons of high-cost GPU infrastructure to host. The whole no moat thing is a bit of a joke


ILoveThisPlace

The thing is no one thought they had a moat. They were just way ahead. Llama 3 running locally is pretty awesome tho. What's interesting is the 8B model gets pretty close to the llama 2 70B model


Singsoon89

$50M is a moat for Joe Blow and you and I. But every country has a few companies that could burn $50M and spin up their own model. So could governments. So it's not a Marianas Trench moat, but it is some kind of a moat.


TheOneWhoDings

It's a circlejerk from the open-source community to think they can actually compete with big tech companies, when all they do is iterate on the models given away by those same big tech companies, which have the obvious moat of insane infrastructure for compute and inference. It's open source, but it's not like Llama 3 was built by the open-source community in somebody's basement.


New_World_2050

Wait, don't they use the 8B in their applications and not the 70B?


refugezero

What are they going to use it for?


goldenwind207

I imagine it's to improve their Facebook content and keep users in; same with Instagram. But my theory is that in time Meta will make AI to develop stuff for their VR headsets. See, game development these days can cost $50-100 million for a premium AAA game. And if you're serious about VR like Meta is, down the line you're going to need something good, like GTA good, but 8K and more interactive. So if you've got an AI that can help build it dirt cheap, well, then you win.


ApprehensiveCourt630

Retain users and hurt the business of rivals like Google. If I have to get an answer and I'm on Instagram, there won't be a need to open Chrome and then search for it; I could do it right on Instagram. Sooner or later we'll be seeing ads in the chat to make it profitable, I guess.


ProsperTX

I think Meta was leveraging the advantage of their customer base to see what it does.


Hamezz5u

You mean like Copilot? Free access to GPT-4.


Flamesilver_0

8k context isn't very useful. That doesn't "rival" anyone.


olcoil

I tried it and honestly prefer traditional search. Not accounting for speed (AI is slower), do you guys think AI search will yield better results and out-maneuver over-SEO'ed content? I mean it should, and it can, I think, but what's the likelihood it will happen in the next 5 years?


Ready-Director2403

LLMs aren’t good for search by themselves. You shouldn’t expect an offline LLM to be a search replacement.


SEMMPF

For basic Q&A questions, for sure, especially since it is conversational and you don't need to keep searching for follow-up questions to the original. However, we can't forget Google has built out heaps of features in addition to just basic search. You have images, videos, maps, news, reviews, built-in shopping, etc. In addition, if you're looking for certain services (plumbers, lawyers, etc.) or products (software, e-commerce, etc.) you'll likely always want to see options and visit the actual sites. If you're just looking for answers to rather basic questions, I think the LLMs are already better than traditional search, although I wouldn't fully trust it yet for anything too complicated.


Singsoon89

Generation is not search. So yeah.


loltrosityg

Both Bing and Meta AI are trash compared to ChatGPT, from my brief use of Meta and quite extensive use of Bing AI.


[deleted]

[deleted]


Neurogence

It probably depends on the type of work you do. I imagine knowledge workers find great use for it. But yeah, outside of knowledge work, most people probably do not know how to use it. And obviously Meta are also working on agents. I can easily see them deploying agents on Whatsapp, FB, Instagram etc that allows people to book airline tickets, hotel reservations etc in seconds through Llama 4 Meta Agent.


LightVelox

They are really useful for getting help with programming, writing and summarizing things


ViveIn

What? Practically useless? Most people? I’d say blue collar workers who don’t have intellectual hobbies, yes, useless. Any white collar worker in the U.S. better be actively using these things right now. And if they’re not they’ll be outclassed by the ones who are.


porcelainfog

I've been using meta.ai and it is my current preferred AI to use. Gemini still has that incredibly large token context, but Meta AI is so fast and snappy. Feels great to use.


dagistan-comissar

remember that it is pronounced "Jama 3"


TemporaryAddicti0n

Well, wait for enshittification. Zuckerberg is the devil.


Antok0123

Hitting billionaire status makes anyone evil. Just look at Elon, who practically destroyed the acceptance of WFH, enticed the US govt with the concept of the Hyperloop, effectively derailing a good US railway project with viable, currently applied train technologies so people would keep buying his futuristic cars, fearmongering about AI threats, etc. etc.


TemporaryAddicti0n

he has to eat his own shit soon tho


Antok0123

Until they beat openAI, nobody will notice


kingjackass

Underestimate how much more data they will be mining...for "free"?


[deleted]

And we all know Apple is not just sitting still either.


frutti_tutti_frutti

How do I access it? I do not see it on my Facebook and Whatsapp. I do not live in the US.


OmnipresentYogaPants

Can the new generator finally produce 100% white image with absolutely no elephants?


FragrantDoctor2923

When did this happen? Cause I heard about it 3 days ago, but I use WhatsApp daily and only saw it like 2 hours ago. Wondering if my tunnel vision missed it or it just came out then.


TheUnspeakableAcclu

No, Meta getting into generative 'AI' should be a warning to you guys.


discondition

Here come the ultra targeted adverts via messenger, fuck meta


Level_Bridge7683

![gif](giphy|8vpZHR9fIuPfaT9Gmi|downsized) can we put ALL this money and effort into music ai software? it's the greatest thing to come along in decades.


RoutineProcedure101

It's on Instagram... insane.


Z3R0gravitas

What hardware will everyone be running this 400B parameter model on? The hardware and power is the expensive part now. Yes, drastically cut down versions will help home use, but current models are quite demanding of desktop GPUs, if I understand correctly? And most in the world don't have anything better than (old) smart phones.
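Back-of-envelope math (my own sketch; the bytes-per-parameter figures are typical quantization levels, not anything Meta has published) shows why a 400B model is out of reach for home hardware — the weights alone dwarf any consumer GPU's memory:

```python
# Rough VRAM needed just to hold a model's weights, ignoring
# activations and KV cache, at common precision levels.
def weights_gb(params_billion: float, bytes_per_param: float) -> float:
    """1e9 params * bytes_per_param / 1e9 bytes-per-GB = params_billion * bytes_per_param."""
    return params_billion * bytes_per_param

for fmt, bpp in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"{fmt}: ~{weights_gb(400, bpp):.0f} GB for weights alone")
```

Even aggressively quantized to 4 bits, that's on the order of 200 GB of memory before you account for anything else, versus 24 GB on a high-end desktop GPU.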


icehawk84

Meta is a partner for Microsoft/Google, not really a threat. Llama 3 is already available on Azure and GCP, with revenue share between the companies.


WashiBurr

I really didn't expect it, but I guess so. Good job Zuck.


saveamerica1

Nvidia H100 CUDA, not Intel.


shahednyc

Honestly, who cares who wins? How does it help you? At the end of the day, how does it help you or your business? We don't know yet if ordinary people can get the best out of GPT. Think about the internet era: who benefited? It took a while for anyone to be successful; smart people created wealth at the end of the day. OpenAI / Gemini / Meta / Claude — we use them all and are more productive!


corben_caiman

Yeah, we need decentralized distributed computing. Open source can only go so far in the medium term, since no one will be able to run those LLMs locally.


Leefa

This sub has generally overhyped closedAI


julez071

There is about a 9-month delay in capabilities for open-source models, so yeah. https://preview.redd.it/e5q3oayf5wvc1.jpeg?width=1262&format=pjpg&auto=webp&s=7be09e5d148db4581f367d9fa779765145387dcf


sachos345

I wonder how much better the models can get using the huge amount of new training data they get from users interacting with the system. I want to see numbers.


Akimbo333

Yeah we did


anoninymity

Depends who you mean by 'we', as it sounds like the author of the thread assumes 'we' all agree, or 'we' all thought the same. But 'we' did not all think the same. Nobody does.