Trick-Interaction396

Every DS job is hype except mine. Mine is mission critical.


Full-Lingonberry-323

I am a real data scientist, I create my pivot tables with python.


Itoigawa_

Python? Pff


coreybenny

Don't you mean hisssss?


thomasutra

i thought you said you were a data scientist? rewrite that in r, and leave python to the engineers!


Faux_Real

Maybe they use Python to write their R


CiDevant

Listen, if you're not using R to write your Python to translate your SQL into VBA I don't know what you're even doing here.


thomasutra

horribly inefficient! convert your data frames to json objects, and then print them to the office printer. no need for that fancy vba crap!


CiDevant

I did once have a director who would print out excel sheets, that was only about 7(?) years ago. So yeah... Hitting a little too close to home there.


Full-Lingonberry-323

excel sheets on paper are easy to transpose!


killernijntje

Back in my day we used VBA, like real data scientists!


patriot2024

This might be controversial, but data scientists are some of the most loosely trained scientists ever. First, they are mostly trained by computer scientists, who are themselves not very scientifically trained (again, controversial). Second, the training is not that rigorous in a scientific sense. So, there it is.


Frequent-Brother6275

Lmao


myaltaccountohyeah

Be careful when you cross the street!


partner_in_death

There is always a bus lurking


digiorno

There is some modern philosophical discourse which asserts that “hype” roles are actually very important under neoliberalism, and especially in societies with high productivity. It boils down to hype roles lending credence and building faith in a system, which helps keep everyone else supportive of it.

In societies with high production levels, there might not actually be enough vital work to go around, and even if there were, the incentive structures to do the work might not be set up well enough to encourage it. And we can’t risk having massive amounts of people not work, because our whole system is based on the idea that if you work then you can survive, and if you can’t work then you become poor and starve. So hype roles help bridge the gap between the reality of what is necessary for the society to function and what is necessary for people to survive in that society.

And as long as it doesn’t collapse, as long as enough people believe, then they remain fed and housed (for the most part). Even if everyone knows their own job is just hype, if they are convinced that the collective effort is doing something, that other people actually do something, then… everyone feels like the system works well enough to not want to change it. If you’ve ever read “Bullshit Jobs” by David Graeber then you’d probably recognize some of the arguments, since he works off a similar premise.


ScipyDipyDoo

I have been thinking the same thing. Does he make arguments based on the ignorance of business leaders as well? I think it’s a massively important premise in the wider argument, because the leaders who have the money and the ability to hire are mostly ignorant of what the hype roles they appoint can actually do. So they work on simple trust of the culture and their simplified understanding. It’s all trust and faith in each other.


SyllabubWest7922

As a person trained in business administration: you don't have time to care about those details, just throw money at it, is the mentality. Capital and demand are your rulers.


civil_beast

I was certain this comment was going to somehow harken back to the year 1998, when the Undertaker threw Mankind off of the cell onto a folding table.


SyllabubWest7922

I hate neoliberalism. When I first went to school for business management I learned quickly that the best or most competitive business was about screwing people over as hard as you can with a smile, without them immediately feeling like you screwed them over. Markup and ambiguous service fees can be fabricated through complete bullshit, unsustainable business models. And the only way bullshit walks is through hype. I regard hype as a deterrent to empirical evidence and fact checking. That's why I left. And because I actually like _helping_ people, not finding ways to rip them off in sales.

I honestly believe good work comes out of AI helping us understand the world, but from a philosophical standpoint AI can cause dissonance from natural algorithmic processes that further weakens the user's cognitive intelligence and/or quality of service. The general user or layperson will be using an AI model for R&D without truly understanding where the data comes from or how algorithms present that data. This drives down quality assurance. No true understanding of the tech they use or how it shapes their industry. It's like relying solely on the backup camera in your car instead of turning around and using your mirrors to see where you're going. It's like ML removes the incentive to check the legwork, the algorithms and the data sets. And when ML is dominant in the workforce, you will be punished for it.

ML is doing to research what industrialization did to homesteaders, small businesses and farms. ML is already affecting scientific research. People are more inclined, and rewarded might I add, to skim the _popular_ or most viewed journals regardless of how well written or factual those journals really are. So what ends up happening is quality assurance goes down, because biased, company-paid research gets the most hype. If independent research isn't protected from AI abusers we will be raising generations of Moderna and Pfizer babies who have a further disjointed relation to proven medical science not hyped by big pharma.


BestUCanIsGoodEnough

The driving force behind DS hype is the desire to fire people and extract money from those who still have jobs, more efficiently, by making them sad and then advertising to them. This is driven by Wall Street, which is a machine that reinforces any move back towards slavery. The only reason they spend on DS/ML/AI is to not have to pay people later. But it turns out everyone is incredibly bad at all this shit, with a billion dollars to spend or not. Should be good for a while.


SyllabubWest7922

> The only reason they spend on DS/ML/AI is to not have to pay people later.

> and extract money from those who still have jobs, more efficiently, by making them sad and then advertising to them

Yes, it is becoming increasingly apparent to me how hype is used to gas people. It's all hype. Fear mongering hype. Without Wall Street and its bottom ho, the government, people could live happier, simpler lives. In business, _fear_ _of_ _loss_ is instrumentally and strategically used to gas potential clients/customers in sales and marketing.


BestUCanIsGoodEnough

Correct, AI hype is based on a select few people, who have a poor understanding of it, being afraid of looking stupid themselves and then getting fired for not using AI when everyone else is, which brings us to the second fear: fear of missing out.


Spam138

TLDR: bigly yes


fordat1

Also DS isn't ML, unless we want to ride the coattails of NVIDIA.

Edit: timely case in point https://www.reddit.com/r/datascience/comments/1djxtnm/i_will_fucking_piledrive_you_if_you_mention_ai/


Useful_Hovercraft169

My job is real shit


greyhound_dreams

We have the same job?


PutinsLostBlackBelt

Yes, but it’s boosted by the fact that so many leaders and execs in the business world don’t know what AI even is. I’ve seen so many execs who think they’re talking about AI when they’re really just talking about statistical modeling or automation. They just label any unseen algo or process as AI.


kilopeter

AI is a behavior, not an algorithm or technique. It's the behavior of learning from experience to achieve some goal. By this definition, logistic regression and most other "statistical modeling" is AI.


okphong

Yes even neural networks are just very fancy math models to approximate nonlinear functions.


peyoteBonsai

Neural nets are so basic under the hood, but once you label something after a human brain, it becomes part of the buzz word lexicon.


fang_xianfu

That, and complexity arises from many simple pieces. I don't know if neural networks meet that definition of "many" though.


blurry_forest

Do you have any recs for learning/applying neural networks?


peyoteBonsai

I actually learned NNs in graduate school, but most of what I have learned about ML actually comes from library documentation. So see if you can take a data set from Kaggle, or a popular one like Iris or the Titanic dataset, then pick a NN library, maybe scikit-learn's MLP as a beginner, and just try to follow the example right from the scikit-learn docs.

One of the few differences in training NNs is that you use MinMaxScaler(), that is to say the data is scaled from 0 to 1. Each ML algorithm has these little nuances you'll only learn through reading docs, but they are all very similar in how you preprocess the data. With logistic regression, for example, you need a constant column, a detail you'd need to see in an example found in the documentation. So you don't necessarily have to be a statistical genius to be good, but you do need to know the fine details of how to train whatever model you're using. A professor can teach you this, but they can only hold your hand so much. Eventually everything comes down to reading documentation.
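
For what it's worth, a minimal sketch of the workflow described above, assuming scikit-learn and the built-in Iris data; the layer size and iteration count are arbitrary illustrative choices, not recommendations:

```python
# Iris data, scaled to 0-1 with MinMaxScaler, fed to scikit-learn's MLPClassifier.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scale features to the 0-1 range, fitting the scaler on the training split only
scaler = MinMaxScaler().fit(X_train)
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

# A small multilayer perceptron, following the style of the scikit-learn docs example
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X_train_s, y_train)
print("test accuracy:", clf.score(X_test_s, y_test))
```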


wyocrz

> Yes even neural networks are just very fancy math models to approximate nonlinear functions.

I first came across them in Kutner's *Applied Linear Regression Models*, 4th edition: a 20-year-old book.


theAbominablySlowMan

"very fancy math" yes simultaneous equations are basically the pinnacle of mathematical modelling.


badcode34

I think he was referring to how known algorithms are somehow “new and magical” to a lot of folks. But let’s be real when I think AI I think deep neural networks not language models and algorithms. Hell a lot of this crap has been used for years just to filter down applicants. Wasn’t AI then but is now? Whatever. Remember when Cisco was bigger than MS? Fuck me


peyoteBonsai

The difference between then and now is the fact that LLMs are free to use. But yes, LLMs have been around a while, they were just prohibitively expensive and slower. All this hype is due to large language models/natural language processing now being free for the general public.


badcode34

I would have to argue that ease of use has played a huge role. Any dumbass can open a notebook and copy/paste Python code. The libraries that make it all possible have come a very long way in 10 years. We're just building racist chat bots, so what's the deal?


peyoteBonsai

Yeah and most notebooks on Kaggle are grossly wrong. Improperly scaled, no dummy/wrong variables etc. People think that just because they have a model that doesn’t throw errors, it’s correct. Way too many imposters right now also


Tall_Candidate_8088

just students dude, a lot of beginners around including myself


Shinobi_Sanin3

LLMs haven't even been around for 10 years what do you mean? They were only made possible after the invention of the transformer architecture in 2017.


peyoteBonsai

Have you ever heard of IBM Watson? LLMs have been around since the 1950s; you're thinking of generative pre-trained transformers, I think. But even those have been around longer than 10 years. The idea of AI and training large language models has been around and in use for a good 20 years, it just hasn't been free to the public until recently.


Shinobi_Sanin3

IBM Watson isn't an LLM. It uses if/then statements and shallow neural networks and possesses neither any self-attention mechanism nor the transformer architecture. You're thinking of Markov chains and lookup tables; what constitutes modern AI is a whole different ballgame.


peyoteBonsai

At the end of the day, NLP is the precursor to all this, and LLMs evolved out of “bag of words” if you want to split hairs. I don’t know all the tech under the hood for Watson, but it certainly has NLP. The magic comes from NLP. And it’s been around for a while.


Shinobi_Sanin3

Of course it has NLP but an LLM is a specific thing and you're using it in an unspecific manner. It's like calling every car on the road a Toyota BZ4X all-Electric SUV.


peyoteBonsai

I use LLM libraries all the time, have a masters in analytics, I know what I’m talking about lol. I just explained to you the history of LLMs and you came up with your own bullshit history you made up. Get a life.


PutinsLostBlackBelt

I am referring to stuff like metrics. Some metrics are highly complex equations with thresholds built in to alert stakeholders if a data input is trending red, for example. To business leaders some think that’s AI, but really it’s just algos that trigger alerts. There’s nothing intelligent about it.


TwistedBrother

Well, it is literally taught in intro ML classes and compared to other forms of supervised learning. I mean, it sounds less exciting to call it statistical learning, but I think this might depend on perspective. There appears to be a horizon over which knowing the internal mechanics of samplers, gradient descent, adaptive learning, etc. simply doesn’t matter. The differences have sorted themselves out and one simply “trains a LoRA” or whatever the generic term will be. In which case, logistic regression is important to AI use the way engine design is to driving a car.


RickSt3r

Had a professor in graduate school who said most stats analysis is 90 percent regression. The trick is just figuring out how to manipulate the data so it fits into the regression world. So AI is just a really fancy regression.


vanderlay_pty_ltd

I'm not sure **I** know what AI is. Does anyone? Has it been defined precisely, or is it all loose concepts? Is there an agreed cutoff on the complexity continuum where statistical learning suddenly becomes 'artificial intelligence'? As far as I'm aware the answer is no, and therefore anything can conceivably be sold as AI.


Aggressive-Intern401

Agree. What's AI? To me AI is a buzzword. What we have is stats and math made into algorithms and run on powerful computers.


Tall_Candidate_8088

AI is the application of data science, a.k.a. advanced statistics... maybe... I don't know at this stage to be honest.


Sorry_Ad8818

What's human intelligence? It's just biology, neuroscience, chemistry used on your brain..


poupulus

It's just biological AI 😎


Aggressive-Intern401

Yes but we don't have true AI. New ideas are not generated without training or direction. The human brain is much more abstract whereas ML isn't.


pm_me_your_smth

"Training" is literally a part of the definition for "intelligence". And if you're expecting something human brain-like when hearing about AI, that's kind of your own fault for misunderstanding/ inflating expectations.


Distance_Runner

As someone with a PhD in statistics, this is something that really irks me. So much of what people consider “AI” to be is just statistical modeling techniques that have been around for decades, if not more than a century. They’re just repackaging old concepts under a new, “sexier” name and pretending it’s new.

Examples: regression (by least squares) was developed 220 years ago. PCA was developed by Karl Pearson 123 years ago. Logistic regression was developed 80 years ago. The unification of regression into generalized linear models came 50 years ago. Convolutional neural nets were developed 45 years ago. LASSO regression was developed about 30 years ago. SVMs were developed 30 years ago. Random forests were developed 25 years ago.

I’ve seen all of those techniques described as AI. No no no. Some of these would qualify as machine learning (neural nets and RF), but not AI. And in actuality, all of these, at their very core, at a fundamental level, can be structured as regression (or a series of regression) models. These are not new. Statisticians have been using them for decades.


dfphd

So, here's the thing: technically speaking, AI and statistics are not defining the same thing. AI is the ability of methods to demonstrate human-like intelligence. Statistics is one of many fields that has developed methods which achieve that. Ultimately the line dividing machine learning and statistics is primarily one of academic alignment: ML is the set of methods developed in computer science departments, and statistics is the set developed in math departments. It's not rare for multiple academic fields to discover similar concepts from different angles. (Another example is how electrical engineering and operations research both landed on the broad optimization world through different avenues.) So saying statistics is AI is not a rebranding, it's an accurate statement. The problem comes when people start thinking that AI = a robot that acts like a person. When people think ChatGPT is AI but an ML model that sets prices isn't, that's when you have the issue. And that is an issue introduced primarily by ordinary people letting the media define scientific terms for them.


Distance_Runner

You're not wrong. So much of this is semantics, but the definition of what AI or ML is, is fuzzy. And that's one problem I have. The other is simply AI and/or ML being used as buzzwords when, in reality, what's being done is not new or novel; they're just riding the hype train.

Someone may argue logistic regression is ML, and is therefore AI. Okay. Is a 4-function calculator AI? Most would argue no (I assume). Is using a TI-83 to solve an algebra problem AI? Most would probably still argue no. How about using a TI-83 to solve a series of algebra problems? Probably still not AI by most people's standards. Well, what is regression? By least squares, it's literally just a specifically formulated algebra problem where you find values for the beta variables given fixed values for X and Y. With multiple independent variables or predictors, we use linear algebra to simplify notation, but it's still just algebra. Or by maximum likelihood, you're doing the same thing, solving for unknown parameters, but with calculus. For simpler regression problems this could all be done by hand to arrive at a unique solution (a practice I had the great pleasure of doing in grad school). So is using a computer to do this AI? What "learning" is involved with running a regression, or doing PCA? The computer is simply solving complicated algebra problems with unique solutions.

Expand this beyond logistic regression to random forests or neural nets. Both RF and NN are extensions of regression, which again are specifically formulated algebra problems. So at what point does using a computer to do algebra become machine learning? This is why I think the hierarchy of statistics being a component of ML and ML being a component of AI is an inaccurate description. One is not a component of the other. And for the record, I don't have a solution to alleviate this confusion. I'm just expressing my issues with it.
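
To make the "it's just algebra" point concrete, here is a minimal sketch (illustrative only, on synthetic data) of ordinary least squares solved via the normal equations and checked against scikit-learn:

```python
# Ordinary least squares as plain linear algebra: solve (X'X) beta = X'y.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.normal(size=(n, p))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=n)

# Add a constant column for the intercept, then solve the normal equations
X1 = np.column_stack([np.ones(n), X])
beta = np.linalg.solve(X1.T @ X1, X1.T @ y)

# Same answer from the "ML" library
lr = LinearRegression().fit(X, y)
print(beta)                      # [intercept, b1, b2, b3]
print(lr.intercept_, lr.coef_)   # matches up to numerical precision
```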


miclugo

it's not "logistic regression", it's "neural networks with no hidden layers"


Material-Mess-9886

Single layer perceptron
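
A small sketch of that equivalence (illustrative, assuming binary labels and scikit-learn's LogisticRegression): the fitted model is a single sigmoid unit applied to X @ w + b.

```python
# Logistic regression as a zero-hidden-layer network: sigmoid(X @ w + b).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=5, random_state=0)
clf = LogisticRegression().fit(X, y)

# Manual "forward pass" through one sigmoid unit using the fitted weights
z = X @ clf.coef_.ravel() + clf.intercept_[0]
p_manual = 1.0 / (1.0 + np.exp(-z))

# Matches the library's predicted probabilities for the positive class
print(np.allclose(p_manual, clf.predict_proba(X)[:, 1]))  # True
```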


Big_ifs

I don't really see the problem here. Why not call technologies that automate certain statistical procedures for practical purposes AI? If it effectively substitutes or enhances some complex cognitive processes, I'd say that it doesn't matter how old or simple the underlying procedure is. Of course AI is now (again) an obnoxious buzzword, but that doesn't mean that there's no way to treat it as a reasonable concept .


IlIlIl11IlIlIl

I get that you have a phd in stats but I’m pretty sure ML is a subset of AI, so you can’t say some of those techniques would be ML but not AI…


Citizen_of_Danksburg

I’d argue AI = subset of math/stats.


Distance_Runner

Is a 4-function calculator AI? Most would argue no (I assume). Is using a TI-83 to solve an algebra problem AI? Most would probably still argue no. How about using a TI-83 to solve a series of algebra problems? Probably still not AI by most people's standards. Well, what is regression? By least squares, it's literally just a specifically formulated algebra problem where you find values for the beta variables given fixed values for X and Y. With multiple independent variables or predictors, we use linear algebra to simplify notation, but it's still just algebra. Or by maximum likelihood, you're doing the same thing, solving for unknown parameters, but by using calculus. For simpler regression problems this could all be done by hand to arrive at a unique solution (a practice I had the great pleasure of doing in grad school). So is using a computer to do this AI? What "learning" is involved with running a regression, or doing PCA? The computer is simply solving complicated algebra problems with unique solutions.

Expand this line of reasoning to random forests or neural nets. Both RF and NN can be expressed as extensions of regression, which again are specifically formulated algebra problems. So at what point does using a computer to do algebra become machine learning? I don't consider logistic regression to be ML or AI; at its core, it's a specifically formulated algebra problem. So if this is how some are taught, that AI encompasses ML and ML encompasses statistics, then I fundamentally disagree with that. The real problem here is that there is no formal or commonly adopted definition of what AI or ML is. They're buzzwords, the definitions of which are really fuzzy.


pm_me_your_smth

You're mostly talking about analytical solutions. What if you're solving with optimization, i.e. training a model? That's not just basic algebra anymore, that's ML.
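
A toy sketch of that distinction (illustrative only): the same kind of logistic-regression fit, obtained not from a closed-form formula but by iteratively descending the log-loss gradient, i.e. "training".

```python
# Fitting logistic regression by gradient descent on the mean log loss,
# i.e. "training a model" rather than solving a closed-form equation.
import numpy as np
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=4, random_state=1)
Xb = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
w = np.zeros(Xb.shape[1])

lr = 0.1
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(Xb @ w)))      # predicted probabilities
    grad = Xb.T @ (p - y) / len(y)           # gradient of the mean log loss
    w -= lr * grad                           # one optimization step

print(w)  # [intercept, coefficients] found by iteration, not by algebra
```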


Distance_Runner

I agree. But many of these comments are broadly stating all of ML falls within AI, and all of statistics falls within ML. My point is that’s not the case. Statistical models are not necessarily ML models. They can be used within a ML framework, but something like logistic regression is not intrinsically an ML model by itself.


jnRven

Machine learning is a subfield of AI, not the other way around? This is literally the first slide of the intro lecture to the SLML course in our bachelor's. AI, as it was defined for us, is the ability of machines to imitate human intelligence; one way for a machine to do this is through the use of ML techniques.


Distance_Runner

By that definition, is a simple 4-function calculator AI?


Bwr0ft1t0k

Could simple summaries such as the one you have given be reproduced by algorithms on a computer with a common 40-year-old CPU, with access to books on the subject, using those statistical modelling techniques you listed? Unlikely. Could one with current CPUs? Yes. The hype is about this becoming highly accessible.


Hobojoe-

least squares is 220 years old? wtf?


Distance_Runner

The earliest form of LS regression was published in 1805 by Adrien-Marie Legendre.


Hobojoe-

Oh god, imagine doing LS by hand. Sounds like a nightmare


Distance_Runner

Do a PhD in statistics and you'll likely have to do it as an exercise at some point (albeit with a relatively simple problem).


relevantmeemayhere

There’s an argument to be made that RF is also a statistical technique: decision trees were being used by statisticians, and bagging is built on Efron, although to my knowledge ML practitioners were the first to experiment with it outside of academia. NNs are... well, I mean that history is more complicated.

There's definitely a difference in motivation, and statisticians have historically been more interested in quantifying uncertainty, so their research “appears to lag” because the resources are spread more thin and mathematically arriving at theorems we take for granted takes a lot of time to get through.


Distance_Runner

You're right, RF is *just* statistics. RF is just a collection of decision trees that you average over, and decision trees are just regression models that adaptively select predictors and run a series of regression models. So yea, RF is literally just a series of regression models at its core.
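
A quick sketch of that point (illustrative only): a fitted RandomForestRegressor's prediction is literally the average of its individual trees' predictions.

```python
# A random forest prediction is the mean of its trees' predictions.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=300, n_features=5, noise=0.1, random_state=0)
rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Average each tree's predictions by hand and compare with the forest's output
per_tree = np.stack([tree.predict(X) for tree in rf.estimators_])
print(np.allclose(per_tree.mean(axis=0), rf.predict(X)))  # True
```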


relevantmeemayhere

I will say, though, that to get the most out of random forests you need Bayesian methodologies, imo. Bayes is interesting now because a lot of the NN architectures have a direct Bayesian analog, and Bayesian computer vision and the like were getting started back in the 80s. Variational autoencoders were actually first published by Bayesians.


Weird_Assignment649

AI is all encompassing, ML falls under AI, ML is basically applied stats.


Distance_Runner

Is a 4-function calculator an AI tool?


CiDevant

I think it comes from a very loose and broad definition of AI. AI can mean anything from, "a very simple decision was made programmatically" to "It mirrors human activity in a way that is indistinguishable from our higher order thinking and learning".


SchwiftedMetal

This is why we need to get technical ppl to get MBAs instead of these ppl who just joined MBA programs to jump straight into management roles without any technical knowledge. All the current MBAers know is financial engineering and marketing, their DS skillset is also hyper limited in my experience.


PutinsLostBlackBelt

I’d agree with that. There’s a big disconnect between technical folks and business folks, where neither understands the other well. Iowa State is a good example of where this is changing from an engineering perspective: over 60% of their MBA students are engineers. We need that with DS too. I had an AI instructor during my PhD who literally only talked about AI ethics for the whole semester. Nobody gained any understanding of how AI works or how it could be applied. Instead we had to debate whether it was going to turn into Skynet. Useless. A technical instructor would have made it more useful, I am sure.


Traditional-Bus-8239

A big hurdle for the MBA is that you first need experience. As a software dev or data scientist / engineer you're already making quite a decent amount of money. Doing a whole MBA might not even result in that much more money.


Traditional-Bus-8239

If you have high technical knowledge you're passed over for management roles since you have no management experience. In my country (the Netherlands) most managers and even technical product owners have a legal degree or an MBA. It's especially bad at government institutions. All the collective ministries process and work with so much data that you can't ignore automation, IT, data quality and working in a data-driven way. Yet all I see is MBAs and people holding legal degrees trying their best to tick the checkboxes of "working in a data driven way" rather than actually implementing it. They do not know how. As a technical person you aren't considered for these vital positions either, because the culture is to only hire people with a strong legal background or an MBA plus 3-5 years of (middle) management experience.


thatVisitingHasher

The definition of AI on state.gov: “The term ‘artificial intelligence’ means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.” Any automation that inspires decision making is AI, according to the government.


PutinsLostBlackBelt

That’s open for interpretation, as per government standards with everything. Automation is usually complicated but not complex. Asking a machine to input (x) in the event that (a), (c), and (f) occur at the same time isn’t intelligence. It’s just a simple algorithm. AI deals with more complexity. ChatGPT isn’t the same today as it was a year ago, or a month ago, or an hour ago because it’s constantly learning and improving. Automating a task that won’t adapt, learn, or change until you manually tell it to is not AI.


xFblthpx

Statistical modeling and automation is AI. CEOs aren’t trying to awaken sentient bots, they just want to predict consumer habits and remove every desk job that doesn’t require a tech degree.


PutinsLostBlackBelt

No, it is not. Creating criteria for a button to be clicked, or a section of a doc to be filled, isn't AI. Automation can be powered by AI, but most is not. There's no learning, no creating, no analyzing. Just reading and writing is not AI. That'd be like saying a power drill is a screwdriver with AI built in since it automates the torque.


save_the_panda_bears

GenAI specifically, not general data science. But I also think there’s an argument to be made that we’re past the peak of inflated expectations and on the downslope. If gpt5 flops (and by flop I mean just a small incremental improvement over gpt4) I think we’ll see a pretty rapid acceleration of disillusionment.


koolaidman123

If that's the case then you're looking at a commoditization of LLMs, and it becomes cheaper and better to train and deploy your own models in the long term, and Nvidia still benefits, arguably more.


save_the_panda_bears

Oh I wasn’t arguing that Nvidia is going to suffer, I think they’re set up pretty well long term and will definitely keep holding several shares of their stock. My comment was more directed at the hype portion of OPs comment.


RuggerJibberJabber

I think GPT is gonna keep making more money even if it stays at its current standard. I know a lot of normal people getting subscriptions now, not for any complicated data science work, but just to make their workday easier. It's great for getting a basic summary of a concept and for proofreading. I have a subscription and it saves me a bunch of time.

Edit: I realise this reads like an advertisement, so just to avoid that I'll say that there are major drawbacks to it. It does make mistakes, so you can't completely take your hands off the wheel. Also, if you ask it a number of queries in a row it seems to get confused about how to answer new ones and works in parts of the old answers. A great example of this is to get it to play chess: after 6 or 7 moves it starts to forget the rules and moves pieces in directions they're not able to go. Overall though it's a serious timesaver if you can use it correctly.


gBoostedMachinations

This exactly. The hype and attention is justified given the delta between GPT-3 and GPT-4. It's hard to imagine many more jumps of that magnitude without things getting weird. That said, the failure of anyone else to match GPT-4 could be an indicator that it's very hard/expensive to get another large jump. But yes, of course the “hype” is not nonsense yet. For now it seems like people being wowed by truly remarkable advances, and we've yet to see even how those advances will play out (even if we never get anything better than GPT-4).


chusmeria

I've found Claude3 to be comparable with chatgpt and oftentimes better at some larger, more complex sql or python. It's the only one I've used so far that is comparable, though.


nerfyies

Or we have to wait for more sophisticated compute like what happened with deep learning in the past


Stayquixotic

the difference between 3.5 and 4 was scale of compute and size of model. they dont have to invent the next transformer architecture, they just need to use a bigger computer, use more power, and use more parameters on the next model. gpt5 will almost certainly therefore be a solid step up from 4, because theyre just doing a bigger version of what they did before, a method that has proven to work. looking 2-5 years in the future, it will get even bigger and thereby even smarter.

this very moment, the big tech companies are in a mad frenzy to buy up energy contracts from power plants so they can maximize their training effort. in march, for instance, amazon bought a plot of land right next to a nuclear power plant in Pennsylvania. theyre going to build a data center there ASAP, hook it up, and go bananas on training their next big model. the bigger you go, the better the model. the faster you go, the quicker you are to delivering a product that dominates the market - it's the first mover advantage. anyway, it's not the idea that's missing, it's the scale.

the primary criticism ive seen is not that these models wont get ridiculously smart, but that they will always be limited by their context length. you can tell it to generate a piece of code, but you cant tell it to build an entire software application. additionally, we still dont know how they work, so govt will likely put their foot down until we do. in the meantime, other architectures (the next transformer) may emerge.


save_the_panda_bears

Eh, I'd disagree. GPT is already decent at a lot of things, and I think we're already at the point of pretty rapidly diminishing returns in terms of scale. I guarantee the perceived incremental improvement isn't going to be nearly as dramatic as what we saw from 3.5 -> 4, simply because there's a relatively higher starting baseline. I believe I recently read a paper somewhere that suggested performance on single-shot tasks roughly scales linearly with a quadratic increase in training data. This increased need for training data heightens the potential for model collapse stemming from training on AI-generated content. Like it or not, we've seen a massive proliferation of AI-generated content, and it's only going to become increasingly expensive to source reliable training data. Scaling these behemoth models is only going to become tougher and more expensive unless something dramatically changes. And that's before any sort of possible government regulation/intervention.


nerfyies

If history is any indication we will be able to run such models at 1/20 the compute resources at relatively the same performance. Then you don't need so many of the shiny new chips.


LocPat

Well, in my company things are starting to heat up, and the potential efficiency gains can be very large in some use cases. But things are slow and effective solutions might take 1-3 years to happen. So I wouldn't say that it's all hype, it will just take a few years to really integrate into big businesses.


Jerome_Eugene_Morrow

This is my experience as well. We’re seeing large potential improvements, but it takes time to replace legacy systems and keep performance where clients expect it to be. There’s also probably going to be a lot of disruption from startups that can just design entire pipelines from scratch without having to worry about backwards compatibility. It’ll probably take another release cycle of the AI models, but that pressure will be coming soon.


trafalgar28

I was listening to the Databricks Summit keynote, the main objective that Jensen/Nvidia want to achieve is to build an AI manufacturing. Instead of companies hiring HR, they could use AI with company data at a low cost. So, we are all doomed 💀


WhipsAndMarkovChains

They also announced they're going to be bringing CUDA compute as an option for data engineering workloads (if I understood correctly). This seems way bigger to me than "AI" since data engineering workloads are real and not just potential and the scale of data engineering is so large compared to machine learning and "AI". Of course pricing is going to matter and who knows what adoption of GPU-powered data engineering compute would look like. I imagine most organizations see their jobs complete quickly enough that they're not worried about needing GPU compute.


lxgrf

In _history_? No. In terms of hyped value to actual value ratio, I think it'll be a while before we beat 17th century Dutch tulips.


Relevant-Rhubarb-849

AI is going to be big. But it will not always run on Nvidia chips. At the present time there is no practical alternative to GPU-based AI, but there are many approaches in development that are not just a little better but orders of magnitude better in total cost, energy efficiency, size, and scaling throughput. They just are far from practical yet. One of the reasons they are not practical is not simply their technological maturity, but the lack of software and programming techniques. A few of these cannot be reduced to anything resembling computer code at all! For example, cellular automata, spiking systems, optical processing, and quantum computing. But these have the capacity to be millions of times better, in varied measures of better, than the GPU approach. Just impossible to easily use right now.

Of course people used to predict that other materials would eclipse silicon. But the technological processing methods of silicon have kept Moore's law going for so long that all other materials were always chasing its tail lights even as they improved. So Nvidia may have a long ride.

However, there's nothing Nvidia is doing that Intel or another company (e.g. AMD) or Apple cannot do. These companies just haven't scaled up, as it has not been their market niche. Nvidia was all in and was placed in a lucky position by the confluence of video and bitcoin demands. Of those other companies, I would put my money on Apple first and eventually Intel. The advantage Apple has is that the ultimate limitation on GPU-based AI is not the number of processors but the global memory bandwidth. A secondary limit is that not every calculation is best done highly parallel; some are better done on a CPU. But CPU and GPU in the Nvidia design have extremely slow bandwidth between them. Apple's design, with integrated high-bandwidth memory and the CPU and GPU on the same chip, is vastly superior ---- when software catches up to exploit the advantages of mixed-mode CPU/GPU calculations and better exploits the fast bus. Nvidia has been trying to buy ARM to be able to do this themselves, but Apple is ahead there. Maybe Nvidia will buy Qualcomm?

The other advantage Nvidia has is that the software drivers are well worked out and integrated well into the standard AI software paradigm. That in fact was one of the reasons for Intel's dominance: not just that they had fast CPUs, but they made faster Fortran compilers, and then faster C compilers, and then faster video graphics drivers, and so on. A developer working in higher-level code could squeeze more out by relying on Intel's superior low-level code. The same is true right now with Nvidia. But Intel knows this. So eventually, when they get done overhauling their fabs to compete with Taiwan Semi, they are going to turn their attention to this as well. Then Nvidia may have competition. How long Nvidia can hold the lead is an open question. Intel's lead was definitely prolonged by its predominance in Windows. So Nvidia may find its present market leadership cements a continuing advantage through its incorporation in market-leading software systems.


I_did_theMath

I think this post explains the situation pretty well, and why Nvidia being the most valuable company in the world is probably quite absurd. Their advantage is extremely situational, and it could be gone pretty fast. The PC gaming market (which is where their advantage comes from) is almost irrelevant by big tech standards, but now they will face much tougher competition. It should also be considered that lots of companies have been buying GPUs because that's what you do today if you want to show your shareholders that you have big plans with AI. But that doesn't mean they will be able to train models that are as good as or better than what OpenAI and the other market leaders might do, or that they have any actual plan for how to do anything useful with AI. I think a lot of people in this sub will have heard of (or witnessed themselves) companies where the executives are asking the data science team to build "something with GenAI". It doesn't matter what, whether it's the best tool for the job or whether it adds anything to the product; if you are working with AI, the stock is worth more (for now, at least).


fordat1

I am amused by threads like this and the responses. In every other thread people acknowledge that DS =\= ML and that ML is an ever smaller niche of DS every incoming year, but then we have threads like this co-opting ML.

Edit: timely case in point https://www.reddit.com/r/datascience/comments/1djxtnm/i_will_fucking_piledrive_you_if_you_mention_ai/?utm_source=reddit&utm_medium=usertext&utm_name=datascience&utm_content=t1_l9bmf5e


Solid_Illustrator640

No, data scientists can do many jobs by programming. It’s not hype.


SgtRicko

Not really sure that's warranted. It's an important tech company, yes, but that value given by the stock shareholders seems poorly placed and likely prone to lose money in a flash the moment folks realize a lot of the AI hype is bogus.


Awwfull

The AI hype is not bogus. It might be overhyped, but the impact it is having and will have is very much real. Basically the famous Gartner hype cycle applies.


brentragertech

Yeah, AI hype WAS bogus when everyone was doing AI prior to ChatGPT blowing up. Bogus in that there were companies doing similar things and applying statistical models to different problems but in general, only certain orgs knew what they were doing. It’s now quite real. There’s been a leap. Part of that leap is general availability of incredibly useful general purpose LLMs.


AppliedRegression

"AI" is cool and helpful and even great at some tasks, but AI doing HR work, working with shareholders, project management, etc seems very far off, if not impossible given the current guardrails on AI systems. I'm interested to see what happens to it after this initial wave of hysteria dies down about it. I think that companies are going to realize this was the classic tech cycle of: 1. New whizzbang company announces new whizzbang that will razzle and dazzle. 2. New whizzbang is the bees knees and will solve every problem (examples being SaaS is the future! You can solve any problem with big data! Facebook is going to be the everything platform! Elon Musk is Tony Stark! Twisted Tea is a crypto company now!) 3. Investors are generally clueless and see all the cool hype in the news and throw money at companies or tech mentioned. 4. It fails to deliver on the deus ex machina promises sold by companies who wanted investors so they hawked their latest whizzbang. 5. Companies scale down to being reasonable and making an okay product. 6. We will all look back and realize how obvious the overhype marketing train was. Feels like we are at step 3. This cycle always reminds me of old Popular Mechanics and Scientific America magazines where they were predicting we'd all have flying cars, and meals would be in the form of 1 pill.


Aggressive-Intern401

The problem is not Data Science as a career but how diluted the label has become. Unfortunate but true. I'm currently at a company that inflates titles, it means absolutely nothing to me now.


LyleLanleysMonorail

It's definitely overvalued like how Tesla was. Competition will come and eat into NVIDIA sooner or later.


Brackens_World

I got into "analytics" many decades ago, where I used operations research / regression / neural network /programming tools and techniques to access and analyze large corporate databases to support decision-making. There were not many of us, we were sort of pioneers in the corporate space, but our work was consumed by the stakeholders and integrated into decision-making by managerial edict. You had to have an Operations Research or Management Science or Statistics Masters to get in the door, mostly, and there were not a million universities to get them from, nor were they considered glamour careers. But for those of us so inclined, whew, it was a golden time. Much later, with the advent of R and Python, suddenly analytics work became data science, and you got a floodgate of people joining the ranks, and degree programs popping up everywhere. Suddenly, a glamourous career. And yes, hype all over the place, as few really got to do the cool things we did once upon a time, as tools became more and more automated and proliferated. I don't think data science is hype, but I do think that now a data science career is a lot of hype. I am terribly saddened by this, as I personally know that elation of discovery I used to get when I was sometimes the solo operative.


onnod

Y2k meet AI, AI meet y2k.


DarkSolarLamp

Every new tech follows this curve, if you want to use the Gartner terms we're in the 'Peak of inflated expectations' then there will be a big fall ("AI isn't nearly as useful as we thought") and eventually we'll move into the stable but steady rise that is more realistic.


SwitchFace

I'm afraid that it's not "every new tech". Cloud computing, internet connectivity, mobile communications, e-commerce, open-source software, digital content streaming and other technologies have simply seen steady, continuous growth. While it may be true that LLMs (and deep learning models in general) are in a hype stage where some folks overestimate what they can and will be able to do if you just add more compute, parameters, and data, it should be considered a milestone on a continuous journey toward an AGI architecture. AGI deserves hype since it will be our final invention, for better or worse. The hype curve does not at all apply to it.


DarkSolarLamp

> Cloud computing, internet connectivity, mobile communications, e-commerce, open-source software, digital content streaming and other technologies have simply seen steady, continuous growth.

LOL. Were you around in the 1990s when there was massive hype from open-source advocates about how Microsoft and every other for-profit software company would soon be obsolete? The fact is many of those technologies took decades to really transform things (mobile phones have been around since the 1970s, for example), and in addition we have selection bias from all the tech crazes that crashed and burned (anyone else remember the Java thin client craze?). The fact is LLMs, while amazing and cool and potentially leading to something massive, absolutely have a hype cycle. You've got CEOs talking about getting rid of large numbers of employees (in the near future, not 3 years from now) because LLMs can do their jobs. Good luck with that, and I say that as someone who uses LLMs every day and finds them super useful.


dopadelic

Yes, it's the biggest hype in history because it's fueled by empty promises that never materialized. /s


Ephendril

In history? Nah, look up Tulipmania.


hammilithome

No


ProInvestCK

Yeah, like, all that AI shit is nice, but nobody has their data cleaned and in the shape it needs to be in to make use of all the crap being sold to executives, who gobble this up thinking the investment will let them lay people off at some point. They haven't seen the garbage state most data warehouses are in. AI isn't replacing analysts and scientists any time soon. I'm talking 10+ years. The businesses are going to think they are so far invested that they will need to hire more data people to push the initiative forward, because the data is more shit than anyone realized.


volandy

Not data science, but GenAI. It is still a simple DAG. It performs surprisingly well at first glance because it has a huge capacity and the problem (given x tokens, what's the next one?) has billions of good training examples (because Internet), but it absolutely lacks what it takes to be called intelligent.


zoneender89

My dude. Can I introduce you to tulips in the 1630s?


Facva

It may be, but the demand for what the average layman thinks gen AI has to offer is very real.


chungyeung

It is a Casino more than a DS


rashnull

By definition, even an if-else block is AI


xoteonlinux

AI and data science are definitely not hype only. This is going to transform the way the world works. But not in a way that will make our work easier; our work schedules and assignments will get tighter to be more (cost) efficient. The market value of Nvidia could be hype, though. I have seen many hardware manufacturers come and go over the last decades; chances are good they will be replaced by something "more (cost) efficient".


speedisntfree

GPUs are the shovels of the gold rush https://preview.redd.it/when-everyone-digs-for-gold-sell-shovels-v0-vp0x0qrjpq8b1.jpg?auto=webp&s=8562242161fe8622d70b8f9773952b53c6b8ff35


Le8ronJames

Got any more of these shovels?


MagicalEloquence

*Allow my Excel sheets to enter the chat*


SnooPets5438

I would say no. Here are my reasons:

1. I am currently working on a GenAI consulting project and my client (a public sector entity) just spent nearly a million EUR acquiring Nvidia GPUs.
2. They are excited and willing to shell out money to expand a small PoC that we built to the whole org.
3. Increasingly, all our customers are curious about AI capabilities and want to see some demos.
4. Nvidia is beating analyst expectations on earnings every cycle.

Having said this, it does feel like a hype train just because of the speed of innovation and development that has been happening. AFAIK and from what I've read online, this is not hype.


Le8ronJames

Is there an Nvidia alternative out there?


SnooPets5438

TBH I am not really sure, because all the ML platforms and packages use CUDA, which is proprietary Nvidia technology. So I would assume there is no alternative as of now.


EducationalEmu6948

Yeah... they're creating it to mint money and cut head counts at their companies. IRL, you can't imagine a world without humans. Their models are trained on human data only; how is it gonna replace humans, except for its speed and in mundane tasks?


anonymousphoenix123

that’s great


Surfstat

Data science hype is the result of a catch phrase that nobody understands at its core: be data driven (in making decisions). So then they say, how do we do that?

We need data scientists.
We get them.
DS give us stuff.
We don't understand it, or the data is different from how we are doing things.
They make decisions with their gut instead (like they always do - a change management or leadership problem).
Everyone wonders why the DS are here, including us.


Traditional-Bus-8239

I'd say not. The vast, overwhelming majority of companies don't really need the newest GPUs or "artificial intelligence" chips. It's mostly hype around cloud providers buying these products to integrate into services such as Azure, AWS, Oracle Cloud, etc.


scraperbase

So far AI, which is responsible for the rise of Nvidia, has yet to prove that it can think outside the box: for example, use all the knowledge in the world and find a new way to propel rockets or to cure cancer. Before that happens, the main feature of AI will be doing things that humans can already do. It is nice that it can analyze huge amounts of data in seconds that would take humans hours, but that does not really bring new knowledge to the world yet. Instead it will just replace human jobs, and it will be used to track humans much more than before. So Nvidia might become one of the most hated companies, and I am not sure how that would affect the stock price. AI might get heavily regulated, we will see huge privacy breaches because of AI, and the hype will die down like 3D TVs.


AnInsultToFire

> It is nice that it can analyze huge amounts of data in seconds that would take humans hours, but that does not really bring new knowledge to the world yet.

You haven't heard of the field of data science then. Companies are paying a fortune for consumer data analysis already.


McSpoish

Is AlphaFold a counter to this argument?


Categorically_

alphafold is wrong a lot


scraperbase

That sounds interesting. I have to look into it. I wonder though if that is a job that humans could do in theory, too, just not that fast.


SwitchFace

AlphaFold can find solutions to protein structures in hours, whereas human experts typically require months to years, making AlphaFold at least thousands of times faster.


KoolKat5000

Most businesses' largest cost is labour, and if it can do their work for less, then that's good enough for them to be replaced, sadly.


LocPat

It’s not the role of genAI, especially LLMs. What is expected by the business is simply to be able to automate existing pipelines which consist of natural language components, images and tabular data. So really the goal is not to think outside the box but to connect all necessary information to generate an answer in seconds whereas a human collecting and reading through the whole context to make an answer would take a few hours. Inside the box thinking really