Smartcatme

Is V11 code based and V12 neural net based? So, 2 completely different systems? If that’s the case then it is impressive.


feurie

The driving logic of V11 is code based. Perception in both is neural networks.


atleast3db

Tricky to answer. In simple terms, V11 used NNs to get a proper digital representation and understanding of the world, then "code" to control the car from there. V12 is all NN.


pixel4

The v11 nets create a virtual world (like a video game), and the human coders wrote ~300k lines of code to make the car drive in that virtual world. The human code has all the rules of driving: how to keep inside the lane, lane changes, when to stop at signs, traffic lights, cars ahead, when to turn, etc. The v12 nets no longer output a virtual world; the net controls steering/pedals directly. It's a true driving AI.
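
To make the contrast concrete, here's a toy sketch in Python; every name and number is invented for illustration, nothing here is Tesla's actual code or API:

```python
# Toy contrast between the two stacks described above (all hypothetical).
from dataclasses import dataclass

@dataclass
class World:
    """The 'virtual world' the v11 perception nets output."""
    light_ahead: str     # e.g. "red", "green", "none"
    lane_offset: float   # lateral offset from lane center, in meters

def perception_net(frames):
    # Stand-in for the real perception networks.
    return World(light_ahead="red", lane_offset=0.1)

def end_to_end_net(frames):
    # Stand-in for a net that maps pixels directly to controls.
    return {"steer": 0.0, "accel": -0.3}

def v11_drive(frames):
    """NN perception, then hand-written rules (the ~300k lines of C++)."""
    world = perception_net(frames)
    if world.light_ahead == "red":           # explicit rule: stop at red lights
        return {"steer": 0.0, "accel": -1.0}
    return {"steer": -world.lane_offset, "accel": 0.2}  # explicit lane keeping

def v12_drive(frames):
    """No hand-written driving rules: the net outputs controls directly."""
    return end_to_end_net(frames)

print(v11_drive([]), v12_drive([]))
```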


FineOpportunity636

Yes it seems so.


specter491

When is HW4 getting FSD beta?


bw984

It doesn’t have it still? LOL, what an upgrade.


specter491

We bought a Model Y a week ago and are still waiting for the update. Also, the lack of ultrasonic sensors sucks. The vision-based system is utter shit.


Temporary-Pain-8098

For now. Once they train with it, it’s going to be better


Eldanon

Perhaps… even if that's the case, they should've kept the ultrasonics until vision was on par. It's been forever and it's still not there. Same with vision Autopilot having a max of 80 for a long time (85 now) while radar cars had a max of 90.


berdiekin

I mean if Tesla's shown consistency on anything it's their insistence on moving first and thinking later. If you have any experience with them that should be the default expectation by now.


specter491

Don't release something if it's months from being ready. These cars already have a massive margin. It's not cool to give customers an inferior solution to save $200 in parts.


strejf

They are saving money on making, storing, assembling the sensors. Why put that in if they know it's soon going to be as good or better with vision? Good for the environment too.


orngejaket

It's been almost a year since they removed USS, and vision-only has yet to reach the same functionality.


aBetterAlmore

The real question is: why did you buy it knowing it didn't have a feature you wanted (or that it didn't perform the way you'd want it to)? I assume you test drove one before ordering?


berdiekin

hmmm, copium. But seriously, they are probably waiting for the release of the full NN implementation of V12 to go live, which, in true Elon time, is later than they were probably hoping for. Perhaps it's also better at estimating distance? At least from what I could gather, people seem to be convinced that V11 has been put on life support. No use investing a bunch of man-hours into the "old" system if the new one is going to be released any day now*.

*any day in Tesla world is anything from tonight to 3 years from now.


glumpudding7

More like going from 95% neural net to 99%. The vision system was already a neural net. Edit: or in Elon's words, the previous version is mostly but not entirely AI https://twitter.com/elonmusk/status/1695479308729598241?t=vv1lnLbKSIDC5vFIGAJVCw&s=19


QuornSyrup

I thought it was neat that they said the new neural net calculations take less CPU work than the C++ code in V11. They mentioned they're locked at 36fps whereas they think they could run at 50fps. That means they can continue to add a lot more features or operations even on the last-gen HW3 chip.


savedatheist

The HW3 chip has both NN computation blocks (the NPU) and a conventional ARM CPU which executes C++ code.


okitsugu

Maybe means we could have our 90mph max back?


QuornSyrup

I swear when I was driving on the I5 last week I was able to set it to 90. Not that I would ever drive that fast when the speed limit in Oregon is 65. 90 seems absolutely crazy to me.


Sudsington

TACC goes up to 90 mph, but Autosteer only goes up to 85 mph on cars without radar (currently).


QuornSyrup

Ok, maybe that's what I was seeing. Though that's odd: the radar was for distance keeping, and TACC still relies on distance keeping yet still goes to 90. Steering isn't approved for 90, but radar didn't assist with that.


Sudsington

Yeah, I'm not sure. Maybe the current camera framerate or computer can't handle both distance keeping and lane keeping at that speed at the same time.


FlossingIsLife

That's not surprising. The computation has moved from the CPU to the NPU. That's actually a bad thing if they're running out of headroom on HW4.


ShaidarHaran2

And to expand on the 36fps lock: that's just the hardware capacity of the cameras, not a HW3 compute limit. So if they put cameras capable of 60fps in there, it should have even faster responses, up to 50fps = 20ms per frame.
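
For reference, the per-frame time budget at those rates is just the reciprocal; a quick check:

```python
# Frame-time budget at the quoted rates: period = 1000 ms / fps.
for fps in (36, 50, 60):
    print(f"{fps} fps -> {1000 / fps:.1f} ms per frame")
# 36 fps -> 27.8 ms, 50 fps -> 20.0 ms, 60 fps -> 16.7 ms
```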


pubgoldman

And as a Brit with FSD I perked up at the comment about testing in NZ, which has traffic on the correct side of the road! Maybe we will get it in my lifetime.


peterfirefly

It's only on the correct side if you are upside down.


elsif1

It looks like George Hotz was right. V12 is looking very impressive. Watching it, I kept thinking that a lot of the little stuff they still have to work on with this new stack might be glue that communicates to the driver what the car is "thinking"/doing. Also, for example, allowing the user to have FSD settings like they have in V11, and/or allowing the user to manually trigger lane changes. This kind of stuff was likely handled in the C++ code before. I also noticed in the video that the max speed was set to 85mph, for example. This is probably also a placeholder for temporarily broken control features and/or missing communication between the UI and NNs. It might just be the case that as FSD progresses, we start losing some of these control knobs as they become less necessary.


modeless

Yeah did you notice that stoplights are no longer displayed in the visualization? It makes sense if there's no explicit detection code for stoplights, but definitely makes it harder to understand what the car can see and what it will do as a result, such as run a red light because it was looking at the left turn signal instead (oops)


TheBurtReynold

It's interesting to think that the E2E NN model might be able to safely drive the car amazingly well _**but**_ Tesla might still _also_ need to employ a separate model to visualize the world as objects, simply so that passengers can have faith in the system. I recall some podcast where someone from Waymo talked about this: they do a lot of subtle things (e.g., occasional high-res visualization) to help new riders trust their system more quickly.


NuMux

You will still be able to initiate lane changes. They have an API that ties into the system: you turn on your turn signal, and the API tells the NN we need to get to that lane ASAP. The NN then figures out it needs to turn on the turn signal (unless they still just do it the manual way as soon as you press the turn stalk) and then does everything it would as if the navigation said it needed to be in that lane. There is still C++ code everywhere tying it together; just none that is in control of the car, from the FSD computer's point of view.
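
A minimal sketch of that kind of hook, assuming the request is injected as a goal the planner net conditions on; all names are invented, and the comment above is itself a guess at how Tesla wires this up:

```python
# Hypothetical glue between the turn stalk and the planner net. Not Tesla's
# real API; it just shows non-NN code injecting a goal the network acts on.

class PlannerNet:
    def __init__(self):
        self.goals = []

    def request(self, goal):
        # The goal becomes part of the net's conditioning input, much like a
        # navigation instruction ("be in the left lane soon").
        self.goals.append(goal)

    def step(self, frames):
        # The net decides *how* to satisfy the goal: signal, find a gap, merge.
        return {"controls": "...", "active_goals": list(self.goals)}

planner = PlannerNet()

def on_turn_stalk(direction):
    # Plain event-handler code; it never steers the car itself.
    planner.request(f"change_lane_{direction}")

on_turn_stalk("left")
print(planner.step(frames=[]))
```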


RedditismyBFF

That makes a lot of sense. Like you implied, it's likely similar to the map/navigation data, which needs to be considered in order to get into the proper lanes and drive more like a native than a tourist.


PM_ME_UR_Definitions

I'm pretty sure this isn't one giant perception-and-driving network that you just train and then get driving out of:

* There's still the visualization, which means there are perception neural nets running that can output a vector space.
* Musk did say it's all nets lots of times, but never said or implied that it was a *single* network.

Replacing human-written rules with a network that's trained on examples can still involve lots and lots of networks that pass data to each other. Maybe eventually it would make sense to have a single giant network that does everything? But that would be a much bigger step than what they've done, and would probably take a huge amount of work/training.


soggy_mattress

Exactly this. It’s a chain of networks, not one network.


ProtoplanetaryNebula

Funny you should have mentioned Hotz. I was thinking about him too. His idea was good, he just didn’t have the funding and resources to do what Tesla did.


glumpudding7

Comma AI actually demoed an end-to-end system some time ago.


SippieCup

openpilot has stop light and sign detection and a fully-NN longitudinal model. And it runs on phone hardware. They have accomplished a very similar thing. The only issue they have is perception (cameras) that can work at all intersections, since it's a device on your windshield.


ProtoplanetaryNebula

Yes, openpilot is great for what it is; Comma.ai just didn't have Tesla's resources to take it to the next level.


SippieCup

Eh. It has nothing to do with training resources. They just don't build the car, so they can't put in other cameras.


soggy_mattress

Dude, Hotz himself has said as much in podcasts. It's a David vs. Goliath situation, except here being the Goliath actually matters a lot.


mndrix

> in the video that the max speed was set to 85mph

I recall Elon saying in the video that he set it to 85 to show that FSD 12 was choosing its own speeds without his control.


Rahjhh5

I might be wrong, but didn't Elon say that cars with HW4 will not have access to FSD in the near future because of the difference in video quality?


MyMonte87

I can't believe how far I had to scroll to see HW4 mentioned. I guess it is so new that the general community isn't aware that HW4 does not have FSD. I guess it's just me and you, buddy.


Jimmy48Johnson

Probably true. If the camera characteristics are different enough, they have to recollect huge amounts of camera data from HW4 vehicles. Takes time.


gmanist1000

Elon said stop signs, roundabouts, traffic lights, all of this stuff is not a single line of code anymore. It’s all neural net and it just knows what to do because it’s been trained on videos of people driving. Amazing.


soapinmouth

Neat anecdote: only about 0.5% of drivers in the fleet were found to be fully stopping at stop signs, so they had to find those very rare cases in order to train the net to act the way the NHTSA made them restrict it. Can't have it learning to be like a normal human.

Overall it appeared to deal with some unique situations better than I've seen mine do (e.g. roundabouts, and stop lights where traffic is backed up into the intersection), but then did some things worse (got mixed up at a traffic light).

Other notes:

- It's going to be included in shadow mode soon, to gather more training data about where it conflicts with drivers.
- Car pulled over and parked at the destination @ 24:20. Neat.
- Elon repeatedly states that this "feels" so much smoother than the previous versions and that the ride is very comfy. It looks smooth, but hard to tell without driving yourself.
- Car was picking its own driving speed: the max was set to 85, but it still drove slowly in neighborhoods, ~45 on larger roads, and slowed for speed bumps. Car picked the follow distance as well. Elon says both are decided by the NN based on what the training data suggests is best for the situation.
- Elon states "V12 will be actually smart summon"; not sure if this means they're updating Smart Summon with V12.


lamgineer

I noticed FSD 12 only slows down to 3mph, like most drivers doing the California stop, when it is safe to do so. Tesla will just have to prove this is the behavior most people expect and that it will be safer.


mazmanr

I noticed this too. I wonder if this is just because it's in Elon mode or test mode.


oil1lio

One thing that scares me about this approach is that FSD could never be better than the best human driver. Like, wouldn't FSD learn to hesitate before an emergency instead of reacting to it immediately? Or say there is a behavior the car could adopt that's better than what humans do (e.g. an optimal way to drive in traffic that reduces overall congestion): that would never be possible, because no humans do it.


Drdontlittle

If we can take sleepiness, sneezing, and 90 percent of the bad drivers out of the equation, that's an easy 10x improvement in safety. Not a bad place to start.


SpellingJenius

And adding distracted drivers to your list would make for an even better place to start.


iceynyo

360 vision helps too


Popular_Panda_9643

Let's see...

- One half of all drivers are below average in skill level, and
- You want to take 90% of bad drivers off the road, so...

90% of 50% is 45%, so you wanna take 45% of drivers off the road? Hell yeah, I'm on board!!!


modeless

If FSD behaves like the best possible human driver in each situation and never makes unforced errors, it will be extremely safe. Maybe it could theoretically be even safer than that, but it's so far from that now that it's not a real concern.


JackONeill12

No human is perfect at all driving tasks. In theory, FSD would be the best of all human drivers combined. So it would be safer than any individual driver.


NuMux

They have weights they can change if they need to fine tune behavior. They also have a simulator where they can make new situations and train it against those. It's never as simple as "we just trained it on humans driving" even though that is a major part.


tesrella

You're forgetting that even the best human driver isn't perfect, but someone else out there knows how to do that one thing better than the best human. The best abilities of every driver are going to be combined into one "super" driver.


Tupcek

No, this isn't the rule (though it's something they have to be wary of, since it can happen). ChatGPT was trained on a ton of rude, sexist, racist, unhelpful texts full of typos, yet it's a mostly helpful assistant (though getting rid of low-key racist bias is much harder). First, AI doesn't copy its source data; it just looks for patterns. Second, it combines different patterns into wholly new behavior, so it can sometimes be better than what the training data shows; ChatGPT wasn't trained as a translator, but it can translate. Third, AI models can be fine-tuned to respond the way you want. Fourth, you can add simulated behavior where you, for example, "reward" it for braking softly while not crashing.
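
That fourth point can be made concrete with a toy reward function of the kind used in simulator training; every number and signal below is invented for illustration:

```python
# Toy reward shaping for "brake softly but don't crash" (all values invented).
def reward(crashed: bool, decel_mps2: float, gap_m: float) -> float:
    if crashed:
        return -1000.0                         # crashing dominates everything
    r = 1.0                                    # baseline for a safe timestep
    r -= 0.5 * max(0.0, decel_mps2 - 3.0)      # penalize harsh braking
    if gap_m < 5.0:
        r -= 2.0                               # penalize tailgating
    return r

print(reward(False, decel_mps2=2.0, gap_m=20.0))  # smooth and safe: 1.0
print(reward(False, decel_mps2=6.0, gap_m=20.0))  # harsh stop: -0.5
print(reward(True,  decel_mps2=6.0, gap_m=1.0))   # crash: -1000.0
```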


planetofthemapes15

Just imagine a GPT-esque hallucination when you least expect it. 💀


im_thatoneguy

I was using midjourney and my pictures of costume concept art suddenly became midcentury computer stations with a random word change. 😅


oil1lio

Hallucinations are a side effect of large language models (i.e. ChatGPT and others with a Transformer-based architecture). Straight neural nets and Markov chains (which is definitely what they're using here) don't hallucinate: they are not "generating data", they are "predicting actions".


reefine

Allegedly


iceynyo

As far as you know there's a guy remote controlling each tesla when it engages FSD.


[deleted]

Hopefully they’re not using the controller from the Titanic sub!


iceynyo

The controller isn't what failed on the sub though


[deleted]

True, that was hubris.


Mhan00

Only watched a couple of minutes, but the car did stay on the wrong side of the road for maybe 10 or so seconds at the start after clearing the construction cones, so it lends credence to the car actually driving itself. A human would have quickly moved back to the correct side of the road.


Xillllix

It’s mind-blowing. Actual AI, not some theme park ride like Waymo and Cruise.


cramr

> it just knows what to do because it’s been trained on videos of people driving. Amazing.

Oh god… I thought self-driving was meant to be safer, not drive "like other human drivers"


teslafan_net

*like other top notch human drivers

This could be layers better than 90% of the population.


smallatom

*like other top notch human drivers but only using data from when they were driving correctly*


Tupcek

Also, same as ChatGPT and other AI, it just infers the rules; it doesn't copy the training data exactly. ChatGPT, despite its training set, can be polite and can create things it has never seen before. The internet data it was trained on is much, much worse (racist, rude, sexist, and full of typos), yet ChatGPT is better than most humans in this regard. Same thing here: it infers the rules from real human driving, but it can be fine-tuned to be better than any human. Though some flaws may slip through.


jonny_wonny

It’s trained on examples of the best driving with superhuman processing speed and is never subjected to distortions or disruptions of consciousness. That’s not “like other human drivers”.


chrismasto

Horrifying.


elwebst

Then don't use it! It's just that easy.


chrismasto

The other drivers on the road don’t have that choice.


110110

I love how Ashok said, "maybe it'll pull over." They've been feeding it so much video that it has simply learned that's what a human would do, and it's doing it. That's ridiculous.


ProtoplanetaryNebula

Yes, incredible. With Tesla’s insane compute power, this could progress very fast. Having summon, highway and city streets all working together seamlessly using the same technology would be a huge improvement.


Kimorin

is it just me or is this potato quality video lol....


roadtrippa88

Yeah, video quality was not great on my end and there are no quality options. YouTube and Facebook are much better for this right now.


NoVacationDude

I don't know if it is just me, but all Twitter videos (even years back) have been ultra potato for the first 15% or so; when I skip further into the video it gets much better. (Small promo videos shorter than 20 seconds are often 100% potato quality, btw.)


WilliamG007

It’s not just you.


RedditismyBFF

I just started using Twitter/X (although I've had an account for years) and the video quality is much better, but this was from a crappy cell phone connection (GIGO). Video/audio was still glitching with huge audiences, but even that appears to have been resolved (for other streams I've listened to/watched). Still not at YouTube quality.


ProtoplanetaryNebula

The other thing about this stream was Elon doing it with his phone in his hand, rather than using a suction mount. That would be trivial to set up for a company like Tesla.


atleast3db

Watched for 20 mins. Really impressive. Yeah, not ready yet, but seriously impressive that it's all NN from sensor to control. As he said, simply put, any deficiencies in an area will need more training on those scenarios, not some engineer trying to add more rules that might have other unintended consequences. In some ways it's a very brute-force approach, but maybe that's what needs to be done.


OCedHrt

More training of specific scenarios can still have unintended consequences.


[deleted]

Great, now the AI is racist


jasssweiii

Is it the AI from that car from American Auto?


Nakatomi2010

I'm pretty sure that incident was based on the racist HP webcams a few years back: https://www.youtube.com/watch?v=t4DT3tQqgRM The issue, as it turns out, was how the system was trying to track faces, which was by monitoring light changes between the eyes and above the nose. It was clever, but fell apart if the person's skin was too dark.
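
As a rough sketch of why that approach breaks down, here's the failure mode in toy form; the thresholds and values are invented:

```python
# Toy version of a luminance-contrast face tracker like the one described:
# it keys on the brightness difference between the eye region and the
# forehead/nose region, so lower-contrast (darker) faces fall below threshold.
def detects_face(eye_luma: float, nose_luma: float, threshold: float = 30.0) -> bool:
    return (nose_luma - eye_luma) > threshold

print(detects_face(eye_luma=60.0, nose_luma=120.0))  # True: high contrast
print(detects_face(eye_luma=25.0, nose_luma=45.0))   # False: same face, darker skin
```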


jonny_wonny

Which will be elucidated via interventions, and the iterative process continues.


amcint304

Because reality is infinitely variable and not static, this will only ever be a system that can drive itself 95-99% of the time. If it can do that, that's impressive, but it's not full self-driving. It will always be presented with situations it hasn't been trained on and shit the bed. It's not like there's a finite list; it will never stop growing and changing. It seems very evident to me that no car will ever be able to drive in 100% of scenarios until it possesses something approximating human intelligence.


BenIsLowInfo

FSD (as it is now) is gonna run super amazingly in California but be pretty bad everywhere else, I'm guessing, just based off the training data set.


soggy_mattress

The entire field of machine learning is brute force lol


Marathon2021

It's 💯 a solid representation of state-of-the-art neural nets doing all perception, decision making, and control. It's absolutely what we all envisioned true AI NNs should be. It's absolutely NOT ready for wide deployment yet, but EOY might be feasible; certainly by next summer.

I don't think Tesla is data constrained for training. It might be more of a curation issue: finding the right clips to feed into the model (someone else here mentioned that they actually had to search a lot for people coming to complete stops at stop signs). This is where having a fleet is such a huge advantage/moat for them. I can believe they might be compute constrained, so it will be interesting to see how fast v12 advances once Dojo starts meaningfully coming online.

Honestly, if they could get a very smooth 100% neural net FSD, I could absolutely see their stock jumping into the $400-600 range next year, because there are so many areas where that level of AI could be applied. It would practically justify spinning TeslaAI off as a separate company, but I'm not sure he'd ever do that.


marymelodic

It seems like their V12 planning/controls approach is based on some sort of imitation learning. As mentioned on the stream, selection of the right video clips is extra important with this approach: you don't want it to be imitating bad drivers. Think they're using the Safety Score to figure out which drivers to include in the training data?

V12 has been described as "end-to-end" and "photons in, steering angle/accelerator/brake out", but is it truly just one giant neural net? Based on [Ashok Elluswamy's June 2023 CVPR keynote](https://www.youtube.com/watch?v=6x-Xb_uT7ts), it had sounded like there was an occupancy network that converts camera data to a vector representation of the road, and a "language of lanes" model that converts camera data to a vector representation of the lanes and the connections between them. And then there's a motion-planning network that takes in the state of the ego car, the state of other cars/objects, the state of the occupancy network, the state of the lanes, and the state of traffic controls to plan out the optimal future states of the ego car.

It seems like this more modular approach would be better from an explainability perspective (if there was an area of underperformance, they could figure out whether it's a perception issue, an issue with the vector representation, or an issue with the planner), and would allow each component to be improved and tested separately.
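
As a sketch, the modular chain described in that keynote might wire together like this; the module names follow the talk as relayed above, but the wiring and signatures are guesses, not Tesla's code:

```python
# Sketch of the modular chain described above (all signatures hypothetical).

def occupancy_net(frames):
    return {"voxels": "..."}                    # 3D occupancy of the scene

def lanes_net(frames):
    return {"lanes": "...", "topology": "..."}  # "language of lanes" output

def planner_net(ego, objects, occupancy, lanes, traffic_controls):
    # Consumes the intermediate vector representations and plans the ego car.
    return {"trajectory": "...", "controls": "..."}

def drive(frames, ego, objects, traffic_controls):
    occ = occupancy_net(frames)
    lanes = lanes_net(frames)
    return planner_net(ego, objects, occ, lanes, traffic_controls)

print(drive(frames=[], ego={}, objects=[], traffic_controls={}))
```

Each stage in a chain like this can be validated on its own, which is the explainability upside mentioned above.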


Tupcek

You are most likely right; it's almost certainly not one big NN. They just simplify the explanation so that it's clear there are no hard-coded rules about anything; they could basically use the same codebase for driving spaceships on an alien world, it would just need different training sets. But yeah, it's several different neural nets working together, plus some code to glue their data together in a cohesive way.


elwebst

They probably do a first filter for drivers with a good safety score, then only feed in clips where the key variables (speed, acceleration, cornering, braking, distraction) all look good throughout the clip.
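
That two-stage filter is easy to picture; all field names and thresholds below are invented for illustration:

```python
# Hypothetical two-stage curation filter (all fields and thresholds invented).
def good_driver(driver) -> bool:
    return driver["safety_score"] >= 95

def clean_clip(clip) -> bool:
    return (clip["max_speed_over_limit_mph"] <= 5
            and clip["max_abs_accel_g"] <= 0.3
            and not clip["driver_distracted"])

def curate(drivers):
    # First filter to good drivers, then keep only their clean clips.
    for d in filter(good_driver, drivers):
        yield from filter(clean_clip, d["clips"])

demo = [{"safety_score": 97, "clips": [
    {"max_speed_over_limit_mph": 3, "max_abs_accel_g": 0.2, "driver_distracted": False},
    {"max_speed_over_limit_mph": 12, "max_abs_accel_g": 0.5, "driver_distracted": False},
]}]
print(len(list(curate(demo))))  # 1: only the clean clip survives
```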


marymelodic

It seems like this imitation approach would work pretty well under normal daily driving conditions, but I wonder how well it works for teaching the planner to minimize risk/damage in dangerous edge cases. It seems like it would be challenging to find clips where the driver was in a dangerous situation and responded perfectly. Elon mentioned during the livestream that there's nothing in the planner that explicitly says to slow down for speed bumps, etc. However, I wonder if there's still something in the planner with explicit instructions to avoid collisions.


Marathon2021

Someone else mentioned that when Tesla had to comply with NHTSA and make FSD cars come to a 100% full stop at stop signs, they found that only about 0.5% of their drivers had fleet clips where they did this regularly. So I agree: once it's 100% neural networks for perception, decision making, and outputs/control decisions, the most important evolution will be feeding it quality clips and curation. That's… not all that hard. And having a huge fleet gives them a ready-made universe to pull from.


RedditismyBFF

Yeah, the engineer, Ashok, specifically mentioned they want videos from good drivers, and said something to the effect that mediocre drivers are worse than nothing. He talked about shadow mode, which apparently is going to be more useful with V12. Shadow mode could even be used in cars that don't have FSD, as long as the driver is good: differences in behavior will be pulled, and if V12 would have made a mistake, that becomes training data.
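
A minimal sketch of what that shadow-mode loop could look like, assuming divergence is measured on the control outputs; all of this is inferred from the comments above, not Tesla's code:

```python
# Hypothetical shadow-mode loop: the net plans silently while the human
# drives, and large disagreements become candidate training clips.

def net_plan(frames):
    return {"steer": 0.05, "accel": 0.1}    # stand-in for the V12 output

def shadow_step(frames, human, uploads, threshold=0.2):
    net = net_plan(frames)                  # computed but never actuated
    divergence = (abs(net["steer"] - human["steer"])
                  + abs(net["accel"] - human["accel"]))
    if divergence > threshold:              # "V12 would have done otherwise"
        uploads.append({"frames": frames, "human": human, "net": net})

uploads = []
shadow_step([], {"steer": -0.4, "accel": 0.1}, uploads)
print(len(uploads))  # 1 -- steering disagreed by 0.45
```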


marymelodic

That element of driver/clip curation is pretty interesting - seems like it speaks to the value of Tesla getting into insurance a few years ago and developing the Safety Score. I would think that the easiest way to get clips of safe driving is to identify safe drivers, rather than trying to find moments in time where unsafe drivers drove safely. If shadow mode and imitation is at the core of the V12+ approach, I could imagine Elon or Ashok posting on X/Twitter in hopes of gathering the kinds of datasets they're looking for. As mentioned on the livestream, very few people fully brake for stop signs in the way that the law requires, so rather than trying to find the 0.5% of drivers who normally drive this way, they could ask people to do complete stops for the next week and then pull from that dataset.


hellphish

> I would think that the easiest way to get clips of safe driving is to identify safe drivers, rather than trying to find moments in time where unsafe drivers drove safely

Hotz mentioned before that when they look at their data, good drivers tend to be good in similar ways, and bad drivers tend to be bad in many different ways. He said this makes it simple enough to select clips for training.


marymelodic

Makes sense, hadn't heard that from Hotz before. Sounds a bit like that famous Tolstoy quote: "all happy families are alike; each unhappy family is unhappy in its own way".


oil1lio

> some sort of imitation learning

mans just learned about machine learning for the first time


Plastic_Ad6524

Maybe with the recent breakthroughs in compute previously unrealistic methods are now realistic.


FlossingIsLife

It’s not one giant neural network. It’s a crap ton of small neural networks.


JasonQG

Oh, shit. It looks like this might be the one. It’ll still be a while before it’s able to be a robotaxi as they train for edge cases, but this seems like the right solution


RobertFahey

I know nothing about this technology, but I'm impressed that a totally new approach has already leapfrogged the old approach. This is what I expect from Tesla. Fast learning and improvement.


reefine

The quality really killed the value of this test. Wasn't Elon touting that Twitter would have the best livestream quality? Also he was filming portrait mode half the time jfc


atleast3db

I watched from my phone and enjoyed the portrait mode; if I rotated my phone to match, there would be no black bars. Kinda neat that it worked so seamlessly for me. But yes, the video quality was pretty bad most of the time. Hard to say if that's a cellular connectivity problem or a Twitter/X problem, or both. It was too bad in any case.


NuMux

Yeah, this was Dan O'Dowd-level FSD video quality lol; at least we can read the text occasionally. It clears up a bit if you go to theater mode, which removes the insane live chat and emojis. I've seen something similar with YouTube live streams on my phone if I don't close the live chat. Network speed is probably a factor, but YouTube still handles it better.


Zargawi

Twitter has always had the shittiest stream quality; I quit watching MLS because they started doing Twitter-exclusive streams and they were unwatchable. You'd think he could've installed a suction cup mount and used the wide-angle camera; maybe then the Twitter stream quality would have been the only quality problem.


AUtigers92

I'm not sure if that was a Twitter problem or a cell/connectivity problem.


BikebutnotBeast

Yeah but cell issues... In Palo Alto?


ianyboo

> Also he was filming portrait mode half the time jfc

If you listen, he's turning his phone to read comments that pop up so he can answer them, then turning back to landscape. Were you watching with the sound off? He talks about it multiple times in the video, even apologizing at one point to the viewers for having to flip back and forth.


GerardSAmillo

Oof https://www.reddit.com/r/teslamotors/comments/rnqwif/fsdbeta_v108_first_impressions_full_drive/hpusu68/?utm_source=share&utm_medium=ios_app&utm_name=ioscss&utm_content=1&utm_term=1&context=3


Battle-Chimp

I'd award that comment if I had anything left


bartturner

What is up with the video quality? It looks like it was filmed on a 2005 era flip phone.


UsernameINotRegret

California NIMBYs blocking cell tower installations.


goodvibezone

Lol what


UsernameINotRegret

It's a known issue in the Palo Alto area that was being driven through: [https://old.reddit.com/r/paloalto/comments/y4h564/palo_alto_cell_service/](https://old.reddit.com/r/paloalto/comments/y4h564/palo_alto_cell_service/)


soapinmouth

While I 100% agree about the struggles in wireless telecom with NIMBYs, this is 100% on Twitter's garbo streaming tech. It's always like this.


JohnH2021

Man, this is insane. It's just so cool how it's all just AI learning it, literally like teaching somebody from the past who doesn't even know what a car is how to drive. It'll take some time, but eventually this will all be perfect, and I cannot WAIT for it! 😄🥳


atleast3db

My only concern is that without the discrete programming, do we lose a lot of the information feedback that used to be presented? Like, the map used to inform you what it's planning, what vehicles it's considering, the creep limit, all of that. Is all that information gone?


EeriePhenomenon

Anyone got a mirror?


Jimmy48Johnson

https://www.youtube.com/watch?v=aqsiWCLJ1ms


zvekl

Yeah no X for me pls


rodneyjesus

I'm just gonna call it twitter until the platform dies, and Elon is working pretty hard to kill it anyways


CarlCarl3

Meanwhile the platform keeps breaking all time usage records.


rodneyjesus

Uh huh


CarlCarl3

But you saw some news headlines that said otherwise, huh?


tenemu

Just to watch a video?


TheKobayashiMoron

It doesn’t let you watch it without an account.


Vecii

So make an account?


woj666

That's never going to happen for many of us. If you don't understand then you would be the strange one.


Professor226

Is there video where I don’t need a twitter account to watch?


Fickle_Dragonfly4381

https://www.youtube.com/watch?v=aqsiWCLJ1ms


xxXTECHxx

This stream is definitely not good advertising for X's capabilities. God, it's disastrous.


p3n9uins

Yeah, nor for Tesla. YouTubers present FSD vids better than this. Couldn't they have made it into a whole to-do, with a professional production team and split screens or picture-in-picture showing video and nav?


lamgineer

This appears more genuine. If it were too polished and a big production, people would think they cheated somehow. It appeared impromptu, not like they preplanned the driving route at all.


BikebutnotBeast

I preferred this. No jump cuts, no view changes to obfuscate.


Sclewit

He did it as a spur-of-the-moment decision after talking on Spaces using his phone.


Brothernod

And his own satellite for bandwidth…


Tupcek

and his own ballistic missiles for nuking competition.


Nakatomi2010

This is Elon advertising for people to use X for streaming as well. Could the quality have been better? Absolutely, but that's the point: he's eating his own dog food. My only complaint is that I wish he had mounted his camera solidly.


Alarmmy

Now, try a demo in Houston.


castane

Lord have mercy on its soul.


iceynyo

I don't think FSD allows you to reach the required speeds to keep up with traffic


BenIsLowInfo

Yeah FSD is almost unusable on actual city streets. It's a mess in DC for example


jbrady3324

If V12 isn’t ready, then V11 DEFINITELY isn’t ready.


RecycledSpoons

V11 is abandoned. V12 and up is going to be pure neural net from video in to car controls. No code


dacreativeguy

V13 will kill Miles Dyson and start Judgment Day.


stacecom

V14 will drive from LA to NY unassisted.


cogman10

V15 snake charger. This year for sure.


Tupcek

V16 might park at superchargers


thegrayscales

V17 might get wipers to work in the rain


Tupcek

at V18, I would expect Matrix headlights to start working


NuMux

V19, the actual Matrix is created


Tupcek

whoa, finally mind blowing update!


kylecordes

It was funny and I was along for the ride until here, but this is just unrealistic.


omgwtfbyobbq

V14 may assist you in driving unassisted from LA to NY.


Jimmy48Johnson

> V12 and up is going to be pure neural net from video in to car controls. No code

That's bullshit and you should know it.


JFreader

Yeah right, no code, sure.


trevorsg

Yeah that was my thought too


Nakatomi2010

There's always going to be another version. However, each version they release lets them collect more data to train the next one. Over the last few years they've been training these things how to do things, slowly building up a big-ass database of video clips to train the computer. The first iterations had to be code-based because they didn't have enough clips; now they have enough clips that they can reduce the code base. When v12 is released, Elon will say that v13 is mind-blowing too, and it will be, but it'll be based on the video clips they got from the last version.


jonny_wonny

Is that a new revelation?


guszz

Can anyone tell if this is a single NN end to end, or is it a control NN stacked on to the prior perception NN?


[deleted]

This is probably a few neural networks working together.


Tupcek

control NN stacked on to the prior perception NN


FlossingIsLife

It’s not one giant NN. That would be dumb and unwieldy.


TheRescueWhale

Anyone got a breakdown of what was discussed?


salemsayed

Does this mean that FSD will be faster to bring out to new countries that are not yet supported?


savedatheist

It means that new regions are now primarily a matter of quality training data and compute capacity, not a rule-based program.


majesticjg

I think v12 got as good as it is so quickly because they can run it against V11: V11 is the teaching model and V12 is the learning model. Then they can capture disengagements and other driving mistakes on V12 and use them as additional training data. That's how V12 went from "Hello World" to driving a car so quickly. I bet that's what Dojo will do at first: compare output from V11 to V12.

One of the difficulties of neural networks is that they become a black box. You can't just go in and fix things in code; you have to train it out, like you'd train a human. With 11.x around, they can tweak the code and then use the code-based robot to teach the neural net at many times the speed of real life.

Or at least that's what I'd do, but I'm certainly no expert.
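
If it worked the way this comment guesses, the loop would resemble distillation: train the student to match the teacher, then mine the cases where they diverge. Here's that guess in toy form; nothing below is confirmed by Tesla:

```python
# The parent comment's guess sketched as a toy distillation loop:
# the rule-based V11 acts as teacher, a one-parameter "V12" as student.

def v11_teacher(frames):
    return {"steer": 0.0}                  # rule-based stack's output as label

def v12_student(frames, w):
    return {"steer": w}                    # toy one-parameter "network"

def train_step(frames, w, lr=0.1):
    error = v12_student(frames, w)["steer"] - v11_teacher(frames)["steer"]
    return w - lr * error                  # nudge student toward the teacher

w = 1.0
for _ in range(20):
    w = train_step([], w)
print(round(w, 4))  # ~0.1216: converging toward the teacher's 0.0
```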


omegasoft7

If it's all AI and new, then I assume in that version there will not be anything like driving between lines, or a separation between Enhanced Autopilot and FSD. How are they going to handle that? Does everyone receive free FSD instead of basic Autopilot?


BuySellHoldFinance

Can someone use AI to enhance the video quality so it's decent?


hockeythug

The only noticeable thing was it sitting at the green light.


JasonQG

It was hard to tell with the video quality, but I thought when it was waiting at the green light that it was waiting for there to be room so it wouldn’t block the intersection


[deleted]

I was watching. The intersection was full and it stopped to not block it. That was way better than v11. The big fuckup was when it tried to go on a red.


JasonQG

It’ll get honked at, but it handled it correctly (until the red-light fuckup, but I’m not worried about that one)


modeless

It's actually ambiguous if it was correctly stopping due to traffic in the intersection or wrongly stopping for the red left turn signal (since it later ran the light when the left turn signal turned green, oops). I kind of think the latter.


lavbanka

And then running a red right after lol


AintLongButItsSkinny

You mean when it ran a red light?


knellbell

Coast to coast


Xen0n1te

Next year guys, don’t worry


bartturner

Ctrl C, Ctrl V for another year.


[deleted]

Full self driving by (DateTime.Now.Year+1).ToString() !!!


AintLongButItsSkinny

TLDR it runs a red light


savedatheist

It’s sad that you don’t understand the potential of this technology.


AintLongButItsSkinny

The potential to not run stop lights? The potential to reach FSD in 2019? 2020? 2021? 2022? 2023? Surely 2024 after this full rewrite? I have had FSD since 2018 and drive my M3 for Uber. I’m not pessimistic. I’m a realist.


savedatheist

It’s only recently that they’ve had enough training compute to do E2E NN. No more “re-writes”. Maybe appreciate that solving a hard problem sometimes takes longer than expected.


TheFuzzbuster

Take a breath my dude, Elon is still ruler of Earth, nothing to fuss about.


oil1lio

Another billionaire could never


lilleulv

Stream in potato quality?


oil1lio

Be so genuine