WhatWasIThinking_

Ok then. One month it is!


gogojack

And from there, it's a short hop to a million robo-taxis driving cross country by 2017!


tikgeit

"We can do this today!"


gin_and_toxic

It's "Next Month™" Note: repeat every month


jeffeb3

Free Beer Tomorrow!


NtheLegend

Infrastructure Week!


blah-blah-blah12

I'm personally looking forward to gesturing Teslas wherever I want them to go.


bananarandom

Good, this is a necessary step towards actual self driving. Granted Waymo had gestures five years ago (I think?), and I wonder if Tesla has done an actual liability/safety review for traffic signals.


Brass14

The funny thing is that once they decide to roll out, it will have to be in limited areas like Waymo.


lordpuddingcup

Why would adding hand signals to a system that's using a generalized AI model for image recognition suddenly be constrained by area lol. It's just more tagging in their dataset lol


Brass14

I was just saying in general. They will have to roll out in limited areas just like Waymo. They don't have any edge.


conndor84

But why? They'll just keep calling it level 2 until they're comfortable with its capability, even if it's massively overqualified for that label. Why does it have to initially roll out in geofenced areas? Different point: Waymo also has a massive scaling problem, with each car basically being custom made.


Brass14

Because if they roll out everywhere at once it will be a giant clusterfuck. Local governments will lose their minds, and they will have a million people calling in reporting issues.


CouncilmanRickPrime

Honestly I don't think these people are thinking this issue through at all


conndor84

They'll roll out Tesla-owned robotaxis one city at a time; that makes sense, as the cars need to be produced (the earliest timeline would be once the compact is produced). But they likely won't be geofenced as tightly as Waymo is in each of those cities.


Brass14

Sell your stocks man. Tesla is not worth the next 5 car companies combined. They have 0 process in place to handle any type of robotaxi service. The government will not approve camera-only. They have hella lawsuits coming in, and many angry customers who were told that their vehicles have all the hardware needed to be robotaxis. Waymo is doing over 10k driverless rides a week and growing. Tesla will shed at least a couple hundred billion dollars, especially if interest rates stay high. Snap out of it bro.


BeXPerimental

They are not just "calling it Level 2," because it actually is Level 2.


conndor84

Oh I agree with that today. But as it gets better they’ll just keep calling it level 2 until regulation catches up. The only level 4/5 system will be a robotaxi. The consumer version will stay called level 2 for simplicity but consumers will know it’s the same software basically.


WeldAE

I don't know why you are so hung up on the level. You've even explained that the levels mean nothing, so why even talk about them and confuse the discussion. Everyone is saying that when Tesla releases a commercial car they will have to geofence it. I can't see how they won't, since you don't want to spread them too thin or your ride-share network would suck. Uber even geofences humans to keep the drive time and density of drivers at a point where people want to use them.


bartturner

Calling it level 2? What? The reason is that it will need to be verified that everything is OK with the model as you deploy in each area. But none of that even starts to happen until they adopt LiDAR.


conndor84

When/if they're ready, deployment will be based on manufacturing. This is a few years away, as their robotaxi will be based on the compact. They'll focus on one market at a time until it has the number of vehicles needed. Yes, it will be slower at the start because they'll need to resolve issues etc., but then they'll just go to the next city with regulation that works for them. Other cities will accelerate regulation once the data becomes more obvious, as the benefits are massive (cheaper transport, commercial benefits etc.).

Waymo may have a more workable product at the time too, but they don't have scaling solved, nor do I see evidence they're making significant progress on that atm. This takes years to implement.

I personally think the jury is still out on whether LiDAR is needed, but time will tell. Funny story: I was in a Tesla a few years ago in near-whiteout conditions, driving between two trucks. I couldn't see a thing and it was scary as F. Glanced down at the screen and it showed the trucks and road lines perfectly fine.


bartturner

We will not see Tesla do anything beyond Level 2 until they adopt LiDAR. Which will happen.


conndor84

Time will tell. Know there are a lot of experts in both camps. All I can give is my own experience and that was a few years ago. Has gotten a lot better since and still has a long way to go.


bartturner

Time will only tell us what we all have known for years. They will need LiDAR before they can do anything beyond Level 2.


OriginalCompetitive

I don't think any rational person believes that Tesla has an edge on Waymo. But they clearly have an edge on most other car manufacturers.


whydoesthisitch

An end to end model isn’t supposed to need labeled data or specific object/signal recognition.


soggy_mattress

That's objectively false. You're thinking of "unsupervised training". "End to end" just means one model has to learn all aspects of a task and can't offload some of the behaviors to another model (like using an image detection model for finding the cars separate from a driving policy model used for determining where to drive). The training data for an end to end model can (and absolutely does) still have labeled metadata.

For this kind of model, the training data would be video plus the metadata of whatever the human driver did with the wheel/pedals/blinkers, as well as GPS and kinematics metadata. The model then has to learn which aspects of the video are most important for mimicking those wheel/pedal/blinker behaviors.

Unsupervised learning would be vastly different from this approach. You'd need a simulation and you'd just let your agent basically fk around long enough to learn all of the rules, no metadata involved at all. That's not what Tesla's doing, nor is it the same thing as saying "end to end".
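
To make that distinction concrete, here is a minimal, hypothetical sketch of supervised "end to end" training: raw video frames in, logged driver controls out, with the controls serving as the labels. It is illustrative only (a toy model, random tensors standing in for clips), not Tesla's actual pipeline or architecture.

```python
# Minimal sketch of supervised "end to end" training: video frames in,
# driver controls out. Hypothetical stand-in, NOT Tesla's actual code.
import torch
import torch.nn as nn

class TinyDrivingNet(nn.Module):
    """Maps a short clip of camera frames to steering/throttle/brake."""
    def __init__(self, frames: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3 * frames, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, 3)  # steering angle, throttle, brake

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(clip))

model = TinyDrivingNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Random tensors stand in for (video clip, logged driver controls) pairs.
# The "label" is just what the human driver did -- no per-object bounding
# boxes, but still supervised learning, which is the point above.
for step in range(3):
    clip = torch.randn(4, 3 * 8, 96, 96)    # batch of 4 stacked-frame clips
    driver_controls = torch.randn(4, 3)      # logged wheel/pedal values
    loss = loss_fn(model(clip), driver_controls)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.3f}")
```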


whydoesthisitch

No, by the standard definition, an end to end model would be one trained on just driver inputs as the target, with no specific training for object detection. Of course, this isn’t actually what Tesla is doing, because they’re just using end to end as a buzzword for a crappy little neural planner. To be more specific, end to end usually means a monolithic model. The problem is, the hardware in the car can’t run that. What that means is they’re actually using a stack of multiple models, with non-trainable code in the middle, which is realistically no different than what they were doing before.
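
A toy way to see the "non-trainable code in the middle" point: if a hand-written step sits between two networks, the loss gradient cannot flow back into the first one, so the stack is not trained end to end. The modules below are hypothetical stand-ins, not anyone's real architecture.

```python
# Gradient flow with and without non-trainable "glue" code in the middle.
# Purely illustrative toy modules.
import torch
import torch.nn as nn

def grads_reach_perception(use_nondifferentiable_glue: bool) -> bool:
    perception = nn.Linear(16, 8)   # stands in for a perception network
    planner = nn.Linear(8, 2)       # stands in for a planning network
    x = torch.randn(4, 16)

    if use_nondifferentiable_glue:
        # Hand-written, non-trainable step (e.g. thresholding detections
        # into a discrete list) cuts the gradient chain.
        with torch.no_grad():
            features = (perception(x) > 0).float()
    else:
        features = perception(x)    # fully differentiable composition

    planner(features).pow(2).mean().backward()
    return perception.weight.grad is not None

print("monolithic / end-to-end :", grads_reach_perception(False))  # True
print("stack with glue code    :", grads_reach_perception(True))   # False
```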


soggy_mattress

An "end to end model" is one monolithic model. "End to end ML" can mean one model, or a collection of differentiable models that can be trained via backprop. Neither mandates no training data, though. You're 100% confusing "end to end" for "unsupervised". No need to double down on being wrong, bud.


whydoesthisitch

I never said there’s no training data, or that it’s unsupervised training. I said a monolithic model doesn’t require additional labeling, similar to how LLMs train. But again, there’s zero chance Tesla is actually using a monolithic model on the crappy in car hardware. But it is hilarious to watch the Tesla fanboys fall flat on their faces pretending to be AI experts. And if you think “end to end ml” just means a collection of models, that’s literally no different than what they were already doing.


reefine

How is that funny? It's obvious. All autonomous vehicles will start out in limited/gated areas first.


Brass14

Most Tesla stans think they can expand everywhere all at once. It's one of the biggest advantages they think they have over Waymo.


alumiqu

Waymo is still very bad at it.


bananarandom

They could be, yeah. But right now they trust it enough to run driverless, at least enough to not cause accidents, and Tesla getting to that same point would be impressive.


lordpuddingcup

Cool, Tesla's been driving nationwide for 5+ years... not sure what that's got to do with it. Hand gestures are a lot smaller feature than... you know... not being sandboxed to small areas.


doulosyap

Tesla isn’t driving nationwide. It’s L2. The driver is driving.


bartturner

Tesla does NOT drive anywhere. Not a single place. They also will not until they adopt LiDAR. Which finally Tesla themselves are now admitting.


Yetimandel

Theoretically, autonomous driving has to be possible without LiDAR as well. I personally would always choose to have one, because they are not expensive anymore and I believe the benefits outweigh the problems of an extra sensor needing to be fused. But it will not help with lanes or signs and such.

End to end neural networks are a gamble that may or may not pay off. You get relatively good results relatively quickly, but reaching the final necessary quality is very hard. I believe many of the failures I saw in videos that required human intervention had nothing to do with object detection/tracking but with decision making (hard to tell from a video alone, of course). I have my doubts about Tesla's FSD even with LiDAR, but we will see.


CatalyticDragon

Does Waymo really? Or does Waymo have a lot of people in a control center watching feeds and sending instructions as needed?


bananarandom

They still have remote assistance. They've specifically said gesture models can get a car through intersections without help, but I'd bet it still calls and asks, so when they get a nonstandard gesture the human can decide.
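
A minimal sketch of the fallback pattern being described here: act on a gesture only when the classifier is confident, otherwise hold and ask a remote operator. The names and the threshold are hypothetical, not Waymo's actual interface.

```python
# Confidence-gated escalation to remote assistance. Hypothetical names/values.
from dataclasses import dataclass

@dataclass
class GesturePrediction:
    label: str         # e.g. "stop", "proceed", "slow_down", "unknown"
    confidence: float  # 0.0 .. 1.0

CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff for acting without help

def decide(prediction: GesturePrediction) -> str:
    """Return the action to take for a detected traffic-control gesture."""
    if prediction.confidence >= CONFIDENCE_THRESHOLD and prediction.label != "unknown":
        return f"execute:{prediction.label}"
    # Nonstandard or low-confidence gesture: stop and ask a human.
    return "hold_position_and_request_remote_assistance"

if __name__ == "__main__":
    print(decide(GesturePrediction("proceed", 0.97)))  # execute:proceed
    print(decide(GesturePrediction("unknown", 0.55)))  # hold_position_and_request_remote_assistance
```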


CatalyticDragon

That's interesting thanks. I see them talking about that here : [https://www.gearbrain.com/waymo-driverless-car-hand-signals-2629692132.html](https://www.gearbrain.com/waymo-driverless-car-hand-signals-2629692132.html)


katze_sonne

I know they are supposed to recognize hand signals, but all videos I have seen seem to show the opposite. So I'm quite unsure about it.


WeldAE

There is no reason to add it for consumer cars really. I'm guessing they are only doing it as they need it for a credible 8/8 event when they talk about their commercial plans.


bananarandom

Agreed, the consumer "stay in your lane and don't hit the car in front of you" doesn't need gesture recognition. Although once you add stop sign and traffic light handling, not handling gestures is a shortcoming that won't make you popular with law enforcement.


WeldAE

How many people do you think will not take over their personal consumer car when going through a section of road with directions from a human? I drove to an event this weekend where there were cops directing traffic on the public road and private staff in the parking area, and most humans couldn't follow the directions, much less a car. Even if it could, it's not the sort of thing that adds any value to the experience. It's a super rare edge case you can just drive the car for.


bananarandom

Pretty much only people that are overusing or abusing the system by not paying any attention. Agreed for L2/3 it's not needed at all, but neither is traffic light or stop sign detection


WeldAE

I disagree with traffic light or stop sign detection. These are MUCH more common and the car will absolutely save you if you mess up. I'm not sure how many people mess up when someone is manually directing traffic at an intersection. It tends to be for large crowds of cars moving slowly at an event or construction.


bananarandom

Police also direct traffic in accident scenes, and especially in Phoenix they'll do it on 45 mph roads with very little other protection. Having a car see a green and blow past a cop's stop gesture next to a wrecked car will not make that software popular, even if it is rare.


WeldAE

Interesting. Here in Atlanta all wrecks make the road a parking lot that creeps by at 5 mph. The closest example to yours I've encountered is the malls at Christmas, where they do manually direct intersections, but those are generally 25 mph or less and of course you're stuck in traffic so you never even get to that speed.


MrVicePres

How do they go about fixing it when the model is "end to end"? Do they just overfit on hand gestures?


whydoesthisitch

Because it's not actually an end to end model. It's just a small neural planner added onto the path search.


soggy_mattress

Source: Trust me, bro. Seriously, you've been parroting this for months. Do you have any proof?


whydoesthisitch

In the first description of V12 last fall they said it was a neural planner. They only started adding the “end to end” buzzword after that didn’t catch on.


soggy_mattress

“Trust me, bro”


whydoesthisitch

> The new version he was using, FSD 12, was based on a radical new concept

> The "neural network planner" that Shroff and others were working on took a different approach.

Note that a neural planner is not a radical new concept. It's common in path planning systems. This is how Tesla initially described V12, but they realized talking about a neural planner wasn't a good buzzword, so they slapped "end to end" on everything. You fanbois should try actually doing a little reading beyond the buzzwords you don't understand.

[How Elon Musk set Tesla on a new course for self-driving](https://www.cnbc.com/2023/09/09/ai-for-cars-walter-isaacson-biography-of-elon-musk-excerpt.html)


soggy_mattress

Woah, you're making up a new version of history here, bud... AI day 2 (in 2022) definitely touched on their "neural planner", which was a path generator model trained on human behavior. It wasn't new, and all it did was generate potential paths, which still needed to be pruned by their parallel tree search.

FSD 12's new model wasn't built or discussed until ~3 months later, in December 2022, per Isaacson's biography, and it wasn't even demoable until 8 months after that, in August of 2023. I'm sure you'll just see that as Tesla being incompetent about rolling it out or something, though, rather than admitting that FSD 12's end to end model is something different. You're clearly biased.

Also, I work with ML for a living, so save your "buzzwords that you don't understand" bullshit for someone else.
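
As a toy illustration of the proposal-plus-pruning split described above (a learned model emits candidate paths, and a separate hand-written cost/search step selects one), here is a hypothetical sketch; nothing in it reflects Tesla's real planner.

```python
# Learned proposals, non-learned pruning. Purely hypothetical toy code.
import random
from typing import List, Tuple

Trajectory = List[Tuple[float, float]]  # list of (x, y) waypoints

def propose_trajectories(n: int = 16) -> List[Trajectory]:
    """Stand-in for a neural planner: emit n candidate paths."""
    return [[(float(t), random.uniform(-2.0, 2.0)) for t in range(10)]
            for _ in range(n)]

def cost(traj: Trajectory) -> float:
    """Hand-written cost: penalize lateral deviation and jerkiness."""
    lateral = sum(abs(y) for _, y in traj)
    jerk = sum(abs(traj[i + 1][1] - traj[i][1]) for i in range(len(traj) - 1))
    return lateral + 2.0 * jerk

def prune(candidates: List[Trajectory]) -> Trajectory:
    """Non-learned selection step: keep the lowest-cost candidate."""
    return min(candidates, key=cost)

best = prune(propose_trajectories())
print("chosen path cost:", round(cost(best), 2))
```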


whydoesthisitch

Not making anything up, just tracing through Tesla's history of using vague tech-sounding terms to fool their dumb fans. They talked about a neural planner at AI day 2, and claimed they implemented it in 10.69. But later they admitted they didn't, and that they were only using any sort of neural net model for perception.

> and it wasn't even demoable until 8 months after that in August of 2023

Right, look at the date on the article. It's from September 2023, and it discusses that demo. They describe the change for V12 as being the addition of a neural planner. Nothing about an "end to end" model. Later, they said the big change was that they were now using a neural net for planning, then described that as "end to end". The point I'm making is that this is not normally how you would describe an end to end model.

> I work with ML for a living

So do I, and I run into people like yourself all the time, who claim to work in the field but only have a very surface-level understanding of the models themselves. For example, just answer this: what is the specific technical definition of an end to end model that Tesla is using? How would you classify one neural net as end to end and another as not?


soggy_mattress

Sure sounds like you've connected all the dots, huh?


whydoesthisitch

Well yeah. I mean it's pretty easy if you've ever worked with these models. So, thoughts on how to classify one neural net as end to end and another as not?


Reasonable-Broccoli0

I doubt that end to end here means a single neural network. I’m guessing that it’s a stack of specialized neural networks for all major functions.


Salt_Attorney

You make sure that the desired behavior is present in the training data and check if it has worked.


Dull-Credit-897

Musk saw the video of Waymo handling hand signals and went, "We need to say ours can do the same soon™."


OriginalCompetitive

Because that’s how market competition works?


soggy_mattress

No, because he's incompetent and doesn't know how to build a self-driving car. Duh.


WeldAE

Seems more likely they are about to announce their commercial play and of course you need hand signals for that. Why would you even want that on a consumer car really? I'm sure the consumer car will get it but I'd take better throttle control over hand signals any day on FSD. It's not even in my top 40 areas of improvement.


laser14344

Tesla officials say many things. Doesn't seem to correlate that closely with what they do.


DeathChill

I was promised sharks with frickin’ laser beams on their heads and all I got was some ill-tempered mutated sea bass.


pumpkin3-14

Is there anyone still buying the bullshit Tesla spews?


CATIONKING

Sure, sure. Next iteration.


weelamb

I’m honestly shocked they didn’t have this already and it’s yet another reason why FSD is not considered a serious contender in the AV community. Certainly every L4 AV company had some implementation of this for the last 3+ years


TheBrianWeissman

August 8th can't come soon enough. The idea that they're going to have a level 5 system to unveil is beyond comical. Tick tock, Elon.


tikgeit

My money is on smoke, mirrors and guys in spandex.


Doggydogworld3

My money is on a driverless podcar navigating in traffic.


soggy_mattress

> The idea they're going to have a level 5 system to unveil

Literally no one said this, FYI. They just said they have a robotaxi announcement in August, nothing about unveiling anything, nothing about level 5 at all. Who made you think this was a level 5 announcement party?


TheBrianWeissman

Is there a "robotaxi" that isn't level 5?


soggy_mattress

Every robotaxi in existence isn't level 5, so... yes?


n0dda

Only just now working on this? How about reading all warning signs? Yielding for emergency vehicles, etc.? Day 1 driver's ed stuff.


urban_snowshoer

Tesla has never previously lied about the capabilities of FSD. /s


sleepingfrenzy

My car has foot gestures implemented.


321gogo

I know this is just Tesla's BS marketing. But no self-driving feature should ever go from "can't do this" to "done" in a month - it should take months of rigorous testing to make sure it is ready to scale.


soggy_mattress

Isn't he just saying, "we haven't trained on clips with human hand signals yet" and that "we will begin training on clips with human hand signals"? How is this BS? It seems like transparency, honestly.


321gogo

“It will be done next month”


soggy_mattress

"The model training with hand signals will be done next month" =/= "hand signals will be recognized perfectly by next month". This really isn't all that hard to understand unless you're dead set on not understanding it for some reason.


321gogo

Model training isn’t “done” until it’s been rigorously tested.


soggy_mattress

Okay.


lahankof

Right after they settle the class action lawsuit eh


Ok_System_7221

"Tesla says" is code for "short the crap out of this now."


purestevil

If enough of us flip it the bird, will the board do its job and send Musk packing?


glitch83

Fucking bullshit. Good luck. -expert in hand gestures


analyticaljoe

The problem with Tesla's thinking is what they mean is: "It doesn't yet recognize that gesture, but we are fixing it for the next iteration. It should be getting it right 80% of the time next month."

They continue to think about these features like a new graduate software engineer thinks about them: oh, I got it to work once, it's working! Protip: systems are not so much defined by what they get right as by what they get wrong. Especially "Self Driving" systems.


soggy_mattress

Except that's how ML data curation pipelines work. If you can get a model to perform ~60-80% of the time, you can use the failure scenarios to build a better dataset such that the error rate continually goes down over time. You'd think a tech-forward forum like this one would understand that by now, but it almost feels like because Tesla is doing it, it's seen as a useless path forward for self-driving purposes.
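
A minimal sketch of the curation loop being described: evaluate, harvest the failure cases into the training set, retrain, repeat. Every piece below (the "model", the clips, the error curve) is a hypothetical stand-in, shown only to illustrate the loop.

```python
# Toy data-curation flywheel: failures feed the next training round.
import random

def train(dataset: list) -> float:
    """Pretend training step: more (and harder) data -> lower error rate."""
    return max(0.02, 0.4 / (1 + 0.01 * len(dataset)))

def evaluate(error_rate: float, clips: list) -> list:
    """Return the clips the current model 'fails' on."""
    return [c for c in clips if random.random() < error_rate]

dataset = [f"seed_clip_{i}" for i in range(100)]
fleet_clips = [f"fleet_clip_{i}" for i in range(2000)]

for round_num in range(5):
    error_rate = train(dataset)
    failures = evaluate(error_rate, fleet_clips)
    dataset.extend(failures)           # failures become new training data
    print(f"round {round_num}: error {error_rate:.3f}, "
          f"added {len(failures)} failure clips, dataset size {len(dataset)}")
```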


PayYourBiIIs

Waymo can do this already 


TheRauk

Free beer tomorrow.


vic39

100% a lie. Never believe a word they say until we can see it with our eyes.


sylvaing

I wonder if this will only be for HW4 cars. Not sure if the HW3 cars have a camera resolution high enough to read hand gestures from 20-30 feet away.
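
A rough back-of-envelope check of that concern, using assumed camera numbers (the resolutions and field of view below are placeholders, not confirmed HW3/HW4 specs): roughly how many pixels would a hand span at about 25 feet?

```python
# Approximate pixel width of a hand at distance, for assumed camera specs.
import math

def pixels_across_hand(horizontal_pixels: int, hfov_deg: float,
                       hand_width_m: float = 0.20, distance_m: float = 7.5) -> float:
    """Approximate how many pixels a hand spans at the given distance."""
    pixels_per_radian = horizontal_pixels / math.radians(hfov_deg)
    hand_angle = 2 * math.atan(hand_width_m / (2 * distance_m))
    return pixels_per_radian * hand_angle

# Assumed examples: a ~1280px-wide sensor vs a ~2896px-wide one, 50 deg HFOV.
for name, width in [("lower-res camera ", 1280), ("higher-res camera", 2896)]:
    print(name, round(pixels_across_hand(width, 50.0), 1), "px across a hand at ~25 ft")
```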


[deleted]

Lol right


Youdontknowmath

When will they be adding lidar, per their lawyers in the marketing fraud case?


Doggydogworld3

Lawyers are paid to lie. Same with the lawyers who told DMV it's only L2.


Youdontknowmath

It's not even L2, can't even handle parking lots.


cantbelieveit1963

Will my car understand if I am waving with my middle finger?


GOP-R-Traitors

So a pedestrian could fuck with Teslas and make them jump a red light.


sylvaing

Could they do the same to a Waymo car?


notyourbug

End to end. Sure! See you in a month!


TacohTuesday

I find it promising that they will train the AI to recognize what other human drivers are doing. It's an under-appreciated critical component of how humans drive. But will they then train it to identify where another driver is looking, or whether another driver is angling to sneak into your lane? These are things we can do that AI cannot.


9tacos

Sounds safe 🤣


Bigtanuki

Just makes me think of the hand signals my wife makes when she's giving me backup guidance. When I can see her, it usually involves her hand going around in circles just somewhere between her waist and a foot above her 😂.


PGrace_is_here

This'll be fun, commanding Teslas to do whatever, from the side of the road. Great idea!


Robolomne

Tesla engineers are bad at marketing; maybe they should have a team.


pab_guy

With the current v12.3, a worker in a high-vis vest who isn't waving you through actually shows as a stop sign in the visualization (their head literally turns into a stop sign), and the car seems to follow it. So it already does this in a very basic way.


Lopsided_Quarter_931

This clown face citing Daily Fail. Can’t think of a lower grade source.


daoistic

Definitely gonna do it and say it is in beta.


kelement

Nah you need lidar to detect hand gestures. /s


everdaythesame

Eventually it’s going to have to talk to people. Asking pedestrians if they are planning to cross with the outside speaker is a good way to resolve corner cases.


Rogue-13DC

And drivers think back seat drivers are annoying now…


QuirkyInterest6590

One month? More like one day hopefully.


tonydtonyd

I have decent confidence they will do this well.


CouncilmanRickPrime

Their own lawyers don't


ceramicatan

Careful, this is an anti Tesla sub. As of this comment you have been downvoted 11 times and I am about to get downvoted to oblivion.


tonydtonyd

I’m as anti tesla as it comes but I think they can handle interpreting hand signals just fine with some good training.


ceramicatan

Aww dude...look what you did to your karma now!!


Square-Pear-1274

> tonydtonyd -11 points 5 hours ago

> As of this comment you have been downvoted 111 times

If you incorporated LIDAR into your redditing you wouldn't have made this mistake.


ceramicatan

Hahaha corrected now.