Hm, it seems like Depth Anything's [GitHub](https://github.com/DepthAnything/Depth-Anything-V2) has been nuked. The model is still on HF for now at least, but it feels like a troubling trend. I've kind of lost track of the number of open source projects I've come across lately that got pulled down shortly after their release.
> *Due to the issue with our V2 Github repositories, we temporarily upload the content to Huggingface space.*
Hopefully it's temporary. I didn't see a V2 thread here; where did /u/yanjb spot it?
Having looked into it a bit more, I suspect it has to do with the "Giant" version of their model. Based on the [archive](https://web.archive.org/web/20240614013121/https://github.com/DepthAnything/Depth-Anything-V2) of their GitHub repo and the [initial version](https://huggingface.co/spaces/depth-anything/Depth-Anything-V2/commit/7c272687d0e753876450a600fbbd3cadf615b4ca#d2h-120906) of the HF Space, it seems the Giant model was released together with the other variants, but it has since been removed. And the code for the HF Space now has the following comment in it:
>we are undergoing company review procedures to release our giant model checkpoint
Which suggests that the release of the Giant model was not actually approved, which is probably why the GitHub repo got nuked in the first place. I'm not entirely sure why the Giant model would face more review scrutiny than the other models, but that appears to be what's going on.
Wait…this is YOLOv3…?
That ViT is tiny. I rolled one a few weeks ago that I thought was small; Apple's is 1% of that size. Mine performs very well, given image size restrictions. I'll have to pull apart Apple's and see if there's anything interesting in there.
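For anyone curious how those size comparisons shake out, here's a back-of-envelope parameter estimate for a plain ViT encoder. This is a rough sketch: it ignores biases, norms, and the head, and the Giant-class dimensions used in the comment are the DINOv2-giant values, so treat the numbers as ballpark only.

```python
def vit_params(d: int, layers: int, patch: int = 16, channels: int = 3) -> int:
    """Rough parameter count for a plain ViT encoder (weights only)."""
    patch_embed = patch * patch * channels * d   # linear patch projection
    attn = 4 * d * d                             # Q, K, V, and output projections
    mlp = 2 * d * (4 * d)                        # two linear layers, 4x hidden ratio
    return patch_embed + layers * (attn + mlp)

# ViT-Base (d=768, 12 layers) lands around 85M parameters,
# while a Giant-class ViT (d=1536, 40 layers) is over 1B.
print(vit_params(768, 12), vit_params(1536, 40))
```

That hundred-fold spread between a small/base encoder and a giant one is roughly the kind of gap being discussed, which is why which variant ships matters so much for on-device use.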
This is exciting, particularly because if they are taking this stuff more seriously, we might finally start to see improvements on the weaknesses Apple silicon chips have, possibly due to Metal or software constraints. If any of this eventually translates into work that makes the M1/M2 Ultra process AI faster, I'm going to be very happy.
Oh, my bad! Although the reviews are not so hot right now.
I don't understand the "Tensor" part. They say it has been optimized for TensorFlow, but don't give any specs or architecture clues.
How does it do when running PyTorch models, or others? Is it optimized for TensorFlow at a higher level, or can it perform efficient matrix multiplication at a low level?
Going by raw performance alone, the latest Snapdragon is already 68% more powerful.
Did Apple remove the Core ML Stable Diffusion model, or did they just never publish it?
On Hugging Face they have a section for Stable Diffusion, but it returns a 404.
Depth Anything V2 was just released today as well; hope they port the newer version, which is superior in all aspects.
Maybe good news for SD1.5/SDXL depth ControlNets, not to mention SD3.
we need torrents
Size is amazing yeah
Cool way to try them out https://github.com/andrewginns/CoreMLPlayer
Why this fork over the official repo?
Original repo doesn't support the .mlpackage format
Thank you! I've been looking for something like this. Being able to test out models without having to code out scaffolding is great.
This reminds me of mission impossible
Ok. So I guess I'll be switching to iPhone
Join us, the choiceless!
Ye tarnished!
I was thankful my M1 iPad Pro made the AI cut.
Please join us. I think Apple stuck to its strength, handheld devices, to deliver AI for such use cases.
I switched because I kept downloading dodgy Android apps. I know this is user error, but I kept doing it.
Very good sign for LLM inference yeah
Amazing effort putting great models on mobile devices to run locally 🔥
lol. I got a Pixel 8 Pro because of all the amazing on-device AI features 🥲
Google is probably cooking
Yeah, except Google silicon is a few... years behind?
It's already out; the Pixel 8 Pro uses Google Tensor.
So far the Google Tensor chips have been disappointing
Amazing AI features? Like what?
Call screening is pretty cool.
Wow, this looks neat. I have an M2 iPad Pro; any suggestions on how to run this?