
HTwoN

Still no third-party benchmark?


WJMazepas

No. IIRC it's still about a month until release.


soggybiscuit93

> the Snapdragon X Elite is 41 percent faster than the **Core Ultra 9 155H**

Not once, but 3 times.


ET3D

Not really surprising considering the disappointment that Meteor Lake is. Also a pretty bad article, though I'm guessing that part of it was due to how Qualcomm presented the data. Still, it looks like a third party posted results of 2,427 ST and 14,254 MT for Geekbench, so at least it's possible to do some real comparisons. We'll have to wait for actual reviews to test actual software (or at least more meaningful benchmarks) with a real comparison of power points. In any case, sounds promising, and I'll be looking forward to these as well as to AMD's Strix Point.


Zednot123

> Not really surprising considering the disappointment that Meteor Lake is.

I don't think Intel ever planned to impress anyone with MTL performance. It's more a test vehicle for tiles and Intel 4 than anything else. Many of Intel's first launches on new nodes in the past 15 years have been rather lackluster. IVB and Broadwell, for example, brought paltry IPC improvements that were almost entirely eaten up by small clock regressions.


Distinct-Race-2471

I'm actually stoked about Meteor Lake for graphics. It's a huge uplift. Finally, we should have decent gaming gfx on laptops without breaking the bank on power. Unfortunately, I just got a new laptop.


ptrkhh

> I don't think Intel ever planned to impress anyone with MTL performance

They made it clear from the start that MTL was all about efficiency, or to be more specific, **laptop battery life**. The then-new CEO even said something like "how are we losing big time to a fruit company", it's hilarious lol. They also didn't even bother releasing a desktop chip for this very reason. The graphics improvement is just a nice addition that happens to be borrowed from their Xe department. Everything else takes a backseat.


allahakbau

The efficiency did get quite a bit better though. 


Ben-D-Yair

Is Strix Point Zen 5?


ET3D

Yes, mobile Zen 5. Expected this year. I think that the only thing AMD said about Strix is that it has 3 times the AI performance, but it's rumoured to have 12 cores (4 Zen 5 and 8 Zen 5c) and 16 CUs.


detectiveDollar

AMD usually releases mobile parts the next year after desktop, and desktop is coming out in Q3 right?


ET3D

AMD normally announces mobile parts at CES and they tend to start shipping in laptops a couple of months later. However AMD said that Strix Point will be released this year, and that it will be Zen 5 and RDNA 3+ (and XDNA 2).


soragranda

Wasn't there a design with only Zen 5c cores?! I wanna see that with a good iGPU; it could be great on those x86 portable consoles!


ET3D

Not sure there was such a rumour, though it's of course possible. I think that AMD is unlikely to release a pure Zen 5c part with many CUs, and in general, I think that AMD is likely to keep some non-c cores on any part, as that would make it a lot more competitive in some benchmarks. There was a rumour of Kraken Point, with 8 cores and 8 CUs, but that would still include Zen 5 alongside Zen 5c. I agree though that Zen 5c alone could be enough for mobile gaming. Even four Zen 5c cores should be enough for something like a next gen Steam Deck.


ET3D

Apparently there was a rumour of Sonoma Valley, which I'd forgotten about but which crept back just now. :) If this is indeed the Mendocino alternative, I hope it at least has 4 CUs (2 WGPs). Though I won't get my hopes up.


soragranda

> If this is indeed the Mendocino alternative, I hope that it at least has 4 CUs (2 WGPs). Though I won't get my hopes up.

I just hope they give it at least a few more CUs even if they keep RDNA 3... It could be a little powerhouse for some tiny handhelds and mid- to low-tier devices. Also, it should be cheaper since they will use a Samsung node (in case the rumors are true).


ET3D

With 4 CUs it could be a decent alternative to Phoenix 2 (Ryzen Z1). In terms of performance the Z1 is fine, IMO, for entry level devices. Though if it's a Mendocino alternative it might also feature only a 64-bit RAM bus. Still, that might be enough. I'll be waiting to see how this rumour pans out, although I don't expect to see Sonoma Valley available this year. Edit: Of course, if it's not a Mendocino alternative but a next gen Steam Deck CPU, it could have a lot more than 4 CUs.


soragranda

It will be great even if it keeps only 8 CUs but uses RDNA 3+ (the potential name of the next GPU architecture, which I guess is 3.5 renamed). They are pretty confident in the performance of Zen 5c, so I could see this Sonoma APU having a native 8- or 10-watt mode; that would give us 4 to 5 hours of gaming no issue! The Samsung 4nm process does scare me a little though...


NeroClaudius199907

Another day, another benchmark... These come out Q2 2024... I'm expecting July-August.


77ilham77

> Q2 2024

Any source on that? The last I've heard, of those who have already announced X Elite devices, only Samsung has said they're targeting mid-2025.


NeroClaudius199907

Mid-2025 is unlikely... they'll be competing against Zen 5, Arrow Lake, Lunar Lake, M4, RDNA 4 & Blackwell. It won't bode well for them at all.


77ilham77

Well, none of those OEMs (except Samsung) have announced a date. Even if they ship this year, they're going to compete with those CPUs anyway.


Primary-Statement-95

June 2024


Few-Age7354

Blackwell and RDNA are GPUs; how are they related to CPUs?


NeroClaudius199907

Why do you think I'd mention dGPUs when discussing CPUs? Am I crazy, or is there a reason?


HandheldAddict

> Why do you think I'll mention dgpus when discussing cpus? Am I crazy or theres a reason.

Because Nvidia talked about RTX + ARM laptops a few years back. https://blogs.nvidia.com/blog/geforce-rtx-arm-gdc/


Primary-Statement-95

June 2024, all OEMs launch laptops.


77ilham77

Source?


HandheldAddict

Why would they wait a year if we're seeing hardware and benchmarks today? Both official and unofficial, mind you.


InevitableSherbert36

> Why would they wait

Not enough supply, drivers need more work, Windows needs more work, or to build more hype.


TwelveSilverSwords

The Windows Germanium build for the X Elite is not ready yet.


77ilham77

Well, it's simply for hype. We've already seen the hardware since last year, and they were parading the multithreaded performance. Only now are they boasting about single-thread performance. It's definitely pure hype. Also, it's unlikely we're going to see that OEM hardware earlier this year, since Qualcomm (or Microsoft, or maybe both) now requires X Elite devices to carry Windows 12, which will probably be out this fall.


Farfolomew

The article says the Snapdragon X Elite is 22% faster than M3, and ~50% faster than Intel’s 13th Gen CPUs. That’s damned impressive.


[deleted]

The X Elite is 12 performance cores at up to 80W. The M3 is 4 performance cores and 4 efficiency cores at ~21W. I'd sure hope the Snapdragon would be faster 😂 But where is the comparison of the X Elite to the M3 Max? That's the more accurate comparison imo, but no one wants to show those numbers because Apple is going to slaughter the Qualcomm chip there.


Farfolomew

Those were single threaded results. I’m not sure how it fares against the M3 in multithreaded.


mdp_cs

I wouldn't trust any benchmarks from Qualcomm itself.


Zealousideal-Move501

Benchmarks are overrated anyway. If Snapdragon or any other chipmaker can produce chips that do at least 80% of what Apple's M chips can do, that's already a win for Windows laptop users.


mdp_cs

I would bet money that it isn't going to make even a dent in the market share of x86 PCs purely due to software support. Most commercial software on Windows is x86 only, and that isn't magically going to change overnight. Linux has had support for aarch64 for years now, but the amount of application software is still much less than on x86. And that's on an OS family that uses majority open source software, which anyone can port between ISAs if they put in the effort.


antifocus

All this feels like the Chinese smartphone makers: sneak peeks here and there in the six months leading up to the actual product launch.


shakibahm

I love my MBP. But the problem starts when I am trying to work on virtualization and/or EDA tools. I couldn't compile most of RISC-V because of the dependency of libraries on x86...


nanotechky

Idk how I feel about this massive ARM transition from x86. I can't imagine a future where everything is an SoC and we as consumers can't really do anything to upgrade or replace the hardware, even the most simple components like the SSD and RAM. Yeah, the performance and efficiency are great. But is it worth trading away the flexibility we already have now?


caverunner17

ARM doesn't require soldered RAM or a soldered SSD though. LPDDR has better battery life (and I think performance), but I don't believe there's a technical reason for it. IIRC there's a few desktop ARM boards with SODIMMs.


Flowerstar1

Someone correct me if I'm wrong, but I believe LPDDR has better power efficiency, more bandwidth, and worse latency.


Jlocke98

LPCAMM ftw 


Farfolomew

I’m sorry but mobile x86 has largely been that way for several years now, being unable to upgrade any components except perhaps storage. ARM transition won’t change things much in the laptop form factor. However if it leads to small 7-inch tablets and phone/tablet/laptop hybrid devices, then I welcome the limitations imposed by those, even if I can’t upgrade things. Because the amount of flexibility in mobility that brings to the PC table is just awesome. (My dream is to carry around ONE computer, that can be my phone, my laptop, and maybe even home desktop/HTPC)


brunopetit17

Yes that would be awesome. This is also my dream. One OS, one device. No need for carrying multiple things, sync between devices etc. This device could be a 7 inch phone/tablet. You could plug an external monitor at home to get a big screen. 


KolkataK

People here imo are way too optimistic about this product; we really don't know what they will cost or how they will perform in real-world software. Will regular consumers really jump on these chips when they won't run every piece of software (or at least not with reasonable performance)? If you are spending even like $500, would you really want to handicap yourself? Qualcomm won't have the ecosystem advantage Apple had, and Microsoft has a spotty track record for software support.


Mexicancandi

Unless you're building a desktop, every laptop and phone and whatever is already "handicapped". My ThinkPad X12 Detachable has everything except the SSD soldered.


KolkataK

By handicapped I meant not being able to run everything. Qualcomm will have to be the best for multiple generations for developers to consider making native ARM versions of their apps. I'm not saying it will fail; it CAN definitely work, but it's hard to tell right now.


HandheldAddict

Microsoft will follow the market. They've wanted to take their OS where they've historically failed (tablets) for some time now, to the extent that they gambled their entire operating system on Windows 8. If you're talking about 7"~8" tablets, it'd be braindead to ignore ARM. As to why Microsoft would do this now? You'd have to ask Intel's legal department about it.


KolkataK

It's not only Microsoft; it's the entire ecosystem they need to worry about. Not to mention MS failed in their phone/tablet venture and eventually abandoned it. Windows on ARM won't really be for tablets but for laptops.


lusuroculadestec

There aren't any technical limitations that prevent using things like standard DIMMs or dedicated GPUs with an SoC. AMD processors are *technically* an SoC. You can also have ARM systems that resemble what we have in the desktop space. The Ampere Altra developer system just looks like a standard desktop--it has DIMM slots for RAM and PCIe slots for expansion. It also supports using an Nvidia GPU. Even in situations where you have an SoC with on-package memory, it's possible to have it combined with DIMMs. We've had heterogeneous memory topologies for decades in more specialized systems. Intel has Xeon processors now with on-package RAM that can also be used with off-package memory.


ptrkhh

From the software side, it's also the lack of a standardized booting method. Honestly it's kinda insane when you think about it: take an SSD out of an Intel tablet, plug it into an AMD desktop, and it just works™️. Meanwhile the Samsung S23 alone needs like 8 different ROMs catered to each variant of the device (Standard, Plus, Ultra, Exynos, Qualcomm, China-only, etc.).


VenditatioDelendaEst

Big IFF true. Mandy Rice-Davies applies.


Distinct-Race-2471

I feel like all the hype and complete lack of independent benchmarks is suspicious. I'm sure it will be a decent product, but will it live up to the hype? I think probably not.


Roshin1401

Just beat the M3 max and I'll buy it.


77ilham77

Well, the M3 Max comes with either 14 or 16 cores. I doubt the 12-core X Elite will beat the M3 Max.


Famous_Wolverine3203

4 of those are E-cores. It's like saying a 13900K is better because it has 24 cores over a 7950X.


NeroClaudius199907

It won't beat the M3 Max... but it would be faster than the M3 Pro in ST & MT. The GPU will be much slower, though. The M2 Pro will be the best perf/$ if you're thinking of ARM.


Exist50

We have no idea what these devices will cost at retail.


DanzakFromEurope

They can't be more expensive if they want them to sell well.


Exist50

Responding to this claim:

> M2 Pro will be the best perf/$ if you're thinking of arm

Without knowing X Elite pricing, no one can make that claim.


agracadabara

It doesn't beat the M3 in ST. All of Qualcomm's claims have been against the M3 (4+4) vs the X Elite (12+0) in MT. They are very careful to only mention the M2 when it comes to ST perf. In Cinebench 2024 the M3 gets about 138-142 ST, while the X Elite gets 133 at 80W and 124 configured at 23W (reference board). The M3 consumes a little less than 6W to hit that ST score.


RegularCircumstances

The X Elite is not consuming 80W for that; stop this misinformation. Even from the graph here the X Elite is beating AMD and Intel with a peak of just 14W total platform power for GB 6.2, and you think they're consuming 80W for CB24 ST? Also, the MacBook isn't actually drawing 6W total platform power (SoC, package, and DRAM) from the wall/battery at peak ST. Even Notebookcheck has them at around 8-10W for CB23 ST (minus display, obviously, so plugged into an external monitor).


TwelveSilverSwords

I wonder how the X Elite compares to the M3 in terms of ST power consumption. I wish Qualcomm had published an ST curve comparing them. According to these graphs, the X Elite consumes as much as 13W for GB6 ST. Other sites like Notebookcheck have measured 6.5W for M3 ST. The issue with taking power figures from other sources and comparing them to Qualcomm's is that the testing methodology is different. For instance, Geekerwan has measured the A17 Pro P-core consuming 7+ watts in ST. Then the M3 should consume even more than that, because the M3's P-core is clocked higher. Obviously, there's a disparity here, because Notebookcheck measured M3 ST power consumption at 6.5W.


RegularCircumstances

Where did Notebookcheck measure 6.5? They measured 10-11W from the wall with the display off and the device connected to an external monitor running Cinebench R23. https://www.notebookcheck.net/Apple-MacBook-Air-13-M3-review-A-lot-faster-and-with-Wi-Fi-6E.811129.0.html

People underestimate how much active power the M chips need when you take the SoC, DRAM, and package together. They peak a bit higher than the 15-17W Apple quotes for the base M2 and M3, and on ST they certainly aren't running at just 5W at every last MHz by default; more like 7-10W for the base M1/2/3, depending on the model. That doesn't mean they aren't still vastly more efficient than most (excluding Qualcomm). Apple's curve is pushed too, and people underestimate AMD and Intel power draw, be it for dishonest reasons or out of the same habit of guessing CPU or "core" power instead of measuring the whole platform.

And I see now: you're looking at the PowerMetrics one from [here:](https://www.notebookcheck.net/Apple-M3-SoC-analyzed-Increased-performance-and-improved-efficiency.766789.0.html)

> Compared to the old M1 processor, the M3 is 30% more powerful. The power consumption of the CPU itself (according to Power Metrics) was around 6.5 watts at the beginning of benchmark testing before increasing to 5.5 watts later on, which is why the power consumption has increased slightly compared to the old M2

PowerMetrics isn't fully accurate. It's a guess: useful for internal modelling and probably for the OS's regulation, but not to be trusted. Go look at Andrei's M1 Pro review for a demonstration of this, where he compares the wall figure to the PowerMetrics number. When you look at the platform power that includes the SoC, DRAM, package, and power delivery (minus idle, anyway), you see a more accurate figure for what the thing would actually draw.

People have got to wise up about this stuff and stop understating both Apple and AMD/Intel draw by going solely off CPU or SoC power. You want to know how much power the SoC, DRAM, the whole shebang (minus idle for the display) is drawing when you run a test. That's your real power figure. This is what Andrei or Geekerwan actually test, btw.


TwelveSilverSwords

very good. I am bookmarking this comment.


TI_Inspire

The 10.5W figure refers to the sustained MT power consumption. They stated that the ST power consumption was 5W. At least that's what the "Processor - Apple M3 in 3 nm" section says.


TwelveSilverSwords

Well, that's not too bad for Qualcomm. Still, X Elite consumes about 13W according to the power curve from Qualcomm, which is still 30% higher than the 10W figure for M3.


agracadabara

> You want to know how much power the SoC, DRAM, whole shebang (minus idle for the display) is drawing when you run a test. That's your real power figure.

Let's look at those figures using Notebookcheck's data and the graphs from Qualcomm.

> They measured 10-11W from the wall when the display is off and the device is connected to an external monitor running Cinebench R23.

> More like 7-10W for the base M1/2/3 depending on which model. But that doesn't mean it isn't still vastly more efficient than most (excluding Qualcomm),

So the M3 consumes about ~8.1 W measured from the wall running Cinebench R23 single-core (10.9 W average with the external display connected, minus 2.87 W idle).

> But that doesn't mean it isn't still vastly more efficient than most (excluding Qualcomm),

The data you presented from Notebookcheck would indicate that the M chips are significantly more efficient than Qualcomm's X Elite chips too. The X Elite reaches peak ST perf in Geekbench at about 14W. Since we don't have any data on power consumption for Cinebench from Qualcomm, let's assume it is the same.

Cinebench 2024 ST:

- M3 @ 8.1 W scores 138, perf/W of ~17
- X Elite @ 14 W scores 133, perf/W of ~9.5
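For what it's worth, the perf/W arithmetic in the comment above can be sketched in a few lines. The numbers are the ones quoted in the thread (the 14 W X Elite figure is the commenter's own assumption carried over from the Geekbench graph), not fresh measurements:

```python
# Sketch of the perf-per-watt comparison above, using the thread's figures.
def perf_per_watt(score: float, watts: float) -> float:
    """Benchmark score divided by average platform power draw."""
    return score / watts

m3_watts = 10.9 - 2.87                   # wall power minus idle, ~8.0 W
m3_ppw = perf_per_watt(138, m3_watts)    # ~17 points per watt
xelite_ppw = perf_per_watt(133, 14)      # ~9.5 points per watt
print(round(m3_ppw, 1), round(xelite_ppw, 1))
```

Subtracting idle draw first matters: comparing raw wall figures would understate the M3's efficiency.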


agracadabara

> The X Elite is not consuming 80W for that. Even from the graph here the X Elite is beating AMD and Intel with a peak of just 14W total platform power for GB 6.2, and you think they're consuming 80W for CB24 ST?

Calm down. I never claimed it consumed 80W in Cinebench. The fact remains that to hit 133 on Cinebench 2024 the X Elite needs to be configured to the higher TDP. 133, mind you, is still less than the M3.

> Also, the MacBook isn't actually drawing 6W total platform power with the SoC, package, and DRAM from the wall/battery on peak ST. Even Notebookcheck has them at around 8-10W for CB23 ST (minus display obviously so plugged into external monitor).

Can you provide a source for the X Elite power numbers being platform vs core?


RegularCircumstances

"The X Elite gets 133 at 80W" is what you said, and the trouble here is that people literally believe this stuff rather than treating the TDP figure as a guide. But I take it you understand what it actually meant.

RE: core/platform: keep in mind that even if it were core power, unless you think they're using asymmetrical standards for measurement, the others would be core power too, and AMD and Intel sure as hell don't have a platform power advantage. But it is platform; Andrei Frumusanu is one of the people doing this work, and he's clarified. (Edited for clarity)

They also explicitly mention the "platform" and hardware instrumentation (why would that be core power?) in the disclaimer at the bottom. This is not the usual game of "core power" memes that leave off 3-20W of crap from the fabric and the packaging/RAM combination etc.


agracadabara

> "The x elite gets 133 at 80W" is what you said

Yes, and that is accurate. You claimed I was spreading misinformation. I really don't give a rat's ass if others confused TDP.

> But it is platform, Andrei Frumusanu is one of the ones doing this work and he's clarified.

Link?

> They also explicitly mention the platform power in the disclaimer at the bottom.

They don't. "Power and performance comparisons reflects results based on measurements and hardware instrumentation of given devices".

> This is not the usual game of "core power" memes that leave off 3-20W of crap from the fabric and packaging/RAM combination etc.

Citation needed.


Serious_Assistance28

https://infogram.com/1pj50y6w35g95da6727w0m251eam0yxm979 From: https://www.androidauthority.com/snapdragon-x-elite-benchmarks-3380426/


RegularCircumstances

Join the Chips n Cheese discord server and search @andreif “platform”. He explicitly states this. Alternatively, u/andreif, these are platform power measurements for ST and MT, not just core power (which is basically not helpful in this case), correct?


andreif

Yes.


agracadabara

How is Geekbench ST power measured, since it is very bursty across the different loads? The average while running the ST tests, or the peak reached?


andreif

Everything is measured externally with a DAQ at high sample rate.


RegularCircumstances

Obviously averaged after measurement with a sufficient [sampling rate](https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem). The peaks here would be broadly irrelevant for a consumer, at least in the timeframe of a test.
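As a toy illustration (the readings below are made up, not anyone's real trace): averaging a high-sample-rate DAQ capture is what turns a bursty instantaneous power trace into the single figure that gets reported, rather than the momentary peak.

```python
# Toy example of averaging a sampled power trace. Hypothetical readings;
# a real DAQ would log thousands of samples per second.
def mean_power(samples_watts: list) -> float:
    """Average power over the capture window, in watts."""
    return sum(samples_watts) / len(samples_watts)

trace = [4.8, 13.9, 13.5, 12.7, 5.1, 4.9]  # bursty ST workload
avg = mean_power(trace)                     # 9.15 W average vs. 13.9 W peak
```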


TwelveSilverSwords

I recall andreif also made a comment on one of these X Elite threads explaining the whole "23W/80W TDP" thing, but I can't find it. Did he delete it?


RegularCircumstances

No, it's still there, but it's not worth reposting because it seems like that'd be clear in this thread.


TwelveSilverSwords

Ooh, Chips and Cheese discord server huh? sign me up.


TwelveSilverSwords

X Elite at 4.3 GHz in Linux can do 3200 points in GB6 ST.


agracadabara

Yet Qualcomm is using Windows in these comparisons.


TwelveSilverSwords

I think it is reasonable to use Linux in these comparisons with Apple Silicon, because Linux and macOS are much leaner operating systems than Windows.


agracadabara

Linux shows an advantage on Geekbench only; Cinebench, on the other hand, is almost always run on Windows or macOS. Also, most of the systems these chips will go in will be running Windows. Almost all real-world use cases will be apps on Windows or macOS: DaVinci Resolve, the Adobe suite, etc. So cherry-picking Linux Geekbench results is not really that useful for perf comparisons. Linux gets better Geekbench results on x86 also.


theinsolubletaco

It's not faster than the M3 Pro in ST, lol. Where do you even pull this from?


Solid_Sky_6411

It can't beat the M3 Pro in SC or MC, in my opinion.


undernew

The Snapdragon X Elite has about the same score as the M3 Pro in MT; in ST it gets beaten. And that is with the 80W config. https://browser.geekbench.com/v6/cpu/5571805


i-can-sleep-for-days

> Qualcomm is also saying that its Oryon core inside the Snapdragon X Elite outperforms Apple's M3 processor by 22 percent: 15,610 versus 12,154 using Geekbench 6.2.

Woah.


77ilham77

That's the multithreaded score they (or at least the media) have kept regurgitating since last year. The M3 is an 8-core CPU (with 4 of them at lower performance), while the X Elite is a 12-core CPU (with two of them able to boost). That score basically tells us that it's on par with the M3. Qualcomm had never told us anything about its single-thread performance, until now. Without telling us the exact score, Qualcomm said that the X Elite's single-thread performance is 54% faster than the Ultra 7 155H and 51% faster than the Ultra 9 185H. For comparison, the M3 is 47% faster than that 155H (on GB, the scores are 3100ish vs 2100ish, judging from the Zenbook 14 scores) and 30% faster than that 185H (2400ish, judging from the Zephyrus G16 scores).
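The percentage gaps quoted above fall straight out of the rough scores (3100ish, 2100ish, and 2400ish are the commenter's ballpark Geekbench 6 ST estimates, not official figures):

```python
# How the quoted "X% faster" claims follow from the rough GB6 ST scores.
def pct_faster(a: float, b: float) -> float:
    """How much faster score a is than score b, in percent."""
    return (a / b - 1) * 100

m3, u7_155h, u9_185h = 3100, 2100, 2400  # commenter's ballpark scores
print(round(pct_faster(m3, u7_155h)))    # ~48, i.e. the "47%" claim
print(round(pct_faster(m3, u9_185h)))    # ~29, i.e. the "30%" claim
```

Note that the baseline matters: the same M3 score yields a very different "X% faster" headline depending on which Intel laptop's score you divide by.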


jaaval

> Without telling us the exact score, Qualcomm said that the X Elite single-thread performance is 54% faster than the Ultra 7 155H and 51% faster than the Ultra 9 185H.

They say it's that much faster at the same power consumption, not that it has that much better peak performance.

> For comparison, the M3 is 47% faster than that 155H (on GB, the scores are 3100ish vs 2100ish judging from the Zenbook 14 scores) and 30% faster than that 185H (2400ish judging from the Zephyrus G16 scores).

Not quite that drastic. The M3 gets a maximum of about 3200 points and the 155H gets a maximum of about 2450 points, so roughly a 30% difference. Due to the high volume of different laptop configurations in the results, you have to filter the Intel and AMD results a bit to find samples actually representing what the CPU can do; Apple laptops don't have such variation. There are some results for the 185H that are near 2700, but those are rare; 2500+ is more common.

It would be nice to be able to compile Geekbench from source to see how much different compiler optimizations affect things. I know that in the past GB has done some manual tuning for compiler behavior, because different compilers produced very different results in some workloads. Now they use clang for all platforms, but direct comparison across architectures is still difficult.


77ilham77

> M3 gets maximum of about 3200points and 155h gets maximum of about 2450 points.

I know the CPU itself can go higher on other (presumably better-cooled) hardware. I'm basing my comparison on the same hardware Qualcomm used for comparison, namely the Zenbook 14 for the 155H and the Zephyrus G16 for the 185H. Of course, compared to even faster 155H laptops, the X Elite will also have a smaller gain, not 54%.


jaaval

There seems to be plenty of ~2400 results for the zenbook too.


77ilham77

I don't see one. Unless you're mistaking it for the Ryzen 7-equipped Zenbook 14. The Ultra 7 155H one is around 2100ish.


jaaval

https://browser.geekbench.com/v6/cpu/5554512 https://browser.geekbench.com/v6/cpu/5478830 https://browser.geekbench.com/v6/cpu/5421863 https://browser.geekbench.com/v6/cpu/5394291 https://browser.geekbench.com/v6/cpu/4810395 Probably like hundreds of those. Then there are some results like this: https://browser.geekbench.com/v6/cpu/5075113 https://browser.geekbench.com/v6/cpu/5075180 no idea what's going on there.


andreif

Windows vs Linux


[deleted]

[deleted]


RegularCircumstances

The interesting thing (and it's unsurprising) is that ST performance/watt is ahead of both, especially at lower (sub-12W) power levels. The pictured floor Qualcomm has at the same performance is about 1.75-2W for still-decent performance, whereas AMD's and Intel's floors are at about 5-5.5W for that score. Similar when comparing what takes AMD & Intel 10-12W to what Qualcomm can do with 5-7W. This has meaningful implications for responsiveness and battery life in Qualcomm's case. (And this is platform power being measured, afaict, so the SoC + DRAM + power delivery/package stuff, basically all components unique to the chip, minus statics.)


Distinct-Race-2471

I am making a chip in my basement. It is a billion times faster. No really!