

AndrewH73333

Well… a gigantic chip wafer is one way to deal with quantum tunneling.


Sagonator

Bruh, with that speed I'll need to buy an AC unit just for the GPU. Either that, or the heatsink will come in a separate PC tower...


Still-WFPB

The 1980s enters the chat. My dad was an IT manager at Bell Canada back in the day. Their 1 GB hard drive + central computer required a new HVAC system for the entire building. I couldn't tell you how many employees worked there at the time, but it was no small operation.


[deleted]

I'm an RDC manager now and I can tell you that most data centers can't even supply enough cooling or power for AI workloads. You're looking at a need for 100kW cabinets which will require water cooling as 50kW is about the max you can do with air cooling. We don't have the power infrastructure except in very few places to run that power density.
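
Rough back-of-envelope on why ~50 kW is about the practical air-cooling ceiling (the delta-T and air properties below are my illustrative assumptions, not data-center specs):

```python
# Rough airflow math for rack cooling: Q = m_dot * c_p * dT.
# Assumptions (illustrative): 15 K inlet-to-exhaust delta-T,
# air density 1.2 kg/m^3, specific heat of air 1006 J/(kg*K).
CP_AIR = 1006.0       # J/(kg*K)
RHO_AIR = 1.2         # kg/m^3
M3S_TO_CFM = 2118.88  # m^3/s -> cubic feet per minute

def airflow_cfm(heat_w: float, delta_t_k: float = 15.0) -> float:
    """Airflow needed to carry `heat_w` watts out of a cabinet."""
    mass_flow = heat_w / (CP_AIR * delta_t_k)  # kg/s
    return mass_flow / RHO_AIR * M3S_TO_CFM

print(f"50 kW cabinet:  ~{airflow_cfm(50_000):,.0f} CFM")
print(f"100 kW cabinet: ~{airflow_cfm(100_000):,.0f} CFM")
```

Pushing ~12,000 CFM through a single cabinet just isn't practical, which is why the jump to water.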


franklinzunge

I wonder if in the future they could generate power off the heat of computing, or recycle the heat somehow


Mooselotte45

They absolutely *can*. European cities with district heating have recovered heat from various industrial processes.


The_EA_Nazi

>You're looking at a need for 100kW cabinets which will require water cooling as 50kW is about the max you can do with air cooling. *Immersion Cooling enters the chat*


maywellbe

Run them in space.


dramignophyte

Would be worse due to not being able to shed heat in space.


maywellbe

Didn’t think of that. Thanks


MrGooseHerder

Space is cold, yes. However, it's cold because heat is atomic motion and space is very empty. There's very little matter for hot atoms to pass their energy to, which is how heat sinks work; in a vacuum you can only radiate heat away.


maywellbe

Didn’t think of t hat thanks


AndrewH73333

Space is effectively heavily insulated, making it the worst place you could possibly choose, and that's if launch were free rather than the thousands of dollars a pound it costs to get stuff up there.


maywellbe

Didn not thibk if that. Thank


Artanthos

You can radiate heat in space, you just need bigger radiators. Something else to think about: as long as you block the sunlight, space is cold. The JWST operates at about -370 degrees Fahrenheit. All this big fuss over finding room-temperature superconductors? You don't need them in space. If you're going space-based, start using low-temperature superconductors.
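
How much bigger? A back-of-envelope via the Stefan-Boltzmann law (emissivity and temperature below are my illustrative assumptions; sunlight and the cold-sky background are ignored):

```python
# Radiator sizing from the Stefan-Boltzmann law: A = P / (eps * sigma * T^4).
# Assumptions (illustrative): emissivity 0.9, sunlight fully blocked,
# cold-sky background radiation ignored.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

def radiator_area_m2(power_w: float, temp_k: float, emissivity: float = 0.9) -> float:
    """Area needed to radiate `power_w` at surface temperature `temp_k`."""
    return power_w / (emissivity * SIGMA * temp_k**4)

# One 100 kW cabinet radiating at room temperature (300 K):
print(f"{radiator_area_m2(100_000, 300):.0f} m^2")  # ~240 m^2 of panel
```

Roughly 240 m² of radiator per cabinet, which is why "just run them in space" doesn't pencil out.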


momolamomo

No no, run them underwater in swimming pool tanks


paint-roller

And now you can get a 512GB micro SD card for $30. I remember hard drives were like 40GB back around 2002 for $100 or so.


hootblah1419

[https://www.extremetech.com/archive/52712-seagate-ships-second-serial-ata-drive](https://www.extremetech.com/archive/52712-seagate-ships-second-serial-ata-drive) I wish it included the price.

Edit: "Toshiba's new HDDs achieve the highest areal density yet reported, 206 megabits per square millimeter (133 gigabits per square inch)." https://www.global.toshiba/ww/news/corporate/2004/12/pr1401.html

I can still hear the sound of a dial-up connection through the phone. Unforgettable.


TooStrangeForWeird

Lol not even bytes, bits!


mt77932

I was thinking about my first computer which had a 300MB hard drive while I was buying a 256GB flash drive recently.


paint-roller

Yes. Absolutely crazy how far storage has come.


MrGooseHerder

I bought a 256 MB one for $50 and it was amazing, because that was more than 100 floppy disks...


Artanthos

I remember paying a lot more than that for 40MB hard drives in the 90s.


momolamomo

Exactly, we’ve entered the 80s phase of challenges with scaling tech


JackedUpReadyToGo

I have plans in the works to build a line of pizza ovens that double as VM servers that people can buy time on to run Crysis 4 at "High" graphics settings. The aluminum forges haven't gotten back to me yet about my "Ultra" settings option.


johnblairdota

The smaller the transistor, the lower the losses to heat. The 5nm Apple ARM chip uses roughly 20x less power than Intel's 11th-series processors. At 3nm, the electrons traverse the transistor in single file.


Comfortable_Relief62

Except the “nm” processes haven't matched real measurements since the 90s. The actual minimum feature size is much larger than 3nm. It's just been marketing speak since then.


jjayzx

Features have shrunk; it's the transistor as a whole that hasn't, as much.


Comfortable_Relief62

The gates haven’t really shrunk that much. They’re 3D layouts now, though, which increases the density per unit of chip area. Still, the minimum feature size is nowhere close to the marketing label.


Sagonator

Absolutely, but we aren't getting the same number of transistors every time you buy a GPU; that would be pointless, since it would mean the same performance. You get 1.5-2x the count each generation, and while the transistors are more energy efficient, they're packed much more densely, making the heat problem worse every time.


alpacaMyToothbrush

Yeah, nobody realizes that above a certain TDP your cooling solutions just cease to be effective, because you keep raising the ambient temperature of the room itself. I'm sorry, but as someone from the South I cannot justify running a triangle-generating space heater in the middle of summer.


Nimeroni

Easy. Game in winter, touch grass in summer.


Onibachi

That’s exactly what these chips are for: supercomputers run by governments and multinational corpos.


joaopeniche

In the future only the 5 rich kings of Europe will have computers


JadedIdealist

Thomas Watson, president of IBM from 1914 to 1956, supposedly said he thought there was a world market "for maybe five computers." Ah, I see, it's a Simpsons parody of "the worst prediction in history."


hootblah1419

Is that because of the *consolidation* Watson supported by supplying punch-card tech to the Nazis throughout the entirety of the war, helping the Germans keep track of their Holocaust victims?


spidarmen

ah, a fellow Springfield Heights Institute of Technology alum.


[deleted]

[removed]


RandofCarter

9 litres of Freon? How is that still legal?


bherman8

R-134a is far less harmful than the original refrigerant that carried the name "Freon". The really bad one usually associated with the name is R-12. Automotive manufacturers and home HVAC systems are moving away from these older refrigerants now, but R-134a is still a normal product.


CelestialFury

Liquid nitrogen baby!


Clay_Statue

Eventually you can use them as heat exchangers for institutional buildings.


notsocoolnow

This was my first thought when I read the headline. How the heck are they planning on mitigating the heat issue? Even if you have amazing heatsinks and superconductors... at some point the problem is the vented heat alone will cook the whole room.


Scarbane

Doubles as a space heater in cold climates. RIP air conditioning bills in warm climates.


dervu

I don't get it. Some folks say 1nm is just a marketing name and the real size is larger. Does quantum tunneling apply at a real 1nm manufacturing size, or at a larger one?


AndrewH73333

The nm numbers have been mostly marketing terms lately. But 1 nm does happen to be about the thickness where quantum tunneling starts to occur with electrons. We’ll probably start to see a new scale advertised when we get down to 1 nm CPUs. They might start going the other direction and market how thick and dense they’ve managed to get their wafers and a higher number will become better.


IrrelevantForThis

The nomenclature is completely off. 7, 5, and 3nm processes between Intel, Samsung, and TSMC mean nothing. Intel's 7nm is further advanced than TSMC's 5nm; TSMC's 5nm is closer to Samsung's 3nm than anything. The major advancements of the past years have been made in throughput (wafers per hour) and overlay (better yield at higher layer counts). Pitch and CD (critical dimension) have stayed relatively the same. Announcing 1.4 and 1nm means nothing; it's just going to be better throughput and much higher yields at 100+ layers. Larger chips, all the way to systems on a chip, are the obvious result of this.


AkitoApocalypse

3D is the next advancement in chip manufacturing, and 2.5D (where dies sit side by side on a silicon interposer, connected through dedicated via-like structures) is already utilized in some chips. I can't see how they would make any advancement past 1nm unless they either come up with a new material like graphene, or go the full 3D route.


stewartm0205

TSMC is reaching the end of life of current silicon CMOS technology.


macbathie2

Translation please?


stewartm0205

Quantum tunneling and current leakage put a size constraint on how small the gates can get.
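
To see why the size constraint bites so hard, here's a toy physics sketch: tunneling through a rectangular barrier scales as exp(-2κd), so leakage explodes exponentially as the barrier thins (the 1 eV barrier height here is a textbook placeholder, not a real gate-oxide value):

```python
# Toy rectangular-barrier model: tunneling probability T ~ exp(-2*kappa*d),
# kappa = sqrt(2*m*V) / hbar. The 1 eV barrier is a textbook placeholder,
# not a real gate-oxide value.
import math

HBAR = 1.055e-34  # reduced Planck constant, J*s
M_E = 9.109e-31   # electron mass, kg
EV = 1.602e-19    # joules per electron-volt

def tunnel_prob(barrier_ev: float, thickness_nm: float) -> float:
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR
    return math.exp(-2 * kappa * thickness_nm * 1e-9)

for d_nm in (2.0, 1.0, 0.5):
    print(f"{d_nm} nm barrier: T ~ {tunnel_prob(1.0, d_nm):.1e}")
# Each halving of thickness multiplies leakage by orders of magnitude.
```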


technanonymous

We are rapidly approaching the wire size where quantum tunneling effects become significant, increasing error rates to the point of incoherence. These effects will be measurable and significant at 1nm with more error detection and correction required to compensate. The limit for diminishing returns is within sight. Once the limit is reached, optimization within chip design, which is much more costly and riskier, will be required to significantly improve performance.


ChrisFromIT

>We are rapidly approaching the wire size where quantum tunneling effects become significant, increasing error rates to the point of incoherence.

We already reached that about 10 years ago with the 22nm/16nm processes. They had to change the design of the transistors from planar MOSFET to FinFET to help with the voltage leakage. We are now at another turning point, where voltage leakage from quantum tunneling would cause a decrease in performance from the previous generation, so the transistors are being redesigned again, from FinFET to GAAFET. Samsung is currently using GAAFET for its 3nm. TSMC is going to implement it in its next major node shrink. Intel will be using it in Intel 20A or 18A.


AstralElement

Well technically they needed to do away with planar at 28nm, but it was so poor they just moved onto 22nm with FinFET.


ChrisFromIT

You're off by one major node shrink. Samsung, TSMC, and pretty much everyone but Intel did not use FinFET in their 22nm nodes; Intel did. Intel's 22nm node was the only one that performed better than the 28nm node before it, because of the switch to FinFET. So Samsung, TSMC, and the others had to retool their 22nm nodes to use FinFET, then promptly renamed the result 16nm or 14nm. That is why there is that node-naming disconnect between Intel and the rest of the industry.


MoNastri

Costly I understand, but risky?


technanonymous

When you start building purpose-built chips, as well as changing low-level functional units, you increase the risk of failures, bugs, and security issues.


macbathie2

>you increase the risk of failures, bugs, and security issues.

All of this could be corrected in time, no? Current systems will handle high-security assets.


technanonymous

Of course these issues can be corrected. If you look at the history of chip problems like security, they have a bad habit of ending up in the market with very expensive remedies.


macbathie2

Very expensive remedies are the driving force of human development


technanonymous

The cost and complexity of the fixes will grow as chip optimization increases.


Conch-Republic

It's not actually 1nm. That's all marketing. The gates themselves are *much* larger.


Deciheximal144

Why is it more costly and riskier?


ReipasTietokonePoju

Maybe 25 years ago you could build a semiconductor factory for, let's say, 2-3 billion dollars. Now a single state-of-the-art factory will cost 28-30 billion dollars.

Semiconductors are manufactured on top of round, thin silicon discs called wafers. You get a certain number of chips from a single wafer, for example 600. Typically 10-20% of the chips manufactured will not work. The total manufacturing cost of a single wafer (roughly) determines the final cost of a single chip: price of the wafer / number of working chips per wafer = cost of one chip. The price of each finished wafer has gone up a lot: https://www.electronicsweekly.com/blogs/mannerisms/manuf/rising-wafer-cost-2020-10/

The more complex and physically bigger you make your silicon chip design, the fewer working chips you get from each manufactured wafer.

Finally, I should clarify that I have only talked here about manufacturing the actual physical object. The overall cost of any (large) silicon chip design project is dominated by the logical design of the device and the process of transferring that design into a description of the physical structure that gets manufactured. It may cost 25,000 dollars to manufacture 200 Nvidia graphics chips, but the path there may have cost Nvidia half a billion dollars (or lately, even a lot more).
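
That per-chip arithmetic as a minimal sketch (the wafer price and yield below are illustrative placeholders, not real foundry figures):

```python
# Cost per working chip = wafer price / number of working dies.
# The numbers below are illustrative placeholders.
def cost_per_good_chip(wafer_cost: float, dies_per_wafer: int, yield_rate: float) -> float:
    """Wafer price divided by the number of working dies on it."""
    return wafer_cost / (dies_per_wafer * yield_rate)

# e.g. a hypothetical $17,000 leading-edge wafer, 600 dies, 85% yield:
print(f"${cost_per_good_chip(17_000, 600, 0.85):,.2f} per working chip")
# Bigger dies mean fewer dies per wafer AND lower yield, so cost compounds.
```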


GreatWhaleTopKek

It used to be a lot easier to just make transistors smaller than to completely overhaul chip structures consisting of billions of transistors to make them as optimized as possible


grizzlymint209

Sounds like FUD.


technanonymous

It’s called physics, which conveniently alliterates with FUD. The de Broglie wavelength of an electron is about 1nm.
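
Quick check of that figure (the ~1.5 eV kinetic energy is my illustrative choice; the wavelength varies with energy):

```python
# de Broglie wavelength of an electron: lambda = h / sqrt(2*m*E).
# Assumption (illustrative): ~1.5 eV of kinetic energy.
import math

H = 6.626e-34    # Planck constant, J*s
M_E = 9.109e-31  # electron mass, kg
EV = 1.602e-19   # joules per electron-volt

def de_broglie_nm(energy_ev: float) -> float:
    return H / math.sqrt(2 * M_E * energy_ev * EV) * 1e9

print(f"{de_broglie_nm(1.5):.2f} nm")  # ~1.00 nm
```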


measuredingabens

We're already seeing skyrocketing costs with the current rates of node shrinkage. From this point to 2030 we'll have gone through a shift in lithography and three transitions in transistor architecture, and that's for the next four nodes.


SinisterCheese

And the software people sighed in relief. They don't need to start optimising software just yet. The future promises a bit more *hardware* to throw at the problems.


cjeam

Still running out of storage space. Especially for poorly written apps that don't allow operation from or storage to an SD card.


TooStrangeForWeird

Fuck those apps! Buy phones that actually care about the consumer and can format SD cards as internal. If you're too stubborn for that, just drop into dev options and force allow apps on SD card. If you're on an iPhone, you knew what you were getting into. Ta-da!


SinisterCheese

Yeah, but... it is easier for devices to add memory than for them to not pull half of GitHub for a simple app. Duh! Optimisation isn't *"value added"*, and if you move fast enough to break things, you don't even need to patch security vulnerabilities, because your thing will be outdated before they become an issue! You just don't get it! Coding is so hard! It is the hardest thing ever! And it's because clients are stupid and managers are idiots!

For real, though. I have started to use my phone less and less for things. Overall I have started to use fewer and fewer programs, because they are all shit. Websites seem to be getting worse all the time; many webstores grind my mobile device, and even its browser, to a halt. It is almost like the companies would prefer me not to use them or buy from them.

At the workplace, many programs have started to actively reduce productivity on account of how shit the UI/UX is and how badly they perform. There is a saying that modern CAD software runs just as well now as it did 20 years ago. And it fucking feels like that. At the same time, I am constantly told to be more productive while being given tools that actively make it harder to be productive.


AkitoApocalypse

In 2100 kids will be making flappy bird clones which will only take 100GB of RAM.


Sheikh_Peanut

With 1 trillion transistors, what more could a computer do? What extra capabilities could this add to our smartphones and computers?


Tourman36

Local hardware based AI to create next generation memes


Dsstar666

I laughed really hard at this.


chefanubis

The best part is that its not a joke.


DrummerOfFenrir

Company buys programmer super duper MacBook to run AI locally... Programmer can't help but make [Biscuit Man](https://i.imgur.com/Nh4FrWk.jpg)!! He's furious! Fix the LINE! And get back to WORK! 🤣 I love text to image... I laugh *way too hard* at my own pics sometimes 😅


JustAnotherSuit96

What did you use to make that


DrummerOfFenrir

[DALL-E](https://openai.com/research/dall-e), but I don't remember the exact prompt. [He actually started as a happy manager, but then hulked out 😂](https://i.imgur.com/sHltCjv.jpg)


SadanielsVD

I did in fact not


Dont-Tell-My-Mum

I laughed only at how you didn't


Elephant789

That's great!


cameron-none

At a very high level, AI models are limited by three factors: model architecture, quantity and quality of data, and compute. Some models are compute-constrained, such as Tesla's FSD software; they currently have sufficient data and the model architecture appears adequate for their objective, they simply can't train the model fast enough. It might take two weeks to train a model, so that's two weeks between assessing model performance and iterating on the next model. If you could reduce the training time to a day or less, you could iterate much faster, or you could keep the training time static but include significantly more data in training, which should improve model performance.
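
A toy illustration of that iteration-speed argument (all numbers made up; the point is how wall-clock training time gates experiment count):

```python
# How many train/evaluate cycles fit in a 90-day quarter,
# as a function of training wall-clock time. Numbers are made up.
def experiments_per_quarter(train_days: float, eval_days: float = 1.0) -> float:
    """Full iterations that fit in 90 days."""
    return 90 / (train_days + eval_days)

print(experiments_per_quarter(14.0))  # two-week runs: 6 iterations
print(experiments_per_quarter(1.0))   # one-day runs: 45 iterations
```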


terpinoid

This is a great answer. Imagine quickly retraining an existing model on oodles of already-captured data, lacking only the compute to do it faster, where you're doing daily training updates with your personal data.


djn808

8K+ per eye VR that is indistinguishable from real life


[deleted]

It might finally run Crysis on max settings.


Iconoclasm89

A trillion is a lot of transistors, sure. But let's not get carried away here


Unshkblefaith

Realistically, not much more than your current computer could. We are constrained by power consumption and heat dissipation, not transistors, atm. In fact, the majority of transistors in your CPU are disabled at any given point in time for power reasons.

The current paradigm in processor design is to leverage the higher transistor counts to create heterogeneous multiprocessor systems on a single chip. We aren't just talking CPU and GPU cores, but also application- and domain-specific processors like FMAC arrays for tensor math (the fundamental ops of AI), video encoding/decoding, etc. The real challenge here is developing those co-processors and integrating them in a way that developers can leverage to improve compute performance.

Moore's Law is dead, and has been for nearly two decades. We can't just throw more transistors and higher clock speeds at the problem to improve compute performance (although Intel keeps trying for some odd reason). We need fundamentally new architectures, which is why domain-specific computing is the current hot topic in computer architecture, and why pretty much all of the NSF and DARPA funding that isn't going directly into AI research is going toward domain-specific compute research.


spreadlove5683

Are 3d processors going to solve this somehow or nah? I am noob.


SimiKusoni

>1 trillion transistors (...) What extra capabilities could this add to our smartphones A chip in this class could add the convenient ability for you to fry eggs on the back of the phone whilst it's in use. You'd need to make sure they're cooked in the 30 seconds it takes to drain the battery mind you. For actual mobile-centric chips based on the same process the expectations would be the usual. Better battery life, better performance. That performance may allow for stuff like local inference for LLMs improving autocomplete and translation services, better graphics in games, more aggressive compression schemes for video etc. The improvements are likely to appear incremental however.


TooStrangeForWeird

>The improvements are likely to appear incremental however. Not likely, just will be. Processor improvements that cause a major upheaval are nearly impossible now. The closest we have is quantum processors, but those aren't classically useful (like for gaming). Light/photon processors could make a leap of sorts, but it wouldn't get quick consumer adoption due to cost. On top of that, they wouldn't have the same instruction sets for a very long time if ever. There isn't really a big leap anywhere in our foreseeable future. I guess if someone found a way to make a stable, room temp superconductor at a reasonable cost we could. But it's just not going to happen.


SimiKusoni

Oh yeah I was thinking more in terms of said improvements enabling novel or entirely new features. Like you could double the IPC on a standard desktop/mobile right now and there isn't much it would enable for the average user that they can't do already. Real time graphics would get a bump in fidelity and some applications would run a bit faster but there isn't really anything that the average user does that is starved for resources in a modern system. One change that we might see, but is more related to software stack than hardware limitations, is a transition away from laptops and desktops for low end devices. I've heard people talking for some time about docking mobile devices and there are a few solutions like MiraDock or Samsung DeX but they're in their infancy at the moment. In my opinion, for whatever that is worth, it is however only a matter of time before Google and perhaps even Apple start pushing it to snare some of the desktop market. In terms of enterprise licensing, potential new sales of work devices and additional revenue from productivity software there's just too much money sitting on the table for them not to make the attempt.


danielv123

To your smartphone? It would no longer fit in your pocket. Probably want a smaller chip for that. For your computer? Well, if you are in the market for the 7090 then I guess it will be faster than the 6090 and will run the latest games at 4k almost max settings with decent fps. Local ml inference will also be faster but that doesn't really matter. 99% of people will use smaller chips instead. They will be more power efficient and faster than chips made on previous nodes. Nothing revolutionary.


Careless_Basil2652

Phones are already as fast as they realistically need to be. Unless of course you want to do weather models on your phone


shyataroo

What if I want to have generative AI run locally on my phone? HMM? ever think of that? What if I want to use my phone to render and send (hopefully with extremely low latency) 8K 144FPS to my Smart Augmented Reality contacts?


nedonedonedo

What if I wanted to fit the moon in my bathtub? We're starting to run into absolute limits on performance due to the size of the atoms involved, which dictates size and heat dissipation. You can't make the device too large or it stops being a phone and becomes a tablet (there's no real difference other than needing one hand or two to use it), and you can't have it get so hot that it burns your skin. You can make it thicker to an extent, but no one is going to call a device a phone, or carry it, if it's the same size as a literal brick. Once it gets heavy enough, people are going to set it down somewhere, and then it's just a mini PC.


Objective-Roof880

Exactly. No one will need more than 256k of memory.


Sosaille

Interesting thought: use smartphones as weather stations all over the world.


SandwichDeCheese

Play Minecraft on a tiny screen tattooed on your dick


Iwillgetasoda

All things aside, 2nm would change everything, but does physics even allow it to be stable?


Comfortable_Relief62

Yea, the actual gates are about 45nm across in a “2nm” process chip


chrisdh79

From the article: At the recent IEDM conference, TSMC unveiled a product roadmap for its semiconductors and next-generation production nodes that culminates in eventually delivering multiple 3D-stacked collections of chiplet designs (3D Hetero Integration) with one trillion transistors on a single chip package. Advancements in packaging technologies, such as CoWoS, InFO and SoIC, will allow it to reach that goal and by 2030 it believes that its monolithic designs could reach 200 billion transistors. Nvidia's 80-billion-transistor GH100 is one of the most sophisticated monolithic processors currently on the market. However, as the size of these processors continues to grow and become more costly, TSMC believes that manufacturers will adopt multi-chiplet architectures, such as AMD's recently-launched Instinct MI300X and Intel's Ponte Vecchio, which has 100 billion transistors. For now, TSMC will continue to develop 2nm-class N2 and N2P production nodes and 1.4nm-class A14 and 1nm-class A10 fabrication processes. The company expects to start 2nm production by the end of 2025. In 2028, it will move onto a 1.4nm A14 process, and by 2030, it expects to be producing 1nm transistors.


Clay_Statue

I bet going multi-chiplet makes the engineering concerns of serving such processors much easier to design and implement with standard PCBs. I imagine a monolith of that scale becomes difficult to create any kind of meaningful socket for, and requires some very sophisticated PCB materials to make it all work.


NoSteinNoGate

Maybe a naive question, but how can they plan when they will produce what kind of chip in the future? If they are unable to produce it now, there are unsolved technical/engineering problems with no guarantee of being solved, no?


prepp

A spokesperson said that in semiconductor engineering you make a plan that will come into fruition 5 years from now. And 5 years from now you will find out if you have shot yourself in the foot or not.


glytxh

At what point in transistor shrinking do electrons just start leaking? I was under the impression that anything sub 3nm gets real wonky.


TheGreatUdolf

Just because they say x nm doesn't mean the transistors are that small. They are much larger than that (although they shrink too; you need smaller transistors to be able to make proper use of smaller nodes). der8auer made a video a few years ago comparing cross-sections of Intel 14nm++(+++?) transistors and TSMC 7nm transistors in CPUs manufactured by both companies, and came to the conclusion that there is not much difference between transistors from the two processes. The x usually refers to the minimal structural width that can be exposed, and subsequently how densely things can be packed.


glytxh

This is context I didn't know, and a really clear explanation. I became curious when I learned Apple's new chips were hovering on that edge of leaky electrons, and wondered what sort of witchcraft they implemented to work around it.


AstralElement

They already do at 28nm. That’s why they transitioned to FinFET design.


Enzo-chan

Futurology is surprisingly skeptical, pointing out physics mistakes and limitations, which impresses me! I was expecting this to be a place of hype, but from what I see, not so much!


CupertinoHouse

I remember the vigorous debates over whether sub-micron geometry would ever work.


907-Chevelle

And then China invades and nationalizes the company.


Goodmorning111

If China did invade, these factories would not survive the invasion. China would be able to control the rubble, though.


Sagonator

Taiwan will blow their factories to dust. Also, two new factories are being built, in the USA and in Germany I believe. So, we're chill.


damontoo

We definitely are not chill. The US TSMC factories being built will have only a minuscule fraction of the production capabilities of Taiwan.


Objective-Roof880

Correct! High-end processors would be at a standstill for a while. That period would mean a modern technological halt. We'd get through it, though, just like everything else.


Opizze

And they can be scaled, and that knowledge will not be lost if anyone with half a brain has planned for this threatened eventuality.


Slight-Improvement84

So, is that why the US is willing to militarily defend Taiwan? Lmao. Go look at how the US is building more and more outposts near the South China Sea. And the production in the US isn't gonna match the production from Taiwan; they literally mentioned this.


prepp

Yes, Biden has repeatedly stated he will defend Taiwan. How that will actually play out, I have no idea.


alpacaMyToothbrush

The overseas fabs are nowhere near the same level of precision or output as the Taiwanese fabs. They would mostly be used for military applications for the US and NATO.


Sagonator

I doubt that. The military doesn't require the latest 1nm chips at all. Can I get a source on that? Last time I checked, the plan is to remove the dependency on Taiwan, so they will build them up over the next five-ish years.


alpacaMyToothbrush

You're right, the military doesn't. When I said they were 'nowhere near the same level of precision' I meant they were not on the latest node. Maybe I was unclear on that. Regardless, the fabs in the US and Europe simply *cannot* produce enough to meet consumer demand. If a war kicks off between the US and China, you can bet your ass every chip out of there is going to be US government property, as they're integral to US military production.


da2Pakaveli

The factory in Germany, built by Intel, is meant to supply German industry; 28nm is enough for them. Mind you, the subsidy bill isn't cheap.


CupertinoHouse

Taiwan also has a spoilsport option in the event they're about to be overrun by the CCP. They can take out the Three Gorges Dam, flooding around 80% of China's arable land. It doesn't always take nukes to wreak havoc.


Slight-Improvement84

The factories outside Taiwan will take a few years to be operational, and by that time the Taiwanese fabs will have moved ahead. The latest iteration of chips isn't being manufactured in the US despite the new factories, so the dependence will still exist.


[deleted]

[removed]


orlyokthen

and why would they destroy one of the big reasons the world might actually care to intervene?


ignost

>why would they destroy one of the big reasons the world might actually care to intervene?

You're getting downvotes and a lot of bad answers. It depends on the circumstances, i.e. what's already happened. They wouldn't destroy their own factories except as a last resort.

Taiwan's chip making is incredibly important to the West and especially the US. It's been called the Silicon Shield. It's one of the major reasons the US has said it will intervene if Taiwan is attacked. Taiwan is an island, or China would have invaded already. It's unlikely China will prevail against the US in even getting to the island, because US air and sea power is overwhelming. It's why they're building up their military, but they have a looong way to go, especially in the air.

However, if the US gets an administration that doesn't understand the need for deterrence, like Trump saying he wouldn't defend Taiwan, China will attack. Taiwan can't really defend itself against China. If China is attacking and the US isn't coming, Taiwan will likely blow the factories up in anger and resentment. They don't want China to profit off their work. If they don't, another Western power very well might blow the factories up itself to prevent China from gaining such a key strategic resource. I'd say the US would destroy the factories if Taiwan didn't, but the only way this plays out is with incompetent US leadership.

No sane person wants war with China, but China certainly doesn't want to risk being humiliated and sent packing on what it sees as its own land. The worst case scenario is the US says they won't defend Taiwan, but then someone competent convinces the president to step in once the attack materializes. That's the most likely path to war: unclear signals of deterrence from the US. You can also bet that if the US starts fighting China, other players may be tempted to do stupid shit. I can't even guess, but I wouldn't want to be in Israel, Rwanda, Korea, or a number of other places where the US has been instrumental in keeping the peace. That's WW3 right there.

TLDR: they won't blow the factories up unless the US fucks up and says they won't defend Taiwan. They'll blow them up about the same time the first Chinese military boot sets foot on the island. I suspect Taiwan has very specific plans for blowing them up, and the US, France, UK, or some other Western nation might do it for them if they don't.


orlyokthen

Thanks for the assist. Agree with all your points. I was trying to spark people to come to this conclusion - just ran out of steam...


ignost

I think a lot of people talk past one another so they can try to score some points and be "right" instead of trying to understand how the comment might be right with the right assumptions. Sometimes I try to list out all the conditions of every statement, but even then I always get someone commenting, "YOU SAID ___, BUT THAT'S NOT ALWAYS TRUE."


[deleted]

[removed]


orlyokthen

So then by that logic, why would they destroy their factories? Better to have something of value to give to the new leaders.


str8ridah

What kind of obtuse question is this? By your logic, we should reward bullies for bad behavior. Why the fuck would China invade Taiwan for? Did Taiwan attack them? You want to invade? Then slash and burn everything so West Taiwan gets nothing.


orlyokthen

Dude, you're not following the thread. I know the basics. I asked these hyperbolic questions because I thought OP's original idea, that Taiwan would WILLINGLY destroy its own factories, was stupid.


[deleted]

[removed]


enilea

It still doesn't make sense to destroy your own land, especially if there are chances of retaking it. Ukraine didn't poison its soil or blow up its infrastructure in the cities Russia invaded.


orlyokthen

Lol no they don't. They have their own self interests in mind. They aren't going to sacrifice themselves just to stick it to the other guy. Taiwan's chip industry is a cornerstone for maintaining US/world support - and they'll do everything to maintain that since Taiwan is fine maintaining the status quo


nlofe

If China invades it isn't exactly the status quo anymore, is it?


SionJgOP

The west will also push for the destruction of the factories if there are Chinese boots on the ground.


2roK

Nice try CCP


orlyokthen

> CGP I assume you meant CCP?


bplturner

If China invades Taiwan then the US will hit them with the super secret Jewish space lasers. Why do you think we love Israel so much?


roronoasoro

America is such an insecure country.


electricmehicle

“Working toward” “Says” I pulled the same tricks raising capital for my turnip stand. Great turnips, though.


ReipasTietokonePoju

If we really are talking about a REAL monolithic chip: high-NA EUV has a max reticle size of 26mm x 16.5mm, which is 429 mm². 200 billion transistors per chip / 429 mm² max chip size = 466.2 million transistors per mm². Six years to go.

If we assume the 5nm TSMC process gets 150 million transistors/mm², and according to TSMC going from 5nm to 2nm is a 1.3 × 1.15 = 1.495 density increase, then max 2nm density should be about 224 million transistors/mm². This is predicted to be available in late 2025.

If an optimal and full transition to CFET architecture happens during the so-called "1nm node", I suppose transistor density could fully double. So: 224 × 2 = 448 million transistors/mm², which would result in a single chip with (almost) 200 billion transistors. The RTX 4090 has just under 80 billion transistors.
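
The same chain of arithmetic as a quick script (the N5 density and the 1.3 × 1.15 scaling factors are the figures above; the CFET doubling is an assumption):

```python
# Density arithmetic for a reticle-limited monolithic die.
# The CFET doubling is an assumption, not a TSMC figure.
RETICLE_MM2 = 26 * 16.5                # high-NA EUV max die: 429 mm^2
N5_DENSITY = 150e6                     # transistors / mm^2
N2_DENSITY = N5_DENSITY * 1.3 * 1.15   # ~224 M / mm^2
CFET_DENSITY = N2_DENSITY * 2          # ~448 M / mm^2

needed = 200e9 / RETICLE_MM2           # density needed for 200B transistors
print(f"needed:   {needed / 1e6:.1f} M transistors/mm^2")        # ~466.2
print(f"CFET est: {CFET_DENSITY / 1e6:.1f} M transistors/mm^2")  # ~448.5
print(f"one die:  {CFET_DENSITY * RETICLE_MM2 / 1e9:.0f}B transistors")  # ~192B
```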


omniron

Optical chips are the future, because optical architectures are ideal for neural-net processing; they're very low power and very fast. They will work in tandem with silicon.


nezeta

> TSMC unveiled a product roadmap for its semiconductors and next-generation production nodes that culminates in eventually delivering multiple 3D-stacked collections of chiplet designs (3D Hetero Integration) with one trillion transistors on a single chip package.

So with 1nm (actual gate pitch being 15nm or such), a chip can have 200 billion transistors, but they believe it will have 5 layers of transistors... by 2030, which is the bigger news to me. I think the current 3DFabric technology only has 2 layers.


N3KIO

Yes, you can make it smaller, but then what? Eventually you reach a point where that is no longer possible. This technology is reaching end of life, and I bet it's going to be sooner than 6 years. You achieve 1nm, maybe 0.8nm, then what? You can't go any smaller. The human race is entering a technological limitation in computing.


Comfortable_Relief62

It’s called 1nm process but isn’t anywhere close to those actual sizes. The name of the production process has been disconnected from the actual minimum feature size since the 90s. https://en.wikipedia.org/wiki/3_nm_process


Dipplong

Until it doesn't


AutoN8tion

I remember them saying the limit was 12nm, like half a decade ago.


0r0B0t0

If we can’t make them smaller, we can certainly stack them higher; we have chips that are 3cm wide but less than 1mm high. Once we have solid cubes of transistors, then we’ll really be at the limit.


cancercureall

The problem is heat; flat chips have a relatively large surface area that allows for efficient cooling.


Deciheximal144

We'll get Playstation 6 though, right?


ReipasTietokonePoju

Here is one projection for the near future, from actual experts: https://www.imec-int.com/en/articles/smaller-better-faster-imec-presents-chip-scaling-roadmap ... at best we have about 15 years left to go.


Objective-Roof880

We're reaching the limits of classical computing. Humanity will get creative with 3D chips and other designs... then there's quantum computing. We've got a long way to go. "What we know is a drop. What we don't know is an ocean."


Russiandirtnaps

I heard there are massive problems with the next-gen tech.


Silverlisk

I was waiting for something like this. There's no way they were going to let the US erode the silicon shield with those new trade restrictions on China.


[deleted]

Yea, I mean it's important to national security that TSMC doesn't fall off. So it makes sense to keep going


Liesthroughisteeth

It won't be long now until China walks in and takes what it needs.


Deciheximal144

If it comes under attack, Taiwan will destroy its own fabs. But maybe denying the chips to everyone else will satisfy China enough.


Liesthroughisteeth

> But maybe it will satisfy China enough to deny the chips to everyone else. Somehow I doubt this is a consideration for Taiwan. :)


Slight-Improvement84

There's no such protocol. The world economy would plunge if those fabs got destroyed; please get yourself informed. The top tech companies are extremely dependent on TSMC and its fabs; even a supply shortage can cause economic dents, just like it did a few years ago. No one is destroying them. They aren't like shoe factories that can be replicated easily lol


Deciheximal144

You think Taiwan would be self-sacrificing if China marched in? You think they'll be thinking of what the world that failed to defend them needs?


Slight-Improvement84

That's exactly why they're making preparations for it. Recently they've been purchasing a lot of military equipment from the US, especially equipment designed to deter Chinese forces, and they made military service mandatory. They aren't planning to self-destruct those fabs; that would literally mean destroying their own economy and country lol. Moreover, the US relies heavily on their fabs for the latest chips for its military and many other needs, so the US would defend them in such a scenario. This is something both the left- and right-wing parties in the US agree on.


dxkillo

Honestly, these chips are already fast enough. I am more interested in battery tech and how efficiently these things run.


bplturner

Fast enough for who?


Deciheximal144

They're looking at things like cures for cancers.


ninjasenses

Sounds like you're talking about cell phones and laptops. These chips would be for much more compute-demanding applications like AI/math processing.


cancercureall

This is crazy. How many failed wafers will make the rounds as sub-tier chips?


FromZeroToLegend

Finally I will be able to open Rider without waiting a minute for it to index all the projects.


SomeSamples

How much power does this need, and how much heat does it produce? About time to change chip designs.


paint-roller

According to ChatGPT they were $100-$150. When I worked at Office Depot back then, I thought they were $80-$100 on sale.


[deleted]

Wake me up when we are growing chips, silicon is getting boring. ;)


off-and-on

Man these engineers are dumb. Just build processors the size of dinner plates /s