There's a saying: The feature size is equal to the number of fabs that can make it.
This number is actually a lot lower than previous estimates, which were close to double these figures.
Apple sells flagships as roughly 60%+ of its iPhones every year, which is close to 140M units, so the design cost works out to roughly $2 per unit. As far as I'm concerned, I don't think Qualcomm sells anywhere close to that number of SD845s, although the design cost can be reused in their mid-range phones at a later date.
And it basically means that from 7nm onwards, only a few companies can afford to use the leading node, namely Apple, Qualcomm, Huawei, Samsung, Nvidia and AMD. Other smaller players like MediaTek might have to wait longer before adopting it.
Of course, at those prices there will be fewer SoCs. That means, on average, each SoC will get more design wins, and therefore have higher volume...
It also means that each SoC will be under more pressure to cater to more customers, which means more dark silicon, which will reduce the advantage of smaller process sizes.
Wouldn't dark silicon help with both yield and heat though?
It's very difficult to save power with unused silicon on newer process nodes due to high leakage. You actually have to turn it off, and putting each IP into its own power domain just isn't feasible. Not good for power-sensitive applications.
We were being asked to look at a design where we literally didn't have anything in the middle of the chip in order to reduce local power (specifically due to current limitations of the bumps)!
Is there actually an advantage of the new process for those applications? Will any leading-edge company actually choose to use an older process because it performs better for them?
Yield makes sense. Surely heat issues are better addressed on purpose rather than incidentally?
From the article:
One of the factors that prevents smaller companies from designing FinFET chips is development cost. Some estimates put the average cost to develop an SoC at around $150 million in labor and IP licenses. With the N5 generation, these expenditures will rise to $200 – $250 million, according to EETAsia, which will limit the number of parties interested in using the tech.
Ouch. A $250M NRE cost for an SoC really puts pressure on guaranteeing the volume of the part in order to recover any sort of margin on a part like that.
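As a rough illustration of that pressure, here is the amortization math using the article's $250M NRE estimate; the shipped volumes are hypothetical:

```python
# Per-unit design (NRE) cost at different shipped volumes.
# $250M is the article's estimated N5 SoC development cost; volumes are assumed.
def nre_per_unit(nre_dollars, units):
    return nre_dollars / units

nre = 250e6
for units in (10e6, 50e6, 140e6):
    print(f"{units / 1e6:.0f}M units -> ${nre_per_unit(nre, units):.2f}/unit")
# 10M units -> $25.00/unit
# 50M units -> $5.00/unit
# 140M units -> $1.79/unit
```

At Apple-scale volume the NRE almost disappears into the unit cost; at 10M units it dominates it.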
Pretty cool to see EUV seem to finally emerge out of the 'someday' status into the 'some now, more next year' status.
Basically everybody is lying by 1 full step except Intel, who is only lying by 1 half step.
>"The naming of process nodes by different major manufacturers (TSMC, Intel, Samsung, GlobalFoundries) are partially direct lies (aka “marketing”) and not directly related to any measurable distance on a chip"
Ha! Love Wikipedia so much.
Awesome thanks, this is exactly the sort of thing I was looking for.
Best to compare densities, but apparently Intel's 10nm will be on par with, if not very close to or better than, the competition's 7nm. So it's not late to the party. Source: GN, IIRC.
Does someone have an up-to-date table comparing the different fabs' interpretations of "Xnm" process size? I recall them all having very different values (to the point where one might actually be a whole process size behind, but not admit it for marketing purposes).
Well, the solution is simple: use longer exposures. Of course, that has the unpleasant side effect of annihilating your throughput on extremely expensive machines...
EUV is truly amazing in that it works at all. The shot noise is simple to fix in theory - just get more photons. In practice, it's going to be really challenging to get more light out of the molten tin droplets.
IIRC the efficiency is hilariously bad. It's something like 100W of EUV light takes 1,000,000W (1MW) to produce.
Efficiency has increased something like 10x since the first experimental machines, yet even that got EUV only to the level of barely usable.
So you still need the multi-megawatt death ray as a primary light source for a fab scale use.
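Plugging in the rough numbers from upthread (both are the thread's estimates, not vendor specs), the wall-plug efficiency works out to:

```python
# Wall-plug efficiency of a laser-produced-plasma EUV source,
# using the thread's rough figures (not official vendor numbers).
euv_out_w = 100              # usable EUV power, assumed
electrical_in_w = 1_000_000  # drive-laser input power, assumed
print(f"{euv_out_w / electrical_in_w:.4%}")  # 0.0100%
```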
Free electron lasers didn't pan out?
Not yet; a 1-megawatt death ray is more economical than a freaking cyclotron + FEL combo for now.
I did an internship for one of the two big lithography optics companies back in 2005 in their R&D. Their engineers and scientists working on EUV were very cynical about its prospects. Some of that will have been because of switching from tech you understand (lenses) to something you don't (mirrors). But wow, did they NOT think that was going to work.
QED: it's hard to change company culture.
I've heard it's something like 2.5x longer exposure, which might negate the benefits of the newer process in the first place. : \
Why? The whole fab becomes 2.5x slower?
It becomes a bottleneck, yes.
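A toy model of why, assuming (hypothetically) that exposure time dominates the scanner cycle, with a made-up DUV-class baseline:

```python
# If exposure dominates scanner cycle time, throughput scales inversely
# with exposure length. Both numbers below are illustrative assumptions.
base_wph = 125          # assumed baseline throughput, wafers/hour
exposure_factor = 2.5   # the "2.5x longer exposure" mentioned upthread
print(base_wph / exposure_factor)  # 50.0
```

Losing more than half the output of a machine that expensive is exactly the bottleneck described above.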
Octa patterning? ;)
Time to buy ASML stock. ;-)
So is ASML a monopoly? I was curious some time ago, and couldn't find any other lithography machine manufacturer that could really compete. I had previously assumed Intel made their own stuff, but nope, ASML there as well.
> By contrast, TSMC’s second-generation 7 nm manufacturing technology (CLN7FF+, N7+) will use extreme ultraviolet lithography for four non-critical layers
As in, "like everyone else, we haven't really figured out EUV shot noise yet".
Nice marketing though, lolz.
That's sort of right: basically it's the first real production runs, when you think everything is ready and will work, but you haven't actually made a real product yet.
At the start of risk production, by definition there have been no customer designs put through the fab. Actually this is not quite true, Sun explained. Before the start of risk production the company has already run a number of shuttles with test chips from customers, so foundry and clients are already starting to wring out the more critical structures in the first customer designs. But these test shuttles are not full chips either.
So, it's a point in the process where the foundry is taking a chance, and the customer is also taking a chance.
I've also heard it applied to a customer design, e.g. the customer starts to volume ramp production before they finish fully testing the design.
As a customer, I've always heard it used in the latter sense: when we tape out (or re-spin after a bug) and think we're ready to go, we start some wafers down the line behind the first ones to reduce time to market.
>> "Risk Production means that a particular silicon wafer fabrication process has established baseline in terms of process recipes, device models, and design kits, and has passed standard wafer level reliability tests."
It sounds like a batch produced to certify the process for insurance and contract purposes.
From starting a batch to where products are actually testable takes >3 months. "Risk production" means that the foundry says, "okay, we think everything is fine now, but we make no guarantees that it will work". Customers then have the option of purchasing super expensive wafer starts that might or might not work. You might luck out and have lots of great next-gen chips months before your competitors, or alternatively you might get the GF100 thing happen to you where you get five working chips out of each $10k wafer.
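The gamble can be made concrete with a cost-per-good-die sketch (the $10k wafer is from the comment above; die counts and yields are made up):

```python
# Cost per working die under uncertain yield -- the risk in "risk production".
def cost_per_good_die(wafer_cost, dies_per_wafer, yield_fraction):
    good_dies = dies_per_wafer * yield_fraction
    return wafer_cost / good_dies

print(cost_per_good_die(10_000, 100, 0.80))  # mature process: 125.0
print(cost_per_good_die(10_000, 100, 0.05))  # GF100-style disaster: 2000.0
```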
When chips are manufactured there are process tests using test structures embedded in the pad ring and scribe area between the die images, as well as functional tests that are specific to the design.
Risk production just means the "customer" will accept the parts from the fab without the parts being fully tested.
In other words, the customer is accepting the "risk" of parts not working. When the fab settings and test flow are mature then the fab assumes responsibility for parts that do not pass the testing.
It's basically pre-calibration production. As wafers come off the line you learn that there are miscalibrations, parts that don't work perfectly, places where things are misaligned or maybe other minor issues that impact performance and yields.
You don't need to do any more R&D or major NRE, but you may need to do a lot of tweaking and tuning. Risk production is generally low volume because stuff doesn't work so well yet.
It's industry terminology, I think. I've seen it used at some companies I worked at doing IC development.
What is "risk production"? I can't help but read it as something like, "we even risk producing some chips at some point!", which I presume isn't quite it.
Not quite yet. When I worked for Zeiss in 2005, it was also the next big thing. Well. Maybe the next next big thing but still.
I remember when I was doing my master's thesis for ASML in Eindhoven in Holland (their headquarters). They had a separate building on the campus just for the development of EUV, which was the next new thing. This was in 2011 and they had already been working on it for a couple of years. Happy to see that chip manufacturers are now using this technology for their production. It looks like the bet paid off!
In general, the US maintains unofficial diplomatic relations with Taiwan, which means that although the US does not officially recognize the government of Taiwan, it is subject to different tariffs than those imposed on China. Similarly, tariffs imposed by Taiwan are set by the ROC, not the PRC, and therefore are totally distinct from this trade war.
More information can be found in this position statement from the Department of State:
Interesting, so in that case, does Taiwan benefit from this ongoing trade war, or is the heightened tension and fear of political/military escalation nullifying any benefit that might be felt?
> Taiwan benefit from this ongoing trade war
Taiwan is hurt indirectly. Taiwan has invested heavily in mainland China and manufactures there.
Many Chinese products exported to the US have very low Chinese value added. A Chinese manufacturer buys Taiwanese, Japanese, or US-made components, assembles them, and sells them to the US.
For many high-tech products, the Chinese value added may be just 20% of the price of the product. Tariffs on Chinese goods therefore hit everyone in the value chain, even the US.
Among companies, those with much bigger stakes in China are suffering (e.g., Foxconn). But there are also companies benefiting from the trade war, since they don't incur the extra costs their China-based competitors do. Companies like TSMC are barely affected since they have a reasonably low share of their production in China. Countries like India and other SEA nations benefit in general, because global supply chains are diversifying and they are attracting more manufacturing investment now.
That is not strictly true, because there isn't really an alternative to Foxconn in size and scale. But in other industries, where there are lots of alternatives around the world, the heat is quite intense.
Most of the world considers Taiwan (a.k.a. "Republic of China") to be a distinct country.
The mainland Chinese government disagrees.
I don't know about these particular tariffs, but I believe the U.S. would not consider tariffs aimed at (mainland) China to apply to Taiwan.
The RoC consider themselves the rightful "China" too. Some Taiwanese politicians have advocated declaring themselves an independent country but that's never been the official government position.
> The RoC consider themselves the rightful "China" too.
That's a minority/extreme opinion which isn't remotely prevalent even among those who feel an affinity with China. It used to be the 'politically correct' belief held by the former authoritarian ruling party, but it's no longer considered a mainstream worldview after Taiwan's democratic modernization.
Eh? Chen Shuibian did declare independence, like 10 times
Most of the world? Not in any official sense.
> Most of the world? Not in any official sense.
There are things that are de facto true, even if they're not de jure true.
One thing that can be confusing/frustrating to many people about geopolitics is that there's a substantial body of "agreed upon" fiction. These are positions that are held officially by numerous, competing state actors that, while factually untrue, hold value to each.
The status of Taiwan is one. Israel's nuclear status is another famous example.
While the common sense reaction is "everybody knows it, why pretend?" those fictions generally exist because they help maintain delicate balances and order between parties that are generally at odds with one another.
What it comes down to is that it'd be lovely if we could all just say openly and outright that mainland China and Taiwan are separate entities, because, in practice, they are. But it also works out pretty well for the global community that the PRC and ROC both claim to be the real China while everyone does business with everyone else anyway. So while it's true that they're distinct, and it's also true that everyone knows that to be the case, everyone also wins out to varying degrees by officially pretending that it isn't the case.
Since both Chinas claim the entirety of China including Taiwan, no government can officially recognise both. But in practice that means governments recognise one China officially and deal with the other unofficially. Taiwan obviously is not under the PRC government's control and the US tariffs don't apply to it. The US is a close ally of Taiwan.
> The US is a close ally of Taiwan
Heh. Exactly how close an ally is yet to be tested in any meaningful way. No U.S. aircraft carrier group has transited the Taiwan Strait since 2007. Some would forecast that no U.S. carrier group will transit - or finish transiting - that strait ever again.
I wouldn't be so sure, sounds like the kind of thing El Presidente would do.
It's up to the US to decide who to put its own import tariffs on, so the US can just say "imports from China" but exclude imports directly from Taiwan.
China may be able to force import tariffs on goods imported by Taiwan (idk, not an expert), but I bet Taiwan imports relatively little directly from the US?
Just after World War II, the Communists took over China from the previous government. The previous government ran to the island of Taiwan, which had been part of China. If I understand correctly, you now have two different governments, one on the mainland and one on Taiwan, both of which claim to be the government of China, and both of which claim to rule all of China, and both of which claim that there is only one China.
Other nations play pretend to various degrees, trying to placate (mainland) China without swallowing too much of their own pride in not being China's lapdogs. But they're effectively two separate countries.
So: The tariffs being applied to China do not apply to Taiwan. However, Trump seems to be throwing tariffs at several targets lately; he may or may not have applied some to Taiwan.
The current government of Taiwan does not claim sovereignty over the mainland. They still call themselves "Republic of China" though. Indeed most countries do not recognize Taiwan to placate the mainland regime which adheres to the "One China" policy of threatening war over official separation by Taiwan.
Sometimes you wonder how things can stay broken for so long.
> Sometimes you wonder how things can stay broken for so long.
Taiwan doesn't want the mainland's government, and the mainland doesn't want Taiwan's government.
If Hong Kong had been militarily defensible, it'd be in the same situation.
> If Hong Kong had been militarily defensible, it'd be in the same situation.
If the UK had really wanted to keep Hong Kong, could it have militarily defended it? No doubt the geography is less than favourable; but, keeping in mind that UK and PRC are nuclear powers, would regaining Hong Kong be worth the risk of war between two nuclear states? PRC might well have decided it wasn't worth that risk.
But in reality, the UK had no real interest in keeping Hong Kong, and were more than happy to hand it over to PRC. (Yes, there was a treaty saying they had to give back a big chunk after 99 years, but if the UK really wanted to get out of that, they could have found a solution – e.g. if they hadn't switched recognition from ROC to PRC, they could have asked ROC for a lease extension. Or, they could have made their switch of recognition from ROC to PRC conditional on PRC granting them perpetual sovereignty over the whole of Hong Kong.)
> If the UK had really wanted to keep Hong Kong, could it have militarily defended it?
The Chinese would not have to have fired a shot to make HK surrender. HK was then a city with more than 6 million citizens, which was entirely dependent on water imported from the mainland. The treaty that allowed the importation of all that water was only set until the end of the lease, and the Chinese very clearly stated that they had no interest in extending it.
Also, earlier, when HK expressed some interest in investing in desalination, the Chinese threatened to immediately cut off the water supply if it seemed like the HK local government would actually try to build up enough water resources to make HK not dependent on the mainland.
The result of shutting down the taps would have been millions dead within a week. Keeping HK British would have required a massive invasion of China by the UK, just to secure water. I don't see that as very likely.
Hong Kong only began importing water from PRC in 1960. This was as a solution to Hong Kong's water shortages. However, there were other possible solutions – e.g., nuclear powered desalination. Importing water from PRC was the cheapest option, and the UK was more interested in saving money than maximising independence from PRC. If they had prioritised water independence, they would have never begun water imports.
Hong Kong also had its own desalination plant from 1975 to 1981. Why was it shut down: because of pressure from the PRC government, or because it was significantly more expensive than importing water? I don't know for sure, but my impression is the latter was a much bigger factor than the former.
[getting far from the TSMC topic]
If the UK had thought this through, they would have set up real democracy in Hong Kong a generation before the lease ended, instead of doing the colonial "high commissioner" thing. A legitimate Hong Kong government would have survived a lot longer with its people's support.
One could argue in the opposite direction: a real democracy would have been (and still is) an extreme threat to the CCP (Chinese Communist Party) which is an anti-democratic cadre party in the Leninist tradition, and has explicitly stated that it wants to remain one for all eternity. One of the narratives of justification that the CCP uses is that Chinese people neither want nor need democracy.
A democratic HK would squarely falsify this claim.
Indeed, I'd argue that the single most urgent reason (but not the only one) for China's bullying of Taiwan is precisely this: Taiwan is a thriving democratic and Chinese society.
The PRC kept them from doing that.
Chris Patten tried to introduce democracy in 1992, with less than 5 years to go before the agreed handover, and that angered Beijing yes. And then, when the handover happened, Beijing reversed Patten's reforms.
But if the UK had done the same thing in 1980 or 1970 or 1960 or 1950? I don't think the PRC could have done anything about it.
The UK never really cared about Hong Kong democracy. If they had, they would have implemented democratic reforms decades ago, instead of leaving it until the very last minute.
Not to put too fine a point on it, but the British (colonial historically and more recently) have always seemed to find democracy a bit... inconvenient.
Both for their colonies and for themselves.
Russ Roberts did a podcast this week on HK and it includes a discussion about the handover.
China did not control the island before the refugees from China took it over; Japan controlled the island. After the war, Japan did hand control of the island to China, but the mainland government never got to control it, either before or after the war.
This is correct. Taiwan was part of Japan before the war.
When it comes to China, it's best to think in terms of governments rather than land mass. Both Chinese governments claim the entirety of the Chinese land mass. Trump's tariffs are levied against the government of the People's Republic of China. Taiwan's government is the Republic of China, which is not a target of the tariffs.
Just to help me clarify, the tariffs that are being levied between China and America, do or do not also apply to Taiwan? I’ve never fully understood how the separation of Taiwan and China is made since it seems to be such a touchy subject.
Most fundamental technologies take 30 or 40 years to commercialize, if you look at everything from flat screen TVs to digital cameras. The height of the .com bubble was 30 years after the invention of the Internet, and most .com companies didn't actually become profitable until another ten years after that.
I figured they'd have done X-ray etching long before now, but somehow they squeezed a lot of life out of UV.
My understanding is that with X-ray etching/lithography it's significantly more difficult to produce a coherent beam, and also to focus it through a masking plate so you can etch a whole wafer at one time. That's the main reason they've been working on EUV: they can produce a coherent pulse and focus it more easily. For X-rays you have to turn the mask into a diffraction grating, and that ends up much more difficult to produce (probably not impossible, but much more difficult).
EUV has been on the roadmap for what, 15, 20 years? It must be amazing for your work to finally pay off after all that time.
At some level this situation was inevitable: as more and more companies offload their fab capability to TSMC, that gives TSMC more and more money to invest in boosting its capability, and of course while its margins are a lot lower than Intel's, with enough money that advantage goes away as well.
Intel's instruction set architecture dominance will keep it going for a long time, but ultimately it would probably make sense for Intel to spin off its fabs into the US equivalent of TSMC and capture more margin from its designs rather than its process.
Intel's advantage over AMD has been its (1) manufacturing technology and (2) architecture, especially since Sandy Bridge.
It turns out that Intel's architecture cheated by skipping privilege checks during speculation, while AMD designs did the correct thing. SMT turns out to be a security nightmare, but in any event AMD now offers the same capability. Either way SMT is another lost advantage.
That leaves Intel's process technology and vertical integration. If it outsources manufacturing it will have effectively ceded these completely, meaning Intel will have lost all of its competitive advantages.
I suppose Intel's human capital might be a competitive advantage, but their missteps cast serious doubt on that.
TSMC's gross margin is something like 50% IIRC, so it's not even that bad.
50.9% on $32B for TSMC vs 62.3% on $62.7B for Intel
That is closer than I expected it to be. In terms of actual cash (if you are wondering) Intel ended last year with $22B more of it than TSMC did.
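For reference, the gross profit implied by those figures (revenue in billions, margins as quoted above):

```python
# Gross profit = gross margin x revenue, using the numbers quoted upthread.
tsmc_gross = 0.509 * 32.0    # ~$16.3B
intel_gross = 0.623 * 62.7   # ~$39.1B
print(round(tsmc_gross, 1), round(intel_gross, 1))  # 16.3 39.1
```

So on gross profit in absolute dollars, Intel still leads by more than 2x.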
Would it not be better to compare Net Income?
TSMC ~ $11B
Intel ~ $9.5B
TSMC has become more profitable than Intel since 2016.
[Disclaimer I own a small amount of Intel in my IRA account]
If you have ever been in an investment club (basically friends who get together and exchange ideas on what stocks to invest in) you have probably had the 'revenue/income' discussion.
Amazon is the poster child for 'net income doesn't matter' as it has reinvested most of its margin in itself over the years to grow.
So it can go either way.
TSMC does pay a dividend which Intel doesn't so from a financial trading/investing point of view you could argue that TSMC is a better stock to hold long as it will generate income.
From the 'future of the business' point of view re-investing more would seem to be the wiser strategy. Time will tell.
The problem is I haven't seen any major results from Intel's investment over the past 4+ years, and I have no idea where all of their R&D has gone. TSMC has been building more fabs every single year, with higher expenditure, while Intel has had little to no fab capacity expansion (apart from responding to the recent 14nm shortage). In terms of CAPEX, TSMC is second only to Samsung, but Samsung is the world's largest DRAM and NAND maker.
In terms of roadmap, TSMC has been extremely open about their progress all the way down to 3nm and has been executing to perfection. They take pride in being Apple's pure-play fab.
I do believe in reinvesting for the future; the problem is I have trouble seeing that future from Intel, and the fact that their management has been lying for far too long doesn't inspire confidence.
Time will tell.
Intel sees the writing on the wall and has been making progress on opening up access to their fabs.
From what I've heard, their isolation has made their tech largely incompatible with external workflows, similar to how mainframes are technically computers but the software is alien to most people. They don't even use the same names for common things.
That idea of Intel spinning off its fabs and developing its own TSMC sounds great! What other factors have prevented someone else from developing a more viable TSMC competitor in the US?
The idea isn't exactly unique, in that AMD did it and called it GlobalFoundries (GF). Then in 2015 IBM divested their foundry capabilities and sold them to GF as well. It hasn't been a stellar experiment, which is largely laid at the feet of the challenge of managing that business.
There really should be some kind of ISO standard for node naming.
I guarantee this "5nm" will be just as misleading as their "7nm".
The only things that matter are wafer price, yield, transistor count and TDP.
So node scaling used to be standardized (before ~28nm). It very literally tracked MOSFET density (28 nm was one MOSFET plus the gap to the next MOSFET). The IEEE had a roadmap for what to expect out of components of size X, and when those components would arrive, based on historical data.
But when we hit 20/18/16/14nm, that went out the window and it became a marketing term rather than a literal description. A lot of this was driven by moving to FinFETs (which are really still MOSFETs), since they have lower leakage at smaller sizes; but they also aren't planar, which makes generalizing from a single node number to density a bit wonky.
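One density proxy that sidesteps the marketing names is the product of contacted poly pitch (CPP) and minimum metal pitch (MMP); a smaller product roughly means denser logic. The pitch values below are commonly reported third-party estimates, not official vendor figures:

```python
# Crude node comparison: CPP x MMP as a cell-area proxy.
# Pitches in nm are third-party reported estimates (treat as approximate).
nodes = {
    "Intel 10nm": (54, 36),
    "TSMC N7":    (57, 40),
}
for name, (cpp, mmp) in nodes.items():
    print(f"{name}: {cpp * mmp} nm^2")
# Intel 10nm: 1944 nm^2
# TSMC N7: 2280 nm^2
```

By this crude metric, Intel's "10nm" is at least in the same class as TSMC's "7nm", which is the point made elsewhere in the thread.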
Yeah, for consumers these days the physical size really doesn't matter much at all. ICs are small enough; nobody's looking forward to 5nm because of the physical size. What matters is cost and electric power usage.
The density improvements from 5nm absolutely matter and that comes from being smaller. If I can fit the functionality from two chips on one chip I save significant space on my board. Especially if the size of my chip is defined by the beachfront available for escaping IO. Not to mention all the supporting components required for each chip. And with half the chip turned off I can finally handle the power/heat issues.
We care about the TDP, which can be improved in various ways, not just moving to 5nm. Physical size in a phone or whatever is small enough, we don't care about smaller chips. The amount of space saved is negligible.
It's never been just about phones. These chips go into all kinds of products, some of them with real thermal and size restrictions.
TDP = thermals, which I did mention. You can reduce thermals in other ways, not just smaller features.
Size is small enough for anything consumers care about, even watches.
> Only thing that matters is wafer price, yield, transistor count and TDP.
Wouldn't 5 nm allow shorter wire traces, which would reduce another limiting factor in computational speed?
The fundamental limit of clock speed is power draw. As clocks increase, the wattage-frequency relation goes from
frequency = Constant * Power Draw
to
frequency = Power Draw * Power Draw
I think you have that backwards, unless you mean to say that the power draw is proportional to the square root of frequency at higher clocks.
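The usual textbook form, for reference: dynamic power is P = alpha * C * V^2 * f, and since supply voltage has to rise roughly with frequency to meet timing, power grows roughly with the cube of frequency. A sketch with invented component values:

```python
# Dynamic power P = alpha * C * V^2 * f. All component values are invented
# for illustration; only the scaling behavior matters.
def dynamic_power(alpha, capacitance_f, volts, freq_hz):
    return alpha * capacitance_f * volts**2 * freq_hz

base = dynamic_power(0.2, 1e-9, 1.00, 2.0e9)
boost = dynamic_power(0.2, 1e-9, 1.25, 2.5e9)  # +25% f, assume +25% V needed
print(round(boost / base, 3))  # 1.953: ~2x the power for 1.25x the clock
```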
Well, you can fight the power monster with dark silicon.
Packing things in more tightly lets you spend less time in transit, which might let you squeeze more gates into a cycle, or do the same things a little faster.
I always wondered why... I mean, I understand the processes will be different for each vendor, but shouldn't 1nm be the same for all vendors?
They might use different technologies to achieve it, but it should be a unit of measurement...
What is the actual difference between Intel and everybody else, such that 10 Intel nm < 7 TSMC nm?
Maybe it's easy to find on Google, but I can't find the correct combination of keywords...
Short answer: there are many features with sizes in different dimensions (x/y/z), so which dimension do you measure or average to come up with a final "nm" rating for your CPU?
I wonder when the rapid erosion of Intel's technical lead in fabrication technology will be reflected in a decreasing market cap.
It was rumored TSMC's 7nm wouldn't be the same as Intel's 10nm. But if they reach ~5nm then they'll likely be knocking on Intel's 10nm door. Combined with the double whammy of 7nm Epyc servers from AMD, it seems like Intel's technical offerings are rapidly getting commoditized by the rest of the market.
> You need almost as sophisticated equipment for verification as needed for fabrication. The fabrication companies are the only ones that have that capability.
That's not true. Imaging ICs (or other structures) down to sub-nm resolution is much much easier than making them.
Most physics labs will have no problem doing that.
The various imaging techniques (AFM, STM, etc.) used can routinely achieve sufficient resolution to verify these claims with fairly moderate amounts of tweaking and cost.
STM is often used to image things down to atomic resolution.
> That's not true. Imaging ICs (or other structures) down to sub-nm resolution is much much easier than making them. Most physics labs will have no problem doing that.
For example, see Chipworks's analysis of Intel's 14nm process:
The link you shared only supports my point: either the claims are false, or they're not verifiable independently.
"As yet we don’t have any detailed TEM imaging to look at the transistors or fins in close-up"
And that's 14nm from years ago, not the 7nm or 5nm being claimed these days.
> ... almost as sophisticated equipment for verification as needed for fabrication ...
> ... imaging techniques (AFM, STM, etc.) ... with ... tweaking ...
I don't see the 'not true' part here. (Also as I responded to the other commenter who shared a third party link, even 14nm could not be verified in 2014: "As yet we don’t have any detailed TEM imaging to look at the transistors or fins in close-up").
What do 7nm or 5nm even mean at this point? And who verifies the claims made by the manufacturer? You need almost as sophisticated equipment for verification as for fabrication, and the fabrication companies are the only ones that have that capability.
Node ratings are a marketing joke at this point. Actually, they most likely became a joke more than 10 years ago.
For EUV, yeah. That shit's tricky.
The wiki page on EUV is uncharacteristically excellent.
> Since the optics already absorbs 96% of the EUV light, [...]
WTF!? Why? Is there any hope for better optics?
Mirrors on EUV machines already exceed the reflectivity of any mirrors ever produced at those wavelengths.
That's why you have to build fabs around a multi-megawatt laser source.
The only known hope going forward is photonic-crystal mirrors.
But why does 96% of the energy get absorbed?
Each mirror absorbs about 30% of the EUV incident upon it (and the mask counts as a mirror). With about 9 total mirrors on the path, you're down to 0.7^9 ~= 4% of the initial light left.
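The parent's arithmetic, spelled out (30% loss per reflection and 9 reflections are the thread's approximate numbers):

```python
# Cumulative transmission through the EUV mirror train (mask included).
reflectivity = 0.70   # per-mirror, approximate
reflections = 9
print(f"{reflectivity ** reflections:.1%}")  # 4.0%
```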
Because everything absorbs EUV light. Every element out there has a bandgap small enough that EUV photons get absorbed.
Is the die mask one of the biggest challenges for the 5/7 nm chips? I would have assumed it would have been the uniformity of the wafer process.
Maybe it's because hiring Taiwanese R&D people costs way less than hiring in the U.S. I live in Taiwan and I have friends with master's degrees in EE who have worked for TSMC for 5-10 years, and none of them gets close to $100k a year.
Normally that could be a factor, but look at how insignificant it is.
Intel spent $13 billion on R&D last year, and still had $26 billion left over in EBITDA profit. You can argue that a different profit number than EBITDA should be used, but pick any one you want; they're all massive.
Say your friends make a $50k salary (just for argument's sake, and let's not calculate a fully burdened salary, for simplicity).
If 1,000 Intel engineers were averaging $150k in the US, the difference would only be $100MM.
That is so insignificant to them, they could use it as firewood just for fun. So it can't be higher labor costs.
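The arithmetic above, for concreteness (all salary and headcount figures are the thread's assumptions, not payroll data):

```python
# Salary gap x headcount, compared against Intel's annual R&D spend.
engineers = 1_000
us_salary, tw_salary = 150_000, 50_000   # assumed, not payroll data
gap = engineers * (us_salary - tw_salary)
rnd_budget = 13e9                        # Intel R&D, from the comment above
print(f"${gap / 1e6:.0f}M gap = {gap / rnd_budget:.2%} of R&D spend")
# $100M gap = 0.77% of R&D spend
```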
My best guess is they made a few "big bets" that went wrong and cost them dearly. Or there's always the old standby that the wrong leadership is in place at the EVP/VP levels and it's had a negative ripple effect.
$100K? I think Ruby on Rails developers get $130K a year?
Sometimes I think the relationship between job difficulty and salary doesn't make any sense at all.
How "not close" is that? $70k or $80k lets you live like a king in Hsinchu or Tainan.
Why is Intel seemingly so behind?
Even if you account for feature size marketing and nomenclature differences, it doesn’t look good.
And the $250MM mentioned, pardon my language, but they crap that much for breakfast. I don't mean to overstate it; look at the financials: the net profit available to them after existing R&D investments is staggering.
Samsung is going to mass production with their EUV in Q1 or Q2 2019. I'm also speculating that Qualcomm's next SD will be made by Samsung LSI. How does it compare to TSMC?