Intel formally kills its tick-tock approach to processor development
Nearly 10 years ago, Intel formally unveiled the design and manufacturing cadence it would use for its microprocessors. Before 2007, there was no exact, predictable alignment between the deployment of new manufacturing techniques at smaller process nodes and the debut of new architectures. From 2007 forward, Intel followed a distinct cadence: New process nodes would be designated as "ticks," and new architectures built on the same process node would be called "tocks."
This approach ensured Intel was never attempting to build a brand-new CPU architecture at the same time it ramped a new process node, and gave the company almost a decade of steady (if slowing) progress. That era is over.
In its recent 10-K filing, Intel stated the following:
As part of our R&D efforts, we plan to introduce a new Intel Core microarchitecture for desktops, notebooks (including Ultrabook devices and 2 in 1 systems), and Intel Xeon processors on a regular cadence. We expect to lengthen the amount of time we will utilize our 14nm and our next generation 10nm process technologies, further optimizing our products and process technologies while meeting the yearly market cadence for product introductions.
The company also released an image to show the difference between the old tick-tock model and the new system:
Intel goes on to state that it intends to introduce multiple product families at future nodes, with advances integrated into those architectures in ways that don't depend on node transitions.
We also plan to introduce a third 14nm product, code-named "Kaby Lake." This product will have key performance enhancements as compared to our 6th generation Intel Core processor family. We are also developing 10nm manufacturing process technology, our next-generation process technology.
We have continued expanding on the advances anticipated by Moore's Law by bringing new capabilities into silicon and producing new products optimized for a wider variety of applications. We expect these advances will result in a significant reduction in transistor leakage, lower active power, and an increase in transistor density to enable more smaller form factors, such as powerful, feature-rich phones and tablets with a longer battery life.
In other words, Intel believes it can offer improvements in different areas that correspond to better user experiences, and it may be right.
The evidence of iteration
In recent years, ARM, AMD, and Nvidia have all introduced architectural improvements that substantially improved power consumption and performance despite being built on the same node.
In AMD's case, Carrizo offers substantially better CPU and GPU performance at low TDP compared to the Kaveri APU it replaces. While it's true that AMD's APUs often aren't shown or priced to best effect by OEM system designs, Carrizo is a notable improvement over AMD's previous offerings. Part of this is likely due to AMD's decision to use Adaptive Voltage and Frequency Scaling (AVFS) instead of the Dynamic Voltage and Frequency Scaling (DVFS) that Intel (and AMD, historically) have both relied on. More data on AVFS vs. DVFS can be found here, if you're curious about the technical approach and why AMD adopted it.
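The basic difference between the two schemes can be sketched in a few lines. The sketch below is purely illustrative (real DVFS/AVFS logic lives in firmware and on-die hardware, and every number here is invented): DVFS pairs each frequency with a fixed, worst-case voltage, while AVFS trims that voltage using on-die sensor feedback, reclaiming guard-band a good die doesn't need.

```python
# Toy contrast between DVFS and AVFS. All voltages and margins are
# invented for the example; they are not real silicon figures.

# DVFS: each frequency has a pre-characterized voltage that includes
# enough guard-band to cover the worst die in the product line.
DVFS_TABLE = {
    1.0e9: 0.90,   # 1.0 GHz -> 0.90 V
    2.0e9: 1.05,   # 2.0 GHz -> 1.05 V
    3.0e9: 1.20,   # 3.0 GHz -> 1.20 V
}

def dvfs_voltage(freq_hz: float) -> float:
    """Look up the fixed, worst-case voltage for a frequency."""
    return DVFS_TABLE[freq_hz]

def avfs_voltage(freq_hz: float, measured_margin_v: float) -> float:
    """Start from the same table, then subtract the excess margin that
    on-die sensors report for this particular die at this moment."""
    return DVFS_TABLE[freq_hz] - measured_margin_v

# A die whose sensors report 0.05 V of unneeded guard-band runs at a
# lower voltage (and therefore lower power) at the same 2.0 GHz clock:
print(dvfs_voltage(2.0e9))
print(avfs_voltage(2.0e9, 0.05))
```

Since dynamic power scales roughly with voltage squared, even a small per-die voltage trim compounds into a meaningful efficiency gain, which is the appeal of the adaptive approach.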
ARM tends to be a bit more closed-mouthed than Intel when it comes to aspects of CPU design, but its own public slides show how Cortex-A9 performance evolved over time.
ARM claims that its architectural enhancements to the Cortex-A9 improved its per-clock performance by nearly 50%, over and above any frequency enhancements. When combined with improvements via process node and CPU clock, the final chip was nearly 3x faster than the first models that debuted on 40nm.
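The arithmetic behind that claim is simple: per-clock (IPC) gains and frequency gains multiply. Taking the ~50% IPC improvement from the article and assuming, for illustration, a 2x clock uplift from process and binning (the article doesn't give the exact frequency figure), the combined gain lands right at ~3x:

```python
# Back-of-the-envelope check of the Cortex-A9 improvement claim.
# ipc_gain comes from the article (~50% per-clock improvement);
# clock_gain is an assumed 2x frequency uplift used for illustration.
ipc_gain = 1.5
clock_gain = 2.0

# Performance gains from IPC and frequency are multiplicative.
overall = ipc_gain * clock_gain
print(overall)  # 3.0 -> the "nearly 3x faster" figure
```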
Finally, there's Nvidia. While we hesitate to draw too much from GPU manufacturing, given the vast differences between CPU and GPU architectures, Nvidia's Maxwell was a huge leap forward in performance-per-watt over and above what Kepler offered. The end result was higher frame rates and a more efficient architecture, all while staying on TSMC's mature 28nm process.
Each of these companies took a different route to improving power efficiency. AMD introduced new types of power gating and binning while simultaneously making architectural improvements. ARM took an iterative approach to fixing performance-sapping issues without declaring the later revisions of the Cortex-A9 to be different processors. Nvidia built a new architecture on an existing, mature node, blending some approaches it had used with Fermi with its existing Kepler architecture, then adding better color compression and other enhancements to the GPU stack.
Intel's 10-K goes on to mention other long-term investments the company is making into EUV, and the firm has previously discussed how it sees a path forward to 10nm and below without relying on the next-generation lithography system. The firm isn't giving up on process node scaling; it's simply not going to try to hit the same cadence that it used to.
The counter-argument
There is, however, a counter-argument to the optimistic scenario I just laid out. Unlike AMD or ARM, Intel's x86 processor designs are extremely mature and highly optimized. Carrizo may introduce some innovative power management techniques, but AMD had to find a way to take Bulldozer, an architecture designed for high clock speeds and high TDPs, and stuff it into a 15W power envelope. The company's engineers deserve a month in Tahiti for managing that feat at all, but it's no surprise it took the firm multiple iterations on the same process node to do it. ARM's Cortex-A9 was a fabled mobile processor in its day, but it was also arguably ARM's first stab at a laptop/desktop-capable CPU core. There was going to be low-hanging fruit to fix, and ARM, to its credit, fixed it.
Nvidia's Maxwell GPU might demonstrate the performance and efficiency gains available from advances to one's graphics architecture, but Intel has already made some significant strides in this area. Modern GPU designs also aren't as mature as their CPU counterparts: Intel has been building out-of-order CPUs since the Pentium Pro in 1995, while the first programmable GPU debuted in the Xbox 360 in 2005 (AMD) or Nvidia's G80 (2006), depending on how you want to count.
This view would argue that the small clock-for-clock performance improvements of Haswell and Skylake over their predecessors reflect neither laziness nor market abuse, but a more fundamental truth: Intel is currently building the best, most power-optimized processor it knows how to build, with no near-term breakthrough technology other than process improvements to push the envelope further.
Whichever view is more precise, it's not particularly surprising to see tick-tock passing into history. As we've covered at length over the past few years, it's getting harder and harder to hit new node targets, and Intel typically sets density and gate length requirements that are harder to hit than its competitors'. Even now, Intel's 14nm node is denser than the hybrid 14/20nm approach offered by Samsung and TSMC, and TSMC's upcoming 10nm node is expected only to match densities Intel achieved years earlier. The question is, does leading the industry in such metrics actually give Intel enough of an advantage to justify the cost?
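To see why node names alone don't settle the density question, it helps to recall the textbook scaling rule: transistor density goes roughly as the inverse square of the linear feature size, so a full-node shrink (about 0.7x linear) ideally doubles density. The sketch below applies only that idealized rule; real nodes deviate from it, and marketing names like "14nm" no longer correspond to a single literal dimension, which is exactly why Intel's "14nm" can be denser than a competitor's hybrid "14/20nm" process.

```python
# Idealized (textbook) density scaling between two nodes.
# Real processes deviate from this, and node names are partly
# marketing, so treat these as rough upper bounds, not measurements.
def ideal_density_gain(old_nm: float, new_nm: float) -> float:
    """Density scales as the inverse square of the linear feature size."""
    return (old_nm / new_nm) ** 2

print(ideal_density_gain(20, 14))  # ~2.04x for a 20nm -> 14nm shrink
print(ideal_density_gain(14, 10))  # ~1.96x for a 14nm -> 10nm shrink
```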
Intel's decision to transition away from the tick-tock model is a tacit recognition that the future of semiconductors and their continued evolution is considerably murkier than it used to be. The company is retrenching around a more conservative model of future progress and betting it can find complementary technologies and approaches to continue delivering steady improvement. Given the time lag in semiconductor design, it'll be a year or two before we know if this approach worked.
Fortunately, tick-tock continues to work beautifully in rather different contexts.
Source: https://www.extremetech.com/extreme/225353-intel-formally-kills-its-tick-tock-approach-to-processor-development