
NXP MRAM Automotive Chips And Lam’s Semiverse

Sep 06, 2023

Silicon Wafers and Microcircuits

NXP announced embedded magnetic random-access memory (MRAM), jointly developed with TSMC in TSMC's 16nm FinFET technology, that will be used in NXP's S32 automotive processors. NXP emphasizes that it is doing this to support frequent software upgrades for smart automobiles. These software updates allow carmakers to roll out new comfort, safety and convenience features via over-the-air (OTA) updates, extending the life of the vehicle and enhancing its functionality, appeal and profitability. The image below shows how NXP uses the S32 processors to enhance vehicles.

Applications for NXP S32 Processor Platform

The MRAM replaces the NOR flash that is often used for code storage in embedded devices; embedded NOR flash has scaling limits that prevent devices with features smaller than about 28nm. The press release goes on to say that, "MRAM can update 20MB of code in ~3 seconds compared to flash memories that take about 1 minute, minimizing the downtime associated with software updates and enabling carmakers to eliminate bottlenecks that arise from long module programming times. Moreover, MRAM provides a highly reliable technology for automotive mission profiles by offering up to one million update cycles, a level of endurance 10x greater than flash and other emerging memory technologies."
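The press-release figures imply roughly a 20x effective code-programming throughput advantage for MRAM. A quick back-of-the-envelope check (the 20MB, ~3 second and ~1 minute numbers are from the release; the derived rates are approximations):

```python
# Figures quoted in the press release: 20 MB of code in ~3 s (MRAM)
# versus ~60 s (NOR flash).
CODE_MB = 20
MRAM_SECONDS = 3
FLASH_SECONDS = 60

mram_mb_per_s = CODE_MB / MRAM_SECONDS    # effective MRAM update rate
flash_mb_per_s = CODE_MB / FLASH_SECONDS  # effective flash update rate
speedup = FLASH_SECONDS / MRAM_SECONDS    # module-programming speedup

print(f"MRAM:  {mram_mb_per_s:.1f} MB/s")   # ~6.7 MB/s
print(f"Flash: {flash_mb_per_s:.2f} MB/s")  # ~0.33 MB/s
print(f"Speedup: {speedup:.0f}x")           # 20x
```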

TSMC's 16nm FinFET embedded MRAM technology exceeds the requirements of automotive applications with its one-million-cycle endurance, support for solder reflow, and 20-year data retention at 150°C. Test vehicle samples are in evaluation, and customer availability for vehicles using this technology is expected in early 2025.

Rick Gottscho, EVP and strategic advisor to the CEO and former CTO at Lam Research, recently spoke with me about Lam's article in Nature that showed how AI can help accelerate process engineering for semiconductors (there was also a March IEEE Spectrum article on this topic).

He said that the company is developing ways to accelerate semiconductor process development in a virtual environment, creating digital twins for everything going on in semiconductor processing, particularly etch and deposition operations. Traditionally, these processes have been developed using empirical methods, with lots of tweaking of chemical processes, particularly as process complexity increases. He said that there are more than 100 trillion different chemical process recipes that can run on Lam equipment. With so many variables, traditional design of experiments to find the best process is time-consuming and expensive.

Effective modeling and optimization of these processes doesn't require the greatest level of accuracy; it just needs to be good enough to enable fast learning at low cost. An initial approach is to develop a model that is simple, but not too simple. It should allow evaluating problems close to what is done on actual machines, using variable parameters, and it should include important non-linearities and basic physics. It needs only to show trends in the right direction, not to be quantitatively accurate.
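To make the "simple but not too simple" idea concrete, here is a purely illustrative toy model (not Lam's actual simulator; the functional form and constants are assumptions) of an etch rate that captures two qualitative behaviors: a saturating dependence on ion flux and an Arrhenius-like dependence on temperature. Only the trend directions matter, not the numbers:

```python
import math

# Toy trend-level model (illustrative only, not Lam's simulator):
# etch rate rises with ion flux but saturates, and rises with
# temperature via an Arrhenius-like activation term.
def toy_etch_rate(ion_flux, temp_k, k_sat=2.0, ea_over_r=1500.0):
    flux_term = ion_flux / (k_sat + ion_flux)     # saturates at high flux
    thermal_term = math.exp(-ea_over_r / temp_k)  # Arrhenius-like activation
    return flux_term * thermal_term

# Trend checks: more flux or higher temperature -> faster etch.
assert toy_etch_rate(2.0, 400) > toy_etch_rate(1.0, 400)
assert toy_etch_rate(1.0, 450) > toy_etch_rate(1.0, 400)

# Saturation: adding flux helps less and less at high flux.
gain_low = toy_etch_rate(2.0, 400) - toy_etch_rate(1.0, 400)
gain_high = toy_etch_rate(16.0, 400) - toy_etch_rate(8.0, 400)
assert gain_high < gain_low
```

A model this crude is obviously not quantitatively accurate, but it points optimization in the right direction, which is all the fast-learning approach described above requires.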

To make progress with such an approach, they needed the ML algorithm to learn from process engineers. Results from human-designed experiments could be used for rough tuning of the model at the start and for further tuning at the end. The goal was to get within 10-25% of the multidimensional target. A particular process that makes wide use of Lam equipment is etching high-aspect-ratio holes for 3D NAND flash. The 3D NAND flash recently announced by Micron and SK hynix requires stacks of over 230 layers, and future 3D NAND flash could go to 1,000 layers or more. Rick said that a half-day etch of a 3D NAND hole in a real environment can cost $1,000.
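The "more than 100 trillion recipes" figure is easy to sanity-check with simple combinatorics. The 11-parameter recipe size comes from the article; the ~20 settings per parameter is my assumption for illustration:

```python
# Combinatorics sketch: recipe space size for a multi-parameter process.
# 11 parameters is mentioned in the article; 20 settings per parameter
# is an assumed granularity for illustration only.
parameters = 11
settings_per_parameter = 20

recipes = settings_per_parameter ** parameters
print(f"{recipes:.2e} possible recipes")  # ~2e14, i.e. ~200 trillion
```

Even at this coarse granularity the space is on the order of 10^14 recipes, which is why exhaustive design of experiments is hopeless and a surrogate-model approach pays off.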

The Lam approach uses a Bayesian optimization routine rather than true deep learning. Once prior known information was incorporated into the model, the algorithm designed experiments based upon these previous results. These new experiments could involve, for example, 11 parameters. When new results were obtained from process tests, they were fed back into the model to create a new set of designed experiments, and this process was iterated to develop a final optimized process. The algorithm uses a statistical approach, drawing upon the distribution of parameters; virtual experiments might be run 100 times for each set of conditions to build up these statistics.
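The seed-with-prior-data, propose, measure, refit loop described above can be sketched in a few dozen lines. This is a generic minimal Bayesian-optimization example on a 1-D toy "process" (the real problem has many parameters); the Gaussian-process surrogate, the upper-confidence-bound rule, the toy objective and all constants are my illustrative choices, not Lam's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def process_result(x):
    """Toy stand-in for a noisy virtual experiment (peak at x = 0.3)."""
    return -(x - 0.3) ** 2 + 0.05 * rng.normal()

def rbf(a, b, length=0.2):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

def gp_posterior(x_train, y_train, x_query, noise=0.05**2):
    """Gaussian-process mean/std at query points given observed runs."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_query)
    Kinv = np.linalg.inv(K)
    mean = Ks.T @ Kinv @ y_train
    # Prior variance (1.0 on the kernel diagonal) minus explained variance.
    var = 1.0 - np.einsum("ij,ik,kj->j", Ks, Kinv, Ks)
    return mean, np.sqrt(np.clip(var, 1e-12, None))

# Seed with a few "human-designed" experiments, then iterate:
# fit surrogate, pick the next experiment, run it, refit.
x_obs = np.array([0.05, 0.5, 0.95])
y_obs = np.array([process_result(x) for x in x_obs])
grid = np.linspace(0, 1, 201)

for _ in range(10):
    mean, std = gp_posterior(x_obs, y_obs, grid)
    x_next = grid[np.argmax(mean + 2.0 * std)]  # explore/exploit trade-off
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, process_result(x_next))

best = x_obs[np.argmax(y_obs)]
print(f"best setting found: {best:.2f} (true optimum at 0.30)")
```

The surrogate stands in for expensive physical runs, and each iteration concentrates new "experiments" where the model is either promising or uncertain, which is the core economy of the approach described above.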

Combining human experience and expertise with ML algorithms to model and optimize processes like this resulted in tighter tolerances at less than half the cost and time of doing the same process development using only human experts. The actual model calculations are done in the cloud using a modified version of commercial software that Lam provides to customers to simulate their process results in 3D. The code is modified to add physics-based mechanisms and is calibrated to the data. Heuristic input from publications is also fed into the simulator.

One of the big issues for physical analysis of the process recipes is metrology: the experiments can take less than a day to run, but measuring the process results can take longer.

Rick also talked about taking this work to another level to create what he called a Semiverse. This would start by creating a "digital cousin" that improves with more data until it becomes a "digital twin." The image below shows how this concept could improve semiconductor process development at lower cost and provide workforce development, as well as the barriers that stand in the way of developing this concept.

Lam Research Concept for the Semiverse

There is great value in using environments that are close to, but not identical to, the processes being modeled. He said that such digital cousins can serve as a workforce development tool to teach process engineers, reducing the cost of learning on actual physical equipment and enabling real-time evaluation of learning. Accessing such virtual environments is also much easier than using the actual physical equipment.

Rick said that a Semiverse won't be created overnight. It must be built virtual brick by virtual brick. Models will fail and need to be improved, and the whole system learns over time. The path to precision is iteration, so that eventually a digital cousin will become a true digital twin!

NXP announced that its S32 automotive processors will include embedded MRAM from TSMC. Lam is using machine learning tools to create new processes faster and cheaper, and is working toward a true semiconductor manufacturing Semiverse.