Interconnection is critical to heterogeneous integration


Carefully designed interconnects are crucial for achieving the advantages of heterogeneous integration and chiplets.

As the chip industry evolves from monolithic planar chips toward in-package integration of chips and chiplets, interconnect design and manufacturing are becoming more complex and more critical to device reliability. What was once as simple as laying down a copper wire now involves tens of thousands of microbumps, hybrid bonding, through-silicon vias (TSVs), and even optical fiber connectors. The main goal remains the same: send signals from point A to point B with the lowest possible power and the smallest RC delay, while ensuring those signals arrive at their destination intact. Making all of this work effectively, however, is an ever-growing challenge.

With rising data rates, we are pushing the limits of what physical channels can carry, requiring parallel, and even nested parallel, signaling paths to increase speed. That means more interconnects are needed than ever before.

This is particularly evident with chiplets, where data must flow in and out of each chiplet to connect it to the other components in the package. The approach may be more complex, but it offers significant rewards in power.

Marc Swinnen, Director of Product Marketing for the semiconductor division at Ansys, said, "Conventional chips have high-power drivers on their output pins, which are strong enough to drive electrical signals through relatively large and long signal traces on a PCB. However, chiplets do not need those really large drivers because 2.5D interconnects are much smaller, so you can use smaller I/O drivers on each chip to save space and power."


The main driver of this shift is physics: packing more functionality into a fixed area. While digital logic will continue to scale into the angstrom range, shrinking line widths increases resistance and capacitance and introduces a host of new physical effects. Devices may run hotter, signals may run slower, and signal integrity becomes harder to maintain. Overcoming these issues requires new materials with higher electron mobility and wider critical data paths. It also requires a deep understanding of how devices behave under different workloads, which can affect the overall layout of interconnects along the x, y, and z axes.
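
To make the resistance-and-capacitance trade-off concrete, the back-of-the-envelope sketch below estimates the distributed RC delay of a single copper line. All dimensions, the low-k permittivity, and the bulk copper resistivity are illustrative assumptions, not foundry data.

```python
# A minimal sketch (not a sign-off model) of how shrinking line width drives up
# RC delay. All dimensions below are illustrative assumptions.

EPS0 = 8.854e-12          # F/m, vacuum permittivity
RHO_CU = 1.7e-8           # ohm*m, bulk copper (real nanoscale wires are worse
                          # due to surface and grain-boundary scattering)
K_LOWK = 2.7              # relative permittivity of an assumed low-k dielectric

def wire_rc_delay(length_um, width_nm, thickness_nm, spacing_nm):
    """Distributed-RC (Elmore) delay estimate for a single interconnect."""
    L = length_um * 1e-6
    W = width_nm * 1e-9
    T = thickness_nm * 1e-9
    S = spacing_nm * 1e-9
    R = RHO_CU * L / (W * T)                # series resistance of the wire
    C = 2 * EPS0 * K_LOWK * (T * L) / S     # coupling capacitance to two neighbors
    return 0.38 * R * C                     # distributed RC line delay

# Halving width, spacing, and thickness together roughly quadruples the delay.
for w in (28, 14):
    d = wire_rc_delay(length_um=100, width_nm=w, thickness_nm=2 * w, spacing_nm=w)
    print(f"{w} nm half-pitch: ~{d * 1e12:.1f} ps over 100 um")
```

Under these assumptions, scaling the same 100 µm route to half the width roughly quadruples its delay, which is one reason critical data paths are widened or routed on upper, thicker metal layers.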

Frank Schirrmeister, Vice President of Solutions and Business Development at Arteris, said, "You break down what used to be on one chip into a broader set of multiple chiplets. The communication methods on the chip need to extend to communication between chiplets, but this is independent of the substrate you use between the chiplets. The complexity of the modules on the chip has increased."

As chips grew larger toward the end of the 1990s, the industry began to focus on how to connect the blocks inside them, leading to virtual socket integration schemes and various buses tailored to each situation. As the number of blocks became unmanageable, test buses, high-performance buses, peripheral buses, and the like emerged. Over time, bus systems became too power-hungry, leading to the development of protocols to reduce overhead.

Arm began to address this issue by creating the Advanced Microcontroller Bus Architecture (AMBA), an open standard for connecting and managing modules within SoCs. Over the past 30 years, AMBA has been revised and expanded, spawning multiple sub-protocols. Recently, Arm announced the new CHI C2C specification, extending AMBA to chiplets.

Interconnect Protocols

The abundance of interconnect PHYs and protocols has a certain irony. "One of the early great advantages of monolithic chips was that there was no interconnect," says Swinnen. "Technically there was, but they were all made in one process step. There's a rule that the reliability of a system decreases with the number of interconnects in the system. Nonetheless, there are even more connections now. Even a common 2.5D design can easily have 500,000 bumps."
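
A quick illustration of that rule, using assumed numbers rather than measured failure rates: if every interconnect must work for the system to work, the joint survival probability shrinks exponentially with the interconnect count.

```python
# Illustrative only: the per-bump reliability here is an assumption, not data.
r_per_bump = 0.9999999    # each bump assumed to survive with probability 1 - 1e-7
n_bumps = 500_000         # bump count cited above for a common 2.5D design

# All bumps must survive; independence is assumed for simplicity.
system_reliability = r_per_bump ** n_bumps
print(f"system survival probability: {system_reliability:.1%}")   # roughly 95%
```

Even a one-in-ten-million per-bump failure rate leaves roughly a 5% chance that at least one of 500,000 bumps fails.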

Furthermore, Andy Heinig, head of the Efficient Electronics department at Fraunhofer IIS' Engineering of Adaptive Systems division, says that added reliability complexity may be unavoidable. "Assembly technologies keep changing, for example, from solder balls to copper pillars, and later to hybrid bonding. With each new assembly technology, we might see new reliability issues. Here, the chiplet interface might bring new challenges, because the number of interconnects in a given area is quite high."

However, well-designed interconnects are crucial for achieving the advantages of heterogeneous integration and chiplets. As more signals and an ever-growing amount of data must be transmitted across increasingly complex layouts, interconnects can become a bottleneck, with latency rising as the number of connections grows.

"Your speed depends on the slowest interconnect in the design," points out Mick Posner, Vice President of Product Management for High-Performance Computing IP Solutions at Synopsys.Interconnect Classification and Hierarchical Structure

In multi-layer integrated circuits, thin and short local interconnects provide on-chip connections, while thick and long global interconnects facilitate transmission between different blocks. As detailed by Larry Zhao, the Technical Director at Lam Research, through-silicon vias (TSVs) allow signals and power to be transferred from one layer to the next.

The main difference between 2.5D (and future 3D-IC) chiplet interconnects and traditional PCB interconnects is that 2.5D interconnects are thinner, denser, and typically shorter. New features such as TSVs, microbumps, and hybrid bonding further complicate the interconnect picture, especially for 3D integration.

"On the positive side, this means that communication between 2.5D chiplets is faster, has higher bandwidth, and lower power consumption than PCBs," said Swinnen. "The downside is that it is more expensive than PCB technology. Many high-speed signals require comprehensive electromagnetic coupling analysis for design, which is more complex than the simpler RC modeling that can be used when staying on the chip."

However, issues such as IR drop and RC delay begin to degrade performance. To address this, the industry plans to supply power from the backside of the chip, thereby reducing wiring congestion on the metal layers of the device. This helps maintain signal integrity across the entire device while also ensuring that transistors receive sufficient power, but it adds a whole new level of complexity that high-volume manufacturing has not yet fully resolved.
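
The motivation is easy to see with a rough Ohm's-law sketch. The rail geometries and load current below are assumptions chosen only to show the direction of the effect, not figures for any real process.

```python
# Rough IR-drop illustration with assumed geometries; not sign-off numbers.
RHO_CU = 1.7e-8                      # ohm*m, bulk copper resistivity

def rail_resistance(length_um, width_um, thickness_um):
    """Resistance of a straight power rail with the given (assumed) dimensions."""
    area = (width_um * 1e-6) * (thickness_um * 1e-6)
    return RHO_CU * (length_um * 1e-6) / area

i_local = 0.01                                                            # A, assumed local load
front = rail_resistance(length_um=500, width_um=2.0, thickness_um=0.2)    # thin frontside rail
back = rail_resistance(length_um=500, width_um=10.0, thickness_um=2.0)    # thick backside rail
print(f"frontside drop: {i_local * front * 1e3:.0f} mV")                  # roughly 0.2 V
print(f"backside drop:  {i_local * back * 1e3:.1f} mV")                   # a few millivolts
```

The improvement in this sketch simply tracks the larger cross-sectional area of the backside rail, which is the headroom backside power delivery is meant to buy while also freeing frontside metal for signals.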

Figure 2: Classification of interconnect work. Source: Intel

As standards continue to evolve and more finely differentiated variants emerge, choosing an interconnect solution becomes more complex, even at well-defined nodes.

"If you look at the interconnect within an SoC, you immediately think of something like the AMBA bus," said Posner from Synopsys. "With the development of streaming interfaces, CHI extensions, and the expansion to more networks on the chip."

Arteris focuses on the scalability of heterogeneous, block-to-block topologies, and on mesh topologies for splitting an SoC into multiple chiplets. "It's a process that becomes complex because of protocols, and conflicting version adoption complicates it further," said Schirrmeister. "Most companies working with RISC have chosen CHI, so the issue is in the details: which version are they using? For example, the latest Arm cores have the CHI-e interface, while older Arm cores have the CHI-b interface. You have to go through version control and deal with different features in different versions."

This means that communication and compatibility are crucial.
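
The kind of bookkeeping Schirrmeister describes can be captured in a few lines. The issue names and feature sets below are purely hypothetical placeholders, not Arm's actual CHI feature matrix; the sketch only illustrates the idea that two interfaces interoperate at the level of the features they share.

```python
# Hypothetical sketch: the feature sets per protocol issue are placeholders,
# not Arm's real CHI definitions.
FEATURES_BY_ISSUE = {
    "CHI-b": {"coherent_transactions"},
    "CHI-e": {"coherent_transactions", "extended_features"},
}

def common_feature_level(issue_a: str, issue_b: str) -> set[str]:
    """Two chiplet interfaces can only rely on the features both issues support."""
    return FEATURES_BY_ISSUE[issue_a] & FEATURES_BY_ISSUE[issue_b]

# A newer core talking to an older one falls back to the shared subset.
print(common_feature_level("CHI-e", "CHI-b"))   # {'coherent_transactions'}
```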

 

Simplifying interconnect protocol options

Debendra Das Sharma, Senior Fellow and co-general manager of Intel's Memory and I/O Technology Group, stated that the proliferation of protocols is unlikely to be curbed quickly, nor should it be. "Some people mistakenly believe that there should be one interconnect to do it all. This is incorrect. I believe the industry has rallied around the right set of interconnects: UCIe within the package, PCIe and CXL outside the package, and Ethernet at the rack/pod level as well as for networking."

It is therefore very important for all of these interconnects to talk to each other, and interoperability remains a necessary goal for designers. "To address the challenges of multiple interconnects, the industry really needs an interoperability standard that can scale both vertically and horizontally," said Priyank Shukla, Chief Product Manager of Interface IP at Synopsys. "The entire ecosystem is striving to integrate and match this performance. We see the Ultra Ethernet Consortium providing a back-end network that can scale out, while AMD has open architectures and CXL technology can provide cache coherence. For chip-to-chip partitioning, UCIe is the best choice. These interoperable open standards offer a way to address the interoperability issues the industry faces."

 

Chiplets

Although different implementations use different interconnects, there is a clear trend toward standardization in chiplet interconnects. "Even users who control both ends of the connection tend to adopt standards, because they want to benefit from the collective work done by large standards organizations like UCIe," said Mayank Bhatnagar, Director of Product Marketing in Cadence's Silicon Solutions Group. "We will never have enough engineers to design all possible interconnects, and relying on standards lets users build on the collective work of others in the field."

At the same time, strain on the advanced packaging supply chain is prompting more users to consider organic packaging. "Organic packaging, also known as standard packaging, can shorten the turnaround time, and the bandwidth density it supports can meet the needs of many customers who initially thought their designs required advanced packaging," Bhatnagar said.
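
A rough shoreline-bandwidth estimate shows why organic packaging is often sufficient. The bump pitches, row counts, signal fraction, and per-lane rates below are assumptions for illustration, not figures from the UCIe specification.

```python
# Illustrative shoreline bandwidth density; all parameters are assumptions.
def bw_density_gbps_per_mm(bump_pitch_um, bump_rows, gbps_per_lane, signal_fraction=0.7):
    """Approximate bandwidth per mm of die edge for a given bump layout."""
    bumps_per_mm = (1000.0 / bump_pitch_um) * bump_rows
    signal_lanes_per_mm = bumps_per_mm * signal_fraction   # remainder is power/ground
    return signal_lanes_per_mm * gbps_per_lane

standard = bw_density_gbps_per_mm(bump_pitch_um=110, bump_rows=4, gbps_per_lane=16)
advanced = bw_density_gbps_per_mm(bump_pitch_um=45, bump_rows=8, gbps_per_lane=16)
print(f"organic/standard package: ~{standard:.0f} Gb/s per mm of die edge")
print(f"advanced package:         ~{advanced:.0f} Gb/s per mm of die edge")
```

If a design's die-to-die traffic fits comfortably within the lower figure, the shorter turnaround time of a standard organic package can outweigh the density advantage of an advanced one.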

Nevertheless, as the industry moves toward chiplets, a key issue remains unresolved. "A very important challenge for chiplet interconnects comes from the fact that no one can test the interfaces by probing with needles or probe cards as before," Fraunhofer's Heinig pointed out. "Such testing is necessary if bring-up fails or if errors occur during operation. Here we need new solutions, such as on-chip monitoring and test."

Addressing New Complexities

As the complexity of new 2.5D/3D packaging designs continues to increase, so does the demand for new solutions. Product development has become interdisciplinary, drawing on many fields of expertise and many different analysis tools.

"High-speed digital, RF, photonics, power electronics, ASIC design, thermal, mechanical, and more must be closely integrated," said Mueth from Keysight Technologies. "This is one dimension of complexity, as these disciplines are often interdependent, further complicating the design process. Requirements, processes, and data must be managed throughout the entire engineering lifecycle of design, testing, and manufacturing, adding more complexity to product development efforts. Finally, the small chips must operate within higher-level hierarchical systems, so top-down design and bottom-up verification elements must be considered."
