Computex Taipei 2024, Asia’s biggest tech event, is underway, and our analysts are on the ground bringing you the latest updates. Day 2 kicked off with keynote addresses from industry leaders. Here are the standout announcements:
Intel: Advancing AI and Data Center Technology
Intel introduced its new Xeon 6 data center processors, underscoring its ongoing commitment to AI chip development. It also unveiled the Gaudi 3 accelerator, which Intel claims outpaces NVIDIA’s H100 in both training and inferencing: 40% faster training of a GPT-3 model, 15% higher inferencing throughput on Llama 2, and on average 2x faster inferencing performance. Another highlight was the Lunar Lake SoC, designed for AI PCs and laptops; according to Intel, it delivers up to 3x the AI compute of the current Meteor Lake processors. Intel aims to ship 40 million AI PCs by the end of 2024, with over 8 million already in the market.
MediaTek: Highlighting AI and Connectivity
MediaTek showcased its new Dimensity 9300 chip and DaVinci platform, aimed at accelerating GenAI app development for mobile devices. The company also highlighted its automotive push with a 3nm smart cockpit SoC capable of running real-time edge AI applications. In wireless connectivity, MediaTek presented the T300, a 5G RedCap RFSoC platform designed for IoT products that need efficient connectivity, long battery life and a compact footprint. Customers are currently testing the platform, with wearable devices expected to adopt it by 2025. MediaTek’s presentation spanned a comprehensive product lineup covering mobile SoCs, smart home devices, Chromebooks, tablets, 5G CPE, satellite communications and Wi-Fi solutions. These developments reflect MediaTek’s continued, broad focus on AI technology and wireless connectivity.
Source: Counterpoint Research
From the Floor: Gigabyte Facilitates Clustering for Turnkey AI Infra
Gigabyte stole the spotlight with its GIGA POD solution, highlighting the importance of clustering in the AI boom. The scalable, turnkey AI supercomputing infrastructure offers the high throughput and compute power needed to run deep learning models at scale. Built around advanced NVIDIA and AMD accelerators, GIGA POD is tuned for heavy AI/ML workloads. These offerings position Gigabyte as a major player in the AI infrastructure space, ready to support the growing demand for AI and cloud services.