
News

AI coverage from Taiwan, by Wei-Lin Chen.

Research

Research Pinpoints Why LLMs Stumble When Juggling Multiple Tasks at Once

via arXiv

A new arXiv paper systematically examines how LLM performance degrades when processing multiple instances simultaneously, identifying both instance count and context length as compounding factors. The research provides a structured analysis of the trade-offs involved in batched inference workloads, a core challenge for production AI deployments. Findings suggest the two variables interact in ways that current benchmarks often fail to capture.

Analysis

For Taiwan's TSMC-anchored AI chip supply chain, this research has direct hardware implications — understanding where LLMs break down under multi-instance loads helps fabless designers and HPC customers better spec memory bandwidth and on-chip context capacity for next-generation inference accelerators.

Research

Industrial Technology Research Institute opens generative AI lab in Hsinchu

via Taipei Times

Taiwan's Industrial Technology Research Institute has inaugurated a generative AI research laboratory at its Hsinchu campus, positioning the facility as a bridge between academic AI research and commercial applications for Taiwanese industry. The lab is equipped with a dedicated GPU computing cluster and will focus on developing generative AI technologies for semiconductor design automation, smart manufacturing, and Traditional Chinese content generation. ITRI president Edwin Liu said the lab addresses a critical need for Taiwanese companies that want to adopt generative AI but lack the in-house research capacity to develop and customize models for their specific industry requirements. The lab will operate as an open innovation center, offering collaborative research programs where companies contribute domain expertise and use cases while ITRI provides AI research capabilities and compute resources. Initial industry partners include several major semiconductor equipment manufacturers, electronics companies, and a leading Taiwanese financial services group. Research priorities for the first year include developing AI-assisted chip design tools that can generate and verify circuit layouts, creating manufacturing defect detection systems powered by vision-language models, and building Traditional Chinese conversational AI systems for customer-facing applications. The lab plans to publish its research openly and release reusable AI components as open-source tools for the Taiwanese tech ecosystem.

Industry

Taiwan startup Yating AI raises $20M for Mandarin speech AI

via TechCrunch

Taipei-based speech AI startup Yating AI has closed a $20 million Series A funding round led by AppWorks Ventures, with participation from the National Development Fund and Japanese venture capital firm Global Brain. Yating specializes in Mandarin Chinese speech recognition and synthesis technology, with particular strength in the Taiwanese Mandarin dialect and its distinctive vocabulary, pronunciation patterns, and code-switching behaviors, which differ significantly from mainland Mandarin. The company's speech models are trained on proprietary datasets collected through partnerships with Taiwanese broadcasters, podcast platforms, and government agencies, achieving word error rates that outperform global competitors on Taiwanese Mandarin benchmarks by over 30 percent. CEO Lin Yu-hsuan said the funding will support expansion into real-time meeting transcription for enterprise customers, voice-enabled interfaces for government digital services, and automated dubbing and subtitling for Taiwan's media industry. Yating's technology already powers the real-time transcription system used in Taiwan's Legislative Yuan and several municipal government agencies. The company is also developing specialized models for medical dictation in Taiwanese hospitals, where accurate recognition of mixed Mandarin-Hokkien medical terminology is essential. The raise brings Yating's total funding to $28 million.

Infrastructure

TSMC and NVIDIA deepen partnership on next-gen AI chip packaging

via Nikkei Asia

TSMC and NVIDIA have announced an expanded collaboration on advanced chip packaging technologies specifically designed for next-generation AI processors, with TSMC committing to significantly increase its CoWoS advanced packaging capacity to meet NVIDIA's surging demand. The partnership will develop new packaging architectures that enable tighter integration of AI compute dies with high-bandwidth memory, increasing the memory bandwidth available to AI accelerators by up to 60 percent over current solutions. TSMC CEO C.C. Wei revealed that the companies are co-developing a next-generation System-on-Wafer technology that will allow NVIDIA to build AI accelerators with more compute chiplets than current designs support, potentially doubling the transistor count per package. The enhanced packaging capabilities are critical for NVIDIA's roadmap beyond its current Blackwell architecture, where further performance gains increasingly depend on memory bandwidth and die-to-die interconnect performance rather than transistor density alone. TSMC is investing over $5 billion in new advanced packaging facilities in Taiwan and is constructing additional capacity at its Arizona campus. Industry analysts note that advanced packaging has become the critical bottleneck in AI chip production, with demand far exceeding available capacity across the industry.

Research

National Taiwan University Establishes AI Governance Research Center

via Taipei Times

National Taiwan University has established a new research center focused on AI governance, ethics, and policy for the Asia-Pacific context. The center will study AI regulation models, algorithmic fairness across Asian cultural contexts, and the geopolitical implications of AI development. Founding faculty include researchers from law, computer science, and political science departments, reflecting the multidisciplinary nature of AI governance challenges.

Analysis

Taiwan's AI governance center has a unique vantage point — it sits at the intersection of US-China tech competition, semiconductor dominance, and democratic AI values. The geopolitics research angle is what differentiates this from Western AI ethics centers.

Infrastructure

MediaTek Unveils On-Device AI Chip for Edge LLM Inference

via Nikkei Asia

Taiwanese chipmaker MediaTek has announced the Dimensity AI 9400, a system-on-chip designed for running large language models directly on mobile devices and edge hardware. The chip includes a dedicated neural processing unit capable of running 7B parameter models at interactive speeds without cloud connectivity. MediaTek targets smartphone manufacturers, IoT device makers, and automotive companies looking to embed AI capabilities without cloud dependency.

Analysis

On-device 7B inference is the threshold that unlocks privacy-preserving AI for the mass market. With MediaTek powering 40%+ of global smartphones, this chip could put local LLM capability in a billion devices within two years.

Infrastructure

Foxconn Builds Largest GPU Supercomputer Cluster in Southeast Asia

via Reuters

Foxconn has completed construction of a GPU supercomputer facility in Kaohsiung, Taiwan, housing over 10,000 NVIDIA H100 GPUs. The cluster is designed for AI model training and will serve both Foxconn's internal AI development and external enterprise clients. The facility represents Foxconn's strategic pivot from pure manufacturing toward AI infrastructure services, a higher-margin business line aligned with the company's 3+3 transformation strategy.

Analysis

At 10,000 H100s, this is one of the largest commercial AI clusters globally. Foxconn selling AI compute is a margin play — hardware manufacturing runs at 3-5% margins, cloud GPU rental at 40%+. Watch whether they can actually operate a cloud business.

Industry

Appier Launches Enterprise LLM Suite for Asia-Pacific Markets

via Nikkei Asia

Taipei-based Appier has launched an enterprise LLM product suite targeting Asia-Pacific businesses. The platform offers marketing content generation, customer behavior prediction, and automated audience segmentation powered by models fine-tuned for Asian markets and languages. Appier reports 200+ enterprise clients across Japan, Taiwan, and Southeast Asia signed up during the preview period. The company's stock rose 8% on the announcement.

Analysis

Appier is positioning as the Salesforce Einstein of Asia-Pacific — AI embedded in marketing tools rather than sold as raw model access. 200+ signups in preview suggests strong product-market fit in a region underserved by US AI enterprise vendors.

Policy

Taiwan's National Science Council Commits NT$10B to AI Research

via Taipei Times

Taiwan's National Science and Technology Council has announced NT$10 billion in dedicated AI research funding over three years. Priority areas include semiconductor-AI co-design, Mandarin language AI, and AI applications for Taiwan's manufacturing-heavy economy. The funding includes establishing five new AI research centers at national universities and a compute grant program giving researchers access to NVIDIA GPU clusters hosted at NCHC.

Analysis

Semiconductor-AI co-design as the top priority is Taiwan playing to its unique strength — no other country can integrate AI research with cutting-edge chip fabrication at this level. The NCHC compute grants democratize access beyond TSMC-adjacent labs.

Infrastructure

TSMC Announces 2nm Process Node Optimized for AI Accelerator Chips

via Nikkei Asia

TSMC has unveiled details of its N2 process node featuring gate-all-around transistor architecture specifically optimized for AI accelerator manufacturing. The node delivers 15% speed improvement and 30% power reduction compared to N3E for AI workloads. Major customers including NVIDIA, AMD, and Apple have committed to N2 production starting in late 2026. The announcement reinforces Taiwan's central position in the global AI hardware supply chain.

Analysis

The 30% power reduction is the headline number for AI data centers — energy costs are becoming the binding constraint on AI scaling. TSMC optimizing for AI workloads at the transistor level means the hardware-software co-design era is here.

Models

Taiwan AI Labs Releases BLOOM-zh Traditional Chinese Language Model

via Taipei Times

Taiwan AI Labs has released BLOOM-zh, a large language model specifically optimized for Traditional Chinese. Built on the BLOOM architecture with additional pre-training on Taiwanese web data, government documents, and academic literature, the model addresses the performance gap between Simplified and Traditional Chinese in existing multilingual models. The 13B parameter model is available under an open license and shows strong performance on Mandarin comprehension benchmarks.

Analysis

BLOOM-zh is as much a cultural sovereignty play as a technical one — Taiwan needs NLP models that understand Traditional Chinese nuance without mainland training data bias. The government document pre-training is the strategic ingredient.