
Large Language Model and Artificial Intelligence Updates for Taiwan

Research

Research Pinpoints Why LLMs Stumble When Juggling Multiple Tasks at Once

via arXiv

A new arXiv paper systematically examines how LLM performance degrades when processing multiple instances simultaneously, identifying both instance count and context length as compounding factors. The research provides a structured analysis of the trade-offs involved in batched inference workloads, a core challenge for production AI deployments.

Analysis: For Taiwan's TSMC-anchored AI chip supply chain, this research has direct hardware implications — understanding where LLMs break down under multi-instance loads helps fabless designers and HPC customers better spec memory bandwidth and on-chip context capacity for next-generation inference accelerators.
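To make the setup concrete: the paper studies prompts that pack several independent task instances into a single request, so instance count and total context length grow together. The sketch below (illustrative only, not code from the paper; `pack_instances` and the 4-characters-per-token estimate are assumptions) shows how context grows as instances are batched.

```python
# Illustrative sketch of multi-instance batched prompting: several
# independent inputs are packed into one prompt, so instance count
# and context length grow together -- the compounding effect the
# paper identifies. Helper names and the token estimate are assumed.

def pack_instances(task: str, instances: list[str]) -> str:
    """Build a single multi-instance prompt from N independent inputs."""
    numbered = "\n".join(f"{i + 1}. {x}" for i, x in enumerate(instances))
    return f"{task}\nAnswer each item separately:\n{numbered}"

def approx_tokens(text: str) -> int:
    """Crude token estimate (~4 characters per token)."""
    return max(1, len(text) // 4)

reviews = [f"Review #{i}: the product was fine." for i in range(1, 9)]
for n in (1, 4, 8):
    prompt = pack_instances("Classify the sentiment of each review.", reviews[:n])
    print(f"{n} instances -> ~{approx_tokens(prompt)} tokens")
```

Running this shows prompt length scaling roughly linearly with instance count, which is why the paper treats the two factors as intertwined rather than independent.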

Top Stories
Research

Industrial Technology Research Institute opens generative AI lab in Hsinchu

via Taipei Times

Taiwan's Industrial Technology Research Institute has inaugurated a generative AI research laboratory at its Hsinchu campus, positioning the facility as a bridge between academic AI research and commercial applications for Taiwanese industry. The lab is equipped with a dedicated GPU computing cluster and will focus on developing generative AI technologies for semiconductor design automation, smart manufacturing, and Traditional Chinese content generation. ITRI president Edwin Liu said the lab addresses a critical need for Taiwanese companies that want to adopt generative AI but lack the in-house research capacity to develop and customize models for their specific industry requirements. The lab will operate as an open innovation center, offering collaborative research programs where companies contribute domain expertise and use cases while ITRI provides AI research capabilities and compute resources. Initial industry partners include several major semiconductor equipment manufacturers, electronics companies, and a leading Taiwanese financial services group. Research priorities for the first year include developing AI-assisted chip design tools that can generate and verify circuit layouts, creating manufacturing defect detection systems powered by vision-language models, and building Traditional Chinese conversational AI systems for customer-facing applications. The lab plans to publish its research openly and release reusable AI components as open-source tools for the Taiwanese tech ecosystem.

Industry

Taiwan startup Yating AI raises $20M for Mandarin speech AI

via TechCrunch

Taipei-based speech AI startup Yating AI has closed a $20 million Series A funding round led by AppWorks Ventures with participation from National Development Fund and Japanese venture capital firm Global Brain. Yating specializes in Mandarin Chinese speech recognition and synthesis technology, with particular strength in the Taiwanese Mandarin dialect and its distinctive vocabulary, pronunciation patterns, and code-switching behaviors that differ significantly from mainland Mandarin. The company's speech models are trained on proprietary datasets collected through partnerships with Taiwanese broadcasters, podcast platforms, and government agencies, achieving word error rates that outperform global competitors on Taiwanese Mandarin benchmarks by over 30 percent. CEO Lin Yu-hsuan said the funding will support expansion into real-time meeting transcription for enterprise customers, voice-enabled interfaces for government digital services, and automated dubbing and subtitling for Taiwan's media industry. Yating's technology already powers the real-time transcription system used in Taiwan's Legislative Yuan and several municipal government agencies. The company is also developing specialized models for medical dictation in Taiwanese hospitals, where accurate recognition of mixed Mandarin-Hokkien medical terminology is essential. The raise brings Yating's total funding to $28 million.
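The benchmark claim is stated in terms of word error rate (WER), the standard speech-recognition metric: word-level edit distance (substitutions, insertions, and deletions) divided by the number of reference words. A minimal sketch of the computation, not Yating's evaluation code:

```python
# Word error rate: word-level edit distance between a reference
# transcript and a hypothesis, divided by the reference word count.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # 1 deletion / 6 reference words
```

"Outperform by over 30 percent" on this metric means a relative reduction — e.g. a competitor's 10% WER dropping below 7% — so lower is better.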

Infrastructure

TSMC and NVIDIA deepen partnership on next-gen AI chip packaging

via Nikkei Asia

TSMC and NVIDIA have announced an expanded collaboration on advanced chip packaging technologies specifically designed for next-generation AI processors, with TSMC committing to significantly increase its CoWoS advanced packaging capacity to meet NVIDIA's surging demand. The partnership will develop new packaging architectures that enable tighter integration of AI compute dies with high-bandwidth memory, increasing the memory bandwidth available to AI accelerators by up to 60 percent over current solutions. TSMC CEO C.C. Wei revealed that the companies are co-developing a next-generation System-on-Wafer technology that will allow NVIDIA to build AI accelerators with more compute chiplets than current designs support, potentially doubling the transistor count per package. The enhanced packaging capabilities are critical for NVIDIA's roadmap beyond its current Blackwell architecture, where further performance gains increasingly depend on memory bandwidth and die-to-die interconnect performance rather than transistor density alone. TSMC is investing over $5 billion in new advanced packaging facilities in Taiwan and is constructing additional capacity at its Arizona campus. Industry analysts note that advanced packaging has become the critical bottleneck in AI chip production, with demand far exceeding available capacity across the industry.

Research

National Taiwan University establishes AI governance research center

via Taipei Times

National Taiwan University has established a dedicated AI governance research center, bringing together scholars from computer science, law, political science, and public policy to develop governance frameworks suited to Taiwan's unique technological and geopolitical context. The center, funded by a NT$500 million endowment from the Ministry of Education and private donors, will focus on four research pillars: AI regulation comparative analysis, algorithmic accountability in democratic governance, cross-strait AI policy dynamics, and international standards harmonization. Center director Professor Lee Chien-fu noted that Taiwan occupies a distinctive position in global AI governance discussions as both a critical node in the AI hardware supply chain and a democracy navigating complex relationships with major AI powers. Initial research projects include a comprehensive analysis of how Taiwan's existing legal framework applies to generative AI systems, a study of AI-driven disinformation threats to democratic processes, and the development of evaluation criteria for AI systems used in government decision-making. The center will also host an annual international conference on AI governance in the Indo-Pacific region and maintain a public database of AI policies and regulations across Asian economies. Partnerships have been established with governance research groups at Stanford, Oxford, and the National University of Singapore.
