The classification of stellar light curves has become a key task in modern time-domain astronomy, fueled by the rapid growth of data from large-scale surveys such as Kepler and TESS. Although deep learning models have achieved high accuracy on this task, their computational cost can limit scalability. To address this limitation, we propose LightCurve MoE, a Mixture-of-Experts (MoE) architecture that combines dynamic sparse routing with a dual-gating mechanism to balance accuracy, efficiency, and robustness. Our model comprises five specialized experts, each built on a different feature-extraction method (e.g., wavelet transforms, Gramian angular fields, and recurrence plots) to capture complementary patterns in the light curves. A dual-gating mechanism scores the experts by analyzing both frequency- and time-domain features, allowing the model to adaptively weight each expert's contribution. During inference, only the top three of the five experts are activated per sample via Top-k routing, reducing computational cost by 40% relative to dense models while preserving strong accuracy (≈96%). The model also employs entropy regularization and a mechanism that keeps otherwise-inactive experts updated during training, ensuring stable and effective learning. By combining sparse computation with multi-modal feature fusion, LightCurve MoE offers a scalable solution for future large-scale photometric surveys such as LSST and the Global Open Transient Telescope Array, where processing efficiency is crucial given the massive volume of daily data.
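
To make the Top-k routing and entropy regularization described above concrete, the following is a minimal PyTorch sketch of a dual-input gate that selects the top three of five experts per sample. All layer sizes, variable names, and the exact placement of the regularizer are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualGate(nn.Module):
    """Scores five experts from concatenated time- and frequency-domain
    feature summaries, then keeps the top-k (here k=3) per sample.

    The feature dimensions and single linear gating layer are assumptions
    made for this sketch; the paper's dual-gating mechanism may differ.
    """

    def __init__(self, time_dim: int, freq_dim: int, n_experts: int = 5, k: int = 3):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(time_dim + freq_dim, n_experts)

    def forward(self, time_feats: torch.Tensor, freq_feats: torch.Tensor):
        # Gate logits from fused time- and frequency-domain features: (B, n_experts)
        logits = self.gate(torch.cat([time_feats, freq_feats], dim=-1))

        # Sparse routing: keep only the k highest-scoring experts per sample
        # and renormalize their weights over the active set.
        topk_vals, topk_idx = logits.topk(self.k, dim=-1)
        topk_weights = F.softmax(topk_vals, dim=-1)

        # Entropy of the full gate distribution; subtracting a small multiple
        # of this term from the training loss encourages balanced expert usage
        # and keeps otherwise-inactive experts from collapsing.
        probs = F.softmax(logits, dim=-1)
        gate_entropy = -(probs * probs.clamp_min(1e-9).log()).sum(dim=-1).mean()

        return topk_idx, topk_weights, gate_entropy


# Usage sketch: run only the selected experts and mix their outputs.
gate = DualGate(time_dim=64, freq_dim=64)
time_feats, freq_feats = torch.randn(8, 64), torch.randn(8, 64)
idx, w, entropy = gate(time_feats, freq_feats)
# loss = task_loss - lambda_entropy * entropy   # hypothetical regularized objective
```

At inference only the three selected experts would be evaluated per sample, which is where the reported reduction in compute relative to a dense five-expert ensemble would come from; the entropy term matters only during training.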