AI for Time Series and Anomaly Detection
Journal of Artificial Intelligence and Big Data | Vol 4, Issue 2
Table 2. Summary of Empirical Results
| Model Category | Representative Models | Primary Strengths | Weaknesses / Limitations | Average Performance (F1 / RMSE change vs. baseline) |
|---|---|---|---|---|
| Traditional Statistical | ARIMA, Holt-Winters | Interpretable, low complexity | Poor scalability, weak with nonlinear data | F1 ≈ 0.60 / RMSE ↑ 15–20% |
| Machine Learning | SVM, Random Forest, XGBoost | Moderate accuracy, interpretable | Heavy feature engineering | F1 ≈ 0.75 / RMSE ↓ 10% |
| Deep Sequential | LSTM, GRU | Captures temporal dependencies | Slow training, gradient issues | F1 ≈ 0.85 / RMSE ↓ 18% |
| Deep Convolutional | TCN | Fast inference, robust to noise | Limited long-term context | F1 ≈ 0.88 / RMSE ↓ 20% |
| Transformer-Based | TFT, Informer, TimesNet | High accuracy, interpretable via attention | Computationally expensive | F1 ≈ 0.91 / RMSE ↓ 25% |
| Generative / Hybrid | Autoencoder, VAE, GAN | Excellent anomaly detection | Hard to tune, interpretability issues | F1 ≈ 0.93 / RMSE ↓ 22% |
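To make the generative/hybrid row concrete, the sketch below illustrates the reconstruction-error principle behind autoencoder-style anomaly detection on a time series. This is a hypothetical, minimal illustration (not any model evaluated in the table): for simplicity it uses a linear autoencoder fitted via PCA rather than a trained neural network, but the detection logic is the same — windows that the model reconstructs poorly are flagged as anomalous. The signal, window width, latent dimension, and 3-sigma threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_windows(series, width):
    """Slice a 1-D series into overlapping windows of the given width."""
    return np.stack([series[i:i + width] for i in range(len(series) - width + 1)])

# Synthetic signal: a noisy sine wave with one injected spike anomaly at t = 300.
t = np.arange(400)
signal = np.sin(2 * np.pi * t / 50) + 0.05 * rng.standard_normal(400)
signal[300] += 4.0  # injected anomaly

W = 20                      # window width
X = make_windows(signal, W)

# "Linear autoencoder": project windows onto the top-k principal directions
# and reconstruct. A pure sine lies in a 2-D subspace, so k = 2 suffices.
k = 2
mu = X.mean(axis=0)
Xc = X - mu
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Vk = Vt[:k]                 # decoder basis (encoder is its transpose)
recon = (Xc @ Vk.T) @ Vk + mu

# Anomaly score = mean squared reconstruction error per window;
# flag windows whose score exceeds mean + 3 * std.
scores = ((X - recon) ** 2).mean(axis=1)
threshold = scores.mean() + 3 * scores.std()
anomalous = np.where(scores > threshold)[0]
print(anomalous)  # windows flagged as anomalous (should overlap the spike)
```

A neural autoencoder, VAE, or GAN discriminator replaces the PCA projection with a learned nonlinear mapping, which is what makes these models harder to tune, as the table notes, but also far more expressive on real data.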