LLMs can’t outperform a technique from the 70s, but they’re still worth using — here’s why

This year, our team at the MIT Data to AI Lab decided to try using large language models (LLMs) to perform a task usually left to very different machine learning tools: detecting anomalies in time series data. This has been a common machine learning (ML) task for decades, used frequently in industry to anticipate and find problems with heavy machinery. We developed a framework for using LLMs in this context, then compared their performance to 10 other methods, from state-of-the-art deep learning tools to a simple method from the 1970s called autoregressive integrated moving average (ARIMA). In the end, the LLMs lost to the other models in most cases, even to old-school ARIMA, which outperformed them on seven of the 11 datasets.

For those who dream of LLMs as a totally universal problem-solving technology, this may sound like a defeat. And for many in the AI community, who are discovering the current limits of these tools, it is likely unsurprising. But two elements of our findings really surprised us. First, the LLMs' ability to outperform some models, including some transformer-based deep learning methods, caught us off guard. The second, and perhaps even more important, surprise was that, unlike the other models, the LLMs did all of this with no fine-tuning. We used GPT-3.5 and Mistral LLMs out of the box, and didn't tune them at all.

LLMs broke multiple foundational barriers

For the non-LLM approaches, we would train a deep learning model, or the aforementioned 1970s ARIMA model, using the signal for which we want to detect anomalies. Essentially, we would use the historical data for the signal to train the model so it understands what "normal" looks like. Then we would deploy the model, allowing it to process new values for the signal in real time, detect any deviations from normal and flag them as anomalies.
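This two-step pattern can be sketched in a few lines. The following is a deliberately minimal stand-in, not the actual deep learning or ARIMA pipeline from our study: it "learns normal" as a mean and standard deviation over historical values, then flags new values that deviate beyond a threshold.

```python
import statistics

def fit_normal_profile(history):
    """Step 1 (training): learn what "normal" looks like from
    historical values of the signal. (Stand-in for fitting a
    deep learning or ARIMA model.)"""
    return statistics.fmean(history), statistics.pstdev(history)

def detect_anomalies(profile, new_values, threshold=3.0):
    """Step 2 (deployment): flag new values that deviate from the
    learned profile by more than `threshold` standard deviations."""
    mean, std = profile
    return [i for i, v in enumerate(new_values)
            if abs(v - mean) > threshold * std]

# Train on historical data for one signal...
profile = fit_normal_profile([10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.3, 9.7])

# ...then process incoming values in real time.
print(detect_anomalies(profile, [10.1, 9.9, 25.0, 10.0]))  # → [2]
```

The key point is that `fit_normal_profile` must be rerun, and the result redeployed, for every signal being monitored.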

LLMs did not need any previous examples

But when we used LLMs, we did not follow this two-step process: the LLMs were not given the opportunity to learn "normal" from the signals before they had to detect anomalies in real time. We call this zero-shot learning. Viewed through this lens, it's an incredible accomplishment. The fact that LLMs can perform zero-shot learning, jumping into this problem without any previous examples or fine-tuning, means we now have a way to detect anomalies without training a specific model from scratch for every single signal or condition. This is a huge efficiency gain, because certain types of heavy machinery, like satellites, may have thousands of signals, while others may require training for specific conditions. With LLMs, these time-intensive steps can be skipped completely.
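In practice, zero-shot use means the raw signal values are serialized into a text prompt and handed to an off-the-shelf model. The sketch below shows a hypothetical prompt format of this kind; it is an illustration of the idea, not the exact formatting our framework uses.

```python
def build_zero_shot_prompt(values, precision=2):
    """Serialize raw signal values into a text prompt. No examples of
    "normal" behavior are included: the model sees this signal for the
    first time. (Hypothetical prompt format, for illustration only.)"""
    series = ", ".join(f"{v:.{precision}f}" for v in values)
    return (
        "You are a time series anomaly detector.\n"
        f"Signal values: {series}\n"
        "Return the zero-based indices of any anomalous values."
    )

prompt = build_zero_shot_prompt([10.0, 10.2, 9.8, 25.0, 10.1])
# This prompt would then be sent, unmodified, to an out-of-the-box
# model such as GPT-3.5 or Mistral: no training step, no fine-tuning.
print(prompt)
```

Because there is no per-signal training artifact, the same prompt template works for any signal the operator cares about.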

LLMs can be directly integrated in deployment

A second, perhaps more challenging, part of current anomaly detection methods is the two-step process employed for training and deploying an ML model. While deployment sounds straightforward enough, in practice it is very challenging. Deploying a trained model requires that we translate all the code so that it can run in the production environment. More importantly, we must convince the end user, in this case the operator, to allow us to deploy the model. Operators themselves don't always have experience with machine learning, so they often consider this an additional, confusing item added to their already overloaded workflow. They may ask questions such as "how frequently will you be retraining," "how do we feed the data into the model" and "how do we use it for various signals and turn it off for others that are not our focus right now."

This handoff usually causes friction, and often means a trained model never gets deployed at all. With LLMs, because no training or updates are required, the operators are in control. They can query with APIs, add signals for which they want to detect anomalies, remove ones for which they don't need anomaly detection and turn the service on or off without having to depend on another team. This ability for operators to directly control anomaly detection will change difficult dynamics around deployment and may help make these tools much more pervasive.
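To make that operator-controlled workflow concrete, here is a toy sketch of what such a service interface could look like. The class and method names are hypothetical, invented for illustration; the point is only that adding, removing and pausing signals requires no retraining step.

```python
class AnomalyService:
    """Hypothetical operator-facing service: signals can be added,
    removed or paused directly, with no training or redeployment."""

    def __init__(self):
        self.signals = {}  # signal name -> enabled flag

    def add_signal(self, name):
        self.signals[name] = True

    def remove_signal(self, name):
        self.signals.pop(name, None)

    def set_enabled(self, name, enabled):
        if name in self.signals:
            self.signals[name] = enabled

    def active_signals(self):
        return [s for s, on in self.signals.items() if on]

svc = AnomalyService()
svc.add_signal("turbine_temp")
svc.add_signal("pump_pressure")
svc.set_enabled("pump_pressure", False)  # operator pauses one signal
print(svc.active_signals())  # → ['turbine_temp']
```

No ML team is in the loop for any of these operations, which is exactly the shift in control described above.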

While improving LLM performance, we must not take away their foundational advantages

Although they are spurring us to fundamentally rethink anomaly detection, LLM-based techniques have yet to perform as well as the state-of-the-art deep learning models, or (on seven of the 11 datasets) the ARIMA model from the 1970s. This might be because our team at MIT did not fine-tune or modify the LLMs in any way, or create a foundational LLM specifically meant to be used with time series.

While all those actions may push the needle forward, we need to be careful about how this fine-tuning happens so as not to compromise the two major benefits LLMs can afford in this space. (After all, although the problems above are real, they are solvable.) With this in mind, here is what we cannot do to improve the anomaly detection accuracy of LLMs:

  • Fine-tune the existing LLMs for specific signals, as this will defeat their "zero-shot" nature.
  • Build a foundational LLM to work with time series and add a fine-tuning layer for every new type of machinery.

These two steps would defeat the purpose of using LLMs and would take us right back to where we started: having to train a model for every signal and facing difficulties in deployment.

For LLMs to compete with existing approaches, whether in anomaly detection or other ML tasks, they must either enable a new way of performing a task or open up an entirely new set of possibilities. To prove that LLMs with any added layers still constitute an improvement, the AI community has to develop methods, procedures and practices to make sure that improvements in some areas don't eliminate LLMs' other advantages.

For classical ML, it took almost two decades to establish the train-test-validate practice we rely on today. Even with this process, we still can't always ensure that a model's performance in test environments will match its real performance when deployed. We come across label leakage issues, data biases in training and too many other problems to even list here.

If we push this promising new avenue too far without those specific guardrails, we may slip into reinventing the wheel again — perhaps an even more complex one.

Kalyan Veeramachaneni is the director of the MIT Data to AI Lab. He is also a co-founder of DataCebo.

Sarah Alnegheimish is a researcher at the MIT Data to AI Lab.

