Since generative AI exploded, it's all anyone talks about. But traditional ML still covers a vast space in real-world production systems. I don't need this tool right now, but glad to see work in this area.
A nice way to use traditional ML models today is to do feature extraction with an LLM and classification on top with a traditional ML model. Why? Because this way you can tune your own decision boundary, and piggyback on features from a generic LLM to power the classifier.
For example, for CV triage you use an LLM with a rubric to extract features; choosing which features you are going to rely on does a lot of the work here. Then collect a few hundred examples, label them (accept/reject), and train your traditional ML model on top. It won't inherit the LLM's biases in the final decision.
You can probably use any LLM for the feature extraction, and retrain the small model in seconds as new data comes in. A coding agent can even write its own small-model-as-a-tool on the fly and use it in the same session.
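A minimal sketch of the pattern above: an LLM scores a CV against a rubric (stubbed out here, since the prompt and model are up to you), and a tiny logistic-regression classifier owns the final accept/reject decision boundary. The rubric names, the stub, and the toy labels are all illustrative, not from any real system.

```python
# Sketch: LLM does feature extraction, a small trad-ML model does the
# classification. The LLM call is stubbed; rubric names are made up.
import math
import random

RUBRIC = ["years_experience", "domain_match", "writing_quality"]

def extract_features(cv_text: str) -> list[float]:
    """Stand-in for an LLM call that returns rubric scores in [0, 1].
    In practice you'd prompt the model to emit JSON keyed by RUBRIC."""
    random.seed(hash(cv_text) % 2**32)  # deterministic stub
    return [round(random.random(), 2) for _ in RUBRIC]

def train_logreg(X, y, lr=0.5, epochs=500):
    """Plain-Python logistic regression via gradient descent. It refits
    in milliseconds, so you can retrain whenever new labels arrive."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            g = p - yi                        # gradient of log-loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if z > 0 else 0

# Toy labeled set: high rubric scores -> accept (1), low -> reject (0).
X = [[0.9, 0.8, 0.7], [0.8, 0.9, 0.9], [0.2, 0.1, 0.3], [0.1, 0.2, 0.2]]
y = [1, 1, 0, 0]
w, b = train_logreg(X, y)
print(predict(w, b, [0.85, 0.9, 0.8]))  # high scores -> 1
print(predict(w, b, [0.15, 0.1, 0.2]))  # low scores  -> 0
```

The point is that the decision boundary (the weights on each rubric dimension) is yours and is learned from your labels; the LLM only supplies the features. Swapping the toy logistic regression for scikit-learn or a gradient-boosted tree doesn't change the shape of the pipeline.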
"classical ML" models typically have a more narrow range of applicability. in my mind the value of ollama is that you can easily download and swap-out different models with the same API. many of the models will be roughly interchangeable with tradeoffs you can compute.
If you're working on a fraud problem, an open-source fraud model will probably be useless (if one could even exist). And if you own the entire training-to-inference pipeline, I'm not sure what this offers. I guess you can easily swap the backends? Maybe for ensembling?
This lets you skip Python, R, Julia, etc. entirely and connect directly to your backend systems, which are presumably in a fast language. If Python is in your call stack, you already don't care about absolute performance.
Can you tell us more about the motivation for this project? I'm very curious if it was driven by a specific use case.
I know there are specialized trading firms that have implemented projects like this, but most industry workflows I know of still involve data pipelines where scientists do intermediate data transformations before feeding them into these models. Even the C-backed libraries like numpy/pandas still explicitly depend on the CPython API and can't be compiled away, and this data-feed step tends to be the bottleneck in my experience.
That isn't to say this isn't a worthy project - I've explored similar initiatives myself - but my conclusion was that unless your data source is pre-configured to feed directly into your specific model without any intermediate transformation steps, optimizing inference time has marginal benefit for the overall pipeline. I lament this as an engineer who loves making things go fast but has to work with scientists who love the convenience of Jupyter notebooks and the APIs of numpy/pandas.