How Swarm Intelligence Solves Routing Problems in 20 Seconds Without Training Data

Episode 6
Feb 10, 2026 | 40:54

Summary

Fred Gertz completed his PhD in electrical engineering under the inventor of the modern magnetic hard drive, then left academic research to solve a problem that’s stumped manufacturers for decades: how to optimize complex operations when you have almost no data. At Collide Technologies, he’s applying swarm intelligence to tackle NP-hard scheduling and routing problems that LLMs fail at spectacularly.

His approach comes from an unexpected place. While most AI startups chase massive datasets and GPU clusters, Fred turned to ant colonies. These insects solve complex logistics problems without central coordination, training data, or computing power. Their collective behavior cracks the same mathematical challenges that paralyze manufacturing floors: which routes minimize delivery time, how to assign hundreds of workers to shifting tasks, what machine parameters balance throughput against reliability.

The methodology borrows from operations research and Taguchi’s philosophy, which Fred positions against Six Sigma’s dominance. Where Six Sigma optimizes for low variation, Taguchi argued customers deserve the best possible product every single time. That shift in thinking leads to different math: instead of reducing standard deviations, you map how every process parameter mathematically connects to business outcomes like profit or quality. The problem? Operations research textbooks are dense enough to intimidate PhD holders. Collide’s swarm algorithms make those techniques accessible to companies running on spreadsheets.

Topics Discussed

  • Ant colony optimization combining search functions and route optimization to solve scheduling problems in 20 to 30 seconds
  • Operations research and Taguchi methods versus Six Sigma’s statistical process control approach for manufacturing optimization
  • Delivering ROI with spreadsheet data instead of requiring IoT sensors and six-month data collection projects
  • IQ OQ PQ validation frameworks from pharmaceutical robotics applied to AI model deployment in regulated industries
  • Why NP-complete problems are better AI targets than tasks humans already perform well
  • Agent coordination across 500 enterprise agents as swarm intelligence’s next application beyond LLM reasoning models
  • Generating structured outputs from API calls without training data or few-shot examples
  • Rate limiting and context window management for stateful applications like production planning tools
  • Manufacturing data environments spanning paper maintenance logs to live vibration sensors in the same facility
  • Evaluating AI without numeric metrics when outputs are text-based recommendations rather than classifications

“There’s no training data, right? There’s no ant that teaches all the other ants what to do. There’s no central coordinator.”

Fred Gertz
Founder and CEO at Collide Technologies
Transcript

Fred Gertz
I’m the CEO and founder for a company called Collide Technology where we are focused on trying to solve common optimization problems in manufacturing.

 

Saket Saurabh
So help us maybe understand this with a concrete example.

 

Fred Gertz
Our focus is on using a solution called Swarm Intelligence.

 

Saket Saurabh
What is the data environment like? I mean, you are working with live operational data or there are still things on spreadsheets?

 

Fred Gertz
It doesn’t necessarily assume a high data environment. An ant doesn’t have a lot of data. It doesn’t assume that we have a lot of training.

 

I think a lot of AI companies are focused on trying to solve problems that people are good at. Whereas we are focused on solving problems that people are bad at.

 

Saket Saurabh
Hi everyone. Thank you for listening to another episode of Data Innovators and Builders. Today I’m speaking with Fred Gertz, founder and CEO of Collide Technologies. Fred, thanks for chatting with me today.

 

Fred Gertz
My absolute pleasure. Thanks for having us.

 

Saket Saurabh
Fred, you have an incredible background. Tell us a little bit about Collide and also tell us about how you got from an advanced PhD into working with Collide.

 

Fred Gertz
Right. So I’m the CEO and founder for a company called Collide Technology where we are focused on trying to solve common optimization problems in manufacturing, but also in things like marketing and transportation and logistics. And our focus is on finding these areas that have been problems for a long time, in many cases that are well understood, but people have a difficulty employing. They have issues trying to get solutions that help them or that are dynamic enough for their situations, especially for midsize manufacturers or companies.

 

And our focus is on using a solution called Swarm Intelligence to actually find optimal scenarios that we can easily feed back to their project and management teams and logistics teams to easily get the most out of all their resources. But ultimately, we answer the question, if we have a list of resources, what is the best way to deploy them? And I think that’s sort of universal across almost any company. Every company would like to get as much as they can out of what they have. And so we specialize in trying to answer that question as clearly and quickly as possible over at Collide.

 

But it was kind of a journey. I originally, as you alluded to, my background is actually electrical engineering with a focus on nanotechnology. And originally I was an R&D engineer. I have a PhD in electrical engineering. I studied with Sakharat Kiserov and Alexander Hitun. Sakharat is the person who invented the modern magnetic hard drive, a very famous engineer and scientist in his own right. And my background was in trying to design better devices. I spent a lot of time in the medical industry, robotics industry. And I did not really have a strong data background.

 

I think the only thing that made me unique is I had done a very fulfilling and informative internship in undergrad at Florida Tech, the Florida Institute of Technology, where we’d done AI for about 10 weeks. We learned what neural networks were and support vector machines and other machine learning models, genetic algorithms. They taught us a lot. And that stuff was in my head when I went into the real world, when I left academia.

 

And really what led me into this field wasn’t so much that I saw the potential in it. It was that I was an engineer that kept running into problems that I thought we should know the answer to. One part of the factory would go down and I would say, well, look, when it went down before, how did it go down? And a lot of times, sometimes very knowledgeable people would know, but other times nobody would know. And I ended up in a role where I tried to use some data techniques early in my career to hunt down problems. And ended up building a reputation in 2015 as a guy that uses data science to solve problems.

 

And I ended up sort of embracing that. Now, of course, the field has blown up, but that wasn’t really my intention. I was just another engineer trying to solve problems and I just happened to be a guy that solved them in a very mathematical way. And it’s led me to here where now I was head of data science at a large company and now I have an AI startup. But if you’d asked me when I finished school what I was going to do, I would not have guessed that I would be the guy that solved essentially math problems.

 

Saket Saurabh
Yeah, no, I think that’s incredible. And that’s how most entrepreneurs kind of trace their journey. It’s a problem that you get into and you get passionate about solving. And as you said, in your case, the market timing happens at the right time to that problem. So that’s incredibly fortunate when those two things sort of come together, your passion and the market.

 

So I wanted to double click on something you mentioned in your introduction, which was swarm intelligence. Help us understand what is that and how is it different from anything else we have seen?

 

Fred Gertz
Yeah, I mean, swarm intelligence is a really cool field. I highly recommend everybody read about it. The Wikipedia article is a great place to start. A lot of books about it. And it’s a branch of AI from the 1990s that really comes out of the field of artificial life. What does it mean to be alive? What is a digital life being?

 

But the way most AI people would understand it is the majority of the artificial intelligence that you work with, whether it’s a machine learning model or an LLM, is sort of based on the idea of how do we solve problems mimicking the way humans solve them? Humans are very good problem solvers. The dream is that we’ll get AGI and we’ll have computers that can solve things lightning fast, just the way we do, and very reproducibly.

 

But in the 90s, when we had less computer power, a lot of people got interested in the field of, well, what about how other things solve problems? And one of the fields that came out of that was swarm intelligence. That’s really looking at insects, flocks of birds, and trying to answer the question, how does an ant colony build complex structures, find food, raise young, protect themselves when there’s no central intelligence there? There’s no ant that knows what to do. There’s no training data. There’s no ant that teaches all the other ants what to do. There’s no central coordinator. Each ant is not particularly intelligent and they’re able to go out and do really complicated things.

 

So over a period of 10 or 15 years, it’s still an active field of research, but there were some really well-established algorithms that tried to replicate collective intelligence. Whether in the case of boids, it’s the flocking of birds, which you see in video games all the time. In the cases of things like what we try and solve, it’s ant colony optimization where we use how ants try and find food to solve NP-hard problems related to transportation and logistics. Probably the most common is particle swarm optimization, which is based on kind of loosely defined physics and simulated annealing, but it’s looking at our natural environment and looking at how it optimizes the world around it and replicating that.
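To give a concrete feel for one of the algorithms named above, here is a minimal sketch of canonical particle swarm optimization in Python. This is the textbook method, not Collide’s implementation; the coefficient names and defaults (`w`, `c1`, `c2`) are the conventional ones, chosen for illustration.

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimize f over a box using canonical particle swarm optimization."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # each particle's best position so far
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity = inertia + pull toward own best + pull toward swarm best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Usage: minimize the sphere function; the swarm converges near the origin.
random.seed(0)
best, val = pso(lambda x: sum(v * v for v in x), dim=3)
```

Each particle balances its own memory against the swarm’s collective memory, which is the “loosely defined physics” the conversation refers to.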

 

And that’s really where swarm intelligence comes from. I would love to say that I’m brilliant and I figured this out myself, but honestly, in grad school, my favorite class was by Dr. Gerardo Beni, who’s actually the guy who came up with the term swarm intelligence if you read the Wikipedia article. I just thought he was a brilliant guy. I thought the class was amazing. I thought it was the reason you went to grad school, to take these more high-level, interesting topics courses. And so it was floating around my head when I started moving into the world of optimization and trying to find solutions. And it occurred to me, why don’t I try this stuff that I learned about in grad school? And it turns out it still produces very interesting solutions in that space.

 

Saket Saurabh
So help us maybe understand this with a concrete example, like what might be a potential problem, especially you mentioned manufacturing industry, where this sort of approach solves a specific problem.

 

Fred Gertz
Yeah, I mean, I think the classic example is something like routing and logistics. So if you’ve got a bunch of trucks and they all have to go to different places, they need to hit different areas to drop off, what is the ideal way to load the trucks and the routes they should take? That’s a classic traveling salesman problem.

 

And I think something that I learned early in my career that has played into this is there is no general solution, at least for NP-complete problems. That’s essentially what all these traveling salesman problems are, these nondeterministic polynomial time problems that geometrically grow to huge problem spaces and you can never find the right combinations. It’s something we run into in computer science all the time. But an early professor told me, he referred to it as a solved problem in a presentation once. And I asked, I did not get the memo that we had solved it. And he said, all right, let me be clear. Mathematically we haven’t, but from an engineering perspective, there are a lot of good approximations to this problem. And that always stuck with me.

 

What we find is you go into a company that’s doing a lot of logistics, routing all their trucks and managing their inventory. Classic NP-hard, NP-complete type problems. It’s a combination knapsack and traveling salesman. These swarm intelligence algorithms, they use the way ants try and find food. They drop little digital pheromones and they find the shortest route between things. We’re able to use that to, within usually 20 or 30 seconds, build out a nearly optimal route. And by not letting perfection be the enemy of good, we’re able to find really good solutions really quickly and solve what would be a problem that would be very difficult for a human.
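The pheromone mechanism described here can be sketched in a few dozen lines. The following is the textbook ant colony optimization algorithm for a traveling salesman instance, not Collide’s product; the parameters (`alpha` weighting pheromone, `beta` weighting distance, `rho` evaporation) are illustrative defaults.

```python
import math
import random

def aco_tsp(cities, n_ants=20, iters=100, alpha=1.0, beta=3.0, rho=0.5, q=1.0):
    """Approximate a shortest closed tour with ant colony optimization."""
    n = len(cities)
    dist = [[math.dist(a, b) for b in cities] for a in cities]
    tau = [[1.0] * n for _ in range(n)]          # pheromone matrix
    best_tour, best_len = None, float("inf")

    for _ in range(iters):
        tours = []
        for _ in range(n_ants):
            start = random.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                cur = tour[-1]
                # pick the next city with probability ∝ pheromone^alpha * (1/dist)^beta
                weights = [(j, (tau[cur][j] ** alpha) * ((1.0 / dist[cur][j]) ** beta))
                           for j in unvisited]
                total = sum(w for _, w in weights)
                r, acc = random.uniform(0, total), 0.0
                for j, w in weights:
                    acc += w
                    if acc >= r:
                        tour.append(j)
                        unvisited.discard(j)
                        break
            length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # evaporate old pheromone, then deposit more on shorter tours
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for tour, length in tours:
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                tau[a][b] += q / length
                tau[b][a] += q / length
    return best_tour, best_len

# Usage: four cities on a unit square; the optimal tour is the perimeter.
random.seed(1)
tour, length = aco_tsp([(0, 0), (0, 1), (1, 1), (1, 0)])
```

The evaporation step is what keeps the colony from locking onto an early, mediocre route, which is why the method degrades gracefully instead of demanding a provably optimal answer.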

 

We sort of focus, and I think maybe we’re unique in this compared to the majority of AI, in that I think a lot of AI companies are focused on trying to solve problems that people are good at. People are not good at large-scale NP-complete problems. And we go after those because we think, great, if we can add that value to a company, that’s a value they didn’t have before.

 

In our case, companies don’t have a solution for this, or they don’t have a very good one. And so by using these swarm approaches, we can solve routing and logistics problems. Those are easy to see, but those same problems map into things like scheduling. How do you assign workers to tasks and when to assign them? We can solve production planning, convex optimization problems, trying to figure out the right parameters on machines and get the best balance of reliability with throughput. All of those map into this very cool space that, especially in the industrial area, are very interesting problems that a lot of companies just haven’t had the time to take a very mathematical approach to.

 

Saket Saurabh
Yeah, I think this is very fascinating. And I really like the point that with LLMs and generative AI, we are basically solving problems that humans have already solved. It’s becoming more like, can it be done automatically, maybe with less resources or more efficiently. But we are not talking about the problems that aren’t solved yet.

 

And I appreciate you bringing that up, like hey, there are a whole class of problems that we aren’t even solving, and why not apply our technology towards that?

 

So one of the things that I saw on your product was you talk about Six Sigma without struggle. Maybe help break this down a little bit for us. What is Six Sigma for those who are not familiar, and what is the traditional struggle with Six Sigma that you’re solving?

 

Fred Gertz
Yeah, and I would say that’s some verbiage that we’re probably going to move away from in the sense that it makes some sense for what we do and less sense for others. And I would say largely in the community of people that improve manufacturing, I have something of a reputation as the anti-Six Sigma guy. I’m not really. I think there’s a lot of really good stuff with Six Sigma, but Six Sigma is essentially a manufacturing and management style that at its core gives you a tool chest of ways to solve problems in manufacturing.

 

And what it’s built around is this idea of, hey, if we can limit the amount of variation in our product and in our production, then we can hopefully really solve most of the problems we have. Now, I think a lot of people focus on that, but if you look at the original Toyota production system where I think this gets the most amount of original work coming out of Japan, Toyota was able to build fantastic products by taking this idea. But when you really look at that system, what it was about was understanding both from a practical and a mathematical perspective how parts of your process were related to each other and then trying to get them to act the same way every time.

 

The math for it, they’ve done a fantastic job of making it very approachable. It’s mostly around standard deviations. If we can calculate a standard deviation and divide it a little bit, you can figure out most of the metrics you need for Six Sigma. To their credit, they realized that if you could fit three standard deviations inside your specification limits, and of course if you have three standard deviations on either side, you have six standard deviations or Six Sigma, then you could cut your failures down to something like 3.4 per million. And that’s huge. I mean, the average production place, if they only had three or four failures for every million products they made, they’d be ecstatic.
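The 3.4-per-million figure has a specific origin worth spelling out: it assumes spec limits six standard deviations from the target, plus the conventional Six Sigma allowance of a 1.5-sigma long-term drift in the process mean, leaving the nearer limit effectively 4.5 sigma away. A few lines of Python reproduce the number from the normal tail:

```python
import math

def tail_ppm(z):
    """Defects per million beyond z standard deviations (one-sided normal tail)."""
    return 0.5 * math.erfc(z / math.sqrt(2)) * 1_000_000

# Spec limits at +/-6 sigma, with the conventional 1.5 sigma long-term mean
# shift: the nearer limit sits 4.5 sigma away, giving the famous 3.4 ppm.
shifted = tail_ppm(6.0 - 1.5)   # roughly 3.4 defects per million
centered = 2 * tail_ppm(6.0)    # a perfectly centered process does far better
```

A perfectly centered six-sigma process would fail around two parts per billion; the quoted 3.4 ppm is the more conservative, drift-adjusted convention.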

 

And they give a lot of really good tools on the management side, like how do you figure out problems, how do you control data, how do you do value stream mapping and figure out how things connect to each other. I’m something of a contrarian, and when I came into manufacturing, especially in the biotech space, this was the gold standard, and I asked myself the question, what if we didn’t do that? What would be point B?

 

And there was a guy in Japan named Taguchi, and Taguchi said, and I can never find this quote, but I swear it’s a quote from him, your customer does not deserve a low variation product. Your customer deserves the best product you can make every time. And that really rang true in me, especially in pharmaceuticals and biotech. Your customer doesn’t care if this is a low variation drug. They want good drug every time. Every drug should be perfect.

 

And his methodology for this was, let’s figure out our goals as a company, maybe it’s profit, maybe it’s quality, maybe it’s some combination, and then we will figure out how every part of our organization mathematically links to that. When you dig into that philosophy, there’s a whole branch of mathematics called operations research, sometimes referred to as management science. It’s essentially an operations-focused subset of advanced optimization theory and algorithmic design. His approach was, let’s take this advanced mathematics, let’s get these goals, and then let’s make sure we understand how every part of our decision-making and process design affects it.

 

You can answer questions like, if we change this parameter, we lower this pressure by 5%, how much more money will we make? What will that do to our quality? How will it affect our throughput? How will it affect our downstream? And to me, that seemed like the dream. Now, the problem is why doesn’t everybody do this? Why aren’t we using more operations research approaches? Why aren’t we using more Taguchi-style methods? The math is a nightmare. It’s some of the hardest math I’ve ever seen in my life. Even I look at an operations research textbook and I’m like, there’s a lot.

 

And so I came into it knowing, this is what we want, this is the gold standard. We use Six Sigma because the math is very approachable and they’ve done a great job in marketing and getting great results. And I’m very adept at Six Sigma approaches. I spend a lot of time dealing with Six Sigma shops, but knowing that we’re leaving meat on the bone, that there’s profit out there, that there’s better product that can be made more consistently if we just use this other stuff.

 

And so when I started Collide, that was the big question I was trying to answer, how can we make operations research more approachable, more easily applied? Specifically to your question, what is Six Sigma? It’s a methodology using statistics to reduce variation. How does that differ from what we’re trying to do? We try to go even further beyond that, and we try to make it as easy as Six Sigma has been to deploy and build a culture around, where you really understand how every part of your company and organization is affecting every other part.

 

Saket Saurabh
Yeah, you know, very right. You want that sort of reliability and repeatability of the process.

 

One thing I was curious about also was that you work with companies in manufacturing and traditional industries. What is the data environment like? I mean, are you working with live operational data or are things still on spreadsheets? Tell us a little bit about how you deploy in that sort of a space.

 

Fred Gertz
It’s a great question. It’s all over the place. Some of these companies, look, the average manufacturing organization is doing eight or nine figures in revenue. They have money. More importantly, they care about how their operation runs. I mean, we’ve worked with other companies too, large marketing organizations and sales groups. But you go into operations and this is a group of very smart people that have maybe for decades really dialed in and know what they’re doing.

 

And I think maybe one thing we do better than almost every AI company is we realize we are not the magic. Building something and manufacturing it, that’s a heroic process. Every company I go to, whether they make tortilla chips or rocket ships, they have done a gargantuan heroic task building a great organization. We think we can add a little bit to it, but they do amazing work and they’re all really smart people.

 

Some of these places are run on paper. And some of these places have advanced ERP systems where they track everything with IoT sensors. And I would say it’s all over the place. I think what makes us special is one of the cool things about our approach of using swarm intelligence is it doesn’t necessarily assume a high data environment. An ant doesn’t have a lot of data. It doesn’t assume that we have a lot of training. If I were to use a deep learning approach for a lot of these solutions, I would have to have training data. I would have to know the right answer over and over again to eventually train the model.

 

These ant colony optimization approaches, for example, combine both a search function, the way the ant finds food, and an optimization function where it then finds the shortest route. And if you do that intelligently, understanding your space, you can go into less data mature, low data environments where a lot of the people we’re talking to now are on spreadsheets. And so we’ve designed systems where we take the data you have and get you what we can out of that.

 

I think that it opens up a lot of doors because one of the come to Jesus moments I had as a data scientist was I ran a very large, very well-thought-of department at my last company. But I realized we didn’t get as many good results as I would like and that we weren’t always a joy to work with. And the reason for that was when you came to me and said, hey, we have this problem, my first question was going to be how much data do you have? Can you give me more data? And then I’m just constantly bothering you for data. Can we run engineering studies so we can really capture this?

 

And I realized, that’s half the problem that these organizations already have is they don’t have data. So I said, if we could design solutions, and they don’t have to be perfect, don’t let perfect be the enemy of good, if we could design solutions where we could use less data or maybe even not really any data and get an initial result and show some traction, it’s a lot easier conversation. You’re like, hey, we brought in this company or this solution and it’s already saving us money. Well, now why don’t I invest? It’s a lot easier to say, why don’t we buy those IoT sensors and glue them onto our machines, or let’s put up a webcam that can look at the throughput as it’s going out and measure it. It’s a lot easier to make that investment once you see a bit of a result.

 

Whereas the average project is partly an R&D project and then partly a production project. And it always kind of comes down to, you’re going to give me a lot of data and probably a lot of money and then hope that I come back to you with a solution that reaches your spec. And that process might take six months, might take a year in some cases. So it really builds up the resistance in a lot of organizations to getting these solutions, even if they know that’s the right answer.

 

Our pitch is, why don’t we give you a solution that isn’t perfect? It won’t cost a lot. Start running it. If you love it, then talk about doing more. So, again, to your original question, it’s all over the map. Everything from paper-based records to spreadsheets, all the way to real-time maintenance records and IoT tracking with vibration sensors. And I think it’s one of the things I love about the field is we’re seeing a whole spectrum of what it looks like in the real world to collect data. In many cases, these things are mixed. Your maintenance is on paper but you have IoT data. How do we bridge that gap? And I think we have a huge impact by focusing on, let’s help you solve that problem, let’s give you as much ROI as we can with as little data as possible, and then you’re going to be inspired to want to go down this path more.

 

Saket Saurabh
I think that’s fabulous. The approach is not like, hey, you don’t have data so we can’t serve you because we can’t train on your data. But instead, you’re able to deliver high quality solutions with the current state of things.

 

And comparing again with generative AI, that technology is well proven when it comes to generative actions. Whether you’re generating code or writing a piece of marketing content or blogs. But what the industry is right now trying to do is to bring it to actual operational use cases. Can it actually automate a function or part of a function? And the challenge there becomes how reliable can it be, how do you put guardrails on that, how do you evaluate?

 

So in your world, as you are working with companies and taking your solution from an idea to a prototype to production, maybe tell us how you approach that and to what extent some of these factors come into play? Or do you have a significant advantage maybe compared to the more generative approaches?

 

Fred Gertz
Yeah. I mean, I think our real advantage is that right now everybody at Collide has an industrial background. In some way, whether that was in data science or like me, maybe a more traditional engineering design role, at least initially. And so we’ve validated things before. When I used to sell robots to pharmaceutical companies, we’d come in and we had to do IQ OQ PQ, installation qualification, operational qualification, and performance qualification. And I’ve written those documents.

 

And so now the first question I get, especially from a regulated environment, whether that’s FDA or the Department of Defense, is how are you going to validate this? How do we know this is a good solution? Now, our solutions essentially propose an optimal solution and either it’s better or not. It rarely touches something that you would need to validate. But even when it does, we validate it the way we would a robot that you’re going to install. And with our experience, you can do that with generative models and traditional AI. We just have a lot of experience setting those up and saying, all right, this area under the curve is going to help us fit how much risk we want to put into the model, how much risk the humans are going to take, and how much risk the AI takes. And in many cases, we’re able to very quickly come up with a validation plan that somebody like the FDA would accept.

 

I think that’s a huge challenge in this space. Generative models have the misfortune of being difficult to get metrics around. If you’re using a generative model to produce documentation for your process, that’s a great use and can add a lot of value. We’ve done some work in that space. It’s something that could save people a lot of time. But what does it mean to be a good document? What is the number for that? Sometimes we do something with real-world human judges where we have a bunch of people who do the documents, they rate them, we use that score, we try and improve it. Or we use LLM judges.

 

But I think the switch to generative AI, at the end of the day, it’s not in some ways fundamentally different than the world we had when we were all doing machine learning and doing support vector machines and small neural networks. But what’s changed is all of the stuff we used to do in data science when I started out was so numerically driven. It had a metric. We put it under ROC curves and we would look at them and validate that way.

 

But I always kind of say, especially with LLMs and some vision models where you’re generating pictures, it’s sort of like we’ve gone into a world of data science without numbers. You’re producing words and then I see if the words make sense to me and if they do, then I’m happy with the model. Now, that’s a little bit flippant. I mean, they use masking techniques to see statistically how close the vectors being generated are to the actual thing. But I think there’s some truth to that, especially for smaller shops that are fine-tuning LLMs.

 

They end up in a situation where they don’t have numbers, and I’m always encouraging companies, whether they’re working with us or we’re giving some help out to our friends in the AI space, if you can come up with a metric of success, then that’s the way to go. Because then you’re going to know if you’re improving and you’re going to have agreed upon it. And a lot of companies are telling me, we don’t know what the metric of success is. My response to that is always the same. Then that should be the problem you solve. Figure out how to measure what good is.

 

I mean, I think these generative models, especially in places where people have come up with metrics for them, and there are some really good ones out there, it’s beautiful. They can provide a lot of value to a lot of organizations. And we’re seeing that world revolutionize how we thought about AI in the last four years. I’ve been in the space since about 2007. We’ve never seen anything obviously close to this level of interest. And I think now these questions are being answered both academically and in the real world. How do we measure what good is in these models? Some really cool solutions, whether LLM judging or inventing metrics like BLEU, ROUGE, or METEOR, are all headed in the right direction. I really look forward to a world where we start putting some more numbers back into our generative models.
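For a concrete sense of what one of those metrics computes, here is a stripped-down ROUGE-1 F1 score: unigram overlap between a generated text and a human reference. Real ROUGE implementations add stemming, stopword handling, and several n-gram variants; this sketch keeps only the core idea, and the maintenance-log example sentences are made up for illustration.

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: harmonic mean of unigram precision and recall."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())   # unigrams shared, counting multiplicity
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Usage: score a generated maintenance note against a human-written one.
score = rouge1_f1("the pump was replaced on line two",
                  "pump replaced on line two after failure")
```

Even a crude number like this gives a team something to agree on and optimize, which is exactly the “figure out how to measure what good is” point above.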

 

Saket Saurabh
Yeah, I think in many generative AI agentic applications, the measure of good is still a human who is determining that the output coming out is correct. Let’s say code is being generated, well, is that good? Now there are cases where generative AI is being applied to downstream processes as well, but ultimately what is production grade and what can go in has to be judged by humans right now.

 

So I agree with you that judging the outcome is not an easy thing, but most use cases are human assistance use cases in many ways.

 

One question I had was, since you mentioned robots, physical AI is becoming a topic of big discussions, and it looks like that is one world where again massive automation could become possible. What’s your take on that and what approach might work there?

 

Fred Gertz
I mean, I would say first, I for one welcome our new robot overlords. I know that gets said a lot, but that’s where we were headed. Before AI, people were putting in lights-out manufacturing facilities. People were looking at things with robots. This is just accelerated. Now there’s a lot of work that has to be done. Some of it’s really cool and exciting. How do we make these things safe?

 

When I worked with robotic arms in the past, I never really appreciated how heavy they were and how fast they could move and how much damage they could cause. It wasn’t until I started installing them in factories and we had to put up safety grids and laser arrays and plexiglass that I realized somebody could get really hurt with one of these things. And I think those problems are being addressed. People are coming up with lighter robots, ones that are more collaborative, easier to train. Some guys are doing really interesting things on safety where the robot can kind of extrapolate into situations they weren’t explicitly programmed to avoid hurting people.

 

I’m seeing some great stuff out here where I am right now in South Carolina, where BMW is using robots that just wind down if people aren’t wearing the right safety gear. There are some really interesting aspects. Jobs that people weren’t great at, or that you wouldn’t want to put a person in because they could get injured, are becoming accessible to robots. It adds a ton of reproducibility to your process. If you’ve got one robotic line and you need to make it two, you order the robots and build it out, you copy the code over, and it’s good to go. Whereas to start a new human line, you’ve got to get a bunch of humans together, train them, test whether that’s working, and you get the throughput you get, which is highly variable depending on how sleepy people are and how well trained they are.

 

So I do think that this area of physical AI is going to be a huge hit, but I almost think in a lot of ways it was already a huge hit. People are throwing more money into it. I mean, Fanuc and KUKA weren’t going broke even before ChatGPT. But I think now a lot of people are getting into the AI field and much like myself, they’re saying, okay, everybody works on LLMs, I want to work on something different. And some of those guys are going over to robotics in the same way that we’ve gone over to swarm intelligence.

 

And in fact, the swarm intelligence community was highly interested in robotics originally. There are a ton of great swarm robotics articles from the 90s where people tried to build ant-like robots that would build bridges automatically by interlinking. And I think the realm of physical AI is in some ways still in its infancy. That ability to see an environment, get insight out of it, and then extrapolate to good decision making, I think a lot of that is going to cross over to the kind of work we do on decision making and optimization. And some of it’s going to extrapolate very powerfully into the generative field, which still really hasn’t made it into the physical realm yet. And I think that’s going to be very powerful, exciting stuff in the next decade.

 

Saket Saurabh
Do you think that these LLM-based agents and swarm intelligence kind of end up working together in some cases or sort of going side by side?

 

Fred Gertz
I certainly hope so. It’s an open field of research for us. I mean, we think we solve the kind of optimization problems that LLMs do not do well at. And we’ve tried. We’ve had LLMs attempt schedules and maintenance routes, and they do terribly. That is, outside of them essentially replicating our code, running it, and then using the results, which they do some interesting things with.

 

But I think we’re now approaching a world where you’re really seeing that the LLMs are struggling. And I would say the number one place you see that is in agent coordination. If you have 500 agents in your company all capable of doing different things, you end up back in our space. How do I assign tasks? And each one of these agents only does one specific thing. These things start to look like anthills very quickly.
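The task-assignment problem Fred describes here is exactly the shape of problem ant colony optimization handles well. As a rough illustration of the idea, not Collide’s actual algorithm, here is a minimal sketch: simulated "ants" repeatedly build task-to-agent assignments, biased by a pheromone matrix that gets reinforced along the cheapest assignment found so far. The function name, parameters, and cost matrix are all illustrative.

```python
import random

def aco_assign(cost, n_ants=20, n_iters=50, evap=0.5, alpha=1.0, beta=2.0, seed=0):
    """Assign each task to exactly one agent via a simple ant colony heuristic.

    cost[t][a] is the cost of agent a performing task t (square matrix);
    lower total cost is better. Returns (assignment, total_cost), where
    assignment[t] is the agent chosen for task t.
    """
    rng = random.Random(seed)
    n = len(cost)
    tau = [[1.0] * n for _ in range(n)]  # pheromone on (task, agent) pairs
    best, best_cost = None, float("inf")

    for _ in range(n_iters):
        for _ in range(n_ants):
            free = set(range(n))  # agents not yet assigned this pass
            assignment, total = [], 0.0
            for t in range(n):
                agents = sorted(free)
                # desirability = pheromone^alpha * (1/cost)^beta
                weights = [
                    tau[t][a] ** alpha * (1.0 / (cost[t][a] + 1e-9)) ** beta
                    for a in agents
                ]
                a = rng.choices(agents, weights=weights)[0]
                free.remove(a)
                assignment.append(a)
                total += cost[t][a]
            if total < best_cost:
                best, best_cost = assignment, total
        # evaporate everywhere, then reinforce the best assignment found
        for t in range(n):
            for a in range(n):
                tau[t][a] *= (1 - evap)
        for t, a in enumerate(best):
            tau[t][a] += 1.0 / best_cost
    return best, best_cost

# Toy example: 3 tasks, 3 agents, each agent clearly best at one task.
costs = [[1, 10, 10],
         [10, 1, 10],
         [10, 10, 1]]
assignment, total = aco_assign(costs)
```

Because reinforcement only ever strengthens (task, agent) pairs from good solutions, the swarm converges on low-cost assignments without any central planner dictating who does what, which is the anthill analogy in a nutshell.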

 

And I think there’s going to be, and there already is, some great research in this. That question of how does agent coordination work, at this point in time it’s either you hard program it or you try and use a reasoning model, which isn’t good at this sort of heterogeneous swarm problem. I think it’s going to be a very interesting thing over the next two or three years.

 

And I think it’s going to be a really interesting world that brings in the deep learning guys working on reasoning models, competing with guys like me in swarm intelligence, and the reinforcement learning guys that have been around figuring out how to solve problems for a long time, and probably some classical and quantum optimization guys. And I think we’re all about to have a really exciting field over the next three or four years of borrowing from each other to find solutions that the end user is just going to look at like, oh, my LLM is better. They don’t realize 50 different agents touched that problem in the five seconds before they read the paragraph it made.

 

And I think it’s going to be some really cool stuff between us and everyone else once we all start kind of mixing together.

 

Saket Saurabh
Yeah, I mean, very well said. I think multi-agent coordination and orchestration is complex. And you’re very right, try to put 500 of those together and things don’t work out well. So definitely an area of research and quite exciting to see how you’re thinking the next few years will see significant advancements in that area.

 

And I would say what a great note to bring this conversation to a close today. Super excited about what’s coming. Any predictions that you want to throw in for 2026 as we wrap up?

 

Fred Gertz
You know, I am terrible at predictions. I don’t even play the stock market anymore. When I sports bet, I do that by the book. I’ve got my swarm algorithm helping me out.

 

But I will say, I think in the next year, obviously a lot of interesting stuff is happening with agents. I think we’re going to see a slowdown on the LLM side of things. I think we’re already seeing it. There’s no additional training data out there. We’ve trained these things on the internet. We’re going to see this sort of leveling off. And I think we’re going to start seeing areas that we knew existed but haven’t dedicated as much time to, like reinforcement learning. Optimization stuff, obviously we’re betting big on that because that’s who we are.

 

But I do think the next year or two is going to be about how do we get them more efficient, how do we get them training more efficiently, how do we solve areas like physical AI? I think we’re going to see a nice broadening of perspectives, and I’m really excited about that. Not to say that I have anything against my fellow AI researchers that focus on LLMs, but I think we’re all a little tired of it now.

 

Saket Saurabh
That’s a fair point, and I think that’s kind of the history of technology, where hype and reality start to come together. And we realize that the hype may not be as close as we think it is, at least not in the next year. But yeah, a lot of incredible innovation is happening, and I look forward to some of the things that you guys are bringing into the picture.

 

Yeah, no, thank you so much for joining me today. It’s been a pleasure talking to you, and I look forward to exciting stuff coming from you.

 

Fred Gertz
We hope to become a big name in a big space in AI, and we appreciate the time you’ve given us today to tell a little bit of our story. Saket, it’s been an absolute pleasure. Thank you for having me.

 

Saket Saurabh
Thank you.
