The Fundamental Problem of Scaling Computer Vision
"To a man with a hammer, every problem looks like a nail."
The reason Computer Vision applications, and AI in general, are so hard to scale is simple: the skills used to build the core of the solution are not the same as those required to maintain, release, deploy, and scale it. In fact, they are very different, even at a very basic level.
Let's break a simple example down into a few basic components to illustrate the problem. Assume we want to identify moving forklifts in a warehouse, using already installed security cameras, with the purpose of deriving their idle vs. active time. The business case is a potential 10-20% reduction in lease costs associated with the equipment.
The elements we have to consider in such a solution are (vastly simplified):
1. Creating an AI module to identify forklifts
2. Creating a system for tracking forklifts and outputting statistics data
3. Managing up to 30 video streams from existing security cameras across the warehouse
4. Utilising GPUs
5. Making the solution maintainable using a containerization solution like Docker
6. Ensuring that the system works across 50 different warehouses with different forklifts and lighting conditions
7. Validating that every single AI and code change still works across the 50 deployed warehouses prior to deployment
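To make the core statistics task concrete, here is a minimal sketch of step 2: deriving idle vs. active time from a tracked forklift's positions. The `Detection` format, the 5-pixel movement threshold, and all names are illustrative assumptions, not Sentispec's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    timestamp: float  # seconds since start of the video stream
    x: float          # centre of the forklift bounding box, in pixels
    y: float

def idle_active_seconds(track: list, move_threshold: float = 5.0):
    """Return (idle_seconds, active_seconds) for one tracked forklift.

    An interval between two consecutive detections counts as active
    if the forklift moved more than move_threshold pixels, else idle.
    """
    idle = active = 0.0
    for prev, curr in zip(track, track[1:]):
        dt = curr.timestamp - prev.timestamp
        moved = ((curr.x - prev.x) ** 2 + (curr.y - prev.y) ** 2) ** 0.5
        if moved > move_threshold:
            active += dt
        else:
            idle += dt
    return idle, active

# Example: a forklift that drives for 2 seconds, then stands still for 2 seconds.
track = [
    Detection(0.0, 0.0, 0.0),
    Detection(1.0, 50.0, 0.0),   # moved 50 px -> active
    Detection(2.0, 100.0, 0.0),  # moved 50 px -> active
    Detection(3.0, 101.0, 0.0),  # moved 1 px  -> idle
    Detection(4.0, 101.0, 0.0),  # no movement -> idle
]
idle, active = idle_active_seconds(track)
print(idle, active)  # -> 2.0 2.0
```

The point of the sketch is that this logic is the easy part; turning it into something that runs reliably against 30 live camera streams in 50 warehouses is where the items below come in.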
And here is the problem. Typically, the people doing 1 will use Python, PyTorch, TensorFlow, or similar to create neural networks. However, 2, 3, 4, and 5 require different enterprise tools and skillsets: typically .NET, unit testing, Docker, and enterprise architecture. Items 6 and 7 require DevOps people with a good mix of project management, release management, test architecture, and so on.
The vast majority of implementation projects we see gradually realise this, and thus face rapidly escalating development and maintenance costs, reducing the viability of the business case.
This is why we built the Sentispec Core AI platform.
Simply put, the platform aims to solve the scalability problem in its entirety, so that as a customer you can focus on building the value-adding new AI models on top, and the rest is taken care of.
The platform provides multicamera support, an easy and fast way of building new AI pipelines, GPU support, a comprehensive continuous-validation framework, a statistics portal, and much more.