Moving to the Edge: Pushing Compute from the Cloud to the Fringe

By: Tom McNeill


More in this series:
Machine Learning and Analytics in the High Performance Cloud

In the cloud context, when we talk about “the edge” we mean sensors or other low-power devices.  These are devices that take in some stimulus and then either process it with their onboard compute or contact the cloud for processing.  In most AI/ML/DL applications, once the model has been trained, tested, and validated using the big horsepower of the cloud, it (the model) then needs to be converted into a form that can be placed upon an edge device and consumed there.

[Figure: Viable Systems Model]

Often in the case of edge devices (mobile) and sensors, you are dealing with a very different type of hardware than you were working with in the cloud.  You are frequently looking at something that is battery powered or constrained by size.

Let’s take the use case of a camera on a freeway that needs to send a signal any time it identifies a red car.  This can be done in one of two ways.  The camera can either stream data to the cloud for inference and be, essentially, a dumb camera, or it can carry an FPGA or neuromorphic chip, accept the trained model for identifying a red car, and, when it sees one, send the appropriate response to the cloud.  It is this second architecture we are beginning to see more frequently, as shown below.
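The second architecture can be sketched in a few lines: inference runs on the device, and only a small event goes upstream when something is detected. Every function name here is an illustrative stub, not a real camera SDK, and the "every 5th frame is red" sensor is purely for demonstration.

```python
import json
import time

def capture_frame(i):
    # Stub sensor: pretend every 5th frame contains a red car.
    return {"frame_id": i, "color": "red" if i % 5 == 0 else "gray"}

def detect_red_car(frame):
    # Stand-in for the on-device model (FPGA / neuromorphic inference).
    return frame["color"] == "red"

def send_event_to_cloud(event):
    # In practice this would be an MQTT/HTTPS call; here we just serialize.
    return json.dumps(event)

def edge_loop(max_frames):
    """Process frames locally; contact the cloud only on a detection."""
    events_sent = 0
    for i in range(max_frames):
        frame = capture_frame(i)
        if detect_red_car(frame):
            send_event_to_cloud({"event": "red_car", "ts": time.time()})
            events_sent += 1
    return events_sent
```

The point of the loop is the asymmetry: raw video never leaves the device; only small, structured events do.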


What is particularly interesting is that, depending upon the technology used, you now have to either convert the model to RTL (Register Transfer Level) for an FPGA or reduce its precision to run on a neuromorphic chip.  In both cases, the hardware the model will run on at the edge places a physical constraint upon it.  This is one very practical reason why multiple small, compact, specific models are preferred to large, expansive ones.  A second reason is that updating the edge requires pushing the new or updated models out to devices, and in many cases this is not just one device but an array of devices that need to be updated and validated any time the model is changed.
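To make the precision-reduction idea concrete, here is a minimal sketch of affine int8 quantization, the kind of scale-and-zero-point scheme commonly applied before targeting constrained edge hardware. This is a generic illustration, not the specific conversion flow of any particular chip or toolchain.

```python
import numpy as np

def quantize_int8(w):
    """Map float32 weights onto int8 using an affine (scale + zero-point) scheme."""
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / 255.0 or 1.0  # avoid div-by-zero for constant weights
    zero_point = int(round(-128 - w_min / scale))
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Approximate reconstruction of the original float weights."""
    return (q.astype(np.float32) - zero_point) * scale
```

Each weight now occupies one byte instead of four, at the cost of a small reconstruction error bounded by the quantization step, which is exactly the kind of trade the edge hardware forces.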

Additionally, the edge now operates as a repository for new data to train the model.  

Imagine our model is to look for red cars and we’ve magically been transported back to 1957: the cars we could train our model with would be the vehicles of that era.  Would this model, constructed back during the Eisenhower administration, be able to identify a modern hypercar? Probably not.  So for a model to remain useful, it needs to evolve with the current language, in this case, design language.  We see this in a variety of other fields where the language and the field are evolving quickly.  So, there needs to be a mechanism to consume new material and add it to the model over time.
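One simple shape for that mechanism is a collector on the device that buffers fresh samples and ships them back for retraining in batches. The class below is a sketch only; the names, the batch threshold, and the "ship" step (which in practice would upload to cloud storage or a training pipeline) are all illustrative assumptions.

```python
class EdgeDataCollector:
    """Buffer new samples on the edge device and ship them in batches
    for cloud-side retraining. Illustrative sketch, not a real SDK."""

    def __init__(self, batch_size=32):
        self.batch_size = batch_size
        self.buffer = []
        self.shipped_batches = []  # stand-in for "uploaded to the cloud"

    def observe(self, sample):
        # Called once per new observation (e.g., a frame the model was unsure about).
        self.buffer.append(sample)
        if len(self.buffer) >= self.batch_size:
            self.ship()

    def ship(self):
        # In practice: upload the batch to the training data repository.
        self.shipped_batches.append(list(self.buffer))
        self.buffer.clear()
```

The closed loop then looks like: edge collects, cloud retrains, and the updated model is pushed back out to the device array described above.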

When we begin allowing the model to alter itself, we enter a field called cybernetics and the Viable Systems Model, the subject of our next section.

