Q&A: Why is Intel DevCloud bringing edge computing back to the cloud?
Edge computing is nothing new. But building applications and solutions at the edge that leverage the cloud for analytics, while using the network as efficiently as possible, can be challenging.
But developing a solution that works isn’t the only challenge. How can developers actually guarantee smooth post-deployment operation and maintenance? Deploying a cloud-native application at the edge can open a Pandora’s box of unknown interoperability, scalability, and maintenance issues.
“The biggest problem is that developers still don’t know, at the edge, how to bring a legacy app and make it cloud-native,” said Ajay Mungara (pictured), senior director of advanced software and AI, development solutions and engineering at Intel. “So they wrap it all up in a Docker container and they say, ‘OK, now I’m containerized.’ So we [Intel DevCloud] tell them how to do it right. So we train those developers. We give them the opportunity to experiment with all these use cases so that they get closer and closer to what customer solutions should be.”
Mungara spoke with theCUBE industry analysts Dave Vellante and Paul Gillin during the recent Red Hat Summit event, in an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed DevCloud, edge computing, use cases and solutions. [The following content has been condensed for clarity.] (* Disclosure below.)
Vellante: DevCloud, what is it?
Mungara: Often people think of edge solutions as just computers at the edge, but you also need a cloud component and a network component. And the edge is complicated because of the variety of edge devices you need. When you create a solution, you need to consider where the computation should run: How much of the compute will run in the cloud? How much will be pushed onto the network, and how much needs to run at the edge? What often happens is that developers don’t have an environment where all three come together.
So what we’ve done is take all of these edge devices that could theoretically be deployed at the edge and put them in a cloud environment. All these devices are at your disposal. You can put it all together, and we give you a place where you can create, test, and run benchmarks. So before you actually go into the field to deploy, you know how the solution performs and what kind of sizing you need.
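The create-test-benchmark workflow Mungara describes boils down to timing the same workload across candidate devices and comparing latencies. A minimal sketch of such a timing harness is below; the `run_inference` stand-in and the dummy frame data are illustrative assumptions, not part of any DevCloud API:

```python
import time

def run_inference(frame):
    # Hypothetical stand-in for a model's per-frame inference work.
    return sum(frame) / len(frame)

def benchmark(workload, frames, runs=100):
    """Time a workload over repeated runs; return average per-frame latency in ms."""
    start = time.perf_counter()
    for _ in range(runs):
        for frame in frames:
            workload(frame)
    elapsed = time.perf_counter() - start
    return elapsed / (runs * len(frames)) * 1000.0

frames = [[0.1, 0.2, 0.3]] * 10        # dummy "camera" data
avg_ms = benchmark(run_inference, frames)
print(f"avg latency: {avg_ms:.4f} ms")  # compare across devices to size the edge hardware
```

Running the same harness on each candidate device gives directly comparable numbers for the sizing decision.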
Vellante: Take this example of AI inference at the edge. I have an edge device, I’ve developed an app, and I want to do real-time AI inferencing. There’s some sort of data stream coming in. I want to keep that data, send it back to the cloud, and be able to develop, test, and make comparisons.
Mungara: What we have is a product, Intel OpenVINO, which is an open-source toolkit that performs all the optimizations you need for edge inference. So you train the model somewhere in the cloud, you annotate the various video streams, etc. But you don’t want to send all your video streams to the cloud; it’s too expensive, because bandwidth is expensive. So you want to run the inference at the edge. And to run inference at the edge, you need an environment: What kind of edge device do you really need? What type of compute do you need? How many cameras are you processing?
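The bandwidth argument above can be made concrete with back-of-the-envelope arithmetic. The bitrates below are illustrative assumptions, not figures from the interview: a typical 1080p camera stream at roughly 4 Mbps versus forwarding only inference results (about 1 KB of metadata per second per camera):

```python
# Illustrative assumptions: 16 cameras, ~4 Mbps per video stream,
# ~1 KB/s (8 kbps) of inference metadata per camera.
cameras = 16
stream_mbps = 4.0        # per-camera video bitrate (assumed)
metadata_kbps = 8.0      # per-camera results-only uplink (assumed)

raw_uplink_mbps = cameras * stream_mbps
edge_uplink_mbps = cameras * metadata_kbps / 1000.0

print(f"streaming all video to the cloud: {raw_uplink_mbps:.1f} Mbps")
print(f"inference at the edge, results only: {edge_uplink_mbps:.3f} Mbps")
print(f"bandwidth reduction: {raw_uplink_mbps / edge_uplink_mbps:.0f}x")
```

Under these assumed numbers, running inference at the edge cuts the uplink from 64 Mbps to about 0.13 Mbps, a roughly 500x reduction, which is the economic case Mungara is making.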
And the biggest challenge at the edge isn’t developing the solution; it’s when you move into actual deployment and post-deployment monitoring and maintenance. Making sure you can manage it is very complicated. What we’ve seen recently is that over 50% of developers are building some kind of cloud-native application. So we think that if you bring this cloud-native development model to the edge, it addresses the scaling problem, the maintenance problem, and the question of how you actually deploy.
Vellante: What does the edge look like? What is this architecture?
Mungara: I’m not talking about the far edge, where you have tiny microcontrollers and similar devices. I’m talking about the devices that connect to those remote devices, collect the data, and perform the analytics, the computation, etc. The remote devices can be a camera, a temperature sensor, a scale; it could be anything, right? That’s the far edge. And then instead of pushing all the data to the cloud for you to do the analytics, you have some set of edge devices that collects all of this data and makes decisions close to the data; you do the analysis there.
So you have a bunch of devices sitting there, and these devices can all be managed and grouped into an environment. The question is: how do you deploy applications to this edge? How do you collect all the data coming through the cameras and other sensors, and process it close to where the data is generated so you can make immediate decisions? So the architecture looks like this: you have a cloud, which manages some of these edge devices, some of the application management, some type of control. You have a network, because you have to connect to it. Then you have the whole spectrum of edge, from a hybrid environment where an entire mini data center sits at the edge, down to one or two devices that just collect data from the sensors and process it.
Here’s the full video interview, part of SiliconANGLE’s and theCUBE’s coverage of the Red Hat Summit event:
(* Disclosure below: TheCUBE is a paid media partner of Red Hat Summit. Neither Red Hat Inc., sponsor of theCUBE event coverage, nor other sponsors have editorial control of content on theCUBE or SiliconANGLE.)