Conventional AI vs. Edge AI

FRAMOS

August 14, 2025

Imagine a system that responds instantly without sending a single byte of data to the cloud. That’s the fundamental promise of Edge AI: a shift from centralized processing to smarter, faster, and more secure computing at the source.

  • At ImagingNext 2025, Michele Lapresa will walk attendees through the key differences between conventional AI and Edge AI, and show how this shift is already transforming smart city infrastructure and digital retail experiences.

What Is ImagingNext?

ImagingNext is FRAMOS’s annual innovation event dedicated to the future of imaging and machine vision. It brings together industry leaders, researchers, and solution providers to explore cutting-edge trends, real-world applications, and emerging technologies across various sectors. With expert talks, live demonstrations, and networking opportunities, ImagingNext 2025 offers participants a unique chance to gain fresh insights, spark collaboration, and discover how imaging is transforming industries – from robotics and automation to AI and beyond.

Edge AI in Focus

This session begins by explaining how conventional AI relies on cloud-based data collection and processing, which often raises latency, bandwidth, and privacy concerns. In contrast, Edge AI processes data locally on the device, enabling real-time insight while keeping personal data secure.
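To make that contrast concrete, here is a minimal Python sketch of the two data paths. It is an illustration only, not Sony’s or FRAMOS’s implementation: the endpoint URL, the local_model handle, and the function names are hypothetical.

import json
import urllib.request

CLOUD_ENDPOINT = "https://example.com/v1/classify"  # hypothetical cloud inference API

def classify_in_cloud(frame_bytes: bytes) -> dict:
    # Conventional AI: the raw frame leaves the device and is processed remotely.
    # Latency includes the network round trip, and the image itself is transmitted.
    request = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=frame_bytes,
        headers={"Content-Type": "application/octet-stream"},
    )
    with urllib.request.urlopen(request) as response:  # network round trip
        return json.load(response)

def classify_on_device(frame_bytes: bytes, local_model) -> dict:
    # Edge AI: inference runs next to the sensor, so only compact metadata
    # (labels, counts, bounding boxes) ever needs to leave the device.
    detections = local_model(frame_bytes)  # local inference, no network hop
    return {"detections": detections}      # raw pixels stay on the device

The latency and privacy benefits follow directly from the second pattern: no personally identifiable pixels cross the network, and there is no round-trip delay before a decision can be made.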

Real-World Applications

To demonstrate the real-world value of Edge AI, Michele will share two projects from Sony. In a smart city deployment in Rome, intelligent vision sensors were used to reduce traffic congestion and improve safety, all while processing data locally to meet EU privacy standards.

In the retail sector, Edge AI has also been used to analyze viewer engagement for digital signage in real time. A notable example is the Seven-Eleven project in Japan, which optimizes content dynamically while safeguarding user privacy. The solution has gained attention in Europe, particularly for its privacy-first approach.
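As a rough sketch of that pattern (with hypothetical names and thresholds, not the actual Seven-Eleven system), an on-device detector would emit only anonymous engagement aggregates, which then drive content selection:

from dataclasses import dataclass

@dataclass
class EngagementSample:
    viewers: int          # people currently looking at the display
    dwell_seconds: float  # average attention time, aggregated on-device

def pick_content(sample: EngagementSample, playlist: list) -> str:
    # Swap content when engagement drops; only anonymous counts are used,
    # and raw camera frames never leave the sensor module.
    if sample.viewers == 0 or sample.dwell_seconds < 2.0:
        return playlist[0]  # fall back to a broad-appeal clip
    return playlist[min(sample.viewers, len(playlist) - 1)]

print(pick_content(EngagementSample(viewers=3, dwell_seconds=5.2),
                   ["default.mp4", "promo_a.mp4", "promo_b.mp4", "promo_c.mp4"]))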

About the Speaker: Michele Lapresa

Michele Lapresa is a Computer Engineer with over a decade of experience at Sony working on imaging and Edge AI technologies. Since 2011, he has worked on Time-of-Flight (ToF) sensors and later on the Sony IMX500 image sensor, which integrates an embedded AI accelerator. He leads a multidisciplinary team of AI and system engineers, driving innovation in Edge AI solutions across several verticals.
Over the past four years, Michele has focused on smart city applications, using the IMX500 for real-time, on-device AI processing. He has also worked on AI-powered people-monitoring systems in the retail sector. Michele holds a patent related to Edge AI for smart cities and has filed others.

At ImagingNext, Michele Lapresa brings practical insight into the shift from cloud-based AI to on-device intelligence — making his session a must-attend for anyone deploying real-time, privacy-conscious vision systems.

Why You Shouldn’t Miss This Session

Edge AI is no longer emerging – it’s here, and it’s shaping how industries handle data, deliver real-time performance, and maintain compliance with strict privacy standards. Michele’s session offers practical examples and technical insights into how Sony is helping accelerate this shift with AI solutions tailored for real-world use cases.

Register Now

Don’t miss your chance to see how Edge AI is enabling fast, secure, and privacy-conscious vision systems in the real world.