Unlocking the Potential: A Visionary Outlook on the Future of FPGA by Vangelis Vassalos

Niranjana R


FPGA Insights has engaged in an exclusive interview with Vangelis Vassalos, Head of Project Management Office | PerCV.ai PM | Scrum Master at Irida Labs.


Q1) Can you provide an overview of your organization and the services/products it offers?

In a nutshell, Irida Labs is helping companies scale VisionAI. We are powering vision-based AIoT sensors and solutions by bringing computer vision and AI to the edge – helping companies around the world develop scalable vision-based solutions. We provide AIoT-optimized embedded vision software using computer vision and deep learning, transforming bounding boxes into real-world vision applications.

Our end-to-end AI software and services platform, PerCV.ai, unlocks a variety of computer vision and AI applications, like people, vehicle, and object detection, identification, tracking, and 3D pose estimation, enabling scalable vision solutions in diverse markets such as Industry 4.0, Smart Cities and Spaces and Retail.

Q2) Can you explain the benefits of using FPGAs over other types of processors?

The fundamental difference with FPGAs is that the circuit itself does the processing. In the traditional FPGA approach, that is all there is: no software gets compiled and then executed on some fixed-capabilities processor. If you need more processing power, you can simply add more circuitry to do the job. So, in an FPGA, parallelism is achieved at the hardware level.

You can design circuits that perform specific tasks and run them in parallel. Each circuit operates independently of the others, allowing for true parallel execution. In a multi-threaded CPU environment, by contrast, parallelism is achieved at the software level: the CPU’s cores can execute multiple threads in parallel, but this is managed by the operating system’s scheduler, which allocates CPU time to different tasks. And because FPGAs can be customized at the hardware level, they can be incredibly efficient for specific tasks.

The parallel circuits are designed to perform their tasks with minimal overhead, leading to faster and more energy-efficient operation, whereas CPUs and GPUs suffer efficiency losses from context switching and thread management.

Determinism is another benefit: FPGAs offer predictable, deterministic performance, making them ideal for real-time systems. In a nutshell, FPGAs excel at highly parallelizable tasks, offering low latency and energy efficiency.
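As a rough software analogy (a hypothetical sketch, not vendor code), the sequential loop below is how a CPU schedules a dot product one operation at a time, while the fully unrolled form mirrors how a synthesis tool would map the same computation onto parallel multipliers feeding an adder tree:

```cpp
#include <array>

// CPU view: one multiply-accumulate per loop iteration, scheduled
// sequentially by the processor.
int dot_sequential(const std::array<int, 4>& a, const std::array<int, 4>& b) {
    int acc = 0;
    for (std::size_t i = 0; i < a.size(); ++i) acc += a[i] * b[i];
    return acc;
}

// FPGA view: fully unrolled, each product gets its own hardware multiplier
// and the sums form an adder tree -- all four multiplies can complete in
// the same clock cycle.
int dot_unrolled(const std::array<int, 4>& a, const std::array<int, 4>& b) {
    int p0 = a[0] * b[0];  // independent multiplier circuit
    int p1 = a[1] * b[1];  // independent multiplier circuit
    int p2 = a[2] * b[2];  // independent multiplier circuit
    int p3 = a[3] * b[3];  // independent multiplier circuit
    return (p0 + p1) + (p2 + p3);  // two-level adder tree
}
```

Both functions compute the same result; the difference is that on an FPGA the unrolled form is literally four separate circuits working at once, not four iterations sharing one ALU.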

Q3) What are the most significant trends observed in the FPGA industry over the past year? How will these trends shape the industry’s future?

It goes without saying that the FPGA universe has also been affected by the AI wave, and we observe a significant shift of the FPGA industry towards AI/ML applications.

FPGAs are particularly well-suited to inferencing tasks due to their parallel processing capabilities and low latency. FPGA vendors are integrating software-oriented toolchains that translate neural networks trained in frameworks like PyTorch or TensorFlow so they can run inference on their FPGAs, and IP companies are building custom FPGA-based accelerators that dramatically speed up specialized tasks like image recognition, object detection, and natural language processing.
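A core step such toolchains perform is quantization: converting floating-point weights into narrow integers so multiply-accumulates map onto the FPGA’s DSP slices and integer fabric. The sketch below is a hypothetical illustration of symmetric int8 post-training quantization (the names and the guard for all-zero weights are my own, not any vendor’s API):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Illustrative symmetric int8 quantization: real_value ~= w * scale.
struct QuantizedWeights {
    std::vector<int8_t> w;
    float scale;
};

QuantizedWeights quantize(const std::vector<float>& weights) {
    float max_abs = 0.f;
    for (float v : weights) max_abs = std::max(max_abs, std::fabs(v));
    float scale = (max_abs == 0.f) ? 1.f : max_abs / 127.f;  // map to [-127, 127]
    QuantizedWeights q{{}, scale};
    for (float v : weights)
        q.w.push_back(static_cast<int8_t>(std::lround(v / scale)));
    return q;
}

// Integer multiply-accumulate -- the part the FPGA fabric actually runs --
// with a wide accumulator as in a DSP slice, dequantized at the end.
float dot(const QuantizedWeights& q, const std::vector<int8_t>& x, float x_scale) {
    int32_t acc = 0;
    for (std::size_t i = 0; i < q.w.size(); ++i)
        acc += int32_t(q.w[i]) * int32_t(x[i]);
    return acc * q.scale * x_scale;
}
```

The dequantized result closely approximates the floating-point dot product while the hot loop uses only 8-bit operands, which is exactly why this transformation makes neural networks FPGA-friendly.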

The rise of IoT has led to an increased need for edge computing solutions. FPGAs, with their low power consumption and high performance, are being integrated into camera modules, combining AI/ML inference and data processing at the edge. This is crucial for applications like autonomous vehicles and industrial automation, where low latency and quick decision-making play a key role.

Last but not least, FPGAs have found their way into major data centers, where tech companies like Microsoft and Amazon are integrating them into their data centers’ infrastructure. FPGAs offer a unique advantage in that they can be reprogrammed on the fly to adapt to different tasks, making them highly versatile for cloud computing environments.

Q4) How do you see FPGA development evolving to meet the demands of modern applications and complex workloads?

There’s a growing trend towards open-source FPGA development tools and platforms. These initiatives push towards democratizing access to FPGA technology, allowing smaller companies and individual developers to experiment with custom hardware solutions without the high costs traditionally associated with FPGA development.

High-Level Synthesis (HLS) was introduced to enable embedded software engineers to design custom hardware “functions” using common high-level languages like C++ and Python and implement them on an FPGA without a deep understanding of hardware description languages like VHDL or Verilog. However, after all these years of being generously offered and promoted by FPGA vendors, it remains a productivity-boosting tool for hardware and FPGA engineers seeking rapid prototyping, rather than a go-to solution for commercial products. This area is still dominated by hand-optimized HDL code.
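To make this concrete, here is a minimal sketch of what HLS-style C++ looks like: a moving-average filter with Vitis-HLS-style pragmas (shown purely as illustration). A regular C++ compiler ignores the pragmas, so the same source can be verified in software before synthesis, which is one of HLS’s main selling points:

```cpp
#include <cstdint>

constexpr int TAPS = 4;

// HLS-style moving-average filter sketch. The #pragma HLS directives are
// hints to the synthesis tool (Vitis HLS syntax, for illustration only);
// a normal compiler simply ignores them.
int16_t moving_average(const int16_t sample, int16_t shift_reg[TAPS]) {
    int32_t acc = 0;
    // Shift in the new sample and accumulate all taps; when unrolled in
    // hardware, the whole register shift happens in a single clock cycle.
    for (int i = TAPS - 1; i > 0; --i) {
#pragma HLS UNROLL
        shift_reg[i] = shift_reg[i - 1];
        acc += shift_reg[i];
    }
    shift_reg[0] = sample;
    acc += sample;
    return static_cast<int16_t>(acc / TAPS);
}
```

After feeding in four identical samples, the filter output settles at the input value, which is easy to check in a plain software testbench.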

Also, services like AWS EC2 F1 instances are providing cloud-based FPGA resources. This eliminates the upfront cost of FPGA hardware and allows for scalable, on-demand FPGA computing. It’s a game-changer for startups and smaller companies that may not have the resources for an in-house FPGA infrastructure.

Q5) Key drivers behind the increasing adoption of FPGAs in various applications and industries?

The three key drivers of FPGA adoption are versatility, performance, and energy efficiency.

FPGAs can be used across a wide range of applications and industries, from telecommunications to healthcare and automotive. This versatility makes them a highly attractive option for companies looking to invest in flexible and future-proof hardware solutions.

FPGAs excel in tasks that can be parallelized, offering significant performance gains over traditional CPUs and even GPUs in some cases. This is particularly important in applications like data analytics, financial modeling, and scientific research, where high computational power is required.

FPGAs are generally more energy-efficient than other types of processors when optimized for specific tasks. This is crucial in edge computing solutions where power consumption is a significant concern.

Beyond FPGAs’ inherent capabilities, their integration with general-purpose CPUs in SoCs has really boosted FPGA adoption in a huge variety of applications across diverse market segments, from healthcare to computer vision and finance.

Q6) Sectors that stand to benefit the most from FPGA integration, and why?

AIoT can benefit a lot, since FPGAs are starting to be integrated behind vision sensors, offering specialized solutions, each tackling a specific AI problem, like object detection or classification, within very tight power budgets.

Telecommunications has always been at the forefront of FPGA integration. FPGAs excel at tasks like signal processing, network optimization, and cryptography, making them ideal for managing high-throughput data streams in real time, which is essential for 5G networks.

Advanced Driver-Assistance Systems (ADAS) increasingly rely on FPGAs for real-time decision-making. Their low latency and high performance are critical for applications like lane detection, collision avoidance, and adaptive cruise control.

Medical imaging devices like MRIs and CT scanners are leveraging FPGAs to process complex algorithms quickly. Their reconfigurability allows these devices to be updated easily, extending their lifespan and improving diagnostic capabilities over time.

The security sector, which relies heavily on monitoring equipment like CCTV and IP cameras, is also a playground for FPGAs due to their high performance and small energy footprint in image processing algorithms like background subtraction and tracking.

Industries in the High-Performance Computing segment depend heavily on FPGAs for calculation-intensive tasks, like scientific computations (e.g. weather forecasting) and financial modeling, and already benefit a lot from moving to FPGA-based acceleration.

Q7) The role of FPGAs in accelerating AI applications and advancements expected in the near future.

FPGA vendors are trying hard to ride the AI wave, providing developers with specialized interfaces and toolchains to make their “hardware-integration life” easier, and promoting FPGAs as a cornerstone of accelerating AI and ML applications. That is not entirely misleading, but in practice software development remains far easier and faster than hardware development and deployment, even if it is less efficient in many cases. The considerable development time FPGAs demand, and the specialized mix of AI and hardware expertise needed for the job, prevent companies from investing heavily in this technology.

FPGAs have all the potential to be part of the AI development flow, and SoCs like the Xilinx Zynq UltraScale+ or Intel Arria have played a definite role in this direction. However, we are more likely to see specialized FPGA-based circuits/accelerators commercially deployed, (re)programmed to support and carry out specific AI tasks, than complete solutions built solely upon FPGAs alone.

Q8) Ensuring the security and integrity of FPGA designs, especially in sensitive applications like finance and defense.

FPGAs are pivotal for cryptographic tasks, including encryption and digital signatures. However, there’s a security concern: the potential malicious manipulation of FPGA configuration data. During initialization, an external bitstream configures the FPGA, presenting a vulnerability where attackers can understand and even alter the hardware. While leading companies like Xilinx and Intel offer encryption for protection, their schemes have been compromised. The challenge for attackers lies in the proprietary nature of bitstreams and locating cryptographic components within vast FPGA designs.
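The defensive idea is simple even though real implementations are not: authenticate the bitstream before letting it configure the fabric. The toy sketch below illustrates only that control flow. The hash used here, FNV-1a, is NOT cryptographic, and all the names are hypothetical; production devices use keyed schemes such as AES-GCM or HMAC with keys stored in eFuses or battery-backed RAM.

```cpp
#include <cstdint>
#include <vector>

// FNV-1a hash -- a stand-in for a real cryptographic MAC, used here only
// to sketch the verify-before-load flow. Do not use for actual security.
uint64_t fnv1a(const std::vector<uint8_t>& data) {
    uint64_t h = 1469598103934665603ULL;   // FNV offset basis
    for (uint8_t b : data) {
        h ^= b;
        h *= 1099511628211ULL;             // FNV prime
    }
    return h;
}

// Hypothetical loader: refuse to configure the device unless the tag
// matches the one provisioned at manufacturing time.
bool load_bitstream(const std::vector<uint8_t>& bitstream, uint64_t expected_tag) {
    if (fnv1a(bitstream) != expected_tag)
        return false;  // tampered or corrupted bitstream: abort configuration
    // ... hand the verified bitstream to the configuration engine ...
    return true;
}
```

Flipping even a single byte of the bitstream changes the tag and the loader rejects it, which is the property an attacker tampering with configuration data has to defeat.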

Some research has aimed to reverse-engineer these bitstreams, but comprehensive understanding remains elusive. White-box cryptography is often deployed to counter bitstream hacking, but there is also the field of malicious hardware changes, commonly referred to as hardware Trojan attacks, which has seen a lot of research in terms of detection and prevention, especially in the realm of application-specific integrated circuit (ASIC) design.

Yet, such Trojan attacks on FPGAs haven’t received the same focus. Often, designers use solutions and benchmarks specific to ASICs for FPGAs, which doesn’t fully address the unique vulnerabilities of FPGAs. It’s important to recognize that FPGAs, due to their distinct business model and architectural setups, offer unique avenues for Trojan attacks to potential adversaries.

Q9) Advice for students and professionals interested in pursuing a career in FPGA development to stay updated with the latest trends and technologies.

A career in FPGA design and development can be fascinating. You need a deep understanding of hardware description languages, like VHDL, Verilog, or SystemVerilog, and you must “think hardware”, not software. In the beginning, this is the hardest part, because most likely one’s first encounter with the technology world is through high-level software programming languages, where we are taught to process information (semi)sequentially, not truly in parallel. Once this virtue has been acquired, everything becomes easier, and one gets to know this hidden gem.

Like any programming discipline, hands-on experience is a must. One should purchase a low-cost FPGA board, like the DE10-Nano (Intel Cyclone V), a Xilinx Arty A7, or a Lattice iCE40 board (most readers will likely have encountered Xilinx or Intel boards in university labs), and start building hardware. After confidence has been built, one can participate in open-source FPGA projects and extend their knowledge and experience even further.

Nowadays there is plenty of information available, from MOOCs to streaming services and LLM-based chats; one can find all kinds of self-paced courses to follow, experiment with, and learn from. For more advanced students and professionals, seeking an internship position and attending conferences and events can be a great way to interact with real-world applications and cutting-edge research.
