Unlocking the Potential of FPGAs: Learn with Guy Eschemann

Piyush Gupta


FPGA Insights conducted an exclusive interview with Guy Eschemann, creator of airhdl.com and Senior FPGA Design Engineer at plc2 Design GmbH.


Q1) Can you provide an overview of your organization and the services/products it offers?

My company, noasic GmbH, is based in Freiburg, Germany. We develop and operate a web-based VHDL and SystemVerilog code generator for AXI4 register banks called airhdl.
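For readers who have never written one by hand: a register bank is the set of memory-mapped registers through which software controls and monitors an FPGA design, typically over an AXI4-Lite interface. The sketch below gives a flavor of the boilerplate involved; it is purely illustrative, with made-up names and a heavily simplified write handshake, and is not actual airhdl output.

```vhdl
-- Purely illustrative sketch of a hand-written register write path;
-- all names and addresses are made up, and this is not airhdl output.
library ieee;
use ieee.std_logic_1164.all;

entity regbank_sketch is
  port (
    s_axi_aclk    : in  std_logic;
    s_axi_aresetn : in  std_logic;
    -- heavily simplified write interface; a real AXI4-Lite slave also needs
    -- the full awvalid/awready, wvalid/wready and bvalid/bready handshakes
    s_axi_awaddr  : in  std_logic_vector(11 downto 0);
    s_axi_wdata   : in  std_logic_vector(31 downto 0);
    s_axi_wen     : in  std_logic;
    -- user-side view of the register (read path omitted for brevity)
    ctrl_reg      : out std_logic_vector(31 downto 0)
  );
end entity;

architecture rtl of regbank_sketch is
  constant CTRL_ADDR : std_logic_vector(11 downto 0) := x"000";
begin
  write_proc : process (s_axi_aclk)
  begin
    if rising_edge(s_axi_aclk) then
      if s_axi_aresetn = '0' then
        ctrl_reg <= (others => '0');
      elsif s_axi_wen = '1' and s_axi_awaddr = CTRL_ADDR then
        ctrl_reg <= s_axi_wdata;
      end if;
    end if;
  end process;
end architecture;
```

Multiply this by dozens of registers per design and the appeal of generating the code automatically becomes obvious.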

I started this project in 2015, alongside my FPGA consulting work, out of necessity: I was tired of writing all the code by hand. At the time, it was a bit of an experiment, one of the very first web-based commercial EDA tools!

Since then, it has grown quite popular and is used by many companies and institutions around the world. Recently, the McMaster Interdisciplinary Satellite Team (MIST) at McMaster University in Canada started using our tool for the development of their nanosatellites.

During the day, I work as an FPGA design engineer at a German company called plc2 Design. We’re developing FPGA-based automotive measurement systems such as frame grabbers and data loggers.

Q2) Can you explain the benefits of using FPGAs over other types of processors?

The main benefit of FPGAs is that they allow you to create your own hardware architecture. This gives you the power to design solutions that are highly tailored to your particular application. 

In practice, FPGAs are often used to implement complex, fast, or high-count interfacing logic, often with some data buffering and processing in between. High-speed camera interfaces are a typical application for FPGAs: you need to implement the low-level interface(s) to the image sensor(s), maybe do some pre-processing and buffering on the received images, and forward them to a host CPU over a high-speed interface such as PCI Express or Ethernet.
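As a purely illustrative sketch (all signal names are hypothetical), one stage of such a pixel pipeline might simply register the incoming stream before it is buffered and forwarded to the host:

```vhdl
-- Purely illustrative pipeline stage: registers an incoming pixel stream
-- on its way towards a frame buffer / host interface. Names are made up.
library ieee;
use ieee.std_logic_1164.all;

entity pixel_stage is
  port (
    clk       : in  std_logic;
    -- stream coming from the low-level sensor interface
    in_valid  : in  std_logic;
    in_pixel  : in  std_logic_vector(11 downto 0);
    -- stream going towards the buffering and host-interface logic
    out_valid : out std_logic;
    out_pixel : out std_logic_vector(11 downto 0)
  );
end entity;

architecture rtl of pixel_stage is
begin
  process (clk)
  begin
    if rising_edge(clk) then
      out_valid <= in_valid;
      out_pixel <= in_pixel;  -- pre-processing (e.g. gain correction) would go here
    end if;
  end process;
end architecture;
```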

But the power of FPGAs comes with great responsibilities. When you buy an off-the-shelf microcontroller such as an STM32, you can be sure that the chip’s hardware has been thoroughly verified and that it will work as expected. With FPGAs, as you’re designing your own hardware circuits, it’s up to you to check that whatever you have designed works as expected; the only thing that’s guaranteed is that the low-level components like LUTs and flip-flops will work fine! Designing an FPGA for an application is maybe an order of magnitude more effort than writing software for an off-the-shelf microcontroller. 

One of the reasons for this is the inherent parallelism of hardware, which requires a different way of thinking than writing sequential software, and it's difficult for us humans (or at least for me) to think in parallel! But there are many other annoyances along the way, such as long compile and simulation times, the difficulty of on-target debugging, and the limited availability of skilled developers.
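To make the point about parallelism concrete, here is a minimal, made-up VHDL sketch: the two processes below describe separate pieces of hardware that both do their work on every clock edge, whereas two equivalent statements in a sequential program would execute one after the other.

```vhdl
-- Minimal illustration of hardware parallelism: both processes below
-- describe logic that operates concurrently on every rising clock edge.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity parallel_example is
  port (
    clk     : in  std_logic;
    a, b    : in  unsigned(7 downto 0);
    sum     : out unsigned(8 downto 0);
    product : out unsigned(15 downto 0)
  );
end entity;

architecture rtl of parallel_example is
begin
  -- the adder updates on every clock edge...
  adder : process (clk)
  begin
    if rising_edge(clk) then
      sum <= resize(a, 9) + resize(b, 9);
    end if;
  end process;

  -- ...and so does the multiplier, at the same time, in separate hardware
  multiplier : process (clk)
  begin
    if rising_edge(clk) then
      product <= a * b;
    end if;
  end process;
end architecture;
```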

During the last decade, with the advent of systems on a chip (SoCs), which are a combination of a traditional FPGA fabric and a CPU-based processing system, the lines between FPGAs and processors have blurred and we’re now in the age of heterogeneous computing. 

So instead of choosing between an FPGA-based and a processor-based solution, we’re now using SoCs so we can have the best of both worlds inside a single chip. But this poses other challenges such as how to best partition the application in hardware and software, and how to efficiently transfer data between the FPGA fabric and the processing system.

Q3) What are the most significant trends observed in the FPGA industry over the past year? How will these trends shape the industry’s future?

Things don’t change that quickly in our industry. In practice, it always takes several years for actual, non-marketing-invented trends to develop. Here are a few things that I have witnessed during the last few years:

  • The increased availability of high-quality open-source frameworks and tools. In the VHDL ecosystem, VUnit and OSVVM (both VHDL verification frameworks) and GHDL (a VHDL simulator) come to mind (see the testbench sketch after this list), but I'm sure there are others. I've heard the Verilog/SystemVerilog ecosystem has great tools too, such as Verilator.
  • The increased adoption of CI/CD practices for automating the FPGA build process, including linting, bitstream generation, and design verification (simulation). Many teams use GitLab or Jenkins for that purpose.
  • The rise of SoC designs, and with it the adoption of Embedded Linux.
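To make the first point more concrete, here is a minimal VUnit-style testbench skeleton in VHDL; the entity name and the trivial check are made up, and GHDL is one of the simulators that VUnit can drive.

```vhdl
-- Minimal VUnit testbench skeleton; the entity name and the check are made up.
library vunit_lib;
context vunit_lib.vunit_context;

entity tb_example is
  generic (runner_cfg : string);  -- passed in by the VUnit run script
end entity;

architecture tb of tb_example is
begin
  main : process
  begin
    test_runner_setup(runner, runner_cfg);

    while test_suite loop
      if run("check_simple_arithmetic") then
        -- a trivial, self-contained check; a real test would drive and
        -- monitor the design under test here
        check_equal(2 + 2, 4);
      end if;
    end loop;

    test_runner_cleanup(runner);
  end process;
end architecture;
```

A small Python run script (conventionally named run.py) registers such testbenches and launches the simulator; that same script is also a convenient hook for CI pipelines on GitLab or Jenkins.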

In terms of development methodologies, it seems that people are spending more time on simulation-based design verification, which is a good thing. I have yet to see a single team use formal verification techniques, though.

Q4) How do you see FPGA development evolving to meet the demands of modern applications and complex workloads?

It seems that we have not figured it out yet, really. Ten years ago, I attended an FPGA conference in Portugal where high-level synthesis (HLS) was all the rage. HLS was supposed to allow software developers to design hardware. Fast-forward ten years and HLS is still there, but it’s really mostly a productivity tool for hardware developers like myself. I’ve not seen any software developer switch to designing FPGAs because of HLS.

One thing I can say for sure is that fancy graphical design tools will not save us. When Xilinx introduced block designs in Vivado, it was supposed to be the answer to our productivity needs. And indeed, block designs are great for quickly putting together a working base system. But anyone who has done any serious work with block designs will tell you that they quickly become a nightmare to understand, extend, and maintain. So I think we need to go back to plain text for design entry, just like the software guys have been doing all along.

What I'm curious to see is how artificial intelligence technologies will help us build better hardware designs with less effort. One issue that we might run into, though, is the fact that most existing high-quality designs are proprietary and closed source, which prevents them from being used as training data for AI tools like Copilot. Still, there are productivity killers that could be addressed immediately using AI technologies.

A few weeks ago, I was wondering about the clock domain for a memory controller port on the Xilinx MPSoC. I searched through a couple of documents and did a web search, but I could not find the information I was looking for. So I had to open a support case with Xilinx (oops, AMD) to finally have someone point me to the corresponding section of the right document. All in all, it took me several days to find this simple piece of information. Providing a natural language interface to the masses of existing documentation should be possible with today’s technology and it would already help increase our productivity by a lot. 

Q5) What are the key drivers behind the increasing adoption of FPGAs across applications and industries?

Virtually all the FPGA applications I've seen so far are driven by interfacing requirements. This could be the need to support custom interfaces that standard processors do not support, or it could be standard interfaces (like UART or I2C) that processors do support but you just need loads of them, or it could be the need to react quickly to what's happening on an interface without the overhead of a software stack. I'm sure there are other uses for FPGAs, such as application acceleration in data centers, but in my experience, most of the time, the use of FPGAs is driven by interfacing requirements.
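To illustrate the "loads of them" case: replicating an interface in an FPGA is often just a generate loop. The sketch below instantiates sixteen UART transmitters; the uart_tx component is hypothetical and stands in for any real UART implementation.

```vhdl
-- Purely illustrative: sixteen identical UART transmitters via a generate
-- loop. The uart_tx component is hypothetical; in a real design it would be
-- bound to an actual UART implementation.
library ieee;
use ieee.std_logic_1164.all;

entity many_uarts is
  port (
    clk     : in  std_logic;
    tx_data : in  std_logic_vector(16 * 8 - 1 downto 0);  -- one byte per channel
    tx_wr   : in  std_logic_vector(15 downto 0);
    txd     : out std_logic_vector(15 downto 0)
  );
end entity;

architecture rtl of many_uarts is
  component uart_tx is
    port (
      clk  : in  std_logic;
      data : in  std_logic_vector(7 downto 0);
      wr   : in  std_logic;
      txd  : out std_logic
    );
  end component;
begin
  gen_uarts : for i in 0 to 15 generate
    u_tx : uart_tx
      port map (
        clk  => clk,
        data => tx_data((i + 1) * 8 - 1 downto i * 8),
        wr   => tx_wr(i),
        txd  => txd(i)
      );
  end generate;
end architecture;
```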

The other aspect of FPGAs, which makes them popular for industrial applications, is their very long availability. We’re talking about decades here. Other FPGA benefits, such as being fully or partially reconfigurable in the field, are sometimes given as marketing arguments but in practice, these are often not the main reasons for using FPGAs. If your application has very strict power or unit price requirements, you’ll probably have to use an ASIC even though you can’t reconfigure it in the field.

Q6) Which sectors stand to benefit the most from FPGA integration, and why?

I’m guessing automotive. When I started my career more than twenty years ago, we were developing an automotive TV reception system based on a Xilinx XC4000E FPGA. My Xilinx FAE told me that at the time, it was very unusual to have FPGAs in automotive systems and so people at Xilinx were quite curious and excited about what we were doing. How things have changed. I guess that nowadays, at least in Germany, automotive suppliers are the biggest purchasers of FPGAs.

I've heard that FPGAs are quite popular in financial applications as well, but as I've not worked on such applications yet, I cannot tell you much about it, except that it's probably again about interfaces, such as high-speed Ethernet and PCI Express.

Q7) What role do FPGAs play in accelerating AI applications, and what advancements do you expect in the near future?

There is a lot of marketing talk about AI and FPGAs but in practice, I have yet to work on an FPGA-based AI project. In my current project, which is based on a Zynq UltraScale+ MPSoC, we’re using an external NVIDIA board to offload the AI algorithms.

Maybe in the future, we’ll switch to Versal and try to integrate everything (the interfacing logic and the AI algorithms) into a single chip, but we’ll need to have the software guys on board for that.

Q8) How do you ensure the security and integrity of FPGA designs, especially in sensitive applications like finance and defense?

This is an area that’s still neglected by many. The moment your device can be connected to the internet, you need to put in place a number of measures to protect your system against different kinds of attacks. If you don’t know where to start, standards or guidelines such as ETSI EN 303 645 are a good starting point. For example, you should provide ways to keep the firmware updated and to check its integrity, store sensitive data such as cryptographic keys in secure storage areas, communicate securely, and things like that. Failing to do so is asking for trouble, and you may be exposing your company or its customers to huge liabilities.

Modern FPGAs already have built-in mechanisms to check a bitstream's integrity and prevent it from being executed on unauthorized devices. If you are designing a critical finance or defense application, you should probably be looking at all the ways in which your system could get compromised, including by malicious code that's been injected at development time. But in general, for the kinds of systems that I'm working with, the main attack target is the software. It's just so much easier to hack a software system than an FPGA bitstream, which may require physical access to the system.

Q9) What advice would you give to students and professionals pursuing a career in FPGA development who want to stay up to date with the latest trends and technologies?

Twitter (oops, X) and LinkedIn are good sources of information, but I wouldn't worry too much about the latest trends and technologies. A lot of it is often marketing vaporware anyway, which seldom affects our daily lives as FPGA developers in the short term. In our industry, because of the long product development cycles and the tendency to rely on established tools and technologies, it always takes several years for new tools and technologies to be adopted.

So there's really no need to constantly jump onto the hot new thing. For example, many design teams are only now starting to adopt VHDL-2008 as a hardware design and verification language, and that revision of the language is fifteen years old!
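For anyone wondering what the fuss is about, here are a few of the small conveniences VHDL-2008 brings; the entity and signal names are made up.

```vhdl
-- A few VHDL-2008 conveniences that older code bases are missing out on.
library ieee;
use ieee.std_logic_1164.all;

entity vhdl2008_example is
  port (
    sel     : in  std_logic;
    a, b    : in  std_logic_vector(7 downto 0);
    any_set : out std_logic;
    y       : out std_logic_vector(7 downto 0)
  );
end entity;

architecture rtl of vhdl2008_example is
begin
  -- 'all' sensitivity list: no more forgetting a signal in a combinational process
  mux : process (all)
  begin
    -- conditional signal assignment inside a process (new in VHDL-2008)
    y <= a when sel = '1' else b;
  end process;

  -- unary reduction operator (new in VHDL-2008)
  any_set <= or a;
end architecture;
```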
