"Если ты не страдал — ты не считаешься ПЛИСоводом": подкаст «Битовые маски» с Михаилом Коробковым
On the Bit Masks podcast, hosts Lena Lepilkina and Anton Afanasyev discuss systems programming and hardware development with invited experts. The guest of episode 25 was Mikhail Korobkov: founder of the FPGA-Systems community and the magazine of the same name, an FPGA enthusiast, and, in his day job, a senior SoC design engineer at YADRO.
We have selected several interesting stories from our guest. You can find more in the full video and audio versions of the podcast.
Mikhail Korobkov studied at Ryazan State Radio Engineering University, and that is where his FPGA story began. He started with Altera FPGAs, writing code in the archaic AHDL (Altera HDL) language for EPF10K (Flex 10K) chips, clones of which are still made by domestic manufacturers. His first commercial FPGA project was a switch for gigahertz signals: Mikhail used the FPGA to switch antenna arrays.
Before joining YADRO, Mikhail worked at the Space Instrumentation Research Institute, the Xilinx training center (Inline Group), Siemens, and Menta eFPGA.
Contents
On Mars and on TV — what is an FPGA and where can it be found
Mikhail: An FPGA is a general-purpose chip. It turns up in all sorts of fields: digital signal processing, prototyping, portable electronics, even phones.
Recently, I attended the Microelectronics-2025 forum and saw a Lattice FPGA in one of the Beshtau servers. It acts as a board management controller: the FPGA monitors the power rails and sequences them, making sure the processor and the necessary peripherals come up in the right order.
Besides servers, FPGAs often end up in outdoor surveillance cameras. Streaming raw video from cameras to a server is not cost-effective, since it requires a high-bandwidth link. The alternative is to put a small FPGA, or a system-on-chip like Zynq, inside the camera. The processor side handles booting Linux and the TCP/IP stack, while the FPGA computes the image processing algorithms, extracting license plate numbers and detecting violations. All of this can happen inside the camera itself, on the FPGA. It is also quite economical: it draws far less power than a graphics card would.
FPGAs are very common in space, in both low Earth orbit and high orbits. Accordingly, two types of FPGAs are used. For low Earth orbit you can launch a large number of small satellites (roughly what Elon Musk does), because ordinary FPGAs are cheap compared to the other type, radiation-hardened ones, which cost hundreds of times more. For example, the regular and radiation-hardened Virtex-5 FPGAs cost about one thousand and one hundred thousand dollars, respectively. Radiation-hardened FPGAs have flown to Mars on rovers.
FPGAs can also be found in consumer electronics and special-purpose equipment. There is even a story that the first televisions supporting the HDMI standard shipped with Spartan FPGAs.
An FPGA as a product has a breakeven point. If you are producing hundreds or thousands of units, you can use an easily reconfigurable FPGA, tweak something in the firmware, and quickly ship an update. But once the number of devices passes a hundred thousand or so, it becomes more cost-effective to make an ASIC. In short, an ASIC is expensive to develop but cheap to mass-produce, while an FPGA is exactly the opposite, so at high volumes choosing an FPGA can lose you money in the long run.
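As a rough back-of-the-envelope sketch of that breakeven point, a few lines of C++ are enough. All cost figures below are hypothetical illustrations, not numbers from the podcast:

#include <cstdio>

int main() {
    // Hypothetical cost figures: an ASIC is expensive to develop but
    // cheap per unit; an FPGA is the opposite.
    const double asic_nre  = 2'000'000.0;  // one-time ASIC development cost, $
    const double asic_unit = 5.0;          // ASIC cost per chip in volume, $
    const double fpga_nre  = 100'000.0;    // FPGA-based development cost, $
    const double fpga_unit = 50.0;         // FPGA cost per unit, $

    // The two total-cost lines cross where
    //   fpga_nre + n * fpga_unit == asic_nre + n * asic_unit
    const double n = (asic_nre - fpga_nre) / (fpga_unit - asic_unit);
    std::printf("Breakeven at roughly %.0f units\n", n);  // ~42222 with these figures
    return 0;
}

Below that volume the FPGA's reconfigurability wins; above it, the ASIC's cheap unit cost does.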
To hear where else FPGAs turn up, listen to the podcast from the 38th minute.
About types of FPGAs and their ecosystems
FPGAs can be classified by performance: for example, by how richly they are stocked with standard blocks and by the availability of ready-made IP cores such as filters, codecs, and interfaces for various protocols. Are they supported by third-party design tools or not? Is there an ecosystem, and how mature is it? For instance, MATLAB has no support for Lattice, but it does for Altera and Xilinx. The latter even has the wonderful System Generator, which lets you draw everything in Simulink and get a ready prototype.
With a good ecosystem, an engineer working on digital signal processing algorithms can stay comfortably in MATLAB without knowing any Verilog. To implement the math on an FPGA and run it at the required frequency, it is enough to open MATLAB and verify the algorithm built in Simulink. The FPGA's internals can then be ignored; all that matters is the clock frequency and the number of DSP cells or block RAMs your algorithm needs.
There is also the opposite story, when you want maximum performance out of the FPGA. Then true professionals take over, working almost down to the level of individual LUTs. Instead of DSP blocks there are custom computational units, and the system is immersed in oil for cooling. Such is the power of the hardware.
Listen to the specifics of "low-level" programming on FPGAs starting from the 48th minute.
Entry threshold for FPGAs and how Vivado saves time
When I was starting out with FPGAs, everything was much more expensive. Now things are different: there is Gowin and AliExpress, so you can find an FPGA development board for under a thousand rubles. Just 10 years ago, one such board cost me 10-12 thousand. That is a lot compared to what it cost to learn microcontrollers, which could essentially be learned for free: install Keil Studio on your computer, play around with assembly, and watch how it compiles. At worst, you could set up an emulator of an ordinary processor. With FPGAs that is not an option. The entry threshold for FPGAs is much higher than for microcontrollers.
At the same time, I would not say that manufacturing FPGAs is more complicated than manufacturing microcontrollers. An FPGA is the same integrated circuit, the same ASIC, just, roughly speaking, a reconfigurable one. And all of it is produced at some fab; everything ultimately comes down to manufacturing.
To build a high-performance FPGA, you have to sift through dozens of patents to understand how it works. In modern FPGAs, for instance, a lookup table is not just a memory cell. A single LUT can serve as a shift register, compute logic functions, and even act as interconnect; it is a multifunctional device. Architecturally it is a square, and inside that square are three or four different functions chosen by your task. Need a shift register? You no longer need 32 flip-flops for it, because it all fits in the LUT. That saves a pile of resources and routing.
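As a hedged illustration from the designer's side: the shift-register idiom below, written in HLS-style C++, is the kind of loop that synthesis tools such as Vivado HLS typically map onto a single LUT-based SRL primitive instead of a chain of 32 flip-flops. The depth and data type are assumptions made for the example:

#include <cstdint>

constexpr int DEPTH = 32;

// A 32-tap delay line: each call shifts in one new sample and returns
// the sample that entered DEPTH - 1 calls earlier. In hardware, the
// whole loop collapses into one shift-register LUT (SRL) rather than
// 32 separate flip-flops.
std::uint8_t delay_line(std::uint8_t in) {
    static std::uint8_t sr[DEPTH] = {};
    for (int i = DEPTH - 1; i > 0; --i)
        sr[i] = sr[i - 1];   // shift every element one position down the line
    sr[0] = in;              // the new sample enters at the front
    return sr[DEPTH - 1];    // the oldest sample exits at the back
}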
The only hitch is that even if your project synthesizes, there is no guarantee it will place on the chip, especially if the project is large. And even if it places, there is still no guarantee it will route.
I was amused by one command in Vivado: you could run it at the early stages of a project, and it would immediately tell you whether the project would route or not. That saves a lot of time, since our projects can take up to 20 hours to build. It is not like physical design, where a project can take a week to compile, but we run more iterations, so the time still adds up.
What to do if a project does not fit on the chip, what partitioning is, and how much an RTL emulator costs: listen to the podcast from 1:06:00.
eFPGA and the Unfulfilled Dream
Only four companies in the world are currently engaged in eFPGAs (Embedded FPGA) — in general, such FPGAs have very niche applications, for example, in
With eFPGA, you buy an FPGA fabric that you integrate into your own chip as an IP core. When I worked at Menta eFPGA, I had the idea of taking the eFPGA architecture and manufacturing it at Mikron. Let it be square and look like a brick, but it would be a proper FPGA with tens of thousands of LUTs. We, in turn, would add DSP blocks or block memory, whatever we wanted. The advantage of eFPGA is that you can define the architecture yourself.
This architecture (DSP cells, block memory, LUTs) is all configurable when the eFPGA is created. You can make ten or forty memory columns, add DSP cells, even drop in your own custom blocks. In other words, you can write a good accelerator that fits within a given width and length, place it, and get a hardened block inside the FPGA, then send it off to a fab for mass production. It was a cool idea, but it never found much of a response.
For example, Dialog released the cheapest FPGA, ForgeFPGA, at 50 cents. True, that FPGA was one-time programmable and had only 900 LUTs. You could buy them by the kilogram and drop them into small applications in place of a CPLD. And inside that FPGA was a packaged eFPGA. Essentially the same idea I wanted to implement here in Russia: take an eFPGA and put it in a package.
How ForgeFPGA fared, and more about where eFPGA finds its applications: Mikhail covers this from 1:30:00.
High-Level Synthesis
FPGAs have been programmed since time immemorial: since 1984. VHDL, Verilog, and SystemVerilog are the three core languages of native FPGA development, and they can be combined within one project.
But VHDL is gradually dying out. In the semiconductor industry, where ASICs are developed, VHDL is hardly encountered anymore. When I worked at Menta, there were cases where clients came asking us to rewrite VHDL wrappers in Verilog. VHDL's decline is also driven by UVM verification in SystemVerilog and by tools with better Verilog support. You could say plain Verilog is fading too, thanks to the popularity of SystemVerilog, which managed to combine Verilog with a bit of VHDL.
Since the 90s, many companies have tried to create something more universal and draw embedded developers, people who write in C/C++, OpenCL, or even Java, into this world, so that they would use FPGAs as a hardware platform. This is called high-level synthesis (HLS). One of my first streams was dedicated to the topic: I spent 45 minutes on what HLS is and how to use it. Yuri Vladimirovich Panchul was also involved in it, I believe.
Why is it needed? There is a class of algorithms that parallelize well and are inconvenient to compute on processors. Take matrix multiplication, which underlies all video processing: it is several nested loops, and it would be good to unroll them. Why compute eight times in a row when we can compute eight elements in parallel? And why keep a second loop when we can unroll another eight elements and compute 64 at once? And so on. On FPGAs this code generation turns out beautifully: a simple operation inside a chain of loops unrolls easily onto the FPGA's parallel logic.
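A minimal sketch of that unrolling in HLS-style C++. The matrix size is an assumption for illustration, and #pragma HLS UNROLL is the Vivado HLS spelling of the directive (other tools name it differently):

constexpr int N = 8;

// 8x8 matrix multiply. Without the pragma, the k-loop runs as eight
// sequential multiply-accumulates; with it, the tool unrolls the loop
// so all eight run in parallel on the fabric. Unrolling the j-loop as
// well would compute 64 elements at once, as described above.
void matmul(const int a[N][N], const int b[N][N], int c[N][N]) {
    for (int i = 0; i < N; ++i) {
        for (int j = 0; j < N; ++j) {
            int acc = 0;
            for (int k = 0; k < N; ++k) {
#pragma HLS UNROLL
                acc += a[i][k] * b[k][j];
            }
            c[i][j] = acc;
        }
    }
}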
But you cannot simply take code written in C and map it onto the FPGA architecture; there are limitations. For matrix multiplication, the matrix elements have to be stored somewhere and fetched from somewhere. That takes memory, and block memory is dual-port, not eight- or ten-port. Every time, you run up against hardware limitations, and without taking the FPGA architecture into account this code generation is meaningless. Nevertheless, the technology started gaining ground quite a while ago.
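That dual-port limit is exactly what HLS partitioning directives work around: an array is split across multiple memories or registers so an unrolled loop can fetch more than two elements per cycle. A hedged sketch, again with Vivado HLS-style pragmas and illustrative sizes:

constexpr int N = 8;

// One row-by-column dot product. Without ARRAY_PARTITION, a[] and b[]
// each sit in a dual-port block RAM, so the unrolled loop would stall
// waiting for read ports. Partitioning "complete" turns each array
// into N separate registers, giving every unrolled iteration its own
// access path.
int dot(const int a[N], const int b[N]) {
#pragma HLS ARRAY_PARTITION variable=a complete
#pragma HLS ARRAY_PARTITION variable=b complete
    int acc = 0;
    for (int k = 0; k < N; ++k) {
#pragma HLS UNROLL
        acc += a[k] * b[k];
    }
    return acc;
}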
In 2012, Xilinx took a very close look at HLS, right when the Vivado HLS environment appeared. Xilinx released examples and even adapted OpenCV to its platform, which meant you could take OpenCV code and get the same code in Verilog, already tailored for FPGAs. In such environments, functions written in C/C++ could be offloaded to the FPGA practically at the press of a button.
What these environments are and how to run Python on an FPGA can be learned in the podcast, starting at 1:44:00.
About the future of FPGAs
FPGAs always evolve with the trends. Artificial intelligence, for example, is at the peak of popularity right now, so FPGAs have started to gain special computational IP cores for exactly those tasks. And in servers, wherever heavy data processing is needed, FPGAs fit well.
So FPGAs have a future, but it depends on the industry. In the 90s, when telecommunications were booming, DSP blocks and block memory appeared in FPGAs, making it convenient to implement digital signal processing algorithms. In the 2000s came the boom in high-speed data transmission: transceivers appeared in FPGAs and found use in that area. When the link between the processor and the FPGA had to be eliminated, the two were combined in a single package. In other words, whatever the industry demands ends up reflected in FPGAs. But, of course, one wants something new.
To hear what awaits FPGAs, listen from 1:52:00.
Also in the podcast:
8:00 How a platform for freelancers served as the beginning of the Russian FPGA community.
31:00 About the connection between FPGAs and three toxic industries — crypto, HFT, and AI.
45:00 Why Gowin and Lattice have the same logos.
58:00 What an FPGA prototyping engineer actually does.
1:24:00 How things stand with domestic FPGAs.
1:56:00 A chat in Telegram, a conference, and a magazine — what the FPGA-Systems community is up to.
The full version of the podcast is available to watch and listen to on Passionate Engineer →