Decodium (“FT2”) – the fastest mode?

By   20 February 2026 21:33

A quote from the website (ft2.it):

“Complete QSO in 7-11 seconds. Theoretical rate of ~240 QSO/hour, ideal for contests and DXpeditions.”

OK, game on. Let’s check.

What about FT8 Superfox?
The user manual states:

“(…) Standard messages include up to 9 Hound callsigns: as many as four may receive numerical signal reports to start a QSO, and the remainder receive RR73 to acknowledge that a QSO has been logged. (…)”

If you assume that everything runs smoothly and that each fox message always contains four reports (QSO starts), then the stations that received a report will receive RR73 during the next period and four new stations will receive a report. So a maximum of eight reports per minute can be reached, which I believe determines the maximum throughput: eight per minute works out to 480 Q’s per hour.
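The steady-state arithmetic above can be sketched as follows (assuming 15-second FT8 periods and a fox that transmits every other period, as in the reasoning above):

```python
# Sketch of the steady-state SuperFox throughput estimate.
# Assumptions: 15-second FT8 periods, the fox transmits every other period,
# and each fox message carries at most 4 new signal reports
# (each report starts one QSO).

PERIOD_S = 15                                 # one FT8 transmit period
FOX_TX_PER_MINUTE = 60 // (2 * PERIOD_S)      # fox transmits every other period -> 2/min
REPORTS_PER_MESSAGE = 4                       # new reports (QSO starts) per fox message

qso_starts_per_minute = FOX_TX_PER_MINUTE * REPORTS_PER_MESSAGE
qso_per_hour = qso_starts_per_minute * 60

print(qso_starts_per_minute)  # 8 QSO starts per minute
print(qso_per_hour)           # 480 QSOs per hour, the theoretical ceiling
```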

There is a small advantage in that up to 5 RR73s can be sent simultaneously, allowing ‘pending’ RR73s from previous periods to be caught up. This happens, for example, when a hound’s R-NN report is missed due to fading or QRM. The maximum of four reports per message of course remains the same, but the ‘damage’ caused by missed periods is somewhat limited, because 5 RR73s can go together with 4 reports.

The practical rate is obviously less than ideal. I watched the Jarvis DXpedition via a web SDR to get an impression of the real-world performance. Just as with regular FT8, a fair number of repeats was seen, and the actual rate was about 2.5 to 3 Q’s per minute, which is about 150 to 180 per hour.

Initial SuperFox tests as reported by K1JT
Joe reported initial SuperFox tests via groups.io; rates between about 250 and 350 per hour were attained.

SNRs were in the positive range for those tests, and I think that the Jarvis number is closer to reality. The Jarvis team used the RIB (Radio In a Box) concept with only 100 watts output, which is more challenging than expeditions with kW amplifiers. Because I could see both sides, I observed that the hounds regularly failed to decode the fox and, conversely, that the fox did not always respond to the hounds’ reports.

If expeditions use more power, the rate will be higher. I consider rates between 150 and 250 Q’s per hour a fair reference.

Decodium versus FT8 Superfox signal to noise (SNR) requirements
K1JT and K9AN documented sensitivity comparisons between Superfox and “normal” FT8, see [1].

The signal to noise ratio threshold for Decodium is stated to be -10.8 dB; I assume this is the theoretical limit. This is worse than the SuperFox threshold of about -16 dB, so roughly 5 dB of sensitivity is lost relative to SuperFox.

Then we have the hounds. With SuperFox, they transmit normal FT8, which means that the FT8 threshold of approximately -20 dB applies [2]. Decodium hounds call with Decodium (FT2) itself: about a 9 dB disadvantage.
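The dB bookkeeping for the thresholds quoted above (-10.8 dB Decodium, about -16 dB SuperFox, about -20 dB plain FT8) can be laid out in a few lines:

```python
# Rough SNR comparison using the thresholds quoted in the text.
decodium_db = -10.8   # stated Decodium (FT2) threshold
superfox_db = -16.0   # approximate SuperFox threshold
ft8_db = -20.0        # approximate plain-FT8 threshold (non-AP decodes)

fox_penalty = superfox_db - decodium_db    # fox direction: SuperFox vs Decodium
hound_penalty = ft8_db - decodium_db       # hound direction: plain FT8 vs Decodium

print(round(fox_penalty, 1))    # about -5.2 dB lost on the fox side
print(round(hound_penalty, 1))  # about -9.2 dB lost on the hound side
```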

It is obvious that FT8 SuperFox is far superior with respect to SNR requirements. Now let’s dig into the QSO rates.

Attainable QSO rates
The claimed rate of 240 Q’s per hour is the theoretical limit; in real life, speed will be substantially lower. The theoretical limit for SuperFox is 480 Q’s per hour, but the real-world rates were around 250 to 350, or 52 to 73% of the maximum. If we assume that Decodium attains similar percentages, the QSO rate would be between about 125 and 175 Q’s per hour.
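The scaling step above, applying the observed SuperFox efficiency to Decodium’s claimed ceiling, looks like this:

```python
# Scale the observed SuperFox efficiency onto Decodium's claimed ceiling.
superfox_limit = 480                  # theoretical SuperFox Q's/hour
observed_low, observed_high = 250, 350  # real-world SuperFox rates

eff_low = observed_low / superfox_limit    # ~52% of theoretical
eff_high = observed_high / superfox_limit  # ~73% of theoretical

decodium_limit = 240                  # claimed Decodium ceiling
print(round(decodium_limit * eff_low))   # 125 Q's/hour
print(round(decodium_limit * eff_high))  # 175 Q's/hour
```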

But I doubt that these percentages can be reached, because the required bandwidth for Decodium is three times the FT8 bandwidth. If the same number of hounds call in the same passband, there will be far more mutual interference between the Decodium hounds, which implies that decoding will be a lot more difficult. Collisions reduce the decoding probability exponentially.
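A toy model, my own assumption rather than anything from the Decodium documentation, illustrates why tripling the signal bandwidth hurts: if N hounds pick transmit frequencies uniformly at random in a passband of width B, a given hound overlaps nobody else with probability roughly (1 - 2w/B)^(N-1), where w is the signal width. The clear-channel probability shrinks exponentially in the number of hounds, and faster for wider signals:

```python
# Toy collision model (an illustrative assumption, not a Decodium spec):
# N hounds choose frequencies uniformly at random in a passband of width B.
# Two signals of width w overlap when their centres are within w of each other,
# so the "forbidden" window around one hound is about 2w wide.

def clear_probability(n_hounds: int, signal_hz: float, passband_hz: float = 3000.0) -> float:
    """Probability that one hound's signal overlaps no other hound."""
    overlap_window = min(1.0, 2 * signal_hz / passband_hz)
    return (1.0 - overlap_window) ** (n_hounds - 1)

# ~50 Hz is FT8-like; ~150 Hz stands in for a 3x-wider Decodium-like signal.
for width in (50, 150):
    print(width, round(clear_probability(20, width), 3))
```

With 20 hounds in a 3 kHz passband, this sketch gives roughly a 0.53 clear-channel probability at 50 Hz width versus roughly 0.14 at 150 Hz, consistent with the point that wider signals collide much more often.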

It is easy to see that QSO rates drop considerably further in a busy band. At the end of the day, I estimate the real rate near 120 Q’s per hour, maybe even less.

We should not forget that the theoretical limit for single-stream FT8 Fox/Hound is 120 Q’s per hour, and a dual stream has a limit of 240 per hour. If we assume a 50% real-world rate, two-stream FT8 is roughly equal to the Decodium estimate of 120. The power per stream is about 6 dB lower (if set correctly), so the loss of sensitivity is 6 dB, whereas Decodium loses 9 dB. FT8 also retains the advantage of longer transmissions, which cope better with interference. In any case: advantage FT8.
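The two-stream comparison above, using the document’s own figures, reduces to:

```python
# Two-stream FT8 Fox/Hound versus the Decodium estimate, using the
# figures from the paragraph above.
single_stream_limit = 120           # Q's/hour, one QSO per 30-s fox cycle
dual_stream_limit = 2 * single_stream_limit
real_world_fraction = 0.5           # assumed 50% of theoretical

dual_stream_real = dual_stream_limit * real_world_fraction
print(dual_stream_real)             # 120.0, roughly the Decodium estimate

# Sensitivity cost: ~6 dB per-stream power loss for two-stream FT8
# versus Decodium's ~9 dB threshold disadvantage.
print(9 - 6)                        # FT8 keeps about a 3 dB edge
```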

QSO rates do not only depend on reception of the Fox by the hounds. If hounds are weak, the result is repeats or lost contacts. The SNRs of the hounds at the (Super-)Fox are normal FT8 values and thus about 9 dB better than with Decodium.

It is obvious that Decodium has no advantage for DXpeditions. It would result in lots of frustration instead of being happy with a new one.

Other performance considerations
The shorter transmission durations render the protocol more vulnerable to interference and (impulsive) noise. Longer integration enhances resilience, so I expect the Decodium decoding probability to be relatively lower.

Computer clocks need to be very well synchronised, which can be an additional hurdle. Because of the shorter transmissions and the inherently shorter decoding interval, computer requirements are a lot more stringent: computers that perform well with FT8 can be too slow for Decodium. Good for computer vendors, but bad for the environment.

Conclusion
SuperFox demonstrably outperforms Decodium (FT2) in sensitivity, decoding robustness, and achievable throughput—making FT2’s main advantage (shorter exchange time) effectively irrelevant for real-world DX conditions.

In plain terms: FT2 runs faster—but only downhill with a tailwind.

References and notes:

[1] SuperFox and FT8: Weak-Signal Performance, Joe Taylor and Steve Franke, September 6, 2024
https://wsjt.sourceforge.io/SuperFox_Performance.pdf

[2] My estimate of -20 is based on many hours of practical experience with FT8. This value is for non AP (a priori) decodes.

Appendix

Artificial Intelligence software “development” (Appendix 1)

The “developers” state that the code was based on FT8/FT4 code and that AI was used to adapt the protocol for the higher speed. The amount of “development” is probably minor, since I expect that after changing some parameters, much of the code can be used as-is.

The software is announced with a lot of fanfare, as if it were a revolutionary invention, which it simply is not. My analysis refutes the performance claims made, and my impression is that the initiators failed to address fundamental considerations; otherwise, they would have drawn the above conclusions and abandoned the idea.

Addition 22 February 2026 (Appendix 2)

Initially, no source code was published or made available to those who requested it, which infringed the GPLv3 open-source licence. A few days later, the source code was published on GitHub, presumably because of the complaints and discussion that followed.

Having a brief look at the code, it was clear that Decodium is based on the WSJT-X sources. DG2YCB, the developer of WSJT-X Improved, said that K1JT and K9AN had already been testing faster modes long ago while working on FT4, and that the WSJT-X source code contains an “FT2” library. I did not know this existed. It is likely, however, that K1JT and K9AN concluded it was not useful and abandoned the idea, which agrees with my earlier remarks above (Appendix 1).

Some research revealed that the FT2 library source code of Decodium consists of (partly adapted) copies of the WSJT-X FT2 code plus additional code, which is modified code from the FT8 library (and possibly other existing code, which I have not yet checked).

Let me quote the remarks posted today by Reddit user Onesploit:

The code is now available at https://github.com/iu8lmc/decodium3-build/ but only after significant pressure to release it. Looking at the repository, the commits were automatically made by a Claude (Opus 4.6) agent. Whether AI wrote the whole thing or was used to port existing code into WSJTX, we can’t say for certain. What is now an open question is how much of the “hard work and sacrifices” claimed by the author is genuinely novel contribution, and how much was simply prompted into existence or just trivial modification. What we can say is that the source was withheld from a GPL v3 licensed project until people pushed for it. That’s the problem, not the use of AI, but the lack of transparency around both the tooling, the true extent of original contribution, and the license obligations that were apparently treated as optional.

I fully agree.