58 Commits

Author SHA1 Message Date
awe c40df97085 ampl parser 2026-04-15 19:09:11 +03:00
awe 3cb3d1c31a voltage range 2026-04-14 20:39:44 +03:00
awe d170fc11e5 fix 2026-04-14 19:59:48 +03:00
awe 2a65b7a92a fix freq range 2026-04-14 19:48:37 +03:00
awe 5aa4da9beb complex calib add 2026-04-14 19:47:28 +03:00
awe cbd76cfd54 thinking fft 2026-04-13 14:15:56 +03:00
awe 70e18fa300 fix 2026-04-10 22:39:50 +03:00
awe 992ba88480 phase graph 2026-04-10 22:34:36 +03:00
awe d0d2f5a59e low freq filter 2026-04-10 22:17:08 +03:00
awe 17540c3b11 fft new mode 2026-04-10 22:08:43 +03:00
awe 93823b9798 fix chan swap 2026-04-10 21:13:54 +03:00
awe 44a89b8da3 ch1 / ch2 add to pic 2026-04-10 21:02:14 +03:00
awe 0874a8aaf6 sqrt add 2026-04-10 20:43:11 +03:00
awe fac0add45d new complex for --bin 2026-04-10 20:20:16 +03:00
awe eee1039099 fix parser 2026-04-10 19:56:43 +03:00
3cd29c60d6 fix st 2026-04-10 18:01:43 +03:00
awe 934ca33d58 giga fix 2026-04-10 16:20:48 +03:00
awe 9aac162320 fix 2026-04-10 14:46:58 +03:00
awe 4dbedb48bc fix 2026-04-09 19:56:38 +03:00
awe 08823404c0 new logging 2026-04-09 19:47:30 +03:00
awe bc48b9d432 try to speed up 2026-04-09 19:35:07 +03:00
awe afd8538900 try new synchro method 2026-04-09 19:05:58 +03:00
awe 339cb85dce new e502 adc 2026-04-09 18:43:50 +03:00
awe 5152314f21 check 2026-03-26 20:01:56 +03:00
awe 64e66933e4 new adc 2026-03-25 18:54:59 +03:00
awe fa4870c56c check background 2026-03-24 19:37:11 +03:00
awe 3ab9f7ad21 checkbox log det raw 2026-03-24 15:18:08 +03:00
awe bacca8b9d5 new background remove algoritm 2026-03-16 12:48:58 +03:00
awe b70df8c1bd cut the range feature 2026-03-12 18:50:26 +03:00
awe 5054f8d3d7 new fft 2026-03-12 18:09:44 +03:00
awe f02de1c3d0 fix calib 2026-03-12 17:58:44 +03:00
awe 2c3259fc59 new calib 2026-03-12 17:47:21 +03:00
awe f6a7cb5570 add new old fourier 2026-03-12 17:44:15 +03:00
awe 9e09acc708 fix scale 2026-03-12 17:03:41 +03:00
awe dc19cfb35f new calib 2026-03-12 16:59:47 +03:00
awe 00144a21e6 fix plots 2026-03-12 16:53:16 +03:00
awe 157447a237 calib fix 2026-03-12 16:48:26 +03:00
awe c2a892f397 new 2026-03-12 15:12:20 +03:00
awe 3cc423031c ref almost done 2026-03-12 15:07:57 +03:00
085931c87b repainted peak search bounding boxes to green 2026-03-10 15:48:14 +03:00
8e9ffb3de7 implemented background referencing and subtraction if from FFT window and B-scan. Continous ref calculation can be toggled 2026-03-10 15:28:20 +03:00
6260d10c4f fft: add GUI toggle for peak search with rolling-median reference and top-3 peak boxes 2026-03-05 22:02:02 +03:00
c784cb5ffc in --calibrate mode implemented peak intensity measurement (height above some reference) 2026-03-05 18:54:03 +03:00
6f71069d1b implemented new parser: _run_parser_test_stream, activates via --parser_test 2026-03-05 18:35:00 +03:00
6d32cd8712 updated parsers to be more robust. No changes in functionality 2026-03-05 16:39:08 +03:00
a707bedc31 fixed and updated frequency calibration mode. 2026-03-04 17:57:32 +03:00
553f1aae12 fixed frequency calibration constants: now on lines 55-75 calibration variables tweaked to match initial and calibrated frequency ranges 2026-03-04 17:15:15 +03:00
da144a6269 implemented --parser_16_bit_x2 key. If enabled -- receive values as 2 16-bit 2026-03-04 16:39:35 +03:00
e66e7aef83 implemented reference subtraction from B_scan. Reference is average from all visible B-scan. 2026-03-04 16:22:27 +03:00
6724dc0abc fixed app terminationg issues by Ctrl-C and window closing in both backends 2026-03-04 15:06:59 +03:00
a4237d2d0e tweaked PyQT backend 2026-03-04 15:01:16 +03:00
c171ae07e0 implemented --calibrate mode. In this mode frequency calibration coeffs can be entered via GUI. Also fixed some bugs in PyQT backend. Problem: matplotlib is so slow... 2026-03-04 14:34:41 +03:00
283631c52e implemented func calibrate_freqs --it can warp frequency axis. Also movide from abstract bins and counts to freqs and distances 2026-03-04 13:35:05 +03:00
ce11c38b44 --logscale enabled by default 2026-03-03 19:54:58 +03:00
1e098ffa89 implemented new binary mode (--logscale): 2 32-bit values: avg_1, avg_2. Also implemented log-detector mode: avg_1,2 are processed as lg(signal_power) in def _log_pair_to_sweep. Tuning variables: LOG_BASE, LOG_SCALER, LOG_POSTSCALER. 2026-03-03 19:50:44 +03:00
f4a3e6546a add 32-bit binary sweep parsing and percentile scaling for raw waterfall 2026-03-03 18:49:12 +03:00
7d714530bc implemented new normalisator mode: projector. It takes upper and lower evenlopes of ref signal and projects raw data from evenlopes scope to +-1000 2026-02-11 13:25:21 +03:00
awe 415084e66b graph upd 2026-02-11 13:21:37 +03:00
47 changed files with 7386 additions and 3898 deletions

().npy (binary file not shown)

18
.gitignore vendored

@@ -1,12 +1,8 @@
my_picocom_logfile.txt
*pyc
.venv/
env/
__pycache__/
*.log
*.tmp
*.bak
*.swp
*.swo
acm_9
build
.venv
sample_data
*.py[cod]
.pytest_cache/
.Python
my_picocom_logfile.txt
sample_data/

205
README.md Normal file

@@ -0,0 +1,205 @@
# RFG STM32 ADC Receiver GUI
A PyQtGraph application that reads sweeps from a serial port and displays:
- the current sweep
- a waterfall of sweeps
- the FFT of the current sweep
- a B-scan built from the FFTs
After the refactoring, the project is split into the `rfg_adc_plotter` package. The old launch path via `RFG_ADC_dataplotter.py` is kept as a compatibility wrapper.
## Structure
- `RFG_ADC_dataplotter.py` — compatibility entrypoint
- `rfg_adc_plotter/cli.py` — CLI arguments
- `rfg_adc_plotter/io/` — port reading and protocol parsers
- `rfg_adc_plotter/processing/` — FFT, normalization, calibration, peak search
- `rfg_adc_plotter/state/` — runtime state and ring buffers
- `rfg_adc_plotter/gui/pyqtgraph_backend.py` — PyQtGraph GUI
- `replay_pty.py` — replay a capture through a virtual PTY
## Dependencies
Minimum required:
```bash
python3 -m venv .venv
. .venv/bin/activate
pip install numpy pyqtgraph PyQt5
```
If `pyserial` is not installed, the application will try to open the port as a raw TTY.
## Quick start
Run via the legacy entrypoint:
```bash
.venv/bin/python RFG_ADC_dataplotter.py /dev/ttyACM0
```
Run the package directly:
```bash
.venv/bin/python -m rfg_adc_plotter.main /dev/ttyACM0
```
Show the help:
```bash
.venv/bin/python RFG_ADC_dataplotter.py --help
```
## Usage examples
Normal run from a live port:
```bash
.venv/bin/python -m rfg_adc_plotter.main /dev/ttyACM0 --baud 115200
```
More waterfall history and a frame-rate cap:
```bash
.venv/bin/python -m rfg_adc_plotter.main /dev/ttyACM0 --max-sweeps 400 --max-fps 20
```
Fixed Y-axis range:
```bash
.venv/bin/python -m rfg_adc_plotter.main /dev/ttyACM0 --ylim -1000,1000
```
With `simple` normalization enabled:
```bash
.venv/bin/python -m rfg_adc_plotter.main /dev/ttyACM0 --norm-type simple
```
FFT main-peak width measurement mode:
```bash
.venv/bin/python -m rfg_adc_plotter.main /dev/ttyACM0 --calibrate
```
Search for the top-3 peaks relative to a rolling-median reference:
```bash
.venv/bin/python -m rfg_adc_plotter.main /dev/ttyACM0 --peak_search --peak_ref_window 1.5
```
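The rolling-median reference behind `--peak_search` can be sketched roughly as follows. This is illustrative only: `top3_peaks` is a hypothetical helper, not the app's actual function, and the window here is given in FFT bins, whereas the real flag takes a width in GHz/m along the FFT axis.

```python
import numpy as np

def top3_peaks(spectrum, ref_window_bins=31):
    """Subtract a rolling-median reference and return the indices of the
    three strongest local maxima above it (hypothetical sketch)."""
    spec = np.asarray(spectrum, dtype=np.float64)
    half = ref_window_bins // 2
    padded = np.pad(spec, half, mode="edge")
    # Rolling median via a strided window view (fine for sketch-sized arrays).
    windows = np.lib.stride_tricks.sliding_window_view(padded, ref_window_bins)
    reference = np.median(windows, axis=-1)
    excess = spec - reference
    # Local maxima of the excess over the reference.
    peaks = np.where((excess[1:-1] > excess[:-2]) & (excess[1:-1] >= excess[2:]))[0] + 1
    return peaks[np.argsort(excess[peaks])[::-1][:3]]
```

The median (rather than mean) makes the reference insensitive to the peaks themselves, so narrow peaks stand out while the broadband floor is removed.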
Subtract the mean spectrum over the last N seconds:
```bash
.venv/bin/python -m rfg_adc_plotter.main /dev/ttyACM0 --spec-mean-sec 3
```
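The per-frequency mean subtraction behind `--spec-mean-sec` can be sketched as below. `SpecMeanSubtractor` and its fps-based window sizing are assumptions for illustration, not the app's actual implementation:

```python
import numpy as np
from collections import deque

class SpecMeanSubtractor:
    """Keep the most recent FFT columns and subtract their per-bin mean."""

    def __init__(self, window_sec: float, fps: float):
        # Window length in sweeps, assuming roughly `fps` sweeps per second.
        self.buf = deque(maxlen=max(1, int(window_sec * fps)))

    def __call__(self, spectrum: np.ndarray) -> np.ndarray:
        self.buf.append(np.asarray(spectrum, dtype=np.float32))
        # Mean over the retained columns, per frequency bin.
        return spectrum - np.mean(self.buf, axis=0)
```

A stationary background averages into the reference and cancels, while transient reflections survive the subtraction.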
## Input protocols
Default ASCII protocol:
```bash
.venv/bin/python -m rfg_adc_plotter.main /dev/ttyACM0
```
Legacy binary:
```bash
.venv/bin/python -m rfg_adc_plotter.main /dev/ttyACM0 --bin
```
`--bin` understands a mixed 8-byte stream:
- `0x000A,step,ch1_i16,ch2_i16` for CH1/CH2 from `kamil_adc` (`tty:/tmp/ttyADC_data`)
- `0x001A,step,data_i16,0x0000` for the log detector
For `0x000A` the raw curve is built as `ch1^2 + ch2^2`, and the FFT is computed from the complex signal `ch1 + i*ch2`.
For `0x001A` the signed `data_i16` is first converted to volts; the raw curve then shows `V`, and the FFT is computed from `exp(V)`.
The `--tty-range-v` parameter applies to both kinds of `--bin` data.
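For illustration, one frame of the mixed stream described above might be decoded like this. This is a sketch under assumptions: little-endian int16 words and a linear mapping of int16 codes to the ±`--tty-range-v` voltage range; `decode_frame` is a hypothetical helper, not the package's parser.

```python
import struct
import numpy as np

TTY_RANGE_V = 5.0            # matches the --tty-range-v default
SCALE = TTY_RANGE_V / 32768.0  # int16 code -> volts (assumed linear mapping)

def decode_frame(frame: bytes):
    """Decode one 8-byte frame into (step, raw_value, fft_input)."""
    marker, step, a, b = struct.unpack("<4h", frame)  # four little-endian int16 words
    if marker == 0x000A:  # CH1/CH2 pair
        ch1, ch2 = a * SCALE, b * SCALE
        return step, ch1 ** 2 + ch2 ** 2, ch1 + 1j * ch2
    if marker == 0x001A:  # log-detector sample; fourth word is 0x0000 padding
        v = a * SCALE
        return step, v, np.exp(v)
    raise ValueError(f"unknown marker 0x{marker & 0xFFFF:04X}")
```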
Logscale binary with an `int32` pair:
```bash
.venv/bin/python -m rfg_adc_plotter.main /dev/ttyACM0 --logscale
```
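The `--logscale` help text gives the sweep value as `|10**(avg_1*0.001) - 10**(avg_2*0.001)|`. A minimal sketch of that conversion, using the constants from `rfg_adc_plotter/constants.py`; clamping with `LOG_EXP_LIMIT` is an assumption here to guard against overflow, not necessarily how the package applies it:

```python
import numpy as np

LOG_BASE, LOG_SCALER = 10.0, 0.001  # from rfg_adc_plotter/constants.py
LOG_EXP_LIMIT = 300.0               # clamp exponents to keep 10**x finite

def log_pair_to_sweep(avg_1, avg_2):
    """Convert an (avg_1, avg_2) int32 pair to a linear-power sweep value."""
    e1 = np.clip(np.asarray(avg_1, dtype=np.float64) * LOG_SCALER, -LOG_EXP_LIMIT, LOG_EXP_LIMIT)
    e2 = np.clip(np.asarray(avg_2, dtype=np.float64) * LOG_SCALER, -LOG_EXP_LIMIT, LOG_EXP_LIMIT)
    return np.abs(LOG_BASE ** e1 - LOG_BASE ** e2)
```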
Complex binary `16-bit x2`:
```bash
.venv/bin/python -m rfg_adc_plotter.main /dev/ttyACM0 --parser_16_bit_x2
```
Test parser for the experimental `16-bit x2` stream:
```bash
.venv/bin/python -m rfg_adc_plotter.main /dev/ttyACM0 --parser_test
```
Complex ASCII stream `step real imag`:
```bash
.venv/bin/python -m rfg_adc_plotter.main /dev/ttyACM0 --parser_complex_ascii
```
## Local verification via replay_pty
If you have a capture log file, you can replay it as a virtual serial port.
In the first terminal:
```bash
.venv/bin/python replay_pty.py my_picocom_logfile.txt --pty /tmp/ttyVIRT0 --speed 1.0
```
In the second terminal:
```bash
.venv/bin/python -m rfg_adc_plotter.main /tmp/ttyVIRT0
```
Fastest possible replay:
```bash
.venv/bin/python replay_pty.py my_picocom_logfile.txt --pty /tmp/ttyVIRT0 --speed 0
```
## Remote capture over SSH
The application has no built-in SSH source. For a remote check, first bring the stream or a log file to the local machine, then either:
- run the GUI directly on a local PTY
- save the stream to a file and replay it with `replay_pty.py`
Example command for manually diagnosing a remote device:
```bash
ssh 192.148.0.148 'ls -l /dev/ttyACM0'
```
If the stream is reachable on the remote machine, it is easiest to save it to a file and then replay that file locally with `replay_pty.py`.
For a local `tty` stream from `kamil_adc`, use:
```bash
.venv/bin/python -m rfg_adc_plotter.main /tmp/ttyADC_data --bin
```
## Checks and tests
Syntax check:
```bash
python3 -m compileall RFG_ADC_dataplotter.py replay_pty.py rfg_adc_plotter tests
```
Run the tests:
```bash
.venv/bin/python -m unittest discover -s tests -v
```
## Notes
- Only the PyQtGraph backend is supported.
- `--backend mpl` is kept only for CLI compatibility and exits with an error.
- The `sample_data/` directory and local logs are listed in `.gitignore` and are not treated as part of the repository's required tracked state.

8
RFG_ADC_dataplotter.py Normal file

@@ -0,0 +1,8 @@
#!/usr/bin/env python3
"""Compatibility wrapper for the modularized ADC plotter."""
from rfg_adc_plotter.main import main
if __name__ == "__main__":
main()

Binary file not shown.

Binary file not shown.

replay_pty.py

@@ -1,16 +1,7 @@
#!/usr/bin/env python3
"""
Serial port emulator: replays a log file in a loop through a PTY.
"""Replay a capture file through a pseudo-TTY for local GUI verification."""
Usage:
python3 replay_pty.py my_picocom_logfile.txt
python3 replay_pty.py my_picocom_logfile.txt --pty /tmp/ttyVIRT0
python3 replay_pty.py my_picocom_logfile.txt --speed 2.0 # 2x faster than real time
python3 replay_pty.py my_picocom_logfile.txt --speed 0 # as fast as possible
Then in another terminal:
python -m rfg_adc_plotter.main /tmp/ttyVIRT0
"""
from __future__ import annotations
import argparse
import os
@@ -18,7 +9,7 @@ import sys
import time
def main():
def main() -> None:
parser = argparse.ArgumentParser(
description="Воспроизводит лог-файл через PTY как виртуальный серийный порт."
)
@@ -43,20 +34,18 @@ def main():
"--baud",
type=int,
default=115200,
help="Скорость (бод) для расчёта задержек (по умолчанию 115200)",
help="Скорость (бод) для расчета задержек (по умолчанию 115200)",
)
args = parser.parse_args()
if not os.path.isfile(args.file):
sys.stderr.write(f"[error] Файл не найден: {args.file}\n")
sys.exit(1)
raise SystemExit(1)
# Open a PTY pair: master (we write) / slave (the GUI reads)
master_fd, slave_fd = os.openpty()
slave_path = os.ttyname(slave_fd)
os.close(slave_fd) # the GUI opens the slave itself via the symlink
os.close(slave_fd)
# Symlink with a convenient name
try:
os.unlink(args.pty)
except FileNotFoundError:
@@ -64,26 +53,25 @@ def main():
os.symlink(slave_path, args.pty)
print(f"PTY slave : {slave_path}")
print(f"Симлинк : {args.pty} {slave_path}")
print(f"Запустите : python -m rfg_adc_plotter.main {args.pty}")
print(f"Симлинк : {args.pty} -> {slave_path}")
print(f"Запустите : python3 -m rfg_adc_plotter.main {args.pty}")
print("Ctrl+C для остановки.\n")
# Delay per byte: 10 bits (8N1) / baud rate / speed multiplier
if args.speed > 0:
bytes_per_sec = args.baud / 10.0 * args.speed
delay_per_byte = 1.0 / bytes_per_sec
else:
delay_per_byte = 0.0
_CHUNK = 4096
chunk_size = 4096
loop = 0
try:
while True:
loop += 1
print(f"[loop {loop}] {args.file}")
with open(args.file, "rb") as f:
with open(args.file, "rb") as handle:
while True:
chunk = f.read(_CHUNK)
chunk = handle.read(chunk_size)
if not chunk:
break
os.write(master_fd, chunk)

rfg_adc_plotter/__init__.py Normal file

@@ -0,0 +1,3 @@
"""RFG ADC plotter package."""
__all__ = []

145
rfg_adc_plotter/cli.py Normal file

@@ -0,0 +1,145 @@
"""Command-line parser for the ADC plotter."""
from __future__ import annotations
import argparse
def build_parser() -> argparse.ArgumentParser:
parser = argparse.ArgumentParser(
description=(
"Читает свипы из виртуального COM-порта и рисует: "
"последний свип и водопад (реалтайм)."
)
)
parser.add_argument(
"port",
help="Путь к порту, например /dev/ttyACM1 или COM3 (COM10+: \\\\.\\COM10)",
)
parser.add_argument("--baud", type=int, default=115200, help="Скорость (по умолчанию 115200)")
parser.add_argument("--max-sweeps", type=int, default=200, help="Количество видимых свипов в водопаде")
parser.add_argument("--max-fps", type=float, default=30.0, help="Лимит частоты отрисовки, кадров/с")
parser.add_argument("--cmap", default="viridis", help="Цветовая карта водопада")
parser.add_argument(
"--spec-clip",
default="2,98",
help=(
"Процентильная обрезка уровней водопада спектров, %% (min,max). "
"Напр. 2,98. 'off' — отключить"
),
)
parser.add_argument(
"--spec-mean-sec",
type=float,
default=0.0,
help=(
"Вычитание среднего по каждой частоте за последние N секунд "
"в водопаде спектров (0 — отключить)"
),
)
parser.add_argument("--title", default="ADC Sweeps", help="Заголовок окна")
parser.add_argument(
"--fancy",
action="store_true",
help="Заполнять выпавшие точки средними значениями между соседними",
)
parser.add_argument(
"--ylim",
type=str,
default=None,
help="Фиксированные Y-пределы для кривой формата min,max (например -1000,1000). По умолчанию авто",
)
parser.add_argument(
"--backend",
choices=["auto", "pg", "mpl"],
default="pg",
help="Совместимый флаг. Поддерживаются только auto и pg; mpl удален.",
)
parser.add_argument(
"--opengl",
action="store_true",
help="Включить OpenGL-ускорение для PyQtGraph. По умолчанию используется CPU-отрисовка.",
)
parser.add_argument(
"--norm-type",
choices=["projector", "simple"],
default="projector",
help="Тип нормировки: projector (по огибающим в [-1,+1]) или simple (raw/calib)",
)
parser.add_argument(
"--bin",
dest="bin_mode",
action="store_true",
help=(
"8-байтный бинарный протокол: либо legacy старт "
"0xFFFF,0xFFFF,0xFFFF,(CH<<8)|0x0A и точки step,uint32(hi16,lo16),0x000A, "
"либо mixed поток 0x000A,step,ch1_i16,ch2_i16 и 0x001A,step,data_i16,0x0000. "
"Для 0x000A: после парсинга int16 переводятся в В, "
"сырая кривая = ch1^2+ch2^2 (В^2), FFT вход = ch1+i*ch2 (В). "
"Для 0x001A: code_i16 переводится в В, raw = V, FFT вход = exp(V)"
),
)
parser.add_argument(
"--tty-range-v",
type=float,
default=5.0,
help=(
"Полный диапазон для пересчета tty int16 в напряжение ±V "
"(для --bin 0x000A CH1/CH2 и 0x001A log-detector, по умолчанию 5.0)"
),
)
parser.add_argument(
"--logscale",
action="store_true",
help=(
"Новый бинарный протокол: точка несет пару int32 (avg_1, avg_2), "
"а свип считается как |10**(avg_1*0.001) - 10**(avg_2*0.001)|"
),
)
parser.add_argument(
"--parser_16_bit_x2",
action="store_true",
help=(
"Бинарный complex-протокол c парой int16 (Re, Im): "
"старт 0xFFFF,0xFFFF,0xFFFF,(CH<<8)|0x0A; точка step,re_lo16,im_lo16,0xFFFF"
),
)
parser.add_argument(
"--parser_test",
action="store_true",
help=(
"Тестовый парсер для complex-формата 16-bit x2: "
"одиночный 0xFFFF завершает точку, серия 0xFFFF начинает новый свип"
),
)
parser.add_argument(
"--parser_complex_ascii",
action="store_true",
help=(
"ASCII-поток из трех чисел на строку: step real imag. "
"Новый свип определяется по сбросу/повтору step, FFT строится по комплексным данным"
),
)
parser.add_argument(
"--calibrate",
action="store_true",
help=(
"Режим измерения ширины главного пика FFT: рисует красные маркеры "
"границ и фона и выводит ширину пика в статус"
),
)
parser.add_argument(
"--peak_search",
action="store_true",
help=(
"Поиск топ-3 пиков на FFT относительно референса (скользящая медиана) "
"с отрисовкой bounding box и параметров пиков"
),
)
parser.add_argument(
"--peak_ref_window",
type=float,
default=1.0,
help="Ширина окна скользящей медианы для --peak_search, ГГц/м по оси FFT (по умолчанию 1.0)",
)
return parser

rfg_adc_plotter/constants.py

@@ -1,21 +1,17 @@
WF_WIDTH = 1000 # maximum number of points in a waterfall row
FFT_LEN = 4096 # FFT length for the spectrum / spectrum waterfall
LOG_EXP = 2.0 # exponent base for the --logscale option
# Threshold for inverting raw data: if the sweep's mean value is below the
# threshold, the signal is assumed to be "below zero" and the sweep is multiplied by -1
"""Shared constants for sweep parsing and visualization."""
WF_WIDTH = 1000
FFT_LEN = 2048
BACKGROUND_MEDIAN_SWEEPS = 64
SWEEP_FREQ_MIN_GHZ = 3.3
SWEEP_FREQ_MAX_GHZ = 6.3
LOG_BASE = 10.0
LOG_SCALER = 0.001
LOG_POSTSCALER = 10.0
LOG_EXP_LIMIT = 300.0
C_M_S = 299_792_458.0
DATA_INVERSION_THRESHOLD = 10.0
# Frequency grid of the working sweep (positive part), GHz
FREQ_MIN_GHZ = 3.323
FREQ_MAX_GHZ = 14.323
# Speed of light for converting time of flight to one-way depth
SPEED_OF_LIGHT_M_S = 299_792_458.0
# IFFT spectrum parameters (time profile from the 3.2..14.3 GHz spectrum)
# The two-sided spectrum is formed as: [zeros -14.3..-3.2 | zeros -3.2..+3.2 | data +3.2..+14.3]
ZEROS_LOW = 758 # zeros from -14.3 to -3.2 GHz
ZEROS_MID = 437 # zeros from -3.2 to +3.2 GHz
SWEEP_LEN = 758 # expected sweep length (3.2 → 14.3 GHz)
FREQ_SPAN_GHZ = 28.6 # full two-sided span (-14.3 .. +14.3 GHz)
IFFT_LEN = ZEROS_LOW + ZEROS_MID + SWEEP_LEN # = 1953
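The layout these constants describe (measured band placed in the upper third of a two-sided grid, then IFFT'd into a time profile, with bin index converted to one-way depth via the speed of light) can be sketched as follows. `sweep_to_depth_profile` is a hypothetical helper under that layout, not code from the package:

```python
import numpy as np

ZEROS_LOW, ZEROS_MID, SWEEP_LEN = 758, 437, 758
IFFT_LEN = ZEROS_LOW + ZEROS_MID + SWEEP_LEN  # 1953
FREQ_SPAN_GHZ = 28.6
SPEED_OF_LIGHT_M_S = 299_792_458.0

def sweep_to_depth_profile(sweep):
    """Embed the measured band in the two-sided grid, IFFT to a time profile,
    and attach a one-way depth axis (illustrative sketch)."""
    sweep = np.asarray(sweep, dtype=np.complex64)
    assert sweep.size == SWEEP_LEN
    spectrum = np.zeros(IFFT_LEN, dtype=np.complex64)
    # [zeros -14.3..-3.2 | zeros -3.2..+3.2 | data +3.2..+14.3]
    spectrum[ZEROS_LOW + ZEROS_MID:] = sweep
    profile = np.abs(np.fft.ifft(spectrum))
    # Bin spacing df = span / N, so the IFFT time step is dt = 1 / (N * df);
    # one-way depth per bin is c * dt / 2.
    df_hz = FREQ_SPAN_GHZ * 1e9 / IFFT_LEN
    dt_s = 1.0 / (IFFT_LEN * df_hz)
    depth_m = SPEED_OF_LIGHT_M_S * np.arange(IFFT_LEN) * dt_s / 2.0
    return depth_m, profile
```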

rfg_adc_plotter/gui/__init__.py Normal file

@ -0,0 +1,5 @@
"""GUI backends."""
from rfg_adc_plotter.gui.pyqtgraph_backend import run_pyqtgraph
__all__ = ["run_pyqtgraph"]


@@ -1,663 +0,0 @@
"""Matplotlib-бэкенд реалтайм-плоттера свипов."""
import sys
import threading
from queue import Queue
from typing import Optional, Tuple
import numpy as np
from rfg_adc_plotter.constants import FFT_LEN, FREQ_MAX_GHZ, FREQ_MIN_GHZ, IFFT_LEN
from rfg_adc_plotter.io.sweep_reader import SweepReader
from rfg_adc_plotter.processing.normalizer import build_calib_envelopes
from rfg_adc_plotter.state.app_state import AppState, format_status
from rfg_adc_plotter.state.ring_buffer import RingBuffer
from rfg_adc_plotter.types import SweepPacket
def _parse_ylim(ylim_str: Optional[str]) -> Optional[Tuple[float, float]]:
if not ylim_str:
return None
try:
y0, y1 = ylim_str.split(",")
return (float(y0), float(y1))
except Exception:
sys.stderr.write("[warn] Некорректный формат --ylim, игнорирую. Ожидалось min,max\n")
return None
def _parse_spec_clip(spec: Optional[str]) -> Optional[Tuple[float, float]]:
if not spec:
return None
s = str(spec).strip().lower()
if s in ("off", "none", "no"):
return None
try:
p0, p1 = s.replace(";", ",").split(",")
low, high = float(p0), float(p1)
if not (0.0 <= low < high <= 100.0):
return None
return (low, high)
except Exception:
return None
def _visible_levels(data: np.ndarray, axis) -> Optional[Tuple[float, float]]:
"""(vmin, vmax) по текущей видимой области imshow."""
if data.size == 0:
return None
ny, nx = data.shape[0], data.shape[1]
try:
x0, x1 = axis.get_xlim()
y0, y1 = axis.get_ylim()
except Exception:
x0, x1 = 0.0, float(nx - 1)
y0, y1 = 0.0, float(ny - 1)
xmin, xmax = sorted((float(x0), float(x1)))
ymin, ymax = sorted((float(y0), float(y1)))
ix0 = max(0, min(nx - 1, int(np.floor(xmin))))
ix1 = max(0, min(nx - 1, int(np.ceil(xmax))))
iy0 = max(0, min(ny - 1, int(np.floor(ymin))))
iy1 = max(0, min(ny - 1, int(np.ceil(ymax))))
if ix1 < ix0:
ix1 = ix0
if iy1 < iy0:
iy1 = iy0
sub = data[iy0 : iy1 + 1, ix0 : ix1 + 1]
finite = np.isfinite(sub)
if not finite.any():
return None
vals = sub[finite]
vmin = float(np.min(vals))
vmax = float(np.max(vals))
if not (np.isfinite(vmin) and np.isfinite(vmax)) or vmin == vmax:
return None
return (vmin, vmax)
def run_matplotlib(args):
try:
import matplotlib
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
from matplotlib.widgets import Button as MplButton
from matplotlib.widgets import CheckButtons, RadioButtons, Slider, TextBox
except Exception as e:
sys.stderr.write(f"[error] Нужны matplotlib и её зависимости: {e}\n")
sys.exit(1)
q: Queue[SweepPacket] = Queue(maxsize=1000)
stop_event = threading.Event()
reader = SweepReader(
args.port,
args.baud,
q,
stop_event,
fancy=bool(args.fancy),
bin_mode=bool(getattr(args, "bin_mode", False)),
logscale=bool(getattr(args, "logscale", False)),
debug=bool(getattr(args, "debug", False)),
)
reader.start()
max_sweeps = int(max(10, args.max_sweeps))
max_fps = max(1.0, float(args.max_fps))
interval_ms = int(1000.0 / max_fps)
spec_clip = _parse_spec_clip(getattr(args, "spec_clip", None))
spec_mean_sec = float(getattr(args, "spec_mean_sec", 0.0))
fixed_ylim = _parse_ylim(getattr(args, "ylim", None))
norm_type = str(getattr(args, "norm_type", "projector")).strip().lower()
logscale_enabled = bool(getattr(args, "logscale", False))
state = AppState(norm_type=norm_type)
state.configure_capture_import(fancy=bool(args.fancy), logscale=bool(getattr(args, "logscale", False)))
ring = RingBuffer(max_sweeps)
try:
ring.set_fft_complex_mode(str(getattr(args, "ifft_complex_mode", "arccos")))
except Exception:
pass
# --- Figure creation ---
fig, axs = plt.subplots(2, 2, figsize=(12, 8))
(ax_line, ax_img), (ax_fft, ax_spec) = axs
if hasattr(fig.canvas.manager, "set_window_title"):
fig.canvas.manager.set_window_title(args.title)
fig.subplots_adjust(wspace=0.25, hspace=0.35, left=0.07, right=0.90, top=0.92, bottom=0.22)
# Status line
status_text = fig.text(0.01, 0.01, "", ha="left", va="bottom", fontsize=8, family="monospace")
pipeline_text = fig.text(0.01, 0.03, "", ha="left", va="bottom", fontsize=8, family="monospace")
ref_text = fig.text(0.01, 0.05, "", ha="left", va="bottom", fontsize=8, family="monospace")
# Latest sweep plot
line_obj, = ax_line.plot([], [], lw=1, color="tab:blue")
line_norm_obj, = ax_line.plot([], [], lw=1, color="tab:green")
line_pre_exp_obj, = ax_line.plot([], [], lw=1, color="tab:red")
line_post_exp_obj, = ax_line.plot([], [], lw=1, color="tab:green")
line_env_lo, = ax_line.plot([], [], lw=1, color="tab:orange", linestyle="--", alpha=0.7)
line_env_hi, = ax_line.plot([], [], lw=1, color="tab:orange", linestyle="--", alpha=0.7)
ax_line.set_title("Сырые данные", pad=1)
ax_line.set_xlabel("Частота, ГГц")
channel_text = ax_line.text(
0.98, 0.98, "", transform=ax_line.transAxes,
ha="right", va="top", fontsize=9, family="monospace",
)
if fixed_ylim is not None:
ax_line.set_ylim(fixed_ylim)
# Spectrum plot
fft_line_obj, = ax_fft.plot([], [], lw=1, color="tab:blue", label="full band")
ax_fft.set_title("FFT", pad=1)
ax_fft.set_xlabel("Глубина, м")
ax_fft.set_ylabel("Амплитуда")
ax_fft.legend(loc="upper right", fontsize=8)
# Raw-data waterfall
img_obj = ax_img.imshow(
np.zeros((1, 1), dtype=np.float32),
aspect="auto", interpolation="nearest", origin="lower", cmap=args.cmap,
)
ax_img.set_title("Сырые данные", pad=12)
ax_img.set_ylabel("частота")
try:
ax_img.tick_params(axis="x", labelbottom=False)
except Exception:
pass
# Spectrum waterfall
img_fft_obj = ax_spec.imshow(
np.zeros((1, 1), dtype=np.float32),
aspect="auto", interpolation="nearest", origin="lower", cmap=args.cmap,
)
ax_spec.set_title("B-scan", pad=12)
ax_spec.set_ylabel("Глубина, м")
try:
ax_spec.tick_params(axis="x", labelbottom=False)
except Exception:
pass
# Sliders and checkboxes
contrast_slider = None
try:
fft_bins = ring.fft_bins if ring.fft_bins > 0 else IFFT_LEN
ax_smin = fig.add_axes([0.92, 0.55, 0.02, 0.35])
ax_smax = fig.add_axes([0.95, 0.55, 0.02, 0.35])
ax_sctr = fig.add_axes([0.98, 0.55, 0.02, 0.35])
ax_cb = fig.add_axes([0.92, 0.45, 0.08, 0.08])
ax_cb_file = fig.add_axes([0.92, 0.36, 0.08, 0.08])
ax_line_mode = fig.add_axes([0.92, 0.10, 0.08, 0.08])
ax_ifft_mode = fig.add_axes([0.92, 0.01, 0.08, 0.08])
ymin_slider = Slider(ax_smin, "Y min", 0, max(1, fft_bins - 1), valinit=0, valstep=1, orientation="vertical")
ymax_slider = Slider(ax_smax, "Y max", 0, max(1, fft_bins - 1), valinit=max(1, fft_bins - 1), valstep=1, orientation="vertical")
contrast_slider = Slider(ax_sctr, "Int max", 0, 100, valinit=100, valstep=1, orientation="vertical")
calib_cb = CheckButtons(ax_cb, ["калибровка"], [False])
calib_file_cb = CheckButtons(ax_cb_file, ["из файла"], [False])
line_mode_rb = RadioButtons(ax_line_mode, ("raw", "processed"), active=0)
ifft_mode_rb = RadioButtons(
ax_ifft_mode,
("arccos", "diff"),
active=(1 if ring.fft_complex_mode == "diff" else 0),
)
try:
ax_line_mode.set_title("Линия", fontsize=8, pad=2)
except Exception:
pass
try:
ax_ifft_mode.set_title("IFFT", fontsize=8, pad=2)
except Exception:
pass
line_mode_state = {"value": "raw"}
ifft_mode_state = {"value": str(ring.fft_complex_mode)}
import os as _os
try:
import tkinter as _tk
from tkinter import filedialog as _tk_filedialog
_tk_available = True
except Exception:
_tk = None
_tk_filedialog = None
_tk_available = False
# Bottom panel with paths and buttons (works without Qt; file picking via tkinter is optional).
ax_calib_path = fig.add_axes([0.07, 0.14, 0.40, 0.04])
ax_calib_load = fig.add_axes([0.48, 0.14, 0.07, 0.04])
ax_calib_pick = fig.add_axes([0.56, 0.14, 0.06, 0.04])
ax_calib_sample = fig.add_axes([0.63, 0.14, 0.09, 0.04])
ax_calib_save = fig.add_axes([0.73, 0.14, 0.10, 0.04])
ax_bg_path = fig.add_axes([0.07, 0.09, 0.40, 0.04])
ax_bg_load = fig.add_axes([0.48, 0.09, 0.07, 0.04])
ax_bg_pick = fig.add_axes([0.56, 0.09, 0.06, 0.04])
ax_bg_sample = fig.add_axes([0.63, 0.09, 0.09, 0.04])
ax_bg_save2 = fig.add_axes([0.73, 0.09, 0.10, 0.04])
calib_path_box = TextBox(ax_calib_path, "Калибр", initial=state.calib_envelope_path)
bg_path_box = TextBox(ax_bg_path, "Фон", initial=state.background_path)
calib_load_btn2 = MplButton(ax_calib_load, "Загруз.")
calib_pick_btn2 = MplButton(ax_calib_pick, "Файл")
calib_sample_btn2 = MplButton(ax_calib_sample, "sample")
calib_save_btn2 = MplButton(ax_calib_save, "Сохр env")
bg_load_btn2 = MplButton(ax_bg_load, "Загруз.")
bg_pick_btn2 = MplButton(ax_bg_pick, "Файл")
bg_sample_btn2 = MplButton(ax_bg_sample, "sample")
bg_save_btn2 = MplButton(ax_bg_save2, "Сохр фон")
if not _tk_available:
try:
calib_pick_btn2.label.set_text("Файл-")
bg_pick_btn2.label.set_text("Файл-")
except Exception:
pass
def _tb_text(tb):
try:
return str(tb.text).strip()
except Exception:
return ""
def _pick_file_dialog(initial_path: str) -> str:
if not _tk_available or _tk is None or _tk_filedialog is None:
return ""
root = None
try:
root = _tk.Tk()
root.withdraw()
root.attributes("-topmost", True)
except Exception:
root = None
try:
return str(
_tk_filedialog.askopenfilename(
initialdir=_os.path.dirname(initial_path) or ".",
initialfile=_os.path.basename(initial_path) or "",
title="Выбрать файл эталона (.npy или capture)",
)
)
finally:
try:
if root is not None:
root.destroy()
except Exception:
pass
def _sync_path_boxes():
try:
if _tb_text(calib_path_box) != state.calib_envelope_path:
calib_path_box.set_val(state.calib_envelope_path)
except Exception:
pass
try:
if _tb_text(bg_path_box) != state.background_path:
bg_path_box.set_val(state.background_path)
except Exception:
pass
def _refresh_status_texts():
pipeline_text.set_text(f"{state.format_pipeline_status()} | cplx:{ring.fft_complex_mode}")
ref_text.set_text(state.format_reference_status())
try:
fig.canvas.draw_idle()
except Exception:
pass
def _line_mode() -> str:
return str(line_mode_state.get("value", "raw"))
def _refresh_checkboxes():
try:
# always show the file-mode checkbox; it is active when a path/data is present.
ax_cb_file.set_visible(True)
except Exception:
pass
def _load_calib_from_ui():
p = _tb_text(calib_path_box)
if p:
state.set_calib_envelope_path(p)
ok = state.load_calib_reference()
if ok and bool(calib_file_cb.get_status()[0]):
state.set_calib_mode("file")
state.set_calib_enabled(bool(calib_cb.get_status()[0]))
_sync_path_boxes()
_refresh_checkboxes()
_refresh_status_texts()
return ok
def _load_bg_from_ui():
p = _tb_text(bg_path_box)
if p:
state.set_background_path(p)
ok = state.load_background_reference()
_sync_path_boxes()
_refresh_status_texts()
return ok
def _on_ylim_change(_val):
try:
y0 = int(min(ymin_slider.val, ymax_slider.val))
y1 = int(max(ymin_slider.val, ymax_slider.val))
ax_spec.set_ylim(y0, y1)
fig.canvas.draw_idle()
except Exception:
pass
def _on_calib_file_clicked(_v):
use_file = bool(calib_file_cb.get_status()[0])
if use_file:
ok = _load_calib_from_ui()
if ok:
state.set_calib_mode("file")
else:
calib_file_cb.set_active(0) # uncheck the box
else:
state.set_calib_mode("live")
state.set_calib_enabled(bool(calib_cb.get_status()[0]))
_refresh_status_texts()
def _on_calib_clicked(_v):
state.set_calib_enabled(bool(calib_cb.get_status()[0]))
_refresh_checkboxes()
_refresh_status_texts()
ax_btn_bg = fig.add_axes([0.92, 0.27, 0.08, 0.05])
ax_cb_bg = fig.add_axes([0.92, 0.20, 0.08, 0.06])
save_bg_btn = MplButton(ax_btn_bg, "Сохр. фон")
bg_cb = CheckButtons(ax_cb_bg, ["вычет фона"], [False])
def _on_save_bg(_event):
ok = state.save_background()
if ok:
state.load_background()
_sync_path_boxes()
_refresh_status_texts()
def _on_bg_clicked(_v):
state.set_background_enabled(bool(bg_cb.get_status()[0]))
_refresh_status_texts()
def _on_calib_load_btn(_event):
_load_calib_from_ui()
def _on_calib_pick_btn(_event):
path = _pick_file_dialog(_tb_text(calib_path_box) or state.calib_envelope_path)
if not path:
return
state.set_calib_envelope_path(path)
_sync_path_boxes()
_refresh_status_texts()
def _on_calib_sample_btn(_event):
state.set_calib_envelope_path(_os.path.join("sample_data", "no_antennas_35dB_attenuators"))
_sync_path_boxes()
if _load_calib_from_ui() and not bool(calib_file_cb.get_status()[0]):
calib_file_cb.set_active(0)
def _on_calib_save_btn(_event):
state.save_calib_envelope()
_sync_path_boxes()
_refresh_status_texts()
def _on_bg_load_btn(_event):
_load_bg_from_ui()
def _on_bg_pick_btn(_event):
path = _pick_file_dialog(_tb_text(bg_path_box) or state.background_path)
if not path:
return
state.set_background_path(path)
_sync_path_boxes()
_refresh_status_texts()
def _on_bg_sample_btn(_event):
state.set_background_path(_os.path.join("sample_data", "empty"))
_sync_path_boxes()
_load_bg_from_ui()
def _on_bg_save_btn2(_event):
ok = state.save_background()
if ok:
state.load_background()
_sync_path_boxes()
_refresh_status_texts()
def _on_line_mode_clicked(label):
line_mode_state["value"] = str(label)
try:
fig.canvas.draw_idle()
except Exception:
pass
def _on_ifft_mode_clicked(label):
ifft_mode_state["value"] = str(label)
try:
ring.set_fft_complex_mode(str(label))
except Exception:
pass
fft_line_obj.set_data([], [])
_refresh_status_texts()
try:
fig.canvas.draw_idle()
except Exception:
pass
save_bg_btn.on_clicked(_on_save_bg)
bg_cb.on_clicked(_on_bg_clicked)
calib_load_btn2.on_clicked(_on_calib_load_btn)
calib_pick_btn2.on_clicked(_on_calib_pick_btn)
calib_sample_btn2.on_clicked(_on_calib_sample_btn)
calib_save_btn2.on_clicked(_on_calib_save_btn)
bg_load_btn2.on_clicked(_on_bg_load_btn)
bg_pick_btn2.on_clicked(_on_bg_pick_btn)
bg_sample_btn2.on_clicked(_on_bg_sample_btn)
bg_save_btn2.on_clicked(_on_bg_save_btn2)
line_mode_rb.on_clicked(_on_line_mode_clicked)
ifft_mode_rb.on_clicked(_on_ifft_mode_clicked)
ymin_slider.on_changed(_on_ylim_change)
ymax_slider.on_changed(_on_ylim_change)
contrast_slider.on_changed(lambda _v: fig.canvas.draw_idle())
calib_cb.on_clicked(_on_calib_clicked)
calib_file_cb.on_clicked(_on_calib_file_clicked)
_sync_path_boxes()
_refresh_checkboxes()
_refresh_status_texts()
except Exception:
calib_cb = None
line_mode_state = {"value": "raw"}
ifft_mode_state = {"value": str(getattr(ring, "fft_complex_mode", "arccos"))}
FREQ_MIN = float(FREQ_MIN_GHZ)
FREQ_MAX = float(FREQ_MAX_GHZ)
def _fft_depth_max() -> float:
axis = ring.fft_depth_axis_m
if axis is None or axis.size == 0:
return 1.0
try:
vmax = float(axis[-1])
except Exception:
vmax = float(np.nanmax(axis))
if not np.isfinite(vmax) or vmax <= 0.0:
return 1.0
return vmax
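`_fft_depth_max` guards the depth-axis maximum before it is used as a plot limit: last element first, then a finite maximum, then a `1.0` default. A pure-Python sketch of the same fallback chain (not a line-for-line port of the numpy version above):

```python
import math

def safe_axis_max(axis, default=1.0):
    """Return the last usable axis value, falling back to the finite max, then to a default."""
    if not axis:
        return default
    vmax = axis[-1]
    if not math.isfinite(vmax) or vmax <= 0.0:
        # Last element unusable: retry with the finite samples only.
        finite = [v for v in axis if math.isfinite(v)]
        vmax = max(finite) if finite else default
    if not math.isfinite(vmax) or vmax <= 0.0:
        return default
    return vmax
```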
# --- Initialize imshow extents on the first sweep ---
def _init_imshow_extents():
w = ring.width
ms = ring.max_sweeps
fb = max(1, int(ring.fft_bins))
depth_max = _fft_depth_max()
img_obj.set_data(np.zeros((w, ms), dtype=np.float32))
img_obj.set_extent((0, ms - 1, FREQ_MIN, FREQ_MAX))
ax_img.set_xlim(0, ms - 1)
ax_img.set_ylim(FREQ_MIN, FREQ_MAX)
img_fft_obj.set_data(np.zeros((fb, ms), dtype=np.float32))
img_fft_obj.set_extent((0, ms - 1, 0.0, depth_max))
ax_spec.set_xlim(0, ms - 1)
ax_spec.set_ylim(0.0, depth_max)
ax_fft.set_xlim(0.0, depth_max)
_imshow_initialized = [False]
def update(_frame):
changed = state.drain_queue(q, ring) > 0
if changed and not _imshow_initialized[0] and ring.is_ready:
_init_imshow_extents()
_imshow_initialized[0] = True
# Sweep line plot
if state.current_sweep_raw is not None:
raw = state.current_sweep_raw
if ring.x_shared is not None and raw.size <= ring.x_shared.size:
xs = ring.x_shared[: raw.size]
else:
xs = np.arange(raw.size, dtype=np.int32)
line_mode = str(line_mode_state.get("value", "raw"))
main = state.current_sweep_processed if line_mode == "processed" else raw
if main is not None:
line_obj.set_data(xs[: main.size], main)
else:
line_obj.set_data([], [])
if line_mode == "raw":
if state.calib_mode == "file" and state.calib_file_envelope is not None:
upper = np.asarray(state.calib_file_envelope, dtype=np.float32)
n_env = min(xs.size, upper.size)
if n_env > 0:
x_env = xs[:n_env]
y_env = upper[:n_env]
line_env_lo.set_data(x_env, -y_env)
line_env_hi.set_data(x_env, y_env)
else:
line_env_lo.set_data([], [])
line_env_hi.set_data([], [])
elif state.last_calib_sweep is not None:
calib = np.asarray(state.last_calib_sweep, dtype=np.float32)
lower, upper = build_calib_envelopes(calib)
n_env = min(xs.size, lower.size, upper.size)
if n_env > 0:
line_env_lo.set_data(xs[:n_env], lower[:n_env])
line_env_hi.set_data(xs[:n_env], upper[:n_env])
else:
line_env_lo.set_data([], [])
line_env_hi.set_data([], [])
else:
line_env_lo.set_data([], [])
line_env_hi.set_data([], [])
else:
line_env_lo.set_data([], [])
line_env_hi.set_data([], [])
if logscale_enabled:
if state.current_sweep_pre_exp is not None:
pre = state.current_sweep_pre_exp
line_pre_exp_obj.set_data(xs[: pre.size], pre)
else:
line_pre_exp_obj.set_data([], [])
post = state.current_sweep_post_exp if state.current_sweep_post_exp is not None else raw
line_post_exp_obj.set_data(xs[: post.size], post)
if line_mode == "processed":
if state.current_sweep_processed is not None:
proc = state.current_sweep_processed
line_obj.set_data(xs[: proc.size], proc)
else:
line_obj.set_data([], [])
else:
line_obj.set_data(xs[: raw.size], raw)
line_norm_obj.set_data([], [])
else:
line_pre_exp_obj.set_data([], [])
line_post_exp_obj.set_data([], [])
if line_mode == "raw" and state.current_sweep_norm is not None:
line_norm_obj.set_data(
xs[: state.current_sweep_norm.size], state.current_sweep_norm
)
else:
line_norm_obj.set_data([], [])
ax_line.set_xlim(FREQ_MIN, FREQ_MAX)
if fixed_ylim is not None:
ax_line.set_ylim(fixed_ylim)
else:
ax_line.relim()
ax_line.autoscale_view(scalex=False, scaley=True)
ax_line.set_ylabel("Y")
axis_fft = ring.fft_depth_axis_m
vals_fft = ring.last_fft_vals
if axis_fft is None or vals_fft is None:
fft_line_obj.set_data([], [])
else:
n_fft = min(int(axis_fft.size), int(vals_fft.size))
if n_fft <= 0:
fft_line_obj.set_data([], [])
else:
x_fft = axis_fft[:n_fft]
y_fft = vals_fft[:n_fft]
fft_line_obj.set_data(x_fft, y_fft)
ax_fft.set_xlim(0, float(x_fft[n_fft - 1]))
ax_fft.set_ylim(float(np.nanmin(y_fft)), float(np.nanmax(y_fft)))
# Raw-data waterfall
if changed and ring.is_ready:
disp = ring.get_display_ring()
if ring.x_shared is not None:
n = ring.x_shared.size
disp = disp[:n, :]
img_obj.set_data(disp)
img_obj.set_extent((0, ring.max_sweeps - 1, FREQ_MIN, FREQ_MAX))
ax_img.set_ylim(FREQ_MIN, FREQ_MAX)
levels = _visible_levels(disp, ax_img)
if levels is not None:
img_obj.set_clim(vmin=levels[0], vmax=levels[1])
# Spectrum waterfall
if changed and ring.is_ready:
disp_fft = ring.get_display_ring_fft()
disp_fft = ring.subtract_recent_mean_fft(disp_fft, spec_mean_sec)
img_fft_obj.set_data(disp_fft)
depth_max = _fft_depth_max()
img_fft_obj.set_extent((0, ring.max_sweeps - 1, 0.0, depth_max))
ax_spec.set_ylim(0.0, depth_max)
levels = ring.compute_fft_levels(disp_fft, spec_clip)
if levels is not None:
try:
c = float(contrast_slider.val) / 100.0 if contrast_slider is not None else 1.0
except Exception:
c = 1.0
vmax_eff = levels[0] + c * (levels[1] - levels[0])
img_fft_obj.set_clim(vmin=levels[0], vmax=vmax_eff)
# Status line and channel label
if changed and state.current_info:
status_text.set_text(format_status(state.current_info))
channel_text.set_text(state.format_channel_label())
pipeline_text.set_text(f"{state.format_pipeline_status()} | cplx:{ring.fft_complex_mode}")
ref_text.set_text(state.format_reference_status())
elif changed:
pipeline_text.set_text(f"{state.format_pipeline_status()} | cplx:{ring.fft_complex_mode}")
ref_text.set_text(state.format_reference_status())
return (
line_obj,
line_norm_obj,
line_pre_exp_obj,
line_post_exp_obj,
line_env_lo,
line_env_hi,
img_obj,
fft_line_obj,
img_fft_obj,
status_text,
pipeline_text,
ref_text,
channel_text,
)
ani = FuncAnimation(fig, update, interval=interval_ms, blit=False)
plt.show()
stop_event.set()
reader.join(timeout=1.0)
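The `update` callback above pulls everything the reader thread queued since the last frame (`state.drain_queue`) and redraws only when something actually arrived. A minimal sketch of that non-blocking drain pattern with only the stdlib `queue` module; `drain_queue` here is an illustrative stand-in, not the project's function:

```python
import queue

def drain_queue(q: "queue.Queue") -> list:
    """Pull every item currently queued, without blocking the GUI thread."""
    items = []
    while True:
        try:
            items.append(q.get_nowait())
        except queue.Empty:
            return items

q = queue.Queue()
for sweep_id in range(3):
    q.put(sweep_id)       # producer side (the reader thread in the real app)

batch = drain_queue(q)    # consumer side (the animation callback)
changed = len(batch) > 0  # redraw only when new sweeps arrived
```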

File diff suppressed because it is too large.


@@ -0,0 +1,6 @@
"""I/O helpers for serial sources and sweep parsing."""
from rfg_adc_plotter.io.serial_source import SerialChunkReader, SerialLineSource
from rfg_adc_plotter.io.sweep_reader import SweepReader
__all__ = ["SerialChunkReader", "SerialLineSource", "SweepReader"]


@@ -1,219 +0,0 @@
"""Загрузка эталонов (калибровка/фон) из .npy или бинарных capture-файлов."""
from __future__ import annotations
from collections import Counter
from dataclasses import dataclass
import os
from typing import Iterable, List, Optional, Tuple
import numpy as np
from rfg_adc_plotter.io.sweep_parser_core import BinaryRecordStreamParser, SweepAssembler
from rfg_adc_plotter.types import SweepPacket
@dataclass(frozen=True)
class CaptureParseSummary:
path: str
format: str # "npy" | "bin_capture"
sweeps_total: int
sweeps_valid: int
channels_seen: Tuple[int, ...]
dominant_width: Optional[int]
dominant_n_valid: Optional[int]
aggregation: str
warnings: Tuple[str, ...]
@dataclass(frozen=True)
class ReferenceLoadResult:
vector: np.ndarray
summary: CaptureParseSummary
kind: str # "calibration_envelope" | "background_raw" | "background_processed"
source_type: str # "npy" | "capture"
def detect_reference_file_format(path: str) -> Optional[str]:
"""Определить формат файла эталона: .npy или бинарный capture."""
p = str(path).strip()
if not p or not os.path.isfile(p):
return None
if p.lower().endswith(".npy"):
return "npy"
try:
size = os.path.getsize(p)
except Exception:
return None
if size <= 0:
return None
try:
with open(p, "rb") as f:
sample = f.read(min(size, 256 * 1024))
except Exception:
return None
if len(sample) < 8:
return None
# Generic sniff: feed the sample through the same streaming parser
# that is used for realtime and capture import.
parser = BinaryRecordStreamParser()
_ = parser.feed(sample)
if parser.start_count >= 1 and parser.point_count >= 16:
return "bin_capture"
return None
def load_capture_sweeps(path: str, *, fancy: bool = False, logscale: bool = False) -> List[SweepPacket]:
"""Загрузить свипы из бинарного capture-файла в формате --bin."""
parser = BinaryRecordStreamParser()
assembler = SweepAssembler(fancy=fancy, logscale=logscale, debug=False)
sweeps: List[SweepPacket] = []
with open(path, "rb") as f:
while True:
chunk = f.read(65536)
if not chunk:
break
events = parser.feed(chunk)
for ev in events:
packets = assembler.consume_binary_event(ev)
if packets:
sweeps.extend(packets)
tail = assembler.finalize_current()
if tail is not None:
sweeps.append(tail)
return sweeps
def _mode_int(values: Iterable[int]) -> Optional[int]:
vals = [int(v) for v in values]
if not vals:
return None
ctr = Counter(vals)
return int(max(ctr.items(), key=lambda kv: (kv[1], kv[0]))[0])
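`_mode_int` picks the most frequent value and, on a tie in counts, prefers the larger value, because `max` compares the `(count, value)` tuples lexicographically. A standalone copy of that selection rule:

```python
from collections import Counter
from typing import Iterable, Optional

def mode_int(values: Iterable[int]) -> Optional[int]:
    """Most common value; ties are broken toward the larger value."""
    vals = [int(v) for v in values]
    if not vals:
        return None
    ctr = Counter(vals)
    # key=(count, value): equal counts fall through to the value comparison.
    return int(max(ctr.items(), key=lambda kv: (kv[1], kv[0]))[0])
```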
def aggregate_capture_reference(
sweeps: List[SweepPacket],
*,
channel: int = 0,
method: str = "median",
path: str = "",
) -> Tuple[np.ndarray, CaptureParseSummary]:
"""Отфильтровать и агрегировать свипы из capture в один эталонный вектор."""
ch_target = int(channel)
meth = str(method).strip().lower() or "median"
warnings: list[str] = []
if meth != "median":
warnings.append(f"aggregation '{meth}' не поддерживается, использую median")
meth = "median"
channels_seen: set[int] = set()
candidate_rows: list[np.ndarray] = []
widths: list[int] = []
n_valids: list[int] = []
for sweep, info in sweeps:
chs = info.get("chs") if isinstance(info, dict) else None
ch_set: set[int] = set()
if isinstance(chs, (list, tuple, set)):
for v in chs:
try:
ch_set.add(int(v))
except Exception:
pass
else:
try:
ch_set.add(int(info.get("ch", 0))) # type: ignore[union-attr]
except Exception:
pass
channels_seen.update(ch_set)
if ch_target not in ch_set:
continue
row = np.asarray(sweep, dtype=np.float32).reshape(-1)
candidate_rows.append(row)
widths.append(int(row.size))
n_valids.append(int(np.count_nonzero(np.isfinite(row))))
sweeps_total = len(sweeps)
if not candidate_rows:
summary = CaptureParseSummary(
path=path,
format="bin_capture",
sweeps_total=sweeps_total,
sweeps_valid=0,
channels_seen=tuple(sorted(channels_seen)),
dominant_width=None,
dominant_n_valid=None,
aggregation=meth,
warnings=tuple(warnings + [f"канал ch{ch_target} не найден"]),
)
raise ValueError(summary.warnings[-1])
dominant_width = _mode_int(widths)
dominant_n_valid = _mode_int(n_valids)
if dominant_width is None or dominant_n_valid is None:
summary = CaptureParseSummary(
path=path,
format="bin_capture",
sweeps_total=sweeps_total,
sweeps_valid=0,
channels_seen=tuple(sorted(channels_seen)),
dominant_width=dominant_width,
dominant_n_valid=dominant_n_valid,
aggregation=meth,
warnings=tuple(warnings + ["не удалось определить доминирующие параметры свипа"]),
)
raise ValueError(summary.warnings[-1])
valid_rows: list[np.ndarray] = []
n_valid_threshold = max(1, int(np.floor(0.95 * dominant_n_valid)))
for row in candidate_rows:
if row.size != dominant_width:
continue
n_valid = int(np.count_nonzero(np.isfinite(row)))
if n_valid < n_valid_threshold:
continue
valid_rows.append(row)
if not valid_rows:
warnings.append("после фильтрации не осталось валидных свипов")
summary = CaptureParseSummary(
path=path,
format="bin_capture",
sweeps_total=sweeps_total,
sweeps_valid=0,
channels_seen=tuple(sorted(channels_seen)),
dominant_width=dominant_width,
dominant_n_valid=dominant_n_valid,
aggregation=meth,
warnings=tuple(warnings),
)
raise ValueError(summary.warnings[-1])
# Deterministic aggregation: median across the valid sweeps.
stack = np.stack(valid_rows, axis=0).astype(np.float32, copy=False)
vector = np.nanmedian(stack, axis=0).astype(np.float32, copy=False)
if len(valid_rows) < len(candidate_rows):
warnings.append(f"отфильтровано {len(candidate_rows) - len(valid_rows)} неполных/нестандартных свипов")
summary = CaptureParseSummary(
path=path,
format="bin_capture",
sweeps_total=sweeps_total,
sweeps_valid=len(valid_rows),
channels_seen=tuple(sorted(channels_seen)),
dominant_width=dominant_width,
dominant_n_valid=dominant_n_valid,
aggregation=meth,
warnings=tuple(warnings),
)
return vector, summary
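The aggregation above keeps only rows with the dominant width and at least 95% of the dominant finite-sample count, then reduces the survivors with a per-bin median so a single outlier sweep cannot skew the reference. A dependency-free sketch of the same per-bin, NaN-ignoring median, with `statistics.median` standing in for `np.nanmedian`:

```python
import math
import statistics

def nanmedian_rows(rows):
    """Per-index median across equal-width rows, skipping non-finite samples."""
    width = len(rows[0])
    out = []
    for i in range(width):
        finite = [row[i] for row in rows if math.isfinite(row[i])]
        # An all-NaN column stays NaN, mirroring np.nanmedian behavior.
        out.append(statistics.median(finite) if finite else math.nan)
    return out
```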


@@ -1,4 +1,6 @@
"""Источники последовательного ввода: обёртки над pyserial и raw TTY."""
"""Serial input helpers with pyserial and raw TTY fallbacks."""
from __future__ import annotations
import io
import os
@@ -12,14 +14,13 @@ def try_open_pyserial(path: str, baud: int, timeout: float):
except Exception:
return None
try:
ser = serial.Serial(path, baudrate=baud, timeout=timeout)
return ser
return serial.Serial(path, baudrate=baud, timeout=timeout)
except Exception:
return None
class FDReader:
"""Простой враппер чтения строк из файлового дескриптора TTY."""
"""Buffered wrapper around a raw TTY file descriptor."""
def __init__(self, fd: int):
self._fd = fd
@@ -33,7 +34,7 @@ class FDReader:
def readline(self) -> bytes:
return self._buf.readline()
def close(self):
def close(self) -> None:
try:
self._buf.close()
except Exception:
@@ -41,10 +42,7 @@ class FDReader:
def open_raw_tty(path: str, baud: int) -> Optional[FDReader]:
"""Открыть TTY без pyserial и настроить порт через termios.
Возвращает FDReader или None при ошибке.
"""
"""Open a TTY without pyserial and configure it via termios."""
try:
import termios
import tty
@@ -69,17 +67,14 @@ def open_raw_tty(path: str, baud: int) -> Optional[FDReader]:
230400: getattr(termios, "B230400", None),
460800: getattr(termios, "B460800", None),
}
b = baud_map.get(baud) or termios.B115200
speed = baud_map.get(baud) or termios.B115200
attrs[4] = b # ispeed
attrs[5] = b # ospeed
# VMIN=1, VTIME=0: blocking byte-by-byte reads
attrs[4] = speed
attrs[5] = speed
cc = attrs[6]
cc[termios.VMIN] = 1
cc[termios.VTIME] = 0
attrs[6] = cc
termios.tcsetattr(fd, termios.TCSANOW, attrs)
except Exception:
try:
@@ -92,11 +87,11 @@ def open_raw_tty(path: str, baud: int) -> Optional[FDReader]:
class SerialLineSource:
"""Единый интерфейс для чтения строк из порта (pyserial или raw TTY)."""
"""Unified line-oriented wrapper for pyserial and raw TTY readers."""
def __init__(self, path: str, baud: int, timeout: float = 1.0):
self._pyserial = try_open_pyserial(path, baud, timeout)
self._fdreader = None
self._fdreader: Optional[FDReader] = None
self._using = "pyserial" if self._pyserial is not None else "raw"
if self._pyserial is None:
self._fdreader = open_raw_tty(path, baud)
@@ -112,13 +107,12 @@ class SerialLineSource:
return self._pyserial.readline()
except Exception:
return b""
else:
try:
return self._fdreader.readline() # type: ignore[union-attr]
except Exception:
return b""
def close(self):
def close(self) -> None:
try:
if self._pyserial is not None:
self._pyserial.close()
@@ -129,7 +123,7 @@ class SerialLineSource:
class SerialChunkReader:
"""Быстрое неблокирующее чтение чанков из serial/raw TTY для максимального дренажа буфера."""
"""Fast non-blocking chunk reader for serial sources."""
def __init__(self, src: SerialLineSource):
self._src = src
@@ -151,20 +145,22 @@ class SerialChunkReader:
self._fd = None
def read_available(self) -> bytes:
"""Вернёт доступные байты (b"" если данных нет)."""
"""Return currently available bytes or b"" when nothing is ready."""
if self._ser is not None:
try:
n = int(getattr(self._ser, "in_waiting", 0))
available = int(getattr(self._ser, "in_waiting", 0))
except Exception:
n = 0
if n > 0:
available = 0
if available > 0:
try:
return self._ser.read(n)
return self._ser.read(available)
except Exception:
return b""
return b""
if self._fd is None:
return b""
out = bytearray()
while True:
try:


@@ -1,331 +1,644 @@
"""Переиспользуемые компоненты парсинга бинарных свипов и сборки SweepPacket."""
"""Reusable sweep parsers and sweep assembly helpers."""
from __future__ import annotations
import math
from collections import deque
import time
from typing import List, Optional, Sequence, Set, Tuple
from collections import deque
from typing import List, Optional, Sequence, Set
import numpy as np
from rfg_adc_plotter.constants import DATA_INVERSION_THRESHOLD, LOG_EXP
from rfg_adc_plotter.types import SweepInfo, SweepPacket
# Binary parser events:
# ("start", ch)
# ("point", ch, x, y)
BinaryEvent = Tuple[str, int] | Tuple[str, int, int, float]
# Parameters for converting a pair of log-detector values to linear amplitude.
_LOG_DETECTOR_BASE = 10.0
_LOG_DETECTOR_SCALER = 0.001
_LOG_DETECTOR_POSTSCALE = 1000.0
_LOG_DETECTOR_EXP_LIMIT = 300.0
from rfg_adc_plotter.constants import DATA_INVERSION_THRESHOLD, LOG_BASE, LOG_EXP_LIMIT, LOG_POSTSCALER, LOG_SCALER
from rfg_adc_plotter.types import (
ParserEvent,
PointEvent,
SignalKind,
StartEvent,
SweepAuxCurves,
SweepInfo,
SweepPacket,
)
def u32_to_i32(v: int) -> int:
"""Преобразование 32-bit слова в знаковое значение."""
return v - 0x1_0000_0000 if (v & 0x8000_0000) else v
def u32_to_i32(value: int) -> int:
return value - 0x1_0000_0000 if (value & 0x8000_0000) else value
def u_bits_to_i(v: int, bits: int) -> int:
"""Преобразование беззнакового целого fixed-width в знаковое (two's complement)."""
if bits <= 0:
return 0
sign = 1 << (bits - 1)
full = 1 << bits
return v - full if (v & sign) else v
def u16_to_i16(value: int) -> int:
return value - 0x1_0000 if (value & 0x8000) else value
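`u32_to_i32` and `u16_to_i16` reinterpret unsigned wire words as two's-complement integers by subtracting the full range whenever the sign bit is set. The same rule, written generically for any width:

```python
def u_to_i(value: int, bits: int) -> int:
    """Reinterpret an unsigned fixed-width integer as two's complement."""
    sign_bit = 1 << (bits - 1)
    # Subtract 2**bits when the top bit is set; otherwise the value is already non-negative.
    return value - (1 << bits) if (value & sign_bit) else value
```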
def words_be_to_i(words: Sequence[int]) -> int:
"""Собрать big-endian набор 16-bit слов в знаковое число."""
acc = 0
for w in words:
acc = (acc << 16) | (int(w) & 0xFFFF)
return u_bits_to_i(acc, 16 * int(len(words)))
def log_value_to_linear(value: int) -> float:
exponent = max(-LOG_EXP_LIMIT, min(LOG_EXP_LIMIT, float(value) * LOG_SCALER))
return float(LOG_BASE ** exponent)
def _log_pair_to_linear(avg_1: int, avg_2: int) -> float:
"""Разность двух логарифмических усреднений в линейной шкале."""
exp1 = max(-_LOG_DETECTOR_EXP_LIMIT, min(_LOG_DETECTOR_EXP_LIMIT, float(avg_1) * _LOG_DETECTOR_SCALER))
exp2 = max(-_LOG_DETECTOR_EXP_LIMIT, min(_LOG_DETECTOR_EXP_LIMIT, float(avg_2) * _LOG_DETECTOR_SCALER))
return (math.pow(_LOG_DETECTOR_BASE, exp1) - math.pow(_LOG_DETECTOR_BASE, exp2)) * _LOG_DETECTOR_POSTSCALE
def log_pair_to_sweep(avg_1: int, avg_2: int) -> float:
value_1 = log_value_to_linear(avg_1)
value_2 = log_value_to_linear(avg_2)
return abs(value_1 - value_2) * LOG_POSTSCALER
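`log_value_to_linear` clamps the scaled exponent before exponentiating so an extreme detector word cannot overflow, and `log_pair_to_sweep` takes the scaled absolute difference of the two linearized averages. A self-contained sketch using the values from the older `_LOG_DETECTOR_*` definitions (base 10.0, scaler 0.001, limit 300.0, postscale 1000.0); the imported `LOG_*` constants are assumed to match:

```python
# Assumed constant values, taken from the earlier _LOG_DETECTOR_* definitions.
LOG_BASE, LOG_SCALER, LOG_EXP_LIMIT, LOG_POSTSCALER = 10.0, 0.001, 300.0, 1000.0

def log_value_to_linear(value: int) -> float:
    # Clamp the exponent into [-300, 300] so 10**exponent stays finite.
    exponent = max(-LOG_EXP_LIMIT, min(LOG_EXP_LIMIT, float(value) * LOG_SCALER))
    return LOG_BASE ** exponent

def log_pair_to_sweep(avg_1: int, avg_2: int) -> float:
    return abs(log_value_to_linear(avg_1) - log_value_to_linear(avg_2)) * LOG_POSTSCALER
```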
class BinaryRecordStreamParser:
"""Инкрементальный парсер бинарных записей нескольких wire-форматов.
def tty_ch_pair_to_sweep(ch_1: int, ch_2: int) -> float:
"""Reduce a raw CH1/CH2 TTY point to power-like scalar ``ch1^2 + ch2^2``."""
ch_1_i = int(ch_1)
ch_2_i = int(ch_2)
return float((ch_1_i * ch_1_i) + (ch_2_i * ch_2_i))
Supported formats:
1) legacy 8-byte:
start: 0xFFFF, 0xFFFF, 0xFFFF, (ch<<8)|0x0A
point: step, value_hi16, value_lo16, (ch<<8)|0x0A
2) log-detector:
start: 0xFFFF x5, (ch<<8)|0x0A
point: step, avg1, avg2, (ch<<8)|0x0A,
where avg1/avg2 are encoded as fixed-width 16-bit word groups:
- 2 words (int32) or
- 8 words (int128).
"""
class AsciiSweepParser:
"""Incremental parser for ASCII sweep streams."""
def __init__(self):
self._buf = bytearray()
self.bytes_consumed: int = 0
self.start_count: int = 0
self.point_count: int = 0
self.desync_count: int = 0
self._log_pair_words: Optional[int] = None
def feed(self, data: bytes) -> List[ParserEvent]:
if data:
self._buf += data
events: List[ParserEvent] = []
while True:
nl = self._buf.find(b"\n")
if nl == -1:
break
line = bytes(self._buf[:nl])
del self._buf[: nl + 1]
if line.endswith(b"\r"):
line = line[:-1]
if not line:
continue
if line.startswith(b"Sweep_start"):
events.append(StartEvent())
continue
parts = line.split()
if len(parts) < 3:
continue
head = parts[0].lower()
try:
if head == b"s":
if len(parts) >= 4:
ch = int(parts[1], 10)
x = int(parts[2], 10)
y = int(parts[3], 10)
else:
ch = 0
x = int(parts[1], 10)
y = int(parts[2], 10)
elif head.startswith(b"s"):
ch = int(head[1:], 10)
x = int(parts[1], 10)
y = int(parts[2], 10)
else:
continue
except Exception:
continue
events.append(PointEvent(ch=int(ch), x=int(x), y=float(y)))
return events
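`AsciiSweepParser` accepts two point spellings on each newline-terminated line: `s <ch> <x> <y>` (or the short `s <x> <y>` with an implied channel 0) and the fused `s<ch> <x> <y>`. A single-line sketch of just that token logic, without the class's buffering:

```python
from typing import Optional, Tuple

def parse_ascii_point(line: bytes) -> Optional[Tuple[int, int, float]]:
    """Return (ch, x, y) for one ASCII sweep point line, or None if it does not match."""
    parts = line.split()
    if len(parts) < 3:
        return None
    head = parts[0].lower()
    try:
        if head == b"s":
            if len(parts) >= 4:        # "s <ch> <x> <y>"
                ch, x, y = int(parts[1]), int(parts[2]), int(parts[3])
            else:                      # "s <x> <y>" with implied channel 0
                ch, x, y = 0, int(parts[1]), int(parts[2])
        elif head.startswith(b"s"):    # fused "s<ch> <x> <y>"
            ch, x, y = int(head[1:]), int(parts[1]), int(parts[2])
        else:
            return None
    except ValueError:
        return None
    return ch, x, float(y)
```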
class ComplexAsciiSweepParser:
"""Incremental parser for ASCII ``step real imag`` streams."""
def __init__(self):
self._buf = bytearray()
self._last_step: Optional[int] = None
self._seen_points = False
def feed(self, data: bytes) -> List[ParserEvent]:
if data:
self._buf += data
events: List[ParserEvent] = []
while True:
nl = self._buf.find(b"\n")
if nl == -1:
break
line = bytes(self._buf[:nl])
del self._buf[: nl + 1]
if line.endswith(b"\r"):
line = line[:-1]
if not line:
continue
if line.lower().startswith(b"sweep_start"):
self._last_step = None
self._seen_points = False
events.append(StartEvent())
continue
parts = line.split()
if len(parts) < 3:
continue
try:
step = int(parts[0], 10)
real = float(parts[1])
imag = float(parts[2])
except Exception:
continue
if step < 0 or (not math.isfinite(real)) or (not math.isfinite(imag)):
continue
if self._seen_points and self._last_step is not None and step <= self._last_step:
events.append(StartEvent())
self._seen_points = True
self._last_step = step
events.append(
PointEvent(
ch=0,
x=step,
y=float(abs(complex(real, imag))),
aux=(float(real), float(imag)),
)
)
return events
class LegacyBinaryParser:
"""Byte-resynchronizing parser for supported 8-byte binary record formats."""
def __init__(self):
self._buf = bytearray()
self._last_step: Optional[int] = None
self._seen_points = False
self._mode: Optional[str] = None
self._current_signal_kind: Optional[SignalKind] = None
@staticmethod
def _u16_at(buf: bytearray, offset: int) -> int:
return int(buf[offset]) | (int(buf[offset + 1]) << 8)
def _try_parse_log_start(self, buf: bytearray) -> Optional[Tuple[int, int]]:
rec_bytes = 12 # 6 words: FFFF x5 + terminator
if len(buf) < rec_bytes:
return None
for wi in range(5):
if self._u16_at(buf, wi * 2) != 0xFFFF:
return None
term = self._u16_at(buf, 10)
if (term & 0x00FF) != 0x000A:
return None
ch = int((term >> 8) & 0x00FF)
return ch, rec_bytes
def _emit_legacy_start(self, events: List[ParserEvent], ch: int) -> None:
self._mode = "legacy"
self._last_step = None
self._seen_points = False
self._current_signal_kind = None
events.append(StartEvent(ch=int(ch)))
def _try_parse_log_point(self, buf: bytearray, pair_words: int) -> Optional[Tuple[int, int, float, int]]:
if pair_words <= 0:
return None
rec_words = 2 + 2 * int(pair_words)
rec_bytes = 2 * rec_words
if len(buf) < rec_bytes:
return None
def _emit_bin_start(self, events: List[ParserEvent], signal_kind: SignalKind) -> None:
self._mode = "bin"
self._last_step = None
self._seen_points = False
self._current_signal_kind = signal_kind
events.append(StartEvent(ch=0, signal_kind=signal_kind))
step = self._u16_at(buf, 0)
if step == 0xFFFF:
return None
def _emit_tty_start(self, events: List[ParserEvent]) -> None:
self._emit_bin_start(events, signal_kind="bin_iq")
term_off = rec_bytes - 2
term = self._u16_at(buf, term_off)
if (term & 0x00FF) != 0x000A:
return None
def _emit_legacy_point(self, events: List[ParserEvent], step: int, value_word_hi: int, value_word_lo: int, ch: int) -> None:
self._mode = "legacy"
self._current_signal_kind = None
if self._seen_points and self._last_step is not None and step <= self._last_step:
events.append(StartEvent(ch=int(ch)))
self._seen_points = True
self._last_step = int(step)
value = u32_to_i32((int(value_word_hi) << 16) | int(value_word_lo))
events.append(PointEvent(ch=int(ch), x=int(step), y=float(value)))
a1_words = [self._u16_at(buf, 2 + 2 * i) for i in range(pair_words)]
a2_words = [self._u16_at(buf, 2 + 2 * (pair_words + i)) for i in range(pair_words)]
avg_1 = words_be_to_i(a1_words)
avg_2 = words_be_to_i(a2_words)
y_val = _log_pair_to_linear(avg_1, avg_2)
ch = int((term >> 8) & 0x00FF)
return ch, int(step), float(y_val), rec_bytes
def _prepare_bin_point(self, events: List[ParserEvent], step: int, signal_kind: SignalKind) -> None:
self._mode = "bin"
if self._current_signal_kind != signal_kind:
if self._seen_points:
events.append(StartEvent(ch=0, signal_kind=signal_kind))
self._last_step = None
self._seen_points = False
self._current_signal_kind = signal_kind
if self._seen_points and self._last_step is not None and step <= self._last_step:
events.append(StartEvent(ch=0, signal_kind=signal_kind))
self._last_step = None
self._seen_points = False
self._seen_points = True
self._last_step = int(step)
def feed(self, data: bytes) -> List[BinaryEvent]:
def _emit_tty_point(self, events: List[ParserEvent], step: int, ch_1_word: int, ch_2_word: int) -> None:
self._prepare_bin_point(events, step=int(step), signal_kind="bin_iq")
ch_1 = u16_to_i16(int(ch_1_word))
ch_2 = u16_to_i16(int(ch_2_word))
events.append(
PointEvent(
ch=0,
x=int(step),
y=tty_ch_pair_to_sweep(ch_1, ch_2),
aux=(float(ch_1), float(ch_2)),
signal_kind="bin_iq",
)
)
def _emit_logdet_point(self, events: List[ParserEvent], step: int, value_word: int) -> None:
self._prepare_bin_point(events, step=int(step), signal_kind="bin_logdet")
value = u16_to_i16(int(value_word))
events.append(
PointEvent(
ch=0,
x=int(step),
y=float(value),
signal_kind="bin_logdet",
)
)
def feed(self, data: bytes) -> List[ParserEvent]:
if data:
self._buf += data
events: List[BinaryEvent] = []
buf = self._buf
events: List[ParserEvent] = []
while len(self._buf) >= 8:
w0 = self._u16_at(self._buf, 0)
w1 = self._u16_at(self._buf, 2)
w2 = self._u16_at(self._buf, 4)
w3 = self._u16_at(self._buf, 6)
while len(buf) >= 8:
# 1) log-detector start (12-byte): FFFF x5 + (ch<<8)|0x0A
parsed_log_start = self._try_parse_log_start(buf)
if parsed_log_start is not None:
ch, consumed = parsed_log_start
events.append(("start", ch))
del buf[:consumed]
self.bytes_consumed += consumed
self.start_count += 1
# The pair width (32/128) is detected at the next point.
self._log_pair_words = None
is_legacy_start = (w0 == 0xFFFF and w1 == 0xFFFF and w2 == 0xFFFF and self._buf[6] == 0x0A)
is_tty_start = (w0 == 0x000A and w1 == 0xFFFF and w2 == 0xFFFF and w3 == 0xFFFF)
is_legacy_point = (self._buf[6] == 0x0A and w0 != 0xFFFF)
is_tty_point = (w0 == 0x000A and w1 != 0xFFFF)
is_logdet_point = (w0 == 0x001A and w3 == 0x0000)
if is_legacy_start:
self._emit_legacy_start(events, ch=int(self._buf[7]))
del self._buf[:8]
continue
# 2) log-detector point:
# first with the already known pair width, otherwise auto-detect 128/32.
# In auto mode try the 32-bit pair first (the most common format),
# then 128-bit. This reduces the risk of a false 128-bit length match on a 32-bit stream.
pair_candidates = [self._log_pair_words] if self._log_pair_words in (2, 8) else [2, 8]
parsed_log_point: Optional[Tuple[int, int, float, int]] = None
for pair_words in pair_candidates:
if pair_words is None:
continue
parsed_log_point = self._try_parse_log_point(buf, int(pair_words))
if parsed_log_point is not None:
self._log_pair_words = int(pair_words)
break
if parsed_log_point is not None:
ch, step, y_val, consumed = parsed_log_point
events.append(("point", ch, step, y_val))
del buf[:consumed]
self.bytes_consumed += consumed
self.point_count += 1
if is_tty_start:
self._emit_tty_start(events)
del self._buf[:8]
continue
# 3) legacy 8-byte start / point.
w0 = self._u16_at(buf, 0)
w1 = self._u16_at(buf, 2)
w2 = self._u16_at(buf, 4)
if w0 == 0xFFFF and w1 == 0xFFFF and w2 == 0xFFFF and buf[6] == 0x0A:
ch = int(buf[7])
events.append(("start", ch))
del buf[:8]
self.bytes_consumed += 8
self.start_count += 1
# legacy does not use the avg1/avg2 pair.
self._log_pair_words = None
if is_logdet_point:
self._emit_logdet_point(events, step=int(w1), value_word=int(w2))
del self._buf[:8]
continue
if buf[6] == 0x0A:
ch = int(buf[7])
value_u32 = (w1 << 16) | w2
events.append(("point", ch, int(w0), float(u32_to_i32(value_u32))))
del buf[:8]
self.bytes_consumed += 8
self.point_count += 1
if self._mode == "legacy":
if is_legacy_point:
self._emit_legacy_point(
events,
step=int(w0),
value_word_hi=int(w1),
value_word_lo=int(w2),
ch=int(self._buf[7]),
)
del self._buf[:8]
continue
if is_tty_point and (not is_legacy_point):
self._emit_tty_point(events, step=int(w1), ch_1_word=int(w2), ch_2_word=int(w3))
del self._buf[:8]
continue
del self._buf[:1]
continue
del buf[:1]
self.bytes_consumed += 1
self.desync_count += 1
if self._mode == "bin":
if is_tty_point:
self._emit_tty_point(events, step=int(w1), ch_1_word=int(w2), ch_2_word=int(w3))
del self._buf[:8]
continue
if is_legacy_point and (not is_tty_point):
self._emit_legacy_point(
events,
step=int(w0),
value_word_hi=int(w1),
value_word_lo=int(w2),
ch=int(self._buf[7]),
)
del self._buf[:8]
continue
del self._buf[:1]
continue
# Mode is still unknown. Accept only unambiguous point shapes to avoid
# jumping between tty and legacy interpretations on coincidental bytes.
if is_tty_point and (not is_legacy_point):
self._emit_tty_point(events, step=int(w1), ch_1_word=int(w2), ch_2_word=int(w3))
del self._buf[:8]
continue
if is_legacy_point and (not is_tty_point):
self._emit_legacy_point(
events,
step=int(w0),
value_word_hi=int(w1),
value_word_lo=int(w2),
ch=int(self._buf[7]),
)
del self._buf[:8]
continue
del self._buf[:1]
return events
def buffered_size(self) -> int:
return len(self._buf)
def clear_buffer_keep_tail(self, max_tail: int = 262_144):
if len(self._buf) > max_tail:
del self._buf[:-max_tail]
class LogScaleBinaryParser32:
"""Byte-resynchronizing parser for 32-bit logscale pair records."""
def __init__(self):
self._buf = bytearray()
self._last_step: Optional[int] = None
self._seen_points = False
@staticmethod
def _u16_at(buf: bytearray, offset: int) -> int:
return int(buf[offset]) | (int(buf[offset + 1]) << 8)
def feed(self, data: bytes) -> List[ParserEvent]:
if data:
self._buf += data
events: List[ParserEvent] = []
while len(self._buf) >= 12:
words = [self._u16_at(self._buf, idx * 2) for idx in range(6)]
if words[0:5] == [0xFFFF] * 5 and (words[5] & 0x00FF) == 0x000A:
self._last_step = None
self._seen_points = False
events.append(StartEvent(ch=int((words[5] >> 8) & 0x00FF)))
del self._buf[:12]
continue
if (words[5] & 0x00FF) == 0x000A and words[0] != 0xFFFF:
ch = int((words[5] >> 8) & 0x00FF)
if self._seen_points and self._last_step is not None and words[0] <= self._last_step:
events.append(StartEvent(ch=ch))
self._seen_points = True
self._last_step = int(words[0])
avg_1 = u32_to_i32((words[1] << 16) | words[2])
avg_2 = u32_to_i32((words[3] << 16) | words[4])
events.append(
PointEvent(
ch=ch,
x=int(words[0]),
y=log_pair_to_sweep(avg_1, avg_2),
aux=(float(avg_1), float(avg_2)),
)
)
del self._buf[:12]
continue
del self._buf[:1]
return events
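`LogScaleBinaryParser32` resynchronizes on 12-byte records of six little-endian 16-bit words: a start record is five `0xFFFF` words plus a `(ch<<8)|0x0A` terminator, and a point record carries the step, two int32 averages split into high/low words, and the same terminator. A sketch that packs one record of each kind and decodes the point the same way (`struct` is used only for the illustration):

```python
import struct

def pack_words(words):
    """Six 16-bit words, little-endian, as one 12-byte record."""
    return struct.pack("<6H", *words)

def decode_point(record: bytes):
    """Decode (ch, step, avg_1, avg_2) from a 12-byte point record."""
    w = struct.unpack("<6H", record)
    assert (w[5] & 0x00FF) == 0x000A and w[0] != 0xFFFF, "not a point record"
    def i32(hi, lo):
        v = (hi << 16) | lo                       # high word first, as in the parser
        return v - 0x1_0000_0000 if v & 0x8000_0000 else v
    ch = (w[5] >> 8) & 0xFF
    return ch, w[0], i32(w[1], w[2]), i32(w[3], w[4])

start = pack_words([0xFFFF] * 5 + [(1 << 8) | 0x0A])  # sweep start, ch=1
point = pack_words([7, 0x0001, 0x0000, 0xFFFF, 0xFFFF, (1 << 8) | 0x0A])
```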
class LogScale16BitX2BinaryParser:
"""Byte-resynchronizing parser for 16-bit x2 logscale records."""
def __init__(self):
self._buf = bytearray()
self._current_channel = 0
self._last_step: Optional[int] = None
self._seen_points = False
@staticmethod
def _u16_at(buf: bytearray, offset: int) -> int:
return int(buf[offset]) | (int(buf[offset + 1]) << 8)
def feed(self, data: bytes) -> List[ParserEvent]:
if data:
self._buf += data
events: List[ParserEvent] = []
while len(self._buf) >= 8:
words = [self._u16_at(self._buf, idx * 2) for idx in range(4)]
if words[0:3] == [0xFFFF, 0xFFFF, 0xFFFF] and (words[3] & 0x00FF) == 0x000A:
self._current_channel = int((words[3] >> 8) & 0x00FF)
self._last_step = None
self._seen_points = False
events.append(StartEvent(ch=self._current_channel))
del self._buf[:8]
continue
if words[3] == 0xFFFF and words[0] != 0xFFFF:
if self._seen_points and self._last_step is not None and words[0] <= self._last_step:
events.append(StartEvent(ch=self._current_channel))
self._seen_points = True
self._last_step = int(words[0])
real = u16_to_i16(words[1])
imag = u16_to_i16(words[2])
events.append(
PointEvent(
ch=self._current_channel,
x=int(words[0]),
y=float(abs(complex(real, imag))),
aux=(float(real), float(imag)),
)
)
del self._buf[:8]
continue
del self._buf[:1]
return events
class ParserTestStreamParser:
"""Parser for the special test 16-bit x2 stream format."""
def __init__(self):
self._buf = bytearray()
self._buf_pos = 0
self._point_buf: list[int] = []
self._ffff_run = 0
self._current_channel = 0
self._expected_step: Optional[int] = None
self._in_sweep = False
self._local_resync = False
def _consume_point(self) -> Optional[PointEvent]:
if len(self._point_buf) != 3:
return None
step = int(self._point_buf[0])
if step <= 0:
return None
if self._expected_step is not None and step < self._expected_step:
return None
real = u16_to_i16(int(self._point_buf[1]))
imag = u16_to_i16(int(self._point_buf[2]))
self._expected_step = step + 1
return PointEvent(
ch=self._current_channel,
x=step,
y=float(abs(complex(real, imag))),
aux=(float(real), float(imag)),
)
def feed(self, data: bytes) -> List[ParserEvent]:
if data:
self._buf += data
events: List[ParserEvent] = []
while (self._buf_pos + 1) < len(self._buf):
word = int(self._buf[self._buf_pos]) | (int(self._buf[self._buf_pos + 1]) << 8)
self._buf_pos += 2
if word == 0xFFFF:
self._ffff_run += 1
continue
if self._ffff_run > 0:
bad_point_on_delim = False
if self._in_sweep and self._point_buf and not self._local_resync:
point = self._consume_point()
if point is None:
self._local_resync = True
bad_point_on_delim = True
else:
events.append(point)
self._point_buf.clear()
if self._ffff_run >= 2:
if (word & 0x00FF) == 0x000A:
self._current_channel = (word >> 8) & 0x00FF
self._in_sweep = True
self._expected_step = 1
self._local_resync = False
self._point_buf.clear()
events.append(StartEvent(ch=self._current_channel))
self._ffff_run = 0
continue
if self._in_sweep:
self._local_resync = True
self._ffff_run = 0
continue
if self._local_resync and not bad_point_on_delim:
self._local_resync = False
self._point_buf.clear()
self._ffff_run = 0
if self._in_sweep and not self._local_resync:
self._point_buf.append(word)
if len(self._point_buf) > 3:
self._point_buf.clear()
self._local_resync = True
if self._buf_pos >= 262144:
del self._buf[: self._buf_pos]
self._buf_pos = 0
if (len(self._buf) - self._buf_pos) > 1_000_000:
tail = self._buf[self._buf_pos :]
if len(tail) > 262144:
tail = tail[-262144:]
self._buf = bytearray(tail)
self._buf_pos = 0
return events
class SweepAssembler:
"""Collect parser events into sweep packets, applying the same post-processing as the realtime parser."""
def __init__(self, fancy: bool = False, logscale: bool = False, debug: bool = False, apply_inversion: bool = True):
self._fancy = bool(fancy)
self._logscale = bool(logscale)
self._debug = bool(debug)
self._apply_inversion = bool(apply_inversion)
self._max_width: int = 0
self._sweep_idx: int = 0
self._last_sweep_ts: Optional[float] = None
self._n_valid_hist = deque()
self._xs: list[int] = []
self._ys: list[float] = []
self._aux_1: list[float] = []
self._aux_2: list[float] = []
self._cur_channel: Optional[int] = None
self._cur_signal_kind: Optional[SignalKind] = None
self._cur_channels: set[int] = set()
def _reset_current(self) -> None:
self._xs.clear()
self._ys.clear()
self._aux_1.clear()
self._aux_2.clear()
self._cur_channel = None
self._cur_signal_kind = None
self._cur_channels.clear()
def add_point(self, ch: int, x: int, y: float):
if self._cur_channel is None:
self._cur_channel = int(ch)
self._cur_channels.add(int(ch))
self._xs.append(int(x))
self._ys.append(float(y))
def _scatter(self, xs: Sequence[int], values: Sequence[float], width: int) -> np.ndarray:
series = np.full((width,), np.nan, dtype=np.float32)
try:
idx = np.asarray(xs, dtype=np.int64)
vals = np.asarray(values, dtype=np.float32)
series[idx] = vals
except Exception:
for x, y in zip(xs, values):
xi = int(x)
if 0 <= xi < width:
series[xi] = float(y)
return series
def start_new_sweep(self, ch: int, now_ts: Optional[float] = None) -> Optional[SweepPacket]:
packet = self.finalize_current(now_ts=now_ts)
self._reset_current()
self._cur_channel = int(ch)
self._cur_channels.add(int(ch))
return packet
@staticmethod
def _fill_missing(series: np.ndarray) -> None:
known = ~np.isnan(series)
if not np.any(known):
return
known_idx = np.nonzero(known)[0]
for i0, i1 in zip(known_idx[:-1], known_idx[1:]):
if i1 - i0 > 1:
avg = (series[i0] + series[i1]) * 0.5
series[i0 + 1 : i1] = avg
first_idx = int(known_idx[0])
last_idx = int(known_idx[-1])
if first_idx > 0:
series[:first_idx] = series[first_idx]
if last_idx < series.size - 1:
series[last_idx + 1 :] = series[last_idx]
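The gap-filling scheme in `_fill_missing` can be demonstrated on a short row; a standalone sketch of the same averaging-and-edge-extension logic:

```python
import numpy as np

def fill_missing_demo(series: np.ndarray) -> None:
    # Interior gaps get the average of the two nearest known neighbours;
    # leading/trailing gaps are extended with the nearest known value.
    known = ~np.isnan(series)
    if not np.any(known):
        return
    known_idx = np.nonzero(known)[0]
    for i0, i1 in zip(known_idx[:-1], known_idx[1:]):
        if i1 - i0 > 1:
            series[i0 + 1 : i1] = (series[i0] + series[i1]) * 0.5
    series[: known_idx[0]] = series[known_idx[0]]
    series[known_idx[-1] + 1 :] = series[known_idx[-1]]

row = np.array([np.nan, 1.0, np.nan, np.nan, 3.0, np.nan], dtype=np.float32)
fill_missing_demo(row)  # row becomes [1, 1, 2, 2, 3, 3]
```

Note the fill uses a flat average across the whole gap rather than a linear ramp, matching the method above.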
def consume(self, event: ParserEvent) -> Optional[SweepPacket]:
if isinstance(event, StartEvent):
packet = self.finalize_current()
self._reset_current()
if event.ch is not None:
self._cur_channel = int(event.ch)
self._cur_signal_kind = event.signal_kind
return packet
def consume_binary_event(self, event: BinaryEvent, now_ts: Optional[float] = None) -> List[SweepPacket]:
out: List[SweepPacket] = []
tag = event[0]
if tag == "start":
packet = self.start_new_sweep(int(event[1]), now_ts=now_ts)
if packet is not None:
out.append(packet)
return out
# point
_tag, ch, x, y = event # type: ignore[misc]
self.add_point(int(ch), int(x), float(y))
return out
point_ch = int(event.ch)
point_signal_kind = event.signal_kind
packet: Optional[SweepPacket] = None
if self._cur_channel is None:
self._cur_channel = point_ch
elif point_ch != self._cur_channel:
if self._xs:
# Never mix channels in a single sweep packet: otherwise
# identical step indexes can overwrite each other.
packet = self.finalize_current()
self._reset_current()
self._cur_channel = point_ch
if self._cur_signal_kind != point_signal_kind:
if self._xs:
packet = self.finalize_current()
self._reset_current()
self._cur_channel = point_ch
self._cur_signal_kind = point_signal_kind
self._cur_channels.add(point_ch)
self._xs.append(int(event.x))
self._ys.append(float(event.y))
if event.aux is not None:
self._aux_1.append(float(event.aux[0]))
self._aux_2.append(float(event.aux[1]))
return packet
def finalize_arrays(
self,
xs: Sequence[int],
ys: Sequence[float],
channels: Optional[Set[int]],
now_ts: Optional[float] = None,
) -> Optional[SweepPacket]:
if self._debug:
if not xs:
import sys
sys.stderr.write("[debug] _finalize_current: xs is empty; sweep skipped\n")
else:
import sys
sys.stderr.write(
f"[debug] _finalize_current: {len(xs)} points → sweep #{self._sweep_idx + 1}\n"
)
if not xs:
def finalize_current(self) -> Optional[SweepPacket]:
if not self._xs:
return None
ch_list = sorted(self._cur_channels) if self._cur_channels else [0]
ch_primary = ch_list[0] if ch_list else 0
width = max(int(max(self._xs)) + 1, 1)
self._max_width = max(self._max_width, width)
target_width = self._max_width if self._fancy else width
sweep = self._scatter(self._xs, self._ys, target_width)
aux_curves: SweepAuxCurves = None
if self._aux_1 and self._aux_2 and len(self._aux_1) == len(self._xs):
aux_curves = (
self._scatter(self._xs, self._aux_1, target_width),
self._scatter(self._xs, self._aux_2, target_width),
)
n_valid_cur = int(np.count_nonzero(np.isfinite(sweep)))
if self._fancy:
self._fill_missing(sweep)
if aux_curves is not None:
self._fill_missing(aux_curves[0])
self._fill_missing(aux_curves[1])
if self._apply_inversion:
try:
mean_value = float(np.nanmean(sweep))
if np.isfinite(mean_value) and mean_value < DATA_INVERSION_THRESHOLD:
sweep *= -1.0
except Exception:
pass
pre_exp_sweep = None
if self._logscale:
try:
pre_exp_sweep = sweep.copy()
with np.errstate(over="ignore", invalid="ignore"):
sweep = np.power(LOG_EXP, np.asarray(sweep, dtype=np.float64)).astype(np.float32)
sweep[~np.isfinite(sweep)] = np.nan
except Exception:
pass
self._sweep_idx += 1
if len(ch_list) > 1:
import sys
sys.stderr.write(f"[warn] Sweep {self._sweep_idx}: channel id changed during the sweep: {ch_list}\n")
now = time.time()
if self._last_sweep_ts is None:
dt_ms = float("nan")
else:
@@ -335,10 +648,7 @@ class SweepAssembler:
self._n_valid_hist.append((now, n_valid_cur))
while self._n_valid_hist and (now - self._n_valid_hist[0][0]) > 1.0:
self._n_valid_hist.popleft()
n_valid = float(sum(value for _ts, value in self._n_valid_hist) / len(self._n_valid_hist))
if n_valid_cur > 0:
vmin = float(np.nanmin(sweep))
@@ -352,6 +662,7 @@
"sweep": self._sweep_idx,
"ch": ch_primary,
"chs": ch_list,
"signal_kind": self._cur_signal_kind,
"n_valid": n_valid,
"min": vmin,
"max": vmax,
@@ -359,10 +670,4 @@
"std": std,
"dt_ms": dt_ms,
}
if pre_exp_sweep is not None:
info["pre_exp_sweep"] = pre_exp_sweep
return (sweep, info, aux_curves)


@@ -1,18 +1,109 @@
"""Background thread that reads and parses sweeps from the serial port."""
from __future__ import annotations
import sys
import threading
import time
from queue import Full, Queue
from typing import Optional
from rfg_adc_plotter.io.sweep_parser_core import BinaryRecordStreamParser
from rfg_adc_plotter.io.serial_source import SerialChunkReader, SerialLineSource
from rfg_adc_plotter.io.sweep_parser_core import (
AsciiSweepParser,
ComplexAsciiSweepParser,
LegacyBinaryParser,
LogScale16BitX2BinaryParser,
LogScaleBinaryParser32,
ParserTestStreamParser,
SweepAssembler,
)
from rfg_adc_plotter.types import ParserEvent, PointEvent, StartEvent, SweepPacket
_PARSER_16_BIT_X2_PROBE_BYTES = 64 * 1024
_LEGACY_STREAM_MIN_RECORDS = 32
_LEGACY_STREAM_MIN_MATCH_RATIO = 0.95
_TTY_STREAM_MIN_MATCH_RATIO = 0.60
_DEBUG_FRAME_LOG_EVERY = 10
_NO_INPUT_WARN_INTERVAL_S = 5.0
_NO_PACKET_WARN_INTERVAL_S = 5.0
_NO_PACKET_HINT_AFTER_S = 10.0
def _u16le_at(data: bytes, offset: int) -> int:
return int(data[offset]) | (int(data[offset + 1]) << 8)
def _looks_like_legacy_8byte_stream(data: bytes) -> bool:
"""Heuristically detect supported 8-byte binary streams on an arbitrary byte offset."""
buf = bytes(data)
for offset in range(8):
blocks = (len(buf) - offset) // 8
if blocks < _LEGACY_STREAM_MIN_RECORDS:
continue
min_matches = max(_LEGACY_STREAM_MIN_RECORDS, int(blocks * _LEGACY_STREAM_MIN_MATCH_RATIO))
matched_steps_legacy: list[int] = []
matched_steps_tty: list[int] = []
matched_steps_logdet: list[int] = []
for block_idx in range(blocks):
base = offset + (block_idx * 8)
if (_u16le_at(buf, base + 6) & 0x00FF) != 0x000A:
w0 = _u16le_at(buf, base)
w1 = _u16le_at(buf, base + 2)
w3 = _u16le_at(buf, base + 6)
if w0 == 0x000A and w1 != 0xFFFF:
matched_steps_tty.append(w1)
elif w0 == 0x001A and w3 == 0x0000:
matched_steps_logdet.append(w1)
continue
matched_steps_legacy.append(_u16le_at(buf, base))
if len(matched_steps_legacy) >= min_matches:
monotonic_or_reset = 0
for prev_step, next_step in zip(matched_steps_legacy, matched_steps_legacy[1:]):
if next_step == (prev_step + 1) or next_step <= prev_step:
monotonic_or_reset += 1
if monotonic_or_reset >= max(4, len(matched_steps_legacy) - 4):
return True
tty_min_matches = max(_LEGACY_STREAM_MIN_RECORDS, int(blocks * _TTY_STREAM_MIN_MATCH_RATIO))
if len(matched_steps_tty) >= tty_min_matches:
monotonic_or_reset = 0
for prev_step, next_step in zip(matched_steps_tty, matched_steps_tty[1:]):
if next_step == (prev_step + 1) or next_step <= 2:
monotonic_or_reset += 1
if monotonic_or_reset >= max(4, len(matched_steps_tty) - 4):
return True
if len(matched_steps_logdet) >= tty_min_matches:
monotonic_or_reset = 0
for prev_step, next_step in zip(matched_steps_logdet, matched_steps_logdet[1:]):
if next_step == (prev_step + 1) or next_step <= 2:
monotonic_or_reset += 1
if monotonic_or_reset >= max(4, len(matched_steps_logdet) - 4):
return True
return False
def _is_valid_parser_16_bit_x2_probe(events: list[ParserEvent]) -> bool:
"""Accept only plausible complex streams and ignore resync noise."""
point_steps: list[int] = []
for event in events:
if isinstance(event, PointEvent):
point_steps.append(int(event.x))
if len(point_steps) < 3:
return False
monotonic_or_small_reset = 0
for prev_step, next_step in zip(point_steps, point_steps[1:]):
if next_step == (prev_step + 1) or next_step <= 2:
monotonic_or_small_reset += 1
return monotonic_or_small_reset >= max(2, len(point_steps) - 3)
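The step heuristic shared by these probes (consecutive steps either increment by one or drop back to the start of a new sweep) can be exercised on toy sequences; a minimal standalone sketch:

```python
def looks_monotonic_or_reset(steps: list[int]) -> bool:
    # Same idea as the probe above: count transitions that either
    # increment the step by one or reset to the start of a sweep.
    if len(steps) < 3:
        return False
    good = sum(
        1
        for prev_step, next_step in zip(steps, steps[1:])
        if next_step == prev_step + 1 or next_step <= 2
    )
    return good >= max(2, len(steps) - 3)

clean = looks_monotonic_or_reset([1, 2, 3, 4, 1, 2, 3])   # sweeps with a reset
noise = looks_monotonic_or_reset([7, 91, 40, 12, 505])    # random words
```

The slack of a few non-matching transitions tolerates occasional corrupted records without rejecting the whole stream.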
class SweepReader(threading.Thread):
"""Background thread: read a serial source and emit completed sweep packets to the queue."""
def __init__(
self,
@@ -23,217 +114,265 @@ class SweepReader(threading.Thread):
fancy: bool = False,
bin_mode: bool = False,
logscale: bool = False,
debug: bool = False,
parser_16_bit_x2: bool = False,
parser_test: bool = False,
parser_complex_ascii: bool = False,
):
super().__init__(daemon=True)
self._port_path = port_path
self._baud = baud
self._q = out_queue
self._stop = stop_event
self._src: Optional[SerialLineSource] = None
self._baud = int(baud)
self._queue = out_queue
self._stop_event = stop_event
self._fancy = bool(fancy)
self._bin_mode = bool(bin_mode)
self._logscale = bool(logscale)
self._debug = bool(debug)
self._assembler = SweepAssembler(fancy=self._fancy, logscale=self._logscale, debug=self._debug)
self._parser_16_bit_x2 = bool(parser_16_bit_x2)
self._parser_test = bool(parser_test)
self._parser_complex_ascii = bool(parser_complex_ascii)
self._src: SerialLineSource | None = None
self._frames_read = 0
self._frames_dropped = 0
self._started_at = time.perf_counter()
def _finalize_current(self, xs, ys, channels: Optional[set]):
packet = self._assembler.finalize_arrays(xs, ys, channels)
if packet is None:
return
sweep, info = packet
def _resolve_parser_mode_label(self) -> str:
if self._parser_complex_ascii:
return "complex_ascii"
if self._parser_test:
return "parser_test_16x2"
if self._parser_16_bit_x2:
return "parser_16_bit_x2"
if self._logscale:
return "logscale_32"
if self._bin_mode:
return "legacy_8byte"
return "ascii"
def _build_parser(self):
if self._parser_complex_ascii:
return ComplexAsciiSweepParser(), SweepAssembler(fancy=self._fancy, apply_inversion=False)
if self._parser_test:
return ParserTestStreamParser(), SweepAssembler(fancy=self._fancy, apply_inversion=False)
if self._parser_16_bit_x2:
return LogScale16BitX2BinaryParser(), SweepAssembler(fancy=self._fancy, apply_inversion=False)
if self._logscale:
return LogScaleBinaryParser32(), SweepAssembler(fancy=self._fancy, apply_inversion=False)
if self._bin_mode:
return LegacyBinaryParser(), SweepAssembler(fancy=self._fancy, apply_inversion=True)
return AsciiSweepParser(), SweepAssembler(fancy=self._fancy, apply_inversion=True)
@staticmethod
def _consume_events(assembler: SweepAssembler, events) -> list[SweepPacket]:
packets: list[SweepPacket] = []
for event in events:
packet = assembler.consume(event)
if packet is not None:
packets.append(packet)
return packets
def _probe_parser_16_bit_x2(self, chunk_reader: SerialChunkReader):
parser = LogScale16BitX2BinaryParser()
probe_buf = bytearray()
probe_events: list[ParserEvent] = []
probe_started_at = time.perf_counter()
while not self._stop_event.is_set() and len(probe_buf) < _PARSER_16_BIT_X2_PROBE_BYTES:
data = chunk_reader.read_available()
if not data:
time.sleep(0.0005)
continue
probe_buf += data
probe_events.extend(parser.feed(data))
if _is_valid_parser_16_bit_x2_probe(probe_events):
assembler = SweepAssembler(fancy=self._fancy, apply_inversion=False)
probe_packets = self._consume_events(assembler, probe_events)
n_points = int(sum(1 for event in probe_events if isinstance(event, PointEvent)))
n_starts = int(sum(1 for event in probe_events if isinstance(event, StartEvent)))
probe_ms = (time.perf_counter() - probe_started_at) * 1000.0
sys.stderr.write(
"[info] parser_16_bit_x2 probe: bytes:%d events:%d points:%d starts:%d parser:16x2 elapsed_ms:%.1f\n"
% (
len(probe_buf),
len(probe_events),
n_points,
n_starts,
probe_ms,
)
)
return parser, assembler, probe_packets
probe_looks_legacy = bool(probe_buf) and _looks_like_legacy_8byte_stream(bytes(probe_buf))
n_points = int(sum(1 for event in probe_events if isinstance(event, PointEvent)))
n_starts = int(sum(1 for event in probe_events if isinstance(event, StartEvent)))
probe_ms = (time.perf_counter() - probe_started_at) * 1000.0
if probe_looks_legacy:
sys.stderr.write(
"[info] parser_16_bit_x2 probe: bytes:%d events:%d points:%d starts:%d parser:legacy(fallback) elapsed_ms:%.1f\n"
% (
len(probe_buf),
len(probe_events),
n_points,
n_starts,
probe_ms,
)
)
sys.stderr.write("[info] parser_16_bit_x2: fallback -> legacy\n")
parser = LegacyBinaryParser()
assembler = SweepAssembler(fancy=self._fancy, apply_inversion=True)
probe_packets = self._consume_events(assembler, parser.feed(bytes(probe_buf)))
return parser, assembler, probe_packets
sys.stderr.write(
"[warn] parser_16_bit_x2 probe inconclusive: bytes:%d events:%d points:%d starts:%d parser:16x2 elapsed_ms:%.1f\n"
% (
len(probe_buf),
len(probe_events),
n_points,
n_starts,
probe_ms,
)
)
sys.stderr.write(
"[hint] parser_16_bit_x2: if source is 8-byte tty CH1/CH2 stream (0x000A,step,ch1,ch2), try --bin\n"
)
assembler = SweepAssembler(fancy=self._fancy, apply_inversion=False)
return parser, assembler, []
def _enqueue(self, packet: SweepPacket) -> None:
dropped = False
try:
self._queue.put_nowait(packet)
except Full:
try:
_ = self._queue.get_nowait()
dropped = True
except Exception:
pass
try:
self._queue.put_nowait(packet)
except Exception:
pass
if dropped:
self._frames_dropped += 1
def _run_ascii_stream(self, chunk_reader: SerialChunkReader):
xs: list[int] = []
ys: list[int] = []
cur_channel: Optional[int] = None
cur_channels: set[int] = set()
buf = bytearray()
_dbg_line_count = 0
_dbg_match_count = 0
_dbg_sweep_count = 0
while not self._stop.is_set():
data = chunk_reader.read_available()
if data:
buf += data
else:
time.sleep(0.0005)
continue
while True:
nl = buf.find(b"\n")
if nl == -1:
break
line = bytes(buf[:nl])
del buf[: nl + 1]
if line.endswith(b"\r"):
line = line[:-1]
if not line:
continue
_dbg_line_count += 1
if line.startswith(b"Sweep_start"):
if self._debug:
sys.stderr.write(f"[debug] ASCII line #{_dbg_line_count}: Sweep_start → finalizing sweep\n")
_dbg_sweep_count += 1
self._finalize_current(xs, ys, cur_channels)
xs.clear()
ys.clear()
cur_channel = None
cur_channels.clear()
continue
if len(line) >= 3:
parts = line.split()
if len(parts) >= 3 and (parts[0].lower() == b"s" or parts[0].lower().startswith(b"s")):
self._frames_read += 1
if self._frames_read % _DEBUG_FRAME_LOG_EVERY == 0:
sweep, info, _aux = packet
try:
if parts[0].lower() == b"s":
if len(parts) >= 4:
ch = int(parts[1], 10)
x = int(parts[2], 10)
y = int(parts[3], 10)
else:
ch = 0
x = int(parts[1], 10)
y = int(parts[2], 10)
else:
ch = int(parts[0][1:], 10)
x = int(parts[1], 10)
y = int(parts[2], 10)
queue_size = self._queue.qsize()
except Exception:
if self._debug and _dbg_line_count <= 5:
hex_repr = " ".join(f"{b:02x}" for b in line[:16])
queue_size = -1
elapsed_s = max(time.perf_counter() - self._started_at, 1e-9)
frames_per_sec = float(self._frames_read) / elapsed_s
sweep_idx = info.get("sweep") if isinstance(info, dict) else None
channel = info.get("ch") if isinstance(info, dict) else None
sys.stderr.write(
f"[debug] ASCII line #{_dbg_line_count} ({len(line)} bytes): {hex_repr}"
f"{'...' if len(line) > 16 else ''} → looks like 's' but failed to parse\n"
"[debug] reader frames:%d rate:%.2f/s last_sweep:%s ch:%s width:%d queue:%d dropped:%d\n"
% (
self._frames_read,
frames_per_sec,
str(sweep_idx),
str(channel),
int(getattr(sweep, "size", 0)),
int(queue_size),
self._frames_dropped,
)
continue
_dbg_match_count += 1
if self._debug and _dbg_match_count <= 3:
sys.stderr.write(f"[debug] ASCII point: ch={ch} x={x} y={y}\n")
if cur_channel is None:
cur_channel = ch
cur_channels.add(ch)
xs.append(x)
ys.append(y)
continue
if self._debug and _dbg_line_count <= 5:
hex_repr = " ".join(f"{b:02x}" for b in line[:16])
sys.stderr.write(
f"[debug] ASCII line #{_dbg_line_count} ({len(line)} bytes): {hex_repr}"
f"{'...' if len(line) > 16 else ''} → no match\n"
)
if self._debug and _dbg_line_count % 100 == 0:
sys.stderr.write(
f"[debug] ASCII stats: lines={_dbg_line_count}, "
f"matches={_dbg_match_count}, sweeps={_dbg_sweep_count}\n"
)
if len(buf) > 1_000_000:
del buf[:-262144]
self._finalize_current(xs, ys, cur_channels)
def _run_binary_stream(self, chunk_reader: SerialChunkReader):
xs: list[int] = []
ys: list[float] = []
cur_channel: Optional[int] = None
cur_channels: set[int] = set()
parser = BinaryRecordStreamParser()
# Both wire formats are supported:
# 1) legacy: 8-byte records (start/point with a single int32 value).
# 2) log-detector: start = FFFF x5 + (ch<<8)|0x0A,
# point = step + (avg1, avg2), where avg1/avg2 are 32-bit or 128-bit wide.
# For points the parser converts (avg1, avg2) directly to linear amplitude y.
# In both modes parser.feed() shifts by one byte on desynchronization.
_dbg_byte_count = 0
_dbg_desync_count = 0
_dbg_sweep_count = 0
_dbg_point_count = 0
while not self._stop.is_set():
data = chunk_reader.read_available()
if data:
events = parser.feed(data)
else:
time.sleep(0.0005)
continue
for ev in events:
tag = ev[0]
if tag == "start":
ch_new = int(ev[1])
if self._debug:
sys.stderr.write(f"[debug] BIN: sweep start, ch={ch_new}\n")
_dbg_sweep_count += 1
self._finalize_current(xs, ys, cur_channels)
xs.clear()
ys.clear()
cur_channels.clear()
cur_channel = ch_new
cur_channels.add(cur_channel)
continue
_tag, ch_from_term, step, value_i32 = ev # type: ignore[misc]
if cur_channel is None:
cur_channel = int(ch_from_term)
cur_channels.add(int(ch_from_term))
xs.append(int(step))
ys.append(float(value_i32))
_dbg_point_count += 1
if self._debug and _dbg_point_count <= 3:
sys.stderr.write(
f"[debug] BIN point: step={int(step)} ch={int(ch_from_term)} → value={float(value_i32):.3f}\n"
)
_dbg_byte_count = parser.bytes_consumed
_dbg_desync_count = parser.desync_count
if self._debug and _dbg_byte_count > 0 and _dbg_byte_count % 4000 < 8:
sys.stderr.write(
f"[debug] BIN stats: bytes={_dbg_byte_count}, "
f"desyncs={_dbg_desync_count}, points={_dbg_point_count}, sweeps={_dbg_sweep_count}\n"
)
if parser.buffered_size() > 1_000_000:
parser.clear_buffer_keep_tail(262_144)
self._finalize_current(xs, ys, cur_channels)
def run(self):
def run(self) -> None:
try:
self._src = SerialLineSource(self._port_path, self._baud, timeout=1.0)
queue_cap = int(getattr(self._queue, "maxsize", -1))
sys.stderr.write(f"[info] Opened port {self._port_path} ({self._src._using})\n")
except Exception as e:
sys.stderr.write(f"[error] {e}\n")
sys.stderr.write(
"[info] reader start: parser:%s fancy:%d queue_max:%d source:%s\n"
% (
self._resolve_parser_mode_label(),
int(self._fancy),
queue_cap,
getattr(self._src, "_using", "unknown"),
)
)
except Exception as exc:
sys.stderr.write(f"[error] {exc}\n")
return
try:
chunk_reader = SerialChunkReader(self._src)
if self._debug:
mode_str = "binary (--bin)" if self._bin_mode else "ASCII (default)"
sys.stderr.write(f"[debug] Parser mode: {mode_str}\n")
if self._bin_mode:
self._run_binary_stream(chunk_reader)
if self._parser_16_bit_x2:
parser, assembler, pending_packets = self._probe_parser_16_bit_x2(chunk_reader)
else:
self._run_ascii_stream(chunk_reader)
parser, assembler = self._build_parser()
pending_packets = []
for packet in pending_packets:
self._enqueue(packet)
loop_started_at = time.perf_counter()
last_input_at = loop_started_at
last_packet_at = loop_started_at
last_no_input_warn_at = loop_started_at
last_no_packet_warn_at = loop_started_at
parser_hint_emitted = False
while not self._stop_event.is_set():
data = chunk_reader.read_available()
now_s = time.perf_counter()
if not data:
input_idle_s = now_s - last_input_at
if (
input_idle_s >= _NO_INPUT_WARN_INTERVAL_S
and (now_s - last_no_input_warn_at) >= _NO_INPUT_WARN_INTERVAL_S
):
sys.stderr.write(
"[warn] reader no input bytes for %.1fs on %s (parser:%s)\n"
% (
input_idle_s,
self._port_path,
self._resolve_parser_mode_label(),
)
)
last_no_input_warn_at = now_s
packets_idle_s = now_s - last_packet_at
if (
packets_idle_s >= _NO_PACKET_WARN_INTERVAL_S
and (now_s - last_no_packet_warn_at) >= _NO_PACKET_WARN_INTERVAL_S
):
try:
queue_size = self._queue.qsize()
except Exception:
queue_size = -1
sys.stderr.write(
"[warn] reader no sweep packets for %.1fs (input_idle:%.1fs queue:%d parser:%s)\n"
% (
packets_idle_s,
input_idle_s,
int(queue_size),
self._resolve_parser_mode_label(),
)
)
last_no_packet_warn_at = now_s
if (
self._parser_16_bit_x2
and (not parser_hint_emitted)
and (now_s - self._started_at) >= _NO_PACKET_HINT_AFTER_S
):
sys.stderr.write(
"[hint] parser_16_bit_x2 still has no sweeps; if source is tty CH1/CH2, rerun with --bin\n"
)
parser_hint_emitted = True
time.sleep(0.0005)
continue
last_input_at = now_s
packets = self._consume_events(assembler, parser.feed(data))
if packets:
last_packet_at = now_s
for packet in packets:
self._enqueue(packet)
packet = assembler.finalize_current()
if packet is not None:
self._enqueue(packet)
finally:
try:
if self._src is not None:

rfg_adc_plotter/main.py Executable file → Normal file

@@ -1,137 +1,25 @@
#!/usr/bin/env python3
"""
Realtime plotter for sweeps from a virtual COM port.
"""Main entrypoint for the modularized ADC plotter."""
Line format:
- "Sweep_start": start of a new sweep (the previous one is considered complete)
- "s CH X Y": a point (channel number, X index, Y value), all signed integers
from __future__ import annotations
Four plots are drawn:
- Raw data: the latest received sweep (Y vs X)
- Raw-data waterfall: the last N sweeps
- FFT of the current sweep
- B-scan: waterfall of FFT rows
Dependencies: numpy. PySerial is optional; when it is missing,
raw TTY access via termios is used.
GUI: matplotlib (compatible) or pyqtgraph (fast).
"""
import argparse
import sys
def build_parser() -> argparse.ArgumentParser:
parser = argparse.ArgumentParser(
description=(
"Reads sweeps from a virtual COM port and plots "
"the latest sweep and a waterfall (realtime)."
)
)
parser.add_argument(
"port",
help="Port path, e.g. /dev/ttyACM1 or COM3 (COM10+: \\\\.\\COM10)",
)
parser.add_argument("--baud", type=int, default=115200, help="Baud rate (default 115200)")
parser.add_argument("--max-sweeps", type=int, default=200, help="Number of visible sweeps in the waterfall")
parser.add_argument("--max-fps", type=float, default=30.0, help="Redraw rate limit, frames/s")
parser.add_argument("--cmap", default="viridis", help="Waterfall colormap")
parser.add_argument(
"--spec-clip",
default="2,98",
help=(
"Percentile clipping of the spectrum-waterfall levels, %% (min,max). "
"E.g. 2,98. 'off' disables it"
),
)
parser.add_argument(
"--spec-mean-sec",
type=float,
default=0.0,
help=(
"Subtract the per-frequency mean over the last N seconds "
"in the spectrum waterfall (0 disables it)"
),
)
parser.add_argument("--title", default="ADC Sweeps", help="Window title")
parser.add_argument(
"--fancy",
action="store_true",
help="Fill dropped points with the average of the neighbouring values",
)
parser.add_argument(
"--ylim",
type=str,
default=None,
help="Fixed Y limits for the curve as min,max (e.g. -1000,1000). Auto by default",
)
parser.add_argument(
"--backend",
choices=["auto", "pg", "mpl"],
default="auto",
help="Graphics backend: pyqtgraph (pg) is faster; matplotlib (mpl) is more compatible. Default: auto",
)
parser.add_argument(
"--norm-type",
choices=["projector", "simple"],
default="projector",
help="Normalization type: projector (by envelopes in [-1000,+1000]) or simple (raw/calib)",
)
parser.add_argument(
"--ifft-complex-mode",
choices=["arccos", "diff"],
default="arccos",
help=(
"Complex-spectrum reconstruction mode before the IFFT: "
"arccos (phi=arccos(x), unwrap) or diff (sin(phi) via a numerical derivative)"
),
)
parser.add_argument(
"--bin",
dest="bin_mode",
action="store_true",
help=(
"Binary protocol (8 bytes per record, LE u16 words): "
"sweep start ff ff ff ff ff ff 0a [ch]; "
"point step_u16 hi_u16 lo_u16 0a [ch]; "
"value=sign_ext((hi<<16)|lo); ch=0..N in the high byte of the marker word"
),
)
parser.add_argument(
"--logscale",
action="store_true",
help="After the sign correction, apply the exponent LOG_EXP**x (LOG_EXP=2)",
)
parser.add_argument(
"--debug",
action="store_true",
help="Parser debug output: shows received lines/words and the reasons sweeps are missing",
)
return parser
from rfg_adc_plotter.cli import build_parser
def main():
def main() -> None:
args = build_parser().parse_args()
if args.backend == "mpl":
sys.stderr.write("[error] Matplotlib backend removed. Use --backend pg or --backend auto.\n")
raise SystemExit(2)
if args.backend == "pg":
from rfg_adc_plotter.gui.pyqtgraph_backend import run_pyqtgraph
try:
run_pyqtgraph(args)
except Exception as e:
sys.stderr.write(f"[error] PyQtGraph backend unavailable: {e}\n")
sys.exit(1)
return
if args.backend == "auto":
try:
from rfg_adc_plotter.gui.pyqtgraph_backend import run_pyqtgraph
run_pyqtgraph(args)
return
except Exception:
pass  # Fall back to matplotlib
from rfg_adc_plotter.gui.matplotlib_backend import run_matplotlib
run_matplotlib(args)
except Exception as exc:
sys.stderr.write(f"[error] PyQtGraph backend unavailable: {exc}\n")
raise SystemExit(1) from exc
if __name__ == "__main__":


@@ -0,0 +1,79 @@
"""Pure sweep-processing helpers."""
from rfg_adc_plotter.processing.background import (
load_fft_background,
save_fft_background,
subtract_fft_background,
validate_fft_background,
)
from rfg_adc_plotter.processing.calibration import (
build_calib_envelope,
build_complex_calibration_curve,
calibrate_freqs,
get_calibration_base,
get_calibration_coeffs,
load_calib_envelope,
load_complex_calibration,
recalculate_calibration_c,
save_calib_envelope,
save_complex_calibration,
set_calibration_base_value,
)
from rfg_adc_plotter.processing.fft import (
compute_distance_axis,
compute_fft_complex_row,
compute_fft_mag_row,
compute_fft_row,
fft_mag_to_db,
)
from rfg_adc_plotter.processing.formatting import (
compute_auto_ylim,
format_status_kv,
parse_spec_clip,
)
from rfg_adc_plotter.processing.normalization import (
build_calib_envelopes,
fit_complex_calibration_to_width,
normalize_by_complex_calibration,
normalize_by_envelope,
normalize_by_calib,
)
from rfg_adc_plotter.processing.peaks import (
find_peak_width_markers,
find_top_peaks_over_ref,
rolling_median_ref,
)
__all__ = [
"build_calib_envelopes",
"build_calib_envelope",
"build_complex_calibration_curve",
"calibrate_freqs",
"compute_auto_ylim",
"compute_distance_axis",
"compute_fft_complex_row",
"compute_fft_mag_row",
"compute_fft_row",
"fft_mag_to_db",
"find_peak_width_markers",
"find_top_peaks_over_ref",
"format_status_kv",
"get_calibration_base",
"get_calibration_coeffs",
"load_calib_envelope",
"load_complex_calibration",
"load_fft_background",
"fit_complex_calibration_to_width",
"normalize_by_complex_calibration",
"normalize_by_envelope",
"normalize_by_calib",
"parse_spec_clip",
"recalculate_calibration_c",
"rolling_median_ref",
"save_calib_envelope",
"save_complex_calibration",
"save_fft_background",
"set_calibration_base_value",
"subtract_fft_background",
"validate_fft_background",
]


@@ -0,0 +1,66 @@
"""Helpers for persisted FFT background profiles."""
from __future__ import annotations
from pathlib import Path
import numpy as np
def validate_fft_background(background: np.ndarray) -> np.ndarray:
"""Validate a saved FFT background payload."""
values = np.asarray(background)
if values.ndim != 1:
raise ValueError("FFT background must be a 1D array")
if not np.issubdtype(values.dtype, np.number):
raise ValueError("FFT background must be numeric")
values = np.asarray(values, dtype=np.float32).reshape(-1)
if values.size == 0:
raise ValueError("FFT background is empty")
return values
def _normalize_background_path(path: str | Path) -> Path:
out = Path(path).expanduser()
if out.suffix.lower() != ".npy":
out = out.with_suffix(".npy")
return out
def save_fft_background(path: str | Path, background: np.ndarray) -> str:
"""Persist an FFT background profile as a .npy file."""
normalized_path = _normalize_background_path(path)
values = validate_fft_background(background)
np.save(normalized_path, values.astype(np.float32, copy=False))
return str(normalized_path)
def load_fft_background(path: str | Path) -> np.ndarray:
"""Load and validate an FFT background profile from a .npy file."""
normalized_path = _normalize_background_path(path)
loaded = np.load(normalized_path, allow_pickle=False)
return validate_fft_background(loaded)
def subtract_fft_background(signal_mag: np.ndarray, background_mag: np.ndarray) -> np.ndarray:
"""Subtract a background profile from FFT magnitudes in linear amplitude."""
signal = np.asarray(signal_mag, dtype=np.float32)
background = validate_fft_background(background_mag)
if signal.ndim == 1:
if signal.size != background.size:
raise ValueError("FFT background size does not match signal size")
valid = np.isfinite(signal) & np.isfinite(background)
out = np.full_like(signal, np.nan, dtype=np.float32)
if np.any(valid):
out[valid] = np.maximum(signal[valid] - background[valid], 0.0)
return out
if signal.ndim == 2:
if signal.shape[0] != background.size:
raise ValueError("FFT background size does not match signal rows")
background_2d = background[:, None]
valid = np.isfinite(signal) & np.isfinite(background_2d)
diff = signal - background_2d
return np.where(valid, np.maximum(diff, 0.0), np.nan).astype(np.float32, copy=False)
raise ValueError("FFT background subtraction supports only 1D or 2D signals")
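The 2D branch of `subtract_fft_background` leans on NumPy broadcasting to subtract one background bin from every waterfall column. A minimal standalone sketch of the same semantics (plain NumPy with hypothetical values, independent of the module above):

```python
import numpy as np

# One background value per frequency bin; the signal is (bins, columns).
background = np.array([1.0, 2.0, np.nan], dtype=np.float32)
signal_2d = np.array([[3.0, 0.5],
                      [5.0, 1.0],
                      [7.0, 7.0]], dtype=np.float32)

# Broadcast the background down each column, clamp negative residuals
# to zero, and propagate NaN where either side is non-finite.
bg_col = background[:, None]
valid = np.isfinite(signal_2d) & np.isfinite(bg_col)
result = np.where(valid, np.maximum(signal_2d - bg_col, 0.0), np.nan)
```

The `(bins,) -> (bins, 1)` reshape is what lets a single profile apply to every column without an explicit loop.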

@ -0,0 +1,169 @@
"""Frequency-axis calibration helpers."""
from __future__ import annotations
from pathlib import Path
from typing import Any, Mapping
import numpy as np
from rfg_adc_plotter.constants import SWEEP_FREQ_MAX_GHZ, SWEEP_FREQ_MIN_GHZ
from rfg_adc_plotter.processing.normalization import build_calib_envelopes
from rfg_adc_plotter.types import SweepData
def recalculate_calibration_c(
base_coeffs: np.ndarray,
f_min: float = SWEEP_FREQ_MIN_GHZ,
f_max: float = SWEEP_FREQ_MAX_GHZ,
) -> np.ndarray:
"""Recalculate coefficients while preserving sweep edges."""
coeffs = np.asarray(base_coeffs, dtype=np.float64).reshape(-1)
if coeffs.size < 3:
out = np.zeros((3,), dtype=np.float64)
out[: coeffs.size] = coeffs
coeffs = out
c0, c1, c2 = float(coeffs[0]), float(coeffs[1]), float(coeffs[2])
x0 = float(f_min)
x1 = float(f_max)
y0 = c0 + c1 * x0 + c2 * (x0 ** 2)
y1 = c0 + c1 * x1 + c2 * (x1 ** 2)
if not (np.isfinite(y0) and np.isfinite(y1)) or y1 == y0:
return np.asarray([c0, c1, c2], dtype=np.float64)
scale = (x1 - x0) / (y1 - y0)
shift = x0 - scale * y0
return np.asarray(
[
shift + scale * c0,
scale * c1,
scale * c2,
],
dtype=np.float64,
)
CALIBRATION_C_BASE = np.asarray([0.0, 1.0, 0.025], dtype=np.float64)
CALIBRATION_C = recalculate_calibration_c(CALIBRATION_C_BASE)
def get_calibration_base() -> np.ndarray:
return np.asarray(CALIBRATION_C_BASE, dtype=np.float64).copy()
def get_calibration_coeffs() -> np.ndarray:
return np.asarray(CALIBRATION_C, dtype=np.float64).copy()
def set_calibration_base_value(index: int, value: float) -> np.ndarray:
"""Update one base coefficient and recalculate the working coefficients."""
global CALIBRATION_C
CALIBRATION_C_BASE[int(index)] = float(value)
CALIBRATION_C = recalculate_calibration_c(CALIBRATION_C_BASE)
return get_calibration_coeffs()
def calibrate_freqs(sweep: Mapping[str, Any]) -> SweepData:
"""Return a sweep copy with calibrated and resampled frequency axis."""
freqs = np.asarray(sweep["F"], dtype=np.float64).copy()
values_in = np.asarray(sweep["I"]).reshape(-1)
values = np.asarray(
values_in,
dtype=np.complex128 if np.iscomplexobj(values_in) else np.float64,
).copy()
coeffs = np.asarray(CALIBRATION_C, dtype=np.float64)
if freqs.size > 0:
freqs = coeffs[0] + coeffs[1] * freqs + coeffs[2] * (freqs * freqs)
if freqs.size >= 2:
freqs_cal = np.linspace(float(freqs[0]), float(freqs[-1]), freqs.size, dtype=np.float64)
if np.iscomplexobj(values):
values_real = np.interp(freqs_cal, freqs, values.real.astype(np.float64, copy=False))
values_imag = np.interp(freqs_cal, freqs, values.imag.astype(np.float64, copy=False))
values_cal = (values_real + (1j * values_imag)).astype(np.complex64)
else:
values_cal = np.interp(freqs_cal, freqs, values).astype(np.float64)
else:
freqs_cal = freqs.copy()
values_cal = values.copy()
return {
"F": freqs_cal,
"I": values_cal,
}
def build_calib_envelope(sweep: np.ndarray) -> np.ndarray:
"""Build the active calibration envelope from a raw sweep."""
values = np.asarray(sweep, dtype=np.float32).reshape(-1)
if values.size == 0:
raise ValueError("Calibration sweep is empty")
_, upper = build_calib_envelopes(values)
return np.asarray(upper, dtype=np.float32)
def build_complex_calibration_curve(ch1: np.ndarray, ch2: np.ndarray) -> np.ndarray:
"""Build a complex calibration curve as ``ch1 + 1j*ch2``."""
ch1_arr = np.asarray(ch1, dtype=np.float32).reshape(-1)
ch2_arr = np.asarray(ch2, dtype=np.float32).reshape(-1)
width = min(ch1_arr.size, ch2_arr.size)
if width <= 0:
raise ValueError("Complex calibration source is empty")
curve = ch1_arr[:width].astype(np.complex64) + (1j * ch2_arr[:width].astype(np.complex64))
return validate_complex_calibration_curve(curve)
def validate_calib_envelope(envelope: np.ndarray) -> np.ndarray:
"""Validate a saved calibration envelope payload."""
values = np.asarray(envelope)
if not np.issubdtype(values.dtype, np.number):
raise ValueError("Calibration envelope must be numeric")
values = np.asarray(values, dtype=np.float32).reshape(-1)
if values.size == 0:
raise ValueError("Calibration envelope is empty")
return values
def validate_complex_calibration_curve(curve: np.ndarray) -> np.ndarray:
"""Validate a saved complex calibration payload."""
values = np.asarray(curve).reshape(-1)
if values.size == 0:
raise ValueError("Complex calibration curve is empty")
if not np.issubdtype(values.dtype, np.number):
raise ValueError("Complex calibration curve must be numeric")
return np.asarray(values, dtype=np.complex64)
def _normalize_calib_path(path: str | Path) -> Path:
out = Path(path).expanduser()
if out.suffix.lower() != ".npy":
out = out.with_suffix(".npy")
return out
def save_calib_envelope(path: str | Path, envelope: np.ndarray) -> str:
"""Persist a calibration envelope as a .npy file and return the final path."""
normalized_path = _normalize_calib_path(path)
values = validate_calib_envelope(envelope)
np.save(normalized_path, values.astype(np.float32, copy=False))
return str(normalized_path)
def load_calib_envelope(path: str | Path) -> np.ndarray:
"""Load and validate a calibration envelope from a .npy file."""
normalized_path = _normalize_calib_path(path)
loaded = np.load(normalized_path, allow_pickle=False)
return validate_calib_envelope(loaded)
def save_complex_calibration(path: str | Path, curve: np.ndarray) -> str:
"""Persist a complex calibration curve as a .npy file and return the final path."""
normalized_path = _normalize_calib_path(path)
values = validate_complex_calibration_curve(curve)
np.save(normalized_path, values.astype(np.complex64, copy=False))
return str(normalized_path)
def load_complex_calibration(path: str | Path) -> np.ndarray:
"""Load and validate a complex calibration curve from a .npy file."""
normalized_path = _normalize_calib_path(path)
loaded = np.load(normalized_path, allow_pickle=False)
return validate_complex_calibration_curve(loaded)
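`recalculate_calibration_c` above applies an affine rescale chosen so that the quadratic maps both sweep edges onto themselves. A standalone check of that edge-preserving property (the rescale re-implemented inline; the edge frequencies 1.0 and 6.0 are hypothetical stand-ins for `SWEEP_FREQ_MIN_GHZ`/`SWEEP_FREQ_MAX_GHZ`):

```python
def rescale_edges(c0, c1, c2, x0, x1):
    # Evaluate the quadratic y(x) = c0 + c1*x + c2*x^2 at the sweep edges,
    # then solve shift + scale*y(x0) = x0 and shift + scale*y(x1) = x1.
    y0 = c0 + c1 * x0 + c2 * x0 ** 2
    y1 = c0 + c1 * x1 + c2 * x1 ** 2
    scale = (x1 - x0) / (y1 - y0)
    shift = x0 - scale * y0
    return shift + scale * c0, scale * c1, scale * c2


a0, a1, a2 = rescale_edges(0.0, 1.0, 0.025, 1.0, 6.0)
for x in (1.0, 6.0):
    # Both edges are fixed points of the rescaled polynomial: p(x_edge) == x_edge.
    assert abs((a0 + a1 * x + a2 * x ** 2) - x) < 1e-9
```

Algebraically, `p(x) = shift + scale * y(x)`, so `p(x0) = x0` and `p(x1) = x0 + (x1 - x0) = x1` hold exactly.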

@ -0,0 +1,511 @@
"""FFT helpers for line and waterfall views."""
from __future__ import annotations
from typing import Optional, Tuple
import numpy as np
from rfg_adc_plotter.constants import C_M_S, FFT_LEN, SWEEP_FREQ_MAX_GHZ, SWEEP_FREQ_MIN_GHZ
def _finite_freq_bounds(freqs: Optional[np.ndarray]) -> Optional[Tuple[float, float]]:
"""Return finite frequency bounds for the current working segment."""
if freqs is None:
return None
freq_arr = np.asarray(freqs, dtype=np.float64).reshape(-1)
finite = freq_arr[np.isfinite(freq_arr)]
if finite.size < 2:
return None
f_min = float(np.min(finite))
f_max = float(np.max(finite))
if not np.isfinite(f_min) or not np.isfinite(f_max) or f_max <= f_min:
return None
return f_min, f_max
def _coerce_sweep_array(sweep: np.ndarray) -> np.ndarray:
values = np.asarray(sweep).reshape(-1)
if np.iscomplexobj(values):
return np.asarray(values, dtype=np.complex64)
return np.asarray(values, dtype=np.float32)
def _interp_signal(x_uniform: np.ndarray, x_known: np.ndarray, y_known: np.ndarray) -> np.ndarray:
if np.iscomplexobj(y_known):
real = np.interp(x_uniform, x_known, np.asarray(y_known.real, dtype=np.float64))
imag = np.interp(x_uniform, x_known, np.asarray(y_known.imag, dtype=np.float64))
return (real + (1j * imag)).astype(np.complex64)
return np.interp(x_uniform, x_known, np.asarray(y_known, dtype=np.float64)).astype(np.float32)
def _fit_complex_bins(values: np.ndarray, bins: int) -> np.ndarray:
arr = np.asarray(values, dtype=np.complex64).reshape(-1)
if bins <= 0:
return np.zeros((0,), dtype=np.complex64)
if arr.size == bins:
return arr
out = np.full((bins,), np.nan + 0j, dtype=np.complex64)
take = min(arr.size, bins)
out[:take] = arr[:take]
return out
def _extract_positive_exact_band(
sweep: np.ndarray,
freqs: Optional[np.ndarray],
) -> Optional[Tuple[np.ndarray, np.ndarray, float, float]]:
"""Return sorted positive band data and exact-grid parameters."""
if freqs is None:
return None
sweep_arr = _coerce_sweep_array(sweep)
freq_arr = np.asarray(freqs, dtype=np.float64).reshape(-1)
take = min(int(sweep_arr.size), int(freq_arr.size))
if take <= 1:
return None
sweep_seg = sweep_arr[:take]
freq_seg = freq_arr[:take]
valid = np.isfinite(freq_seg) & np.isfinite(sweep_seg) & (freq_seg > 0.0)
if int(np.count_nonzero(valid)) < 2:
return None
freq_band = np.asarray(freq_seg[valid], dtype=np.float64)
sweep_band = np.asarray(sweep_seg[valid])
order = np.argsort(freq_band, kind="mergesort")
freq_band = freq_band[order]
sweep_band = sweep_band[order]
n_band = int(freq_band.size)
if n_band <= 1:
return None
f_min = float(freq_band[0])
f_max = float(freq_band[-1])
if (not np.isfinite(f_min)) or (not np.isfinite(f_max)) or f_max <= f_min:
return None
df_ghz = float((f_max - f_min) / max(1, n_band - 1))
if (not np.isfinite(df_ghz)) or df_ghz <= 0.0:
return None
return freq_band, sweep_band, f_max, df_ghz
def _positive_exact_shift_size(f_max: float, df_ghz: float) -> int:
if (not np.isfinite(f_max)) or (not np.isfinite(df_ghz)) or f_max <= 0.0 or df_ghz <= 0.0:
return 0
return int(np.arange(-f_max, f_max + (0.5 * df_ghz), df_ghz, dtype=np.float64).size)
def _resolve_positive_exact_band_size(
f_min: float,
f_max: float,
n_band: int,
max_shift_len: Optional[int],
) -> int:
if n_band <= 2:
return max(2, int(n_band))
if max_shift_len is None:
return int(n_band)
limit = int(max_shift_len)
if limit <= 1:
return max(2, int(n_band))
span = float(f_max - f_min)
if (not np.isfinite(span)) or span <= 0.0:
return int(n_band)
df_current = float(span / max(1, int(n_band) - 1))
if _positive_exact_shift_size(f_max, df_current) <= limit:
return int(n_band)
denom = max(2.0 * f_max, 1e-12)
approx = int(np.floor(1.0 + ((float(limit - 1) * span) / denom)))
target = min(int(n_band), max(2, approx))
while target > 2:
df_try = float(span / max(1, target - 1))
if _positive_exact_shift_size(f_max, df_try) <= limit:
break
target -= 1
return max(2, target)
def _normalize_positive_exact_band(
freq_band: np.ndarray,
sweep_band: np.ndarray,
*,
max_shift_len: Optional[int] = None,
) -> Optional[Tuple[np.ndarray, np.ndarray, float, float]]:
freq_arr = np.asarray(freq_band, dtype=np.float64).reshape(-1)
sweep_arr = np.asarray(sweep_band).reshape(-1)
width = min(int(freq_arr.size), int(sweep_arr.size))
if width <= 1:
return None
freq_arr = freq_arr[:width]
sweep_arr = sweep_arr[:width]
f_min = float(freq_arr[0])
f_max = float(freq_arr[-1])
if (not np.isfinite(f_min)) or (not np.isfinite(f_max)) or f_max <= f_min:
return None
target_band = _resolve_positive_exact_band_size(f_min, f_max, int(freq_arr.size), max_shift_len)
if target_band < int(freq_arr.size):
target_freqs = np.linspace(f_min, f_max, target_band, dtype=np.float64)
target_sweep = _interp_signal(target_freqs, freq_arr, sweep_arr)
freq_arr = target_freqs
sweep_arr = np.asarray(target_sweep).reshape(-1)
n_band = int(freq_arr.size)
if n_band <= 1:
return None
df_ghz = float((f_max - f_min) / max(1, n_band - 1))
if (not np.isfinite(df_ghz)) or df_ghz <= 0.0:
return None
return freq_arr, sweep_arr, f_max, df_ghz
def _resolve_positive_only_exact_geometry(
freqs: Optional[np.ndarray],
*,
max_shift_len: Optional[int] = None,
) -> Optional[Tuple[int, float]]:
"""Return (N_shift, df_hz) for the exact centered positive-only mode."""
if freqs is None:
return None
freq_arr = np.asarray(freqs, dtype=np.float64).reshape(-1)
finite = np.asarray(freq_arr[np.isfinite(freq_arr) & (freq_arr > 0.0)], dtype=np.float64)
if finite.size < 2:
return None
finite.sort(kind="mergesort")
f_min = float(finite[0])
f_max = float(finite[-1])
if (not np.isfinite(f_min)) or (not np.isfinite(f_max)) or f_max <= f_min:
return None
n_band = int(finite.size)
target_band = _resolve_positive_exact_band_size(f_min, f_max, n_band, max_shift_len)
n_band = max(2, min(n_band, target_band))
df_ghz = float((f_max - f_min) / max(1, n_band - 1))
if (not np.isfinite(df_ghz)) or df_ghz <= 0.0:
return None
n_shift = _positive_exact_shift_size(f_max, df_ghz)
if n_shift <= 1:
return None
return int(n_shift), float(df_ghz * 1e9)
def prepare_fft_segment(
sweep: np.ndarray,
freqs: Optional[np.ndarray],
fft_len: int = FFT_LEN,
) -> Optional[Tuple[np.ndarray, int]]:
"""Prepare a sweep segment for FFT on a uniform frequency grid."""
take_fft = min(int(sweep.size), int(fft_len))
if take_fft <= 0:
return None
sweep_arr = _coerce_sweep_array(sweep)
sweep_seg = sweep_arr[:take_fft]
fallback_dtype = np.complex64 if np.iscomplexobj(sweep_seg) else np.float32
fallback = np.nan_to_num(sweep_seg, nan=0.0).astype(fallback_dtype, copy=False)
if freqs is None:
return fallback, take_fft
freq_arr = np.asarray(freqs)
if freq_arr.size < take_fft:
return fallback, take_fft
freq_seg = np.asarray(freq_arr[:take_fft], dtype=np.float64)
valid = np.isfinite(sweep_seg) & np.isfinite(freq_seg)
if int(np.count_nonzero(valid)) < 2:
return fallback, take_fft
x_valid = freq_seg[valid]
y_valid = sweep_seg[valid]
order = np.argsort(x_valid, kind="mergesort")
x_valid = x_valid[order]
y_valid = y_valid[order]
x_unique, unique_idx = np.unique(x_valid, return_index=True)
y_unique = y_valid[unique_idx]
if x_unique.size < 2 or x_unique[-1] <= x_unique[0]:
return fallback, take_fft
x_uniform = np.linspace(float(x_unique[0]), float(x_unique[-1]), take_fft, dtype=np.float64)
resampled = _interp_signal(x_uniform, x_unique, y_unique)
return resampled, take_fft
def build_symmetric_ifft_spectrum(
sweep: np.ndarray,
freqs: Optional[np.ndarray],
fft_len: int = FFT_LEN,
) -> Optional[np.ndarray]:
"""Build a centered symmetric spectrum over [-f_max, f_max] for IFFT."""
if fft_len <= 0:
return None
bounds = _finite_freq_bounds(freqs)
if bounds is None:
f_min = float(SWEEP_FREQ_MIN_GHZ)
f_max = float(SWEEP_FREQ_MAX_GHZ)
else:
f_min, f_max = bounds
freq_axis = np.linspace(-f_max, f_max, int(fft_len), dtype=np.float64)
neg_idx_all = np.flatnonzero(freq_axis <= (-f_min))
pos_idx_all = np.flatnonzero(freq_axis >= f_min)
band_len = int(min(neg_idx_all.size, pos_idx_all.size))
if band_len <= 1:
return None
neg_idx = neg_idx_all[:band_len]
pos_idx = pos_idx_all[-band_len:]
prepared = prepare_fft_segment(sweep, freqs, fft_len=band_len)
if prepared is None:
return None
fft_seg, take_fft = prepared
if take_fft != band_len:
fft_dtype = np.complex64 if np.iscomplexobj(fft_seg) else np.float32
fft_seg = np.asarray(fft_seg[:band_len], dtype=fft_dtype)
if fft_seg.size < band_len:
padded = np.zeros((band_len,), dtype=fft_dtype)
padded[: fft_seg.size] = fft_seg
fft_seg = padded
window = np.hanning(band_len).astype(np.float32)
band_dtype = np.complex64 if np.iscomplexobj(fft_seg) else np.float32
band = np.nan_to_num(fft_seg, nan=0.0).astype(band_dtype, copy=False) * window
spectrum = np.zeros((int(fft_len),), dtype=band_dtype)
spectrum[pos_idx] = band
spectrum[neg_idx] = np.conj(band[::-1]) if np.iscomplexobj(band) else band[::-1]
return spectrum
def build_positive_only_centered_ifft_spectrum(
sweep: np.ndarray,
freqs: Optional[np.ndarray],
fft_len: int = FFT_LEN,
) -> Optional[np.ndarray]:
"""Build a centered spectrum with zeros from -f_max to +f_min."""
if fft_len <= 0:
return None
bounds = _finite_freq_bounds(freqs)
if bounds is None:
f_min = float(SWEEP_FREQ_MIN_GHZ)
f_max = float(SWEEP_FREQ_MAX_GHZ)
else:
f_min, f_max = bounds
freq_axis = np.linspace(-f_max, f_max, int(fft_len), dtype=np.float64)
pos_idx = np.flatnonzero(freq_axis >= f_min)
band_len = int(pos_idx.size)
if band_len <= 1:
return None
prepared = prepare_fft_segment(sweep, freqs, fft_len=band_len)
if prepared is None:
return None
fft_seg, take_fft = prepared
if take_fft != band_len:
fft_dtype = np.complex64 if np.iscomplexobj(fft_seg) else np.float32
fft_seg = np.asarray(fft_seg[:band_len], dtype=fft_dtype)
if fft_seg.size < band_len:
padded = np.zeros((band_len,), dtype=fft_dtype)
padded[: fft_seg.size] = fft_seg
fft_seg = padded
window = np.hanning(band_len).astype(np.float32)
band_dtype = np.complex64 if np.iscomplexobj(fft_seg) else np.float32
band = np.nan_to_num(fft_seg, nan=0.0).astype(band_dtype, copy=False) * window
spectrum = np.zeros((int(fft_len),), dtype=band_dtype)
spectrum[pos_idx] = band
return spectrum
def build_positive_only_exact_centered_ifft_spectrum(
sweep: np.ndarray,
freqs: Optional[np.ndarray],
*,
max_shift_len: Optional[int] = None,
) -> Optional[np.ndarray]:
"""Build centered spectrum exactly as zeros[-f_max..+f_min) + measured positive band."""
prepared = _extract_positive_exact_band(sweep, freqs)
if prepared is None:
return None
freq_band, sweep_band, _f_max, _df_ghz = prepared
normalized = _normalize_positive_exact_band(
freq_band,
sweep_band,
max_shift_len=max_shift_len,
)
if normalized is None:
return None
freq_band, sweep_band, f_max, df_ghz = normalized
f_shift = np.arange(-f_max, f_max + (0.5 * df_ghz), df_ghz, dtype=np.float64)
if f_shift.size <= 1:
return None
band_dtype = np.complex64 if np.iscomplexobj(sweep_band) else np.float32
band = np.nan_to_num(np.asarray(sweep_band, dtype=band_dtype), nan=0.0)
spectrum = np.zeros((int(f_shift.size),), dtype=band_dtype)
idx = np.round((freq_band - f_shift[0]) / df_ghz).astype(np.int64)
idx = np.clip(idx, 0, spectrum.size - 1)
spectrum[idx] = band
return spectrum
def fft_mag_to_db(mag: np.ndarray) -> np.ndarray:
"""Convert magnitude to dB with safe zero handling."""
mag_arr = np.asarray(mag, dtype=np.float32)
safe_mag = np.maximum(mag_arr, 0.0)
return (20.0 * np.log10(safe_mag + 1e-9)).astype(np.float32, copy=False)
def _compute_fft_complex_row_direct(
sweep: np.ndarray,
freqs: Optional[np.ndarray],
bins: int,
) -> np.ndarray:
prepared = prepare_fft_segment(sweep, freqs, fft_len=FFT_LEN)
if prepared is None:
return np.full((bins,), np.nan + 0j, dtype=np.complex64)
fft_seg, take_fft = prepared
fft_in = np.zeros((FFT_LEN,), dtype=np.complex64)
window = np.hanning(take_fft).astype(np.float32)
fft_in[:take_fft] = np.asarray(fft_seg, dtype=np.complex64) * window
spec = np.fft.ifft(fft_in)
return _fit_complex_bins(spec, bins)
def _normalize_fft_mode(mode: str | None, symmetric: Optional[bool]) -> str:
if symmetric is not None:
return "symmetric" if symmetric else "direct"
normalized = str(mode or "symmetric").strip().lower()
if normalized in {"direct", "ordinary", "normal"}:
return "direct"
if normalized in {"symmetric", "sym", "mirror"}:
return "symmetric"
if normalized in {"positive_only", "positive-centered", "positive_centered", "zero_left"}:
return "positive_only"
if normalized in {"positive_only_exact", "positive-centered-exact", "positive_centered_exact", "zero_left_exact"}:
return "positive_only_exact"
raise ValueError(f"Unsupported FFT mode: {mode!r}")
def compute_fft_complex_row(
sweep: np.ndarray,
freqs: Optional[np.ndarray],
bins: int,
*,
mode: str = "symmetric",
symmetric: Optional[bool] = None,
) -> np.ndarray:
"""Compute a complex FFT/IFFT row on the distance axis."""
if bins <= 0:
return np.zeros((0,), dtype=np.complex64)
fft_mode = _normalize_fft_mode(mode, symmetric)
if fft_mode == "direct":
return _compute_fft_complex_row_direct(sweep, freqs, bins)
if fft_mode == "positive_only":
spectrum_centered = build_positive_only_centered_ifft_spectrum(sweep, freqs, fft_len=FFT_LEN)
elif fft_mode == "positive_only_exact":
spectrum_centered = build_positive_only_exact_centered_ifft_spectrum(
sweep,
freqs,
max_shift_len=bins,
)
else:
spectrum_centered = build_symmetric_ifft_spectrum(sweep, freqs, fft_len=FFT_LEN)
if spectrum_centered is None:
return np.full((bins,), np.nan + 0j, dtype=np.complex64)
spec = np.fft.ifft(np.fft.ifftshift(np.asarray(spectrum_centered, dtype=np.complex64)))
return _fit_complex_bins(spec, bins)
def compute_fft_mag_row(
sweep: np.ndarray,
freqs: Optional[np.ndarray],
bins: int,
*,
mode: str = "symmetric",
symmetric: Optional[bool] = None,
) -> np.ndarray:
"""Compute a linear FFT magnitude row."""
complex_row = compute_fft_complex_row(sweep, freqs, bins, mode=mode, symmetric=symmetric)
return np.abs(complex_row).astype(np.float32, copy=False)
def compute_fft_row(
sweep: np.ndarray,
freqs: Optional[np.ndarray],
bins: int,
*,
mode: str = "symmetric",
symmetric: Optional[bool] = None,
) -> np.ndarray:
"""Compute a dB FFT row."""
return fft_mag_to_db(compute_fft_mag_row(sweep, freqs, bins, mode=mode, symmetric=symmetric))
def compute_distance_axis(
freqs: Optional[np.ndarray],
bins: int,
*,
mode: str = "symmetric",
symmetric: Optional[bool] = None,
) -> np.ndarray:
"""Compute the one-way distance axis for IFFT output."""
if bins <= 0:
return np.zeros((0,), dtype=np.float64)
fft_mode = _normalize_fft_mode(mode, symmetric)
if fft_mode == "positive_only_exact":
geometry = _resolve_positive_only_exact_geometry(freqs, max_shift_len=bins)
if geometry is None:
return np.arange(bins, dtype=np.float64)
n_shift, df_hz = geometry
if (not np.isfinite(df_hz)) or df_hz <= 0.0 or n_shift <= 0:
return np.arange(bins, dtype=np.float64)
step_m = C_M_S / (2.0 * float(n_shift) * df_hz)
return np.arange(bins, dtype=np.float64) * step_m
if fft_mode in {"symmetric", "positive_only"}:
bounds = _finite_freq_bounds(freqs)
if bounds is None:
f_max = float(SWEEP_FREQ_MAX_GHZ)
else:
_, f_max = bounds
df_ghz = (2.0 * f_max) / max(1, FFT_LEN - 1)
else:
if freqs is None:
return np.arange(bins, dtype=np.float64)
freq_arr = np.asarray(freqs, dtype=np.float64)
finite = freq_arr[np.isfinite(freq_arr)]
if finite.size < 2:
return np.arange(bins, dtype=np.float64)
df_ghz = float((finite[-1] - finite[0]) / max(1, finite.size - 1))
df_hz = abs(df_ghz) * 1e9
if not np.isfinite(df_hz) or df_hz <= 0.0:
return np.arange(bins, dtype=np.float64)
step_m = C_M_S / (2.0 * FFT_LEN * df_hz)
return np.arange(bins, dtype=np.float64) * step_m
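The distance step used by `compute_distance_axis` is the standard one-way relation step = c / (2 · N · Δf). A small numeric sketch with hypothetical sweep parameters (the constants stand in for `C_M_S` and `FFT_LEN`; the 6 GHz edge is illustrative):

```python
import numpy as np

C_M_S = 299_792_458.0  # speed of light, m/s
FFT_LEN = 4096         # hypothetical centered-spectrum length

# Hypothetical sweep up to 6 GHz; the centered grid spans [-f_max, +f_max],
# so the bin spacing is (2 * f_max) / (FFT_LEN - 1).
f_max_hz = 6e9
df_hz = (2.0 * f_max_hz) / (FFT_LEN - 1)

# One-way distance per IFFT bin, then the axis for the first 256 bins.
step_m = C_M_S / (2.0 * FFT_LEN * df_hz)
distance_m = np.arange(256, dtype=np.float64) * step_m
```

With these numbers the resolution comes out near a centimeter per bin, which is why the axis is scaled rather than left as raw bin indices.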

@ -0,0 +1,71 @@
"""Formatting and display-range helpers."""
from __future__ import annotations
from typing import Any, Mapping, Optional, Tuple
import numpy as np
def format_status_kv(data: Mapping[str, Any]) -> str:
"""Convert status metrics into a compact single-line representation."""
def _fmt(value: Any) -> str:
if value is None:
return "NA"
try:
f_value = float(value)
except Exception:
return str(value)
if not np.isfinite(f_value):
return "nan"
if abs(f_value) >= 1000 or (0 < abs(f_value) < 0.01):
return f"{f_value:.3g}"
return f"{f_value:.3f}".rstrip("0").rstrip(".")
return " ".join(f"{key}:{_fmt(value)}" for key, value in data.items())
def parse_spec_clip(spec: Optional[str]) -> Optional[Tuple[float, float]]:
"""Parse a waterfall percentile clip specification."""
if not spec:
return None
value = str(spec).strip().lower()
if value in ("off", "none", "no"):
return None
try:
p0, p1 = value.replace(";", ",").split(",")
low = float(p0)
high = float(p1)
if not (0.0 <= low < high <= 100.0):
return None
return (low, high)
except Exception:
return None
def compute_auto_ylim(*series_list: Optional[np.ndarray]) -> Optional[Tuple[float, float]]:
"""Compute a common Y-range with a small padding."""
y_min: Optional[float] = None
y_max: Optional[float] = None
for series in series_list:
if series is None:
continue
arr = np.asarray(series)
if arr.size == 0:
continue
finite = arr[np.isfinite(arr)]
if finite.size == 0:
continue
cur_min = float(np.min(finite))
cur_max = float(np.max(finite))
y_min = cur_min if y_min is None else min(y_min, cur_min)
y_max = cur_max if y_max is None else max(y_max, cur_max)
if y_min is None or y_max is None:
return None
if y_min == y_max:
pad = max(1.0, abs(y_min) * 0.05)
else:
pad = 0.05 * (y_max - y_min)
return (y_min - pad, y_max + pad)
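The contract of `parse_spec_clip` ("low,high" percentiles, `;` accepted as separator, `off`/`none`/`no` or empty disabling clipping, invalid input yielding `None`) can be exercised with a self-contained copy of the same logic:

```python
from typing import Optional, Tuple


def parse_clip(spec: Optional[str]) -> Optional[Tuple[float, float]]:
    # Same contract as parse_spec_clip above; any invalid input yields None.
    if not spec:
        return None
    value = str(spec).strip().lower()
    if value in ("off", "none", "no"):
        return None
    try:
        p0, p1 = value.replace(";", ",").split(",")
        low, high = float(p0), float(p1)
    except Exception:
        return None
    return (low, high) if 0.0 <= low < high <= 100.0 else None


assert parse_clip("1,99") == (1.0, 99.0)   # comma-separated percentiles
assert parse_clip("5;95") == (5.0, 95.0)   # semicolon also accepted
assert parse_clip("off") is None           # clipping disabled
assert parse_clip("99,1") is None          # low must be below high
```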

@ -1,300 +0,0 @@
"""Convert a sweep into an IFFT depth profile (meters).
Supports several modes for reconstructing the complex spectrum before the IFFT:
- ``arccos``: phi = arccos(x), continuous unwrap, z = exp(1j*phi)
- ``diff``: x ~= cos(phi), diff(x) -> sin(phi), z = cos + 1j*sin (projected onto the unit circle)
"""
from __future__ import annotations
import logging
from typing import Optional
import numpy as np
from rfg_adc_plotter.constants import (
FREQ_MAX_GHZ,
FREQ_MIN_GHZ,
FREQ_SPAN_GHZ,
IFFT_LEN,
SPEED_OF_LIGHT_M_S,
)
logger = logging.getLogger(__name__)
_EPS = 1e-12
_TWO_PI = float(2.0 * np.pi)
_VALID_COMPLEX_MODES = {"arccos", "diff"}
def _fallback_depth_response(
size: int,
values: Optional[np.ndarray] = None,
) -> tuple[np.ndarray, np.ndarray]:
"""Safe fallback for GUI/ring consumers: always returns a non-zero length."""
n = max(1, int(size))
depth = np.linspace(0.0, 1.0, n, dtype=np.float32)
if values is None:
return depth, np.zeros((n,), dtype=np.float32)
arr = np.asarray(values)
if arr.size == 0:
return depth, np.zeros((n,), dtype=np.float32)
if np.iscomplexobj(arr):
src = np.abs(arr)
else:
src = np.abs(np.nan_to_num(arr, nan=0.0, posinf=0.0, neginf=0.0))
src = np.asarray(src, dtype=np.float32).ravel()
out = np.zeros((n,), dtype=np.float32)
take = min(n, src.size)
if take > 0:
out[:take] = src[:take]
return depth, out
def _normalize_complex_mode(mode: str) -> str:
m = str(mode).strip().lower()
if m not in _VALID_COMPLEX_MODES:
raise ValueError(f"Invalid complex reconstruction mode: {mode!r}")
return m
def build_ifft_time_axis_ns() -> np.ndarray:
"""Legacy helper: the old fixed-length IFFT time axis in nanoseconds."""
return (
np.arange(IFFT_LEN, dtype=np.float64) / (FREQ_SPAN_GHZ * 1e9) * 1e9
).astype(np.float32)
def build_frequency_axis_hz(sweep_width: int) -> np.ndarray:
"""Build the frequency grid (Hz) for the current sweep length."""
n = int(sweep_width)
if n <= 0:
return np.zeros((0,), dtype=np.float64)
if n == 1:
return np.array([FREQ_MIN_GHZ * 1e9], dtype=np.float64)
return np.linspace(FREQ_MIN_GHZ * 1e9, FREQ_MAX_GHZ * 1e9, n, dtype=np.float64)
def normalize_trace_unit_range(x: np.ndarray) -> np.ndarray:
"""Signed normalization by max(abs(.)) into roughly [-1, 1]."""
arr = np.asarray(x, dtype=np.float64).ravel()
if arr.size == 0:
return arr
arr = np.nan_to_num(arr, nan=0.0, posinf=0.0, neginf=0.0)
amax = float(np.max(np.abs(arr)))
if (not np.isfinite(amax)) or amax <= _EPS:
return np.zeros_like(arr, dtype=np.float64)
return arr / amax
def normalize_sweep_for_phase(sweep: np.ndarray) -> np.ndarray:
"""Compatibility alias: normalize a sweep before phase reconstruction."""
return normalize_trace_unit_range(sweep)
def unwrap_arccos_phase_continuous(x_norm: np.ndarray) -> np.ndarray:
"""Continuously unwrap a phase recovered via arccos.
For each point the branches ±phi + 2πk are considered and the candidate
closest to the previous phase is chosen (nearest continuous).
"""
x = np.asarray(x_norm, dtype=np.float64).ravel()
if x.size == 0:
return np.zeros((0,), dtype=np.float64)
x = np.nan_to_num(x, nan=0.0, posinf=1.0, neginf=-1.0)
x = np.clip(x, -1.0, 1.0)
phi0 = np.arccos(x)
out = np.empty_like(phi0, dtype=np.float64)
out[0] = float(phi0[0])
for i in range(1, phi0.size):
base_phi = float(phi0[i])
prev = float(out[i - 1])
best_cand: Optional[float] = None
best_key: Optional[tuple[float, float]] = None
for sign in (1.0, -1.0):
base = sign * base_phi
k_center = int(np.round((prev - base) / _TWO_PI))
for k in (k_center - 1, k_center, k_center + 1):
cand = base + _TWO_PI * float(k)
step = abs(cand - prev)
# Tie-break: on an equal step, prefer the larger candidate.
key = (step, -cand)
if best_key is None or key < best_key:
best_key = key
best_cand = cand
out[i] = prev if best_cand is None else float(best_cand)
return out
def reconstruct_complex_spectrum_arccos(sweep: np.ndarray) -> np.ndarray:
"""arccos mode: cos(phi) -> phi -> exp(i*phi)."""
x_norm = normalize_trace_unit_range(sweep)
if x_norm.size == 0:
return np.zeros((0,), dtype=np.complex128)
phi = unwrap_arccos_phase_continuous(np.clip(x_norm, -1.0, 1.0))
return np.exp(1j * phi).astype(np.complex128, copy=False)
def reconstruct_complex_spectrum_diff(sweep: np.ndarray) -> np.ndarray:
"""diff mode: x ~= cos(phi), diff(x) -> sin(phi), z = cos + i*sin projected onto the unit circle."""
cos_phi = normalize_trace_unit_range(sweep)
if cos_phi.size == 0:
return np.zeros((0,), dtype=np.complex128)
cos_phi = np.clip(cos_phi, -1.0, 1.0)
if cos_phi.size < 2:
sin_est = np.zeros_like(cos_phi, dtype=np.float64)
else:
d = np.gradient(cos_phi)
sin_est = np.clip(normalize_trace_unit_range(d), -1.0, 1.0)
z = cos_phi.astype(np.complex128, copy=False) + 1j * sin_est.astype(np.complex128, copy=False)
mag = np.abs(z)
z_unit = np.ones_like(z, dtype=np.complex128)
mask = mag > _EPS
if np.any(mask):
z_unit[mask] = z[mask] / mag[mask]
return z_unit
def reconstruct_complex_spectrum_from_real_trace(
sweep: np.ndarray,
*,
complex_mode: str = "arccos",
) -> np.ndarray:
"""Reconstruct a complex spectrum from a real sweep using the selected mode."""
mode = _normalize_complex_mode(complex_mode)
if mode == "arccos":
return reconstruct_complex_spectrum_arccos(sweep)
if mode == "diff":
return reconstruct_complex_spectrum_diff(sweep)
raise ValueError(f"Unsupported complex reconstruction mode: {complex_mode!r}")
def perform_ifft_depth_response(
s_array: np.ndarray,
frequencies_hz: np.ndarray,
*,
axis: str = "abs",
start_hz: float | None = None,
stop_hz: float | None = None,
) -> tuple[np.ndarray, np.ndarray]:
"""Frequency-to-depth conversion with zero-padding and frequency offset handling."""
try:
s_in = np.asarray(s_array, dtype=np.complex128).ravel()
f_in = np.asarray(frequencies_hz, dtype=np.float64).ravel()
m = min(s_in.size, f_in.size)
if m < 2:
raise ValueError("Not enough points")
s = s_in[:m]
f = f_in[:m]
lo = float(FREQ_MIN_GHZ * 1e9 if start_hz is None else start_hz)
hi = float(FREQ_MAX_GHZ * 1e9 if stop_hz is None else stop_hz)
if hi < lo:
lo, hi = hi, lo
mask = (
np.isfinite(f)
& np.isfinite(np.real(s))
& np.isfinite(np.imag(s))
& (f >= lo)
& (f <= hi)
)
f = f[mask]
s = s[mask]
n = int(f.size)
if n < 2:
raise ValueError("Not enough frequency points after filtering")
if np.any(np.diff(f) <= 0.0):
raise ValueError("Non-increasing frequency grid")
df = float((f[-1] - f[0]) / (n - 1))
if not np.isfinite(df) or df <= 0.0:
raise ValueError("Invalid frequency step")
k0 = int(np.round(float(f[0]) / df))
if k0 < 0:
raise ValueError("Negative frequency offset index")
min_len = int(2 * (k0 + n - 1))
if min_len <= 0:
raise ValueError("Invalid FFT length")
n_fft = 1 << int(np.ceil(np.log2(float(min_len))))
dt = 1.0 / (n_fft * df)
t_sec = np.arange(n_fft, dtype=np.float64) * dt
h = np.zeros((n_fft,), dtype=np.complex128)
end = k0 + n
if end > n_fft:
raise ValueError("Spectrum placement exceeds FFT buffer")
h[k0:end] = s
y = np.fft.ifft(h)
depth_m = t_sec * SPEED_OF_LIGHT_M_S
axis_name = str(axis).strip().lower()
if axis_name == "abs":
y_fin = np.abs(y)
elif axis_name == "real":
y_fin = np.real(y)
elif axis_name == "imag":
y_fin = np.imag(y)
elif axis_name == "phase":
y_fin = np.angle(y)
else:
raise ValueError(f"Invalid axis parameter: {axis!r}")
return depth_m.astype(np.float32, copy=False), np.asarray(y_fin, dtype=np.float32)
except Exception as exc: # noqa: BLE001
logger.error("IFFT depth response failed: %r", exc)
return _fallback_depth_response(np.asarray(s_array).size, np.asarray(s_array))
def compute_ifft_profile_from_sweep(
sweep: Optional[np.ndarray],
*,
complex_mode: str = "arccos",
) -> tuple[np.ndarray, np.ndarray]:
"""Высокоуровневый pipeline: sweep -> complex spectrum -> IFFT(abs) depth profile."""
if sweep is None:
return _fallback_depth_response(1, None)
try:
s = np.asarray(sweep, dtype=np.float64).ravel()
if s.size == 0:
return _fallback_depth_response(1, None)
freqs_hz = build_frequency_axis_hz(s.size)
s_complex = reconstruct_complex_spectrum_from_real_trace(s, complex_mode=complex_mode)
depth_m, y = perform_ifft_depth_response(s_complex, freqs_hz, axis="abs")
n = min(depth_m.size, y.size)
if n <= 0:
return _fallback_depth_response(s.size, s)
y_floor = np.maximum(y[:n], 1e-12)  # floor so a downstream log10 stays finite in the waterfall view
return depth_m[:n].astype(np.float32, copy=False), y_floor.astype(np.float32, copy=False)
except Exception as exc: # noqa: BLE001
logger.error("compute_ifft_profile_from_sweep failed: %r", exc)
return _fallback_depth_response(np.asarray(sweep).size if sweep is not None else 1, sweep)
def compute_ifft_db_profile(sweep: Optional[np.ndarray]) -> np.ndarray:
"""Legacy wrapper (deprecated name): возвращает линейный |IFFT| профиль."""
_depth_m, y = compute_ifft_profile_from_sweep(sweep, complex_mode="arccos")
return y
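The zero-padding and frequency-offset handling in `perform_ifft_depth_response` can be exercised in isolation. The sketch below is self-contained; the grid, delay, and speed-of-light constant are illustrative stand-ins, not the module's `FREQ_MIN_GHZ`/`SPEED_OF_LIGHT_M_S` values. It places a band-limited spectrum at its absolute bin offset `k0 = round(f0 / df)` inside a power-of-two buffer, so the time (depth) axis keeps its physical scale:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s (illustrative constant)

f = np.linspace(2.0e9, 4.0e9, 201)          # uniform grid: df = 10 MHz
tau = 3.0 / C                               # a reflector at 3 m on the depth axis
s = np.exp(-2j * np.pi * f * tau)           # ideal single-reflector spectrum

df = (f[-1] - f[0]) / (f.size - 1)
k0 = int(round(f[0] / df))                  # absolute offset of the first bin
n_fft = 1 << int(np.ceil(np.log2(2 * (k0 + f.size - 1))))
h = np.zeros(n_fft, dtype=np.complex128)
h[k0:k0 + f.size] = s                       # place the band at its true offset
y = np.abs(np.fft.ifft(h))
depth_m = np.arange(n_fft) / (n_fft * df) * C
print(depth_m[np.argmax(y)])                # peak lands close to 3.0 m
```

Dropping the `k0` offset (placing the band at bin 0) would still produce a peak, but at the wrong depth, which is why the placement step matters.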

View File

@ -0,0 +1,230 @@
"""Sweep normalization helpers."""
from __future__ import annotations
from typing import Tuple
import numpy as np
def normalize_sweep_simple(raw: np.ndarray, calib: np.ndarray) -> np.ndarray:
"""Simple element-wise raw/calib normalization."""
width = min(raw.size, calib.size)
if width <= 0:
return raw
out = np.full_like(raw, np.nan, dtype=np.float32)
with np.errstate(divide="ignore", invalid="ignore"):
out[:width] = raw[:width] / calib[:width]
out = np.nan_to_num(out, nan=np.nan, posinf=np.nan, neginf=np.nan)  # map ±inf from division by zero to NaN
return out
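A minimal standalone illustration of the zero handling above (toy arrays, not the module's data): division artifacts (±inf) are mapped back to NaN rather than left as huge numbers.

```python
import numpy as np

raw = np.array([2.0, 4.0, 6.0, 8.0], dtype=np.float32)
calib = np.array([1.0, 2.0, 0.0, 4.0], dtype=np.float32)  # note the zero
with np.errstate(divide="ignore", invalid="ignore"):
    out = raw / calib
# keep NaN as NaN, but turn ±inf (from the zero denominator) into NaN
out = np.nan_to_num(out, nan=np.nan, posinf=np.nan, neginf=np.nan)
print(out)  # [ 2.  2. nan  2.]
```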
def build_calib_envelopes(calib: np.ndarray) -> Tuple[np.ndarray, np.ndarray]:
"""Estimate smooth lower/upper envelopes from local extrema."""
n = int(calib.size)
if n <= 0:
empty = np.zeros((0,), dtype=np.float32)
return empty, empty
values = np.asarray(calib, dtype=np.float32)
finite = np.isfinite(values)
if not np.any(finite):
zeros = np.zeros_like(values, dtype=np.float32)
return zeros, zeros
if not np.all(finite):
x = np.arange(n, dtype=np.float32)
values = values.copy()
values[~finite] = np.interp(x[~finite], x[finite], values[finite]).astype(np.float32)
if n < 3:
return values.copy(), values.copy()
x = np.arange(n, dtype=np.float32)
def _moving_average(series: np.ndarray, window: int) -> np.ndarray:
width = max(1, int(window))
if width <= 1 or series.size <= 2:
return np.asarray(series, dtype=np.float32).copy()
if width % 2 == 0:
width += 1
pad = width // 2
padded = np.pad(np.asarray(series, dtype=np.float32), (pad, pad), mode="edge")
kernel = np.full((width,), 1.0 / float(width), dtype=np.float32)
return np.convolve(padded, kernel, mode="valid").astype(np.float32)
def _smooth_extrema_envelope(use_max: bool) -> np.ndarray:
step = max(3, n // 32)
node_idx_list = []
for start in range(0, n, step):
stop = min(n, start + step)
segment = values[start:stop]
idx_rel = int(np.argmax(segment) if use_max else np.argmin(segment))
node_idx_list.append(start + idx_rel)
extrema_idx = np.unique(np.asarray(node_idx_list, dtype=np.int64))
if extrema_idx.size == 0:
extrema_idx = np.asarray([int(np.argmax(values) if use_max else np.argmin(values))], dtype=np.int64)
node_idx = np.unique(np.concatenate(([0], extrema_idx, [n - 1]))).astype(np.int64)
node_vals = values[node_idx].astype(np.float32, copy=True)
node_vals[0] = float(values[extrema_idx[0]])
node_vals[-1] = float(values[extrema_idx[-1]])
node_vals = _moving_average(node_vals, 3)
node_vals[0] = float(values[extrema_idx[0]])
node_vals[-1] = float(values[extrema_idx[-1]])
envelope = np.interp(x, node_idx.astype(np.float32), node_vals).astype(np.float32)
smooth_window = max(1, n // 64)
if smooth_window > 1:
envelope = _moving_average(envelope, smooth_window)
return envelope
upper = _smooth_extrema_envelope(use_max=True)
lower = _smooth_extrema_envelope(use_max=False)
swap = lower > upper
if np.any(swap):
tmp = upper[swap].copy()
upper[swap] = lower[swap]
lower[swap] = tmp
return lower, upper
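The blockwise-extrema idea behind `build_calib_envelopes` can be sketched standalone on a synthetic signal (the block-size rule mirrors the `n // 32` choice above; smoothing is omitted for brevity): take one extremum per block, then interpolate through those nodes.

```python
import numpy as np

n = 256
x = np.arange(n, dtype=np.float64)
y = (1.0 + 0.5 * np.sin(2 * np.pi * x / n)) * np.sin(2 * np.pi * x / 16)

step = max(3, n // 32)                       # same block-size rule as above
nodes = [k + int(np.argmax(y[k:k + step])) for k in range(0, n, step)]
nodes = np.unique(np.concatenate(([0], nodes, [n - 1])))
upper = np.interp(x, nodes, y[nodes])        # piecewise-linear upper envelope
```

The lower envelope is the same construction with `argmin`; the production code additionally smooths the node values and the interpolated curve with a moving average.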
def normalize_sweep_projector(raw: np.ndarray, calib: np.ndarray) -> np.ndarray:
"""Project raw values between calibration envelopes into [-1000, 1000]."""
width = min(raw.size, calib.size)
if width <= 0:
return raw
out = np.full_like(raw, np.nan, dtype=np.float32)
raw_seg = np.asarray(raw[:width], dtype=np.float32)
lower, upper = build_calib_envelopes(np.asarray(calib[:width], dtype=np.float32))
span = upper - lower
finite_span = span[np.isfinite(span) & (span > 0)]
if finite_span.size > 0:
eps = max(float(np.median(finite_span)) * 1e-6, 1e-9)
else:
eps = 1e-9
valid = (
np.isfinite(raw_seg)
& np.isfinite(lower)
& np.isfinite(upper)
& (span > eps)
)
if np.any(valid):
proj = np.empty_like(raw_seg, dtype=np.float32)
proj[valid] = ((2.0 * (raw_seg[valid] - lower[valid]) / span[valid]) - 1.0) * 1000.0
proj[valid] = np.clip(proj[valid], -1000.0, 1000.0)
proj[~valid] = np.nan
out[:width] = proj
return out
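The projection formula in isolation (toy envelope bounds, assumed for illustration): a value on the lower envelope maps to -1000, the midpoint to 0, the upper envelope to +1000, and anything outside is clipped.

```python
import numpy as np

lower, upper = -2.0, 6.0                     # illustrative envelope bounds
raw = np.array([-2.0, 2.0, 6.0, 10.0])
span = upper - lower
proj = np.clip(((2.0 * (raw - lower) / span) - 1.0) * 1000.0, -1000.0, 1000.0)
print(proj)  # [-1000.     0.  1000.  1000.]
```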
def resample_envelope(envelope: np.ndarray, width: int) -> np.ndarray:
"""Resample an envelope to the target sweep width on the index axis."""
target_width = int(width)
if target_width <= 0:
return np.zeros((0,), dtype=np.float32)
values = np.asarray(envelope, dtype=np.float32).reshape(-1)
if values.size == 0:
return np.full((target_width,), np.nan, dtype=np.float32)
if values.size == target_width:
return values.astype(np.float32, copy=True)
x_src = np.arange(values.size, dtype=np.float32)
finite = np.isfinite(values)
if not np.any(finite):
return np.full((target_width,), np.nan, dtype=np.float32)
if int(np.count_nonzero(finite)) == 1:
fill = float(values[finite][0])
return np.full((target_width,), fill, dtype=np.float32)
x_dst = np.linspace(0.0, float(values.size - 1), target_width, dtype=np.float32)
return np.interp(x_dst, x_src[finite], values[finite]).astype(np.float32)
def fit_complex_calibration_to_width(calib: np.ndarray, width: int) -> np.ndarray:
"""Fit a complex calibration curve to the signal width via trim/pad with ones."""
target_width = int(width)
if target_width <= 0:
return np.zeros((0,), dtype=np.complex64)
values = np.asarray(calib, dtype=np.complex64).reshape(-1)
if values.size <= 0:
return np.ones((target_width,), dtype=np.complex64)
if values.size == target_width:
return values.astype(np.complex64, copy=True)
if values.size > target_width:
return np.asarray(values[:target_width], dtype=np.complex64)
out = np.ones((target_width,), dtype=np.complex64)
out[: values.size] = values
return out
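A toy view of the trim/pad contract (values are illustrative): padding with ones means the padded tail divides as a no-op when the fitted curve is later used as a denominator.

```python
import numpy as np

calib = np.array([2 + 0j, 4 + 0j], dtype=np.complex64)
width = 4
out = np.ones(width, dtype=np.complex64)     # pad value 1+0j: neutral divisor
out[:calib.size] = calib
print(out)  # [2.+0.j 4.+0.j 1.+0.j 1.+0.j]
```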
def normalize_by_complex_calibration(
signal: np.ndarray,
calib: np.ndarray,
eps: float = 1e-9,
) -> np.ndarray:
"""Normalize complex signal by a complex calibration curve with zero protection."""
sig_arr = np.asarray(signal, dtype=np.complex64).reshape(-1)
if sig_arr.size <= 0:
return sig_arr.copy()
calib_fit = fit_complex_calibration_to_width(calib, sig_arr.size)
eps_abs = max(abs(float(eps)), 1e-12)
denom = np.asarray(calib_fit, dtype=np.complex64).copy()
safe_denom = (
np.isfinite(denom.real)
& np.isfinite(denom.imag)
& (np.abs(denom) >= eps_abs)
)
if np.any(~safe_denom):
denom[~safe_denom] = np.complex64(1.0 + 0.0j)
out = np.full(sig_arr.shape, np.nan + 0j, dtype=np.complex64)
valid_sig = np.isfinite(sig_arr.real) & np.isfinite(sig_arr.imag)
if np.any(valid_sig):
with np.errstate(divide="ignore", invalid="ignore"):
out[valid_sig] = sig_arr[valid_sig] / denom[valid_sig]
out_real = np.nan_to_num(out.real, nan=np.nan, posinf=np.nan, neginf=np.nan)
out_imag = np.nan_to_num(out.imag, nan=np.nan, posinf=np.nan, neginf=np.nan)
return (out_real + (1j * out_imag)).astype(np.complex64, copy=False)
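The zero protection above, reduced to a minimal sketch (toy arrays): denominators whose modulus falls below `eps` are swapped for `1+0j` before dividing, so the division never blows up.

```python
import numpy as np

sig = np.array([1 + 1j, 2 + 0j, 3 - 3j], dtype=np.complex64)
calib = np.array([1 + 0j, 0 + 0j, 1 - 1j], dtype=np.complex64)
eps = 1e-9
denom = calib.copy()
denom[np.abs(denom) < eps] = np.complex64(1.0)  # neutralize near-zero bins
out = sig / denom
print(out)  # [1.+1.j 2.+0.j 3.+0.j]  — note (3-3j)/(1-1j) = 3
```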
def normalize_by_envelope(raw: np.ndarray, envelope: np.ndarray) -> np.ndarray:
"""Normalize a sweep by an envelope with safe resampling and zero protection."""
raw_in = np.asarray(raw).reshape(-1)
raw_dtype = np.complex64 if np.iscomplexobj(raw_in) else np.float32
raw_arr = np.asarray(raw_in, dtype=raw_dtype).reshape(-1)
if raw_arr.size == 0:
return raw_arr.copy()
env = resample_envelope(envelope, raw_arr.size)
out = np.full(raw_arr.shape, np.nan + 0j if np.iscomplexobj(raw_arr) else np.nan, dtype=raw_dtype)
den_eps = np.float32(1e-9)
valid = np.isfinite(raw_arr) & np.isfinite(env)
if np.any(valid):
with np.errstate(divide="ignore", invalid="ignore"):
denom = env[valid] + np.where(env[valid] >= 0.0, den_eps, -den_eps)
out[valid] = raw_arr[valid] / denom
if np.iscomplexobj(out):
out_real = np.nan_to_num(out.real, nan=np.nan, posinf=np.nan, neginf=np.nan)
out_imag = np.nan_to_num(out.imag, nan=np.nan, posinf=np.nan, neginf=np.nan)
return (out_real + (1j * out_imag)).astype(np.complex64, copy=False)
return np.nan_to_num(out, nan=np.nan, posinf=np.nan, neginf=np.nan)
def normalize_by_calib(raw: np.ndarray, calib: np.ndarray, norm_type: str) -> np.ndarray:
"""Apply the selected normalization method."""
norm = str(norm_type).strip().lower()
if norm == "simple":
return normalize_sweep_simple(raw, calib)
return normalize_sweep_projector(raw, calib)

View File

@ -1,149 +0,0 @@
"""Алгоритмы нормировки свипов по калибровочной кривой."""
from typing import Tuple
import numpy as np
def normalize_simple(raw: np.ndarray, calib: np.ndarray) -> np.ndarray:
"""Простая нормировка: поэлементное деление raw/calib."""
w = min(raw.size, calib.size)
if w <= 0:
return raw
out = np.full_like(raw, np.nan, dtype=np.float32)
with np.errstate(divide="ignore", invalid="ignore"):
out[:w] = raw[:w] / calib[:w]
out = np.nan_to_num(out, nan=np.nan, posinf=np.nan, neginf=np.nan)
return out
def build_calib_envelopes(calib: np.ndarray) -> Tuple[np.ndarray, np.ndarray]:
"""Оценить огибающую по модулю сигнала.
Возвращает (lower, upper) = (-envelope, +envelope), где envelope —
интерполяция через локальные максимумы |calib|.
"""
n = int(calib.size)
if n <= 0:
empty = np.zeros((0,), dtype=np.float32)
return empty, empty
y = np.asarray(calib, dtype=np.float32)
finite = np.isfinite(y)
if not np.any(finite):
zeros = np.zeros_like(y, dtype=np.float32)
return zeros, zeros
if not np.all(finite):
x = np.arange(n, dtype=np.float32)
y = y.copy()
y[~finite] = np.interp(x[~finite], x[finite], y[finite]).astype(np.float32)
a = np.abs(y)
if n < 3:
env = a.copy()
return -env, env
da = np.diff(a)
s = np.sign(da).astype(np.int8, copy=False)
if np.any(s == 0):
for i in range(1, s.size):
if s[i] == 0:
s[i] = s[i - 1]
for i in range(s.size - 2, -1, -1):
if s[i] == 0:
s[i] = s[i + 1]
s[s == 0] = 1
max_idx = np.where((s[:-1] > 0) & (s[1:] < 0))[0] + 1
x = np.arange(n, dtype=np.float32)
if max_idx.size == 0:
idx = np.array([0, n - 1], dtype=np.int64)
else:
idx = np.unique(np.concatenate(([0], max_idx, [n - 1]))).astype(np.int64)
env = np.interp(x, idx.astype(np.float32), a[idx]).astype(np.float32)
return -env, env
def normalize_projector(raw: np.ndarray, calib: np.ndarray) -> np.ndarray:
"""Нормировка через проекцию между огибающими калибровки в диапазон [-1000, +1000]."""
w = min(raw.size, calib.size)
if w <= 0:
return raw
out = np.full_like(raw, np.nan, dtype=np.float32)
raw_seg = np.asarray(raw[:w], dtype=np.float32)
lower, upper = build_calib_envelopes(np.asarray(calib[:w], dtype=np.float32))
span = upper - lower
finite_span = span[np.isfinite(span) & (span > 0)]
if finite_span.size > 0:
eps = max(float(np.median(finite_span)) * 1e-6, 1e-9)
else:
eps = 1e-9
valid = (
np.isfinite(raw_seg)
& np.isfinite(lower)
& np.isfinite(upper)
& (span > eps)
)
if np.any(valid):
proj = np.empty_like(raw_seg, dtype=np.float32)
proj[valid] = ((2.0 * (raw_seg[valid] - lower[valid]) / span[valid]) - 1.0) * 1000.0
proj[valid] = np.clip(proj[valid], -1000.0, 1000.0)
proj[~valid] = np.nan
out[:w] = proj
return out
def normalize_by_calib(raw: np.ndarray, calib: np.ndarray, norm_type: str) -> np.ndarray:
"""Нормировка свипа по выбранному алгоритму."""
nt = str(norm_type).strip().lower()
if nt == "simple":
return normalize_simple(raw, calib)
return normalize_projector(raw, calib)
def normalize_by_envelope(raw: np.ndarray, envelope: np.ndarray) -> np.ndarray:
"""Нормировка свипа через проекцию на огибающую из файла.
Воспроизводит логику normalize_projector: проецирует raw в [-1000, +1000]
используя готовую верхнюю огибающую (upper = envelope, lower = -envelope).
"""
w = min(raw.size, envelope.size)
if w <= 0:
return raw
out = np.full_like(raw, np.nan, dtype=np.float32)
raw_seg = np.asarray(raw[:w], dtype=np.float32)
upper = np.asarray(envelope[:w], dtype=np.float32)
lower = -upper
span = upper - lower # = 2 * upper
finite_span = span[np.isfinite(span) & (span > 0)]
if finite_span.size > 0:
eps = max(float(np.median(finite_span)) * 1e-6, 1e-9)
else:
eps = 1e-9
valid = (
np.isfinite(raw_seg)
& np.isfinite(lower)
& np.isfinite(upper)
& (span > eps)
)
if np.any(valid):
proj = np.empty_like(raw_seg, dtype=np.float32)
proj[valid] = ((2.0 * (raw_seg[valid] - lower[valid]) / span[valid]) - 1.0) * 1000.0
proj[valid] = np.clip(proj[valid], -1000.0, 1000.0)
proj[~valid] = np.nan
out[:w] = proj
return out

View File

@ -0,0 +1,209 @@
"""Peak-search helpers for FFT visualizations."""
from __future__ import annotations
from typing import Dict, List, Optional
import numpy as np
def find_peak_width_markers(xs: np.ndarray, ys: np.ndarray) -> Optional[Dict[str, float]]:
"""Find the dominant non-zero peak and its half-height width."""
x_arr = np.asarray(xs, dtype=np.float64)
y_arr = np.asarray(ys, dtype=np.float64)
valid = np.isfinite(x_arr) & np.isfinite(y_arr) & (x_arr > 0.0)
if int(np.count_nonzero(valid)) < 3:
return None
x = x_arr[valid]
y = y_arr[valid]
x_min = float(x[0])
x_max = float(x[-1])
x_span = x_max - x_min
central_mask = (x >= (x_min + 0.25 * x_span)) & (x <= (x_min + 0.75 * x_span))
if int(np.count_nonzero(central_mask)) > 0:
central_idx = np.flatnonzero(central_mask)
peak_idx = int(central_idx[int(np.argmax(y[central_mask]))])
else:
peak_idx = int(np.argmax(y))
peak_y = float(y[peak_idx])
shoulder_gap = max(1, min(8, y.size // 64 if y.size > 0 else 1))
shoulder_width = max(4, min(32, y.size // 16 if y.size > 0 else 4))
left_lo = max(0, peak_idx - shoulder_gap - shoulder_width)
left_hi = max(0, peak_idx - shoulder_gap)
right_lo = min(y.size, peak_idx + shoulder_gap + 1)
right_hi = min(y.size, right_lo + shoulder_width)
background_parts = []
if left_hi > left_lo:
background_parts.append(float(np.nanmedian(y[left_lo:left_hi])))
if right_hi > right_lo:
background_parts.append(float(np.nanmedian(y[right_lo:right_hi])))
if background_parts:
background = float(np.mean(background_parts))
else:
background = float(np.nanpercentile(y, 10))
if not np.isfinite(peak_y) or not np.isfinite(background) or peak_y <= background:
return None
half_level = background + 0.5 * (peak_y - background)
def _interp_cross(x0: float, y0: float, x1: float, y1: float) -> float:
if not (np.isfinite(x0) and np.isfinite(y0) and np.isfinite(x1) and np.isfinite(y1)):
return x1
dy = y1 - y0
if dy == 0.0:
return x1
t = (half_level - y0) / dy
t = min(1.0, max(0.0, t))
return x0 + t * (x1 - x0)
left_x = float(x[0])
for i in range(peak_idx, 0, -1):
if y[i - 1] <= half_level <= y[i]:
left_x = _interp_cross(float(x[i - 1]), float(y[i - 1]), float(x[i]), float(y[i]))
break
right_x = float(x[-1])
for i in range(peak_idx, x.size - 1):
if y[i] >= half_level >= y[i + 1]:
right_x = _interp_cross(float(x[i]), float(y[i]), float(x[i + 1]), float(y[i + 1]))
break
width = right_x - left_x
if not np.isfinite(width) or width <= 0.0:
return None
return {
"background": background,
"left": left_x,
"right": right_x,
"width": width,
"amplitude": peak_y,
}
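For a clean Gaussian peak the half-height width found this way approaches the analytic FWHM, sigma * 2*sqrt(2 ln 2) ≈ 2.355 * sigma. A synthetic check (zero background assumed, so half-height equals half of the peak amplitude):

```python
import numpy as np

x = np.linspace(0.0, 10.0, 2001)
sigma = 0.5
y = np.exp(-0.5 * ((x - 5.0) / sigma) ** 2)  # unit-amplitude Gaussian peak
half = 0.5 * y.max()                         # half-height level (background = 0)
above = x[y >= half]
width = above[-1] - above[0]
print(width)  # close to 2.355 * 0.5 ≈ 1.177
```

The production routine differs in that it estimates a nonzero background from shoulder medians and interpolates the crossings between samples.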
def rolling_median_ref(xs: np.ndarray, ys: np.ndarray, window_ghz: float) -> np.ndarray:
"""Compute a rolling median reference on a fixed-width X window."""
x = np.asarray(xs, dtype=np.float64)
y = np.asarray(ys, dtype=np.float64)
out = np.full(y.shape, np.nan, dtype=np.float64)
if x.size == 0 or y.size == 0 or x.size != y.size:
return out
width = float(window_ghz)
if not np.isfinite(width) or width <= 0.0:
return out
half = 0.5 * width
for i in range(x.size):
xi = x[i]
if not np.isfinite(xi):
continue
left = np.searchsorted(x, xi - half, side="left")
right = np.searchsorted(x, xi + half, side="right")
if right <= left:
continue
segment = y[left:right]
finite = np.isfinite(segment)
if not np.any(finite):
continue
out[i] = float(np.nanmedian(segment))
return out
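A compact equivalent of the fixed-width window (toy grid): the window is defined on the x axis, not in sample counts, so a lone spike cannot drag the median reference up.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 11)
y = x.copy()
y[5] = 10.0                                 # a spike the median ignores
half = 0.25                                 # window of width 0.5 on the x axis
ref = np.array([
    np.median(y[(x >= xi - half) & (x <= xi + half)]) for xi in x
])
print(ref[5])  # ≈ 0.6 — the spike does not leak into the reference
```

The production version uses `np.searchsorted` on the (sorted) x axis instead of boolean masks, which avoids the O(n) scan per sample.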
def find_top_peaks_over_ref(
xs: np.ndarray,
ys: np.ndarray,
ref: np.ndarray,
top_n: int = 3,
) -> List[Dict[str, float]]:
"""Find the top-N non-overlapping peaks above a reference curve."""
x = np.asarray(xs, dtype=np.float64)
y = np.asarray(ys, dtype=np.float64)
r = np.asarray(ref, dtype=np.float64)
if x.size < 3 or y.size != x.size or r.size != x.size:
return []
valid = np.isfinite(x) & np.isfinite(y) & np.isfinite(r)
if not np.any(valid):
return []
delta = np.full_like(y, np.nan, dtype=np.float64)
delta[valid] = y[valid] - r[valid]
candidates: List[int] = []
for i in range(1, x.size - 1):
if not (np.isfinite(delta[i - 1]) and np.isfinite(delta[i]) and np.isfinite(delta[i + 1])):
continue
if delta[i] <= 0.0:
continue
left_ok = delta[i] > delta[i - 1]
right_ok = delta[i] >= delta[i + 1]
alt_left_ok = delta[i] >= delta[i - 1]
alt_right_ok = delta[i] > delta[i + 1]
if (left_ok and right_ok) or (alt_left_ok and alt_right_ok):
candidates.append(i)
if not candidates:
return []
candidates.sort(key=lambda i: float(delta[i]), reverse=True)
def _interp_cross(x0: float, y0: float, x1: float, y1: float, y_cross: float) -> float:
dy = y1 - y0
if not np.isfinite(dy) or dy == 0.0:
return x1
t = (y_cross - y0) / dy
t = min(1.0, max(0.0, t))
return x0 + t * (x1 - x0)
picked: List[Dict[str, float]] = []
for idx in candidates:
peak_y = float(y[idx])
peak_ref = float(r[idx])
peak_h = float(delta[idx])
if not (np.isfinite(peak_y) and np.isfinite(peak_ref) and np.isfinite(peak_h)) or peak_h <= 0.0:
continue
half_level = peak_ref + 0.5 * peak_h
left_x = float(x[0])
for i in range(idx, 0, -1):
y0 = float(y[i - 1])
y1 = float(y[i])
if np.isfinite(y0) and np.isfinite(y1) and (y0 <= half_level <= y1):
left_x = _interp_cross(float(x[i - 1]), y0, float(x[i]), y1, half_level)
break
right_x = float(x[-1])
for i in range(idx, x.size - 1):
y0 = float(y[i])
y1 = float(y[i + 1])
if np.isfinite(y0) and np.isfinite(y1) and (y0 >= half_level >= y1):
right_x = _interp_cross(float(x[i]), y0, float(x[i + 1]), y1, half_level)
break
width = float(right_x - left_x)
if not np.isfinite(width) or width <= 0.0:
continue
overlap = False
for peak in picked:
if not (right_x <= peak["left"] or left_x >= peak["right"]):
overlap = True
break
if overlap:
continue
picked.append(
{
"x": float(x[idx]),
"peak_y": peak_y,
"ref": peak_ref,
"height": peak_h,
"left": left_x,
"right": right_x,
"width": width,
}
)
if len(picked) >= int(max(1, top_n)):
break
picked.sort(key=lambda peak: peak["x"])
return picked
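The candidate selection in `find_top_peaks_over_ref`, boiled down to a toy run: subtract the reference, keep positive local maxima of the difference, and rank by height (overlap filtering and width estimation omitted).

```python
import numpy as np

y = np.array([0.0, 1.0, 0.2, 3.0, 0.1, 2.0, 0.0])
ref = np.zeros_like(y)                      # flat reference for illustration
d = y - ref
# strict rise on the left, non-strict fall on the right, positive height
cand = [i for i in range(1, d.size - 1)
        if d[i] > d[i - 1] and d[i] >= d[i + 1] and d[i] > 0.0]
cand.sort(key=lambda i: float(d[i]), reverse=True)
print(cand[:2])  # [3, 5] — the two tallest peaks
```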

View File

@ -1,415 +0,0 @@
"""Явный pipeline предобработки свипов перед помещением в RingBuffer."""
from __future__ import annotations
from dataclasses import dataclass
import os
from typing import Optional, Tuple
import numpy as np
from rfg_adc_plotter.io.capture_reference_loader import (
CaptureParseSummary,
aggregate_capture_reference,
detect_reference_file_format,
load_capture_sweeps,
)
from rfg_adc_plotter.processing.normalizer import (
build_calib_envelopes,
normalize_by_calib,
normalize_by_envelope,
)
DEFAULT_CALIB_ENVELOPE_PATH = "calib_envelope.npy"
DEFAULT_BACKGROUND_PATH = "background.npy"
def _normalize_path(path: str) -> str:
return str(path).strip()
def _normalize_save_npy_path(path: str) -> str:
p = _normalize_path(path)
if not p:
return p
_root, ext = os.path.splitext(p)
if ext:
return p
return f"{p}.npy"
def _summary_for_npy(path: str) -> CaptureParseSummary:
return CaptureParseSummary(
path=path,
format="npy",
sweeps_total=0,
sweeps_valid=0,
channels_seen=tuple(),
dominant_width=None,
dominant_n_valid=None,
aggregation="median",
warnings=tuple(),
)
@dataclass(frozen=True)
class SweepProcessingResult:
"""Результат предобработки одного свипа."""
processed_sweep: np.ndarray
normalized_sweep: Optional[np.ndarray]
calibration_applied: bool
background_applied: bool
calibration_source: str # off|live|npy|capture
background_source: str # off|npy|capture(raw)|capture(raw->calib)
is_calibration_reference: bool
stage_trace: Tuple[str, ...]
class SweepPreprocessor:
"""Управляет калибровкой/фоном и применяет их к входному свипу."""
def __init__(
self,
norm_type: str = "projector",
calib_envelope_path: str = DEFAULT_CALIB_ENVELOPE_PATH,
background_path: str = DEFAULT_BACKGROUND_PATH,
auto_save_live_calib_envelope: bool = True,
):
self.norm_type = str(norm_type).strip().lower() or "projector"
self.calib_enabled = False
self.calib_mode = "live" # live | file
self.background_enabled = False
self.auto_save_live_calib_envelope = bool(auto_save_live_calib_envelope)
self.calib_envelope_path = _normalize_path(calib_envelope_path)
self.background_path = _normalize_path(background_path)
self.last_calib_sweep: Optional[np.ndarray] = None
self.calib_file_envelope: Optional[np.ndarray] = None
# background — in the current subtraction domain (raw or normalized); the UI uses it for preview/state
self.background: Optional[np.ndarray] = None
# raw background loaded from a capture file; converted on the fly while calibration is active
self.background_raw_capture: Optional[np.ndarray] = None
# Sources and load metadata
self.calib_external_source_type: str = "none" # none|npy|capture
self.background_source_type: str = "none" # none|npy_processed|capture_raw
self.calib_reference_summary: Optional[CaptureParseSummary] = None
self.background_reference_summary: Optional[CaptureParseSummary] = None
self.last_reference_error: str = ""
# Offline capture-parsing options (must match the live parser's UI configuration)
self.capture_fancy: bool = False
self.capture_logscale: bool = False
self.reference_aggregation_method: str = "median"
# ---- Configuration ----
def set_calib_mode(self, mode: str):
m = str(mode).strip().lower()
self.calib_mode = "file" if m == "file" else "live"
def set_calib_enabled(self, enabled: bool):
self.calib_enabled = bool(enabled)
def set_background_enabled(self, enabled: bool):
self.background_enabled = bool(enabled)
def set_capture_parse_options(self, *, fancy: Optional[bool] = None, logscale: Optional[bool] = None):
if fancy is not None:
self.capture_fancy = bool(fancy)
if logscale is not None:
self.capture_logscale = bool(logscale)
def set_calib_envelope_path(self, path: str):
p = _normalize_path(path)
if p:
if p != self.calib_envelope_path:
self.calib_file_envelope = None
if self.calib_external_source_type in ("npy", "capture"):
self.calib_external_source_type = "none"
self.calib_reference_summary = None
self.calib_envelope_path = p
def set_background_path(self, path: str):
p = _normalize_path(path)
if p:
if p != self.background_path:
self.background = None
self.background_raw_capture = None
self.background_source_type = "none"
self.background_reference_summary = None
self.background_path = p
def has_calib_envelope_file(self) -> bool:
return bool(self.calib_envelope_path) and os.path.isfile(self.calib_envelope_path)
def has_background_file(self) -> bool:
return bool(self.background_path) and os.path.isfile(self.background_path)
# ---- Loading/saving .npy ----
def _save_array(self, arr: np.ndarray, current_path: str, path: Optional[str]) -> str:
target = _normalize_save_npy_path(path if path is not None else current_path)
if not target:
raise ValueError("Пустой путь сохранения")
np.save(target, arr)
return target
def save_calib_envelope(self, path: Optional[str] = None) -> bool:
"""Сохранить огибающую из последнего live-калибровочного свипа (экспорт .npy)."""
if self.last_calib_sweep is None:
return False
try:
_lower, upper = build_calib_envelopes(self.last_calib_sweep)
self.calib_envelope_path = self._save_array(upper, self.calib_envelope_path, path)
self.last_reference_error = ""
return True
except Exception as exc:
self.last_reference_error = f"save calib envelope failed: {exc}"
return False
def save_background(self, sweep_for_ring: Optional[np.ndarray], path: Optional[str] = None) -> bool:
"""Сохранить текущий свип (в текущем домене обработки) как .npy-фон."""
if sweep_for_ring is None:
return False
try:
bg = np.asarray(sweep_for_ring, dtype=np.float32).copy()
self.background_path = self._save_array(bg, self.background_path, path)
self.background = bg
self.background_raw_capture = None
self.background_source_type = "npy_processed"
self.background_reference_summary = _summary_for_npy(self.background_path)
self.last_reference_error = ""
return True
except Exception as exc:
self.last_reference_error = f"save background failed: {exc}"
return False
# ---- Loading references (.npy or capture) ----
def _detect_source_kind(self, path: str, source_kind: str) -> Optional[str]:
sk = str(source_kind).strip().lower() or "auto"
if sk == "auto":
return detect_reference_file_format(path)
if sk in ("npy", "bin_capture", "capture"):
return "bin_capture" if sk == "capture" else sk
return None
def _load_npy_vector(self, path: str) -> np.ndarray:
arr = np.load(path)
return np.asarray(arr, dtype=np.float32).reshape(-1)
def load_calib_reference(
self,
path: Optional[str] = None,
*,
source_kind: str = "auto",
method: str = "median",
) -> bool:
"""Загрузить калибровку из .npy (огибающая) или raw capture файла."""
if path is not None:
self.set_calib_envelope_path(path)
p = self.calib_envelope_path
if not p or not os.path.isfile(p):
self.last_reference_error = f"Файл калибровки не найден: {p}"
return False
fmt = self._detect_source_kind(p, source_kind)
if fmt is None:
self.last_reference_error = f"Неизвестный формат файла калибровки: {p}"
return False
try:
if fmt == "npy":
env = self._load_npy_vector(p)
self.calib_file_envelope = env
self.calib_external_source_type = "npy"
self.calib_reference_summary = _summary_for_npy(p)
self.last_reference_error = ""
return True
sweeps = load_capture_sweeps(p, fancy=self.capture_fancy, logscale=self.capture_logscale)
vec, summary = aggregate_capture_reference(
sweeps,
channel=0,
method=method or self.reference_aggregation_method,
path=p,
)
_lower, upper = build_calib_envelopes(vec)
self.calib_file_envelope = np.asarray(upper, dtype=np.float32)
self.calib_external_source_type = "capture"
self.calib_reference_summary = summary
self.last_reference_error = ""
return True
except Exception as exc:
self.last_reference_error = f"Ошибка загрузки калибровки: {exc}"
return False
def load_background_reference(
self,
path: Optional[str] = None,
*,
source_kind: str = "auto",
method: str = "median",
) -> bool:
"""Загрузить фон из .npy (готовый домен) или raw capture файла."""
if path is not None:
self.set_background_path(path)
p = self.background_path
if not p or not os.path.isfile(p):
self.last_reference_error = f"Файл фона не найден: {p}"
return False
fmt = self._detect_source_kind(p, source_kind)
if fmt is None:
self.last_reference_error = f"Неизвестный формат файла фона: {p}"
return False
try:
if fmt == "npy":
bg = self._load_npy_vector(p)
self.background = bg
self.background_raw_capture = None
self.background_source_type = "npy_processed"
self.background_reference_summary = _summary_for_npy(p)
self.last_reference_error = ""
return True
sweeps = load_capture_sweeps(p, fancy=self.capture_fancy, logscale=self.capture_logscale)
vec, summary = aggregate_capture_reference(
sweeps,
channel=0,
method=method or self.reference_aggregation_method,
path=p,
)
self.background_raw_capture = np.asarray(vec, dtype=np.float32)
# For UI/preview the current background reflects the current domain (raw by default for now).
self.background = self.background_raw_capture
self.background_source_type = "capture_raw"
self.background_reference_summary = summary
self.last_reference_error = ""
return True
except Exception as exc:
self.last_reference_error = f"Ошибка загрузки фона: {exc}"
return False
# Backward-compatible wrappers for the old API (strictly .npy)
def load_calib_envelope(self, path: Optional[str] = None) -> bool:
target = path if path is not None else self.calib_envelope_path
return self.load_calib_reference(target, source_kind="npy")
def load_background(self, path: Optional[str] = None) -> bool:
target = path if path is not None else self.background_path
return self.load_background_reference(target, source_kind="npy")
# ---- Normalization / background ----
def _normalize_against_active_reference(self, raw: np.ndarray) -> Tuple[Optional[np.ndarray], bool, str]:
if not self.calib_enabled:
return None, False, "off"
if self.calib_mode == "file":
if self.calib_file_envelope is None:
return None, False, "off"
src = "capture" if self.calib_external_source_type == "capture" else "npy"
return normalize_by_envelope(raw, self.calib_file_envelope), True, src
if self.last_calib_sweep is None:
return None, False, "off"
return normalize_by_calib(raw, self.last_calib_sweep, self.norm_type), True, "live"
def _transform_raw_background_for_current_domain(self, calib_applied: bool) -> Optional[np.ndarray]:
if self.background_raw_capture is None:
return None
if not calib_applied:
return self.background_raw_capture
# The pipeline order is fixed: raw -> calibration -> background -> IFFT.
# So a raw capture background must be converted into the same domain as the current sweep_for_ring.
if self.calib_mode == "file" and self.calib_file_envelope is not None:
return normalize_by_envelope(self.background_raw_capture, self.calib_file_envelope)
if self.calib_mode == "live" and self.last_calib_sweep is not None:
return normalize_by_calib(self.background_raw_capture, self.last_calib_sweep, self.norm_type)
return None
def _effective_background(self, calib_applied: bool) -> Tuple[Optional[np.ndarray], str]:
if self.background_source_type == "capture_raw":
bg = self._transform_raw_background_for_current_domain(calib_applied)
if bg is None:
return None, "capture(raw->calib:missing-calib)"
self.background = np.asarray(bg, dtype=np.float32)
return self.background, ("capture(raw->calib)" if calib_applied else "capture(raw)")
if self.background_source_type == "npy_processed" and self.background is not None:
return self.background, "npy"
if self.background is not None:
return self.background, "unknown"
return None, "off"
def _subtract_background(self, sweep: np.ndarray, calib_applied: bool) -> Tuple[np.ndarray, bool, str]:
if not self.background_enabled:
return sweep, False, "off"
bg, bg_src = self._effective_background(calib_applied)
if bg is None:
return sweep, False, f"{bg_src}:missing"
out = np.asarray(sweep, dtype=np.float32).copy()
w = min(out.size, bg.size)
if w > 0:
out[:w] -= bg[:w]
return out, True, bg_src
def process(self, sweep: np.ndarray, channel: int, update_references: bool = True) -> SweepProcessingResult:
"""Применить к свипу калибровку/фон и вернуть явные этапы обработки."""
raw = np.asarray(sweep, dtype=np.float32)
ch = int(channel)
if ch == 0:
if update_references:
self.last_calib_sweep = raw
if self.auto_save_live_calib_envelope:
self.save_calib_envelope()
# ch0 always remains the live calibration reference (raw), but with file calibration
# it can also be applied to ch0 for display/processing regardless of channel.
calib_applied = False
calib_source = "off"
normalized: Optional[np.ndarray] = None
if self.calib_enabled and self.calib_mode == "file":
normalized, calib_applied, calib_source = self._normalize_against_active_reference(raw)
base = normalized if normalized is not None else raw
processed, bg_applied, bg_source = self._subtract_background(base, calib_applied=calib_applied)
stages = ["parsed_sweep", "raw_sweep", "ch0_live_calibration_reference"]
stages.append(f"calibration_{calib_source}" if calib_applied else "calibration_off")
stages.append(f"background_{bg_source}" if bg_applied else "background_off")
stages.extend(["ring_buffer", "ifft_db"])
return SweepProcessingResult(
processed_sweep=processed,
normalized_sweep=normalized,
calibration_applied=calib_applied,
background_applied=bg_applied,
calibration_source=calib_source if calib_applied else "off",
background_source=bg_source if bg_applied else "off",
is_calibration_reference=True,
stage_trace=tuple(stages),
)
normalized, calib_applied, calib_source = self._normalize_against_active_reference(raw)
base = normalized if normalized is not None else raw
processed, bg_applied, bg_source = self._subtract_background(base, calib_applied)
stages = ["parsed_sweep", "raw_sweep"]
stages.append(f"calibration_{calib_source}" if calib_applied else "calibration_off")
stages.append(f"background_{bg_source}" if bg_applied else "background_off")
stages.extend(["ring_buffer", "ifft_db"])
return SweepProcessingResult(
processed_sweep=processed,
normalized_sweep=normalized,
calibration_applied=calib_applied,
background_applied=bg_applied,
calibration_source=calib_source if calib_applied else "off",
background_source=bg_source if bg_applied else "off",
is_calibration_reference=False,
stage_trace=tuple(stages),
)
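The width-clamped subtraction in `_subtract_background` above can be checked in isolation. A simplified standalone sketch (the `subtract_background` name is illustrative; the pipeline's enable flags and source tags are omitted):

```python
import numpy as np

def subtract_background(sweep, background):
    # Clamp to the shorter array so mismatched widths never raise;
    # samples beyond the background width pass through unchanged.
    out = np.asarray(sweep, dtype=np.float32).copy()
    bg = np.asarray(background, dtype=np.float32)
    w = min(out.size, bg.size)
    if w > 0:
        out[:w] -= bg[:w]
    return out

sweep = np.array([10.0, 12.0, 14.0, 16.0], dtype=np.float32)
bg = np.array([1.0, 2.0, 3.0], dtype=np.float32)  # shorter than the sweep
print(subtract_background(sweep, bg))  # last sample stays 16.0
```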

View File

@@ -0,0 +1,7 @@
"""Runtime state helpers."""
from rfg_adc_plotter.state.background_buffer import BackgroundMedianBuffer
from rfg_adc_plotter.state.ring_buffer import RingBuffer
from rfg_adc_plotter.state.runtime_state import RuntimeState
__all__ = ["BackgroundMedianBuffer", "RingBuffer", "RuntimeState"]

View File

@@ -1,355 +0,0 @@
"""Состояние приложения: текущие свипы и настройки калибровки/нормировки."""
from queue import Empty, Queue
from typing import Any, Mapping, Optional
import numpy as np
from rfg_adc_plotter.processing.pipeline import (
DEFAULT_BACKGROUND_PATH,
DEFAULT_CALIB_ENVELOPE_PATH,
SweepPreprocessor,
)
from rfg_adc_plotter.state.ring_buffer import RingBuffer
from rfg_adc_plotter.types import SweepInfo, SweepPacket
CALIB_ENVELOPE_PATH = DEFAULT_CALIB_ENVELOPE_PATH
BACKGROUND_PATH = DEFAULT_BACKGROUND_PATH
def format_status(data: Mapping[str, Any]) -> str:
"""Преобразовать словарь метрик в одну строку 'k:v'."""
def _fmt(v: Any) -> str:
if v is None:
return "NA"
try:
fv = float(v)
except Exception:
return str(v)
if not np.isfinite(fv):
return "nan"
if abs(fv) >= 1000 or (0 < abs(fv) < 0.01):
return f"{fv:.3g}"
return f"{fv:.3f}".rstrip("0").rstrip(".")
parts = [f"{k}:{_fmt(v)}" for k, v in data.items() if k != "pre_exp_sweep"]
return " ".join(parts)
class AppState:
"""Весь изменяемый GUI-state: текущие данные + pipeline предобработки."""
def __init__(self, norm_type: str = "projector"):
self.current_sweep_pre_exp: Optional[np.ndarray] = None
self.current_sweep_post_exp: Optional[np.ndarray] = None
self.current_sweep_processed: Optional[np.ndarray] = None
self.current_sweep_raw: Optional[np.ndarray] = None
self.current_sweep_norm: Optional[np.ndarray] = None
self.current_info: Optional[SweepInfo] = None
self.norm_type: str = str(norm_type).strip().lower()
self.preprocessor = SweepPreprocessor(norm_type=self.norm_type)
self._last_sweep_for_ring: Optional[np.ndarray] = None
self._last_stage_trace: tuple[str, ...] = tuple()
def configure_capture_import(self, *, fancy: Optional[bool] = None, logscale: Optional[bool] = None):
self.preprocessor.set_capture_parse_options(fancy=fancy, logscale=logscale)
# ---- Pipeline properties (for GUI compatibility) ----
@property
def calib_enabled(self) -> bool:
return self.preprocessor.calib_enabled
@property
def calib_mode(self) -> str:
return self.preprocessor.calib_mode
@property
def calib_file_envelope(self) -> Optional[np.ndarray]:
return self.preprocessor.calib_file_envelope
@property
def last_calib_sweep(self) -> Optional[np.ndarray]:
return self.preprocessor.last_calib_sweep
@property
def background(self) -> Optional[np.ndarray]:
return self.preprocessor.background
@property
def background_enabled(self) -> bool:
return self.preprocessor.background_enabled
@property
def calib_source_type(self) -> str:
return self.preprocessor.calib_external_source_type
@property
def background_source_type(self) -> str:
return self.preprocessor.background_source_type
@property
def calib_reference_summary(self):
return self.preprocessor.calib_reference_summary
@property
def background_reference_summary(self):
return self.preprocessor.background_reference_summary
@property
def last_reference_error(self) -> str:
return self.preprocessor.last_reference_error
@property
def calib_envelope_path(self) -> str:
return self.preprocessor.calib_envelope_path
@property
def background_path(self) -> str:
return self.preprocessor.background_path
# ---- Calibration/background file management ----
def set_calib_envelope_path(self, path: str):
self.preprocessor.set_calib_envelope_path(path)
self._refresh_current_processed()
def set_background_path(self, path: str):
self.preprocessor.set_background_path(path)
self._refresh_current_processed()
def has_calib_envelope_file(self) -> bool:
return self.preprocessor.has_calib_envelope_file()
def has_background_file(self) -> bool:
return self.preprocessor.has_background_file()
def save_calib_envelope(self, path: Optional[str] = None) -> bool:
return self.preprocessor.save_calib_envelope(path)
def load_calib_reference(self, path: Optional[str] = None) -> bool:
ok = self.preprocessor.load_calib_reference(path)
if ok:
self._refresh_current_processed()
return ok
def load_calib_envelope(self, path: Optional[str] = None) -> bool:
return self.load_calib_reference(path)
def set_calib_mode(self, mode: str):
self.preprocessor.set_calib_mode(mode)
self._refresh_current_processed()
def save_background(self, path: Optional[str] = None) -> bool:
return self.preprocessor.save_background(self._last_sweep_for_ring, path)
def load_background_reference(self, path: Optional[str] = None) -> bool:
ok = self.preprocessor.load_background_reference(path)
if ok:
self._refresh_current_processed()
return ok
def load_background(self, path: Optional[str] = None) -> bool:
return self.load_background_reference(path)
def set_background_enabled(self, enabled: bool):
self.preprocessor.set_background_enabled(enabled)
self._refresh_current_processed()
def set_calib_enabled(self, enabled: bool):
self.preprocessor.set_calib_enabled(enabled)
self._refresh_current_processed()
# ---- UI helper methods ----
def _current_channel(self) -> Optional[int]:
if not isinstance(self.current_info, dict):
return None
try:
return int(self.current_info.get("ch", 0))
except Exception:
return 0
def _apply_result_to_current(self, result) -> None:
self._last_stage_trace = tuple(result.stage_trace)
if result.is_calibration_reference:
self.current_sweep_norm = None
elif result.calibration_applied or result.background_applied:
self.current_sweep_norm = result.processed_sweep
else:
self.current_sweep_norm = None
self.current_sweep_processed = result.processed_sweep
self._last_sweep_for_ring = result.processed_sweep
def _refresh_current_processed(self):
if self.current_sweep_raw is None:
self.current_sweep_norm = None
self.current_sweep_processed = None
self._last_stage_trace = tuple()
return
ch = self._current_channel() or 0
result = self.preprocessor.process(self.current_sweep_raw, ch, update_references=False)
self._apply_result_to_current(result)
def format_pipeline_status(self) -> str:
"""Краткое описание pipeline для UI: от распарсенного свипа до IFFT."""
ch = self._current_channel()
if ch is None:
ch_txt = "?"
else:
ch_txt = str(ch)
reader_stage = "log-exp" if self.current_sweep_pre_exp is not None else "linear"
if ch == 0:
file_calib_applies = (
self.calib_enabled
and self.calib_mode == "file"
and self.calib_file_envelope is not None
)
if self.calib_enabled and self.calib_mode == "file":
calib_stage = self.format_calib_source_status()
else:
calib_stage = "calib[off]"
if not self.background_enabled:
bg_stage = "bg[off]"
elif self.background_source_type == "capture_raw":
if self.background is None:
bg_stage = (
"bg[capture(raw->calib):missing]"
if file_calib_applies
else "bg[capture(raw):missing]"
)
else:
bg_stage = "bg[capture(raw->calib)]" if file_calib_applies else "bg[capture(raw)]"
elif self.background_source_type == "npy_processed":
bg_stage = "bg[npy]" if self.background is not None else "bg[npy:missing]"
else:
bg_stage = "bg[sub]" if self.background is not None else "bg[missing]"
return (
f"pipeline ch{ch_txt}: parsed -> {reader_stage} -> raw -> "
f"live-calib-ref -> {calib_stage} -> {bg_stage} -> ring -> IFFT(abs, depth_m)"
)
calib_stage = self.format_calib_source_status()
bg_stage = self.format_background_source_status()
return (
f"pipeline ch{ch_txt}: parsed -> {reader_stage} -> raw -> "
f"{calib_stage} -> {bg_stage} -> ring -> IFFT(abs, depth_m)"
)
def _format_summary(self, summary) -> str:
if summary is None:
return ""
parts: list[str] = []
if getattr(summary, "sweeps_valid", 0) or getattr(summary, "sweeps_total", 0):
parts.append(f"valid:{summary.sweeps_valid}/{summary.sweeps_total}")
if getattr(summary, "dominant_width", None) is not None:
parts.append(f"w:{summary.dominant_width}")
chs = getattr(summary, "channels_seen", tuple())
if chs:
parts.append("chs:" + ",".join(str(v) for v in chs))
warns = getattr(summary, "warnings", tuple())
if warns:
parts.append(f"warn:{warns[0]}")
return " ".join(parts)
def format_calib_source_status(self) -> str:
if not self.calib_enabled:
return "calib[off]"
if self.calib_mode == "live":
return "calib[live]" if self.last_calib_sweep is not None else "calib[live:no-ref]"
if self.calib_file_envelope is None:
return "calib[file:missing]"
if self.calib_source_type == "capture":
return "calib[capture]"
if self.calib_source_type == "npy":
return "calib[npy]"
return "calib[file]"
def format_background_source_status(self) -> str:
if not self.background_enabled:
return "bg[off]"
src = self.background_source_type
if src == "capture_raw":
if self.calib_enabled:
can_map = (
(self.calib_mode == "file" and self.calib_file_envelope is not None)
or (self.calib_mode == "live" and self.last_calib_sweep is not None)
)
if not can_map:
return "bg[capture(raw->calib):missing]"
if self.background is None:
return "bg[capture(raw->calib):missing]"
return "bg[capture(raw->calib)]" if self.calib_enabled else "bg[capture(raw)]"
if src == "npy_processed":
return "bg[npy]" if self.background is not None else "bg[npy:missing]"
if self.background is not None:
return "bg[sub]"
return "bg[missing]"
def format_reference_status(self) -> str:
parts: list[str] = []
calib_s = self._format_summary(self.calib_reference_summary)
if calib_s:
parts.append(f"calib[{calib_s}]")
bg_s = self._format_summary(self.background_reference_summary)
if bg_s:
parts.append(f"bg[{bg_s}]")
if self.last_reference_error:
parts.append(f"err:{self.last_reference_error}")
return " | ".join(parts)
def format_stage_trace(self) -> str:
if not self._last_stage_trace:
return ""
return " -> ".join(self._last_stage_trace)
def drain_queue(self, q: "Queue[SweepPacket]", ring: RingBuffer) -> int:
"""Вытащить все ожидающие свипы из очереди, обновить state и ring.
Возвращает количество обработанных свипов.
"""
drained = 0
while True:
try:
s, info = q.get_nowait()
except Empty:
break
drained += 1
self.current_sweep_raw = s
self.current_sweep_post_exp = s
self.current_info = info
pre_exp = info.get("pre_exp_sweep") if isinstance(info, dict) else None
self.current_sweep_pre_exp = pre_exp if isinstance(pre_exp, np.ndarray) else None
try:
ch = int(info.get("ch", 0)) if isinstance(info, dict) else 0
except Exception:
ch = 0
result = self.preprocessor.process(s, ch, update_references=True)
self._apply_result_to_current(result)
ring.ensure_init(s.size)
ring.push(result.processed_sweep)
return drained
def format_channel_label(self) -> str:
"""Строка с номерами каналов для подписи на графике."""
if self.current_info is None:
return ""
info = self.current_info
chs = info.get("chs") if isinstance(info, dict) else None
if chs is None:
chs = info.get("ch") if isinstance(info, dict) else None
if chs is None:
return ""
try:
if isinstance(chs, (list, tuple, set)):
ch_list = sorted(int(v) for v in chs)
return "chs " + ", ".join(str(v) for v in ch_list)
return f"chs {int(chs)}"
except Exception:
return f"chs {chs}"

View File

@@ -0,0 +1,49 @@
"""Rolling median buffer for persisted FFT background capture."""
from __future__ import annotations
from typing import Optional
import numpy as np
class BackgroundMedianBuffer:
"""Store recent FFT rows and expose their median profile."""
def __init__(self, max_rows: int):
self.max_rows = max(1, int(max_rows))
self.width = 0
self.head = 0
self.count = 0
self.rows: Optional[np.ndarray] = None
def reset(self) -> None:
self.width = 0
self.head = 0
self.count = 0
self.rows = None
def push(self, fft_mag: np.ndarray) -> None:
values = np.asarray(fft_mag, dtype=np.float32).reshape(-1)
if values.size == 0:
return
if self.rows is None or self.width != values.size:
self.width = values.size
self.rows = np.full((self.max_rows, self.width), np.nan, dtype=np.float32)
self.head = 0
self.count = 0
self.rows[self.head, :] = values
self.head = (self.head + 1) % self.max_rows
self.count = min(self.count + 1, self.max_rows)
def median(self) -> Optional[np.ndarray]:
if self.rows is None or self.count <= 0:
return None
rows = self.rows[: self.count] if self.count < self.max_rows else self.rows
valid_rows = np.any(np.isfinite(rows), axis=1)
if not np.any(valid_rows):
return None
median = np.nanmedian(rows[valid_rows], axis=0).astype(np.float32, copy=False)
if not np.any(np.isfinite(median)):
return None
return np.nan_to_num(median, nan=0.0).astype(np.float32, copy=False)
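The `median()` path above boils down to a nanmedian over the rows that actually received data. A minimal standalone illustration of the same idea (not importing the module):

```python
import numpy as np

# Keep the last N rows, pad unfilled slots with NaN, take a nanmedian.
max_rows, width = 4, 3
rows = np.full((max_rows, width), np.nan, dtype=np.float32)
rows[0] = [1.0, 5.0, 9.0]
rows[1] = [3.0, 7.0, 11.0]

valid = np.any(np.isfinite(rows), axis=1)   # rows that received data
profile = np.nanmedian(rows[valid], axis=0)
print(profile)  # element-wise median of the two pushed rows
```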

View File

@@ -1,225 +1,266 @@
"""Кольцевой буфер свипов и FFT-строк для водопадного отображения."""
"""Ring buffers for raw sweeps and FFT waterfall rows."""
from __future__ import annotations
import time
from typing import Optional, Tuple
from typing import Optional
import numpy as np
from rfg_adc_plotter.constants import (
FREQ_MAX_GHZ,
FREQ_MIN_GHZ,
WF_WIDTH,
)
from rfg_adc_plotter.processing.fourier import (
compute_ifft_profile_from_sweep,
)
from rfg_adc_plotter.constants import FFT_LEN, SWEEP_FREQ_MAX_GHZ, SWEEP_FREQ_MIN_GHZ, WF_WIDTH
from rfg_adc_plotter.processing.fft import compute_distance_axis, compute_fft_mag_row, fft_mag_to_db
class RingBuffer:
"""Хранит последние N свипов и соответствующие FFT-строки.
Все мутабельные поля водопада инкапсулированы здесь,
что устраняет необходимость nonlocal в GUI-коде.
"""
"""Store raw sweeps, FFT rows, and matching time markers."""
def __init__(self, max_sweeps: int):
self.max_sweeps = max_sweeps
# The IFFT profile size is now dynamic and determined by the first successful sweep.
self.fft_bins = 0
self.fft_complex_mode: str = "arccos"
# Initialized on the first sweep (ensure_init)
self.ring: Optional[np.ndarray] = None # (max_sweeps, WF_WIDTH)
self.ring_fft: Optional[np.ndarray] = None # (max_sweeps, fft_bins)
self.ring_time: Optional[np.ndarray] = None # (max_sweeps,)
self.head: int = 0
self.width: Optional[int] = None
self.max_sweeps = int(max_sweeps)
self.fft_bins = FFT_LEN // 2 + 1
self.fft_mode = "symmetric"
self.width = 0
self.head = 0
self.ring: Optional[np.ndarray] = None
self.ring_time: Optional[np.ndarray] = None
self.ring_fft: Optional[np.ndarray] = None
self.ring_fft_input: Optional[np.ndarray] = None
self.x_shared: Optional[np.ndarray] = None
self.fft_depth_axis_m: Optional[np.ndarray] = None # IFFT depth axis in meters
self.distance_axis: Optional[np.ndarray] = None
self.last_fft_mag: Optional[np.ndarray] = None
self.last_fft_db: Optional[np.ndarray] = None
self.last_freqs: Optional[np.ndarray] = None
self.y_min_fft: Optional[float] = None
self.y_max_fft: Optional[float] = None
# FFT of the last sweep (for display without recomputation)
self.last_fft_vals: Optional[np.ndarray] = None
self.last_push_valid_points = 0
self.last_push_fft_valid = False
self.last_push_axis_valid = False
@property
def is_ready(self) -> bool:
return self.ring is not None
return self.ring is not None and self.ring_fft is not None
@property
def fft_time_axis(self) -> Optional[np.ndarray]:
"""Legacy alias: старое имя поля (раньше было время в нс, теперь глубина в м)."""
return self.fft_depth_axis_m
def fft_symmetric(self) -> bool:
return self.fft_mode == "symmetric"
def set_fft_complex_mode(self, mode: str) -> bool:
"""Выбрать режим реконструкции комплексного спектра для IFFT.
Возвращает True, если режим изменился (и FFT-буфер был сброшен).
"""
m = str(mode).strip().lower()
if m not in ("arccos", "diff"):
raise ValueError(f"Unsupported IFFT complex mode: {mode!r}")
if m == self.fft_complex_mode:
return False
self.fft_complex_mode = m
# Reset only the FFT-dependent structures; keep the raw rows.
def reset(self) -> None:
"""Drop all buffered sweeps and derived FFT state."""
self.width = 0
self.head = 0
self.ring = None
self.ring_time = None
self.ring_fft = None
self.fft_depth_axis_m = None
self.fft_bins = 0
self.last_fft_vals = None
self.ring_fft_input = None
self.x_shared = None
self.distance_axis = None
self.last_fft_mag = None
self.last_fft_db = None
self.last_freqs = None
self.y_min_fft = None
self.y_max_fft = None
self.last_push_valid_points = 0
self.last_push_fft_valid = False
self.last_push_axis_valid = False
def _promote_fft_cache(self, fft_mag: np.ndarray) -> bool:
fft_mag_arr = np.asarray(fft_mag, dtype=np.float32).reshape(-1)
if fft_mag_arr.size <= 0:
self.last_push_fft_valid = False
return False
fft_db = fft_mag_to_db(fft_mag_arr)
finite_db = fft_db[np.isfinite(fft_db)]
if finite_db.size <= 0:
self.last_push_fft_valid = False
return False
self.last_fft_mag = fft_mag_arr.copy()
self.last_fft_db = fft_db
fr_min = float(np.min(finite_db))
fr_max = float(np.max(finite_db))
self.y_min_fft = fr_min if self.y_min_fft is None else min(self.y_min_fft, fr_min)
self.y_max_fft = fr_max if self.y_max_fft is None else max(self.y_max_fft, fr_max)
self.last_push_fft_valid = True
return True
def ensure_init(self, sweep_width: int):
"""Инициализировать буферы при первом свипе. Повторные вызовы — no-op (кроме x_shared)."""
if self.ring is None:
self.width = WF_WIDTH
def _promote_distance_axis(self, axis: np.ndarray) -> bool:
axis_arr = np.asarray(axis, dtype=np.float64).reshape(-1)
if axis_arr.size <= 0 or not np.all(np.isfinite(axis_arr)):
self.last_push_axis_valid = False
return False
self.distance_axis = axis_arr.copy()
self.last_push_axis_valid = True
return True
def ensure_init(self, sweep_width: int) -> bool:
"""Allocate or resize buffers. Returns True when geometry changed."""
target_width = max(int(sweep_width), int(WF_WIDTH))
changed = False
if self.ring is None or self.ring_time is None or self.ring_fft is None:
self.width = target_width
self.ring = np.full((self.max_sweeps, self.width), np.nan, dtype=np.float32)
self.ring_time = np.full((self.max_sweeps,), np.nan, dtype=np.float64)
self.ring_fft = np.full((self.max_sweeps, self.fft_bins), np.nan, dtype=np.float32)
self.ring_fft_input = np.full((self.max_sweeps, self.width), np.nan + 0j, dtype=np.complex64)
self.head = 0
# Update x_shared if a larger sweep arrives
if self.x_shared is None or sweep_width > self.x_shared.size:
self.x_shared = np.linspace(FREQ_MIN_GHZ, FREQ_MAX_GHZ, sweep_width, dtype=np.float32)
changed = True
elif target_width != self.width:
new_ring = np.full((self.max_sweeps, target_width), np.nan, dtype=np.float32)
new_fft_input = np.full((self.max_sweeps, target_width), np.nan + 0j, dtype=np.complex64)
take = min(self.width, target_width)
new_ring[:, :take] = self.ring[:, :take]
if self.ring_fft_input is not None:
new_fft_input[:, :take] = self.ring_fft_input[:, :take]
self.ring = new_ring
self.ring_fft_input = new_fft_input
self.width = target_width
changed = True
def push(self, s: np.ndarray):
"""Добавить строку свипа в кольцевой буфер, вычислить FFT-строку."""
if s is None or s.size == 0 or self.ring is None:
return
w = self.ring.shape[1]
row = np.full((w,), np.nan, dtype=np.float32)
take = min(w, s.size)
row[:take] = s[:take]
self.ring[self.head, :] = row
self.ring_time[self.head] = time.time()
self.head = (self.head + 1) % self.ring.shape[0]
self._push_fft(s)
def _push_fft(self, s: np.ndarray):
depth_axis_m, fft_row = compute_ifft_profile_from_sweep(
s,
complex_mode=self.fft_complex_mode,
if self.x_shared is None or self.x_shared.size != self.width:
self.x_shared = np.linspace(
SWEEP_FREQ_MIN_GHZ,
SWEEP_FREQ_MAX_GHZ,
self.width,
dtype=np.float32,
)
fft_row = np.asarray(fft_row, dtype=np.float32).ravel()
depth_axis_m = np.asarray(depth_axis_m, dtype=np.float32).ravel()
changed = True
return changed
n = min(int(fft_row.size), int(depth_axis_m.size))
if n <= 0:
self.last_fft_vals = None
return
if n != fft_row.size:
fft_row = fft_row[:n]
if n != depth_axis_m.size:
depth_axis_m = depth_axis_m[:n]
def set_fft_mode(self, mode: str) -> bool:
"""Switch FFT mode and rebuild cached FFT rows from stored sweeps."""
normalized_mode = str(mode).strip().lower()
if normalized_mode in {"ordinary", "normal"}:
normalized_mode = "direct"
if normalized_mode in {"sym", "mirror"}:
normalized_mode = "symmetric"
if normalized_mode in {"positive-centered", "positive_centered", "zero_left"}:
normalized_mode = "positive_only"
if normalized_mode in {"positive-centered-exact", "positive_centered_exact", "zero_left_exact"}:
normalized_mode = "positive_only_exact"
if normalized_mode not in {"direct", "symmetric", "positive_only", "positive_only_exact"}:
raise ValueError(f"Unsupported FFT mode: {mode!r}")
if normalized_mode == self.fft_mode:
return False
needs_reset = (
self.ring_fft is None
or self.fft_depth_axis_m is None
or self.fft_bins != n
or self.ring_fft.shape != (self.max_sweeps, n)
or self.fft_depth_axis_m.size != n
)
if (not needs_reset) and n > 0:
prev_axis = self.fft_depth_axis_m
assert prev_axis is not None
if prev_axis.size != n:
needs_reset = True
else:
# If the axis changed (e.g. a new length or frequency grid), reset the FFT waterfall.
if not np.allclose(prev_axis[[0, -1]], depth_axis_m[[0, -1]], rtol=1e-6, atol=1e-9):
needs_reset = True
if needs_reset:
self.fft_bins = n
self.ring_fft = np.full((self.max_sweeps, n), np.nan, dtype=np.float32)
self.fft_depth_axis_m = depth_axis_m.astype(np.float32, copy=True)
self.fft_mode = normalized_mode
self.y_min_fft = None
self.y_max_fft = None
self.last_push_fft_valid = False
self.last_push_axis_valid = False
assert self.ring_fft is not None
prev_head = (self.head - 1) % self.ring_fft.shape[0]
self.ring_fft[prev_head, :] = fft_row
self.last_fft_vals = fft_row
if self.ring is None or self.ring_fft is None:
return True
fr_min = np.nanmin(fft_row)
fr_max = float(np.nanpercentile(fft_row, 90))
if self.y_min_fft is None or (not np.isnan(fr_min) and fr_min < self.y_min_fft):
self.y_min_fft = float(fr_min)
if self.y_max_fft is None or (not np.isnan(fr_max) and fr_max > self.y_max_fft):
self.y_max_fft = float(fr_max)
self.ring_fft.fill(np.nan)
for row_idx in range(self.ring.shape[0]):
fft_source_row = self.ring_fft_input[row_idx] if self.ring_fft_input is not None else self.ring[row_idx]
if not np.any(np.isfinite(fft_source_row)):
continue
finite_idx = np.flatnonzero(np.isfinite(fft_source_row))
if finite_idx.size <= 0:
continue
row_width = int(finite_idx[-1]) + 1
fft_source = fft_source_row[:row_width]
freqs = self.last_freqs[:row_width] if self.last_freqs is not None and self.last_freqs.size >= row_width else self.last_freqs
fft_mag = compute_fft_mag_row(
fft_source,
freqs,
self.fft_bins,
mode=self.fft_mode,
)
self.ring_fft[row_idx, :] = fft_mag
def get_display_ring(self) -> np.ndarray:
"""Кольцо в порядке от старого к новому, ось времени по X. Форма: (width, time)."""
if self.last_freqs is not None:
self._promote_distance_axis(
compute_distance_axis(
self.last_freqs,
self.fft_bins,
mode=self.fft_mode,
)
)
last_idx = (self.head - 1) % self.max_sweeps
if self.ring_fft.shape[0] > 0:
last_fft = self.ring_fft[last_idx]
self._promote_fft_cache(last_fft)
finite = self.ring_fft[np.isfinite(self.ring_fft)]
if finite.size > 0:
finite_db = fft_mag_to_db(finite.astype(np.float32, copy=False))
self.y_min_fft = float(np.nanmin(finite_db))
self.y_max_fft = float(np.nanmax(finite_db))
return True
def set_symmetric_fft_enabled(self, enabled: bool) -> bool:
"""Backward-compatible wrapper for the old two-state FFT switch."""
return self.set_fft_mode("symmetric" if enabled else "direct")
def push(
self,
sweep: np.ndarray,
freqs: Optional[np.ndarray] = None,
*,
fft_input: Optional[np.ndarray] = None,
) -> None:
"""Push a processed sweep and refresh raw/FFT buffers."""
if sweep is None or sweep.size == 0:
return
self.ensure_init(int(sweep.size))
if self.ring is None or self.ring_time is None or self.ring_fft is None or self.ring_fft_input is None:
return
row = np.full((self.width,), np.nan, dtype=np.float32)
take = min(self.width, int(sweep.size))
row[:take] = np.asarray(sweep[:take], dtype=np.float32)
self.last_push_valid_points = int(np.count_nonzero(np.isfinite(row[:take])))
self.ring[self.head, :] = row
self.ring_time[self.head] = time.time()
if freqs is not None:
self.last_freqs = np.asarray(freqs, dtype=np.float64).copy()
fft_source = np.asarray(fft_input if fft_input is not None else sweep).reshape(-1)
fft_row = np.full((self.width,), np.nan + 0j, dtype=np.complex64)
fft_take = min(self.width, int(fft_source.size))
fft_row[:fft_take] = np.asarray(fft_source[:fft_take], dtype=np.complex64)
self.ring_fft_input[self.head, :] = fft_row
fft_mag = compute_fft_mag_row(fft_source, freqs, self.fft_bins, mode=self.fft_mode)
self.ring_fft[self.head, :] = fft_mag
self._promote_fft_cache(fft_mag)
self._promote_distance_axis(compute_distance_axis(freqs, self.fft_bins, mode=self.fft_mode))
self.head = (self.head + 1) % self.max_sweeps
def get_display_raw(self) -> np.ndarray:
if self.ring is None:
return np.zeros((1, 1), dtype=np.float32)
base = self.ring if self.head == 0 else np.roll(self.ring, -self.head, axis=0)
return base.T # (width, time)
return base.T
def get_display_ring_fft(self) -> np.ndarray:
"""FFT-кольцо в порядке от старого к новому. Форма: (bins, time)."""
def get_display_raw_decimated(self, max_points: int) -> np.ndarray:
"""Return a display-oriented raw waterfall with optional frequency decimation."""
if self.ring is None:
return np.zeros((1, 1), dtype=np.float32)
limit = int(max_points)
if limit <= 0 or self.width <= limit:
return self.get_display_raw()
row_order = np.arange(self.ring.shape[0], dtype=np.int64)
if self.head:
row_order = np.roll(row_order, -self.head)
col_idx = np.linspace(0, self.width - 1, limit, dtype=np.int64)
return self.ring[np.ix_(row_order, col_idx)].T
def get_display_fft_linear(self) -> np.ndarray:
if self.ring_fft is None:
return np.zeros((1, 1), dtype=np.float32)
base = self.ring_fft if self.head == 0 else np.roll(self.ring_fft, -self.head, axis=0)
return base.T # (bins, time)
return base.T
def get_last_fft_linear(self) -> Optional[np.ndarray]:
if self.last_fft_mag is None:
return None
return np.asarray(self.last_fft_mag, dtype=np.float32).copy()
def get_display_times(self) -> Optional[np.ndarray]:
"""Временные метки строк в порядке от старого к новому."""
if self.ring_time is None:
return None
return self.ring_time if self.head == 0 else np.roll(self.ring_time, -self.head)
def subtract_recent_mean_fft(
self, disp_fft: np.ndarray, spec_mean_sec: float
) -> np.ndarray:
"""Вычесть среднее по каждой частоте за последние spec_mean_sec секунд."""
if spec_mean_sec <= 0.0:
return disp_fft
disp_times = self.get_display_times()
if disp_times is None:
return disp_fft
now_t = time.time()
mask = np.isfinite(disp_times) & (disp_times >= (now_t - spec_mean_sec))
if not np.any(mask):
return disp_fft
try:
mean_spec = np.nanmean(disp_fft[:, mask], axis=1)
except Exception:
return disp_fft
mean_spec = np.nan_to_num(mean_spec, nan=0.0)
return disp_fft - mean_spec[:, None]
def compute_fft_levels(
self, disp_fft: np.ndarray, spec_clip: Optional[Tuple[float, float]]
) -> Optional[Tuple[float, float]]:
"""Вычислить (vmin, vmax) для отображения водопада спектров."""
# 1. По среднему спектру за видимое время
try:
mean_spec = np.nanmean(disp_fft, axis=1)
vmin_v = float(np.nanmin(mean_spec))
vmax_v = float(np.nanmax(mean_spec))
if np.isfinite(vmin_v) and np.isfinite(vmax_v) and vmin_v != vmax_v:
return (vmin_v, vmax_v)
except Exception:
pass
# 2. Percentile clipping
if spec_clip is not None:
try:
vmin_v = float(np.nanpercentile(disp_fft, spec_clip[0]))
vmax_v = float(np.nanpercentile(disp_fft, spec_clip[1]))
if np.isfinite(vmin_v) and np.isfinite(vmax_v) and vmin_v != vmax_v:
return (vmin_v, vmax_v)
except Exception:
pass
# 3. Accumulated global min/max
if (
self.y_min_fft is not None
and self.y_max_fft is not None
and np.isfinite(self.y_min_fft)
and np.isfinite(self.y_max_fft)
and self.y_min_fft != self.y_max_fft
):
return (self.y_min_fft, self.y_max_fft)
return None
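Both `get_display_raw` and `get_display_fft_linear` above rely on the same `np.roll` trick to reorder the ring oldest-to-newest: `head` points at the next write slot, so rolling by `-head` puts the oldest row first. A minimal standalone illustration:

```python
import numpy as np

max_sweeps, width = 4, 3
ring = np.arange(max_sweeps * width, dtype=np.float32).reshape(max_sweeps, width)
head = 2  # rows were written in the order 2, 3, 0, 1 (oldest first)

# Rolling by -head rotates the oldest row to index 0.
ordered = np.roll(ring, -head, axis=0)
print(ordered[:, 0])  # first column, oldest row first
```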

View File

@@ -0,0 +1,54 @@
"""Mutable state container for the PyQtGraph backend."""
from __future__ import annotations
from dataclasses import dataclass, field
from typing import Dict, List, Optional
import numpy as np
from rfg_adc_plotter.constants import BACKGROUND_MEDIAN_SWEEPS
from rfg_adc_plotter.state.background_buffer import BackgroundMedianBuffer
from rfg_adc_plotter.state.ring_buffer import RingBuffer
from rfg_adc_plotter.types import SweepAuxCurves, SweepInfo
@dataclass
class RuntimeState:
ring: RingBuffer
range_min_ghz: float = 0.0
range_max_ghz: float = 0.0
full_current_freqs: Optional[np.ndarray] = None
full_current_sweep_raw: Optional[np.ndarray] = None
full_current_sweep_codes: Optional[np.ndarray] = None
full_current_fft_source: Optional[np.ndarray] = None
full_current_aux_curves: SweepAuxCurves = None
full_current_aux_curves_codes: SweepAuxCurves = None
current_freqs: Optional[np.ndarray] = None
current_distances: Optional[np.ndarray] = None
current_sweep_raw: Optional[np.ndarray] = None
current_fft_source: Optional[np.ndarray] = None
current_fft_input: Optional[np.ndarray] = None
current_fft_complex: Optional[np.ndarray] = None
current_aux_curves: SweepAuxCurves = None
current_sweep_norm: Optional[np.ndarray] = None
current_fft_mag: Optional[np.ndarray] = None
current_fft_db: Optional[np.ndarray] = None
last_calib_sweep: Optional[np.ndarray] = None
calib_envelope: Optional[np.ndarray] = None
calib_file_path: Optional[str] = None
complex_calib_curve: Optional[np.ndarray] = None
complex_calib_file_path: Optional[str] = None
background_buffer: BackgroundMedianBuffer = field(
default_factory=lambda: BackgroundMedianBuffer(BACKGROUND_MEDIAN_SWEEPS)
)
background_profile: Optional[np.ndarray] = None
background_file_path: Optional[str] = None
current_info: Optional[SweepInfo] = None
current_peak_width: Optional[float] = None
current_peak_amplitude: Optional[float] = None
peak_candidates: List[Dict[str, float]] = field(default_factory=list)
plot_dirty: bool = False
def mark_dirty(self) -> None:
self.plot_dirty = True
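The `background_buffer` and `peak_candidates` fields above go through `default_factory` so each `RuntimeState` instance gets its own mutable object. This reduced sketch (with an illustrative `TinyState` class) shows why a mutable dataclass default must use a factory:

```python
from dataclasses import dataclass, field

@dataclass
class TinyState:
    # A bare `peaks: list = []` would be rejected by dataclasses;
    # default_factory builds a fresh list per instance instead.
    peaks: list = field(default_factory=list)
    plot_dirty: bool = False

    def mark_dirty(self) -> None:
        self.plot_dirty = True

a, b = TinyState(), TinyState()
a.peaks.append(1.0)
print(len(b.peaks))  # 0 -- instances do not share the list
```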

View File

@@ -1,7 +1,34 @@
from typing import Any, Dict, Tuple, Union
"""Shared runtime and parser types."""
from __future__ import annotations
from dataclasses import dataclass
from typing import Any, Dict, Literal, Optional, Tuple, TypeAlias, Union
import numpy as np
Number = Union[int, float]
SignalKind = Literal["bin_iq", "bin_logdet"]
SweepInfo = Dict[str, Any]
SweepPacket = Tuple[np.ndarray, SweepInfo]
SweepData = Dict[str, np.ndarray]
SweepAuxCurves = Optional[Tuple[np.ndarray, np.ndarray]]
SweepPacket = Tuple[np.ndarray, SweepInfo, SweepAuxCurves]
@dataclass(frozen=True)
class StartEvent:
ch: Optional[int] = None
signal_kind: Optional[SignalKind] = None
@dataclass(frozen=True)
class PointEvent:
ch: int
x: int
y: float
aux: Optional[Tuple[float, float]] = None
signal_kind: Optional[SignalKind] = None
ParserEvent: TypeAlias = Union[StartEvent, PointEvent]


@ -1,2 +0,0 @@
#!/usr/bin/bash
python3 -m rfg_adc_plotter.main --bin --backend mpl $@


@ -0,0 +1,44 @@
from __future__ import annotations
import numpy as np
import unittest
from rfg_adc_plotter.state.background_buffer import BackgroundMedianBuffer
class BackgroundMedianBufferTests(unittest.TestCase):
def test_buffer_returns_median_for_partial_fill(self):
buffer = BackgroundMedianBuffer(max_rows=4)
buffer.push(np.asarray([1.0, 5.0, 9.0], dtype=np.float32))
buffer.push(np.asarray([3.0, 7.0, 11.0], dtype=np.float32))
median = buffer.median()
self.assertIsNotNone(median)
self.assertTrue(np.allclose(median, np.asarray([2.0, 6.0, 10.0], dtype=np.float32)))
def test_buffer_wraparound_keeps_latest_rows(self):
buffer = BackgroundMedianBuffer(max_rows=2)
buffer.push(np.asarray([1.0, 5.0], dtype=np.float32))
buffer.push(np.asarray([3.0, 7.0], dtype=np.float32))
buffer.push(np.asarray([9.0, 11.0], dtype=np.float32))
median = buffer.median()
self.assertIsNotNone(median)
self.assertTrue(np.allclose(median, np.asarray([6.0, 9.0], dtype=np.float32)))
def test_buffer_reset_clears_state(self):
buffer = BackgroundMedianBuffer(max_rows=2)
buffer.push(np.asarray([1.0, 2.0], dtype=np.float32))
buffer.reset()
self.assertIsNone(buffer.rows)
self.assertIsNone(buffer.median())
self.assertEqual(buffer.count, 0)
self.assertEqual(buffer.head, 0)
if __name__ == "__main__":
unittest.main()
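The buffer class under test is not shown in this diff; a minimal ring-buffer sketch consistent with the assertions above (the `rows`/`count`/`head` attribute names come from the reset test, the rest is an assumption):

```python
import numpy as np

class BackgroundMedianBuffer:
    """Keeps the last `max_rows` sweeps in a ring buffer; median() collapses them."""

    def __init__(self, max_rows: int):
        self.max_rows = int(max_rows)
        self.rows = None   # lazily allocated (max_rows, width) float32 matrix
        self.count = 0     # number of valid rows currently stored
        self.head = 0      # next write slot in the ring

    def push(self, sweep: np.ndarray) -> None:
        sweep = np.asarray(sweep, dtype=np.float32)
        if self.rows is None:
            self.rows = np.empty((self.max_rows, sweep.size), dtype=np.float32)
        self.rows[self.head] = sweep
        self.head = (self.head + 1) % self.max_rows
        self.count = min(self.count + 1, self.max_rows)

    def median(self):
        if self.rows is None or self.count == 0:
            return None
        return np.median(self.rows[: self.count], axis=0).astype(np.float32)

    def reset(self) -> None:
        self.rows = None
        self.count = 0
        self.head = 0
```

On wraparound the oldest row is overwritten in place, so `median()` always sees the latest `max_rows` sweeps without any copying.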


@ -1,102 +0,0 @@
from pathlib import Path
import numpy as np
from rfg_adc_plotter.io.capture_reference_loader import (
aggregate_capture_reference,
detect_reference_file_format,
load_capture_sweeps,
)
from rfg_adc_plotter.processing.pipeline import SweepPreprocessor
ROOT = Path(__file__).resolve().parents[1]
SAMPLE_BG = ROOT / "sample_data" / "empty"
SAMPLE_CALIB = ROOT / "sample_data" / "no_antennas_35dB_attenuators"
SAMPLE_NEW_FMT = ROOT / "sample_data" / "new_format" / "attenuators_50dB"
def test_detect_reference_file_format_for_sample_capture():
assert detect_reference_file_format(str(SAMPLE_BG)) == "bin_capture"
assert detect_reference_file_format(str(SAMPLE_CALIB)) == "bin_capture"
assert detect_reference_file_format(str(SAMPLE_NEW_FMT)) == "bin_capture"
def test_load_capture_sweeps_parses_binary_capture():
sweeps = load_capture_sweeps(str(SAMPLE_BG), fancy=False, logscale=False)
assert len(sweeps) > 100
sweep0, info0 = sweeps[0]
assert isinstance(sweep0, np.ndarray)
assert "ch" in info0
channels = set()
for _s, info in sweeps:
chs = info.get("chs", [info.get("ch", 0)])
channels.update(int(v) for v in chs)
assert channels == {0}
def test_load_capture_sweeps_parses_new_format_logdetector_capture():
sweeps = load_capture_sweeps(str(SAMPLE_NEW_FMT), fancy=False, logscale=False)
assert len(sweeps) > 900
widths = [int(s.size) for s, _info in sweeps]
dominant_width = max(set(widths), key=widths.count)
# Should match the expected sweep width from standard captures.
assert dominant_width in (758, 759)
channels = set()
for _s, info in sweeps:
chs = info.get("chs", [info.get("ch", 0)])
channels.update(int(v) for v in chs)
assert channels == {0}
def test_aggregate_capture_reference_filters_incomplete_sweeps():
sweeps = load_capture_sweeps(str(SAMPLE_BG), fancy=False, logscale=False)
vector, summary = aggregate_capture_reference(sweeps, channel=0, method="median", path=str(SAMPLE_BG))
assert isinstance(vector, np.ndarray)
assert vector.dtype == np.float32
assert summary.sweeps_total == len(sweeps)
assert summary.sweeps_valid > 0
assert summary.sweeps_valid < summary.sweeps_total
assert summary.dominant_width in (759, 758) # sample_data starts at x=1..758 => width=759
def test_preprocessor_can_load_capture_calib_and_background_and_apply():
p = SweepPreprocessor(norm_type="projector", auto_save_live_calib_envelope=False)
p.set_capture_parse_options(fancy=False, logscale=False)
assert p.load_calib_reference(str(SAMPLE_CALIB))
p.set_calib_mode("file")
p.set_calib_enabled(True)
assert p.calib_file_envelope is not None
assert p.calib_external_source_type == "capture"
assert p.load_background_reference(str(SAMPLE_BG))
p.set_background_enabled(True)
assert p.background_source_type == "capture_raw"
n = min(758, int(p.calib_file_envelope.size))
sweep = np.linspace(-100.0, 100.0, n, dtype=np.float32)
res = p.process(sweep, channel=1, update_references=False)
assert res.calibration_applied is True
assert res.background_applied is True
assert res.calibration_source == "capture"
assert "background_capture(raw->calib)" in res.stage_trace
def test_preprocessor_applies_background_for_ch0_reference_too():
p = SweepPreprocessor(norm_type="projector", auto_save_live_calib_envelope=False)
p.set_capture_parse_options(fancy=False, logscale=False)
assert p.load_background_reference(str(SAMPLE_BG))
p.set_background_enabled(True)
n = min(758, int(p.background.size)) if p.background is not None else 758
raw = np.linspace(-10.0, 10.0, n, dtype=np.float32)
res = p.process(raw, channel=0, update_references=True)
assert res.is_calibration_reference is True
assert res.background_applied is True
assert np.any(np.abs(res.processed_sweep - raw) > 0)
assert p.last_calib_sweep is not None
assert np.allclose(p.last_calib_sweep[:n], raw[:n], equal_nan=True)

tests/test_cli.py Normal file

@ -0,0 +1,57 @@
from __future__ import annotations
import subprocess
import sys
import unittest
from pathlib import Path
from rfg_adc_plotter.cli import build_parser
ROOT = Path(__file__).resolve().parents[1]
def _run(*args: str) -> subprocess.CompletedProcess[str]:
return subprocess.run(
[sys.executable, *args],
cwd=ROOT,
text=True,
capture_output=True,
check=False,
)
class CliTests(unittest.TestCase):
def test_logscale_and_opengl_are_opt_in(self):
args = build_parser().parse_args(["/dev/null"])
self.assertFalse(args.logscale)
self.assertFalse(args.opengl)
self.assertAlmostEqual(float(args.tty_range_v), 5.0, places=6)
args_log = build_parser().parse_args(["/dev/null", "--logscale", "--opengl", "--tty-range-v", "2.5"])
self.assertTrue(args_log.logscale)
self.assertTrue(args_log.opengl)
self.assertAlmostEqual(float(args_log.tty_range_v), 2.5, places=6)
def test_wrapper_help_works(self):
proc = _run("RFG_ADC_dataplotter.py", "--help")
self.assertEqual(proc.returncode, 0)
self.assertIn("usage:", proc.stdout)
self.assertIn("--peak_search", proc.stdout)
def test_module_help_works(self):
proc = _run("-m", "rfg_adc_plotter.main", "--help")
self.assertEqual(proc.returncode, 0)
self.assertIn("usage:", proc.stdout)
self.assertIn("--parser_16_bit_x2", proc.stdout)
self.assertIn("--parser_complex_ascii", proc.stdout)
self.assertIn("--opengl", proc.stdout)
def test_backend_mpl_reports_removal(self):
proc = _run("-m", "rfg_adc_plotter.main", "/dev/null", "--backend", "mpl")
self.assertNotEqual(proc.returncode, 0)
self.assertIn("Matplotlib backend removed", proc.stderr)
if __name__ == "__main__":
unittest.main()


@ -1,54 +0,0 @@
import numpy as np
from rfg_adc_plotter.processing.fourier import (
compute_ifft_profile_from_sweep,
reconstruct_complex_spectrum_from_real_trace,
)
def test_reconstruct_complex_spectrum_arccos_mode_returns_complex128():
sweep = np.linspace(-3.0, 7.0, 128, dtype=np.float32)
z = reconstruct_complex_spectrum_from_real_trace(sweep, complex_mode="arccos")
assert z.dtype == np.complex128
assert z.shape == sweep.shape
assert np.all(np.isfinite(np.real(z)))
assert np.all(np.isfinite(np.imag(z)))
def test_reconstruct_complex_spectrum_diff_mode_returns_complex128():
sweep = np.linspace(-1.0, 1.0, 128, dtype=np.float32)
z = reconstruct_complex_spectrum_from_real_trace(sweep, complex_mode="diff")
assert z.dtype == np.complex128
assert z.shape == sweep.shape
assert np.all(np.isfinite(np.real(z)))
assert np.all(np.isfinite(np.imag(z)))
def test_reconstruct_complex_spectrum_diff_mode_projects_to_unit_circle():
sweep = np.sin(np.linspace(0.0, 6.0 * np.pi, 256)).astype(np.float32)
z = reconstruct_complex_spectrum_from_real_trace(sweep, complex_mode="diff")
mag = np.abs(z)
assert np.all(np.isfinite(mag))
assert np.allclose(mag, np.ones_like(mag), atol=1e-5, rtol=0.0)
def test_compute_ifft_profile_from_sweep_accepts_both_modes():
sweep = np.linspace(-5.0, 5.0, 257, dtype=np.float32)
d1, y1 = compute_ifft_profile_from_sweep(sweep, complex_mode="arccos")
d2, y2 = compute_ifft_profile_from_sweep(sweep, complex_mode="diff")
assert d1.dtype == np.float32 and y1.dtype == np.float32
assert d2.dtype == np.float32 and y2.dtype == np.float32
assert d1.size == y1.size and d2.size == y2.size
assert d1.size > 0 and d2.size > 0
assert np.all(np.diff(d1) >= 0.0)
assert np.all(np.diff(d2) >= 0.0)
def test_invalid_complex_mode_falls_back_deterministically_in_outer_wrapper():
sweep = np.linspace(-1.0, 1.0, 64, dtype=np.float32)
depth, y = compute_ifft_profile_from_sweep(sweep, complex_mode="unknown")
assert depth.dtype == np.float32
assert y.dtype == np.float32
assert depth.size == y.size
assert depth.size > 0


@ -1,75 +0,0 @@
import numpy as np
from rfg_adc_plotter.processing.fourier import (
build_frequency_axis_hz,
compute_ifft_profile_from_sweep,
normalize_sweep_for_phase,
perform_ifft_depth_response,
reconstruct_complex_spectrum_from_real_trace,
unwrap_arccos_phase_continuous,
)
def test_normalize_sweep_for_phase_max_abs_and_finite():
sweep = np.array([np.nan, -10.0, 5.0, 20.0, -40.0, np.inf, -np.inf], dtype=np.float32)
x = normalize_sweep_for_phase(sweep)
assert x.dtype == np.float64
assert np.all(np.isfinite(x))
assert np.max(np.abs(x)) <= 1.0 + 1e-12
def test_arccos_unwrap_continuous_recovers_complex_phase_without_large_jumps():
phi_true = np.linspace(0.0, 4.0 * np.pi, 1000, dtype=np.float64)
x = np.cos(phi_true)
phi_rec = unwrap_arccos_phase_continuous(x)
assert phi_rec.shape == phi_true.shape
assert np.max(np.abs(np.diff(phi_rec))) < 0.2
z_true = np.exp(1j * phi_true)
z_rec = np.exp(1j * phi_rec)
assert np.allclose(z_rec, z_true, atol=2e-2, rtol=0.0)
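One way `unwrap_arccos_phase_continuous` can satisfy this test (a sketch, not necessarily the project's implementation): take the principal `arccos` branch, then at each sample linearly extrapolate the phase and pick whichever of the two mirror branches (±arccos, shifted by the nearest 2π multiple) lies closest to the prediction. The extrapolation is what carries the tracker through the extrema of cos, where the two branches nearly coincide:

```python
import numpy as np

def unwrap_arccos_phase_continuous(x: np.ndarray) -> np.ndarray:
    """Recover a continuous phase phi from x = cos(phi) (sketch implementation)."""
    x = np.clip(np.asarray(x, dtype=np.float64), -1.0, 1.0)
    base = np.arccos(x)                  # principal branch, values in [0, pi]
    phi = np.empty_like(base)
    phi[0] = base[0]
    two_pi = 2.0 * np.pi
    for i in range(1, base.size):
        # Linear extrapolation keeps tracking through turning points of cos,
        # where a nearest-to-previous rule would reflect onto the mirror branch.
        pred = phi[i - 1] if i < 2 else 2.0 * phi[i - 1] - phi[i - 2]
        best, best_err = phi[i - 1], np.inf
        for cand0 in (base[i], -base[i]):        # the two arccos branches
            k = np.round((pred - cand0) / two_pi)
            cand = cand0 + two_pi * k            # shift to the turn nearest pred
            err = abs(cand - pred)
            if err < best_err:
                best, best_err = cand, err
        phi[i] = best
    return phi
```

The sign of the underlying phase slope is unobservable from cos alone, so the first step's direction is a deterministic tie-break; the test's monotonically increasing phase matches it.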
def test_reconstruct_complex_spectrum_from_real_trace_output_complex128():
sweep = np.linspace(-1.0, 1.0, 64, dtype=np.float32)
z = reconstruct_complex_spectrum_from_real_trace(sweep)
assert z.dtype == np.complex128
assert z.shape == sweep.shape
assert np.all(np.isfinite(np.real(z)))
assert np.all(np.isfinite(np.imag(z)))
def test_perform_ifft_depth_response_basic_abs():
n = 128
freqs = build_frequency_axis_hz(n)
s = np.exp(1j * np.linspace(0.0, 2.0 * np.pi, n, dtype=np.float64))
depth_m, y = perform_ifft_depth_response(s, freqs, axis="abs")
assert depth_m.dtype == np.float32
assert y.dtype == np.float32
assert depth_m.ndim == 1 and y.ndim == 1
assert depth_m.size == y.size
assert depth_m.size >= n
assert np.all(np.diff(depth_m) >= 0.0)
assert np.all(y >= 0.0)
def test_perform_ifft_depth_response_bad_grid_returns_fallback_not_exception():
s = np.ones(16, dtype=np.complex128)
freqs_desc = np.linspace(10.0, 1.0, 16, dtype=np.float64)
depth_m, y = perform_ifft_depth_response(s, freqs_desc, axis="abs")
assert depth_m.size == y.size
assert depth_m.size == s.size
assert np.all(np.isfinite(depth_m))
def test_compute_ifft_profile_from_sweep_returns_depth_and_linear_abs():
sweep = np.linspace(-5.0, 7.0, 257, dtype=np.float32)
depth_m, y = compute_ifft_profile_from_sweep(sweep)
assert depth_m.dtype == np.float32
assert y.dtype == np.float32
assert depth_m.size == y.size
assert depth_m.size > 0
assert np.all(np.diff(depth_m) >= 0.0)

tests/test_processing.py Normal file

@ -0,0 +1,694 @@
from __future__ import annotations
import os
import tempfile
import numpy as np
import unittest
from rfg_adc_plotter.constants import C_M_S, FFT_LEN, SWEEP_FREQ_MAX_GHZ, SWEEP_FREQ_MIN_GHZ
from rfg_adc_plotter.gui.pyqtgraph_backend import (
apply_distance_cut_to_axis,
apply_working_range,
apply_working_range_to_aux_curves,
build_logdet_voltage_fft_input,
build_main_window_layout,
coalesce_packets_for_ui,
compute_background_subtracted_bscan_levels,
compute_aux_phase_curve,
convert_tty_i16_to_voltage,
decimate_curve_for_display,
resolve_axis_bounds,
resolve_heavy_refresh_stride,
resolve_initial_window_size,
resolve_distance_cut_start,
sanitize_curve_data_for_display,
sanitize_image_for_display,
set_image_rect_if_ready,
resolve_visible_fft_curves,
resolve_visible_aux_curves,
)
from rfg_adc_plotter.processing.calibration import (
build_calib_envelope,
build_complex_calibration_curve,
calibrate_freqs,
load_calib_envelope,
load_complex_calibration,
recalculate_calibration_c,
save_calib_envelope,
save_complex_calibration,
)
from rfg_adc_plotter.processing.background import (
load_fft_background,
save_fft_background,
subtract_fft_background,
)
from rfg_adc_plotter.processing.fft import (
build_positive_only_exact_centered_ifft_spectrum,
build_positive_only_centered_ifft_spectrum,
build_symmetric_ifft_spectrum,
compute_distance_axis,
compute_fft_complex_row,
compute_fft_mag_row,
compute_fft_row,
fft_mag_to_db,
)
from rfg_adc_plotter.processing.normalization import (
build_calib_envelopes,
fit_complex_calibration_to_width,
normalize_by_calib,
normalize_by_complex_calibration,
normalize_by_envelope,
resample_envelope,
)
from rfg_adc_plotter.processing.peaks import find_peak_width_markers, find_top_peaks_over_ref, rolling_median_ref
class ProcessingTests(unittest.TestCase):
def test_convert_tty_i16_to_voltage_maps_and_clips_full_range(self):
codes = np.asarray([-32768.0, -16384.0, 0.0, 16384.0, 32767.0], dtype=np.float32)
volts = convert_tty_i16_to_voltage(codes, 5.0)
self.assertEqual(volts.shape, codes.shape)
self.assertAlmostEqual(float(volts[0]), -5.0, places=6)
self.assertAlmostEqual(float(volts[2]), 0.0, places=6)
self.assertAlmostEqual(float(volts[-1]), 5.0, places=6)
self.assertTrue(np.all(volts >= -5.0))
self.assertTrue(np.all(volts <= 5.0))
def test_build_logdet_voltage_fft_input_converts_codes_and_exponentiates(self):
codes = np.asarray([-32768.0, 0.0, 32767.0], dtype=np.float32)
volts, fft_input = build_logdet_voltage_fft_input(codes, 5.0)
self.assertEqual(volts.shape, codes.shape)
self.assertEqual(fft_input.shape, codes.shape)
self.assertAlmostEqual(float(volts[0]), -5.0, places=6)
self.assertAlmostEqual(float(volts[1]), 0.0, places=6)
self.assertAlmostEqual(float(volts[2]), 5.0, places=6)
self.assertTrue(np.allclose(fft_input, np.exp(volts.astype(np.float32))))
def test_build_logdet_voltage_fft_input_clips_exp_argument_and_respects_range(self):
codes = np.asarray([32767.0], dtype=np.float32)
volts_5, fft_5 = build_logdet_voltage_fft_input(codes, 5.0, exp_input_limit=2.0)
volts_10, fft_10 = build_logdet_voltage_fft_input(codes, 10.0, exp_input_limit=2.0)
self.assertAlmostEqual(float(volts_5[0]), 5.0, places=6)
self.assertAlmostEqual(float(volts_10[0]), 10.0, places=6)
self.assertAlmostEqual(float(fft_5[0]), float(np.exp(np.float32(2.0))), places=5)
self.assertAlmostEqual(float(fft_10[0]), float(np.exp(np.float32(2.0))), places=5)
self.assertTrue(np.isfinite(fft_5[0]))
self.assertTrue(np.isfinite(fft_10[0]))
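The conversion helpers these tests pin down can be sketched as follows; the scale factor 1/32767 with symmetric clipping reproduces the exact endpoint mapping asserted above, and the `exp_input_limit` default here is an assumption, not the project's value:

```python
import numpy as np

def convert_tty_i16_to_voltage(codes, range_v):
    """Map signed 16-bit ADC codes to volts, clipped to +/-range_v (sketch)."""
    volts = np.asarray(codes, dtype=np.float32) * (float(range_v) / 32767.0)
    return np.clip(volts, -float(range_v), float(range_v)).astype(np.float32)

def build_logdet_voltage_fft_input(codes, range_v, exp_input_limit=20.0):
    """Convert log-detector codes to volts, then undo the log via exp().
    The exp() argument is clipped so the FFT input stays finite even for
    large voltage ranges (default limit is an assumed placeholder)."""
    volts = convert_tty_i16_to_voltage(codes, range_v)
    arg = np.clip(volts, -float(exp_input_limit), float(exp_input_limit))
    return volts, np.exp(arg.astype(np.float32))
```

Note the clip happens on the exp() argument, not the voltage trace itself, which is why `volts_10` above still reaches 10.0 V while both FFT inputs saturate at exp(2).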
def test_recalculate_calibration_preserves_requested_edges(self):
coeffs = recalculate_calibration_c(np.asarray([0.0, 1.0, 0.025], dtype=np.float64), 3.3, 14.3)
y0 = coeffs[0] + coeffs[1] * 3.3 + coeffs[2] * (3.3 ** 2)
y1 = coeffs[0] + coeffs[1] * 14.3 + coeffs[2] * (14.3 ** 2)
self.assertTrue(np.isclose(y0, 3.3))
self.assertTrue(np.isclose(y1, 14.3))
def test_calibrate_freqs_returns_monotonic_axis_and_same_shape(self):
sweep = {"F": np.linspace(3.3, 14.3, 32), "I": np.linspace(-1.0, 1.0, 32)}
calibrated = calibrate_freqs(sweep)
self.assertEqual(calibrated["F"].shape, (32,))
self.assertEqual(calibrated["I"].shape, (32,))
self.assertTrue(np.all(np.diff(calibrated["F"]) >= 0.0))
def test_calibrate_freqs_keeps_complex_payload(self):
sweep = {
"F": np.linspace(3.3, 14.3, 32),
"I": np.exp(1j * np.linspace(0.0, np.pi, 32)).astype(np.complex64),
}
calibrated = calibrate_freqs(sweep)
self.assertEqual(calibrated["F"].shape, (32,))
self.assertEqual(calibrated["I"].shape, (32,))
self.assertTrue(np.iscomplexobj(calibrated["I"]))
self.assertTrue(np.all(np.isfinite(calibrated["I"])))
def test_normalizers_and_envelopes_return_finite_ranges(self):
calib = (np.sin(np.linspace(0.0, 4.0 * np.pi, 64)) * 5.0).astype(np.float32)
raw = calib * 0.75
lower, upper = build_calib_envelopes(calib)
self.assertEqual(lower.shape, calib.shape)
self.assertEqual(upper.shape, calib.shape)
self.assertTrue(np.all(lower <= upper))
self.assertTrue(np.all(np.isfinite(upper)))
self.assertLess(
float(np.mean(np.abs(np.diff(upper, n=2)))),
float(np.mean(np.abs(np.diff(calib, n=2)))),
)
simple = normalize_by_calib(raw, calib + 10.0, norm_type="simple")
projector = normalize_by_calib(raw, calib, norm_type="projector")
self.assertEqual(simple.shape, raw.shape)
self.assertEqual(projector.shape, raw.shape)
self.assertTrue(np.any(np.isfinite(simple)))
self.assertTrue(np.any(np.isfinite(projector)))
def test_file_calibration_envelope_roundtrip_and_division(self):
raw = (np.sin(np.linspace(0.0, 8.0 * np.pi, 128)) * 50.0 + 100.0).astype(np.float32)
envelope = build_calib_envelope(raw)
normalized = normalize_by_envelope(raw, envelope)
resampled = resample_envelope(envelope, 96)
self.assertEqual(envelope.shape, raw.shape)
self.assertEqual(normalized.shape, raw.shape)
self.assertEqual(resampled.shape, (96,))
self.assertTrue(np.any(np.isfinite(normalized)))
self.assertTrue(np.all(np.isfinite(envelope)))
with tempfile.TemporaryDirectory() as tmp_dir:
path = os.path.join(tmp_dir, "calibration_envelope")
saved_path = save_calib_envelope(path, envelope)
loaded = load_calib_envelope(saved_path)
self.assertTrue(saved_path.endswith(".npy"))
self.assertTrue(np.allclose(loaded, envelope))
def test_normalize_by_envelope_adds_small_epsilon_to_zero_denominator(self):
raw = np.asarray([1.0, 2.0, 3.0], dtype=np.float32)
envelope = np.asarray([0.0, 1.0, -1.0], dtype=np.float32)
normalized = normalize_by_envelope(raw, envelope)
self.assertTrue(np.all(np.isfinite(normalized)))
self.assertGreater(normalized[0], 1e8)
self.assertAlmostEqual(float(normalized[1]), 2.0, places=5)
self.assertAlmostEqual(float(normalized[2]), -3.0, places=5)
def test_normalize_by_envelope_supports_complex_input(self):
raw = np.asarray([1.0 + 1.0j, 2.0 - 2.0j], dtype=np.complex64)
envelope = np.asarray([1.0, 2.0], dtype=np.float32)
normalized = normalize_by_envelope(raw, envelope)
self.assertTrue(np.iscomplexobj(normalized))
self.assertTrue(np.all(np.isfinite(normalized)))
self.assertTrue(np.allclose(normalized, np.asarray([1.0 + 1.0j, 1.0 - 1.0j], dtype=np.complex64)))
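The epsilon behaviour tested above admits a one-liner sketch: only exactly-zero denominators are replaced, so negative envelope values divide through unchanged, and complex numerators broadcast naturally (the epsilon magnitude is an assumption consistent with the `> 1e8` assertion):

```python
import numpy as np

def normalize_by_envelope(raw, envelope, eps=np.float32(1e-9)):
    """Divide a sweep by its calibration envelope; zero denominators get a
    tiny epsilon so the result stays finite (eps value is an assumption)."""
    envelope = np.asarray(envelope)
    denom = np.where(envelope == 0.0, eps, envelope)
    return raw / denom
```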
def test_load_calib_envelope_rejects_empty_payload(self):
with tempfile.TemporaryDirectory() as tmp_dir:
path = os.path.join(tmp_dir, "empty.npy")
np.save(path, np.zeros((0,), dtype=np.float32))
with self.assertRaises(ValueError):
load_calib_envelope(path)
def test_complex_calibration_curve_roundtrip(self):
ch1 = np.asarray([1.0, 2.0, 3.0], dtype=np.float32)
ch2 = np.asarray([0.5, -1.0, 4.0], dtype=np.float32)
curve = build_complex_calibration_curve(ch1, ch2)
expected = np.asarray([1.0 + 0.5j, 2.0 - 1.0j, 3.0 + 4.0j], dtype=np.complex64)
self.assertTrue(np.iscomplexobj(curve))
self.assertTrue(np.allclose(curve, expected))
with tempfile.TemporaryDirectory() as tmp_dir:
path = os.path.join(tmp_dir, "complex_calibration")
saved_path = save_complex_calibration(path, curve)
loaded = load_complex_calibration(saved_path)
self.assertTrue(saved_path.endswith(".npy"))
self.assertEqual(loaded.dtype, np.complex64)
self.assertTrue(np.allclose(loaded, expected))
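The curve construction itself is just channel pairing; a sketch matching the expected values above (truncation to the shorter channel is an assumption):

```python
import numpy as np

def build_complex_calibration_curve(ch1, ch2):
    """Pair the two ADC channels into one complex64 curve: ch1 + i*ch2 (sketch)."""
    n = min(len(ch1), len(ch2))
    re = np.asarray(ch1[:n], dtype=np.float32)
    im = np.asarray(ch2[:n], dtype=np.float32)
    return (re + 1j * im).astype(np.complex64)
```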
def test_fit_complex_calibration_to_width_pads_or_trims(self):
calib = np.asarray([1.0 + 1.0j, 2.0 + 2.0j], dtype=np.complex64)
padded = fit_complex_calibration_to_width(calib, 4)
trimmed = fit_complex_calibration_to_width(
np.asarray([1.0 + 1.0j, 2.0 + 2.0j, 3.0 + 3.0j], dtype=np.complex64),
2,
)
self.assertEqual(padded.shape, (4,))
self.assertTrue(np.allclose(padded, np.asarray([1.0 + 1.0j, 2.0 + 2.0j, 1.0 + 0.0j, 1.0 + 0.0j], dtype=np.complex64)))
self.assertEqual(trimmed.shape, (2,))
self.assertTrue(np.allclose(trimmed, np.asarray([1.0 + 1.0j, 2.0 + 2.0j], dtype=np.complex64)))
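Padding with 1+0j (rather than zero) matters here: a calibration curve is a divisor, so the neutral gain leaves the uncovered bins untouched. A sketch consistent with the padded/trimmed expectations above:

```python
import numpy as np

def fit_complex_calibration_to_width(calib, width):
    """Trim a complex calibration curve to `width`, or pad it with the neutral
    gain 1+0j so extra bins pass through unchanged (sketch)."""
    calib = np.asarray(calib, dtype=np.complex64)
    if calib.size >= width:
        return calib[:width]
    pad = np.full(width - calib.size, 1.0 + 0.0j, dtype=np.complex64)
    return np.concatenate([calib, pad])
```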
def test_normalize_by_complex_calibration_handles_zero_and_length_mismatch(self):
signal = np.asarray([2.0 + 2.0j, 4.0 + 0.0j, 3.0 + 3.0j], dtype=np.complex64)
calib = np.asarray([1.0 + 1.0j, 0.0 + 0.0j], dtype=np.complex64)
normalized = normalize_by_complex_calibration(signal, calib)
expected = np.asarray([2.0 + 0.0j, 4.0 + 0.0j, 3.0 + 3.0j], dtype=np.complex64)
self.assertTrue(np.iscomplexobj(normalized))
self.assertTrue(np.all(np.isfinite(normalized)))
self.assertTrue(np.allclose(normalized, expected))
def test_fft_background_roundtrip_and_rejects_non_1d_payload(self):
background = np.asarray([0.5, 1.5, 2.5], dtype=np.float32)
with tempfile.TemporaryDirectory() as tmp_dir:
path = os.path.join(tmp_dir, "fft_background")
saved_path = save_fft_background(path, background)
loaded = load_fft_background(saved_path)
self.assertTrue(saved_path.endswith(".npy"))
self.assertTrue(np.allclose(loaded, background))
invalid_path = os.path.join(tmp_dir, "fft_background_invalid.npy")
np.save(invalid_path, np.zeros((2, 2), dtype=np.float32))
with self.assertRaises(ValueError):
load_fft_background(invalid_path)
def test_subtract_fft_background_clamps_negative_residuals_to_zero(self):
signal = np.asarray([1.0, 2.0, 3.0], dtype=np.float32)
background = np.asarray([1.0, 1.5, 5.0], dtype=np.float32)
subtracted = subtract_fft_background(signal, background)
self.assertTrue(np.allclose(subtracted, np.asarray([0.0, 0.5, 0.0], dtype=np.float32)))
self.assertTrue(np.allclose(subtract_fft_background(signal, signal), 0.0))
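The clamping behaviour tested above reduces to a subtract-and-floor; truncating to the shorter of the two arrays is an assumption for mismatched widths:

```python
import numpy as np

def subtract_fft_background(signal, background):
    """Subtract a stored FFT background and clamp negative residuals to zero,
    so the display never shows negative magnitudes (sketch)."""
    n = min(len(signal), len(background))
    diff = np.asarray(signal[:n], dtype=np.float32) - np.asarray(background[:n], dtype=np.float32)
    return np.maximum(diff, np.float32(0.0))
```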
def test_apply_working_range_crops_sweep_to_selected_band(self):
freqs = np.linspace(3.3, 14.3, 12, dtype=np.float64)
sweep = np.arange(12, dtype=np.float32)
cropped_freqs, cropped_sweep = apply_working_range(freqs, sweep, 5.0, 9.0)
self.assertGreater(cropped_freqs.size, 0)
self.assertEqual(cropped_freqs.shape, cropped_sweep.shape)
self.assertGreaterEqual(float(np.min(cropped_freqs)), 5.0)
self.assertLessEqual(float(np.max(cropped_freqs)), 9.0)
def test_apply_working_range_returns_empty_when_no_points_match(self):
freqs = np.linspace(3.3, 14.3, 12, dtype=np.float64)
sweep = np.arange(12, dtype=np.float32)
cropped_freqs, cropped_sweep = apply_working_range(freqs, sweep, 20.0, 21.0)
self.assertEqual(cropped_freqs.shape, (0,))
self.assertEqual(cropped_sweep.shape, (0,))
def test_apply_working_range_to_aux_curves_uses_same_mask_as_raw_sweep(self):
freqs = np.linspace(3.3, 14.3, 6, dtype=np.float64)
sweep = np.asarray([0.0, 1.0, np.nan, 3.0, 4.0, 5.0], dtype=np.float32)
aux = (
np.asarray([10.0, 11.0, 12.0, 13.0, 14.0, 15.0], dtype=np.float32),
np.asarray([20.0, 21.0, 22.0, 23.0, 24.0, 25.0], dtype=np.float32),
)
cropped_freqs, cropped_sweep = apply_working_range(freqs, sweep, 4.0, 12.5)
cropped_aux = apply_working_range_to_aux_curves(freqs, sweep, aux, 4.0, 12.5)
self.assertIsNotNone(cropped_aux)
self.assertEqual(cropped_aux[0].shape, cropped_freqs.shape)
self.assertEqual(cropped_aux[1].shape, cropped_freqs.shape)
self.assertEqual(cropped_aux[0].shape, cropped_sweep.shape)
self.assertTrue(np.allclose(cropped_aux[0], np.asarray([11.0, 13.0, 14.0], dtype=np.float32)))
self.assertTrue(np.allclose(cropped_aux[1], np.asarray([21.0, 23.0, 24.0], dtype=np.float32)))
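A sketch of the cropping pair consistent with the three tests above. Note the aux test's shape assertions imply the mask is not just the frequency band: NaN sweep samples are dropped by the same mask, which is why both functions must derive it identically:

```python
import numpy as np

def _working_range_mask(freqs, sweep, f_min, f_max):
    # In-band AND finite: non-finite sweep samples are excluded together
    # with out-of-band points (inferred from the aux-curve test above).
    return (freqs >= f_min) & (freqs <= f_max) & np.isfinite(sweep)

def apply_working_range(freqs, sweep, f_min, f_max):
    mask = _working_range_mask(freqs, sweep, f_min, f_max)
    return freqs[mask], sweep[mask]

def apply_working_range_to_aux_curves(freqs, sweep, aux, f_min, f_max):
    if aux is None:
        return None
    mask = _working_range_mask(freqs, sweep, f_min, f_max)
    return aux[0][mask], aux[1][mask]
```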
def test_resolve_visible_aux_curves_obeys_checkbox_state(self):
aux = (
np.asarray([1.0, 2.0], dtype=np.float32),
np.asarray([3.0, 4.0], dtype=np.float32),
)
self.assertIsNone(resolve_visible_aux_curves(aux, enabled=False))
visible = resolve_visible_aux_curves(aux, enabled=True)
self.assertIsNotNone(visible)
self.assertTrue(np.allclose(visible[0], aux[0]))
self.assertTrue(np.allclose(visible[1], aux[1]))
def test_compute_aux_phase_curve_returns_atan2_of_aux_channels(self):
aux = (
np.asarray([1.0, 1.0, -1.0, 0.0], dtype=np.float32),
np.asarray([0.0, 1.0, 1.0, 1.0], dtype=np.float32),
)
phase = compute_aux_phase_curve(aux)
self.assertIsNotNone(phase)
expected = np.asarray([0.0, np.pi / 4.0, 3.0 * np.pi / 4.0, np.pi / 2.0], dtype=np.float32)
self.assertEqual(phase.shape, expected.shape)
self.assertTrue(np.allclose(phase, expected, atol=1e-6))
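The expected values above are exactly the two-argument arctangent of the aux pair, so the helper can be sketched in one call (treating `aux[0]` as the in-phase and `aux[1]` as the quadrature channel):

```python
import numpy as np

def compute_aux_phase_curve(aux):
    """Phase of the raw CH1/CH2 pair, treating (ch1, ch2) as (I, Q) (sketch)."""
    if aux is None:
        return None
    i_curve, q_curve = aux
    return np.arctan2(q_curve, i_curve).astype(np.float32)
```

`arctan2` rather than `arctan(q/i)` is what makes the (-1, 1) sample land in the second quadrant at 3π/4 instead of -π/4.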
def test_decimate_curve_for_display_preserves_small_series(self):
xs = np.linspace(3.3, 14.3, 64, dtype=np.float64)
ys = np.linspace(-1.0, 1.0, 64, dtype=np.float32)
decimated_x, decimated_y = decimate_curve_for_display(xs, ys, max_points=128)
self.assertTrue(np.allclose(decimated_x, xs))
self.assertTrue(np.allclose(decimated_y, ys))
def test_decimate_curve_for_display_limits_points_and_keeps_endpoints(self):
xs = np.linspace(3.3, 14.3, 10000, dtype=np.float64)
ys = np.sin(np.linspace(0.0, 12.0 * np.pi, 10000)).astype(np.float32)
decimated_x, decimated_y = decimate_curve_for_display(xs, ys, max_points=512)
self.assertLessEqual(decimated_x.size, 512)
self.assertEqual(decimated_x.shape, decimated_y.shape)
self.assertAlmostEqual(float(decimated_x[0]), float(xs[0]), places=12)
self.assertAlmostEqual(float(decimated_x[-1]), float(xs[-1]), places=12)
self.assertAlmostEqual(float(decimated_y[0]), float(ys[0]), places=6)
self.assertAlmostEqual(float(decimated_y[-1]), float(ys[-1]), places=6)
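Endpoint-preserving decimation can be sketched with an evenly spaced index grid; `np.unique` deduplicates rounded indices while keeping the first and last samples (the default budget is an assumption):

```python
import numpy as np

def decimate_curve_for_display(xs, ys, max_points=2048):
    """Thin a long curve to at most max_points for plotting, always keeping
    the first and last samples (sketch; default max_points is an assumption)."""
    xs = np.asarray(xs)
    ys = np.asarray(ys)
    if xs.size <= max_points:
        return xs, ys
    idx = np.unique(np.linspace(0, xs.size - 1, max_points).round().astype(np.int64))
    return xs[idx], ys[idx]
```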
def test_coalesce_packets_for_ui_keeps_newest_packets(self):
packets = [
(np.asarray([float(idx)], dtype=np.float32), {"sweep": idx}, None)
for idx in range(6)
]
kept, skipped = coalesce_packets_for_ui(packets, max_packets=2)
self.assertEqual(skipped, 4)
self.assertEqual(len(kept), 2)
self.assertEqual(int(kept[0][1]["sweep"]), 4)
self.assertEqual(int(kept[1][1]["sweep"]), 5)
def test_coalesce_packets_for_ui_never_returns_empty_for_non_empty_input(self):
packets = [
(np.asarray([1.0], dtype=np.float32), {"sweep": 1}, None),
]
kept, skipped = coalesce_packets_for_ui(packets, max_packets=0)
self.assertEqual(skipped, 0)
self.assertEqual(len(kept), 1)
self.assertEqual(int(kept[0][1]["sweep"]), 1)
def test_coalesce_packets_for_ui_switches_to_latest_only_on_large_backlog(self):
packets = [
(np.asarray([float(idx)], dtype=np.float32), {"sweep": idx}, None)
for idx in range(40)
]
kept, skipped = coalesce_packets_for_ui(packets, max_packets=8, backlog_packets=40)
self.assertEqual(skipped, 39)
self.assertEqual(len(kept), 1)
self.assertEqual(int(kept[0][1]["sweep"]), 39)
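The coalescing policy the three tests above describe (keep the newest few, never return empty, collapse to a single latest packet on a large backlog) can be sketched as; the default thresholds are assumptions:

```python
def coalesce_packets_for_ui(packets, max_packets=8, backlog_packets=32):
    """Keep only the newest packets for the UI thread. On a very large backlog,
    fall back to the single latest packet (defaults are assumed placeholders)."""
    if not packets:
        return [], 0
    if len(packets) >= backlog_packets:
        return [packets[-1]], len(packets) - 1
    keep = max(1, int(max_packets))        # never drop everything
    kept = list(packets[-keep:])
    return kept, len(packets) - len(kept)
```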
def test_resolve_heavy_refresh_stride_increases_with_backlog(self):
self.assertEqual(resolve_heavy_refresh_stride(0, max_packets=8), 1)
self.assertEqual(resolve_heavy_refresh_stride(20, max_packets=8), 2)
self.assertEqual(resolve_heavy_refresh_stride(40, max_packets=8), 4)
def test_sanitize_curve_data_for_display_rejects_fully_nonfinite_series(self):
xs, ys = sanitize_curve_data_for_display(
np.asarray([np.nan, np.nan], dtype=np.float64),
np.asarray([np.nan, np.nan], dtype=np.float32),
)
self.assertEqual(xs.shape, (0,))
self.assertEqual(ys.shape, (0,))
def test_sanitize_image_for_display_rejects_fully_nonfinite_frame(self):
data = sanitize_image_for_display(np.full((4, 4), np.nan, dtype=np.float32))
self.assertIsNone(data)
def test_set_image_rect_if_ready_skips_uninitialized_image(self):
class _DummyImageItem:
def __init__(self):
self.calls = 0
def width(self):
return None
def height(self):
return None
def setRect(self, *_args):
self.calls += 1
image_item = _DummyImageItem()
applied = set_image_rect_if_ready(image_item, 0.0, 0.0, 10.0, 1.0)
self.assertFalse(applied)
self.assertEqual(image_item.calls, 0)
def test_resolve_axis_bounds_rejects_nonfinite_ranges(self):
bounds = resolve_axis_bounds(np.asarray([np.nan, np.inf], dtype=np.float64))
self.assertIsNone(bounds)
def test_resolve_distance_cut_start_interpolates_with_percent(self):
axis = np.asarray([0.0, 1.0, 2.0, 3.0], dtype=np.float64)
cut_start = resolve_distance_cut_start(axis, 50.0)
self.assertIsNotNone(cut_start)
self.assertAlmostEqual(float(cut_start), 1.5, places=6)
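"Interpolates with percent" here means a linear position along the axis span, not a sample index; a sketch (the finite-filtering is an assumption, consistent with the non-finite-axis test above):

```python
import numpy as np

def resolve_distance_cut_start(axis, percent):
    """Interpolate a cut-start distance at `percent` of the axis span (sketch)."""
    axis = np.asarray(axis, dtype=np.float64)
    finite = axis[np.isfinite(axis)]
    if finite.size == 0:
        return None
    lo, hi = float(finite.min()), float(finite.max())
    return lo + (hi - lo) * (float(percent) / 100.0)
```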
def test_apply_distance_cut_to_axis_keeps_farthest_point_for_extreme_cut(self):
axis = np.asarray([0.0, 1.0, 2.0, 3.0], dtype=np.float64)
cut_axis, keep_mask = apply_distance_cut_to_axis(axis, 10.0)
self.assertEqual(cut_axis.shape, (1,))
self.assertEqual(keep_mask.shape, axis.shape)
self.assertTrue(bool(keep_mask[-1]))
self.assertAlmostEqual(float(cut_axis[0]), 3.0, places=6)
def test_resolve_initial_window_size_stays_within_small_screen(self):
width, height = resolve_initial_window_size(800, 480)
self.assertLessEqual(width, 800)
self.assertLessEqual(height, 480)
self.assertGreaterEqual(width, 640)
self.assertGreaterEqual(height, 420)
def test_build_main_window_layout_uses_splitter_and_scroll_area(self):
os.environ.setdefault("QT_QPA_PLATFORM", "offscreen")
try:
from PyQt5 import QtCore, QtWidgets
except Exception as exc: # pragma: no cover - environment-dependent
self.skipTest(f"Qt unavailable: {exc}")
app = QtWidgets.QApplication.instance() or QtWidgets.QApplication([])
main_window = QtWidgets.QWidget()
try:
_layout, splitter, _plot_layout, settings_widget, settings_layout, settings_scroll = build_main_window_layout(
QtCore,
QtWidgets,
main_window,
)
self.assertIsInstance(splitter, QtWidgets.QSplitter)
self.assertIsInstance(settings_scroll, QtWidgets.QScrollArea)
self.assertIs(settings_scroll.widget(), settings_widget)
self.assertIsInstance(settings_layout, QtWidgets.QVBoxLayout)
finally:
main_window.close()
def test_background_subtracted_bscan_levels_ignore_zero_floor(self):
disp_fft_lin = np.zeros((4, 8), dtype=np.float32)
disp_fft_lin[1, 2:6] = np.asarray([0.05, 0.1, 0.5, 2.0], dtype=np.float32)
disp_fft_lin[2, 1:6] = np.asarray([0.08, 0.2, 0.7, 3.0, 9.0], dtype=np.float32)
disp_fft = fft_mag_to_db(disp_fft_lin)
levels = compute_background_subtracted_bscan_levels(disp_fft_lin, disp_fft)
self.assertIsNotNone(levels)
positive_vals = disp_fft[disp_fft_lin > 0.0]
self.assertAlmostEqual(levels[0], float(np.nanpercentile(positive_vals, 15.0)), places=5)
self.assertAlmostEqual(levels[1], float(np.nanpercentile(positive_vals, 99.7)), places=5)
zero_floor = disp_fft[disp_fft_lin == 0.0]
self.assertLess(float(np.nanmax(zero_floor)), levels[0])
def test_background_subtracted_bscan_levels_fallback_when_residuals_too_sparse(self):
disp_fft_lin = np.zeros((3, 4), dtype=np.float32)
disp_fft_lin[1, 2] = 1.0
disp_fft = fft_mag_to_db(disp_fft_lin)
levels = compute_background_subtracted_bscan_levels(disp_fft_lin, disp_fft)
self.assertIsNone(levels)
def test_fft_helpers_return_expected_shapes(self):
sweep = np.sin(np.linspace(0.0, 4.0 * np.pi, 128)).astype(np.float32)
freqs = np.linspace(3.3, 14.3, 128, dtype=np.float64)
mag = compute_fft_mag_row(sweep, freqs, 513)
row = compute_fft_row(sweep, freqs, 513)
axis = compute_distance_axis(freqs, 513)
self.assertEqual(mag.shape, (513,))
self.assertEqual(row.shape, (513,))
self.assertEqual(axis.shape, (513,))
self.assertTrue(np.all(np.diff(axis) >= 0.0))
def test_symmetric_ifft_spectrum_has_zero_gap_and_mirrored_band(self):
sweep = np.linspace(1.0, 2.0, 128, dtype=np.float32)
freqs = np.linspace(4.0, 10.0, 128, dtype=np.float64)
spectrum = build_symmetric_ifft_spectrum(sweep, freqs, fft_len=FFT_LEN)
self.assertIsNotNone(spectrum)
freq_axis = np.linspace(-10.0, 10.0, FFT_LEN, dtype=np.float64)
neg_idx_all = np.flatnonzero(freq_axis <= (-4.0))
pos_idx_all = np.flatnonzero(freq_axis >= 4.0)
band_len = int(min(neg_idx_all.size, pos_idx_all.size))
neg_idx = neg_idx_all[:band_len]
pos_idx = pos_idx_all[-band_len:]
zero_mask = (freq_axis > (-4.0)) & (freq_axis < 4.0)
self.assertTrue(np.allclose(spectrum[zero_mask], 0.0))
self.assertTrue(np.allclose(spectrum[neg_idx], spectrum[pos_idx][::-1]))
def test_positive_only_centered_spectrum_keeps_zeros_until_positive_min(self):
sweep = np.linspace(1.0, 2.0, 128, dtype=np.float32)
freqs = np.linspace(4.0, 10.0, 128, dtype=np.float64)
spectrum = build_positive_only_centered_ifft_spectrum(sweep, freqs, fft_len=FFT_LEN)
self.assertIsNotNone(spectrum)
freq_axis = np.linspace(-10.0, 10.0, FFT_LEN, dtype=np.float64)
zero_mask = freq_axis < 4.0
pos_idx = np.flatnonzero(freq_axis >= 4.0)
self.assertTrue(np.allclose(spectrum[zero_mask], 0.0))
self.assertTrue(np.any(np.abs(spectrum[pos_idx]) > 0.0))
def test_positive_only_exact_spectrum_uses_direct_index_insertion_without_window(self):
sweep = np.asarray([1.0, 2.0, 3.0], dtype=np.float32)
freqs = np.asarray([4.0, 5.0, 6.0], dtype=np.float64)
spectrum = build_positive_only_exact_centered_ifft_spectrum(sweep, freqs)
self.assertIsNotNone(spectrum)
df = (6.0 - 4.0) / 2.0
f_shift = np.arange(-6.0, 6.0 + (0.5 * df), df, dtype=np.float64)
idx = np.round((freqs - f_shift[0]) / df).astype(np.int64)
zero_mask = (f_shift > -6.0) & (f_shift < 4.0)
self.assertEqual(int(spectrum.size), int(f_shift.size))
self.assertTrue(np.allclose(spectrum[zero_mask], 0.0))
self.assertTrue(np.allclose(spectrum[idx], sweep))
def test_complex_symmetric_ifft_spectrum_uses_conjugate_mirror(self):
sweep = np.exp(1j * np.linspace(0.0, np.pi, 128)).astype(np.complex64)
freqs = np.linspace(4.0, 10.0, 128, dtype=np.float64)
spectrum = build_symmetric_ifft_spectrum(sweep, freqs, fft_len=FFT_LEN)
self.assertIsNotNone(spectrum)
freq_axis = np.linspace(-10.0, 10.0, FFT_LEN, dtype=np.float64)
neg_idx_all = np.flatnonzero(freq_axis <= (-4.0))
pos_idx_all = np.flatnonzero(freq_axis >= 4.0)
band_len = int(min(neg_idx_all.size, pos_idx_all.size))
neg_idx = neg_idx_all[:band_len]
pos_idx = pos_idx_all[-band_len:]
self.assertTrue(np.iscomplexobj(spectrum))
self.assertTrue(np.allclose(spectrum[neg_idx], np.conj(spectrum[pos_idx][::-1])))
def test_compute_fft_helpers_accept_complex_input(self):
sweep = np.exp(1j * np.linspace(0.0, 2.0 * np.pi, 128)).astype(np.complex64)
freqs = np.linspace(3.3, 14.3, 128, dtype=np.float64)
complex_row = compute_fft_complex_row(sweep, freqs, 513, mode="positive_only")
mag = compute_fft_mag_row(sweep, freqs, 513, mode="positive_only")
row = compute_fft_row(sweep, freqs, 513, mode="positive_only")
self.assertEqual(complex_row.shape, (513,))
self.assertTrue(np.iscomplexobj(complex_row))
self.assertEqual(mag.shape, (513,))
self.assertEqual(row.shape, (513,))
self.assertTrue(np.allclose(mag, np.abs(complex_row), equal_nan=True))
self.assertTrue(np.any(np.isfinite(mag)))
self.assertTrue(np.any(np.isfinite(row)))
def test_compute_fft_complex_row_positive_only_exact_matches_manual_ifftshift_ifft(self):
sweep = np.asarray([1.0 + 1.0j, 2.0 + 0.0j, 3.0 - 1.0j], dtype=np.complex64)
freqs = np.asarray([4.0, 5.0, 6.0], dtype=np.float64)
bins = 16
row = compute_fft_complex_row(sweep, freqs, bins, mode="positive_only_exact")
df = (6.0 - 4.0) / 2.0
f_shift = np.arange(-6.0, 6.0 + (0.5 * df), df, dtype=np.float64)
manual_shift = np.zeros((f_shift.size,), dtype=np.complex64)
idx = np.round((freqs - f_shift[0]) / df).astype(np.int64)
manual_shift[idx] = sweep
manual_ifft = np.fft.ifft(np.fft.ifftshift(manual_shift))
expected = np.full((bins,), np.nan + 0j, dtype=np.complex64)
expected[: manual_ifft.size] = np.asarray(manual_ifft, dtype=np.complex64)
self.assertEqual(row.shape, (bins,))
self.assertTrue(np.allclose(row, expected, equal_nan=True))
def test_positive_only_exact_distance_axis_uses_exact_grid_geometry(self):
freqs = np.asarray([4.0, 5.0, 6.0], dtype=np.float64)
bins = 8
axis = compute_distance_axis(freqs, bins, mode="positive_only_exact")
# With a small bins budget the exact-mode grid is downsampled so that
# the internal IFFT length does not exceed the number of visible bins.
df_hz = 2e9
n_shift = int(np.arange(-6.0, 6.0 + 1.0, 2.0, dtype=np.float64).size)
expected_step = C_M_S / (2.0 * n_shift * df_hz)
expected = np.arange(bins, dtype=np.float64) * expected_step
self.assertEqual(axis.shape, (bins,))
self.assertTrue(np.allclose(axis, expected))
def test_positive_only_exact_mode_remains_stable_when_input_points_double(self):
bins = FFT_LEN // 2 + 1
tau_s = 45e-9
freqs_400 = np.linspace(3.3, 14.3, 400, dtype=np.float64)
freqs_800 = np.linspace(3.3, 14.3, 800, dtype=np.float64)
sweep_400 = np.exp(-1j * 2.0 * np.pi * freqs_400 * 1e9 * tau_s).astype(np.complex64)
sweep_800 = np.exp(-1j * 2.0 * np.pi * freqs_800 * 1e9 * tau_s).astype(np.complex64)
mag_400 = compute_fft_mag_row(sweep_400, freqs_400, bins, mode="positive_only_exact")
mag_800 = compute_fft_mag_row(sweep_800, freqs_800, bins, mode="positive_only_exact")
self.assertEqual(mag_400.shape, mag_800.shape)
finite = np.isfinite(mag_400) & np.isfinite(mag_800)
self.assertGreater(int(np.count_nonzero(finite)), int(0.95 * bins))
idx_400 = int(np.nanargmax(mag_400))
idx_800 = int(np.nanargmax(mag_800))
peak_400 = float(np.nanmax(mag_400))
peak_800 = float(np.nanmax(mag_800))
self.assertLess(abs(idx_400 - idx_800), 64)
self.assertGreater(idx_400, 8)
self.assertGreater(idx_800, 8)
self.assertLess(idx_400, bins - 8)
self.assertLess(idx_800, bins - 8)
self.assertGreater(peak_400, 0.05)
self.assertGreater(peak_800, 0.05)
def test_resolve_visible_fft_curves_handles_complex_mode(self):
complex_row = np.asarray([1.0 + 2.0j, -3.0 + 4.0j], dtype=np.complex64)
mag = np.abs(complex_row).astype(np.float32)
abs_curve, real_curve, imag_curve = resolve_visible_fft_curves(
complex_row,
mag,
complex_mode=True,
show_abs=True,
show_real=False,
show_imag=True,
)
self.assertTrue(np.allclose(abs_curve, mag))
self.assertIsNone(real_curve)
self.assertTrue(np.allclose(imag_curve, np.asarray([2.0, 4.0], dtype=np.float32)))
def test_resolve_visible_fft_curves_preserves_legacy_abs_mode(self):
mag = np.asarray([1.0, 2.0, 3.0], dtype=np.float32)
abs_curve, real_curve, imag_curve = resolve_visible_fft_curves(
None,
mag,
complex_mode=False,
show_abs=True,
show_real=True,
show_imag=True,
)
self.assertTrue(np.allclose(abs_curve, mag))
self.assertIsNone(real_curve)
self.assertIsNone(imag_curve)
def test_symmetric_distance_axis_uses_windowed_frequency_bounds(self):
freqs = np.linspace(4.0, 10.0, 128, dtype=np.float64)
axis = compute_distance_axis(freqs, 513, mode="symmetric")
df_hz = (2.0 * 10.0 / max(1, FFT_LEN - 1)) * 1e9
expected_step = 299_792_458.0 / (2.0 * FFT_LEN * df_hz)
self.assertEqual(axis.shape, (513,))
self.assertTrue(np.all(np.diff(axis) >= 0.0))
self.assertAlmostEqual(float(axis[1] - axis[0]), expected_step, places=15)
def test_peak_helpers_find_reference_and_peak_boxes(self):
xs = np.linspace(0.0, 10.0, 200)
ys = np.exp(-((xs - 5.0) ** 2) / 0.4) * 10.0 + 1.0
ref = rolling_median_ref(xs, ys, 2.0)
peaks = find_top_peaks_over_ref(xs, ys, ref, top_n=3)
width = find_peak_width_markers(xs, ys)
self.assertEqual(ref.shape, ys.shape)
self.assertEqual(len(peaks), 1)
self.assertGreater(peaks[0]["x"], 4.0)
self.assertLess(peaks[0]["x"], 6.0)
self.assertIsNotNone(width)
self.assertGreater(width["width"], 0.0)
if __name__ == "__main__":
unittest.main()
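The symmetric-spectrum tests above check two invariants: a zero gap between the band edges around DC, and a conjugate-mirrored negative band. A minimal sketch of that construction, using a hypothetical helper rather than the project's `build_symmetric_ifft_spectrum`:

```python
import numpy as np

def symmetric_spectrum(band, band_freqs, fft_len, f_max):
    """Place a measured band on a centered axis and mirror it conjugately."""
    axis = np.linspace(-f_max, f_max, fft_len)
    spec = np.zeros(fft_len, dtype=np.complex128)
    f_lo, f_hi = band_freqs[0], band_freqs[-1]
    pos = np.flatnonzero((axis >= f_lo) & (axis <= f_hi))
    # resample the measured band onto the in-band positive bins
    spec[pos] = np.interp(axis[pos], band_freqs, band)
    neg = np.flatnonzero((axis <= -f_lo) & (axis >= -f_hi))
    n = min(pos.size, neg.size)
    # the most negative bin mirrors the highest positive bin, conjugated
    spec[neg[:n]] = np.conj(spec[pos[-n:]][::-1])
    return spec
```

Bins strictly inside (-f_lo, f_lo) are never written, which yields the zero gap the tests assert; the conjugate mirror is what makes the test pass for both the real and the complex sweep variants.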

tests/test_ring_buffer.py Normal file

@@ -0,0 +1,176 @@
from __future__ import annotations
import numpy as np
import unittest
import warnings
from unittest.mock import patch
from rfg_adc_plotter.processing.fft import compute_fft_mag_row
from rfg_adc_plotter.state.ring_buffer import RingBuffer
class RingBufferTests(unittest.TestCase):
def test_ring_buffer_initializes_on_first_push(self):
ring = RingBuffer(max_sweeps=4)
sweep = np.linspace(-1.0, 1.0, 64, dtype=np.float32)
ring.push(sweep, np.linspace(3.3, 14.3, 64))
self.assertIsNotNone(ring.ring)
self.assertIsNotNone(ring.ring_fft)
self.assertIsNotNone(ring.ring_time)
self.assertIsNotNone(ring.distance_axis)
self.assertIsNotNone(ring.get_last_fft_linear())
self.assertIsNotNone(ring.last_fft_db)
self.assertEqual(ring.ring.shape[0], 4)
self.assertEqual(ring.ring_fft.shape, (4, ring.fft_bins))
def test_ring_buffer_reallocates_when_sweep_width_grows(self):
ring = RingBuffer(max_sweeps=3)
ring.push(np.ones((32,), dtype=np.float32), np.linspace(3.3, 14.3, 32))
first_width = ring.width
ring.push(np.ones((2048,), dtype=np.float32), np.linspace(3.3, 14.3, 2048))
self.assertGreater(ring.width, first_width)
self.assertIsNotNone(ring.ring)
self.assertEqual(ring.ring.shape, (3, ring.width))
def test_ring_buffer_tracks_latest_fft_and_display_arrays(self):
ring = RingBuffer(max_sweeps=2)
ring.push(np.linspace(0.0, 1.0, 64, dtype=np.float32), np.linspace(3.3, 14.3, 64))
ring.push(np.linspace(1.0, 0.0, 64, dtype=np.float32), np.linspace(3.3, 14.3, 64))
raw = ring.get_display_raw()
fft = ring.get_display_fft_linear()
self.assertEqual(raw.shape[1], 2)
self.assertEqual(fft.shape[1], 2)
self.assertIsNotNone(ring.last_fft_db)
self.assertEqual(ring.last_fft_db.shape, (ring.fft_bins,))
def test_ring_buffer_can_return_decimated_display_raw(self):
ring = RingBuffer(max_sweeps=3)
sweep_a = np.linspace(0.0, 1.0, 4096, dtype=np.float32)
sweep_b = np.linspace(1.0, 2.0, 4096, dtype=np.float32)
sweep_c = np.linspace(2.0, 3.0, 4096, dtype=np.float32)
freqs = np.linspace(3.3, 14.3, 4096, dtype=np.float64)
ring.push(sweep_a, freqs)
ring.push(sweep_b, freqs)
ring.push(sweep_c, freqs)
raw = ring.get_display_raw_decimated(256)
self.assertEqual(raw.shape, (256, 3))
self.assertAlmostEqual(float(raw[0, -1]), float(sweep_c[0]), places=6)
self.assertAlmostEqual(float(raw[-1, -1]), float(sweep_c[-1]), places=6)
def test_ring_buffer_can_switch_fft_mode_and_rebuild_fft_rows(self):
ring = RingBuffer(max_sweeps=2)
sweep = np.linspace(0.0, 1.0, 64, dtype=np.float32)
freqs = np.linspace(3.3, 14.3, 64, dtype=np.float64)
ring.push(sweep, freqs)
fft_before = ring.last_fft_db.copy()
axis_before = ring.distance_axis.copy()
changed = ring.set_symmetric_fft_enabled(False)
self.assertTrue(changed)
self.assertFalse(ring.fft_symmetric)
self.assertEqual(ring.get_display_raw().shape[1], 2)
self.assertIsNotNone(ring.get_last_fft_linear())
self.assertEqual(ring.last_fft_db.shape, fft_before.shape)
self.assertFalse(np.allclose(ring.last_fft_db, fft_before))
self.assertFalse(np.allclose(ring.distance_axis, axis_before))
def test_ring_buffer_can_switch_to_positive_only_fft_mode(self):
ring = RingBuffer(max_sweeps=2)
sweep = np.linspace(0.0, 1.0, 64, dtype=np.float32)
freqs = np.linspace(3.3, 14.3, 64, dtype=np.float64)
ring.push(sweep, freqs)
changed = ring.set_fft_mode("positive_only")
self.assertTrue(changed)
self.assertEqual(ring.fft_mode, "positive_only")
self.assertIsNotNone(ring.last_fft_db)
self.assertEqual(ring.last_fft_db.shape, (ring.fft_bins,))
self.assertIsNotNone(ring.distance_axis)
def test_ring_buffer_can_switch_to_positive_only_exact_fft_mode(self):
ring = RingBuffer(max_sweeps=2)
sweep = np.linspace(0.0, 1.0, 64, dtype=np.float32)
freqs = np.linspace(3.3, 14.3, 64, dtype=np.float64)
ring.push(sweep, freqs)
changed = ring.set_fft_mode("positive_only_exact")
self.assertTrue(changed)
self.assertEqual(ring.fft_mode, "positive_only_exact")
self.assertIsNotNone(ring.last_fft_db)
self.assertEqual(ring.last_fft_db.shape, (ring.fft_bins,))
self.assertIsNotNone(ring.distance_axis)
def test_ring_buffer_rebuilds_fft_from_complex_input(self):
ring = RingBuffer(max_sweeps=2)
freqs = np.linspace(3.3, 14.3, 64, dtype=np.float64)
complex_input = np.exp(1j * np.linspace(0.0, 2.0 * np.pi, 64)).astype(np.complex64)
display_sweep = np.abs(complex_input).astype(np.float32)
ring.push(display_sweep, freqs, fft_input=complex_input)
ring.set_fft_mode("direct")
expected = compute_fft_mag_row(complex_input, freqs, ring.fft_bins, mode="direct")
self.assertTrue(np.allclose(ring.get_last_fft_linear(), expected))
self.assertFalse(np.iscomplexobj(ring.get_display_fft_linear()))
self.assertTrue(np.allclose(ring.get_display_raw()[: display_sweep.size, -1], display_sweep))
def test_ring_buffer_reset_clears_cached_history(self):
ring = RingBuffer(max_sweeps=2)
ring.push(np.linspace(0.0, 1.0, 64, dtype=np.float32), np.linspace(4.0, 10.0, 64))
ring.reset()
self.assertIsNone(ring.ring)
self.assertIsNone(ring.ring_fft)
self.assertIsNone(ring.distance_axis)
self.assertIsNone(ring.last_fft_db)
self.assertEqual(ring.width, 0)
self.assertEqual(ring.head, 0)
def test_ring_buffer_push_ignores_all_nan_fft_without_runtime_warning(self):
ring = RingBuffer(max_sweeps=2)
freqs = np.linspace(3.3, 14.3, 64, dtype=np.float64)
ring.push(np.linspace(0.0, 1.0, 64, dtype=np.float32), freqs)
fft_before = ring.last_fft_db.copy()
y_min_before = ring.y_min_fft
y_max_before = ring.y_max_fft
with warnings.catch_warnings():
warnings.simplefilter("error", RuntimeWarning)
with patch(
"rfg_adc_plotter.state.ring_buffer.compute_fft_mag_row",
return_value=np.full((ring.fft_bins,), np.nan, dtype=np.float32),
):
ring.push(np.linspace(1.0, 2.0, 64, dtype=np.float32), freqs)
self.assertFalse(ring.last_push_fft_valid)
self.assertTrue(np.allclose(ring.last_fft_db, fft_before))
self.assertEqual(ring.y_min_fft, y_min_before)
self.assertEqual(ring.y_max_fft, y_max_before)
def test_ring_buffer_set_fft_mode_ignores_all_nan_rebuild_without_runtime_warning(self):
ring = RingBuffer(max_sweeps=2)
freqs = np.linspace(3.3, 14.3, 64, dtype=np.float64)
ring.push(np.linspace(0.0, 1.0, 64, dtype=np.float32), freqs)
fft_before = ring.last_fft_db.copy()
with warnings.catch_warnings():
warnings.simplefilter("error", RuntimeWarning)
with patch(
"rfg_adc_plotter.state.ring_buffer.compute_fft_mag_row",
return_value=np.full((ring.fft_bins,), np.nan, dtype=np.float32),
):
ring.set_fft_mode("direct")
self.assertFalse(ring.last_push_fft_valid)
self.assertTrue(np.allclose(ring.last_fft_db, fft_before))
self.assertEqual(ring.fft_mode, "direct")
if __name__ == "__main__":
unittest.main()
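The lazy-allocation and reallocation behaviour exercised above can be sketched with a toy ring. This is an illustrative stand-in, not the project's `RingBuffer`:

```python
import numpy as np

class MiniRing:
    """Fixed number of sweep rows; `head` is the slot the next push overwrites."""

    def __init__(self, max_sweeps):
        self.max_sweeps = max_sweeps
        self.ring = None   # allocated lazily on the first push
        self.head = 0
        self.width = 0

    def push(self, sweep):
        sweep = np.asarray(sweep, dtype=np.float32)
        if self.ring is None or sweep.size > self.width:
            # (re)allocate on first push, or when a wider sweep arrives
            self.width = sweep.size
            self.ring = np.zeros((self.max_sweeps, self.width), dtype=np.float32)
            self.head = 0
        self.ring[self.head, : sweep.size] = sweep
        self.head = (self.head + 1) % self.max_sweeps
```

Growing the sweep width discards history by reallocating, mirroring the reallocation test above; the real class additionally keeps per-row FFT buffers in sync.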


@@ -1,81 +0,0 @@
import numpy as np
from rfg_adc_plotter.processing.fourier import compute_ifft_profile_from_sweep
from rfg_adc_plotter.state.ring_buffer import RingBuffer
def test_ring_buffer_allocates_fft_buffers_from_first_push():
ring = RingBuffer(max_sweeps=4)
ring.ensure_init(64)
sweep = np.linspace(-1.0, 1.0, 64, dtype=np.float32)
depth_expected, vals_expected = compute_ifft_profile_from_sweep(sweep, complex_mode="arccos")
ring.push(sweep)
assert ring.ring_fft is not None
assert ring.fft_depth_axis_m is not None
assert ring.last_fft_vals is not None
assert ring.fft_bins == ring.ring_fft.shape[1]
assert ring.fft_bins == ring.fft_depth_axis_m.size
assert ring.fft_bins == ring.last_fft_vals.size
assert ring.fft_bins == min(depth_expected.size, vals_expected.size)
# Legacy alias kept for compatibility with existing GUI code paths.
assert ring.fft_time_axis is ring.fft_depth_axis_m
def test_ring_buffer_reallocates_fft_buffers_when_ifft_length_changes():
ring = RingBuffer(max_sweeps=4)
ring.ensure_init(512)
ring.push(np.linspace(-1.0, 1.0, 64, dtype=np.float32))
first_bins = ring.fft_bins
first_shape = None if ring.ring_fft is None else ring.ring_fft.shape
ring.push(np.linspace(-1.0, 1.0, 512, dtype=np.float32))
second_bins = ring.fft_bins
second_shape = None if ring.ring_fft is None else ring.ring_fft.shape
assert ring.ring is not None  # the raw ring is preserved
assert first_shape is not None and second_shape is not None
assert first_bins != second_bins
assert second_shape == (ring.max_sweeps, second_bins)
assert ring.fft_depth_axis_m is not None
assert ring.fft_depth_axis_m.size == second_bins
def test_ring_buffer_mode_switch_resets_fft_buffers_only():
ring = RingBuffer(max_sweeps=4)
ring.ensure_init(128)
ring.push(np.linspace(-1.0, 1.0, 128, dtype=np.float32))
assert ring.ring is not None
assert ring.ring_fft is not None
raw_before = ring.ring.copy()
changed = ring.set_fft_complex_mode("diff")
assert changed is True
assert ring.fft_complex_mode == "diff"
assert ring.ring is not None
assert np.array_equal(ring.ring, raw_before, equal_nan=True)
assert ring.ring_fft is None
assert ring.fft_depth_axis_m is None
assert ring.last_fft_vals is None
assert ring.fft_bins == 0
ring.push(np.linspace(-1.0, 1.0, 128, dtype=np.float32))
assert ring.ring_fft is not None
assert ring.fft_depth_axis_m is not None
assert ring.last_fft_vals is not None
def test_ring_buffer_short_sweeps_keep_fft_profile_well_formed():
for n in (1, 2, 3):
ring = RingBuffer(max_sweeps=4)
ring.ensure_init(n)
ring.push(np.linspace(-1.0, 1.0, n, dtype=np.float32))
assert ring.fft_depth_axis_m is not None
assert ring.last_fft_vals is not None
assert ring.fft_depth_axis_m.dtype == np.float32
assert ring.last_fft_vals.dtype == np.float32
assert ring.fft_depth_axis_m.size == ring.last_fft_vals.size


@@ -0,0 +1,416 @@
from __future__ import annotations
import math
import unittest
from rfg_adc_plotter.io.sweep_parser_core import (
AsciiSweepParser,
ComplexAsciiSweepParser,
LegacyBinaryParser,
LogScale16BitX2BinaryParser,
LogScaleBinaryParser32,
ParserTestStreamParser,
PointEvent,
StartEvent,
SweepAssembler,
log_pair_to_sweep,
)
def _u16le(word: int) -> bytes:
w = int(word) & 0xFFFF
return bytes((w & 0xFF, (w >> 8) & 0xFF))
def _pack_legacy_start(ch: int) -> bytes:
return b"\xff\xff" * 3 + bytes((0x0A, int(ch) & 0xFF))
def _pack_legacy_point(ch: int, step: int, value_i32: int) -> bytes:
value = int(value_i32) & 0xFFFF_FFFF
return b"".join(
[
_u16le(step),
_u16le((value >> 16) & 0xFFFF),
_u16le(value & 0xFFFF),
bytes((0x0A, int(ch) & 0xFF)),
]
)
def _pack_log_start(ch: int) -> bytes:
return b"\xff\xff" * 5 + bytes((0x0A, int(ch) & 0xFF))
def _pack_log_point(step: int, avg1: int, avg2: int, ch: int = 0) -> bytes:
a1 = int(avg1) & 0xFFFF_FFFF
a2 = int(avg2) & 0xFFFF_FFFF
return b"".join(
[
_u16le(step),
_u16le((a1 >> 16) & 0xFFFF),
_u16le(a1 & 0xFFFF),
_u16le((a2 >> 16) & 0xFFFF),
_u16le(a2 & 0xFFFF),
bytes((0x0A, int(ch) & 0xFF)),
]
)
def _pack_log16_start(ch: int) -> bytes:
return b"\xff\xff" * 3 + bytes((0x0A, int(ch) & 0xFF))
def _pack_log16_point(step: int, avg1: int, avg2: int) -> bytes:
return b"".join(
[
_u16le(step),
_u16le(avg1),
_u16le(avg2),
_u16le(0xFFFF),
]
)
def _pack_tty_start() -> bytes:
return b"".join([_u16le(0x000A), _u16le(0xFFFF), _u16le(0xFFFF), _u16le(0xFFFF)])
def _pack_tty_point(step: int, ch1: int, ch2: int) -> bytes:
return b"".join(
[
_u16le(0x000A),
_u16le(step),
_u16le(ch1),
_u16le(ch2),
]
)
def _pack_logdet_point(step: int, value: int) -> bytes:
return b"".join(
[
_u16le(0x001A),
_u16le(step),
_u16le(value),
_u16le(0x0000),
]
)
class SweepParserCoreTests(unittest.TestCase):
def test_ascii_parser_emits_start_and_points(self):
parser = AsciiSweepParser()
events = parser.feed(b"Sweep_start\ns 1 2 -3\ns2 4 5\n")
self.assertIsInstance(events[0], StartEvent)
self.assertIsInstance(events[1], PointEvent)
self.assertIsInstance(events[2], PointEvent)
self.assertEqual(events[1].ch, 1)
self.assertEqual(events[1].x, 2)
self.assertEqual(events[1].y, -3.0)
self.assertEqual(events[2].ch, 2)
self.assertEqual(events[2].x, 4)
self.assertEqual(events[2].y, 5.0)
def test_legacy_binary_parser_resynchronizes_after_garbage(self):
parser = LegacyBinaryParser()
stream = b"\x00junk" + _pack_legacy_start(3) + _pack_legacy_point(3, 1, -2)
events = parser.feed(stream)
self.assertIsInstance(events[0], StartEvent)
self.assertEqual(events[0].ch, 3)
self.assertIsInstance(events[1], PointEvent)
self.assertEqual(events[1].ch, 3)
self.assertEqual(events[1].x, 1)
self.assertEqual(events[1].y, -2.0)
def test_legacy_binary_parser_detects_new_sweep_on_step_reset(self):
parser = LegacyBinaryParser()
stream = b"".join(
[
_pack_legacy_point(3, 1, -2),
_pack_legacy_point(3, 2, -3),
_pack_legacy_point(3, 1, -4),
]
)
events = parser.feed(stream)
self.assertIsInstance(events[0], PointEvent)
self.assertIsInstance(events[1], PointEvent)
self.assertIsInstance(events[2], StartEvent)
self.assertEqual(events[2].ch, 3)
self.assertIsInstance(events[3], PointEvent)
self.assertEqual(events[3].x, 1)
self.assertEqual(events[3].y, -4.0)
def test_legacy_binary_parser_accepts_tty_ch1_ch2_stream(self):
parser = LegacyBinaryParser()
stream = b"".join(
[
_pack_tty_start(),
_pack_tty_point(1, 100, 90),
_pack_tty_point(2, 120, 95),
]
)
events = parser.feed(stream)
self.assertIsInstance(events[0], StartEvent)
self.assertEqual(events[0].ch, 0)
self.assertIsInstance(events[1], PointEvent)
self.assertEqual(events[1].x, 1)
self.assertEqual(events[1].y, 18100.0)
self.assertEqual(events[1].aux, (100.0, 90.0))
self.assertEqual(events[1].signal_kind, "bin_iq")
self.assertIsInstance(events[2], PointEvent)
self.assertEqual(events[2].x, 2)
self.assertEqual(events[2].y, 23425.0)
self.assertEqual(events[2].aux, (120.0, 95.0))
self.assertEqual(events[2].signal_kind, "bin_iq")
def test_legacy_binary_parser_detects_new_tty_sweep_on_step_reset(self):
parser = LegacyBinaryParser()
stream = b"".join(
[
_pack_tty_start(),
_pack_tty_point(1, 100, 90),
_pack_tty_point(2, 110, 95),
_pack_tty_point(1, 120, 80),
]
)
events = parser.feed(stream)
self.assertIsInstance(events[0], StartEvent)
self.assertIsInstance(events[1], PointEvent)
self.assertIsInstance(events[2], PointEvent)
self.assertIsInstance(events[3], StartEvent)
self.assertEqual(events[3].ch, 0)
self.assertIsInstance(events[4], PointEvent)
self.assertEqual(events[4].x, 1)
self.assertEqual(events[4].aux, (120.0, 80.0))
self.assertEqual(events[4].signal_kind, "bin_iq")
def test_legacy_binary_parser_tty_mode_does_not_flip_to_legacy_on_ch2_low_byte_0x0a(self):
parser = LegacyBinaryParser()
stream = b"".join(
[
_pack_tty_start(),
_pack_tty_point(1, 100, 0x040A), # low byte is 0x0A: used to be misparsed as legacy
_pack_tty_point(2, 120, 0x0410),
]
)
events = parser.feed(stream)
self.assertEqual(len(events), 3)
self.assertIsInstance(events[0], StartEvent)
self.assertEqual(events[0].ch, 0)
self.assertIsInstance(events[1], PointEvent)
self.assertEqual(events[1].ch, 0)
self.assertEqual(events[1].x, 1)
self.assertEqual(events[1].aux, (100.0, 1034.0))
self.assertEqual(events[1].y, 1079156.0)
self.assertIsInstance(events[2], PointEvent)
self.assertEqual(events[2].ch, 0)
self.assertEqual(events[2].x, 2)
self.assertEqual(events[2].aux, (120.0, 1040.0))
self.assertEqual(events[2].y, 1096000.0)
def test_legacy_binary_parser_accepts_logdet_stream(self):
parser = LegacyBinaryParser()
stream = b"".join(
[
_pack_logdet_point(1, 0x0F77),
_pack_logdet_point(2, 0xF234),
]
)
events = parser.feed(stream)
self.assertEqual(len(events), 2)
self.assertIsInstance(events[0], PointEvent)
self.assertEqual(events[0].x, 1)
self.assertEqual(events[0].y, 3959.0)
self.assertIsNone(events[0].aux)
self.assertEqual(events[0].signal_kind, "bin_logdet")
self.assertIsInstance(events[1], PointEvent)
self.assertEqual(events[1].x, 2)
self.assertEqual(events[1].y, -3532.0)
self.assertEqual(events[1].signal_kind, "bin_logdet")
def test_legacy_binary_parser_splits_packet_on_bin_signal_kind_change(self):
parser = LegacyBinaryParser()
stream = b"".join(
[
_pack_tty_start(),
_pack_tty_point(1, 100, 90),
_pack_tty_point(2, 110, 95),
_pack_logdet_point(3, 0x0F77),
]
)
events = parser.feed(stream)
self.assertIsInstance(events[0], StartEvent)
self.assertEqual(events[0].signal_kind, "bin_iq")
self.assertIsInstance(events[1], PointEvent)
self.assertEqual(events[1].signal_kind, "bin_iq")
self.assertIsInstance(events[2], PointEvent)
self.assertEqual(events[2].signal_kind, "bin_iq")
self.assertIsInstance(events[3], StartEvent)
self.assertEqual(events[3].signal_kind, "bin_logdet")
self.assertIsInstance(events[4], PointEvent)
self.assertEqual(events[4].x, 3)
self.assertEqual(events[4].signal_kind, "bin_logdet")
def test_complex_ascii_parser_detects_new_sweep_on_step_reset(self):
parser = ComplexAsciiSweepParser()
events = parser.feed(b"0 3 4\n1 5 12\n0 8 15\n")
self.assertIsInstance(events[0], PointEvent)
self.assertEqual(events[0].x, 0)
self.assertEqual(events[0].y, 5.0)
self.assertEqual(events[0].aux, (3.0, 4.0))
self.assertIsInstance(events[1], PointEvent)
self.assertEqual(events[1].y, 13.0)
self.assertIsInstance(events[2], StartEvent)
self.assertIsInstance(events[3], PointEvent)
self.assertEqual(events[3].aux, (8.0, 15.0))
def test_logscale_32_parser_keeps_channel_and_aux_values(self):
parser = LogScaleBinaryParser32()
stream = _pack_log_start(5) + _pack_log_point(7, 1500, 700, ch=5)
events = parser.feed(stream)
self.assertIsInstance(events[0], StartEvent)
self.assertEqual(events[0].ch, 5)
self.assertIsInstance(events[1], PointEvent)
self.assertEqual(events[1].ch, 5)
self.assertEqual(events[1].x, 7)
self.assertAlmostEqual(events[1].y, log_pair_to_sweep(1500, 700), places=6)
self.assertEqual(events[1].aux, (1500.0, 700.0))
def test_logscale_32_parser_detects_new_sweep_on_step_reset(self):
parser = LogScaleBinaryParser32()
stream = b"".join(
[
_pack_log_point(1, 1500, 700, ch=5),
_pack_log_point(2, 1400, 650, ch=5),
_pack_log_point(1, 1300, 600, ch=5),
]
)
events = parser.feed(stream)
self.assertIsInstance(events[0], PointEvent)
self.assertIsInstance(events[1], PointEvent)
self.assertIsInstance(events[2], StartEvent)
self.assertEqual(events[2].ch, 5)
self.assertIsInstance(events[3], PointEvent)
self.assertEqual(events[3].x, 1)
self.assertAlmostEqual(events[3].y, log_pair_to_sweep(1300, 600), places=6)
def test_log_pair_to_sweep_is_order_independent(self):
self.assertAlmostEqual(log_pair_to_sweep(1500, 700), log_pair_to_sweep(700, 1500), places=6)
def test_logscale_16bit_parser_uses_last_start_channel(self):
parser = LogScale16BitX2BinaryParser()
stream = _pack_log16_start(2) + _pack_log16_point(1, 100, 90)
events = parser.feed(stream)
self.assertIsInstance(events[0], StartEvent)
self.assertEqual(events[0].ch, 2)
self.assertIsInstance(events[1], PointEvent)
self.assertEqual(events[1].ch, 2)
self.assertAlmostEqual(events[1].y, math.hypot(100.0, 90.0), places=6)
self.assertEqual(events[1].aux, (100.0, 90.0))
def test_logscale_16bit_parser_detects_new_sweep_on_step_reset(self):
parser = LogScale16BitX2BinaryParser()
stream = b"".join(
[
_pack_log16_start(2),
_pack_log16_point(1, 100, 90),
_pack_log16_point(2, 110, 95),
_pack_log16_point(1, 120, 80),
]
)
events = parser.feed(stream)
self.assertIsInstance(events[0], StartEvent)
self.assertIsInstance(events[1], PointEvent)
self.assertIsInstance(events[2], PointEvent)
self.assertIsInstance(events[3], StartEvent)
self.assertEqual(events[3].ch, 2)
self.assertIsInstance(events[4], PointEvent)
self.assertEqual(events[4].x, 1)
self.assertAlmostEqual(events[4].y, math.hypot(120.0, 80.0), places=6)
def test_parser_test_stream_parser_recovers_point_after_single_separator(self):
parser = ParserTestStreamParser()
stream = b"".join(
[
b"\xff\xff\xff\xff",
bytes((0x0A, 4)),
_u16le(1),
_u16le(100),
_u16le(90),
_u16le(0xFFFF),
]
)
events = parser.feed(stream)
events.extend(parser.feed(_u16le(2)))
self.assertIsInstance(events[0], StartEvent)
self.assertEqual(events[0].ch, 4)
self.assertIsInstance(events[1], PointEvent)
self.assertEqual(events[1].ch, 4)
self.assertEqual(events[1].x, 1)
self.assertAlmostEqual(events[1].y, math.hypot(100.0, 90.0), places=6)
self.assertEqual(events[1].aux, (100.0, 90.0))
def test_sweep_assembler_builds_aux_curves_without_inversion(self):
assembler = SweepAssembler(fancy=False, apply_inversion=False)
self.assertIsNone(assembler.consume(StartEvent(ch=1, signal_kind="bin_iq")))
assembler.consume(PointEvent(ch=1, x=1, y=10.0, aux=(100.0, 90.0), signal_kind="bin_iq"))
assembler.consume(PointEvent(ch=1, x=2, y=20.0, aux=(110.0, 95.0), signal_kind="bin_iq"))
sweep, info, aux = assembler.finalize_current()
self.assertEqual(sweep.shape[0], 3)
self.assertEqual(info["ch"], 1)
self.assertEqual(info["signal_kind"], "bin_iq")
self.assertIsNotNone(aux)
self.assertEqual(aux[0][1], 100.0)
self.assertEqual(aux[1][2], 95.0)
def test_sweep_assembler_splits_packet_on_channel_switch(self):
assembler = SweepAssembler(fancy=False, apply_inversion=False)
self.assertIsNone(assembler.consume(PointEvent(ch=1, x=1, y=10.0)))
packet = assembler.consume(PointEvent(ch=2, x=1, y=20.0))
self.assertIsNotNone(packet)
sweep_1, info_1, aux_1 = packet
self.assertIsNone(aux_1)
self.assertEqual(info_1["ch"], 1)
self.assertEqual(info_1["chs"], [1])
self.assertAlmostEqual(float(sweep_1[1]), 10.0, places=6)
sweep_2, info_2, aux_2 = assembler.finalize_current()
self.assertIsNone(aux_2)
self.assertEqual(info_2["ch"], 2)
self.assertEqual(info_2["chs"], [2])
self.assertAlmostEqual(float(sweep_2[1]), 20.0, places=6)
def test_sweep_assembler_splits_packet_on_signal_kind_switch(self):
assembler = SweepAssembler(fancy=False, apply_inversion=False)
self.assertIsNone(assembler.consume(PointEvent(ch=0, x=1, y=10.0, signal_kind="bin_iq")))
packet = assembler.consume(PointEvent(ch=0, x=1, y=20.0, signal_kind="bin_logdet"))
self.assertIsNotNone(packet)
sweep_1, info_1, aux_1 = packet
self.assertIsNone(aux_1)
self.assertEqual(info_1["signal_kind"], "bin_iq")
self.assertAlmostEqual(float(sweep_1[1]), 10.0, places=6)
sweep_2, info_2, aux_2 = assembler.finalize_current()
self.assertIsNone(aux_2)
self.assertEqual(info_2["signal_kind"], "bin_logdet")
self.assertAlmostEqual(float(sweep_2[1]), 20.0, places=6)
if __name__ == "__main__":
unittest.main()
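The hand-rolled `_u16le` packer used throughout these fixtures is equivalent to the `struct` module's little-endian unsigned-short format; a round-trip sketch:

```python
import struct

def u16le(word: int) -> bytes:
    # "<H" = little-endian unsigned 16-bit, same layout as _u16le above
    return struct.pack("<H", int(word) & 0xFFFF)

def words_le(data: bytes) -> list[int]:
    # decode a byte stream back into 16-bit little-endian words
    count = len(data) // 2
    return list(struct.unpack("<%dH" % count, data[: count * 2]))
```

This is why the 0x0A-in-low-byte edge case matters in the TTY tests: a 16-bit value such as 0x040A serializes with 0x0A as its first byte, which can look like a record separator to a byte-oriented resync.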


@@ -1,110 +0,0 @@
import math
from rfg_adc_plotter.io.sweep_parser_core import BinaryRecordStreamParser
def _u16le(word: int) -> bytes:
w = int(word) & 0xFFFF
return bytes((w & 0xFF, (w >> 8) & 0xFF))
def _pack_signed_words_be(value: int, words: int) -> list[int]:
bits = 16 * int(words)
v = int(value)
if v < 0:
v = (1 << bits) + v
out: list[int] = []
for i in range(words):
shift = (words - 1 - i) * 16
out.append((v >> shift) & 0xFFFF)
return out
def _pack_legacy_start(ch: int) -> bytes:
return b"\xff\xff" * 3 + bytes((0x0A, int(ch) & 0xFF))
def _pack_legacy_point(ch: int, step: int, value_i32: int) -> bytes:
v = int(value_i32) & 0xFFFF_FFFF
return b"".join(
[
_u16le(step),
_u16le((v >> 16) & 0xFFFF),
_u16le(v & 0xFFFF),
bytes((0x0A, int(ch) & 0xFF)),
]
)
def _pack_log_start(ch: int) -> bytes:
return b"\xff\xff" * 5 + bytes((0x0A, int(ch) & 0xFF))
def _pack_log_point(step: int, avg1: int, avg2: int, pair_words: int, ch: int = 0) -> bytes:
words = [int(step) & 0xFFFF]
words.extend(_pack_signed_words_be(avg1, pair_words))
words.extend(_pack_signed_words_be(avg2, pair_words))
words.append(((int(ch) & 0xFF) << 8) | 0x000A)
return b"".join(_u16le(w) for w in words)
def _log_pair_to_linear(avg1: int, avg2: int) -> float:
exp1 = max(-300.0, min(300.0, float(avg1) * 0.001))
exp2 = max(-300.0, min(300.0, float(avg2) * 0.001))
return (math.pow(10.0, exp1) - math.pow(10.0, exp2)) * 1000.0
def test_binary_parser_parses_legacy_8_byte_records():
parser = BinaryRecordStreamParser()
stream = b"".join(
[
_pack_legacy_start(3),
_pack_legacy_point(3, 1, -2),
_pack_legacy_point(3, 2, 123456),
]
)
events = []
events.extend(parser.feed(stream[:5]))
events.extend(parser.feed(stream[5:17]))
events.extend(parser.feed(stream[17:]))
assert events[0] == ("start", 3)
assert events[1] == ("point", 3, 1, -2.0)
assert events[2] == ("point", 3, 2, 123456.0)
def test_binary_parser_parses_logdetector_32bit_pair_records():
parser = BinaryRecordStreamParser()
stream = b"".join(
[
_pack_log_start(0),
_pack_log_point(1, 1500, 700, pair_words=2, ch=0),
_pack_log_point(2, 1510, 710, pair_words=2, ch=0),
]
)
events = parser.feed(stream)
assert events[0] == ("start", 0)
assert events[1][0:3] == ("point", 0, 1)
assert events[2][0:3] == ("point", 0, 2)
assert abs(float(events[1][3]) - _log_pair_to_linear(1500, 700)) < 1e-6
assert abs(float(events[2][3]) - _log_pair_to_linear(1510, 710)) < 1e-6
def test_binary_parser_parses_logdetector_128bit_pair_records():
parser = BinaryRecordStreamParser()
stream = b"".join(
[
_pack_log_start(5),
_pack_log_point(7, 1600, 800, pair_words=8, ch=5),
_pack_log_point(8, 1610, 810, pair_words=8, ch=5),
]
)
events = parser.feed(stream)
assert events[0] == ("start", 5)
assert events[1][0:3] == ("point", 5, 7)
assert events[2][0:3] == ("point", 5, 8)
assert abs(float(events[1][3]) - _log_pair_to_linear(1600, 800)) < 1e-6
assert abs(float(events[2][3]) - _log_pair_to_linear(1610, 810)) < 1e-6
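The `_log_pair_to_linear` reference above decodes each average as a base-10 exponent in milli-units, clamped to ±300 so `math.pow` stays within double range. A standalone copy so its properties can be checked in isolation:

```python
import math

# Mirrors the _log_pair_to_linear fixture helper above.
def log_pair_to_linear(avg1: int, avg2: int) -> float:
    exp1 = max(-300.0, min(300.0, float(avg1) * 0.001))
    exp2 = max(-300.0, min(300.0, float(avg2) * 0.001))
    return (math.pow(10.0, exp1) - math.pow(10.0, exp2)) * 1000.0
```

Note the difference form is antisymmetric in its arguments; that distinguishes it from the order-independent `log_pair_to_sweep` asserted in the parser-core tests earlier.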

tests/test_sweep_reader.py Normal file

@@ -0,0 +1,262 @@
from __future__ import annotations

import contextlib
import io
import threading
import time
import unittest
from queue import Queue
from unittest.mock import patch

from rfg_adc_plotter.io import sweep_reader as sweep_reader_module
from rfg_adc_plotter.io.sweep_reader import SweepReader, _PARSER_16_BIT_X2_PROBE_BYTES


def _u16le(word: int) -> bytes:
    value = int(word) & 0xFFFF
    return bytes((value & 0xFF, (value >> 8) & 0xFF))


def _pack_legacy_point(ch: int, step: int, value_i32: int) -> bytes:
    value = int(value_i32) & 0xFFFF_FFFF
    return b"".join(
        [
            _u16le(step),
            _u16le((value >> 16) & 0xFFFF),
            _u16le(value & 0xFFFF),
            bytes((0x0A, int(ch) & 0xFF)),
        ]
    )


def _pack_log16_start(ch: int) -> bytes:
    return b"\xff\xff" * 3 + bytes((0x0A, int(ch) & 0xFF))


def _pack_log16_point(step: int, real: int, imag: int) -> bytes:
    return b"".join(
        [
            _u16le(step),
            _u16le(real),
            _u16le(imag),
            _u16le(0xFFFF),
        ]
    )


def _pack_tty_start() -> bytes:
    return b"".join(
        [
            _u16le(0x000A),
            _u16le(0xFFFF),
            _u16le(0xFFFF),
            _u16le(0xFFFF),
        ]
    )


def _pack_tty_point(step: int, ch1: int, ch2: int) -> bytes:
    return b"".join(
        [
            _u16le(0x000A),
            _u16le(step),
            _u16le(ch1),
            _u16le(ch2),
        ]
    )


def _pack_logdet_point(step: int, value: int) -> bytes:
    return b"".join(
        [
            _u16le(0x001A),
            _u16le(step),
            _u16le(value),
            _u16le(0x0000),
        ]
    )


def _chunk_bytes(data: bytes, size: int = 4096) -> list[bytes]:
    return [data[idx : idx + size] for idx in range(0, len(data), size)]


class _FakeSerialLineSource:
    def __init__(self, path: str, baud: int, timeout: float = 1.0):
        self.path = path
        self.baud = baud
        self.timeout = timeout
        self._using = "fake"

    def close(self) -> None:
        pass


class _FakeChunkReader:
    payload_chunks: list[bytes] = []

    def __init__(self, src):
        self._src = src
        self._chunks = list(type(self).payload_chunks)

    def read_available(self) -> bytes:
        if self._chunks:
            return self._chunks.pop(0)
        return b""
class SweepReaderTests(unittest.TestCase):
    def _start_reader(self, payload: bytes, **reader_kwargs):
        queue: Queue = Queue()
        stop_event = threading.Event()
        stderr = io.StringIO()
        _FakeChunkReader.payload_chunks = _chunk_bytes(payload)
        reader = SweepReader(
            "/tmp/fake-tty",
            115200,
            queue,
            stop_event,
            **reader_kwargs,
        )
        stack = contextlib.ExitStack()
        stack.enter_context(patch.object(sweep_reader_module, "SerialLineSource", _FakeSerialLineSource))
        stack.enter_context(patch.object(sweep_reader_module, "SerialChunkReader", _FakeChunkReader))
        stack.enter_context(contextlib.redirect_stderr(stderr))
        reader.start()
        return stack, reader, queue, stop_event, stderr

    def test_parser_16_bit_x2_falls_back_to_legacy_stream(self):
        payload = bytearray()
        while len(payload) < (_PARSER_16_BIT_X2_PROBE_BYTES + 24):
            payload += _pack_legacy_point(3, 1, -2)
            payload += _pack_legacy_point(3, 2, -3)
            payload += _pack_legacy_point(3, 1, -4)
        stack, reader, queue, stop_event, stderr = self._start_reader(bytes(payload), parser_16_bit_x2=True)
        try:
            sweep, info, aux = queue.get(timeout=2.0)
            self.assertEqual(info["ch"], 3)
            self.assertIsNone(aux)
            self.assertGreaterEqual(sweep.shape[0], 3)
            self.assertIn("fallback -> legacy", stderr.getvalue())
        finally:
            stop_event.set()
            reader.join(timeout=1.0)
            stack.close()

    def test_parser_16_bit_x2_falls_back_to_tty_ch1_ch2_stream(self):
        payload = bytearray()
        while len(payload) < (_PARSER_16_BIT_X2_PROBE_BYTES + 24):
            payload += _pack_tty_start()
            payload += _pack_tty_point(1, 100, 90)
            payload += _pack_tty_point(2, 120, 95)
            payload += _pack_tty_point(1, 80, 70)
        stack, reader, queue, stop_event, stderr = self._start_reader(bytes(payload), parser_16_bit_x2=True)
        try:
            sweep, info, aux = queue.get(timeout=2.0)
            self.assertEqual(info["ch"], 0)
            self.assertIsNotNone(aux)
            self.assertGreaterEqual(sweep.shape[0], 3)
            self.assertAlmostEqual(float(sweep[1]), 18100.0, places=6)
            self.assertAlmostEqual(float(sweep[2]), 23425.0, places=6)
            self.assertIn("fallback -> legacy", stderr.getvalue())
        finally:
            stop_event.set()
            reader.join(timeout=1.0)
            stack.close()

    def test_parser_16_bit_x2_keeps_true_complex_stream(self):
        payload = b"".join(
            [
                _pack_log16_start(2),
                _pack_log16_point(1, 3, 4),
                _pack_log16_point(2, 5, 12),
                _pack_log16_point(1, 8, 15),
            ]
        )
        stack, reader, queue, stop_event, stderr = self._start_reader(payload, parser_16_bit_x2=True)
        try:
            sweep, info, aux = queue.get(timeout=1.0)
            self.assertEqual(info["ch"], 2)
            self.assertIsNotNone(aux)
            self.assertAlmostEqual(float(sweep[1]), 5.0, places=6)
            self.assertAlmostEqual(float(sweep[2]), 13.0, places=6)
            self.assertNotIn("fallback -> legacy", stderr.getvalue())
        finally:
            stop_event.set()
            reader.join(timeout=1.0)
            stack.close()

    def test_parser_16_bit_x2_falls_back_to_logdet_1a00_stream(self):
        payload = bytearray()
        while len(payload) < (_PARSER_16_BIT_X2_PROBE_BYTES + 24):
            payload += _pack_logdet_point(1, 0x0F77)
            payload += _pack_logdet_point(2, 0x0FCB)
            payload += _pack_logdet_point(1, 0x0F88)
        stack, reader, queue, stop_event, stderr = self._start_reader(bytes(payload), parser_16_bit_x2=True)
        try:
            sweep, info, aux = queue.get(timeout=2.0)
            self.assertEqual(info["signal_kind"], "bin_logdet")
            self.assertIsNone(aux)
            self.assertGreaterEqual(sweep.shape[0], 3)
            self.assertAlmostEqual(float(sweep[1]), 3959.0, places=6)
            self.assertIn("fallback -> legacy", stderr.getvalue())
        finally:
            stop_event.set()
            reader.join(timeout=1.0)
            stack.close()

    def test_parser_16_bit_x2_probe_inconclusive_logs_hint(self):
        payload = b"\x00" * (_PARSER_16_BIT_X2_PROBE_BYTES + 128)
        stack, reader, queue, stop_event, stderr = self._start_reader(payload, parser_16_bit_x2=True)
        try:
            deadline = time.time() + 1.5
            logs = ""
            while time.time() < deadline:
                logs = stderr.getvalue()
                if "probe inconclusive" in logs:
                    break
                time.sleep(0.02)
            self.assertTrue(queue.empty())
            self.assertIn("probe inconclusive", logs)
            self.assertIn("try --bin", logs)
        finally:
            stop_event.set()
            reader.join(timeout=1.0)
            stack.close()

    def test_reader_logs_no_input_warning_when_source_is_idle(self):
        with patch.object(sweep_reader_module, "_NO_INPUT_WARN_INTERVAL_S", 0.02), patch.object(
            sweep_reader_module, "_NO_PACKET_WARN_INTERVAL_S", 0.02
        ):
            stack, reader, _queue, stop_event, stderr = self._start_reader(b"", parser_16_bit_x2=False)
            try:
                time.sleep(0.12)
                logs = stderr.getvalue()
                self.assertIn("no input bytes", logs)
                self.assertIn("no sweep packets", logs)
            finally:
                stop_event.set()
                reader.join(timeout=1.0)
                stack.close()

    def test_reader_join_does_not_raise_when_stopped(self):
        stack, reader, _queue, stop_event, _stderr = self._start_reader(b"", parser_16_bit_x2=True)
        try:
            time.sleep(0.01)
            stop_event.set()
            reader.join(timeout=1.0)
            self.assertFalse(reader.is_alive())
        finally:
            stop_event.set()
            if reader.is_alive():
                reader.join(timeout=1.0)
            stack.close()


if __name__ == "__main__":
    unittest.main()
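The expected sweep constants in the tests above are derivable by hand. The complex-stream test feeds Pythagorean triples, so the magnitude sqrt(re² + im²) comes out exact in floating point; the tty ch1/ch2 fallback constants equal ch1² + ch2². The latter is an assumption read off the asserted numbers themselves, not from the reader's source. A quick sanity check:

```python
import math

# Complex stream: magnitudes of the Pythagorean-triple samples (3,4) and (5,12)
# match the asserted sweep values 5.0 and 13.0 exactly.
assert math.hypot(3, 4) == 5.0
assert math.hypot(5, 12) == 13.0

# tty ch1/ch2 fallback: the asserted constants equal ch1**2 + ch2**2
# (sum of squares, i.e. magnitude squared, for the packed sample pairs).
assert 100**2 + 90**2 == 18100
assert 120**2 + 95**2 == 23425
```

Keeping the expected values as closed-form combinations of the packed inputs makes the tests self-documenting: a regression in the fallback math changes the constants in an obvious way.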