Changelog
Version 0.2.2 (2026-02-22)
SD Card File Deletion — Shell ``rm`` with MAVFTP Fallback:

- Implemented a robust two-tier file deletion strategy in ``MavLinkFTPProxy`` via the new ``_delete_file_with_fallback()`` method:

  - Tier 1 — NuttX shell ``rm``: Sends ``rm /path`` via MAVLink ``SERIAL_CONTROL`` to the PX4 NuttX shell, bypassing the MAVFTP ``FileProtected`` (code 9) restriction that blocks ``cmd_rm`` on active log files. This is the primary method on real hardware.
  - Tier 2 — MAVFTP ``cmd_rm``: Falls back to the MAVFTP ``_delete()`` path when the shell ``rm`` command is not available (e.g. PX4 SITL builds, where the NuttX shell does not expose ``rm``/``rmdir`` and returns ``"Invalid command: rm"``).

- All file deletion call sites in ``MavLinkFTPProxy`` now use ``_delete_file_with_fallback()``: ``delete_file()``, ``delete_all_logs()`` (per-file loop), and ``clear_error_logs()`` (per-file loop). Directory cleanup via ``rmdir`` remains shell-only (best-effort), since MAVFTP has no directory removal equivalent.
- PX4 logger stop/start (``logger stop\n`` / ``logger start\n``) is issued before and after deletion loops to release file handles, enabling both shell and MAVFTP deletion to succeed.
- ``delete_file_via_shell()`` on ``MavLinkExternalProxy`` detects shell errors by checking the reply text for keywords: ``"invalid"``, ``"no such"``, ``"error"``, ``"failed"``, ``"not empty"`` — returning ``False`` so the fallback can be triggered.
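The two-tier strategy can be sketched as follows. This is a simplified illustration, not the actual proxy code: ``shell_rm`` and ``mavftp_delete`` stand in for the real helpers (``delete_file_via_shell()`` and the MAVFTP ``_delete()`` path), and their signatures are assumptions.

```python
# Simplified sketch of the shell-rm-first, MAVFTP-fallback deletion strategy.
SHELL_ERROR_KEYWORDS = ("invalid", "no such", "error", "failed", "not empty")

def shell_reply_ok(reply: str) -> bool:
    """Mirror the keyword check used to detect shell errors in the reply text."""
    lowered = reply.lower()
    return not any(keyword in lowered for keyword in SHELL_ERROR_KEYWORDS)

def delete_file_with_fallback(path: str, shell_rm, mavftp_delete) -> bool:
    # Tier 1: NuttX shell `rm` (bypasses the MAVFTP FileProtected / code 9 block)
    if shell_reply_ok(shell_rm(f"rm {path}")):
        return True
    # Tier 2: MAVFTP delete, for targets without a shell `rm` (e.g. SITL)
    return mavftp_delete(path)
```

A SITL-style reply such as ``"Invalid command: rm"`` fails the keyword check, which is exactly what triggers the Tier 2 path.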
Format SD Card (petal-user-journey-coordinator):

- Added a ``format_sd_card`` MQTT command handler with a two-phase response pattern:

  - Phase 1: Immediate ``send_command_response`` acknowledging that the command was received
  - Phase 2: Executes ``MAV_CMD_PREFLIGHT_STORAGE`` via MAVLink, then publishes the result (success or failure) to ``command/web`` via ``publish_message`` with command ``/<petal_name>/format_sd_card_status``

- Added a ``format_sd_card()`` method on ``MavLinkExternalProxy`` that sends ``MAV_CMD_PREFLIGHT_STORAGE`` and waits for ``COMMAND_ACK``, returning a structured ``FormatStorageResponse`` with ``FormatStorageStatusCode`` enum values for all ACK outcomes (accepted, denied, timeout, unsupported, etc.)
- Added ``build_format_storage_command()`` to construct the MAVLink ``COMMAND_LONG`` message for ``MAV_CMD_PREFLIGHT_STORAGE`` with a configurable ``storage_id`` (0–3)
- Added Pydantic models: ``FormatStorageStatusCode`` (enum) and ``FormatStorageResponse`` in ``petal_app_manager.models.mavlink``; ``FormatSDCardRequest`` and ``FormatSDCardStatusPayload`` in the petal-user-journey-coordinator data models
- Active-operation guard prevents concurrent format operations
- Full documentation added to ``petal_user_journey_coordinator.rst`` with Phase 1/Phase 2 response examples, error scenarios, and front-end handling instructions
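A minimal sketch of what ``build_format_storage_command()`` assembles. The ``COMMAND_LONG`` field layout follows the MAVLink message definition; which param slot carries the storage ID, and the defaults shown, are assumptions rather than the project's actual code.

```python
# Hypothetical sketch of building the COMMAND_LONG fields for a format request.
MAV_CMD_PREFLIGHT_STORAGE = 245  # MAVLink common.xml command ID

def build_format_storage_command(storage_id: int, target_system: int = 1,
                                 target_component: int = 1) -> dict:
    """Return COMMAND_LONG fields for an SD-card storage command."""
    if not 0 <= storage_id <= 3:
        raise ValueError("storage_id must be in 0-3")
    return {
        "target_system": target_system,
        "target_component": target_component,
        "command": MAV_CMD_PREFLIGHT_STORAGE,
        "confirmation": 0,
        # Assumption: the storage ID rides in param1; COMMAND_LONG params are floats.
        "param1": float(storage_id),
        "param2": 0.0, "param3": 0.0, "param4": 0.0,
        "param5": 0.0, "param6": 0.0, "param7": 0.0,
    }
```

The proxy would then encode these fields into an actual ``COMMAND_LONG`` message and wait for the matching ``COMMAND_ACK``.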
Bulk Delete Flight Records (petal-flight-log):

- Added a ``bulk_delete_flight_records`` MQTT command with a job-based architecture:

  - Creates a ``BulkDeleteFlightRecordsJob`` that runs in the background via the job manager
  - Supports real-time progress streaming via ``subscribe_bulk_delete_flight_records`` / ``unsubscribe_bulk_delete_flight_records`` / ``cancel_bulk_delete_flight_records``
  - Deletes both ULog files from the SD card (via ``_delete_file_with_fallback``) and local rosbag files, plus associated database records

- Added Pydantic models: ``BulkDeleteFlightRecordsRequest``, ``BulkDeleteFlightRecordsResponse``, ``BulkDeleteFlightRecordsStatusPayload``, ``BulkDeleteProgressPayload``
Clear All ULogs (petal-flight-log):

- Added a ``clear_all_ulogs`` MQTT command with a job-based architecture:

  - Creates a ``ClearAllUlogsJob`` that recursively scans date directories via MAVFTP and deletes every ``.ulg`` file using ``_delete_file_with_fallback()``
  - Supports real-time progress streaming via ``subscribe_clear_all_ulogs`` / ``unsubscribe_clear_all_ulogs`` / ``cancel_clear_all_ulogs``
  - Empty date directories are cleaned up via shell ``rmdir`` (best-effort)

- ``MavLinkFTPProxy.delete_all_logs()``'s inline ``_scan()`` returns both the file list and the date directory list in a single pass, enabling cleanup of directories left empty by previous runs
- MAVFTP base directory listing retries reduced from 5 to 2 to avoid long stalls on empty SD cards; empty-directory listing failures (MAVFTP code 73 / timeout) are caught gracefully and treated as "nothing to delete"
- Per-file progress callback support via ``progress_callback(current_index, total)`` — accepts both sync and async callables
- Added Pydantic models: ``ClearAllUlogsRequest``, ``ClearAllUlogsResponse``, ``ClearAllUlogsStatusPayload``, ``ClearAllUlogsProgressPayload``
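The sync-or-async ``progress_callback`` support can be illustrated with a small helper. This is a sketch; the proxy's actual internals are assumptions.

```python
# Invoke a per-file progress callback that may be a plain function or async def.
import asyncio
import inspect

async def report_progress(progress_callback, current_index: int, total: int) -> None:
    result = progress_callback(current_index, total)
    if inspect.isawaitable(result):
        await result  # an async callable returned a coroutine; await it
```

A plain function runs immediately and returns ``None``; an ``async def`` callback returns a coroutine, which ``inspect.isawaitable`` detects so the deletion loop can await it.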
Detect & Clear Error Logs (petal-flight-log):

- Added a ``detect_error_logs`` MQTT command (two-phase pattern):

  - Scans the Pixhawk SD card for ``fail_*.log`` error log files via MAVFTP without deleting them
  - Phase 1: Immediate acknowledgement via ``send_command_response``
  - Phase 2: Publishes scan results (file paths, sizes, count) to ``command/web`` via ``publish_message`` with command ``/<petal_name>/detect_error_logs``

- Added a ``clear_error_logs`` MQTT command (two-phase pattern):

  - Lists ``fail_*.log`` files via MAVFTP, then deletes each via ``_delete_file_with_fallback()`` with PX4 logger stop/start
  - Phase 1: Immediate acknowledgement via ``send_command_response``
  - Phase 2: Publishes a deletion summary (total, deleted, failed counts) to ``command/web`` via ``publish_message`` with command ``/<petal_name>/clear_error_logs``
  - Zero files found counts as success (``"No error log files found under {base}"``)
  - Partial failures are reported with the ``PARTIAL_FAILURE`` error code

- Added ``MavLinkFTPProxy.detect_error_logs()`` — a read-only listing method that returns ``{"total": N, "files": [{"path": ..., "size_bytes": ...}, ...]}``
- Added ``MavLinkFTPProxy.clear_error_logs()`` — returns a summary dict ``{"total": N, "deleted": N, "failed": N}``
- Added Pydantic models: ``DetectErrorLogsRequest``, ``DetectErrorLogsResponse``, ``DetectErrorLogsStatusPayload``, ``ClearErrorLogsRequest``, ``ClearErrorLogsResponse``, ``ClearErrorLogsStatusPayload``
Single Flight Record Deletion (petal-flight-log):

- The ``delete_flight_record`` handler now uses ``MavLinkFTPProxy.delete_file()``, which calls ``_delete_file_with_fallback()`` (shell ``rm`` → MAVFTP fallback), instead of relying solely on shell ``rm``
MAVFTP Proxy Improvements:

- Phased out direct MAVFTP ``cmd_rm`` usage from all high-level deletion methods; all file deletions now go through ``_delete_file_with_fallback()``, which provides automatic fallback
- ``_BlockingParser.clear_error_logs()`` is retained for legacy/direct calls, but the primary async path (``MavLinkFTPProxy.clear_error_logs()``) uses shell ``rm`` with MAVFTP fallback
- FTP listing retries for sub-directories reduced from 5 to 1 to avoid long stalls on directories that were emptied by deletion
- Empty base directory listing failures (MAVFTP timeout code 73) are caught with try/except in ``_scan()`` and treated as empty — prevents crashes when the log directory has been fully cleared
- ``FTPDeleteError`` exception class added for structured MAVFTP error reporting with FTP error code and path
Redis Compatibility Fix:

- Pinned the ``redis`` dependency to ``>=6.2.0,<7.0.0`` in ``pyproject.toml`` to fix a compatibility issue where ``redis>=7.0`` introduced breaking API changes that caused connection and command failures at runtime
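A minimal sketch of the pin as it might appear in ``pyproject.toml``; the surrounding dependency list shown here is an assumption, only the version range comes from the changelog.

```toml
[project]
dependencies = [
    # redis 7.x introduced breaking API changes; stay on the 6.x line
    "redis>=6.2.0,<7.0.0",
]
```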
Documentation Updates:

- Added ``detect_error_logs`` command documentation to ``petal_flight_log.rst``: payload, Phase 1 responses, Phase 2 status publish examples (files found, no files, execution error), ``DetectErrorLogsStatusPayload`` field reference
- Added ``clear_error_logs`` command documentation to ``petal_flight_log.rst``: payload, Phase 1 responses, Phase 2 status publish examples (files deleted, no files, partial failure, execution error), ``ClearErrorLogsStatusPayload`` field reference
- Updated the MQTT Topics Reference in ``petal_flight_log.rst``:

  - Commands: ``petal-flight-log/detect_error_logs``, ``petal-flight-log/clear_error_logs``
  - Published topics: ``/petal-flight-log/detect_error_logs``, ``/petal-flight-log/clear_error_logs``

- Added ``format_sd_card`` command documentation to ``petal_user_journey_coordinator.rst`` with the full two-phase response pattern, including Phase 1 immediate/error responses, Phase 2 status publish (success/failure), ``FormatSDCardStatusPayload`` field reference, and front-end handling instructions
Dependency Updates:

- Updated ``petal-flight-log`` from ``v0.2.6`` to ``v0.2.7``:

  - Feature: Added ``bulk_delete_flight_records`` job-based MQTT command with progress streaming (subscribe/unsubscribe/cancel)
  - Feature: Added ``clear_all_ulogs`` job-based MQTT command with progress streaming
  - Feature: Added ``detect_error_logs`` two-phase MQTT command for scanning ``fail_*.log`` files
  - Feature: Added ``clear_error_logs`` two-phase MQTT command for deleting ``fail_*.log`` files
  - Improvement: All file deletion uses ``_delete_file_with_fallback()`` (shell ``rm`` → MAVFTP) for hardware and SITL compatibility
  - Improvement: MAVFTP listing resilience (reduced retries, graceful empty-dir handling)
  - Fix: Progress reporting no longer stuck at 10% — ``progress_callback`` is propagated correctly through deletion loops
  - Fix: Empty date directories cleaned up after deletion via shell ``rmdir``
  - Fix: FTP listing crash on empty base directory (code 73) caught gracefully

- Updated ``petal-user-journey-coordinator`` from ``v0.1.12`` to ``v0.1.13``:

  - Feature: Added ``format_sd_card`` MQTT command with two-phase response pattern (immediate acknowledgement + status publish to ``command/web``)
  - Feature: Added ``FormatSDCardRequest`` and ``FormatSDCardStatusPayload`` Pydantic models
  - Feature: Active-operation guard prevents concurrent SD card format operations
Version 0.2.1 (2026-02-12)
Bulk Parameter Two-Phase Response Pattern:

- Refactored the ``bulk_set_parameters`` and ``bulk_get_parameters`` MQTT handlers to use the same two-phase response pattern as ``reboot_autopilot``:

  - Phase 1 (Immediate Acknowledgement): ``send_command_response`` is now called immediately after message validation and active-operation checks, before executing the MAVLink bulk operation. This prevents the front-end from timing out on long-running parameter operations.
  - Phase 2 (Status Publish): Results are published to ``command/web`` via ``publish_message`` with commands ``/<petal_name>/bulk-parameter-set`` and ``/<petal_name>/bulk-parameter-get`` respectively, once the operation completes (successfully, partially, or with errors).

- Added the ``BulkParameterStatusPayload`` Pydantic model for structured bulk parameter status payloads published via MQTT. Fields: ``success``, ``status``, ``message``, ``error_code``, ``data`` (contains ``BulkParameterResponse``), and ``timestamp``.
- All MQTT publish payloads now use ``BulkParameterStatusPayload.model_dump(mode="json")`` instead of hardcoded dictionaries, ensuring consistent serialization and validation.
- Error handling in the execution phase (Phase 2) now publishes error status via MQTT ``publish_message`` with ``EXECUTION_ERROR`` or ``NO_PARAMETERS_CONFIRMED`` error codes, so the front-end is always notified of failures.
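A sketch of the payload model: the field names come from the changelog above, while the concrete types and defaults shown here are assumptions.

```python
# Hypothetical reconstruction of BulkParameterStatusPayload for illustration.
from typing import Any, Dict, Optional
from pydantic import BaseModel

class BulkParameterStatusPayload(BaseModel):
    success: bool
    status: str
    message: str
    error_code: Optional[str] = None
    data: Optional[Dict[str, Any]] = None  # would carry the BulkParameterResponse dump
    timestamp: str

payload = BulkParameterStatusPayload(
    success=True,
    status="completed",
    message="All parameters confirmed",
    timestamp="2026-02-12T00:00:00Z",
)
body = payload.model_dump(mode="json")  # JSON-safe dict handed to publish_message
```

Using ``model_dump(mode="json")`` guarantees the published payload is JSON-serializable and schema-validated, which is the point of replacing the hardcoded dictionaries.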
Documentation Updates:

- Updated the petal-user-journey-coordinator documentation version from ``v0.1.11`` to ``v0.1.12``
- Rewrote the ``bulk_set_parameters`` and ``bulk_get_parameters`` command documentation with full two-phase response pattern details:

  - Phase 1 immediate response and error response examples
  - Phase 2 status publish examples (success, partial failure, total failure)
  - ``BulkParameterStatusPayload`` field reference
  - Front-end handling instructions

- Added new published topics to the MQTT Topics Reference: ``/petal-user-journey-coordinator/bulk-parameter-set``, ``/petal-user-journey-coordinator/bulk-parameter-get``
Dependency Updates:

- Updated ``petal-user-journey-coordinator`` from ``v0.1.11`` to ``v0.1.12``:

  - Refactor: ``bulk_set_parameters`` and ``bulk_get_parameters`` handlers refactored with the two-phase response pattern: immediate ``send_command_response`` acknowledgement followed by ``publish_message`` with results to ``command/web``
  - Feature: Added ``BulkParameterStatusPayload`` Pydantic model for structured MQTT status payloads
  - Improvement: All bulk parameter MQTT publish payloads now use validated Pydantic models instead of hardcoded dictionaries
  - Improvement: Execution errors in bulk parameter operations now always publish error status to ``command/web``, ensuring the front-end is notified of failures
Version 0.2.0 (2026-02-12)
``@mqtt_action`` Decorator & MQTT Command Handler Refactor:

- Introduced the ``@mqtt_action`` decorator in ``petal_app_manager.plugins.decorators`` for declarative MQTT command handler registration:

  - ``command`` parameter: specifies the command suffix (the framework auto-prefixes the petal name)
  - ``cpu_heavy`` parameter (default ``False``): when ``True``, offloads handler execution to a thread-pool executor to prevent event-loop starvation from CPU-bound work (e.g. NumPy, image processing, large serialization)

- Added base-class infrastructure in ``Petal`` (``plugins/base.py``):

  - ``_collect_mqtt_actions()``: scans all instance methods/attributes for ``__mqtt_action__`` metadata and builds the dispatch table
  - ``_mqtt_master_command_handler()``: single registered MQTT handler that dispatches incoming commands to the correct ``@mqtt_action`` handler, with automatic error responses for unknown commands and organization-ID guard logic
  - ``_setup_mqtt_actions()``: called at startup to wire everything up and register the master handler with the MQTT proxy
  - ``has_mqtt_actions()``: quick check for whether a petal has any decorated handlers

- Removed legacy boilerplate from all refactored petals:

  - ``_setup_command_handlers()`` methods (manual dict of command → handler)
  - ``_master_command_handler()`` methods (manual if/elif dispatch)
  - Manual ``register_handler()`` calls in ``_setup_mqtt_topics()``
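The decorator-plus-discovery mechanism can be sketched as follows. The ``__mqtt_action__`` attribute name mirrors the changelog; everything else (the function bodies, the demo petal) is an illustrative assumption, not the framework's actual implementation.

```python
# Sketch: attach metadata with a decorator, then discover handlers by scanning.
from typing import Callable

def mqtt_action(command: str, cpu_heavy: bool = False) -> Callable:
    """Attach __mqtt_action__ metadata so a base class can discover the handler."""
    def decorator(func: Callable) -> Callable:
        func.__mqtt_action__ = {"command": command, "cpu_heavy": cpu_heavy}
        return func
    return decorator

def collect_mqtt_actions(obj) -> dict:
    """Build a command -> handler dispatch table from decorated methods."""
    table = {}
    for name in dir(obj):
        attr = getattr(obj, name, None)
        meta = getattr(attr, "__mqtt_action__", None)
        if meta:
            table[meta["command"]] = attr
    return table

class DemoPetal:
    @mqtt_action(command="ping")
    def handle_ping(self, payload):
        return {"pong": payload}

table = collect_mqtt_actions(DemoPetal())
```

A single master handler can then look up ``table[command]`` and dispatch, sending an error response when the command is unknown.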
Documentation Updates:

- Added an MQTT Command Handlers (``@mqtt_action``) section to the Adding a New Petal guide covering:

  - Basic usage, handler signature, ``cpu_heavy`` parameter
  - Under-the-hood dispatch mechanism
  - Dynamic/factory handler pattern
  - Migration guide from the legacy manual dispatch pattern

- Replaced the MQTTProxy placeholder in the Using Proxies guide with full documentation:

  - ``@mqtt_action``-based command handling (recommended)
  - ``publish_message()`` and ``send_command_response()`` public API
  - Method reference table
  - ``cpu_heavy`` flag explanation
  - Status broadcasting example
Dependency Updates:

- Updated ``petal-flight-log`` from ``v0.2.5`` to ``v0.2.6``:

  - Refactor: All 10 MQTT command handlers now registered via the ``@mqtt_action`` decorator, eliminating the manual command handler registry and master dispatch method: ``fetch_flight_records``, ``subscribe_fetch_flight_records``, ``unsubscribe_fetch_flight_records``, ``cancel_fetch_flight_records``, ``fetch_existing_flight_records``, ``start_sync_flight_record``, ``subscribe_sync_job_value_stream``, ``unsubscribe_sync_job_value_stream``, ``cancel_sync_job``, ``delete_flight_record``
  - Improvement: Redis command acknowledgment and message handling methods are now fully asynchronous (``async def``/``await``), ensuring non-blocking behavior
  - Improvement: Fire-and-forget scheduling of long-running Redis command handlers now uses ``asyncio.create_task`` instead of ``asyncio.run_coroutine_threadsafe``
  - Fix: ``sync_px4_time`` serial handler converted to ``async def``, aligning with the rest of the async codebase

- Updated ``petal-user-journey-coordinator`` from ``v0.1.10`` to ``v0.1.11``:

  - Refactor: Replaced the manual ``_command_handlers`` registry and ``_master_command_handler`` with automatic handler discovery and dispatch via the ``@mqtt_action`` decorator and the base-class ``Petal._mqtt_master_command_handler``
  - Refactor: All static handlers decorated with ``@mqtt_action``; dynamically created parameter and pub/sub handlers now attach ``__mqtt_action__`` metadata for automatic discovery and registration
  - Improvement: Converted internal handler functions (``_handler``, ``_position_handler``, ``_attitude_handler``, ``_statustext_handler``) to ``async def`` for asynchronous message processing
  - Fix: Fixed missing ``await`` on ``asyncio.sleep(0.1)`` in a test handler
  - Cleanup: Removed unused ``RedisProxy`` import

- Updated ``petal-leafsdk`` from ``v0.2.9`` to ``v0.2.10``:

  - Refactor: 3 MQTT command handlers (``mission_plan``, ``rtl``, ``goto``) now registered via the ``@mqtt_action`` decorator, removing the legacy ``_mqtt_subscribe_to_mission_plan`` and ``_mqtt_command_handler_master``
  - Improvement: All MAVLink message handler methods in ``fc_status_provider.py`` converted to ``async def`` for non-blocking message processing
  - Improvement: All Redis and MAVLink publishing functions in ``mission.py``, ``mission_step.py``, and ``heartbeat.py`` converted to ``async def``
  - Improvement: Updated type annotations for MAVLink subscription setup and teardown functions to require async callbacks
  - Fix: Fixed a ``msg_id`` ``NameError`` bug in the legacy master handler (caught during refactor)

- Updated ``petal-warehouse`` from ``v0.1.8`` to ``v0.1.9``:

  - Improvement: Captured the main event loop (``self._loop``) from ``MavLinkExternalProxy`` during initialization for safe coroutine scheduling from background threads
  - Improvement: ``send_position`` and ``send_target_traj`` WebSocket methods now use ``asyncio.run_coroutine_threadsafe`` to execute in the correct event loop, preventing threading issues
  - Improvement: MAVLink message handler functions (``handler_pos``, ``handler_att``, ``handler_target_trajectory``) converted to ``async def``

- Updated ``petal-qgc-mission-server`` from ``v0.1.3`` to ``v0.1.4``:

  - Refactor: Message router refactored to support async handlers; the ``route()`` method is now ``async`` and awaits handler results if they are coroutines
  - Refactor: MAVLink server main loop and message draining/handling methods converted to ``async`` for non-blocking message processing and routing
  - Refactor: Mission upload and download protocol handlers (``upload.py``, ``download.py``) converted to ``async``, including ``request_waypoint``, ``_finalize_upload``, and all mission item/count/request handlers
  - Refactor: Mission translation and Redis publishing logic (``translation.py``) converted to ``async``; all Redis interactions are now properly awaited
  - Improvement: All internal handler methods in bridge and connection modules converted to ``async`` to ensure the entire message handling pipeline is non-blocking
Version 0.1.62 (2026-02-06)
New Features:

- Added single-motor ESC calibration (``esc_calibration_single``) documentation and Postman API collection entries:

  - Detailed step-by-step workflow: initialization, maximum throttle, minimum throttle, safe stop
  - Emergency stop handling via ``force_cancel_calibration``
  - Separate Postman requests for each calibration step targeting ``{{CALLBACK_URL}}/mqtt-callback/callback``

PX4 Reboot Workflow Improvements:

- Enhanced the ``reboot_px4`` command documentation to clarify heartbeat-based reboot confirmation:

  - Reboot is confirmed via heartbeat drop (PX4 shutting down) followed by heartbeat return (PX4 alive again)
  - Updated the immediate response message to specify that confirmed status is published to ``command/web``
  - Clarified client timing expectations: reboot confirmation can take up to ~35 seconds
  - Improved Postman reboot request descriptions and payloads to reflect the confirmed reboot flow

Documentation & Metadata Updates:

- Updated the petal-user-journey-coordinator documentation version from ``v0.1.8`` to ``v0.1.10``
- Improved Postman collection metadata and preview settings for better usability

Dependency Updates:

- Updated ``petal-user-journey-coordinator`` from ``v0.1.9`` to ``v0.1.10``:

  - Feature: Added ``ESCCalibrationSingleController`` class implementing a step-by-step single-motor ESC calibration workflow with interface setup, per-motor parameter configuration, and throttle control
  - Feature: Introduced ``ESCCalibrationSinglePayload`` Pydantic model with motor index, calibration state, safety timeout, throttle commands, and ESC interface selection
  - Feature: Added ``ESC_CALIBRATION_SINGLE`` operation mode to the ``OperationMode`` enum
  - Integration: Registered ``ESCCalibrationSingleController`` and ``ESCCalibrationSinglePayload`` in the plugin startup routine and command handler setup
Version 0.1.61 (2026-02-03)
Documentation Updates:

- Added comprehensive MQTT Topics Reference sections to petal documentation:

  - petal-user-journey-coordinator: lists all commands received on ``command/edge`` and topics published to ``command/web``
  - petal-flight-log: lists all commands received on ``command/edge`` and topics published to ``command/web``

- Enhanced the ``reboot_px4`` command documentation with the two-phase response pattern:

  - Phase 1: Immediate ``send_command_response`` for command acknowledgement
  - Phase 2: Async ``publish_message`` to ``command/web`` with the reboot status
  - Documented all error response types (``OPERATION_ACTIVE``, ``VALIDATION_ERROR``, ``HANDLER_ERROR``)
  - Added front-end handling instructions for status subscription

Dependency Updates:

- Updated ``petal-user-journey-coordinator`` from ``v0.1.8`` to ``v0.1.9``:

  - Refactor: Refactored ``_reboot_px4_message_handler`` with the two-phase response pattern:

    - Immediate ``send_command_response`` acknowledges command receipt
    - Sequential ``await reboot_autopilot`` executes the reboot
    - ``publish_message`` publishes the final status to ``/petal-user-journey-coordinator/reboot_px4_status``

  - Feature: Added ``RebootPX4StatusPayload`` Pydantic model for structured reboot status payloads
  - Fix: Added a missing ``return`` statement after the ``ValidationError`` handler to prevent fall-through execution
  - Improvement: All error responses (validation, handler errors) are now sent immediately via ``send_command_response``
Version 0.1.60 (2026-01-30)
Plugin Loading & Startup Refactor:

- Separated petal loading into two distinct phases for finer control and clearer logging:

  - ``initialize_petals``: loads and configures petals without starting them
  - ``startup_petals``: starts up and mounts petals to the FastAPI app
  - The original ``load_petals`` function now wraps these two steps

- Updated the main application startup logic to use the new initialization pattern, ensuring petals are loaded and started sequentially and safely in the background

MQTT Callback & Routing Improvements:

- Registered the MQTT callback router under the ``/mqtt-callback`` path only when the MQTT proxy and callbacks are enabled
- Changed the default MQTT callback port to ``9000`` to match the main app's port (previously a dedicated server on port 3005)
- Updated the Postman collection and FastAPI launch configuration for the new callback endpoint URL

Proxy & Threading Enhancements:

- Added explicit thread name prefixes to all proxy thread pools and background threads for easier debugging and log tracing:

  - ``S3BucketProxy``: thread pool naming for S3 operations
  - ``CloudProxy``: thread pool naming for cloud operations
  - ``LocalDbProxy``: thread pool naming for database operations
  - ``MavLinkExternalProxy``: I/O and worker thread naming

- Exposed the ``PETAL_REDIS_WORKER_THREADS`` environment variable to configure Redis proxy worker threads
- Increased Redis proxy ``ThreadPoolExecutor`` workers to prevent blocking listen loops from starving key/value operations
- Added an ``_invoke_callback_safely()`` method in the Redis proxy for proper async callback handling from worker threads using ``asyncio.run_coroutine_threadsafe()``

Health Check Logic:

- Simplified the MavLink proxy health check to consider only the main connection status, excluding the ``leaf_fc_connected`` flag
Dependency Updates:

- Updated ``petal-warehouse`` from ``v0.1.7`` to ``v0.1.8``:

  - Version Management: the ``PetalWarehouse.version`` attribute now dynamically references the package ``__version__`` instead of being hardcoded
  - Thread Naming: the background thread for sending position and yaw data to Blender is now named ``BlenderPositionSender`` for improved debugging and monitoring
Version 0.1.59 (2026-01-27)
Dependency Updates:

- Updated ``leaf-pymavlink`` from ``v0.1.15`` to ``v0.1.16``:

  - Critical Fix: Fixed PyPI wheel builds missing the DroneLeaf ``LEAF_*`` MAVLink messages
  - Root cause was pip's build isolation preventing the ``MDEF`` environment variable from reaching ``setup.py``
  - The CI workflow now uses ``--no-build-isolation`` with explicit dependency installation to ensure custom message definitions are included in wheels

- Updated ``petal-leafsdk`` from ``v0.2.7`` to ``v0.2.9``:

  - Pinned ``leaf-pymavlink`` to ``v0.1.16``
Version 0.1.57 (2026-01-18)
Dependency Updates:

- Updated ``leaf-pymavlink`` to ``v0.1.15``
- Updated ``petal-leafsdk`` from ``v0.2.6`` to ``v0.2.7``:

  - Pinned ``leaf-pymavlink`` to ``v0.1.15``
Version 0.1.56 (2026-01-17)
FTP Download Error Handling Improvements:

- Improved error handling in the ``external.py`` FTP download logic to ensure failed or cancelled downloads raise a ``RuntimeError``, log the error, and clean up partial files
- Added more granular error logging and FTP state reset logic after failures or cancellations, including after non-zero return codes from FTP operations
- Ensured FTP state is reset after each download attempt and after recovering from temp files, preventing state leakage between operations

Dependency Updates:

- Updated ``petal-flight-log`` from ``v0.2.4`` to ``v0.2.5``:

  - Error Handling Improvements:

    - Added specific handling for ``RuntimeError`` during MAVFTP ULog downloads, providing clearer error messages when the remote file cannot be opened
    - Changed logging for failed ULog downloads to exclude exception tracebacks for both MAVLink and MAVFTP errors, making logs less verbose

  - Reliability Enhancements:

    - Ensured flight record status is updated in the cloud database before exceptions are re-raised during sync job errors, improving consistency between job state and record status

  - Logic and Workflow Adjustments:

    - Updated ``start_sync_flight_record`` to always attempt ULog and Rosbag uploads if present, regardless of whether an S3 key already exists
    - Added handling for files from both "pixhawk" and "local" storage types

- Updated ``petal-leafsdk`` from ``v0.2.5`` to ``v0.2.6``:

  - Mission Abort and Drone State Handling:

    - Improved the ``abort`` method to handle abort requests during takeoff or landing states
    - The mission is cancelled locally without sending a stop trajectory to the flight controller during these states, preventing unsafe interruptions
    - Added a new ``is_drone_taking_off`` method in ``fc_status_provider.py``

  - MQTT Communication and Error Handling:

    - Refactored all MQTT command handlers to use the new ``send_command_response`` method for sending responses and errors
    - Replaced direct calls to ``publish_message`` for more consistent and reliable client communication

  - Startup and Proxy Initialization:

    - Improved startup logic by separating proxy initialization from asynchronous MQTT topic setup
    - Added more robust logging and retry logic for MQTT proxy availability
    - Simplified the ``async_startup`` method, delegating complex logic to the main application

  - Mission Queue and RTL Safety:

    - Reduced the mission queue size from 10 to 1 to avoid overloading the mission manager
    - Decreased the RTL (Return-To-Launch) mission return speed from 0.5 m/s to 0.1 m/s for safer drone returns

- Updated ``petal-qgc-mission-server`` from ``v0.1.2`` to ``v0.1.3``:

  - Mission Translation Logic Updates:

    - Replaced all uses of ``calculate_yaw_to_target`` with ``calculate_yaw_to_target_ENU`` in ``mission_translator.py``
    - Affects takeoff, waypoint, land, and RTL command handling to ensure yaw calculations consistently use the ENU coordinate system

  - Logging Changes:

    - Changed the logging level for local position NED updates in ``gps.py`` from ``info`` to ``debug``, reducing log verbosity for frequent updates
Version 0.1.54 (2026-01-15)
Dependency Updates:

- Updated ``petal-leafsdk`` from ``v0.2.3`` to ``v0.2.4``:

  - Pinned ``leaf-pymavlink`` to ``v0.1.13`` for MAVLink compatibility stability
Version 0.1.53 (2026-01-15)
Dependency Updates:

- Updated ``petal-flight-log`` from ``v0.2.3`` to ``v0.2.4``:

  - Feature: Improved progress tracking with weighted job distribution:

    - The ULog download accounts for 90% of sync progress when present
    - S3 upload jobs split the remaining 10% (5% each) when a ULog download exists
    - S3 jobs split 100% evenly (50% each, or 100% if a single job) when there is no ULog download

  - Added a ``_calculate_job_weights()`` method for dynamic weight calculation
  - Updated ``_monitor_sub_job_progress()`` to use weight-based progress slices
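The weighting rules above can be sketched in a few lines. The function name echoes the changelog's ``_calculate_job_weights()``, but the signature and body here are assumptions, not the petal's actual code.

```python
# Sketch of the weighted job distribution: ULog download dominates when present.
def calculate_job_weights(has_ulog: bool, n_s3_jobs: int) -> list[float]:
    """Return per-job progress weights (ULog first, if present) summing to 1.0."""
    if has_ulog and n_s3_jobs:
        return [0.90] + [0.10 / n_s3_jobs] * n_s3_jobs  # e.g. [0.9, 0.05, 0.05]
    if has_ulog:
        return [1.0]                                    # ULog download only
    if n_s3_jobs:
        return [1.0 / n_s3_jobs] * n_s3_jobs            # e.g. [0.5, 0.5] or [1.0]
    return []
```

The monitor then maps each sub-job's 0–100% progress into its weighted slice of the overall bar.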
Version 0.1.52 (2026-01-14)
Configuration Updates:

- Updated ``proxies.yaml`` to replace ``petal-mission-planner`` with ``petal-leafsdk``
- Updated plugin entry points for ``petal-leafsdk`` and ``petal-qgc-mission-server``
- Revised petal dependencies for more accurate service configuration

New Proxy Functionality:

- Added an async ``head_object`` method to ``bucket.py`` for checking S3 object existence and retrieving metadata
- Added a ``build_request_message_command`` method to ``external.py`` for requesting specific MAVLink messages
- Added a ``build_shell_serial_control_msgs`` method to ``external.py`` for sending shell commands to PX4 via MAVLink

Logging Improvements:

- Enhanced application startup and shutdown logs in ``main.py`` with clearer, more prominent status messages

Health Model Validation:

- Migrated all Pydantic v1 ``@validator`` decorators to Pydantic v2 ``@field_validator`` in ``models/health.py``:

  - Added the ``@classmethod`` decorator and proper type hints to all validators
  - Updated the import from ``validator`` to ``field_validator``
  - Resolves deprecation warnings for Pydantic v2.0+ (``@validator`` is slated for removal in v3.0)
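The migration pattern looks like this. The model and field below are generic illustrations, not the project's actual health validators.

```python
# Pydantic v2 style: @field_validator replaces v1's @validator, and the
# validator must also be decorated with @classmethod.
from pydantic import BaseModel, field_validator

class HealthModel(BaseModel):  # illustrative stand-in for the health models
    status: str

    # v1 equivalent was:  @validator("status")
    @field_validator("status")
    @classmethod
    def check_status(cls, v: str) -> str:
        if v not in {"healthy", "degraded", "unhealthy"}:
            raise ValueError(f"unknown status: {v}")
        return v
```

Aside from the import and the ``@classmethod`` stacking, the validator body typically carries over unchanged.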
Dependency Updates:

- Upgraded ``leaf-pymavlink``, ``petal-leafsdk``, ``petal-user-journey-coordinator``, and ``petal-qgc-mission-server`` in ``pyproject.toml``
- Added ``petal-qgc-mission-server`` as a local editable install in development dependencies
- Updated ``petal-user-journey-coordinator`` from ``v0.1.7`` to ``v0.1.8``:

  - Fix: Bulk parameter setting now handles floating-point precision issues correctly
  - Uses ``math.isclose()`` with a relative tolerance (``1e-5``) for float32/float64 comparison
  - Resolves false validation failures for parameters like ``0.2`` vs ``0.20000000298023224``
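The failure mode and the fix can be demonstrated directly: a float32 value echoed back over MAVLink differs from its float64 original, and a relative tolerance absorbs that representation error.

```python
import math

requested = 0.2                    # value requested (float64)
confirmed = 0.20000000298023224    # float32 round-trip echoed by the autopilot

assert requested != confirmed                             # exact comparison fails
assert math.isclose(requested, confirmed, rel_tol=1e-5)   # tolerant comparison passes
```

``rel_tol=1e-5`` is far looser than float32's ~1e-7 relative precision, so legitimate round-trips pass while genuinely different values still fail.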
- Updated ``petal-flight-log`` from ``v0.2.1`` to ``v0.2.3``:

  - Refactor: Centralized table name constants (``FLIGHT_RECORD_TABLE``, ``LEAF_FC_RECORD_TABLE``)
  - Replaced all hardcoded table name strings with the constants in ``jobs.py`` and ``plugin.py``
  - Plugin version now set dynamically from the package ``__version__``

- Updated ``leaf-pymavlink`` to ``v0.1.14``:

  - Added MAVLink definitions for the petal-leafsdk v0.2.3 mission states and actions
- Updated ``LeafSDK`` to ``v0.3.3``:

  - Refactored class naming for improved clarity
  - Distributed state synchronization
  - Mission state analysis documentation
  - Mission planning and trajectory updates
  - Refactored mission step handling
  - Joystick mode updates to match MAVLink
  - New mission configuration format support
- Updated ``petal-leafsdk`` to ``v0.2.3``:

  - Major Refactor: Mission flow and state management overhaul
  - Refactored mission execution logic with FSM and heartbeat modules
  - Distributed state synchronization and centralized state management
  - New MAVLink definitions integration
  - Mission behavior bug fixes
  - Enhanced mission step handling and joystick mode functionality
  - Comprehensive documentation and testing utilities
  - CI/CD pipeline integration
- Updated ``petal-qgc-mission-server`` to ``v0.1.2``:

  - New Feature: Working adapter for QGC mission planning and execution updates
  - Renamed ``MissionStep`` to ``MissionPlanStep`` for consistency
  - Renamed the plugin class from ``QGCMissionAdapterPetal`` to ``PetalQGCMissionServer``
  - Added a ``calculate_yaw_to_target`` function for NED-frame yaw calculations
  - Renamed the previous function to ``calculate_yaw_to_target_ENU`` for clarity
Version 0.1.51 (2026-01-08)
Breaking Changes:

- Renamed ``flight-log-petal`` to ``petal-flight-log`` throughout the codebase for naming consistency:

  - Updated all references in ``proxies.yaml`` (enabled petals, proxy mappings, dependencies)
  - Updated API documentation and example requests
  - Updated the Postman collection and environment files
Dependency Updates:

- Updated ``petal-flight-log`` from ``v0.2.0`` to ``v0.2.1``:

  - Critical Fix: Corrected the MQTT topic prefix from ``flight-log-petal`` to ``petal-flight-log`` to match the petal naming convention
  - Critical Fix: Corrected the MQTT topic prefix from ``petal-user-journey-coordinator`` to ``petal-flight-log``, which had been causing a conflict with the user journey coordinator
  - Resolves MQTT subscription issues where topics were not matching expected patterns

- Updated ``petal-user-journey-coordinator`` from ``v0.1.6`` to ``v0.1.7``:

  - Critical Fix: Normalized the MQTT topic prefix to ``petal-user-journey-coordinator`` for consistency
  - Ensures proper MQTT message routing and subscription handling

- Updated the ``petal-qgc-mission-server`` mapping and dependencies in ``proxies.yaml``
MAVLink Proxy Configuration:

- Added required MAVLink system identification parameters:

  - ``SOURCE_SYSTEM_ID``: system ID for MAVLink messages (default: ``1``)
  - ``SOURCE_COMPONENT_ID``: component ID for MAVLink messages (default: ``1``)
  - Exposed via environment variables and ``ProxyConfig`` in ``src/petal_app_manager/__init__.py``
  - Passed to the ``MavLinkExternalProxy`` constructor in ``main.py`` and ``external.py``

- Changed the default log directory from ``/var/log/petal-app-manager`` to ``logs`` for development convenience

Health Model Validation:

- Refactored all Pydantic health models in ``models/health.py`` to use Pydantic v2's ``@field_validator`` decorator:

  - Updated validators for ``SystemHealthStatus``, ``PetalHealthStatus``, ``RedisHealthStatus``
  - Updated validators for ``MqttHealthStatus``, ``MavLinkHealthStatus``, ``HealthStatus``
  - Ensures compatibility with Pydantic v2 and removes deprecation warnings
  - Maintains backward compatibility with existing health status enumeration values
Testing Improvements:

- Fixed unit tests in ``test_external_proxy.py`` to include the required ``source_system_id`` and ``source_component_id`` parameters
- Fixed unit tests in ``test_mavlink_proxy.py`` to use the updated ``MavLinkExternalProxy`` constructor signature
- Updated the Postman environment file with new test variables (``test_flight_record``, ``CALLBACK_URL``, ``TS_CLIENT_URL``)
Migration Guide:

If you have existing configurations or integrations referencing ``flight-log-petal``, update them to ``petal-flight-log``:

- Environment variable names
- MQTT topic subscriptions
- API endpoint references
- Configuration files
Version 0.1.50 (2026-01-05)
Configuration Enhancements:

- All environment variables now use the ``PETAL_`` prefix to avoid conflicts with other applications
- Added a ``PETAL_LOG_DIR`` environment variable for configuring the log file directory:

  - Development: ``logs`` (relative to the project directory)
  - Production: ``/home/droneleaf/.droneleaf/petal-app-manager``
MAVLink Proxy Improvements:
- Added `set_params_bulk_lossy` async method for efficient bulk parameter setting over lossy links:
  - Windowed sends with a configurable `max_in_flight` parameter
  - Automatic periodic resend of unconfirmed parameters
  - Retry cap with a configurable `max_retries`
  - Optional parameter type specification (`UINT8`, `INT16`, `REAL32`, etc.)
  - Confirmation via echoed `PARAM_VALUE` messages
- Added `get_params_bulk_lossy` async method for efficient bulk parameter retrieval:
  - Uses `PARAM_REQUEST_READ` by name with windowed requests
  - Periodic resend of pending requests up to the retry limit
  - Returns partial results on timeout for resilience
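The windowed send-and-confirm loop behind these methods can be sketched as follows. This is a simplified, synchronous stand-in: the `send` and `poll_confirmed` callables are hypothetical placeholders for the real MAVLink transport, not the proxy's actual API.

```python
from typing import Callable, Dict, Set

def send_params_windowed(
    params: Dict[str, float],
    send: Callable[[str, float], None],      # transmit a PARAM_SET (may be lost)
    poll_confirmed: Callable[[], Set[str]],  # names echoed back via PARAM_VALUE
    max_in_flight: int = 5,
    max_retries: int = 3,
) -> Dict[str, bool]:
    """Send parameters in windows, resending unconfirmed ones up to a retry cap."""
    pending = dict(params)
    retries = {name: 0 for name in params}
    result = {name: False for name in params}
    while pending:
        # Fill the window with up to max_in_flight unconfirmed parameters.
        for name, value in list(pending.items())[:max_in_flight]:
            send(name, value)
            retries[name] += 1
        # Drop anything the autopilot echoed back as confirmed.
        for name in poll_confirmed():
            if name in pending:
                result[name] = True
                del pending[name]
        # Give up on parameters that exhausted their retry budget
        # (partial results are still returned).
        for name in [n for n in pending if retries[n] >= max_retries]:
            del pending[name]
    return result
```

The same window/resend/partial-result shape applies to the read path, with `PARAM_REQUEST_READ` in place of `PARAM_SET`.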
- Added `reboot_autopilot` async method to `MavLinkExternalProxy` for rebooting the autopilot (PX4/ArduPilot):
  - Sends the `MAV_CMD_PREFLIGHT_REBOOT_SHUTDOWN` command and waits for `COMMAND_ACK`
  - Optional `reboot_onboard_computer` parameter to also reboot the onboard computer
  - Returns a structured `RebootAutopilotResponse` with success status, status code, and reason
  - Fallback verification via heartbeat drop/return detection when no ACK is received
  - Comprehensive status codes for all failure scenarios (denied, rejected, unsupported, etc.)
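The ACK-to-status mapping can be sketched as below. The enum values, field names, and reason strings are illustrative stand-ins for `RebootAutopilotResponse`, not its actual definition; the `MAV_RESULT` constants are from the MAVLink common message set.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class RebootStatus(Enum):
    ACCEPTED = "accepted"
    DENIED = "denied"
    UNSUPPORTED = "unsupported"
    FAILED = "failed"
    TIMEOUT = "timeout"

# MAV_RESULT values from the MAVLink common message set.
MAV_RESULT_ACCEPTED, MAV_RESULT_DENIED = 0, 2
MAV_RESULT_UNSUPPORTED = 3

@dataclass
class RebootResponse:
    success: bool
    status: RebootStatus
    reason: str

def classify_ack(result: Optional[int]) -> RebootResponse:
    """Translate a COMMAND_ACK result (or None when no ACK arrived) into a response."""
    if result is None:
        # No ACK: the real proxy falls back to heartbeat drop/return detection here.
        return RebootResponse(False, RebootStatus.TIMEOUT, "no COMMAND_ACK received")
    mapping = {
        MAV_RESULT_ACCEPTED: (True, RebootStatus.ACCEPTED, "reboot accepted"),
        MAV_RESULT_DENIED: (False, RebootStatus.DENIED, "reboot denied"),
        MAV_RESULT_UNSUPPORTED: (False, RebootStatus.UNSUPPORTED, "command unsupported"),
    }
    ok, status, reason = mapping.get(result, (False, RebootStatus.FAILED, f"result {result}"))
    return RebootResponse(ok, status, reason)
```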
Petal Loading Architecture:
- Updated the `proxies.yaml` configuration to support two distinct petal loading strategies:
  - `startup_petals`: Petals loaded synchronously during server startup (blocking)
    - Critical petals that must be available before the server accepts requests
    - The server waits for these petals to fully initialize
    - Example: `petal-user-journey-coordinator`
  - `enabled_petals`: Petals loaded asynchronously after server startup (non-blocking)
    - A background task is spawned after the server is ready to accept requests
    - Loads petals one by one without blocking the main event loop
    - Reduces server startup time and improves responsiveness
    - Example: `flight-log-petal`, `petal-warehouse`, `petal-mission-planner`
- Refactored petal async startup into a reusable `_handle_petal_async_startup()` helper
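The two loading strategies might look like this in `proxies.yaml` — the key layout is a sketch based on the names above, and the exact schema may differ:

```yaml
# Illustrative proxies.yaml fragment
startup_petals:            # loaded synchronously; block server startup
  - petal-user-journey-coordinator
enabled_petals:            # loaded one by one in a background task after startup
  - flight-log-petal
  - petal-warehouse
  - petal-mission-planner
```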
Health Monitoring Enhancements:
- Added a `PetalHealthInfo` model to track individual petal status:
  - `name`: Petal identifier
  - `status`: One of `loaded`, `loading`, `failed`, `not_loaded`
  - `version`: Petal version if available
  - `is_startup_petal`: Whether this is a critical startup petal
  - `load_time`: ISO timestamp when the petal was loaded
  - `error`: Error message if the petal failed to load
- Extended `HealthMessage` to include a `petals` array with real-time petal loading status
- The health publisher now reports petal loading progress during the background loading phase
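The per-petal record above has roughly this shape — sketched here as a dataclass for brevity (the real model is Pydantic); the field names follow the changelog, the defaults are assumptions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PetalHealthInfo:
    name: str                        # petal identifier
    status: str                      # one of: loaded, loading, failed, not_loaded
    version: Optional[str] = None    # petal version if available
    is_startup_petal: bool = False   # critical startup petal?
    load_time: Optional[str] = None  # ISO timestamp when the petal was loaded
    error: Optional[str] = None      # error message if loading failed
```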
S3 Bucket Proxy Improvements:
- Added a `move_file` async method to `bucket.py` for moving (renaming) files within the S3 bucket; it performs a copy followed by a delete operation.
- Enhanced the `upload_file` method to accept an optional `custom_s3_key` parameter, allowing callers to specify the exact S3 key for uploads.
Redis Proxy Improvements:
- Added a `scan_keys` async method to `redis.py` to efficiently scan and return keys matching a given pattern, supporting pagination via the `count` parameter.
- Changed Redis set-operation logging from info to debug level to reduce log verbosity for routine key writes.
Version 0.1.49 (2026-01-05)
Bug Fixes:
- Removed redundant log streaming utility files and references:
  - Deleted `log_streamer.py` from the `utils/` directory
  - Removed the import and endpoint registration for log streaming in `config_api.py` and `main.py`
Version 0.1.48 (2025-12-31)
Bug Fixes:
- Added `mqtt` as a dependency for `petal-mission-planner` in `proxies.yaml`
Version 0.1.47 (2025-12-31)
Bug Fixes:
Fixed all Petal plugin keys in `proxies.yaml` to match the correct `__name__` attribute in `plugin.py`:
- `flight_records` → `flight-log-petal`
- `petal_warehouse` → `petal-warehouse`
- `mission_planner` → `petal-mission-planner`
- `petal_user_journey_coordinator` → `petal-user-journey-coordinator`
- `qgc_petal` → `petal-qgc-mission-server`
Version 0.1.46 (2025-12-31)
Bug Fixes:
- Made the `robot_type_id` field in the `LocalDbMachineInfo` model (`health.py`) optional, allowing it to be `None` if not provided.
Version 0.1.45 (2025-11-23)
Architecture Enhancements:
Multi-Threaded MAVLink Processing - Significant performance improvements for MAVLink message handling:
ExternalProxy and MavLinkExternalProxy: Now use I/O thread + multiple worker threads architecture
I/O Thread: Dedicated thread for reading/writing MAVLink messages (non-blocking)
Worker Threads: Configurable pool of threads for processing handlers in parallel (default: 4)
Thread-Safe Message Buffer: Deque-based buffer with thread-safe enqueue/dequeue operations
Configuration: `MAVLINK_WORKER_THREADS` environment variable (default: 4)
Performance: 2x throughput improvement with parallel handler processing
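The I/O-thread-plus-worker-pool layout can be sketched as below. This uses the stdlib `queue.Queue` for brevity (the proxy uses a thread-safe deque), and the function name and shutdown sentinel are illustrative:

```python
import queue
import threading
from typing import Callable, List, Tuple

def start_workers(handle: Callable, n_workers: int = 4) -> Tuple[queue.Queue, List[threading.Thread]]:
    """Spawn n worker threads that drain a shared message buffer in parallel.

    The I/O thread (not shown) would enqueue parsed MAVLink messages into `buf`.
    """
    buf: queue.Queue = queue.Queue()

    def worker():
        while True:
            msg = buf.get()
            if msg is None:  # sentinel: shut this worker down
                break
            handle(msg)      # run the registered handler(s) for this message
            buf.task_done()

    threads = [threading.Thread(target=worker, daemon=True) for _ in range(n_workers)]
    for t in threads:
        t.start()
    return buf, threads
```

Keeping reads/writes on a dedicated I/O thread while handlers run in the pool is what prevents a slow handler from stalling message ingestion.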
Resilient Proxy Startup - Enhanced reliability and stability:
Non-Blocking Startup: MQTT and Cloud proxies no longer crash the FastAPI server on connection failures
Graceful Degradation: Proxies log warnings and remain inactive until dependencies are available
Background Monitoring: Automatic retry tasks monitor and reconnect failed proxies
Configurable Retry Intervals: All timeout/retry values centralized in the `ProxyConfig` class
Environment Control: Override timeouts via environment variables without code changes
Configuration Management:
Centralized ProxyConfig Class - New configuration section for proxy connection management:
- `MQTT_RETRY_INTERVAL` - Monitoring task retry interval (default: 10.0 seconds)
- `CLOUD_RETRY_INTERVAL` - Cloud proxy retry interval (default: 10.0 seconds)
- `MQTT_STARTUP_TIMEOUT` - MQTT startup timeout (default: 5.0 seconds)
- `CLOUD_STARTUP_TIMEOUT` - Cloud token fetch timeout (default: 5.0 seconds)
- `MQTT_SUBSCRIBE_TIMEOUT` - Topic subscription timeout (default: 5.0 seconds)
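The "environment variable overrides a default" pattern behind these settings is straightforward; the names mirror the list above, while the class layout itself is only a sketch:

```python
import os

class ProxyConfig:
    """Centralized proxy timeouts/retries; env vars override the defaults."""
    MQTT_RETRY_INTERVAL = float(os.getenv("MQTT_RETRY_INTERVAL", "10.0"))
    CLOUD_RETRY_INTERVAL = float(os.getenv("CLOUD_RETRY_INTERVAL", "10.0"))
    MQTT_STARTUP_TIMEOUT = float(os.getenv("MQTT_STARTUP_TIMEOUT", "5.0"))
    CLOUD_STARTUP_TIMEOUT = float(os.getenv("CLOUD_STARTUP_TIMEOUT", "5.0"))
    MQTT_SUBSCRIBE_TIMEOUT = float(os.getenv("MQTT_SUBSCRIBE_TIMEOUT", "5.0"))
```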
Hybrid Petal Loading - Massive performance improvement for petal initialization:
Direct Path Import: Load petals from `module.submodule:ClassName` paths (~0.002 ms)
Entry Point Fallback: Falls back to traditional entry point discovery if the path fails (~67 ms)
4355x Speedup: Direct path loading is 4355 times faster than entry point discovery
Configuration: Define petal paths in `proxies.yaml` under the `petals` section
Health Monitoring Updates:
Enhanced Thread Tracking - Updated health check models for multi-threaded architecture:
- `MavlinkWorkerThreadInfo`: Separate tracking for the I/O thread and worker threads
  - `io_thread_running`: Boolean status for the I/O thread
  - `io_thread_alive`: Health status for the I/O thread
  - `worker_threads_running`: Boolean status for the worker threads
  - `worker_thread_count`: Number of configured worker threads
  - `worker_threads_alive`: Count of healthy worker threads
Stability Improvements:
Fixed server freeze during startup when TypeScript MQTT client is unavailable
Fixed 30-second timeout blocking in MQTT proxy subscription operations
Improved error handling for missing organization IDs during proxy startup
Enhanced monitoring tasks with proper timeout protection
All proxy operations respect configurable timeout values
Developer Benefits:
Faster petal loading during development (4355x speedup)
No server crashes when cloud/MQTT services are unavailable
Easy timeout/retry configuration via environment variables
Better visibility into proxy connection status via health endpoints
Improved multi-threading performance for high-throughput scenarios
Version 0.1.44 (2025-11-07)
Configuration Enhancements:
MQTTProxy Topic Configuration - MQTT topic names now configurable via environment variables
Added environment-configurable topic parameters to the MQTTProxy class:
- `command_edge_topic` - Configurable via `COMMAND_EDGE_TOPIC` (default: `command/edge`)
- `response_topic` - Configurable via `RESPONSE_TOPIC` (default: `response`)
- `test_topic` - Configurable via `TEST_TOPIC` (default: `command`)
- `command_web_topic` - Configurable via `COMMAND_WEB_TOPIC` (default: `command/web`)
- Topics are now read from the `Config` class, enabling direct environment control
- Improved deployment flexibility across different MQTT broker configurations
Developer Benefits:
- Simplified MQTT topic customization for different environments
- Enhanced configuration management without code changes
- Better separation of configuration from implementation
Version 0.1.43 (2025-11-05) - Hotfix
Breaking Changes:
MQTTProxy Refactoring ⚠️ Code Breaking Changes ⚠️
- Handler-based subscription model replaces arbitrary topic subscriptions
- Petals now register handlers for the `command/edge` topic using `register_handler(callback)`
- All command messages flow through registered handlers
- Single subscription per petal with command-based routing
Removed Public Methods:
- `subscribe_to_topic()` - Now private
- `unsubscribe_from_topic()` - Now private
- `subscribe_pattern()` - Removed (use command-based routing instead)
New Public Methods:
```python
# Register a handler for the command/edge topic
def register_handler(handler: Callable) -> str:
    """Returns a subscription_id for later unregistration"""

# Unregister a handler
def unregister_handler(subscription_id: str) -> bool:
    """Remove a handler using its subscription_id"""

# Publish to the command/web topic
async def publish_message(payload: Dict[str, Any], qos: int = 1) -> bool:
    """Publish message to the command/web topic"""

# Send a response to the response topic
async def send_command_response(message_id: str, response_data: Dict[str, Any]) -> bool:
    """Send command response with automatic topic routing"""
```
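The subscription-id bookkeeping behind `register_handler`/`unregister_handler` can be sketched as a dict keyed by UUID. The class name and `dispatch` method here are illustrative — the real proxy also routes by command type:

```python
import uuid
from typing import Any, Callable, Dict

class HandlerRegistry:
    """Minimal sketch: register/unregister handlers and fan out payloads."""

    def __init__(self):
        self._handlers: Dict[str, Callable] = {}

    def register_handler(self, handler: Callable) -> str:
        sub_id = str(uuid.uuid4())       # opaque id returned to the petal
        self._handlers[sub_id] = handler
        return sub_id

    def unregister_handler(self, sub_id: str) -> bool:
        return self._handlers.pop(sub_id, None) is not None

    def dispatch(self, payload: Any) -> None:
        # Copy the values so handlers may unregister during dispatch.
        for handler in list(self._handlers.values()):
            handler(payload)
```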
Dependencies Updated:
- Latest petal-leafsdk compatibility
- Latest petal-user-journey-coordinator compatibility
Version 0.1.42 (2025-11-03)
Health Reporting Enhancements:
The `/health/overview` endpoint now contains version information for each Petal component:
- `petal_leafsdk`
- `petal_flight_log`
- `petal_warehouse`
- `petal_user_journey_coordinator`
- `petal_qgc_mission_server`

Components report `"not installed"` when not available
Dependency Updates:
- Updated petal-user-journey-coordinator dependency to version v0.1.3
- Bumped application version from 0.1.41 to 0.1.42
Version 0.1.41 (2025-11-02)
Bug Fixes:
- Fixed health publishing error: 'MavlinkProxyHealth' object has no attribute 'details'
- Mavlink connection issues now report INFO messages instead of ERROR logs
- Improved error handling for unhealthy mavlink connections
Version 0.1.39 (2025-10-23)
New Features:
- /health/overview endpoint for accessing the Petal App Manager version
- Dynamic version retrieval using `import petal_app_manager; print(petal_app_manager.__version__)`
- Version information added to the controller dashboard message /controller-dashboard/petals-status
API Enhancements:
- The controller dashboard now includes a `version` field in status responses:

```json
{
  "title": "Petal App Manager",
  "component_name": "petal_app_manager",
  "status": "healthy",
  "version": "0.1.39",
  "message": "Good conditions",
  "timestamp": "2025-10-23T14:35:30.143747",
  "services": [...]
}
```
Version 0.1.38 (2025-10-23)
Stability Improvements:
- MAVFTP reliability improvements for Pixhawk SD card file system operations
- Enhanced download and file system information request stability
- Overall Petal App Manager stability improvements
Related Releases:
- petal-user-journey-coordinator v0.1.1 - Fixed square test JSON response compatibility bug
Version 0.1.37 (2025-10-22)
Bug Fixes:
- Cloud proxy error message improvements
- Fixed MAVFTP reliability issues affecting overall system reliability
- Fixed petal-user-journey-coordinator bugs
Stability:
- Improved overall reliability of petal-app-manager
- Enhanced MAVFTP communication stability
Version 0.1.31 (2025-09-25)
New Features:
- MQTT Middleware Proxy - Unified interface for MQTT communications
- Organization Manager - Fetches organization ID without DynamoDB dependency
- LeafFC Heartbeat Check - Added to logs and /health/detailed endpoint
Stability Improvements:
- Enhanced server startup and shutdown sequence (avoiding deadlocks)
- Improved system reliability and error handling
Related Releases:
Petal User Journey Coordinator - MQTT integration for web client applications:
Multiple handlers for PX4 parameter management
ESC calibration with keep-alive streaming
Real-time telemetry streaming to web client
Debug flag for square test with plotting and data dump
Version 0.1.29 (2025-08-28)
New Features:
MQTT Middleware Proxy:
Reduces integration complexity for local applications
Unified interface to MQTT communications
Enables faster development cycles
Centralizes communication logic for easier maintenance
Petal & Proxy Control Dashboard:
Unified control and transparency for all proxies and petals
Real-time health monitoring and dependency tracking
Centralized enable/disable controls and API testing
Real-time log streaming and filtering
Accessible at http://localhost/home/petals-proxies-control
Business Value:
- Faster development and maintenance cycles
- Scalable and flexible operations
- Reduced Petal App Manager overhead
Version 0.1.28 (2025-08-17)
New Features:
- Redis Pattern PubSub Support - Enhanced communication reliability
- LeafFC v1.4.0 Compatibility - Improved internal DroneLeaf system communication
- Log Output Configuration - Configurable log level routing via config.json
Bug Fixes:
- S3 bucket access credential refresh from session manager
- HEAR_CLI petal-app-manager-prepare-arm and petal-app-manager-prepare-sitl fixes
Improvements:
- Enhanced communication between LeafFC and LeafSDK
- Improved log management and debugging capabilities
Version 0.1.23 (2025-07-31)
Major Updates:
- Cloud Integration: Full implementation of cloud DynamoDB and S3 bucket proxies
- Flight Log Integration: Latest petal-flight-log v0.1.4 for cloud syncing endpoints
- Error Management: Routes for clearing error flags from edge devices
- MAVLink Improvements: Enhanced proxy communication with threading locks for non-thread safe operations
Minor Updates:
- Centralized configuration management
- Updated petal-warehouse v0.1.3 with pymavlink bug fixes
Related Releases:
- petal-flight-log v0.1.4 - Flight record management and cloud syncing
- LeafSDK v0.1.5 - Mission flow control and progress updates (pause, resume, cancel)
Version 0.1.18 (2025-07-29)
New Features:
- Burst Message Support - Optional burst message capability
- Message Timeout Control - Configurable timeouts to reduce CPU overhead
- Detailed Health Check - /health/detailed endpoint for field status checks
- Petal Template - HEAR_CLI petal initialization template
HEAR_CLI Integration:
```shell
hear-cli local_machine run_program --p petal_init
```
Related Releases:
LeafSDK Petal v0.1.5:
- Upgraded trajectory polynomial coefficient generation
- Burst MAVLink messages for improved communication reliability
- Addressed robustness issues in LeafSDK functionalities
Known Limitations:
- Trajectory sampling causes jitter (needs a LeafFC-side implementation)
- MAVLink communication over mavlink-router lacks reliability (Redis recommended)
Version 0.1.5 (2025-07-03) - First Stable Release
Milestone Release:
- First stable release of Petal App Manager
- Available on PyPI: https://pypi.org/project/petal-app-manager/
- HEAR_CLI support for development and production deployment
Related Petal Releases:
Flight Log Petal v0.1.1:
- Added the ability to cancel Pixhawk downloads
- Prevents MAVLink connection interruption

LeafSDK Petal v0.1.0:
- Mission planning library integration
- Easy deployment in the DroneLeaf ecosystem
- External application compatibility

Warehouse Management Petal v0.1.0:
- Real-time drone publishing via MAVLink over WebSocket
- Blender visualization integration
Version 0.1.0 (2025-06-22) - Initial Release
Milestone:
- Passed the initial testing phase
- First public release

Core Capabilities:
- MAVLink connection management abstraction
- Local DynamoDB database integration
- Redis cache support
- Cloud infrastructure communication
- ROS1 topic integration

Availability:
- PyPI: https://pypi.org/project/petal-app-manager/
- GitHub: https://github.com/DroneLeaf/petal-app-manager.git

Business Value:
- Accelerated development cycles
- Reduced implementation costs
- Low-level code abstraction
- Simplified drone application development