Known Issues
Petal Issues
Petal Name Must Start with petal-*
Symptoms:
Petal loads but logging doesn’t work properly
Log messages don’t appear in expected log files
Petal functionality seems degraded or incomplete
Logger instances return incorrect or missing information
Cause:
The Petal App Manager framework expects all petals to follow the petal-* naming convention (kebab-case). Internal systems rely on this naming pattern for proper initialization, logging configuration, and other core functionality.
Solutions:
Rename your petal to follow the convention:
❌ Incorrect names:
`example-petal`
`telemetry`
`my-plugin`
✅ Correct names:
`petal-example`
`petal-telemetry`
`petal-my-plugin`
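The naming rule above can be turned into a quick pre-flight check before packaging. A minimal sketch; the exact kebab-case pattern is an assumption based on the convention described here, not taken from the framework's source:

```python
import re

# Assumed kebab-case pattern for the petal-* convention described above
PETAL_NAME = re.compile(r"^petal-[a-z0-9]+(?:-[a-z0-9]+)*$")

def is_valid_petal_name(name: str) -> bool:
    """Return True if the name follows the petal-* kebab-case convention."""
    return PETAL_NAME.fullmatch(name) is not None

print(is_valid_petal_name("petal-example"))  # True
print(is_valid_petal_name("my-plugin"))      # False
```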
Update all references:
```toml
# pyproject.toml
[project]
name = "petal-example"  # Must start with petal-

[project.entry-points."petal.plugins"]
petal_example = "petal_example.plugin:PetalExample"
```
Reinstall the petal:
```bash
cd ~/petal-app-manager-dev/petal-app-manager
pdm remove old-name
pdm add -e ../petal-example --group dev
```
Petal Not Getting Loaded
Symptoms:
Petal doesn’t appear in the `/health/detailed` endpoint
Endpoints defined in your petal return 404
No log messages from your petal during startup
`curl http://localhost:9000/your-petal/health` returns 404
Cause:
The most common cause is that the petal is not registered in the proxies.yaml configuration file. Petal App Manager only loads petals that are explicitly enabled in this file.
Solutions:
Check if petal is registered in proxies.yaml:
```bash
cd ~/petal-app-manager-dev/petal-app-manager
# or for production: cd ~/.droneleaf/petal-app-manager
cat proxies.yaml
```
Add your petal to enabled_petals:
```yaml
enabled_petals:
  - flight_records
  - petal_warehouse
  - mission_planner
  - petal_user_journey_coordinator
  - petal_example  # Add your petal here (use entry point name)
```
Verify entry point name matches:
The name in `enabled_petals` must match the entry point key in your `pyproject.toml`:

```toml
[project.entry-points."petal.plugins"]
petal_example = "petal_example.plugin:PetalExample"
# ^^^^^^^^^^^^
# This name goes in proxies.yaml
```
Ensure required proxies are enabled:
Check that all proxies listed in your petal’s `get_required_proxies()` are enabled:

```yaml
enabled_proxies:
  - redis        # If your petal requires redis
  - ext_mavlink
  - db

petal_dependencies:
  petal_example:
    - redis      # List the same proxies here
```
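The consistency rule above (every proxy a petal depends on must also appear in `enabled_proxies`) is easy to verify mechanically. A sketch over plain dictionaries, assuming the YAML has already been parsed into Python values:

```python
def missing_proxies(enabled_proxies, petal_dependencies):
    """Map each petal to the required proxies that are not enabled."""
    enabled = set(enabled_proxies)
    return {
        petal: sorted(set(deps) - enabled)
        for petal, deps in petal_dependencies.items()
        if set(deps) - enabled
    }

# Example mirroring the proxies.yaml layout shown above
print(missing_proxies(
    enabled_proxies=["ext_mavlink", "db"],
    petal_dependencies={"petal_example": ["redis"]},
))  # {'petal_example': ['redis']}
```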
Verify petal is installed:
```bash
pdm list | grep petal-example
```
Restart Petal App Manager:
```bash
# If running manually
# Stop with Ctrl+C and restart
uvicorn petal_app_manager.main:app --host 0.0.0.0 --port 9000 --log-level info --no-access-log --http h11 --reload

# If running as a service
sudo systemctl restart petal-app-manager
```
Python 3.11 Issues
Python 3.11 Not Found
Symptoms:
`python3.11: command not found`
PDM fails with “No Python interpreter found”
Installation scripts fail at Python detection step
Solutions:
Verify Python 3.11 installation:
```bash
which python3.11
python3.11 --version
```
Check symlinks:
```bash
ls -la /usr/bin/python3.11
ls -la /usr/local/bin/python3.11
```
Recreate symlinks if missing:
```bash
sudo ln -sf /home/$USER/miniforge3/bin/python3.11 /usr/bin/python3.11
sudo ln -sf /home/$USER/miniforge3/bin/python3.11 /usr/local/bin/python3.11
```
If any of the above fails, rerun the HEAR-CLI command:
hear-cli local_machine run_program --p petal_app_manager_prepare_arm
or for x86_64:
hear-cli local_machine run_program --p petal_app_manager_prepare_sitl
PDM Issues
PDM Installation Fails
Symptoms:
`pdm: command not found`
PDM installation script completes but the command is not available
Permission errors during PDM installation
Solutions:
Verify PDM installation:
```bash
which pdm
pdm --version
```
Rerun HEAR_CLI command:
hear-cli local_machine run_program --p petal_app_manager_prepare_arm
or for x86_64:
hear-cli local_machine run_program --p petal_app_manager_prepare_sitl
PDM Lock File Issues
Symptoms:
`pdm install` fails with lock file errors
Dependency resolution takes very long
Version conflicts during installation
Solutions:
Update lock file:
```bash
pdm lock --update-reuse
```
Clear cache and reinstall:
```bash
pdm cache clear
rm -f pdm.lock
pdm install
```
Redis Issues
Redis Connection Errors
Symptoms:
`Connection refused` errors
`redis.exceptions.ConnectionError`
Petal App Manager health check shows Redis as unhealthy
Solutions:
Check if Redis is running:
```bash
sudo systemctl status redis-server
```
Start Redis if not running:
```bash
sudo systemctl start redis-server
sudo systemctl enable redis-server
```
Verify socket permissions:
```bash
ls -la /var/run/redis/redis-server.sock
```
Fix socket permissions if needed:
```bash
sudo chmod 777 /var/run/redis/redis-server.sock
```
Redis Configuration Issues
Symptoms:
Redis starts but Petal App Manager can’t connect via UNIX socket
`No such file or directory` for socket path
Permission denied on socket
Solutions:
Verify UNIX socket is enabled in Redis config:
```bash
grep unixsocket /etc/redis/redis.conf
```
Expected configuration:
```
unixsocket /var/run/redis/redis-server.sock
unixsocketperm 777
```
Update configuration and restart:
```bash
sudo nano /etc/redis/redis.conf
sudo systemctl restart redis-server
```
Test socket connection:
```bash
redis-cli -s /var/run/redis/redis-server.sock ping
# Should return: PONG
```
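If `redis-cli` is not installed, the same ping can be done from Python with only the standard library (Redis commands are plain RESP frames over the socket). A sketch, assuming the default socket path shown above:

```python
import socket

def redis_unix_ping(path: str = "/var/run/redis/redis-server.sock",
                    timeout: float = 2.0) -> bool:
    """Send a RESP-encoded PING over the UNIX socket; True on +PONG reply."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        sock.connect(path)
        # "*1\r\n$4\r\nPING\r\n" is PING as a RESP array of one bulk string
        sock.sendall(b"*1\r\n$4\r\nPING\r\n")
        return sock.recv(64).startswith(b"+PONG")

# redis_unix_ping() should return True when the socket works
```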
If any of the above fails, rerun the HEAR-CLI command:
hear-cli local_machine run_program --p petal_app_manager_prepare_arm
or for x86_64:
hear-cli local_machine run_program --p petal_app_manager_prepare_sitl
MAVLink Issues
MAVLink Connection Issues
Symptoms:
`ext_mavlink` proxy shows as unhealthy
No telemetry data received
MAVLink endpoints timeout
Solutions:
Verify MAVLink endpoint configuration:
```bash
grep PETAL_MAVLINK_ENDPOINT .env
```
Ensure the correct endpoint is set in the mavlink-router master configuration:

```bash
cat /etc/mavlink-router/main.conf
```

Ensure the following `UdpEndpoint` is set correctly:

```
[UdpEndpoint droneleaf]
Mode = Normal
Address = 127.0.0.1
Port = 14551
```
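Because `main.conf` is INI-style, the expected values can also be checked programmatically. A sketch with Python's `configparser`; the section and key names are the ones shown above:

```python
import configparser

def droneleaf_udp_endpoint(conf_text: str):
    """Return (mode, address, port) from the [UdpEndpoint droneleaf] section."""
    cp = configparser.ConfigParser()
    cp.read_string(conf_text)
    section = cp["UdpEndpoint droneleaf"]
    return section["Mode"], section["Address"], int(section["Port"])

sample = """
[UdpEndpoint droneleaf]
Mode = Normal
Address = 127.0.0.1
Port = 14551
"""
print(droneleaf_udp_endpoint(sample))  # ('Normal', '127.0.0.1', 14551)
```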
Check if simulation/drone is running:
For SITL:
```bash
# Check if SITL simulator is running
ps aux | grep px4
```
Launch SITL if not running:
```bash
cd ~/software-stack/PX4-Autopilot
make px4_sitl gazebo-classic
```
Test UDP connection:
```bash
# Listen on MAVLink port
nc -ul 14551
```
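If `nc` is unavailable, the same check can be done from Python. A stdlib sketch that waits for a single datagram on the MAVLink port (any bytes received means the flight stack is sending):

```python
import socket

def receive_one_datagram(port: int, timeout: float = 5.0,
                         host: str = "127.0.0.1"):
    """Bind (host, port) and return the first UDP datagram, or None on timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.bind((host, port))
        try:
            data, _addr = sock.recvfrom(4096)
            return data
        except socket.timeout:
            return None

# data = receive_one_datagram(14551)
# print("no data" if data is None else f"received {len(data)} bytes")
```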
Verify pymavlink installation:
```bash
pdm list | grep pymavlink
```
MAVLink Submodule Branch Issues (SITL)
Symptoms:
Fresh HEAR-CLI installation in SITL environment
`mavlink` repository and its `pymavlink` submodule pointing to `main` branch instead of `dev-sitl`
MAVLink functionality may not work as expected with latest development features
Version mismatch between MAVLink libraries and Petal App Manager expectations
Cause:
After fresh installation using HEAR-CLI, the MAVLink repository (located at ~/petal-app-manager-dev/mavlink) and its pymavlink submodule default to the main branch. The HEAR-CLI installation script doesn’t force checkout of the correct branch if the MAVLink repository already exists on the system, causing the repository and its submodules to remain on whatever branch they were previously on.
Repository Structure:
```
~/petal-app-manager-dev/
├── petal-app-manager/   # Main Petal App Manager repo
└── mavlink/             # MAVLink repository
    └── pymavlink/       # pymavlink as a submodule of mavlink
```
Expected Behavior:
Both the mavlink repository and its pymavlink submodule should be on the dev-sitl branch for SITL development environments.
Solutions:
Quick Fix (Manual Branch Correction):
Navigate to the MAVLink directory:
```bash
cd ~/petal-app-manager-dev/mavlink
```
Check current branches:
```bash
# Check mavlink repo branch
git branch

# Check pymavlink submodule branch
cd pymavlink && git branch && cd ..
```
Checkout correct branches:
```bash
# For the mavlink repository
git checkout dev-sitl

# For the pymavlink submodule
cd pymavlink
git checkout dev-sitl
cd ..
```
Update submodules:
```bash
git submodule update --init --recursive
```
Clean Reinstall (Recommended):
If you encounter persistent branch issues, perform a clean reinstall:
Remove existing MAVLink directory:
```bash
rm -rf ~/petal-app-manager-dev/mavlink
```
Rerun HEAR-CLI installation:
For SITL/x86_64:
hear-cli local_machine run_program --p petal_app_manager_prepare_sitl
For ARM devices:
hear-cli local_machine run_program --p petal_app_manager_prepare_arm
Verify branches after installation:
```bash
cd ~/petal-app-manager-dev/mavlink
echo "MAVLink repo branch: $(git branch --show-current)"
echo "pymavlink submodule branch: $(cd pymavlink && git branch --show-current)"
```
Verification:
Confirm the MAVLink repository and its submodule are on the correct branches:
```bash
cd ~/petal-app-manager-dev/mavlink
echo "MAVLink repo branch: $(git branch --show-current)"
echo "pymavlink submodule branch: $(cd pymavlink && git branch --show-current)"
```
Expected output for SITL:
```
MAVLink repo branch: dev-sitl
pymavlink submodule branch: dev-sitl
```
Note
HEAR-CLI Limitation: The current HEAR-CLI installation script does not clear the MAVLink repository if it already exists on the system. This means:
If `~/petal-app-manager-dev/mavlink` already exists, the clone step is skipped
Existing branch configurations are not updated
The `pymavlink` submodule may remain on an incorrect branch
Recommended HEAR-CLI Enhancement: The installation script should either:
Remove the existing MAVLink directory before cloning, OR
Explicitly check out the `dev-sitl` branch and update submodules after checking for existence
MQTT Issues
MQTT Proxy Fails on Fresh Installation
Symptoms:
Petal App Manager fails to start in development environment
MQTT proxy shows errors or remains unhealthy
Application logs show MQTT connection failures
Service fails to start after fresh installation
Cause:
On a fresh installation, the MQTT proxy requires organization provisioning to be completed before it can connect properly. Without provisioning, the MQTT proxy lacks necessary organization and device identifiers.
Solutions:
Development Environment:
Start Petal App Manager manually (ignore MQTT errors initially):
```bash
cd ~/petal-app-manager-dev/petal-app-manager
uvicorn petal_app_manager.main:app --host 0.0.0.0 --port 9000 --log-level info --no-access-log --http h11
```
Complete local provisioning steps:
Open `http://localhost:80` in your browser
Follow the provisioning wizard
Use `fly.droneleaf.io` to generate the API key when prompted
Complete the additional provisioning steps shown in the localhost interface
Restart Petal App Manager:
```bash
# Stop with Ctrl+C and restart
uvicorn petal_app_manager.main:app --reload --host 0.0.0.0 --port 9000 --log-level info --no-access-log --http h11
```
Complete cloud provisioning:
Finish any remaining provisioning steps on `fly.droneleaf.io`
Verify the device appears in the cloud dashboard
Production/Service Environment:
Warning
Known Concern: When running as a systemd service on a fresh installation, the service may fail to start or repeatedly restart due to MQTT provisioning requirements. This behavior needs further investigation.
Workaround for Service Deployment:
Temporarily disable MQTT proxy during initial setup:
```bash
cd ~/.droneleaf/petal-app-manager
nano proxies.yaml
```
Comment out or remove `mqtt` from `enabled_proxies`:

```yaml
enabled_proxies:
  # - mqtt  # Temporarily disabled for provisioning
  - redis
  - ext_mavlink
  - db
```
Start the service:
```bash
sudo systemctl start petal-app-manager
sudo systemctl status petal-app-manager
```
Complete provisioning via web interface
Re-enable MQTT proxy:
```bash
nano ~/.droneleaf/petal-app-manager/proxies.yaml
```

Uncomment `mqtt` in `enabled_proxies`:

```yaml
enabled_proxies:
  - mqtt  # Re-enabled after provisioning
  - redis
  - ext_mavlink
  - db
```
Restart the service:
```bash
sudo systemctl restart petal-app-manager
```
Alternative: Pre-provision Before Service Installation
Run Petal App Manager manually first
Complete all provisioning steps
Stop manual instance
Install and start as service
Verification:
Check MQTT proxy health after provisioning:
```bash
curl http://localhost:9000/health/detailed | jq '.proxies.mqtt'
```
Expected healthy output:
```json
{
  "status": "healthy",
  "is_connected": true,
  "organization_id": "your-org-id",
  "device_id": "Instance-your-machine-id"
}
```
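The same check works without `jq`: parse the health payload and test the two fields that matter. A sketch whose field names follow the example output above:

```python
import json

def mqtt_ready(payload: str) -> bool:
    """True when the MQTT proxy reports healthy and connected."""
    mqtt = json.loads(payload).get("proxies", {}).get("mqtt", {})
    return mqtt.get("status") == "healthy" and mqtt.get("is_connected") is True

healthy = '{"proxies": {"mqtt": {"status": "healthy", "is_connected": true}}}'
print(mqtt_ready(healthy))  # True
```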
Port Conflicts
Port 9000 Already in Use
Symptoms:
`Address already in use` error when starting Petal App Manager
Cannot start uvicorn server
Application fails to bind to port
Cause:
This typically occurs when trying to run Petal App Manager using uvicorn while the systemd service is already running, or when another process is using port 9000.
Solutions:
Check what’s using port 9000:
```bash
sudo lsof -i :9000
```
Stop the systemd service (if running):
```bash
sudo systemctl stop petal-app-manager
sudo systemctl status petal-app-manager
```
Kill the conflicting process:
```bash
# Find the process ID (PID) from lsof output
sudo kill <PID>
```
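Before restarting, you can confirm the port is actually free again. A stdlib sketch that simply tries to bind the port:

```python
import socket

def port_in_use(port: int, host: str = "0.0.0.0") -> bool:
    """Return True if something is already bound to host:port (TCP)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            sock.bind((host, port))
        except OSError:
            return True
    return False

# print(port_in_use(9000))  # False once the conflicting process is gone
```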
Reporting Issues
If you encounter issues not covered here:
GitHub Issues
Report bugs and request features at:
Petal App Manager: https://github.com/DroneLeaf/petal-app-manager/issues
Individual petals: Check respective repository issue trackers
Include in Your Report
Environment information:
```bash
python3.11 --version
pdm --version
redis-server --version
```
Log files:
```bash
# Application logs
tail -100 app.log

# System logs
sudo journalctl -u petal-app-manager -n 100
```
Configuration:
```bash
# Sanitize sensitive data before sharing
cat proxies.yaml
cat .env | grep -v TOKEN | grep -v PASSWORD
```
Steps to reproduce the issue
Expected vs actual behavior
Getting Help
Check Quick Start Guide for setup guidance
Review Adding a New Petal Guide for petal development
See Contribution Guidelines for version management