[3.5.2] - 2026-04-15
CLI Determinism, Performance Parity, Agent Safety & Multi-Agent Security Audit
Added
- Resume download (Range requests): Interrupted downloads can now resume from where they left off instead of restarting from scratch. New `ResumableFile` infrastructure preserves partial `.aerotmp` files on failure. Implemented across 17 HTTP-based providers: S3, Azure, WebDAV, Google Drive, OneDrive, Dropbox, Box, pCloud, Yandex Disk, Jottacloud, kDrive, Zoho WorkDrive, FileLu, OpenDrive, 4shared, Swift, Google Photos. Tested with 83% bandwidth savings on a 48 MB S3 file
- Resume in all transfer paths: Resume is active in single-file download (GUI), folder download (batch executor with retry), CLI (`--partial` flag), AeroSync, and cross-profile transfers
- CLI path resolution logging: `resolve_cli_remote_path` logs to stderr whenever the profile base path is prepended, so agents and users always see the resolved path (e.g. `Note: path 'data/' resolved to '/www.example.com/data/' (profile base: /www.example.com)`)
- CLI contextual connection errors: Connection failures now include actionable hints per error type (hostname/port, credentials, timeout/firewall, DNS resolution)
- CLI first-run guidance: `No saved profiles found` now suggests URL mode as an alternative (`aeroftp-cli ls ftp://user@host/path`)
- CLI alias list feedback: `alias list` shows "No aliases configured" on empty instead of exiting silently, and prints the alias count on stderr
- CLI agent OpenAI-compatible tool calls: New `ToolCallEcho` and `tool_call_id` fields on `ChatMessage` enable proper tool call pairing for all OpenAI-compatible providers (Cohere, Groq, Together, Mistral, etc.). Previously only the Anthropic format worked
- CLI vault AI provider resolution: `--provider cohere` (or any provider) now resolves API keys from the AeroFTP vault using the GUI-generated unique IDs, falling back transparently when env vars are not set
- ShareLink Phase 3 - Link Management: New "Manage" tab in ShareLinkModal shows existing share links with copy, open-in-browser, and revoke actions. Implemented `list_share_links()` for 8 providers: Dropbox, pCloud, Koofr, kDrive, Box, Zoho WorkDrive, Yandex Disk, Drime Cloud. `ShareLinkCapabilities` now advertises `supports_list_links` and `supports_revoke`
- CLI `put` with directory destination: `put file.css /remote/assets/` now auto-appends the local filename instead of failing with "Is a directory"
- Complete exit code mapping: All 19 `ProviderError` variants are now mapped to 12 distinct exit codes (0-11, 99, 130). New codes: 9 (AlreadyExists/DirectoryNotEmpty), 10 (ServerError/ParseError), 11 (IoError). Agents can now distinguish every error category without parsing messages
- `mkdir --parents` (`-p`): Idempotent recursive directory creation. No error if the directory exists. Falls back to a `stat()` probe for providers that don't return `AlreadyExists`
- `rm --force` idempotent delete: The `-f` flag now suppresses `NotFound` errors (exit 0 instead of exit 2). Also handles FTP/WebDAV servers that return `ServerError` with "not found" in the message instead of a typed `NotFound`
- `put --no-clobber` (`-n`): Skip upload if the remote file already exists. Returns exit 9 with JSON `{"status":"skipped","reason":"already_exists"}`. No overwrite, no ambiguity
- `--chunk-size`/`--buffer-size` global flags: Override upload chunk and download buffer sizes per session. S3 multipart part size (default 5 MB, cap 512 MB), SFTP/FTP buffer (default 32/8 KB, cap 16 MB). Configurable in `~/.config/aeroftp/config.toml`
- S3 multipart parallel upload: Upload parts execute concurrently via `tokio::spawn` (4 parts in flight). `S3Provider` derives `Clone` (`reqwest::Client` is Arc-based, so the clone is free). Batched disk pre-read overlaps I/O with the network
- Azure Blob server-side copy: New `server_copy()` using the Azure Copy Blob API (`x-ms-copy-source` header). Path resolution via `resolve_blob_path()` prevents prefix bypass. Sends a `Content-Length: 0` header for Azure compliance
- CLI path traversal prevention: `resolve_cli_remote_path()` now rejects `..` path components with an error on stderr, preventing agents and scripts from escaping the profile root on FTP/SFTP/WebDAV
- Agent tool reclassification: `local_diff` elevated from `safe` to `medium` (it reads file content, so it requires approval in Normal mode). Content-reading tools (`local_grep`, `local_head`, `local_tail`, `rag_search`) were already `medium`
- Agent data egress metadata: `agent-info --json` now reports accurate `data_egress` levels: `content` for tools that send file data to the AI model, `metadata` for stat/tree/info tools, `preview` for `rag_index`. The previous `none` classification was misleading for policy engines
- Unified error sanitization: `sanitize_api_error()` (provider/MCP path) now calls `sanitize_error_message()` (AI path), applying all 5 regex patterns (`sk-ant-`, `sk-`, Bearer tokens, `x-api-key`, Google `key=`). A single sanitization pipeline covers all error surfaces
- Threat model (STRIDE): New `docs/THREAT-MODEL.md` with a formal STRIDE analysis, 4 AI-specific attack scenarios (prompt injection, CLI argument injection, cross-provider exfiltration, output poisoning), a residual risk register, and a security controls matrix
- LLM Integration Guide: New `docs/LLM-INTEGRATION-GUIDE.md` with safe patterns, anti-patterns, an exit code recovery table, MCP best practices, batch scripting guidance, and performance tuning flags
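The exit code mapping above can be sketched as a `match` over the error enum. This is a hypothetical reconstruction, not the actual AeroFTP source: the variant names and codes come from this changelog, while the enum shape and the `exit_code` helper are assumptions.

```rust
// Hypothetical sketch of the ProviderError -> exit code mapping.
// Variant names and codes follow the changelog; everything else is assumed.
#[derive(Debug)]
enum ProviderError {
    NotFound,
    AlreadyExists,
    DirectoryNotEmpty,
    ServerError(String),
    ParseError(String),
    IoError(String),
}

fn exit_code(err: &ProviderError) -> i32 {
    match err {
        // Exit 2: "not found", scriptable without parsing the message text
        ProviderError::NotFound => 2,
        // Exit 9: already-exists family (also used by `put --no-clobber`)
        ProviderError::AlreadyExists | ProviderError::DirectoryNotEmpty => 9,
        // Exit 10: server-side or response-parsing failures
        ProviderError::ServerError(_) | ProviderError::ParseError(_) => 10,
        // Exit 11: local I/O failures
        ProviderError::IoError(_) => 11,
    }
}

fn main() {
    assert_eq!(exit_code(&ProviderError::NotFound), 2);
    assert_eq!(exit_code(&ProviderError::DirectoryNotEmpty), 9);
    assert_eq!(exit_code(&ProviderError::IoError("disk full".into())), 11);
    println!("all mappings verified");
}
```

The point of a typed mapping like this is that an agent can branch on the numeric code alone (e.g. treat 9 as "skip and continue", 10/11 as "retry or abort") without any message parsing.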
Security Audits
- 3-agent internal audit (Claude Opus 4.6): Focus on agent attack surface, CLI misuse, and prompt injection vectors. 6 findings (0 CRITICAL, 0 HIGH, 3 MEDIUM, 3 LOW). All 3 MEDIUM fixed: OOM via unbounded `--chunk-size` (capped at 512 MB for S3, 16 MB for SFTP/FTP), Azure `server_copy` prefix bypass (added `resolve_blob_path`), CLI `..` path traversal (reject instead of rewrite)
- Multi-agent external audit (GPT-5.4 with sub-agents): Independent read-only audit across backend, CLI, frontend, MCP, and threat model. 5 findings (0 CRITICAL, 2 HIGH, 3 MEDIUM). HIGH-1: local read tools classified `safe` allow exfiltration without approval - fixed (`local_diff` elevated, all content-readers already `medium`). HIGH-2: `agent-info` metadata underestimates egress risk - fixed (`data_egress` and `danger_level` corrected). MEDIUM-3: agent memory injection patterns bypassable - accepted as residual risk. MEDIUM-4: error sanitization weaker in the provider path than the AI path - fixed (unified pipeline). MEDIUM-5: path traversal rewritten instead of rejected - fixed
- Live CLI test battery: 4 providers tested (FTP axpdev.it, S3 Storj, WebDAV Koofr, Azure Blob). All exit codes, idempotency, path traversal, server-side copy, and JSON output verified. 2 bugs found and fixed during testing: FTP `rm -f` on non-existent files (ServerError pattern matching), Azure Copy Blob `Content-Length: 0` header
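The unbounded `--chunk-size` OOM fix from the internal audit amounts to clamping rather than trusting the user-supplied value. A minimal sketch, assuming the constant and helper names (the real code lives elsewhere in the CLI):

```rust
// Sketch of the --chunk-size OOM fix: user-supplied sizes are clamped to a
// provider-specific ceiling. Names are assumptions based on the changelog.
const S3_PART_CAP: u64 = 512 * 1024 * 1024; // 512 MB multipart part cap
const SFTP_BUF_CAP: u64 = 16 * 1024 * 1024; // 16 MB SFTP/FTP buffer cap

/// Returns the effective size: the default when unset (0), otherwise the
/// requested value clamped to the cap so a hostile flag cannot trigger OOM.
fn effective_size(requested: u64, default: u64, cap: u64) -> u64 {
    if requested == 0 {
        default
    } else {
        requested.min(cap)
    }
}

fn main() {
    let default_s3_part: u64 = 5 * 1024 * 1024; // 5 MB default per the changelog
    // A 10 GB request is capped at 512 MB rather than allocated verbatim.
    let huge: u64 = 10 * 1024 * 1024 * 1024;
    assert_eq!(effective_size(huge, default_s3_part, S3_PART_CAP), S3_PART_CAP);
    assert_eq!(effective_size(0, default_s3_part, S3_PART_CAP), default_s3_part);
    assert_eq!(effective_size(1024, 32 * 1024, SFTP_BUF_CAP), 1024);
    println!("caps hold");
}
```

Clamping (instead of rejecting with an error) keeps oversized values usable for scripts while bounding the allocation, which matches the "capped" wording in the audit finding.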
Fixed
- CLI reconcile with absolute remote path (critical): Using a leading `/` in the remote path (e.g. `/front/includes` instead of `front/includes`) produced completely wrong results: all files were reported as missing even when they existed remotely. Root cause: when the profile base path is `/`, absolute paths were passed through to FTP verbatim instead of being treated as relative to the working directory. Additionally, listing errors were silently swallowed in JSON mode, hiding the real failure. Both issues fixed
- CLI sync re-uploading identical files: Sync uploaded every file regardless of content match because FTP MLSD timestamps (e.g. `2024-01-15 10:30:00Z`) could not be parsed by the mtime comparator due to the trailing `Z` suffix. All files failed the timestamp comparison and were re-uploaded. Now parses `Z` and `UTC` suffixed timestamps correctly
- CLI sync "Using profile" noise: Each parallel worker printed `Using profile: ...` on every file transfer, producing hundreds of identical lines. Now prints once per session, with an automatic reset on batch CONNECT commands
- CLI reconcile missing --exclude flag: `reconcile` only supported the global `--exclude-global` while `sync` had a per-command `-e`/`--exclude`. Now both commands accept `-e`/`--exclude` with consistent behavior, merging per-command and global patterns
- CLI sync directory upload error: Symlinks pointing to directories were not filtered during the local scan, causing `IO error: Is a directory (os error 21)` during upload with wasted retry cycles. The local scan now uses `is_file()` instead of `!is_dir()` to skip all non-regular files
- Audit remediation (10 findings): MEGA timeout (120 s global replaced with 30 s connect + 300 s read), Koofr/Swift timeout pattern corrected, 6 providers with duplicate `connect_timeout` cleaned up, OneDrive added to the trash manager map, Azure `set_blob_tier` missing `x-ms-date`/`x-ms-version` headers, MEGA download u64 underflow protected with `saturating_sub`, MEGA upload chunk HTTP status checked before reading the body, AzureTrashManager passing `item.name` instead of `item.path` to undelete, `onTabLabelChange` render loop stabilized with a ref pattern
- HTTP 416 cleanup: S3, Azure, and WebDAV now delete the stale `.aerotmp` on Range Not Satisfiable (416), preventing infinite retry loops
- SFTP hints consistency: `transfer_optimization_hints` no longer claims resume support for SFTP (not implemented)
- Toast `{error}` placeholder not interpolated: `folderCreateFailed` and `renameFailed` toast messages showed a literal `{error}` instead of the actual error. Now use `t()` interpolation correctly
- CLI glob no-match exit code: `put "*.csv"` and `get "*.csv"` with no matching files now return exit code 2 (not found) instead of 0 (success), enabling reliable scripting and agent error handling
- CLI `ai_settings` vault prefix: The CLI now reads `config_ai_settings` (with the GUI's `config_prefix`) instead of bare `ai_settings`, fixing provider discovery from the vault
- CLI vault provider ID mismatch: The GUI saves API keys under unique generated IDs (e.g. `ai_apikey_mmw96fix-hqlohwhwr`), not provider names. New `resolve_vault_ai_provider()` reads the settings JSON to map provider types to their actual vault key IDs
- MEGA zeroization hardening: `set_session_id`, `persist_session`, and `try_resume_session` now zeroize sensitive strings (session ID, master key) after use
- Azure `list_deleted_blobs` pagination: Now paginates with NextMarker instead of stopping at 5000 blobs. Also extracts size and modified date from the deleted blob XML
- InternxtTrashManager dead code: Removed unused selection state, actionLoading, and humanLog from the read-only component
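The MLSD timestamp fix above comes down to stripping the trailing `Z`/`UTC` markers before the mtime comparator parses the value. A minimal sketch under that assumption; `normalize_mlsd_timestamp` is a hypothetical helper name, not the actual AeroFTP function:

```rust
// Sketch of the MLSD timestamp-suffix fix: trailing "Z" or " UTC" markers
// are removed so the remaining "YYYY-MM-DD HH:MM:SS" text can be parsed.
// The helper name is an assumption based on the changelog entry.
fn normalize_mlsd_timestamp(raw: &str) -> &str {
    let t = raw.trim();
    // Strip an optional " UTC" suffix first, then an optional "Z" suffix.
    let t = t.strip_suffix("UTC").map(str::trim_end).unwrap_or(t);
    t.strip_suffix('Z').unwrap_or(t)
}

fn main() {
    // The failing case from the changelog: a trailing Z broke parsing,
    // so every timestamp comparison failed and every file was re-uploaded.
    assert_eq!(
        normalize_mlsd_timestamp("2024-01-15 10:30:00Z"),
        "2024-01-15 10:30:00"
    );
    assert_eq!(
        normalize_mlsd_timestamp("2024-01-15 10:30:00 UTC"),
        "2024-01-15 10:30:00"
    );
    // Already-clean timestamps pass through unchanged.
    assert_eq!(
        normalize_mlsd_timestamp("2024-01-15 10:30:00"),
        "2024-01-15 10:30:00"
    );
    println!("timestamps normalized");
}
```

Since both suffixes denote UTC, dropping them before parsing preserves the comparison semantics while accepting both server formats.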
Downloads:
- Windows: `.msi` installer, `.exe`, or `.zip` portable (no installation required)
- macOS: `.dmg` disk image
- Linux: `.deb`, `.rpm`, `.snap`, or `.AppImage`