diff --git a/docs/macos-codesigning.md b/docs/macos-codesigning.md
new file mode 100644
index 00000000..5224cd8b
--- /dev/null
+++ b/docs/macos-codesigning.md
@@ -0,0 +1,151 @@
+# macOS Codesigning
+
+All projects (Flow, Seq, Lin, Rise) sign binaries with a Developer ID Application certificate. This makes macOS TCC grants (Accessibility, Screen Recording, Input Monitoring) survive rebuilds and work for daemon processes.
+
+## Why this matters
+
+macOS ties TCC permission grants to code signatures. When a binary is ad-hoc signed (`--sign -`) or signed with an Apple Development certificate, the signature changes on every rebuild. That means:
+
+- Accessibility grants break after every `f deploy`
+- Input Monitoring grants break after every rebuild
+- Screen Recording grants break after every rebuild
+- Daemon processes (like `seqd`) that aren't launched from a terminal never inherit terminal TCC grants at all
+
+Developer ID Application signing solves all of this. The identity is stable across rebuilds, so TCC grants persist.
+
+## How it works
+
+### Shared helper: `~/.config/flow/codesign.sh`
+
+All projects source a single shared script that handles identity detection:
+
+```bash
+source "${HOME}/.config/flow/codesign.sh"
+flow_codesign /path/to/binary
+```
+
+The helper resolves the signing identity in this order:
+
+1. `$FLOW_CODESIGN_IDENTITY` env var (explicit override)
+2. `Developer ID Application` from the keychain
+3. `Apple Development` from the keychain (fallback)
+4. Skip silently (no certificate available)
+
+The identity is resolved once per shell session: the first `source` call queries the keychain, and subsequent calls reuse the cached value. A signing failure never breaks a build; all calls are best-effort.
+
+### Per-project integration
+
+**Flow** (`scripts/deploy.sh`)
+- Signs the `f` and `lin` binaries after `cargo build`, before copying to `~/bin`
+- Since copies are made from the signed source binaries, all installed copies (`~/bin/f`, `~/bin/lin`, `~/.local/bin/f`, etc.) carry the signature
+
+**Seq** (`cli/cpp/run.sh`)
+- Signs the `seq` binary after the clang build
+- Signs `libseqmem.dylib` if present (Swift memory engine)
+- Signs the `SeqDaemon.app` bundle (required for daemon TCC grants)
+- Respects the `$SEQ_CODE_SIGN_IDENTITY` override (maps to `$FLOW_CODESIGN_IDENTITY` internally)
+
+**Lin** (`mac/flow.toml`)
+- `release-mac` task: sources the helper, signs `/Applications/Lin.app` with `--deep` (required for `.app` bundles)
+- `r` task (quick incremental build): same logic, falls back to ad-hoc if no identity is found
+
+**Rise** (`scripts/deploy-cli.sh`, `ffi/cpp/build.sh`)
+- `deploy-cli.sh`: signs `rise-bin` after copying to `~/.local/bin` and `~/bin`
+- `build.sh`: signs `librise.dylib` after the clang build
+
+## Verifying signatures
+
+Check that a binary is signed with Developer ID:
+
+```bash
+codesign -dvv /path/to/binary 2>&1 | grep Authority
+```
+
+Expected output:
+
+```
+Authority=Developer ID Application: Nikita Voloboev (6J8K2M6486)
+Authority=Developer ID Certification Authority
+Authority=Apple Root CA
+```
+
+Quick verification commands for each project:
+
+```bash
+# Flow
+codesign -dvv ~/bin/f 2>&1 | grep Authority
+
+# Seq
+codesign -dvv ~/code/seq/cli/cpp/out/bin/seq 2>&1 | grep Authority
+codesign -dvv ~/code/seq/cli/cpp/out/bin/SeqDaemon.app 2>&1 | grep Authority
+
+# Lin
+codesign -dvv /Applications/Lin.app 2>&1 | grep Authority
+
+# Rise
+codesign -dvv ~/.local/bin/rise-bin 2>&1 | grep Authority
+```
+
+If you see `Authority=Apple Development` instead of `Developer ID Application`, the Developer ID certificate is not in the keychain. If you see no Authority line, or `(code or signature modified)`, the binary was modified after signing.
+
+## Troubleshooting
+
+### TCC grants break after rebuild
+
+1. Verify the binary is signed: `codesign -dvv <binary> 2>&1 | grep Authority`
+2. If unsigned or ad-hoc signed, check that `~/.config/flow/codesign.sh` exists and is sourced by the build script
+3. Check that the Developer ID certificate is in the keychain: `security find-identity -p codesigning -v | grep "Developer ID"`
+4. Rebuild and verify again
+
+### "Developer ID Application" not found
+
+The certificate must be installed in the login keychain. If you have it in a `.p12` file:
+
+```bash
+security import developer-id.p12 -k ~/Library/Keychains/login.keychain-db -T /usr/bin/codesign
+```
+
+If you only have an Apple Development certificate, the helper falls back to it. TCC grants will still work but may be less stable across Xcode updates.
+
+### seqd loses Accessibility after rebuild
+
+The daemon runs inside `SeqDaemon.app` specifically so it gets its own TCC entry. After rebuilding:
+
+1. Verify: `codesign -dvv ~/code/seq/cli/cpp/out/bin/SeqDaemon.app 2>&1 | grep Authority`
+2. If correct, the old TCC grant should still apply. If not, re-grant in System Settings > Privacy & Security > Accessibility
+3. Test: `printf 'AX_STATUS\n' | nc -U /tmp/seqd.sock` should print `1`
+
+### Lin loses Screen Recording after rebuild
+
+1. Verify: `codesign -dvv /Applications/Lin.app 2>&1 | grep Authority`
+2. If signed with Developer ID, the TCC grant should persist. If it doesn't, reset and re-grant:
+   ```bash
+   tccutil reset ScreenCapture io.linsa
+   ```
+   Then reopen Lin and grant Screen Recording again.
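+
+### Reference: what the helper roughly does
+
+If you need to debug identity resolution itself, the logic described in "How it works" can be approximated by the sketch below. This is a minimal sketch, not the actual contents of `~/.config/flow/codesign.sh`; only `flow_codesign` and `FLOW_CODESIGN_IDENTITY` are names from this doc, and the `_FLOW_SIGN_*` cache variables are illustrative.
+
+```bash
+# Sketch only: mirrors the documented resolution order
+# (env override, Developer ID, Apple Development fallback, silent skip).
+if [ -z "${_FLOW_SIGN_RESOLVED:-}" ]; then
+  if [ -n "${FLOW_CODESIGN_IDENTITY:-}" ]; then
+    _FLOW_SIGN_ID="$FLOW_CODESIGN_IDENTITY"
+  else
+    # First "Developer ID Application" identity in the keychain, else first "Apple Development".
+    _FLOW_SIGN_ID="$(security find-identity -p codesigning -v 2>/dev/null |
+      grep -o '"Developer ID Application: [^"]*"' | head -1 | tr -d '"')"
+    [ -z "$_FLOW_SIGN_ID" ] && _FLOW_SIGN_ID="$(security find-identity -p codesigning -v 2>/dev/null |
+      grep -o '"Apple Development: [^"]*"' | head -1 | tr -d '"')"
+  fi
+  _FLOW_SIGN_RESOLVED=1  # resolve once per shell session
+fi
+
+flow_codesign() {
+  [ -z "$_FLOW_SIGN_ID" ] && return 0                                # no certificate: skip silently
+  codesign --force --sign "$_FLOW_SIGN_ID" "$1" 2>/dev/null || true  # best-effort, never fails the build
+}
+```
+
+For `.app` bundles the real helper (or its callers) would also need `--deep`, as noted in the Architecture notes below.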
+
+### Overriding the identity
+
+Set `$FLOW_CODESIGN_IDENTITY` before building to use a specific certificate:
+
+```bash
+export FLOW_CODESIGN_IDENTITY="Developer ID Application: Someone Else (XXXXXXXXXX)"
+f deploy
+```
+
+For Seq specifically, `$SEQ_CODE_SIGN_IDENTITY` also works (it takes precedence).
+
+### Checking what certificates are available
+
+```bash
+security find-identity -p codesigning -v
+```
+
+This lists all valid codesigning identities in your keychains. The helper picks the first `Developer ID Application` match, or the first `Apple Development` match if no Developer ID is available.
+
+## Architecture notes
+
+- The shared helper lives in `~/.config/flow/` (Flow's config directory) because it's a cross-project concern managed by Flow
+- `.app` bundles (SeqDaemon.app, Lin.app) require the `--deep` flag to sign embedded frameworks and executables
+- Signing happens on the source binary before copies are made, so all install locations get the signed version
+- The helper is sourced with `2>/dev/null || true` so a missing file never breaks builds (e.g. on Linux or CI where no keychain exists)
diff --git a/scripts/deploy.sh b/scripts/deploy.sh
index 81609048..05c41981 100755
--- a/scripts/deploy.sh
+++ b/scripts/deploy.sh
@@ -15,6 +15,11 @@ cargo build "${BUILD_ARGS[@]}" --quiet
 SOURCE_F="${ROOT_DIR}/target/${TARGET_DIR}/f"
 SOURCE_LIN="${ROOT_DIR}/target/${TARGET_DIR}/lin"
 
+# Codesign source binaries so all copies inherit the signature
+source "${HOME}/.config/flow/codesign.sh" 2>/dev/null || true
+flow_codesign "$SOURCE_F" 2>/dev/null || true
+flow_codesign "$SOURCE_LIN" 2>/dev/null || true
+
 PRIMARY_DIR="${HOME}/bin"
 ALT_DIR="${HOME}/.local/bin"
 PRIMARY_F="$(command -v f 2>/dev/null || true)"
diff --git a/src/agents.rs b/src/agents.rs
index d68a6abe..c3997099 100644
--- a/src/agents.rs
+++ b/src/agents.rs
@@ -175,8 +175,13 @@ fn run_agents_rules(profile: Option<&str>, repo: Option<&str>) -> Result<()> {
         bail!("Missing profile file: {}", source.display());
     }
     let target = repo_path.join("agents.md");
-    fs::copy(&source, &target)
-        .with_context(|| format!("failed to copy {} to {}", source.display(), target.display()))?;
+    fs::copy(&source, &target).with_context(|| {
+        format!(
+            "failed to copy {} to {}",
+            source.display(),
+            target.display()
+        )
+    })?;
 
     let default_path = agents_dir.join(".default");
     fs::write(&default_path, &profile_name)
diff --git a/src/ai.rs b/src/ai.rs
index 2d483fa6..6b5d676c 100644
--- a/src/ai.rs
+++ b/src/ai.rs
@@ -218,11 +218,19 @@ pub fn run(action: Option<AiAction>) -> Result<()> {
         AiAction::Import => import_sessions()?,
         AiAction::Copy { session } => copy_session(session, Provider::All)?,
         AiAction::CopyClaude { search } => {
-            let query = if search.is_empty() { None } else { Some(search.join(" ")) };
+            let query = if search.is_empty() {
+                None
+            } else {
+                Some(search.join(" "))
+            };
             copy_last_session(Provider::Claude, query)?
         }
         AiAction::CopyCodex { search } => {
-            let query = if search.is_empty() { None } else { Some(search.join(" ")) };
+            let query = if search.is_empty() {
+                None
+            } else {
+                Some(search.join(" "))
+            };
             copy_last_session(Provider::Codex, query)?
        }
        AiAction::Context {
@@ -2370,7 +2378,9 @@ fn copy_session_by_search(provider: Provider, query: &str) -> Result<()> {
     if let Some(project_path) = cwd {
         println!(
             "Copied session {} from {} ({} lines) to clipboard",
-            id_short, project_path.display(), line_count
+            id_short,
+            project_path.display(),
+            line_count
         );
         return Ok(());
     }
diff --git a/src/ai_server.rs b/src/ai_server.rs
index ed4c7d63..8233f8d0 100644
--- a/src/ai_server.rs
+++ b/src/ai_server.rs
@@ -111,7 +111,12 @@ pub fn quick_prompt(
     let text = parsed
         .choices
         .first()
-        .and_then(|c| c.message.as_ref().map(|m| m.content.clone()).or(c.text.clone()))
+        .and_then(|c| {
+            c.message
+                .as_ref()
+                .map(|m| m.content.clone())
+                .or(c.text.clone())
+        })
         .map(|t| t.trim().to_string())
         .unwrap_or_default();
 
diff --git a/src/ask.rs b/src/ask.rs
index a9038a72..91e0fa60 100644
--- a/src/ask.rs
+++ b/src/ask.rs
@@ -58,12 +58,8 @@ fn run_with_tasks(opts: AskOpts, tasks: Vec) -> Result<()> {
     let valid_subcommands = valid_subcommand_set(&commands);
     let prompt = build_prompt(&query_display, &tasks, &commands);
 
-    let response = ai_server::quick_prompt(
-        &prompt,
-        opts.model.as_deref(),
-        opts.url.as_deref(),
-        None,
-    )?;
+    let response =
+        ai_server::quick_prompt(&prompt, opts.model.as_deref(), opts.url.as_deref(), None)?;
 
     let selection = parse_ask_response(&response, &tasks, &valid_subcommands)?;
 
@@ -115,9 +111,16 @@ fn flow_command_candidates() -> Vec<FlowCommand> {
     let cmd = Cli::command();
     for sub in cmd.get_subcommands() {
         let name = sub.get_name().to_string();
-        let about = sub.get_about().map(|s| s.to_string()).filter(|s| !s.is_empty());
+        let about = sub
+            .get_about()
+            .map(|s| s.to_string())
+            .filter(|s| !s.is_empty());
         let aliases = sub.get_all_aliases().map(|a| a.to_string()).collect();
-        commands.push(FlowCommand { name, aliases, about });
+        commands.push(FlowCommand {
+            name,
+            aliases,
+            about,
+        });
     }
 
     commands.push(FlowCommand {
@@ -247,9 +250,8 @@ fn normalize_command(raw: &str, valid_subcommands: &HashSet<String>) -> Result<String> {
         }
     }
 
-    bail!(
-        "Could not parse task name from AI response: '{}'",
-        response
-    )
+    bail!("Could not parse task name from AI response: '{}'", response)
 }
 
 #[cfg(test)]
diff --git a/src/cli.rs b/src/cli.rs
index 148e9fc0..02120f3c 100644
--- a/src/cli.rs
+++ b/src/cli.rs
@@ -491,6 +491,32 @@ pub enum Commands {
         alias = "px"
     )]
     Proxy(ProxyCommand),
+    #[command(
+        about = "Create a GitHub PR from the current branch.",
+        long_about = "Creates a branch/bookmark from the latest commit, pushes it, and opens a GitHub PR. Works with both jj and pure git."
+    )]
+    Pr(PrOpts),
+}
+
+#[derive(Args, Debug, Clone)]
+pub struct PrOpts {
+    /// Arguments:
+    /// - `preview`: create the PR on your fork (origin) only (no upstream).
+    /// - `TITLE`: optional PR title. If provided, Flow will create a commit/change for current working copy changes.
+    #[arg(value_name = "ARGS", num_args = 0.., trailing_var_arg = true)]
+    pub args: Vec<String>,
+    /// Base branch (default: main).
+    #[arg(long, default_value = "main")]
+    pub base: String,
+    /// Create as draft PR.
+    #[arg(long)]
+    pub draft: bool,
+    /// Custom branch name (auto-generated from commit message if omitted).
+    #[arg(long, short)]
+    pub branch: Option<String>,
+    /// Don't open in browser after creation.
+    #[arg(long)]
+    pub no_open: bool,
 }
 
 #[derive(Args, Debug, Clone)]
@@ -1366,12 +1392,24 @@ pub enum CommitQueueAction {
         /// Push even if the commit is not at HEAD.
         #[arg(long, short = 'f')]
         force: bool,
+        /// Allow pushing even if the queued commit has review issues recorded.
+ #[arg(long)] + allow_issues: bool, + /// Allow pushing even if the review timed out or is missing. + #[arg(long)] + allow_unreviewed: bool, }, /// Approve all queued commits on the current branch (push once). ApproveAll { /// Push even if the branch is behind its remote. #[arg(long, short = 'f')] force: bool, + /// Allow pushing even if some queued commits have review issues recorded. + #[arg(long)] + allow_issues: bool, + /// Allow pushing even if some queued commits have review timed out / missing. + #[arg(long)] + allow_unreviewed: bool, }, /// Remove a commit from the queue without pushing. Drop { diff --git a/src/commit.rs b/src/commit.rs index 3574ac9f..0d770ede 100644 --- a/src/commit.rs +++ b/src/commit.rs @@ -12,19 +12,20 @@ use std::time::Duration; use anyhow::{Context, Result, bail}; use clap::ValueEnum; -use sha1::{Digest, Sha1}; +use regex::Regex; use reqwest::StatusCode; use reqwest::blocking::Client; use serde::{Deserialize, Serialize}; use serde_json::json; +use sha1::{Digest, Sha1}; use tempfile::{Builder as TempBuilder, NamedTempFile, TempDir}; -use tracing::{debug, info}; -use regex::Regex; +use tracing::{debug, info, warn}; use crate::ai; use crate::cli::{CommitQueueAction, CommitQueueCommand, DaemonAction}; use crate::config; use crate::daemon; +use crate::env as flow_env; use crate::git_guard; use crate::hub; use crate::notify; @@ -32,7 +33,6 @@ use crate::setup; use crate::supervisor; use crate::undo; use crate::vcs; -use crate::env as flow_env; const MODEL: &str = "gpt-4.1-nano"; const MAX_DIFF_CHARS: usize = 12_000; @@ -273,15 +273,24 @@ fn warn_sensitive_files(files: &[String]) -> Result<()> { const SECRET_PATTERNS: &[(&str, &str)] = &[ // API Keys with known prefixes ("AWS Access Key", r"AKIA[0-9A-Z]{16}"), - ("AWS Secret Key", r#"(?i)aws.{0,20}secret.{0,20}['"][0-9a-zA-Z/+]{40}['"]"#), + ( + "AWS Secret Key", + r#"(?i)aws.{0,20}secret.{0,20}['"][0-9a-zA-Z/+]{40}['"]"#, + ), ("GitHub Token", r"ghp_[0-9a-zA-Z]{36}"), ("GitHub OAuth", r"gho_[0-9a-zA-Z]{36}"), ("GitHub App Token", r"ghu_[0-9a-zA-Z]{36}"), ("GitHub Refresh Token", r"ghr_[0-9a-zA-Z]{36}"), ("GitLab Token", r"glpat-[0-9a-zA-Z\-_]{20,}"), ("Slack Token", r"xox[baprs]-[0-9a-zA-Z]{10,48}"), - ("Slack Webhook", r"https://hooks\.slack\.com/services/T[0-9A-Z]{8,}/B[0-9A-Z]{8,}/[0-9a-zA-Z]{24}"), - ("Discord Webhook", r"https://discord(?:app)?\.com/api/webhooks/[0-9]{17,}/[0-9a-zA-Z_-]{60,}"), + ( + "Slack Webhook", + r"https://hooks\.slack\.com/services/T[0-9A-Z]{8,}/B[0-9A-Z]{8,}/[0-9a-zA-Z]{24}", + ), + ( + "Discord Webhook", + r"https://discord(?:app)?\.com/api/webhooks/[0-9]{17,}/[0-9a-zA-Z_-]{60,}", + ), ("Stripe Key", r"sk_live_[0-9a-zA-Z]{24,}"), ("Stripe Restricted", r"rk_live_[0-9a-zA-Z]{24,}"), // OpenAI keys - multiple formats (legacy, project, service account) @@ -291,25 +300,52 @@ const SECRET_PATTERNS: &[(&str, &str)] = &[ ("Anthropic Key", r"sk-ant-[0-9a-zA-Z\-_]{90,}"), ("Google API Key", r"AIza[0-9A-Za-z\-_]{35}"), ("Groq API Key", r"gsk_[0-9a-zA-Z]{50,}"), - ("Mistral API Key", r#"(?i)mistral.{0,10}(api[_-]?key|key).{0,5}[=:].{0,5}["'][0-9a-zA-Z]{32,}["']"#), - ("Cohere API Key", r#"(?i)cohere.{0,10}(api[_-]?key|key).{0,5}[=:].{0,5}["'][0-9a-zA-Z]{40,}["']"#), - ("Heroku API Key", r"(?i)heroku.{0,20}[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}"), + ( + "Mistral API Key", + r#"(?i)mistral.{0,10}(api[_-]?key|key).{0,5}[=:].{0,5}["'][0-9a-zA-Z]{32,}["']"#, + ), + ( + "Cohere API Key", + r#"(?i)cohere.{0,10}(api[_-]?key|key).{0,5}[=:].{0,5}["'][0-9a-zA-Z]{40,}["']"#, + ), + ( 
+ "Heroku API Key", + r"(?i)heroku.{0,20}[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}", + ), ("NPM Token", r"npm_[0-9a-zA-Z]{36}"), ("PyPI Token", r"pypi-[0-9a-zA-Z_-]{50,}"), ("Telegram Bot Token", r"[0-9]{8,10}:[0-9A-Za-z_-]{35}"), ("Twilio Key", r"SK[0-9a-fA-F]{32}"), ("SendGrid Key", r"SG\.[0-9a-zA-Z_-]{22}\.[0-9a-zA-Z_-]{43}"), ("Mailgun Key", r"key-[0-9a-zA-Z]{32}"), - ("Private Key", r"-----BEGIN (RSA |EC |DSA |OPENSSH )?PRIVATE KEY-----"), - ("Supabase Key", r"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9\.[0-9a-zA-Z_-]{50,}"), - ("Firebase Key", r#"(?i)firebase.{0,20}["'][A-Za-z0-9_-]{30,}["']"#), + ( + "Private Key", + r"-----BEGIN (RSA |EC |DSA |OPENSSH )?PRIVATE KEY-----", + ), + ( + "Supabase Key", + r"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9\.[0-9a-zA-Z_-]{50,}", + ), + ( + "Firebase Key", + r#"(?i)firebase.{0,20}["'][A-Za-z0-9_-]{30,}["']"#, + ), // Generic patterns (higher false positive risk, but catch common mistakes) - ("Generic API Key Assignment", r#"(?i)(api[_-]?key|apikey)\s*[:=]\s*['"][0-9a-zA-Z\-_]{20,}['"]"#), - ("Generic Secret Assignment", r#"(?i)(secret|password|passwd|pwd)\s*[:=]\s*['"][^'"]{8,}['"]"#), + ( + "Generic API Key Assignment", + r#"(?i)(api[_-]?key|apikey)\s*[:=]\s*['"][0-9a-zA-Z\-_]{20,}['"]"#, + ), + ( + "Generic Secret Assignment", + r#"(?i)(secret|password|passwd|pwd)\s*[:=]\s*['"][^'"]{8,}['"]"#, + ), ("Bearer Token", r"(?i)bearer\s+[0-9a-zA-Z\-_.]{20,}"), ("Basic Auth", r"(?i)basic\s+[A-Za-z0-9+/=]{20,}"), // High-entropy strings that look like secrets (env var assignments) - ("Env Var Secret", r#"(?i)(KEY|TOKEN|SECRET|PASSWORD|CREDENTIAL|AUTH)[_A-Z]*\s*=\s*['"]?[0-9a-zA-Z\-_/+=]{32,}['"]?"#), + ( + "Env Var Secret", + r#"(?i)(KEY|TOKEN|SECRET|PASSWORD|CREDENTIAL|AUTH)[_A-Z]*\s*=\s*['"]?[0-9a-zA-Z\-_/+=]{32,}['"]?"#, + ), ]; const SECRET_SCAN_IGNORE_MARKERS: &[&str] = &[ @@ -327,9 +363,7 @@ fn should_ignore_secret_scan_line(content: &str) -> bool { } fn extract_first_quoted_value(s: &str) -> Option<&str> { - let (qpos, qch) = s - .char_indices() - .find(|(_, c)| *c == '"' || *c == '\'')?; + let (qpos, qch) = s.char_indices().find(|(_, c)| *c == '"' || *c == '\'')?; let end = s.rfind(qch)?; if end <= qpos { return None; @@ -344,7 +378,8 @@ fn looks_like_identifier_reference(value: &str) -> bool { !v.is_empty() && v.len() >= 8 && v.contains('_') - && v.chars().all(|c| c.is_ascii_uppercase() || c.is_ascii_digit() || c == '_' || c == '.') + && v.chars() + .all(|c| c.is_ascii_uppercase() || c.is_ascii_digit() || c == '_' || c == '.') } fn looks_like_secret_lookup(value: &str) -> bool { @@ -418,8 +453,8 @@ fn generic_secret_assignment_is_false_positive(content: &str, matched: &str) -> // If the whole line is clearly a dynamic lookup, treat as non-hardcoded. // This catches cases where the regex match boundaries don't capture the full value cleanly. - let lc = content.to_lowercase(); - lc.contains("$(get_env ") + let lc = content.to_lowercase(); + lc.contains("$(get_env ") } /// Scan staged diff content for hardcoded secrets. 
@@ -447,9 +482,7 @@ fn scan_diff_for_secrets(repo_root: &Path) -> Vec<(String, usize, String, String // Compile regexes let patterns: Vec<(&str, regex::Regex)> = SECRET_PATTERNS .iter() - .filter_map(|(name, pattern)| { - regex::Regex::new(pattern).ok().map(|re| (*name, re)) - }) + .filter_map(|(name, pattern)| regex::Regex::new(pattern).ok().map(|re| (*name, re))) .collect(); for line in diff.lines() { @@ -464,7 +497,10 @@ fn scan_diff_for_secrets(repo_root: &Path) -> Vec<(String, usize, String, String if line.starts_with("@@") { if let Some(plus_pos) = line.find('+') { let after_plus = &line[plus_pos + 1..]; - let num_str: String = after_plus.chars().take_while(|c| c.is_ascii_digit()).collect(); + let num_str: String = after_plus + .chars() + .take_while(|c| c.is_ascii_digit()) + .collect(); current_line = num_str.parse().unwrap_or(0); } ignore_next_added_line = false; @@ -516,7 +552,9 @@ fn scan_diff_for_secrets(repo_root: &Path) -> Vec<(String, usize, String, String || matched_lower.contains("fixme") || matched == "sk-..." || matched == "sk-xxxx" - || matched.chars().all(|c| c == 'x' || c == 'X' || c == '.' || c == '-' || c == '_') + || matched + .chars() + .all(|c| c == 'x' || c == 'X' || c == '.' || c == '-' || c == '_') { continue; } @@ -529,7 +567,7 @@ fn scan_diff_for_secrets(repo_root: &Path) -> Vec<(String, usize, String, String // Redact the middle of the matched secret for display let redacted = if matched.len() > 12 { - format!("{}...{}", &matched[..6], &matched[matched.len()-4..]) + format!("{}...{}", &matched[..6], &matched[matched.len() - 4..]) } else { matched.to_string() }; @@ -563,19 +601,20 @@ fn warn_secrets_in_diff( } if env::var("FLOW_ALLOW_SECRET_COMMIT").ok().as_deref() == Some("1") { - println!("\nāš ļø Warning: Potential secrets detected but FLOW_ALLOW_SECRET_COMMIT=1, continuing..."); + println!( + "\nāš ļø Warning: Potential secrets detected but FLOW_ALLOW_SECRET_COMMIT=1, continuing..." 
+ ); return Ok(()); } println!(); - print_secret_findings( - "šŸ” Potential secrets detected in staged changes:", - findings, - ); + print_secret_findings("šŸ” Potential secrets detected in staged changes:", findings); println!(); println!("If these are false positives (examples, placeholders, tests), you can:"); println!(" - Set FLOW_ALLOW_SECRET_COMMIT=1 to override for this commit"); - println!(" - Mark the line with '# flow:secret:ignore' (or add it on the line above to ignore the next line)"); + println!( + " - Mark the line with '# flow:secret:ignore' (or add it on the line above to ignore the next line)" + ); println!(" - Use placeholder values like 'xxx' for example secrets"); println!(" - Re-stage files if you recently edited them: git add "); println!(); @@ -685,8 +724,7 @@ fn should_run_sync_for_secret_fixes(repo_root: &Path) -> Result { let agent_name = env::var("FLOW_FIX_COMMIT_AGENT").unwrap_or_else(|_| "fix-f-commit".to_string()); - let hive_enabled = - agent_name.trim().to_lowercase() != "off" && which::which("hive").is_ok(); + let hive_enabled = agent_name.trim().to_lowercase() != "off" && which::which("hive").is_ok(); let ai_available = which::which("ai").is_ok(); if !hive_enabled && !ai_available { return Ok(false); @@ -745,7 +783,10 @@ fn run_fix_f_commit_ai(repo_root: &Path, task: &str) -> Result<()> { fn build_fix_f_commit_task(findings: &[(String, usize, String, String)]) -> String { let mut summary = String::new(); for (file, line, pattern, matched) in findings { - summary.push_str(&format!("- {}:{} — {} ({})\n", file, line, pattern, matched)); + summary.push_str(&format!( + "- {}:{} — {} ({})\n", + file, line, pattern, matched + )); } let task = format!( @@ -761,10 +802,7 @@ After fixing, restage changes." sanitize_hive_task(&task) } -fn print_secret_findings( - header: &str, - findings: &[(String, usize, String, String)], -) { +fn print_secret_findings(header: &str, findings: &[(String, usize, String, String)]) { println!("{}", header); for (file, line, pattern, matched) in findings { println!(" {}:{} - {} ({})", file, line, pattern, matched); @@ -803,7 +841,10 @@ fn sanitize_hive_task(task: &str) -> String { fn resolve_hive_env() -> Vec<(String, String)> { let mut vars = Vec::new(); - if std::env::var("CEREBRAS_API_KEY").map(|v| v.trim().is_empty()).unwrap_or(true) { + if std::env::var("CEREBRAS_API_KEY") + .map(|v| v.trim().is_empty()) + .unwrap_or(true) + { if is_local_env_backend() { if let Ok(store) = crate::env::fetch_personal_env_vars(&["CEREBRAS_API_KEY".to_string()]) @@ -913,8 +954,7 @@ pub fn resolve_review_selection_from_config() -> Option { Some(ReviewSelection::Opencode { model }) } "openrouter" => { - let model = - model.unwrap_or_else(|| "arcee-ai/trinity-large-preview:free".to_string()); + let model = model.unwrap_or_else(|| "arcee-ai/trinity-large-preview:free".to_string()); Some(ReviewSelection::OpenRouter { model }) } "rise" => { @@ -1487,7 +1527,11 @@ pub fn run(push: bool, queue: CommitQueueMode, include_unhash: bool) -> Result<( pub fn run_sync(push: bool, queue: CommitQueueMode, include_unhash: bool) -> Result<()> { let queue_enabled = queue.enabled; let push = push && !queue_enabled; - info!(push = push, queue = queue_enabled, "starting commit workflow"); + info!( + push = push, + queue = queue_enabled, + "starting commit workflow" + ); // Ensure we're in a git repo ensure_git_repo()?; @@ -1552,12 +1596,9 @@ pub fn run_sync(push: bool, queue: CommitQueueMode, include_unhash: bool) -> Res print!("Generating commit message... 
"); io::stdout().flush()?; let mut message = match commit_message_override { - Some(CommitMessageOverride::Kimi { model }) => generate_commit_message_kimi( - &diff_for_prompt, - &status, - truncated, - model.as_deref(), - )?, + Some(CommitMessageOverride::Kimi { model }) => { + generate_commit_message_kimi(&diff_for_prompt, &status, truncated, model.as_deref())? + } None => { let commit_provider = commit_provider.as_ref().expect("commit provider missing"); info!(model = MODEL, "calling OpenAI API"); @@ -1684,7 +1725,12 @@ pub fn run_sync(push: bool, queue: CommitQueueMode, include_unhash: bool) -> Res } /// Run a fast commit with the provided message (no AI review). -pub fn run_fast(message: &str, push: bool, queue: CommitQueueMode, include_unhash: bool) -> Result<()> { +pub fn run_fast( + message: &str, + push: bool, + queue: CommitQueueMode, + include_unhash: bool, +) -> Result<()> { let queue_enabled = queue.enabled; let push = push && !queue_enabled; ensure_git_repo()?; @@ -2399,11 +2445,20 @@ pub fn run_with_check_sync( model.as_deref(), ), }; + // Review is informational only. If the reviewer fails (provider flake, empty output, etc), + // proceed with a warning rather than blocking the commit workflow. let review = match review { Ok(review) => review, Err(err) => { - restore_staged_snapshot_in(&repo_root, &staged_snapshot)?; - return Err(err); + warn!(error = %err, "review failed; proceeding without review"); + println!("⚠ Review failed: {err}"); + ReviewResult { + issues_found: false, + issues: Vec::new(), + summary: Some(format!("Review failed: {err}")), + future_tasks: Vec::new(), + timed_out: true, + } } }; @@ -2569,8 +2624,12 @@ pub fn run_with_check_sync( } } ReviewSelection::OpenRouter { model } => { - match generate_commit_message_openrouter(&diff_for_prompt, &status, truncated, model) - { + match generate_commit_message_openrouter( + &diff_for_prompt, + &status, + truncated, + model, + ) { Ok(message) => message, Err(err) => match commit_provider { CommitMessageProvider::Remote { .. } => { @@ -2654,12 +2713,9 @@ pub fn run_with_check_sync( }, } } - _ => commit_message_from_provider( - commit_provider, - &diff_for_prompt, - &status, - truncated, - )?, + _ => { + commit_message_from_provider(commit_provider, &diff_for_prompt, &status, truncated)? 
+            }
         }
     };
     let message = sanitize_commit_message(&message);
@@ -3658,8 +3714,14 @@ fn run_kimi_review(
             .context("failed to write prompt to kimi")?;
     }
 
-    let stdout = child.stdout.take().context("failed to capture kimi stdout")?;
-    let stderr = child.stderr.take().context("failed to capture kimi stderr")?;
+    let stdout = child
+        .stdout
+        .take()
+        .context("failed to capture kimi stdout")?;
+    let stderr = child
+        .stderr
+        .take()
+        .context("failed to capture kimi stderr")?;
 
     let (stdout_tx, stdout_rx) = mpsc::channel::<Vec<u8>>();
     let (stderr_tx, stderr_rx) = mpsc::channel::<String>();
@@ -3703,14 +3765,28 @@ fn run_kimi_review(
     let stdout_text = String::from_utf8_lossy(&stdout_bytes).trim().to_string();
     if stdout_text.is_empty() {
-        bail!("kimi returned empty output");
+        warn!("kimi returned empty output; proceeding without review");
+        return Ok(ReviewResult {
+            issues_found: false,
+            issues: Vec::new(),
+            summary: Some("kimi returned empty output".to_string()),
+            future_tasks: Vec::new(),
+            timed_out: true,
+        });
     }
 
     // Parse the stream-json output from kimi
     // Format: {"role":"assistant","content":[{"type":"think","think":"..."},{"type":"text","text":"..."}]}
     let result = extract_kimi_text_content(&stdout_text).unwrap_or_else(|| stdout_text.clone());
     if result.is_empty() {
-        bail!("kimi returned empty review output (no text content in response)");
+        warn!("kimi returned empty review output; proceeding without review");
+        return Ok(ReviewResult {
+            issues_found: false,
+            issues: Vec::new(),
+            summary: Some("kimi returned empty review output".to_string()),
+            future_tasks: Vec::new(),
+            timed_out: true,
+        });
     }
 
     // Try to parse JSON from output
@@ -3853,7 +3929,17 @@ fn run_openrouter_review(
         .unwrap_or_default();
 
     if output.is_empty() {
-        bail!("OpenRouter returned empty review output");
+        warn!(
+            model = model_id,
+            "OpenRouter returned empty review output; proceeding without review"
+        );
+        return Ok(ReviewResult {
+            issues_found: false,
+            issues: Vec::new(),
+            summary: Some("OpenRouter returned empty review output".to_string()),
+            future_tasks: Vec::new(),
+            timed_out: true,
+        });
     }
 
     println!("{}", output);
@@ -4079,9 +4165,7 @@ fn openrouter_chat_completion_with_retry(
         }
     }
 
-    Err(last_err.unwrap_or_else(|| {
-        anyhow::anyhow!("OpenRouter request failed after retries")
-    }))
+    Err(last_err.unwrap_or_else(|| anyhow::anyhow!("OpenRouter request failed after retries")))
 }
 
 /// Run Rise daemon to review staged changes for bugs and performance issues.
@@ -4181,7 +4265,7 @@ fn run_rise_review(
     })
 }
 
-fn ensure_git_repo() -> Result<()> {
+pub(crate) fn ensure_git_repo() -> Result<()> {
     let _ = vcs::ensure_jj_repo()?;
     let output = Command::new("git")
         .args(["rev-parse", "--git-dir"])
@@ -4196,7 +4280,7 @@ fn ensure_git_repo() -> Result<()> {
     Ok(())
 }
 
-fn git_root_or_cwd() -> std::path::PathBuf {
+pub(crate) fn git_root_or_cwd() -> std::path::PathBuf {
     match git_capture(&["rev-parse", "--show-toplevel"]) {
         Ok(root) => std::path::PathBuf::from(root.trim()),
         Err(_) => std::env::current_dir().unwrap_or_default(),
@@ -4282,7 +4366,9 @@ fn ensure_no_unwanted_staged(repo_root: &Path) -> Result<()> {
         if path == ".rise" || path.starts_with(".rise/") || path.contains("/.rise/") {
             ignore_entries.insert(".rise/");
         }
-        if path.ends_with(".pyc") || path.contains("/__pycache__/") || path.ends_with("/__pycache__")
+        if path.ends_with(".pyc")
+            || path.contains("/__pycache__/")
+            || path.ends_with("/__pycache__")
         {
             ignore_entries.insert("__pycache__/");
             ignore_entries.insert("*.pyc");
@@ -4363,7 +4449,10 @@ fn unwanted_reason(path: &str) -> Option<&'static str> {
     if path.ends_with(".pyc") {
         return Some("python bytecode");
     }
-    if path.ends_with("/__pycache__") || path.contains("/__pycache__/") || path.starts_with("__pycache__/") {
+    if path.ends_with("/__pycache__")
+        || path.contains("/__pycache__/")
+        || path.starts_with("__pycache__/")
+    {
        return Some("python cache");
     }
     None
@@ -4444,6 +4533,21 @@ fn record_undo_action(repo_root: &Path, pushed: bool, message: Option<&str>) {
 
 const COMMIT_QUEUE_DIR: &str = ".ai/internal/commit-queue";
 
+#[derive(Debug, Serialize)]
+struct CommitQueueEvent {
+    ts_ms: i64,
+    action: String, // queued, approved, dropped, updated, pr_create, ...
+    repo_root: String,
+    branch: String,
+    commit_sha: String,
+    subject: String,
+    record_path: Option<String>,
+    review_issues_found: bool,
+    review_issues_count: i64,
+    review_timed_out: bool,
+    review_summary: Option<String>,
+}
+
 #[derive(Debug, Clone, Serialize, Deserialize)]
 struct CommitQueueEntry {
     version: u8,
@@ -4508,11 +4612,7 @@ struct RiseReviewSession {
 }
 
 fn short_sha(sha: &str) -> &str {
-    if sha.len() <= 7 {
-        sha
-    } else {
-        &sha[..7]
-    }
+    if sha.len() <= 7 { sha } else { &sha[..7] }
 }
 
 fn commit_queue_dir(repo_root: &Path) -> PathBuf {
@@ -4532,6 +4632,105 @@ fn write_commit_queue_entry(repo_root: &Path, entry: &CommitQueueEntry) -> Result<PathBuf> {
     Ok(path)
 }
 
+fn commit_queue_events_path() -> Option<PathBuf> {
+    let home = dirs::home_dir()?;
+    Some(
+        home.join(".config")
+            .join("flow")
+            .join("commit-queue")
+            .join("events.jsonl"),
+    )
+}
+
+fn append_commit_queue_event(event: &CommitQueueEvent) {
+    let Some(path) = commit_queue_events_path() else {
+        return;
+    };
+    if let Some(parent) = path.parent() {
+        let _ = fs::create_dir_all(parent);
+    }
+    let Ok(line) = serde_json::to_string(event) else {
+        return;
+    };
+    // Best-effort. Append a single JSON object per line.
+ let _ = fs::OpenOptions::new() + .create(true) + .append(true) + .open(&path) + .and_then(|mut f| { + use std::io::Write; + f.write_all(line.as_bytes())?; + f.write_all(b"\n")?; + Ok(()) + }); +} + +fn emit_commit_queue_side_effects( + action: &str, + entry: &CommitQueueEntry, + record_path: Option<&Path>, + review: Option<&ReviewResult>, +) { + let ts_ms = chrono::Utc::now().timestamp_millis(); + let subject = entry + .message + .lines() + .next() + .unwrap_or("no message") + .trim() + .to_string(); + + let (issues_found, issues_count, timed_out, summary) = match review { + Some(r) => ( + r.issues_found, + r.issues.len().min(i64::MAX as usize) as i64, + r.timed_out, + r.summary.clone(), + ), + None => (false, 0, false, None), + }; + + append_commit_queue_event(&CommitQueueEvent { + ts_ms, + action: action.to_string(), + repo_root: entry.repo_root.clone(), + branch: entry.branch.clone(), + commit_sha: entry.commit_sha.clone(), + subject: subject.clone(), + record_path: record_path.map(|p| p.display().to_string()), + review_issues_found: issues_found, + review_issues_count: issues_count, + review_timed_out: timed_out, + review_summary: summary.clone(), + }); + + // When a commit is queued due to issues, proactively suggest a Codex review on that commit. + // This is a Lin proposal (human-in-the-loop), not an auto-run. + if action == "queued" && (issues_found || timed_out) { + let title = format!("Review queued commit {}", short_sha(&entry.commit_sha)); + let mut ctx = format!( + "Repo: {}\nBranch: {}\nCommit: {}\nSubject: {}\n", + entry.repo_root, entry.branch, entry.commit_sha, subject + ); + if let Some(s) = summary + .as_deref() + .map(|s| s.trim()) + .filter(|s| !s.is_empty()) + { + ctx.push_str("\nReview summary:\n"); + ctx.push_str(s); + ctx.push('\n'); + } + + let action = format!( + "codex review --commit {} --title \"{}\"", + entry.commit_sha, + subject.replace('"', "'") + ); + let _ = notify::send_proposal(&action, Some(&title), Some(&ctx), 60 * 30); + } +} + fn format_review_body(review: &ReviewResult) -> Option { if review.issues.is_empty() { return None; @@ -4606,7 +4805,11 @@ fn resolve_review_files(repo_root: &Path, commit_sha: &str) -> Vec Resu }; let path = review_dir.join(format!("review-{}.json", entry.commit_sha)); - let payload = serde_json::to_string_pretty(&session).context("serialize rise review session")?; + let payload = + serde_json::to_string_pretty(&session).context("serialize rise review session")?; fs::write(&path, payload).context("write rise review session")?; Ok(path) } @@ -4729,7 +4933,9 @@ fn queue_commit_for_review( message: &str, review: Option<&ReviewResult>, ) -> Result { - let commit_sha = git_capture_in(repo_root, &["rev-parse", "HEAD"])?.trim().to_string(); + let commit_sha = git_capture_in(repo_root, &["rev-parse", "HEAD"])? + .trim() + .to_string(); let branch = git_capture_in(repo_root, &["rev-parse", "--abbrev-ref", "HEAD"]) .unwrap_or_else(|_| "unknown".to_string()) .trim() @@ -4766,7 +4972,7 @@ fn queue_commit_for_review( }; let path = write_commit_queue_entry(repo_root, &entry)?; - let _ = path; + emit_commit_queue_side_effects("queued", &entry, Some(&path), review); if let Err(err) = write_rise_review_session(repo_root, &entry) { debug!("failed to write rise review session: {}", err); } @@ -4776,8 +4982,7 @@ fn queue_commit_for_review( fn open_review_in_rise(repo_root: &Path, commit_sha: &str) { // Prefer rise-app (VS Code fork) because it has the best multi-file diff UX. // Fall back to `rise review open` if rise-app isn't installed. 
-    let (cmd, args): (String, Vec<String>) = if let Ok(rise_app_path) = which::which("rise-app")
-    {
+    let (cmd, args): (String, Vec<String>) = if let Ok(rise_app_path) = which::which("rise-app") {
         // Ensure review file exists, then open it explicitly.
         let review_file = rise_review_path(repo_root, commit_sha);
         if !review_file.exists() {
@@ -4791,8 +4996,14 @@ fn open_review_in_rise(repo_root: &Path, commit_sha: &str) {
         // In that case, execute it with node.
         let launch_with_node = fs::read(&rise_app_path)
             .ok()
-            .and_then(|bytes| bytes.get(0..128).map(|chunk| String::from_utf8_lossy(chunk).to_string()))
-            .map(|head| !head.starts_with("#!") && (head.starts_with("/*") || head.starts_with("//")))
+            .and_then(|bytes| {
+                bytes
+                    .get(0..128)
+                    .map(|chunk| String::from_utf8_lossy(chunk).to_string())
+            })
+            .map(|head| {
+                !head.starts_with("#!") && (head.starts_with("/*") || head.starts_with("//"))
+            })
             .unwrap_or(false);
 
         if launch_with_node {
@@ -4894,7 +5105,12 @@ fn refresh_queue_entry_commit(repo_root: &Path, entry: &mut CommitQueueEntry) -> Result<bool> {
     ) else {
         return Ok(false);
     };
-    let new_sha = output.split_whitespace().next().unwrap_or_default().trim().to_string();
+    let new_sha = output
+        .split_whitespace()
+        .next()
+        .unwrap_or_default()
+        .trim()
+        .to_string();
     if new_sha.is_empty() || new_sha == entry.commit_sha {
         return Ok(false);
     }
@@ -5057,7 +5273,7 @@ fn jj_bookmark_exists(repo_root: &Path, name: &str) -> bool {
     false
 }
 
-fn jj_run_in(repo_root: &Path, args: &[&str]) -> Result<()> {
+pub(crate) fn jj_run_in(repo_root: &Path, args: &[&str]) -> Result<()> {
     let status = Command::new("jj")
         .current_dir(repo_root)
         .args(args)
@@ -5069,7 +5285,7 @@ fn jj_run_in(repo_root: &Path, args: &[&str]) -> Result<()> {
     Ok(())
 }
 
-fn jj_capture_in(repo_root: &Path, args: &[&str]) -> Result<String> {
+pub(crate) fn jj_capture_in(repo_root: &Path, args: &[&str]) -> Result<String> {
     let output = Command::new("jj")
         .current_dir(repo_root)
         .args(args)
@@ -5081,7 +5297,7 @@ fn jj_capture_in(repo_root: &Path, args: &[&str]) -> Result<String> {
     Ok(String::from_utf8_lossy(&output.stdout).to_string())
 }
 
-fn ensure_gh_available() -> Result<()> {
+pub(crate) fn ensure_gh_available() -> Result<()> {
     let status = Command::new("gh")
         .args(["--version"])
         .stdout(Stdio::null())
@@ -5094,7 +5310,7 @@ fn ensure_gh_available() -> Result<()> {
     Ok(())
 }
 
-fn gh_capture_in(repo_root: &Path, args: &[&str]) -> Result<String> {
+pub(crate) fn gh_capture_in(repo_root: &Path, args: &[&str]) -> Result<String> {
     let output = Command::new("gh")
         .current_dir(repo_root)
         .args(args)
@@ -5110,7 +5326,7 @@ fn gh_capture_in(repo_root: &Path, args: &[&str]) -> Result<String> {
     Ok(String::from_utf8_lossy(&output.stdout).to_string())
 }
 
-fn github_repo_from_remote_url(url: &str) -> Option<String> {
+pub(crate) fn github_repo_from_remote_url(url: &str) -> Option<String> {
     let trimmed = url.trim().trim_end_matches('/');
     if trimmed.is_empty() {
         return None;
@@ -5129,7 +5345,7 @@ fn github_repo_from_remote_url(url: &str) -> Option<String> {
     None
 }
 
-fn resolve_github_repo(repo_root: &Path) -> Result<String> {
+pub(crate) fn resolve_github_repo(repo_root: &Path) -> Result<String> {
    // First try origin URL.
if let Ok(url) = git_capture_in(repo_root, &["remote", "get-url", "origin"]) { if let Some(repo) = github_repo_from_remote_url(&url) { @@ -5152,12 +5368,14 @@ fn resolve_github_repo(repo_root: &Path) -> Result { .context("failed to resolve GitHub repo for current directory")?; let repo = repo.trim(); if repo.is_empty() { - bail!("unable to determine GitHub repo (origin URL not GitHub, and `gh repo view` returned empty)"); + bail!( + "unable to determine GitHub repo (origin URL not GitHub, and `gh repo view` returned empty)" + ); } Ok(repo.to_string()) } -fn sanitize_ref_component(input: &str) -> String { +pub(crate) fn sanitize_ref_component(input: &str) -> String { let mut out = String::new(); let mut last_sep = false; for ch in input.chars() { @@ -5197,16 +5415,26 @@ fn default_pr_head(entry: &CommitQueueEntry) -> String { ) } -fn ensure_pr_head_pushed(repo_root: &Path, head: &str, commit_sha: &str) -> Result<()> { +pub(crate) fn ensure_pr_head_pushed(repo_root: &Path, head: &str, commit_sha: &str) -> Result<()> { // Prefer jj bookmarks when available. if which::which("jj").is_ok() { // Ensure bookmark points at the commit, then push it. jj_run_in( repo_root, - &["bookmark", "set", head, "-r", commit_sha, "--allow-backwards"], + &[ + "bookmark", + "set", + head, + "-r", + commit_sha, + "--allow-backwards", + ], )?; // We often push a brand new review/pr bookmark as the PR head. - jj_run_in(repo_root, &["git", "push", "--bookmark", head, "--allow-new"])?; + jj_run_in( + repo_root, + &["git", "push", "--bookmark", head, "--allow-new"], + )?; return Ok(()); } @@ -5226,7 +5454,7 @@ fn pr_number_from_url(url: &str) -> Option { parts.last()?.parse().ok() } -fn gh_find_open_pr_by_head( +pub(crate) fn gh_find_open_pr_by_head( repo_root: &Path, repo: &str, head: &str, @@ -5262,7 +5490,7 @@ fn gh_find_open_pr_by_head( } } -fn gh_create_pr( +pub(crate) fn gh_create_pr( repo_root: &Path, repo: &str, head: &str, @@ -5272,17 +5500,7 @@ fn gh_create_pr( draft: bool, ) -> Result<(u64, String)> { let mut args: Vec<&str> = vec![ - "pr", - "create", - "--repo", - repo, - "--head", - head, - "--base", - base, - "--title", - title, - "--body", + "pr", "create", "--repo", repo, "--head", head, "--base", base, "--title", title, "--body", body, ]; if draft { @@ -5323,7 +5541,7 @@ fn gh_create_pr( ); } -fn open_in_browser(url: &str) -> Result<()> { +pub(crate) fn open_in_browser(url: &str) -> Result<()> { #[cfg(target_os = "macos")] { let status = Command::new("open").arg(url).status()?; @@ -5343,7 +5561,7 @@ fn open_in_browser(url: &str) -> Result<()> { } } -fn commit_message_title_body(message: &str) -> (String, String) { +pub(crate) fn commit_message_title_body(message: &str) -> (String, String) { let mut lines = message.lines(); let title = lines.next().unwrap_or("no title").trim().to_string(); let rest = lines.collect::>().join("\n").trim().to_string(); @@ -5366,12 +5584,7 @@ pub fn run_commit_queue(cmd: CommitQueueCommand) -> Result<()> { println!("Queued commits:"); for mut entry in entries { let _ = refresh_queue_entry_commit(&repo_root, &mut entry); - let subject = entry - .message - .lines() - .next() - .unwrap_or("no message") - .trim(); + let subject = entry.message.lines().next().unwrap_or("no message").trim(); let created_at = format_queue_created_at(&entry.created_at); let bookmark = entry .review_bookmark @@ -5469,9 +5682,8 @@ pub fn run_commit_queue(cmd: CommitQueueCommand) -> Result<()> { ); } - let current_branch = - git_capture_in(&repo_root, &["rev-parse", "--abbrev-ref", "HEAD"]) - 
.unwrap_or_else(|_| "unknown".to_string()); + let current_branch = git_capture_in(&repo_root, &["rev-parse", "--abbrev-ref", "HEAD"]) + .unwrap_or_else(|_| "unknown".to_string()); if current_branch.trim() != entry.branch && !force { bail!( "Queued commit was created on branch {} but current branch is {}. Checkout the branch or re-run with --force.", @@ -5481,9 +5693,10 @@ pub fn run_commit_queue(cmd: CommitQueueCommand) -> Result<()> { } if git_try_in(&repo_root, &["fetch", "--quiet"]).is_ok() { - if let Ok(counts) = - git_capture_in(&repo_root, &["rev-list", "--left-right", "--count", "@{u}...HEAD"]) - { + if let Ok(counts) = git_capture_in( + &repo_root, + &["rev-list", "--left-right", "--count", "@{u}...HEAD"], + ) { let parts: Vec<&str> = counts.split_whitespace().collect(); if parts.len() == 2 { let behind = parts[0].parse::().unwrap_or(0); @@ -5540,6 +5753,12 @@ pub fn run_commit_queue(cmd: CommitQueueCommand) -> Result<()> { } if pushed { + emit_commit_queue_side_effects( + "approved", + &entry, + entry.record_path.as_deref(), + None, + ); if let (Some(before_sha), Ok(after_sha)) = ( before_sha, git_capture_in(&repo_root, &["rev-parse", "HEAD"]), @@ -5577,9 +5796,8 @@ pub fn run_commit_queue(cmd: CommitQueueCommand) -> Result<()> { let _ = refresh_queue_entry_commit(&repo_root, entry); } - let current_branch = - git_capture_in(&repo_root, &["rev-parse", "--abbrev-ref", "HEAD"]) - .unwrap_or_else(|_| "unknown".to_string()); + let current_branch = git_capture_in(&repo_root, &["rev-parse", "--abbrev-ref", "HEAD"]) + .unwrap_or_else(|_| "unknown".to_string()); let current_branch = current_branch.trim().to_string(); let mut candidates = Vec::new(); @@ -5606,9 +5824,10 @@ pub fn run_commit_queue(cmd: CommitQueueCommand) -> Result<()> { } if git_try_in(&repo_root, &["fetch", "--quiet"]).is_ok() { - if let Ok(counts) = - git_capture_in(&repo_root, &["rev-list", "--left-right", "--count", "@{u}...HEAD"]) - { + if let Ok(counts) = git_capture_in( + &repo_root, + &["rev-list", "--left-right", "--count", "@{u}...HEAD"], + ) { let parts: Vec<&str> = counts.split_whitespace().collect(); if parts.len() == 2 { let behind = parts[0].parse::().unwrap_or(0); @@ -5684,8 +5903,8 @@ pub fn run_commit_queue(cmd: CommitQueueCommand) -> Result<()> { ); } - let head_sha = git_capture_in(&repo_root, &["rev-parse", "HEAD"]) - .unwrap_or_default(); + let head_sha = + git_capture_in(&repo_root, &["rev-parse", "HEAD"]).unwrap_or_default(); let head_sha = head_sha.trim(); let mut approved = 0; let mut skipped = 0; @@ -5723,6 +5942,7 @@ pub fn run_commit_queue(cmd: CommitQueueCommand) -> Result<()> { CommitQueueAction::Drop { hash } => { let mut entry = resolve_commit_queue_entry(&repo_root, &hash)?; let _ = refresh_queue_entry_commit(&repo_root, &mut entry); + emit_commit_queue_side_effects("dropped", &entry, entry.record_path.as_deref(), None); if let Some(bookmark) = entry.review_bookmark.as_ref() { delete_review_bookmark(&repo_root, bookmark); } @@ -5744,33 +5964,35 @@ pub fn run_commit_queue(cmd: CommitQueueCommand) -> Result<()> { let head = default_pr_head(&entry); ensure_pr_head_pushed(&repo_root, &head, &entry.commit_sha)?; - let (number, url) = if let Some(found) = gh_find_open_pr_by_head(&repo_root, &repo, &head)? 
{ - found - } else { - let (title, body_rest) = commit_message_title_body(&entry.message); - let mut body = String::new(); - if !body_rest.is_empty() { - body.push_str(&body_rest); - body.push_str("\n\n"); - } - if let Some(summary) = entry - .summary - .as_deref() - .map(|s| s.trim()) - .filter(|s| !s.is_empty()) - { - body.push_str("Review summary:\n"); - body.push_str(summary); - body.push('\n'); - } - gh_create_pr(&repo_root, &repo, &head, &base, &title, body.trim(), draft)? - }; + let (number, url) = + if let Some(found) = gh_find_open_pr_by_head(&repo_root, &repo, &head)? { + found + } else { + let (title, body_rest) = commit_message_title_body(&entry.message); + let mut body = String::new(); + if !body_rest.is_empty() { + body.push_str(&body_rest); + body.push_str("\n\n"); + } + if let Some(summary) = entry + .summary + .as_deref() + .map(|s| s.trim()) + .filter(|s| !s.is_empty()) + { + body.push_str("Review summary:\n"); + body.push_str(summary); + body.push('\n'); + } + gh_create_pr(&repo_root, &repo, &head, &base, &title, body.trim(), draft)? + }; entry.pr_number = Some(number); entry.pr_url = Some(url.clone()); entry.pr_head = Some(head.clone()); entry.pr_base = Some(base.clone()); let _ = write_commit_queue_entry(&repo_root, &entry); + emit_commit_queue_side_effects("pr_create", &entry, entry.record_path.as_deref(), None); println!("PR: {}", url); if open { @@ -5798,13 +6020,26 @@ pub fn run_commit_queue(cmd: CommitQueueCommand) -> Result<()> { // Create it if missing (as draft). ensure_pr_head_pushed(&repo_root, &head, &entry.commit_sha)?; let (title, body_rest) = commit_message_title_body(&entry.message); - let (number, url) = - gh_create_pr(&repo_root, &repo, &head, &base, &title, body_rest.trim(), true)?; + let (number, url) = gh_create_pr( + &repo_root, + &repo, + &head, + &base, + &title, + body_rest.trim(), + true, + )?; entry.pr_number = Some(number); entry.pr_url = Some(url.clone()); entry.pr_head = Some(head.clone()); entry.pr_base = Some(base.clone()); let _ = write_commit_queue_entry(&repo_root, &entry); + emit_commit_queue_side_effects( + "pr_open_create", + &entry, + entry.record_path.as_deref(), + None, + ); url }; @@ -6069,7 +6304,7 @@ fn git_run(args: &[&str]) -> Result<()> { Ok(()) } -fn git_run_in(workdir: &std::path::Path, args: &[&str]) -> Result<()> { +pub(crate) fn git_run_in(workdir: &std::path::Path, args: &[&str]) -> Result<()> { let mut cmd = Command::new("git"); if args.first() == Some(&"commit") { cmd.env("FLOW_COMMIT", "1"); @@ -6174,7 +6409,7 @@ fn git_try_in(workdir: &std::path::Path, args: &[&str]) -> Result<()> { Ok(()) } -fn git_capture(args: &[&str]) -> Result { +pub(crate) fn git_capture(args: &[&str]) -> Result { let output = Command::new("git") .args(args) .output() @@ -6187,7 +6422,7 @@ fn git_capture(args: &[&str]) -> Result { Ok(String::from_utf8_lossy(&output.stdout).to_string()) } -fn git_capture_in(workdir: &std::path::Path, args: &[&str]) -> Result { +pub(crate) fn git_capture_in(workdir: &std::path::Path, args: &[&str]) -> Result { let output = Command::new("git") .current_dir(workdir) .args(args) @@ -6768,9 +7003,7 @@ fn openrouter_api_key() -> Result { } if is_local_env_backend() { - if let Ok(vars) = - crate::env::fetch_personal_env_vars(&["OPENROUTER_API_KEY".to_string()]) - { + if let Ok(vars) = crate::env::fetch_personal_env_vars(&["OPENROUTER_API_KEY".to_string()]) { if let Some(value) = vars.get("OPENROUTER_API_KEY") { if !value.trim().is_empty() { return Ok(value.clone()); @@ -6812,7 +7045,13 @@ fn 
record_review_tasks(repo_root: &Path, review: &ReviewResult, model_label: &st let Some(beads_dir) = review_tasks_beads_dir(repo_root) else { return; }; - match write_review_tasks(&beads_dir, repo_root, tasks, review.summary.as_deref(), model_label) { + match write_review_tasks( + &beads_dir, + repo_root, + tasks, + review.summary.as_deref(), + model_label, + ) { Ok(created) => { if created > 0 { println!( @@ -6834,24 +7073,28 @@ fn review_tasks_beads_dir(repo_root: &Path) -> Option { let trimmed = dir.trim(); if !trimmed.is_empty() { let candidate = std::path::PathBuf::from(trimmed); - return Some(resolve_review_tasks_dir(repo_root, candidate, allow_in_repo)); + return Some(resolve_review_tasks_dir( + repo_root, + candidate, + allow_in_repo, + )); } } if let Ok(root) = env::var("FLOW_REVIEW_TASKS_ROOT") { let trimmed = root.trim(); if !trimmed.is_empty() { let candidate = std::path::PathBuf::from(trimmed).join(".beads"); - return Some(resolve_review_tasks_dir(repo_root, candidate, allow_in_repo)); + return Some(resolve_review_tasks_dir( + repo_root, + candidate, + allow_in_repo, + )); } } dirs::home_dir().map(|home| home.join(".beads")) } -fn resolve_review_tasks_dir( - repo_root: &Path, - candidate: PathBuf, - allow_in_repo: bool, -) -> PathBuf { +fn resolve_review_tasks_dir(repo_root: &Path, candidate: PathBuf, allow_in_repo: bool) -> PathBuf { let resolved = if candidate.is_relative() { repo_root.join(candidate) } else { @@ -6891,7 +7134,9 @@ fn write_review_tasks( .file_name() .map(|name| name.to_string_lossy().to_string()) .unwrap_or_else(|| "project".to_string()); - let summary = summary.map(|text| text.trim()).filter(|text| !text.is_empty()); + let summary = summary + .map(|text| text.trim()) + .filter(|text| !text.is_empty()); let mut created = 0; for task in tasks { @@ -7476,11 +7721,7 @@ fn read_tail_bytes(path: &Path, max_bytes: u64) -> Result> { Ok(buf) } -fn write_agent_trace_file( - bundle_path: &Path, - rel_path: &str, - data: &[u8], -) -> Result<()> { +fn write_agent_trace_file(bundle_path: &Path, rel_path: &str, data: &[u8]) -> Result<()> { let target = bundle_path.join(rel_path); if let Some(parent) = target.parent() { fs::create_dir_all(parent)?; @@ -7557,7 +7798,10 @@ fn write_agent_traces(bundle_path: &Path, repo_root: &Path) { ("agent/fish/last.stdout", fish_dir.join("last.stdout")), ("agent/fish/last.stderr", fish_dir.join("last.stderr")), ("agent/fish/rise.meta", fish_dir.join("rise.meta")), - ("agent/fish/rise.history.jsonl", fish_dir.join("rise.history.jsonl")), + ( + "agent/fish/rise.history.jsonl", + fish_dir.join("rise.history.jsonl"), + ), ]; for (rel, path) in fish_files { if !path.exists() { @@ -7607,9 +7851,7 @@ fn write_agent_learning( .filter(|s| !s.trim().is_empty()) .unwrap_or_else(|| commit_message.to_string()); let issues = review.map(|r| r.issues.clone()).unwrap_or_default(); - let future_tasks = review - .map(|r| r.future_tasks.clone()) - .unwrap_or_default(); + let future_tasks = review.map(|r| r.future_tasks.clone()).unwrap_or_default(); let root_cause = if !summary.trim().is_empty() { summary.clone() @@ -7657,16 +7899,29 @@ fn write_agent_learning( } let _ = write_agent_trace_file(bundle_path, "agent/decision.md", decision_md.as_bytes()); let _ = write_agent_trace_file(bundle_path, "agent/regression.md", regression_md.as_bytes()); - let _ = write_agent_trace_file(bundle_path, "agent/patch_summary.md", patch_summary_md.as_bytes()); + let _ = write_agent_trace_file( + bundle_path, + "agent/patch_summary.md", + patch_summary_md.as_bytes(), + ); - 
let _ = append_learning_store(repo_root, &learn_json, &decision_md, ®ression_md, &patch_summary_md); + let _ = append_learning_store( + repo_root, + &learn_json, + &decision_md, + ®ression_md, + &patch_summary_md, + ); } fn classify_learning_tags(texts: &[String]) -> Vec { let mut tags = HashSet::new(); for text in texts { let lowered = text.to_lowercase(); - if lowered.contains("perf") || lowered.contains("performance") || lowered.contains("latency") { + if lowered.contains("perf") + || lowered.contains("performance") + || lowered.contains("latency") + { tags.insert("perf".to_string()); } if lowered.contains("security") || lowered.contains("vulnerability") { @@ -7692,9 +7947,15 @@ fn classify_learning_tags(texts: &[String]) -> Vec { } fn render_learning_decision_md(learn: &serde_json::Value) -> String { - let summary = learn.get("root_cause").and_then(|v| v.as_str()).unwrap_or("n/a"); + let summary = learn + .get("root_cause") + .and_then(|v| v.as_str()) + .unwrap_or("n/a"); let fix = learn.get("fix").and_then(|v| v.as_str()).unwrap_or("n/a"); - let prevention = learn.get("prevention").and_then(|v| v.as_str()).unwrap_or("n/a"); + let prevention = learn + .get("prevention") + .and_then(|v| v.as_str()) + .unwrap_or("n/a"); format!( "# Decision\n\n## Summary\n{}\n\n## Fix\n{}\n\n## Prevention\n{}\n", summary, fix, prevention @@ -7702,8 +7963,14 @@ fn render_learning_decision_md(learn: &serde_json::Value) -> String { } fn render_learning_regression_md(learn: &serde_json::Value) -> String { - let issue = learn.get("issue").and_then(|v| v.as_str()).unwrap_or("none"); - let prevention = learn.get("prevention").and_then(|v| v.as_str()).unwrap_or("n/a"); + let issue = learn + .get("issue") + .and_then(|v| v.as_str()) + .unwrap_or("none"); + let prevention = learn + .get("prevention") + .and_then(|v| v.as_str()) + .unwrap_or("n/a"); format!( "# Regression Guard\n\n- If you see: {}\n- Do: {}\n", issue, prevention @@ -7781,20 +8048,22 @@ fn learning_store_root(repo_root: &Path) -> Result { if let Ok(value) = env::var("FLOW_BASE_DIR") { let trimmed = value.trim(); if !trimmed.is_empty() { - return Ok(PathBuf::from(trimmed).join(".ai").join("internal").join("learn")); + return Ok(PathBuf::from(trimmed) + .join(".ai") + .join("internal") + .join("learn")); } } if let Some(home) = dirs::home_dir() { - return Ok( - home.join("code") - .join("org") - .join("linsa") - .join("base") - .join(".ai") - .join("internal") - .join("learn"), - ); + return Ok(home + .join("code") + .join("org") + .join("linsa") + .join("base") + .join(".ai") + .join("internal") + .join("learn")); } Ok(repo_root.join(".ai").join("internal").join("learn")) @@ -7889,8 +8158,8 @@ fn try_capture_unhash_bundle( None => ai::get_sessions_for_gitedit(&repo_root.to_path_buf()).unwrap_or_default(), }; if !sessions_data.is_empty() { - let json = serde_json::to_string_pretty(&sessions_data) - .context("serialize sessions.json")?; + let json = + serde_json::to_string_pretty(&sessions_data).context("serialize sessions.json")?; fs::write(bundle_path.join("sessions.json"), json).context("write sessions.json")?; } @@ -7948,8 +8217,7 @@ fn try_capture_unhash_bundle( gitedit_session_hash: gitedit_session_hash.map(|s| s.to_string()), session_count: sessions_data.len(), }; - let meta_json = - serde_json::to_string_pretty(&metadata).context("serialize commit.json")?; + let meta_json = serde_json::to_string_pretty(&metadata).context("serialize commit.json")?; fs::write(bundle_path.join("commit.json"), meta_json).context("write commit.json")?; let out_file 
= TempBuilder::new() @@ -7971,10 +8239,7 @@ fn try_capture_unhash_bundle( if !output.status.success() { let stdout = String::from_utf8_lossy(&output.stdout); let stderr = String::from_utf8_lossy(&output.stderr); - debug!( - "unhash failed: {} {}{}", - output.status, stdout, stderr - ); + debug!("unhash failed: {} {}{}", output.status, stdout, stderr); return Ok(None); } diff --git a/src/config.rs b/src/config.rs index e9a6fe61..92152c25 100644 --- a/src/config.rs +++ b/src/config.rs @@ -207,7 +207,12 @@ pub struct JjConfig { #[serde(default)] pub remote: Option, /// Auto-track bookmarks on create. - #[serde(default, rename = "auto_track", alias = "auto-track", alias = "autoTrack")] + #[serde( + default, + rename = "auto_track", + alias = "auto-track", + alias = "autoTrack" + )] pub auto_track: Option, /// Prefix for review bookmarks created by flow (e.g., "review"). #[serde( diff --git a/src/daemon.rs b/src/daemon.rs index 39b9f185..05bec0b6 100644 --- a/src/daemon.rs +++ b/src/daemon.rs @@ -17,15 +17,12 @@ use reqwest::blocking::Client; use crate::{ cli::{DaemonAction, DaemonCommand}, config::{self, DaemonConfig, DaemonRestartPolicy}, - env, - supervisor, + env, supervisor, }; /// Run the daemon command. pub fn run(cmd: DaemonCommand) -> Result<()> { - let action = cmd - .action - .unwrap_or(DaemonAction::Status { name: None }); + let action = cmd.action.unwrap_or(DaemonAction::Status { name: None }); let config_path = resolve_flow_toml_path(); if supervisor::try_handle_daemon_action(&action, config_path.as_deref())? { @@ -390,12 +387,6 @@ fn spawn_daemon_process(daemon: &DaemonConfig, binary: &Path) -> Result Result { return Ok(false); } - let install = prompt_yes( - "zerobrew (zb) not found. Install it now? [y/N]: ", - false, - ); + let install = prompt_yes("zerobrew (zb) not found. Install it now? 
[y/N]: ", false); if !install { return Ok(false); diff --git a/src/fix.rs b/src/fix.rs index d6453af3..3c967451 100644 --- a/src/fix.rs +++ b/src/fix.rs @@ -1,7 +1,7 @@ use std::io::{self, IsTerminal, Read, Write}; use std::process::{Command, Stdio}; -use anyhow::{bail, Context, Result}; +use anyhow::{Context, Result, bail}; use crate::cli::FixOpts; use crate::opentui_prompt; @@ -51,7 +51,8 @@ fn try_run_commit_repair(repo_root: &std::path::Path, message: &str) -> Result Result { if trimmed.is_empty() { return Ok(true); } - return Ok(matches!( - trimmed.to_ascii_lowercase().as_str(), - "y" | "yes" - )); + return Ok(matches!(trimmed.to_ascii_lowercase().as_str(), "y" | "yes")); } let mut input = String::new(); @@ -132,10 +130,7 @@ fn confirm_default_yes(prompt: &str) -> Result { if trimmed.is_empty() { return Ok(true); } - Ok(matches!( - trimmed.to_ascii_lowercase().as_str(), - "y" | "yes" - )) + Ok(matches!(trimmed.to_ascii_lowercase().as_str(), "y" | "yes")) } fn run_fix_agent(repo_root: &std::path::Path, agent: &str, message: &str) -> Result<()> { @@ -180,7 +175,11 @@ fn git_top_level() -> Result { Ok(std::path::PathBuf::from(root)) } -fn ensure_clean_or_stash(repo_root: &std::path::Path, allow_stash: bool, stashed: &mut bool) -> Result<()> { +fn ensure_clean_or_stash( + repo_root: &std::path::Path, + allow_stash: bool, + stashed: &mut bool, +) -> Result<()> { let status = git_output(repo_root, &["status", "--porcelain"])?; if status.trim().is_empty() { return Ok(()); @@ -191,7 +190,10 @@ fn ensure_clean_or_stash(repo_root: &std::path::Path, allow_stash: bool, stashed } println!("Stashing local changes..."); - git_status(repo_root, &["stash", "push", "-u", "-m", "f fix auto-stash"])?; + git_status( + repo_root, + &["stash", "push", "-u", "-m", "f fix auto-stash"], + )?; *stashed = true; Ok(()) } diff --git a/src/git_guard.rs b/src/git_guard.rs index e41bfc04..f5289575 100644 --- a/src/git_guard.rs +++ b/src/git_guard.rs @@ -5,6 +5,38 @@ use anyhow::{Context, Result, bail}; use crate::cli::GitRepairOpts; +/// Returns true when the repo has a `.jj` directory (jj colocated mode). +fn is_jj_colocated(repo_root: &Path) -> bool { + repo_root.join(".jj").is_dir() +} + +/// In a jj-colocated repo, detached HEAD is normal. Attach it to main/master +/// so git operations (add, commit, push) work. This is safe because jj keeps +/// git HEAD at the same commit as the main bookmark. +fn jj_auto_checkout(repo_root: &Path) -> Result { + if !is_jj_colocated(repo_root) { + return Ok(false); + } + let target = if git_ref_exists(repo_root, "main") { + "main" + } else if git_ref_exists(repo_root, "master") { + "master" + } else { + return Ok(false); + }; + // Silently attach HEAD — no file changes, just moves HEAD to the branch. + let status = Command::new("git") + .current_dir(repo_root) + .args(["checkout", target]) + .stdout(std::process::Stdio::null()) + .stderr(std::process::Stdio::null()) + .status(); + match status { + Ok(s) if s.success() => Ok(true), + _ => Ok(false), + } +} + #[derive(Debug, Clone)] struct GitState { rebase: bool, @@ -50,7 +82,10 @@ fn ensure_clean_state(repo_root: &Path, action: &str) -> Result<()> { )); } if state.detached { - issues.push("detached HEAD".to_string()); + // In jj-colocated repos, detached HEAD is normal — auto-fix it. 
+ if !jj_auto_checkout(repo_root).unwrap_or(false) { + issues.push("detached HEAD".to_string()); + } } if !issues.is_empty() { @@ -99,18 +134,23 @@ pub fn run_git_repair(opts: GitRepairOpts) -> Result<()> { } if state.detached { - let target = if git_ref_exists(&repo_root, branch) { - branch.to_string() - } else if git_ref_exists(&repo_root, "master") { - "master".to_string() + // Try jj auto-checkout first (silent, safe). + if jj_auto_checkout(&repo_root).unwrap_or(false) { + did_work = true; } else { - bail!( - "Detached HEAD and branch '{}' not found. Checkout a branch manually.", - branch - ); - }; - git_run_in(&repo_root, &["checkout", &target])?; - did_work = true; + let target = if git_ref_exists(&repo_root, branch) { + branch.to_string() + } else if git_ref_exists(&repo_root, "master") { + "master".to_string() + } else { + bail!( + "Detached HEAD and branch '{}' not found. Checkout a branch manually.", + branch + ); + }; + git_run_in(&repo_root, &["checkout", &target])?; + did_work = true; + } } if did_work { diff --git a/src/hash.rs b/src/hash.rs index 6d473ff1..3130939a 100644 --- a/src/hash.rs +++ b/src/hash.rs @@ -2,7 +2,7 @@ use std::env; use std::io::IsTerminal; use std::process::Command; -use anyhow::{bail, Context, Result}; +use anyhow::{Context, Result, bail}; use crate::cli::HashOpts; use crate::env as flow_env; diff --git a/src/health.rs b/src/health.rs index 8b17d8db..7cc12501 100644 --- a/src/health.rs +++ b/src/health.rs @@ -99,14 +99,8 @@ fn ensure_ai_server() -> Result<()> { } let url = resolved.get("AI_SERVER_URL").cloned().unwrap_or_default(); - let model = resolved - .get("AI_SERVER_MODEL") - .cloned() - .unwrap_or_default(); - let token = resolved - .get("AI_SERVER_TOKEN") - .cloned() - .unwrap_or_default(); + let model = resolved.get("AI_SERVER_MODEL").cloned().unwrap_or_default(); + let token = resolved.get("AI_SERVER_TOKEN").cloned().unwrap_or_default(); if url.trim().is_empty() { println!("āš ļø AI server env not configured (AI_SERVER_URL)."); @@ -144,7 +138,9 @@ fn ensure_ai_server() -> Result<()> { } if model.trim().is_empty() { - println!("āš ļø AI_SERVER_MODEL not set. Example: f env set --personal AI_SERVER_MODEL=zai-glm-4.7"); + println!( + "āš ļø AI_SERVER_MODEL not set.
Example: f env set --personal AI_SERVER_MODEL=zai-glm-4.7" + ); } if token.trim().is_empty() { @@ -201,7 +197,10 @@ fn ensure_rise_health() -> Result<()> { let rise_root = config::expand_path("~/code/rise"); if !rise_root.exists() { - println!("ā„¹ļø rise repo not found at {}; skipping.", rise_root.display()); + println!( + "ā„¹ļø rise repo not found at {}; skipping.", + rise_root.display() + ); return Ok(()); } @@ -231,7 +230,10 @@ fn ensure_rise_health() -> Result<()> { println!("āœ… rise health ok"); } Ok(status) => { - println!("āš ļø rise health failed (exit {}).", status.code().unwrap_or(-1)); + println!( + "āš ļø rise health failed (exit {}).", + status.code().unwrap_or(-1) + ); } Err(err) => { println!("āš ļø failed to run rise health: {}", err); @@ -251,7 +253,10 @@ fn ensure_linsa_base_health() -> Result<()> { if base_root.join("flow.toml").exists() { println!("āœ… linsa/base found at {}", base_root.display()); } else { - println!("āš ļø linsa/base found but flow.toml missing: {}", base_root.display()); + println!( + "āš ļø linsa/base found but flow.toml missing: {}", + base_root.display() + ); } Ok(()) @@ -267,7 +272,11 @@ fn ensure_zerg_ai_health() -> Result<()> { let url = env::var("ZERG_AI_URL") .ok() .filter(|v| !v.trim().is_empty()) - .or_else(|| env::var("AI_SERVER_URL").ok().filter(|v| !v.trim().is_empty())) + .or_else(|| { + env::var("AI_SERVER_URL") + .ok() + .filter(|v| !v.trim().is_empty()) + }) .unwrap_or_else(|| "http://127.0.0.1:7331".to_string()); let base = base_ai_url(&url); diff --git a/src/hive.rs b/src/hive.rs index a110b80d..4ade5a11 100644 --- a/src/hive.rs +++ b/src/hive.rs @@ -159,7 +159,9 @@ fn find_agent_spec(name: &str) -> Option<(PathBuf, AgentSource)> { // 2. Global flow: ~/.config/flow/agents/<name>.md if let Some(home) = dirs::home_dir() { - let global_flow = home.join(".config/flow/agents").join(format!("{}.md", name)); + let global_flow = home + .join(".config/flow/agents") + .join(format!("{}.md", name)); if global_flow.exists() { return Some((global_flow, AgentSource::GlobalFlow)); } @@ -202,7 +204,10 @@ pub fn discover_agents(project_agents: &[AgentConfig]) -> Vec<Agent> { for entry in entries.filter_map(|e| e.ok()) { let path = entry.path(); if path.extension().map_or(false, |e| e == "md") { - let stem = path.file_stem().and_then(|s| s.to_str()).map(|s| s.to_string()); + let stem = path + .file_stem() + .and_then(|s| s.to_str()) + .map(|s| s.to_string()); if let Some(name) = stem { if seen.insert(name.clone()) { agents.push(Agent { @@ -227,7 +232,10 @@ pub fn discover_agents(project_agents: &[AgentConfig]) -> Vec<Agent> { for entry in entries.filter_map(|e| e.ok()) { let path = entry.path(); if path.extension().map_or(false, |e| e == "md") { - let stem = path.file_stem().and_then(|s| s.to_str()).map(|s| s.to_string()); + let stem = path + .file_stem() + .and_then(|s| s.to_str()) + .map(|s| s.to_string()); if let Some(name) = stem { if seen.insert(name.clone()) { agents.push(Agent { @@ -311,7 +319,11 @@ pub fn run_agent(agent: &str, prompt: &str) -> Result<()> { .context("Failed to run hive")?; if !status.success() { - anyhow::bail!("hive agent '{}' exited with status {:?}", agent, status.code()); + anyhow::bail!( + "hive agent '{}' exited with status {:?}", + agent, + status.code() + ); } Ok(()) @@ -333,7 +345,11 @@ pub fn run_agent_interactive(agent: &str) -> Result<()> { .context("Failed to run hive")?; if !status.success() { - anyhow::bail!("hive agent '{}' exited with
status {:?}", + agent, + status.code() + ); } Ok(()) @@ -376,8 +392,8 @@ pub fn create_agent(name: &str, global: bool) -> Result { /// Edit an agent spec file pub fn edit_agent(name: &str) -> Result<()> { - let (path, _source) = find_agent_spec(name) - .ok_or_else(|| anyhow::anyhow!("Agent '{}' not found", name))?; + let (path, _source) = + find_agent_spec(name).ok_or_else(|| anyhow::anyhow!("Agent '{}' not found", name))?; let editor = std::env::var("EDITOR").unwrap_or_else(|_| "vim".to_string()); let status = Command::new(&editor) diff --git a/src/jazz_state_stub.rs b/src/jazz_state_stub.rs index cea3c07c..28b28d38 100644 --- a/src/jazz_state_stub.rs +++ b/src/jazz_state_stub.rs @@ -1,7 +1,7 @@ use anyhow::Result; -use crate::history::InvocationRecord; use crate::base_tool; +use crate::history::InvocationRecord; pub fn record_task_run(record: &InvocationRecord) -> Result<()> { // Best-effort: never fail the parent task run if base isn't installed or errors out. diff --git a/src/lib.rs b/src/lib.rs index bd5242a7..15b9d544 100644 --- a/src/lib.rs +++ b/src/lib.rs @@ -54,6 +54,7 @@ pub mod opentui_prompt; pub mod otp; pub mod palette; pub mod parallel; +pub mod pr; pub mod processes; pub mod projects; pub mod proxy; diff --git a/src/macos.rs b/src/macos.rs index 4d5275d5..a138e394 100644 --- a/src/macos.rs +++ b/src/macos.rs @@ -173,12 +173,7 @@ fn run_list(opts: MacosListOpts) -> Result<()> { } else { "disabled".to_string() }; - println!( - " {} [{}] - {}", - svc.id, - svc.category.as_str(), - status - ); + println!(" {} [{}] - {}", svc.id, svc.category.as_str(), status); } println!(); } @@ -202,16 +197,8 @@ fn run_status() -> Result<()> { println!("Running non-Apple services:\n"); for svc in running { - let pid_str = svc - .pid - .map(|p| format!(" (pid {})", p)) - .unwrap_or_default(); - println!( - " {} [{}]{}", - svc.id, - svc.category.as_str(), - pid_str - ); + let pid_str = svc.pid.map(|p| format!(" (pid {})", p)).unwrap_or_default(); + println!(" {} [{}]{}", svc.id, svc.category.as_str(), pid_str); if let Some(prog) = &svc.program { println!(" {}", prog); } @@ -318,10 +305,7 @@ fn run_disable(opts: MacosDisableOpts) -> Result<()> { } if !opts.yes { - print!( - "Disable service '{}'? [y/N] ", - svc.id - ); + print!("Disable service '{}'? 
[y/N] ", svc.id); io::stdout().flush()?; let mut input = String::new(); @@ -506,10 +490,7 @@ fn parse_plist(path: &Path, service_type: ServiceType) -> Option fn enrich_with_launchctl_status(services: &mut [LaunchdService]) { // Query user domain let uid = get_uid(); - if let Ok(output) = Command::new("launchctl") - .args(["list"]) - .output() - { + if let Ok(output) = Command::new("launchctl").args(["list"]).output() { if output.status.success() { let stdout = String::from_utf8_lossy(&output.stdout); parse_launchctl_list(&stdout, services); @@ -583,10 +564,7 @@ fn categorize_service(label: &str) -> ServiceCategory { } // AI tools - if label.contains("lmstudio") - || label.contains("ollama") - || label.contains("copilot") - { + if label.contains("lmstudio") || label.contains("ollama") || label.contains("copilot") { return ServiceCategory::Ai; } @@ -847,10 +825,7 @@ mod tests { #[test] fn test_pattern_match() { - let patterns = vec![ - "com.nikiv.*".to_string(), - "exact.match".to_string(), - ]; + let patterns = vec!["com.nikiv.*".to_string(), "exact.match".to_string()]; assert!(is_pattern_match("com.nikiv.service", &patterns)); assert!(is_pattern_match("com.nikiv.other", &patterns)); diff --git a/src/main.rs b/src/main.rs index 3e5534af..2b6627ec 100644 --- a/src/main.rs +++ b/src/main.rs @@ -11,7 +11,7 @@ use flowd::{ }, code, commit, commits, daemon, deploy, deps, docs, doctor, env, ext, fish_install, fish_trace, fix, fixup, git_guard, hash, health, help_search, history, hive, home, hub, info, init, - init_tracing, install, jj, latest, log_server, macos, notify, otp, palette, parallel, + init_tracing, install, jj, latest, log_server, macos, notify, otp, palette, parallel, pr, processes, projects, proxy, publish, push, registry, release, repos, services, setup, skills, ssh_keys, storage, supervisor, sync, task_match, tasks, todo, tools, traces, undo, upgrade, upstream, web, @@ -461,6 +461,9 @@ fn main() -> Result<()> { Some(Commands::Proxy(cmd)) => { proxy_command(cmd)?; } + Some(Commands::Pr(opts)) => { + pr::run(opts)?; + } Some(Commands::TaskShortcut(args)) => { let Some(task_name) = args.first() else { bail!("no task name provided"); diff --git a/src/notify.rs b/src/notify.rs index 9b8caa9e..09d7ae03 100644 --- a/src/notify.rs +++ b/src/notify.rs @@ -8,6 +8,18 @@ use std::path::PathBuf; use std::time::{SystemTime, UNIX_EPOCH}; use uuid::Uuid; +fn env_flag(name: &str) -> bool { + std::env::var(name) + .ok() + .map(|value| { + matches!( + value.trim().to_ascii_lowercase().as_str(), + "1" | "true" | "yes" | "on" + ) + }) + .unwrap_or(false) +} + /// Proposal format matching Lin's ProposalService.swift #[derive(Debug, Serialize, Deserialize)] struct Proposal { @@ -42,6 +54,66 @@ fn get_proposals_path() -> Result { Ok(path) } +/// Send a proposal to Lin (best-effort). +/// +/// Disable by setting `FLOW_NOTIFY_DISABLE=1`. 
+pub fn send_proposal( + action: &str, + title: Option<&str>, + context: Option<&str>, + expires: i64, +) -> Result<()> { + if env_flag("FLOW_NOTIFY_DISABLE") { + return Ok(()); + } + + let proposals_path = get_proposals_path()?; + + // Ensure the directory exists + if let Some(parent) = proposals_path.parent() { + fs::create_dir_all(parent)?; + } + + // Read existing proposals + let mut proposals: Vec<Proposal> = if proposals_path.exists() { + let content = fs::read_to_string(&proposals_path)?; + serde_json::from_str(&content).unwrap_or_default() + } else { + Vec::new() + }; + + // Get current timestamp + let now = SystemTime::now() + .duration_since(UNIX_EPOCH) + .context("Time went backwards")? + .as_secs() as i64; + + // Create title from action if not provided + let title = title.map(|s| s.to_string()).unwrap_or_else(|| { + if action.starts_with("f ") { + format!("Run: {}", &action[2..]) + } else { + format!("Run: {}", action) + } + }); + + let proposal = Proposal { + id: Uuid::new_v4().to_string(), + timestamp: now, + title, + action: action.to_string(), + context: context.map(|s| s.to_string()), + expires_at: now + expires, + }; + + proposals.push(proposal); + + let content = serde_json::to_string_pretty(&proposals)?; + fs::write(&proposals_path, content)?; + + Ok(()) +} + /// Run the notify command - write a proposal to Lin's proposals.json. pub fn run(cmd: NotifyCommand) -> Result<()> { let proposals_path = get_proposals_path()?; @@ -136,6 +208,10 @@ impl AlertKind { /// Send an alert to Lin's notification banner. /// Alerts are shown as floating banners - errors/warnings stay for 10+ seconds. pub fn send_alert(text: &str, kind: AlertKind) -> Result<()> { + if env_flag("FLOW_NOTIFY_DISABLE") { + return Ok(()); + } + let alerts_path = get_alerts_path()?; // Ensure the directory exists diff --git a/src/opentui_prompt.rs b/src/opentui_prompt.rs index 9fe3310d..daa4535f 100644 --- a/src/opentui_prompt.rs +++ b/src/opentui_prompt.rs @@ -3,7 +3,7 @@ use std::io::{self, IsTerminal}; use crossterm::event::{self, Event, KeyCode, KeyEventKind, KeyModifiers}; use crossterm::terminal::{disable_raw_mode, enable_raw_mode}; -use opentui_lite::{Color, OpenTui, ATTR_BOLD, BORDER_SIMPLE}; +use opentui_lite::{ATTR_BOLD, BORDER_SIMPLE, Color, OpenTui}; pub fn confirm(title: &str, lines: &[String], default_yes: bool) -> Option<bool> { if !io::stdin().is_terminal() || !io::stdout().is_terminal() { @@ -12,7 +12,9 @@ pub fn confirm(title: &str, lines: &[String], default_yes: bool) -> Option<bool> let (width, height) = crossterm::terminal::size().ok()?; let opentui = OpenTui::load().ok()?; - let renderer = opentui.create_renderer(width as u32, height as u32, false).ok()?; + let renderer = opentui + .create_renderer(width as u32, height as u32, false) + .ok()?; renderer.setup_terminal(true); @@ -65,9 +67,20 @@ pub fn confirm(title: &str, lines: &[String], default_yes: bool) -> Option<bool> let hint_y = height.saturating_sub(2) as u32; buffer.draw_text(&hint_line, 3, hint_y, muted, None, 0); - let action = if default_yes { "[Y] Confirm" } else { "[N] Cancel" }; + let action = if default_yes { + "[Y] Confirm" + } else { + "[N] Cancel" + }; let action_line = truncate_width(action, max_width); - buffer.draw_text(&action_line, 3, hint_y.saturating_sub(1), accent, None, ATTR_BOLD); + buffer.draw_text( + &action_line, + 3, + hint_y.saturating_sub(1), + accent, + None, + ATTR_BOLD, + ); renderer.render(true); diff --git a/src/otp.rs b/src/otp.rs index 85a88dd1..e7d02f0e 100644 --- a/src/otp.rs +++ b/src/otp.rs @@ -172,10 +172,7 @@ fn
fetch_item( fn extract_totp_uri(item: &FullItem, field_label: Option<&str>) -> Result<String> { let fields = item.fields.as_ref().ok_or_else(|| { - anyhow::anyhow!( - "item '{}' has no fields; expected a TOTP field", - item.title - ) + anyhow::anyhow!("item '{}' has no fields; expected a TOTP field", item.title) })?; let mut candidates: Vec<&Field> = fields @@ -238,7 +235,12 @@ fn compute_totp(uri: &str) -> Result<String> { compute_totp_from_secret(&secret, period, digits, &algorithm) } -fn compute_totp_from_secret(secret: &str, period: u64, digits: u32, algorithm: &str) -> Result<String> { +fn compute_totp_from_secret( + secret: &str, + period: u64, + digits: u32, + algorithm: &str, +) -> Result<String> { let key = decode_base32(secret)?; let timestamp = SystemTime::now() diff --git a/src/pr.rs b/src/pr.rs new file mode 100644 index 00000000..4b982fdd --- /dev/null +++ b/src/pr.rs @@ -0,0 +1,607 @@ +use std::path::Path; +use std::process::Command; + +use anyhow::{Context, Result, bail}; + +use crate::cli::PrOpts; +use crate::commit; + +pub fn run(opts: PrOpts) -> Result<()> { + commit::ensure_git_repo()?; + commit::ensure_gh_available()?; + + let repo_root = commit::git_root_or_cwd(); + let is_jj = which::which("jj").is_ok() && repo_root.join(".jj").is_dir(); + + let origin_repo = remote_repo(&repo_root, "origin"); + let parsed = parse_pr_args(&opts.args); + let base_repo = if parsed.preview { + // Preview mode: PR exists only on your repo (origin). + origin_repo + .clone() + .or_else(|| commit::resolve_github_repo(&repo_root).ok()) + .context("failed to resolve origin repo for preview PR")? + } else { + // Default: PR base repo prefers upstream (fork workflow). + resolve_base_repo(&repo_root)? + }; + + if is_jj && is_rise_context(&repo_root) { + return run_rise_pr( + &repo_root, + &base_repo, + origin_repo.as_deref(), + &parsed, + &opts, + ); + } + + if is_jj { + return run_jj_pr( + &repo_root, + &base_repo, + origin_repo.as_deref(), + &parsed, + &opts, + ); + } + + run_git_pr( + &repo_root, + &base_repo, + origin_repo.as_deref(), + &parsed, + &opts, + ) +} + +#[derive(Debug, Clone, Default)] +struct ParsedPrArgs { + preview: bool, + title: Option<String>, +} + +fn parse_pr_args(args: &[String]) -> ParsedPrArgs { + let mut preview = false; + let mut rest: Vec<&str> = Vec::new(); + + for (i, a) in args.iter().enumerate() { + if i == 0 && a == "preview" { + preview = true; + continue; + } + rest.push(a.as_str()); + } + + let title = rest + .iter() + .map(|s| s.trim()) + .filter(|s| !s.is_empty()) + .collect::<Vec<_>>() + .join(" "); + ParsedPrArgs { + preview, + title: if title.is_empty() { None } else { Some(title) }, + } +} + +fn run_git_pr( + repo_root: &Path, + base_repo: &str, + origin_repo: Option<&str>, + parsed: &ParsedPrArgs, + opts: &PrOpts, +) -> Result<()> { + let dirty = git_is_dirty(repo_root); + + let (sha, title, body, branch_name) = if dirty { + let message = parsed + .title + .as_deref() + .map(|s| s.to_string()) + .context("working copy has uncommitted changes; provide a PR title (e.g. `f pr \"...\"`) or commit first")?; + + let (title, body) = commit::commit_message_title_body(&message); + let branch_name = opts + .branch + .clone() + .unwrap_or_else(|| derive_branch_name(&title)); + + ensure_on_git_branch(repo_root, &branch_name)?; + let sha = git_commit_all(repo_root, &message)?; + (sha, title, body, branch_name) + } else { + let sha = commit::git_capture_in(repo_root, &["rev-parse", "HEAD"])? + .trim() + .to_string(); + let message = commit::git_capture_in(repo_root, &["log", "-1", "--format=%B"])?
+ .trim() + .to_string(); + if message.is_empty() && parsed.title.is_none() { + bail!("HEAD commit has no message"); + } + + let (title, body) = match parsed.title.as_deref().filter(|s| !s.is_empty()) { + Some(m) => commit::commit_message_title_body(m), + None => commit::commit_message_title_body(&message), + }; + let branch_name = opts + .branch + .clone() + .unwrap_or_else(|| derive_branch_name(&title)); + (sha, title, body, branch_name) + }; + + let head_ref = head_ref(base_repo, origin_repo, &branch_name); + + let existing_pr = commit::gh_find_open_pr_by_head(repo_root, base_repo, &head_ref)?; + if let Some((_number, url)) = &existing_pr { + println!("Updating existing PR: {}", url); + push_branch(repo_root, &branch_name, &sha, false)?; + println!("Force-pushed {} to {}", short_sha(&sha), &branch_name); + if !opts.no_open { + let _ = commit::open_in_browser(url); + } + return Ok(()); + } + + push_branch(repo_root, &branch_name, &sha, false)?; + println!("Pushed branch {}", &branch_name); + + let (_number, url) = commit::gh_create_pr( + repo_root, base_repo, &head_ref, &opts.base, &title, &body, opts.draft, + )?; + println!("Created PR: {}", url); + if !opts.no_open { + let _ = commit::open_in_browser(&url); + } + + Ok(()) +} + +fn run_jj_pr( + repo_root: &Path, + base_repo: &str, + origin_repo: Option<&str>, + parsed: &ParsedPrArgs, + opts: &PrOpts, +) -> Result<()> { + // If a title is provided and the current change is undescribed, set it. + if let Some(title) = parsed.title.as_deref().filter(|s| !s.is_empty()) { + let description = jj_description(repo_root).unwrap_or_default(); + if description.trim().is_empty() { + commit::jj_run_in(repo_root, &["describe", "-m", title])?; + } + } + + let description = jj_description(repo_root)?; + let (title, body) = match parsed.title.as_deref().filter(|s| !s.is_empty()) { + Some(m) => commit::commit_message_title_body(m), + None => commit::commit_message_title_body(&description), + }; + + let branch_name = opts + .branch + .clone() + .unwrap_or_else(|| derive_branch_name(&title)); + let head_ref = head_ref(base_repo, origin_repo, &branch_name); + + let sha = commit::jj_capture_in( + repo_root, + &["log", "-r", "@", "--no-graph", "-T", "commit_id"], + )? + .trim() + .to_string(); + if sha.is_empty() { + bail!("failed to resolve jj commit_id for current change"); + } + + let existing_pr = commit::gh_find_open_pr_by_head(repo_root, base_repo, &head_ref)?; + if let Some((_number, url)) = &existing_pr { + println!("Updating existing PR: {}", url); + push_branch(repo_root, &branch_name, &sha, true)?; + println!("Force-pushed {} to {}", short_sha(&sha), &branch_name); + if !opts.no_open { + let _ = commit::open_in_browser(url); + } + return Ok(()); + } + + push_branch(repo_root, &branch_name, &sha, true)?; + println!("Pushed bookmark {}", &branch_name); + + let (_number, url) = commit::gh_create_pr( + repo_root, base_repo, &head_ref, &opts.base, &title, &body, opts.draft, + )?; + println!("Created PR: {}", url); + + if !opts.no_open { + let _ = commit::open_in_browser(&url); + } + + Ok(()) +} + +fn jj_description(repo_root: &Path) -> Result<String> { + let description = commit::jj_capture_in( + repo_root, + &["log", "-r", "@", "--no-graph", "-T", "description"], + )? + .trim() + .to_string(); + if description.is_empty() { + bail!( + "Current change has no description. Set one with `jj describe` (or pass a title to `f pr \"...\"`)." + ); + } + Ok(description) +} + +/// Push a branch/bookmark for the given commit.
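+///
+/// Hypothetical call, for illustration: `push_branch(repo_root, "my-branch", &sha, true)`
+/// sets the jj bookmark `my-branch` at `sha` and pushes it; with `is_jj = false` it
+/// force-moves the local git branch and pushes with `--force-with-lease`.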
+fn push_branch(repo_root: &Path, branch: &str, sha: &str, is_jj: bool) -> Result<()> { + if is_jj { + // Use jj bookmarks: set the bookmark, then push. + commit::jj_run_in( + repo_root, + &["bookmark", "set", branch, "-r", sha, "--allow-backwards"], + )?; + commit::jj_run_in( + repo_root, + &["git", "push", "--bookmark", branch, "--allow-new"], + )?; + } else { + // Pure git: create/update branch and push to origin. + commit::git_run_in(repo_root, &["branch", "-f", branch, sha])?; + commit::git_run_in( + repo_root, + &["push", "-u", "origin", branch, "--force-with-lease"], + )?; + } + Ok(()) +} + +fn resolve_base_repo(repo_root: &Path) -> Result<String> { + if let Some(repo) = remote_repo(repo_root, "upstream") { + return Ok(repo); + } + // If upstream remote is missing, prefer the GitHub parent repo when this is a fork. + if let Ok(repo) = commit::gh_capture_in( + repo_root, + &[ + "repo", + "view", + "--json", + "parent,nameWithOwner", + "-q", + ".parent.nameWithOwner // .nameWithOwner", + ], + ) { + let repo = repo.trim(); + if !repo.is_empty() { + return Ok(repo.to_string()); + } + } + commit::resolve_github_repo(repo_root) +} + +fn remote_repo(repo_root: &Path, remote: &str) -> Option<String> { + let url = commit::git_capture_in(repo_root, &["remote", "get-url", remote]).ok()?; + commit::github_repo_from_remote_url(&url) +} + +fn head_ref(base_repo: &str, origin_repo: Option<&str>, branch: &str) -> String { + let Some(origin_repo) = origin_repo else { + return branch.to_string(); + }; + if origin_repo == base_repo { + return branch.to_string(); + } + let owner = origin_repo.split('/').next().unwrap_or(origin_repo); + format!("{}:{}", owner, branch) +} + +fn git_is_dirty(repo_root: &Path) -> bool { + commit::git_capture_in(repo_root, &["status", "--porcelain"]) + .map(|s| !s.trim().is_empty()) + .unwrap_or(false) +} + +fn ensure_on_git_branch(repo_root: &Path, branch: &str) -> Result<()> { + let current = commit::git_capture_in(repo_root, &["rev-parse", "--abbrev-ref", "HEAD"])? + .trim() + .to_string(); + if current == branch { + return Ok(()); + } + + if git_branch_exists(repo_root, branch) { + commit::git_run_in(repo_root, &["checkout", branch])?; + } else { + commit::git_run_in(repo_root, &["checkout", "-b", branch])?; + } + Ok(()) +} + +fn git_branch_exists(repo_root: &Path, branch: &str) -> bool { + Command::new("git") + .current_dir(repo_root) + .args([ + "show-ref", + "--verify", + "--quiet", + &format!("refs/heads/{}", branch), + ]) + .status() + .map(|s| s.success()) + .unwrap_or(false) +} + +fn git_commit_all(repo_root: &Path, message: &str) -> Result<String> { + commit::git_run_in(repo_root, &["add", "-A"])?; + + let staged = commit::git_capture_in(repo_root, &["diff", "--cached", "--name-only"])? + .trim() + .to_string(); + if staged.is_empty() { + bail!("no changes to commit"); + } + + let (title, body) = commit::commit_message_title_body(message); + if title.trim().is_empty() { + bail!("PR title/commit message is empty"); + } + + if body.trim().is_empty() { + commit::git_run_in(repo_root, &["commit", "-m", &title])?; + } else { + commit::git_run_in(repo_root, &["commit", "-m", &title, "-m", &body])?; + } + + Ok(commit::git_capture_in(repo_root, &["rev-parse", "HEAD"])? + .trim() + .to_string()) +} + +fn short_sha(sha: &str) -> &str { + if sha.len() <= 7 { sha } else { &sha[..7] } +} + +/// Derive a branch name from a title. +/// e.g.
"Reduce latency with external engine" → "reduce-latency-with-external-engine" +fn derive_branch_name(title: &str) -> String { + let lowered = title.to_lowercase(); + let sanitized = commit::sanitize_ref_component(&lowered); + // Truncate to a reasonable length. + let max_len = 60; + if sanitized.len() > max_len { + match sanitized[..max_len].rfind('-') { + Some(pos) => sanitized[..pos].to_string(), + None => sanitized[..max_len].to_string(), + } + } else { + sanitized + } +} + +/// Check if the current working copy is a descendant of the rise bookmark. +fn is_rise_context(repo_root: &Path) -> bool { + if !repo_root.join(".rise").is_dir() { + return false; + } + let check = commit::jj_capture_in( + repo_root, + &[ + "log", + "-r", + "@ & descendants(rise)", + "--no-graph", + "-T", + "change_id", + ], + ) + .unwrap_or_default(); + !check.trim().is_empty() +} + +/// Create a PR from a rise child by extracting changed files onto a clean branch off main. +fn run_rise_pr( + repo_root: &Path, + base_repo: &str, + origin_repo: Option<&str>, + parsed: &ParsedPrArgs, + opts: &PrOpts, +) -> Result<()> { + // Capture current change's description. + let mut description = commit::jj_capture_in( + repo_root, + &["log", "-r", "@", "--no-graph", "-T", "description"], + )? + .trim() + .to_string(); + + if let Some(title) = parsed.title.as_deref().filter(|s| !s.is_empty()) { + description = title.to_string(); + } + + if description.is_empty() { + bail!( + "Current change has no description. Set one with `jj describe` (or pass a title to `f pr \"...\"`)." + ); + } + + // Capture changed files. + let diff_summary = commit::jj_capture_in(repo_root, &["diff", "--summary", "-r", "@"])?; + let changed_files: Vec<&str> = diff_summary + .lines() + .filter_map(|line| { + let line = line.trim(); + if line.len() > 2 && line.as_bytes()[1] == b' ' { + Some(line[2..].trim()) + } else { + None + } + }) + .collect(); + if changed_files.is_empty() { + bail!("No changed files in current change."); + } + + let original_change = commit::jj_capture_in( + repo_root, + &["log", "-r", "@", "--no-graph", "-T", "change_id"], + )? + .trim() + .to_string(); + + let (title, body) = commit::commit_message_title_body(&description); + let branch_name = opts + .branch + .clone() + .unwrap_or_else(|| derive_branch_name(&title)); + let head_ref = head_ref(base_repo, origin_repo, &branch_name); + + // Determine the base revision — use the PR base option to find the right remote ref. + let base = &opts.base; + // Try upstream first (fork workflow), then origin. In preview mode: origin only. + let base_rev = if parsed.preview { + format!("{}@origin", base) + } else if commit::jj_capture_in( + repo_root, + &[ + "log", + "-r", + &format!("{}@upstream", base), + "--no-graph", + "-T", + "change_id", + ], + ) + .is_ok() + { + format!("{}@upstream", base) + } else { + format!("{}@origin", base) + }; + + println!("==> Rise PR: creating clean change off {}...", base_rev); + + // Create a temp change off the base. + commit::jj_run_in( + repo_root, + &["new", &base_rev, "--no-edit", "-m", &description], + )?; + + // Find the temp change (latest empty child of base). + let temp_change = commit::jj_capture_in( + repo_root, + &[ + "log", + "-r", + &format!("latest(children({}) & empty())", base_rev), + "--no-graph", + "-T", + "change_id", + ], + )? + .trim() + .to_string(); + if temp_change.is_empty() { + bail!("Failed to create temp change for rise PR"); + } + + let result = (|| -> Result<()> { + // Restore changed files from original into temp. 
+ let mut restore_args: Vec<&str> = + vec!["restore", "--from", &original_change, "--to", &temp_change]; + for f in &changed_files { + restore_args.push(f); + } + commit::jj_run_in(repo_root, &restore_args)?; + + // Get the git commit SHA for the temp change. + let sha = commit::jj_capture_in( + repo_root, + &["log", "-r", &temp_change, "--no-graph", "-T", "commit_id"], + )? + .trim() + .to_string(); + + let existing_pr = commit::gh_find_open_pr_by_head(repo_root, base_repo, &head_ref)?; + if let Some((_number, url)) = &existing_pr { + println!("Updating existing PR: {}", url); + push_branch(repo_root, &branch_name, &sha, true)?; + println!("Force-pushed {} to {}", short_sha(&sha), &branch_name); + if !opts.no_open { + let _ = commit::open_in_browser(url); + } + return Ok(()); + } + + push_branch(repo_root, &branch_name, &sha, true)?; + println!("Pushed bookmark {}", &branch_name); + + let (_number, url) = commit::gh_create_pr( + repo_root, base_repo, &head_ref, &opts.base, &title, &body, opts.draft, + )?; + println!("Created PR: {}", url); + if !opts.no_open { + let _ = commit::open_in_browser(&url); + } + Ok(()) + })(); + + // Always abandon the temp change. + let _ = commit::jj_run_in(repo_root, &["abandon", &temp_change]); + + result +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn test_derive_branch_name() { + assert_eq!( + derive_branch_name("Reduce latency with external engine"), + "reduce-latency-with-external-engine" + ); + assert_eq!(derive_branch_name("A/B.C"), "a-b.c"); + } + + #[test] + fn test_head_ref_fork() { + assert_eq!( + head_ref("jj-vcs/jj", Some("nikivdev/jj"), "my-branch"), + "nikivdev:my-branch" + ); + assert_eq!( + head_ref("nikivdev/jj", Some("nikivdev/jj"), "my-branch"), + "my-branch" + ); + assert_eq!(head_ref("x/y", None, "b"), "b"); + } + + #[test] + fn test_parse_pr_args() { + let p = parse_pr_args(&[]); + assert!(!p.preview); + assert!(p.title.is_none()); + + let p = parse_pr_args(&[String::from("preview")]); + assert!(p.preview); + assert!(p.title.is_none()); + + let p = parse_pr_args(&[String::from("preview"), String::from("hello world")]); + assert!(p.preview); + assert_eq!(p.title.as_deref(), Some("hello world")); + + let p = parse_pr_args(&[ + String::from("preview"), + String::from("reduce"), + String::from("latency"), + ]); + assert!(p.preview); + assert_eq!(p.title.as_deref(), Some("reduce latency")); + } +} diff --git a/src/proxy/mod.rs b/src/proxy/mod.rs index 9bf2d153..3744659b 100644 --- a/src/proxy/mod.rs +++ b/src/proxy/mod.rs @@ -141,10 +141,7 @@ pub fn parse_duration(s: &str) -> Duration { } /// Start the proxy server with the given configuration -pub async fn start( - config: ProxyConfig, - targets: Vec, -) -> Result<()> { +pub async fn start(config: ProxyConfig, targets: Vec) -> Result<()> { // Parse listen address let listen_addr: SocketAddr = if config.listen.starts_with(':') { format!("127.0.0.1{}", config.listen).parse() @@ -162,8 +159,8 @@ pub async fn start( let trace_size = parse_size(&config.trace_size); - let trace_buffer = TraceBuffer::init(&trace_dir, trace_size) - .context("Failed to initialize trace buffer")?; + let trace_buffer = + TraceBuffer::init(&trace_dir, trace_size).context("Failed to initialize trace buffer")?; let trace_buffer = Arc::new(trace_buffer); // Build backends @@ -200,7 +197,11 @@ pub async fn start( let summary_state = Arc::new(SummaryState::new(target_names, config.slow_threshold_ms)); // Create server - let server = Arc::new(ProxyServer::new(router, trace_buffer.clone(), 
summary_state.clone())); + let server = Arc::new(ProxyServer::new( + router, + trace_buffer.clone(), + summary_state.clone(), + )); // Start summary writer if enabled if config.trace_summary { diff --git a/src/proxy/server.rs b/src/proxy/server.rs index 07822384..f904e72c 100644 --- a/src/proxy/server.rs +++ b/src/proxy/server.rs @@ -4,20 +4,20 @@ use std::collections::HashMap; use std::net::SocketAddr; -use std::sync::atomic::{AtomicU64, Ordering}; use std::sync::Arc; +use std::sync::atomic::{AtomicU64, Ordering}; use std::time::Instant; use anyhow::{Context, Result}; +use axum::Router; use axum::body::Body; use axum::extract::State; use axum::http::{Request, Response, StatusCode}; use axum::routing::any; -use axum::Router; use tokio::sync::RwLock; use super::summary::SummaryState; -use super::trace::{hash_path, now_ns, TraceBuffer, TraceRecord}; +use super::trace::{TraceBuffer, TraceRecord, hash_path, now_ns}; /// A backend target #[derive(Debug, Clone)] diff --git a/src/proxy/summary.rs b/src/proxy/summary.rs index e90973d6..fba7b74d 100644 --- a/src/proxy/summary.rs +++ b/src/proxy/summary.rs @@ -155,7 +155,10 @@ pub fn compute_summary(buffer: &TraceBuffer, state: &SummaryState) -> TraceSumma let total_requests = buffer.write_index(); let total_errors = records.iter().filter(|r| r.is_error()).count() as u64; let error_rate = if total_requests > 0 { - format!("{:.1}%", (total_errors as f64 / records.len() as f64) * 100.0) + format!( + "{:.1}%", + (total_errors as f64 / records.len() as f64) * 100.0 + ) } else { "0%".to_string() }; @@ -171,7 +174,10 @@ pub fn compute_summary(buffer: &TraceBuffer, state: &SummaryState) -> TraceSumma let mut sorted = latencies.clone(); sorted.sort(); let p99_idx = (sorted.len() as f64 * 0.99) as usize; - sorted.get(p99_idx.min(sorted.len() - 1)).copied().unwrap_or(0) + sorted + .get(p99_idx.min(sorted.len() - 1)) + .copied() + .unwrap_or(0) } else { 0 }; @@ -290,14 +296,14 @@ pub fn compute_summary(buffer: &TraceBuffer, state: &SummaryState) -> TraceSumma if let Some(latencies) = target_latencies.get(&idx) { if !latencies.is_empty() { health.avg_latency_ms = (latencies.iter().map(|&l| l as u64).sum::<u64>() - / latencies.len() as u64) as u32; + / latencies.len() as u64) + as u32; } } if let Some((error_count, last_error)) = target_errors.get(&idx) { health.error_count = *error_count; - health.error_rate = - format!("{:.1}%", (*error_count as f64 / count as f64) * 100.0); + health.error_rate = format!("{:.1}%", (*error_count as f64 / count as f64) * 100.0); health.healthy = (*error_count as f64 / count as f64) < 0.1; // < 10% errors if let Some((err, ts)) = last_error { diff --git a/src/proxy/trace.rs b/src/proxy/trace.rs index c0d406dc..80f580cb 100644 --- a/src/proxy/trace.rs +++ b/src/proxy/trace.rs @@ -8,8 +8,8 @@ use std::os::fd::AsRawFd; use std::os::unix::fs::OpenOptionsExt; use std::path::PathBuf; use std::ptr::{null_mut, write_unaligned}; -use std::sync::atomic::{AtomicU64, Ordering}; use std::sync::OnceLock; +use std::sync::atomic::{AtomicU64, Ordering}; use std::time::Instant; use libc::{CLOCK_MONOTONIC, MAP_SHARED, PROT_READ, PROT_WRITE}; @@ -110,10 +110,8 @@ impl TraceRecord { #[inline] pub fn set_latency_status(&mut self, latency_us: u32, status: u16, method: Method, flags: u8) { - self.words[IDX_LATENCY_STATUS] = (latency_us as u64) << 32 - | (status as u64) << 16 - | (method as u64) << 8 - | flags as u64; + self.words[IDX_LATENCY_STATUS] = + (latency_us as u64) << 32 | (status as u64) << 16 | (method as u64) << 8 | flags as u64; } #[inline] @@
-125,8 +123,9 @@ impl TraceRecord { pub fn set_target_and_trace_id(&mut self, target_idx: u8, path_len: u8, trace_id: u128) { let trace_id_high = (trace_id >> 64) as u64; let trace_id_low = trace_id as u64; - self.words[IDX_TARGET_PATH_LEN] = - (target_idx as u64) << 56 | (path_len as u64) << 48 | (trace_id_high & 0xFFFF_FFFF_FFFF); + self.words[IDX_TARGET_PATH_LEN] = (target_idx as u64) << 56 + | (path_len as u64) << 48 + | (trace_id_high & 0xFFFF_FFFF_FFFF); self.words[IDX_TRACE_ID_LOW] = trace_id_low; } diff --git a/src/release_signing.rs b/src/release_signing.rs index a64960b0..d0ca766b 100644 --- a/src/release_signing.rs +++ b/src/release_signing.rs @@ -7,11 +7,18 @@ use std::{ }; use crate::{ - cli::{ReleaseSigningAction, ReleaseSigningCommand, ReleaseSigningStoreOpts, ReleaseSigningSyncOpts}, + cli::{ + ReleaseSigningAction, ReleaseSigningCommand, ReleaseSigningStoreOpts, + ReleaseSigningSyncOpts, + }, env, }; -const SIGNING_KEYS: [&str; 3] = ["MACOS_SIGN_P12_B64", "MACOS_SIGN_P12_PASSWORD", "MACOS_SIGN_IDENTITY"]; +const SIGNING_KEYS: [&str; 3] = [ + "MACOS_SIGN_P12_B64", + "MACOS_SIGN_P12_PASSWORD", + "MACOS_SIGN_IDENTITY", +]; pub fn run(cmd: ReleaseSigningCommand) -> Result<()> { match cmd.action { @@ -44,12 +51,16 @@ fn status() -> Result<()> { } } else if !apple_dev.is_empty() { println!("Keychain: no Developer ID Application identity found."); - println!("Keychain: Apple Development identity found (not recommended for public distribution):"); + println!( + "Keychain: Apple Development identity found (not recommended for public distribution):" + ); for name in apple_dev { println!(" - {}", name); } println!(); - println!("Next: create/download a Developer ID Application certificate (Apple Developer) and export it as .p12."); + println!( + "Next: create/download a Developer ID Application certificate (Apple Developer) and export it as .p12." + ); } else { println!("Keychain: no code signing identities found."); } @@ -60,7 +71,12 @@ fn status() -> Result<()> { println!(); // This may prompt for Touch ID if using cloud env store. - match env::fetch_personal_env_vars(&SIGNING_KEYS.iter().map(|s| s.to_string()).collect::<Vec<_>>()) { + match env::fetch_personal_env_vars( + &SIGNING_KEYS + .iter() + .map(|s| s.to_string()) + .collect::<Vec<_>>(), + ) { Ok(vars) => { for key in SIGNING_KEYS { if let Some(value) = vars.get(key) { @@ -78,7 +94,9 @@ fn status() -> Result<()> { } println!(); - println!("GitHub: `f release signing sync` will copy env store values into GitHub Actions secrets via `gh`."); + println!( + "GitHub: `f release signing sync` will copy env store values into GitHub Actions secrets via `gh`."
+ ); Ok(()) } @@ -95,7 +113,9 @@ fn list_codesign_identities() -> Result<Vec<String>> { for line in text.lines() { // Example: // 1) "Developer ID Application: Name (TEAMID)" - let Some(quoted) = line.split('"').nth(1) else { continue }; + let Some(quoted) = line.split('"').nth(1) else { + continue; + }; let name = quoted.trim(); if !name.is_empty() { out.push(name.to_string()); @@ -222,7 +242,10 @@ fn gh_secret_set(repo: Option<&str>, name: &str, value: &str) -> Result<()> { .with_context(|| format!("failed to spawn `gh secret set {}`", name))?; { - let stdin = child.stdin.as_mut().context("failed to open stdin for gh")?; + let stdin = child + .stdin + .as_mut() + .context("failed to open stdin for gh")?; stdin.write_all(value.as_bytes())?; } diff --git a/src/skills.rs b/src/skills.rs index 27e33a51..2c9a848b 100644 --- a/src/skills.rs +++ b/src/skills.rs @@ -498,10 +498,7 @@ f {} ) } -fn sync_tasks_to_skills( - skills_dir: &Path, - tasks: &[config::TaskConfig], -) -> Result<(usize, usize)> { +fn sync_tasks_to_skills(skills_dir: &Path, tasks: &[config::TaskConfig]) -> Result<(usize, usize)> { fs::create_dir_all(skills_dir)?; let mut created = 0; @@ -657,7 +654,10 @@ pub fn auto_sync_skills() { } } -pub fn ensure_project_skills_at(project_root: &Path, cfg: &config::Config) -> Result { +pub fn ensure_project_skills_at( + project_root: &Path, + cfg: &config::Config, +) -> Result { ensure_default_skills_at(project_root)?; enforce_skills_from_config(project_root, cfg) } diff --git a/src/supervisor.rs b/src/supervisor.rs index 5e7da513..266cb656 100644 --- a/src/supervisor.rs +++ b/src/supervisor.rs @@ -419,14 +419,15 @@ fn launch_agent_label() -> &'static str { #[cfg(target_os = "macos")] fn launch_agent_plist_path() -> Result<PathBuf> { let dir = config::expand_path("~/Library/LaunchAgents"); - fs::create_dir_all(&dir) - .with_context(|| format!("failed to create {}", dir.display()))?; + fs::create_dir_all(&dir).with_context(|| format!("failed to create {}", dir.display()))?; Ok(dir.join(format!("{}.plist", launch_agent_label()))) } #[cfg(target_os = "macos")] fn launch_agent_installed() -> bool { - launch_agent_plist_path().map(|p| p.exists()).unwrap_or(false) + launch_agent_plist_path() + .map(|p| p.exists()) + .unwrap_or(false) } #[cfg(target_os = "macos")] @@ -457,11 +458,7 @@ fn launch_agent_program_args(socket_path: &Path, boot: bool) -> Result<Vec<String>> -fn launch_agent_plist( - socket_path: &Path, - boot: bool, - log_path: Option<&Path>, -) -> Result<String> { +fn launch_agent_plist(socket_path: &Path, boot: bool, log_path: Option<&Path>) -> Result<String> { let args = launch_agent_program_args(socket_path, boot)?; let mut buf = String::new(); buf.push_str("\n"); buf.push_str(&format!( "\n" ); buf.push_str("\n\n"); buf.push_str(" Label\n"); - buf.push_str(&format!( - " {}\n", - launch_agent_label() - )); + buf.push_str(&format!(" {}\n", launch_agent_label())); buf.push_str(" ProgramArguments\n \n"); for arg in args { buf.push_str(&format!(" {}\n", xml_escape(&arg))); } @@ -526,9 +520,7 @@ fn install_launch_agent(socket_path: &Path, boot: bool) -> Result<()> { ); } - let _ = Command::new("launchctl") - .args(["enable", &target]) - .output(); + let _ = Command::new("launchctl").args(["enable", &target]).output(); let output = Command::new("launchctl") .args(["kickstart", "-k", &target]) @@ -815,14 +807,7 @@ fn monitor_daemons(state: SharedState) -> Result<()> { let mut to_restart: Vec<(String, Option)> = Vec::new(); let mut to_stop: Vec<(String, Option)> = Vec::new(); - let mut updates: Vec<( - String, - Option<u32>, - bool, - u32, - u32, - Option<Instant>, - )> = Vec::new(); + let mut updates: Vec<(String, Option<u32>,
bool, u32, u32, Option<Instant>)> = Vec::new(); for entry in entries { if entry.disabled { @@ -881,8 +866,9 @@ fn monitor_daemons(state: SharedState) -> Result<()> { continue; } - let delay_secs = - 2u64.saturating_pow(entry.restart_attempts.saturating_add(1)).min(60); + let delay_secs = 2u64 + .saturating_pow(entry.restart_attempts.saturating_add(1)) + .min(60); let next_restart_at = Some(now + Duration::from_secs(delay_secs)); updates.push(( key, @@ -956,7 +942,15 @@ fn monitor_daemons(state: SharedState) -> Result<()> { if !updates.is_empty() { let mut state = state.lock().expect("supervisor state lock"); - for (key, retry_remaining, disabled, health_failures, restart_attempts, next_restart_at) in updates { + for ( + key, + retry_remaining, + disabled, + health_failures, + restart_attempts, + next_restart_at, + ) in updates + { if let Some(entry) = state.managed.get_mut(&key) { entry.retry_remaining = retry_remaining; entry.disabled = disabled; diff --git a/src/sync.rs b/src/sync.rs index accea53a..f032ff10 100644 --- a/src/sync.rs +++ b/src/sync.rs @@ -20,8 +20,8 @@ use crossterm::{ use serde::Serialize; use crate::ai_context; -use crate::commit; use crate::cli::SyncCommand; +use crate::commit; use crate::config; #[derive(Serialize, Clone)] @@ -74,18 +74,31 @@ struct SyncRecorder { impl SyncRecorder { fn new(cmd: &SyncCommand) -> Result<Self> { - let repo_root = git_capture(&["rev-parse", "--show-toplevel"]).unwrap_or_else(|_| ".".to_string()); + let repo_root = + git_capture(&["rev-parse", "--show-toplevel"]).unwrap_or_else(|_| ".".to_string()); let repo_root = repo_root.trim().to_string(); let repo_name = std::path::Path::new(&repo_root) .file_name() .and_then(|s| s.to_str()) .unwrap_or("repo") .to_string(); - let branch_before = git_capture(&["rev-parse", "--abbrev-ref", "HEAD"]).unwrap_or_default().trim().to_string(); - let head_before = git_capture(&["rev-parse", "HEAD"]).unwrap_or_default().trim().to_string(); - let upstream_before = git_capture(&["rev-parse", "--abbrev-ref", "@{upstream}"]).ok().map(|s| s.trim().to_string()); - let origin_url = git_capture(&["remote", "get-url", "origin"]).ok().map(|s| s.trim().to_string()); - let upstream_url = git_capture(&["remote", "get-url", "upstream"]).ok().map(|s| s.trim().to_string()); + let branch_before = git_capture(&["rev-parse", "--abbrev-ref", "HEAD"]) + .unwrap_or_default() + .trim() + .to_string(); + let head_before = git_capture(&["rev-parse", "HEAD"]) + .unwrap_or_default() + .trim() + .to_string(); + let upstream_before = git_capture(&["rev-parse", "--abbrev-ref", "@{upstream}"]) + .ok() + .map(|s| s.trim().to_string()); + let origin_url = git_capture(&["remote", "get-url", "origin"]) + .ok() + .map(|s| s.trim().to_string()); + let upstream_url = git_capture(&["remote", "get-url", "upstream"]) + .ok() + .map(|s| s.trim().to_string()); let status_before = git_capture(&["status", "--porcelain"]).unwrap_or_default(); let mut recorder = SyncRecorder { @@ -104,7 +117,13 @@ impl SyncRecorder { rebase: cmd.rebase, pushed: !cmd.no_push, }; - recorder.record("start", format!("sync start (rebase={}, stash={}, push={})", cmd.rebase, cmd.stash, !cmd.no_push)); + recorder.record( + "start", + format!( + "sync start (rebase={}, stash={}, push={})", + cmd.rebase, cmd.stash, !cmd.no_push + ), + ); Ok(recorder) } @@ -147,9 +166,17 @@ impl SyncRecorder { return; } - let branch_after = git_capture(&["rev-parse", "--abbrev-ref", "HEAD"]).unwrap_or_default().trim().to_string(); - let head_after = git_capture(&["rev-parse",
"HEAD"]).unwrap_or_default().trim().to_string(); - let upstream_after = git_capture(&["rev-parse", "--abbrev-ref", "@{upstream}"]).ok().map(|s| s.trim().to_string()); + let branch_after = git_capture(&["rev-parse", "--abbrev-ref", "HEAD"]) + .unwrap_or_default() + .trim() + .to_string(); + let head_after = git_capture(&["rev-parse", "HEAD"]) + .unwrap_or_default() + .trim() + .to_string(); + let upstream_after = git_capture(&["rev-parse", "--abbrev-ref", "@{upstream}"]) + .ok() + .map(|s| s.trim().to_string()); let status_after = git_capture(&["status", "--porcelain"]).unwrap_or_default(); let snapshot = SyncSnapshot { @@ -215,28 +242,43 @@ Use `f commit-queue list` to review, or re-run with `--allow-queue`." return run_jj_sync(repo_root_path, &cmd, auto_fix, &mut recorder); } - // Check for unmerged files (can exist even without active merge/rebase) - let unmerged = git_capture(&["diff", "--name-only", "--diff-filter=U"]).unwrap_or_default(); - if !unmerged.trim().is_empty() { - let unmerged_files: Vec<&str> = unmerged.lines().filter(|l| !l.is_empty()).collect(); - println!( - "==> Found {} unmerged files, resolving...", - unmerged_files.len() - ); - recorder.record("unmerged", format!("found {} unmerged files", unmerged_files.len())); + // Check for unmerged files (can exist even without active merge/rebase) + let unmerged = git_capture(&["diff", "--name-only", "--diff-filter=U"]).unwrap_or_default(); + if !unmerged.trim().is_empty() { + let unmerged_files: Vec<&str> = unmerged.lines().filter(|l| !l.is_empty()).collect(); + println!( + "==> Found {} unmerged files, resolving...", + unmerged_files.len() + ); + recorder.record( + "unmerged", + format!("found {} unmerged files", unmerged_files.len()), + ); - let should_fix = auto_fix || prompt_for_auto_fix()?; - if should_fix { - if try_resolve_conflicts()? { - let _ = git_run(&["add", "-A"]); - // Check if we're in a merge - if is_merge_in_progress() { - let _ = Command::new("git").args(["commit", "--no-edit"]).output(); + let should_fix = auto_fix || prompt_for_auto_fix()?; + if should_fix { + if try_resolve_conflicts()? { + let _ = git_run(&["add", "-A"]); + // Check if we're in a merge + if is_merge_in_progress() { + let _ = Command::new("git").args(["commit", "--no-edit"]).output(); + } + println!(" āœ“ Unmerged files resolved"); + } else { + // Couldn't resolve - reset the conflicted files to HEAD + println!(" Could not auto-resolve. Resetting unmerged files..."); + for file in &unmerged_files { + let _ = Command::new("git") + .args(["checkout", "HEAD", "--", file]) + .output(); + } + if is_merge_in_progress() { + let _ = Command::new("git").args(["merge", "--abort"]).output(); + } } - println!(" āœ“ Unmerged files resolved"); } else { - // Couldn't resolve - reset the conflicted files to HEAD - println!(" Could not auto-resolve. Resetting unmerged files..."); + // User declined - reset the files + println!(" Resetting unmerged files..."); for file in &unmerged_files { let _ = Command::new("git") .args(["checkout", "HEAD", "--", file]) @@ -246,112 +288,110 @@ Use `f commit-queue list` to review, or re-run with `--allow-queue`." 
let _ = Command::new("git").args(["merge", "--abort"]).output(); } } - } else { - // User declined - reset the files - println!(" Resetting unmerged files..."); - for file in &unmerged_files { - let _ = Command::new("git") - .args(["checkout", "HEAD", "--", file]) - .output(); - } - if is_merge_in_progress() { - let _ = Command::new("git").args(["merge", "--abort"]).output(); - } } - } - // Check for in-progress rebase/merge and handle it - if is_rebase_in_progress() { - println!("==> Rebase in progress, attempting to resolve..."); - let should_fix = auto_fix || prompt_for_rebase_action()?; - if should_fix { - if try_resolve_rebase_conflicts()? { - println!(" āœ“ Rebase completed"); + // Check for in-progress rebase/merge and handle it + if is_rebase_in_progress() { + println!("==> Rebase in progress, attempting to resolve..."); + let should_fix = auto_fix || prompt_for_rebase_action()?; + if should_fix { + if try_resolve_rebase_conflicts()? { + println!(" āœ“ Rebase completed"); + } else { + println!(" Could not auto-resolve. Aborting rebase..."); + let _ = Command::new("git").args(["rebase", "--abort"]).output(); + } } else { - println!(" Could not auto-resolve. Aborting rebase..."); + println!(" Aborting rebase..."); let _ = Command::new("git").args(["rebase", "--abort"]).output(); } - } else { - println!(" Aborting rebase..."); - let _ = Command::new("git").args(["rebase", "--abort"]).output(); } - } - // Check for in-progress merge - if is_merge_in_progress() { - println!("==> Merge in progress, attempting to resolve..."); - let should_fix = auto_fix || prompt_for_auto_fix()?; - if should_fix { - if try_resolve_conflicts()? { - let _ = git_run(&["add", "-A"]); - let _ = Command::new("git").args(["commit", "--no-edit"]).output(); - println!(" āœ“ Merge completed"); + // Check for in-progress merge + if is_merge_in_progress() { + println!("==> Merge in progress, attempting to resolve..."); + let should_fix = auto_fix || prompt_for_auto_fix()?; + if should_fix { + if try_resolve_conflicts()? { + let _ = git_run(&["add", "-A"]); + let _ = Command::new("git").args(["commit", "--no-edit"]).output(); + println!(" āœ“ Merge completed"); + } else { + println!(" Could not auto-resolve. Aborting merge..."); + let _ = Command::new("git").args(["merge", "--abort"]).output(); + } } else { - println!(" Could not auto-resolve. Aborting merge..."); + println!(" Aborting merge..."); let _ = Command::new("git").args(["merge", "--abort"]).output(); } - } else { - println!(" Aborting merge..."); - let _ = Command::new("git").args(["merge", "--abort"]).output(); } - } - let current = git_capture(&["rev-parse", "--abbrev-ref", "HEAD"])?; - let current = current.trim(); + let current = git_capture(&["rev-parse", "--abbrev-ref", "HEAD"])?; + let current = current.trim(); - // Check for uncommitted changes - let status = git_capture(&["status", "--porcelain"])?; - let has_changes = !status.trim().is_empty(); + // Check for uncommitted changes + let status = git_capture(&["status", "--porcelain"])?; + let has_changes = !status.trim().is_empty(); - if has_changes && !cmd.stash { - println!("You have uncommitted changes. Use --stash to auto-stash them."); - recorder.record("stash", "skipped (uncommitted changes without --stash)"); - bail!("Uncommitted changes"); - } + if has_changes && !cmd.stash { + println!("You have uncommitted changes. 
Use --stash to auto-stash them."); + recorder.record("stash", "skipped (uncommitted changes without --stash)"); + bail!("Uncommitted changes"); + } - // Stash if needed - let mut stashed = false; - if has_changes && cmd.stash { - println!("==> Stashing local changes..."); - let stash_count_before = git_capture(&["stash", "list"]) - .map(|s| s.lines().count()) - .unwrap_or(0); + // Stash if needed + let mut stashed = false; + if has_changes && cmd.stash { + println!("==> Stashing local changes..."); + let stash_count_before = git_capture(&["stash", "list"]) + .map(|s| s.lines().count()) + .unwrap_or(0); - let _ = git_run(&["stash", "push", "-m", "f sync auto-stash"]); + let _ = git_run(&["stash", "push", "-m", "f sync auto-stash"]); - let stash_count_after = git_capture(&["stash", "list"]) - .map(|s| s.lines().count()) - .unwrap_or(0); - stashed = stash_count_after > stash_count_before; - } - recorder.set_stashed(stashed); - if has_changes && cmd.stash { - recorder.record("stash", format!("stashed={}", stashed)); - } + let stash_count_after = git_capture(&["stash", "list"]) + .map(|s| s.lines().count()) + .unwrap_or(0); + stashed = stash_count_after > stash_count_before; + } + recorder.set_stashed(stashed); + if has_changes && cmd.stash { + recorder.record("stash", format!("stashed={}", stashed)); + } - // Check if we have an origin remote - let has_origin = git_capture(&["remote", "get-url", "origin"]).is_ok(); - let has_upstream = git_capture(&["remote", "get-url", "upstream"]).is_ok(); + // Check if we have an origin remote + let has_origin = git_capture(&["remote", "get-url", "origin"]).is_ok(); + let has_upstream = git_capture(&["remote", "get-url", "upstream"]).is_ok(); - // Check if origin remote is reachable (repo exists on remote) - let origin_reachable = - has_origin && git_capture(&["ls-remote", "--exit-code", "-q", "origin"]).is_ok(); + // Check if origin remote is reachable (repo exists on remote) + let origin_reachable = + has_origin && git_capture(&["ls-remote", "--exit-code", "-q", "origin"]).is_ok(); - // Step 1: Pull from origin (if tracking branch exists and repo is reachable) - if has_origin && origin_reachable { - let tracking = git_capture(&["rev-parse", "--abbrev-ref", "@{upstream}"]); - if tracking.is_ok() { - println!("==> Pulling from origin..."); - recorder.record("pull", format!("pulling from origin (rebase={})", cmd.rebase)); - if cmd.rebase { - if let Err(_) = git_run(&["pull", "--rebase", "origin", current]) { - // Check if we're in a rebase conflict - if is_rebase_in_progress() { - let should_fix = auto_fix || prompt_for_auto_fix()?; - if should_fix { - if try_resolve_rebase_conflicts()? { - println!(" āœ“ Rebase conflicts auto-resolved"); - recorder.record("pull", "rebase conflicts auto-resolved"); + // Step 1: Pull from origin (if tracking branch exists and repo is reachable) + if has_origin && origin_reachable { + let tracking = git_capture(&["rev-parse", "--abbrev-ref", "@{upstream}"]); + if tracking.is_ok() { + println!("==> Pulling from origin..."); + recorder.record( + "pull", + format!("pulling from origin (rebase={})", cmd.rebase), + ); + if cmd.rebase { + if let Err(_) = git_run(&["pull", "--rebase", "origin", current]) { + // Check if we're in a rebase conflict + if is_rebase_in_progress() { + let should_fix = auto_fix || prompt_for_auto_fix()?; + if should_fix { + if try_resolve_rebase_conflicts()? 
{ + println!(" āœ“ Rebase conflicts auto-resolved"); + recorder.record("pull", "rebase conflicts auto-resolved"); + } else { + restore_stash(stashed); + recorder.record("pull", "rebase conflicts unresolved"); + bail!( + "Rebase conflicts. Resolve manually:\n git status\n # fix conflicts\n git add . && git rebase --continue" + ); + } } else { restore_stash(stashed); recorder.record("pull", "rebase conflicts unresolved"); @@ -361,30 +401,31 @@ Use `f commit-queue list` to review, or re-run with `--allow-queue`." } } else { restore_stash(stashed); - recorder.record("pull", "rebase conflicts unresolved"); - bail!( - "Rebase conflicts. Resolve manually:\n git status\n # fix conflicts\n git add . && git rebase --continue" - ); + recorder.record("pull", "git pull --rebase failed"); + bail!("git pull --rebase failed"); } - } else { - restore_stash(stashed); - recorder.record("pull", "git pull --rebase failed"); - bail!("git pull --rebase failed"); } - } - } else { - if let Err(_) = git_run(&["pull", "origin", current]) { - // Check for merge conflicts - let conflicts = git_capture(&["diff", "--name-only", "--diff-filter=U"]) - .unwrap_or_default(); - if !conflicts.trim().is_empty() { - let should_fix = auto_fix || prompt_for_auto_fix()?; - if should_fix { - if try_resolve_conflicts()? { - let _ = git_run(&["add", "-A"]); - let _ = Command::new("git").args(["commit", "--no-edit"]).output(); - println!(" āœ“ Merge conflicts auto-resolved"); - recorder.record("pull", "merge conflicts auto-resolved"); + } else { + if let Err(_) = git_run(&["pull", "origin", current]) { + // Check for merge conflicts + let conflicts = git_capture(&["diff", "--name-only", "--diff-filter=U"]) + .unwrap_or_default(); + if !conflicts.trim().is_empty() { + let should_fix = auto_fix || prompt_for_auto_fix()?; + if should_fix { + if try_resolve_conflicts()? { + let _ = git_run(&["add", "-A"]); + let _ = + Command::new("git").args(["commit", "--no-edit"]).output(); + println!(" āœ“ Merge conflicts auto-resolved"); + recorder.record("pull", "merge conflicts auto-resolved"); + } else { + restore_stash(stashed); + recorder.record("pull", "merge conflicts unresolved"); + bail!( + "Merge conflicts. Resolve manually:\n git status\n # fix conflicts\n git add . && git commit" + ); + } } else { restore_stash(stashed); recorder.record("pull", "merge conflicts unresolved"); @@ -394,90 +435,83 @@ Use `f commit-queue list` to review, or re-run with `--allow-queue`." } } else { restore_stash(stashed); - recorder.record("pull", "merge conflicts unresolved"); - bail!( - "Merge conflicts. Resolve manually:\n git status\n # fix conflicts\n git add . 
&& git commit" - ); + recorder.record("pull", "git pull failed"); + bail!("git pull failed"); } - } else { - restore_stash(stashed); - recorder.record("pull", "git pull failed"); - bail!("git pull failed"); } } + recorder.record("pull", "pull complete"); + } else { + println!("==> No tracking branch, skipping pull"); + recorder.record("pull", "skipped (no tracking branch)"); } - recorder.record("pull", "pull complete"); - } else { - println!("==> No tracking branch, skipping pull"); - recorder.record("pull", "skipped (no tracking branch)"); + } else if has_origin && !origin_reachable { + println!("==> Origin repo not found, skipping pull"); + recorder.record("pull", "skipped (origin not reachable)"); } - } else if has_origin && !origin_reachable { - println!("==> Origin repo not found, skipping pull"); - recorder.record("pull", "skipped (origin not reachable)"); - } - // Step 2: Sync upstream if it exists - if has_upstream { - println!("==> Syncing upstream..."); - recorder.record("upstream", "syncing upstream"); - if let Err(e) = sync_upstream_internal(current, auto_fix, &mut recorder) { - restore_stash(stashed); - return Err(e); + // Step 2: Sync upstream if it exists + if has_upstream { + println!("==> Syncing upstream..."); + recorder.record("upstream", "syncing upstream"); + if let Err(e) = sync_upstream_internal(current, auto_fix, &mut recorder) { + restore_stash(stashed); + return Err(e); + } + } else { + recorder.record("upstream", "skipped (no upstream remote)"); } - } else { - recorder.record("upstream", "skipped (no upstream remote)"); - } - - // Step 3: Push to origin - if has_origin && !cmd.no_push { - // Check if origin == upstream (read-only clone, no fork) - let origin_url = git_capture(&["remote", "get-url", "origin"]).unwrap_or_default(); - let upstream_url = git_capture(&["remote", "get-url", "upstream"]).unwrap_or_default(); - let is_read_only = - has_upstream && normalize_git_url(&origin_url) == normalize_git_url(&upstream_url); - if is_read_only { - println!("==> Skipping push (origin == upstream, read-only clone)"); - println!(" To push, create a fork first: gh repo fork --remote"); - recorder.record("push", "skipped (origin == upstream)"); - } else if !origin_reachable { - // Origin repo doesn't exist - if cmd.create_repo { - println!("==> Creating origin repo..."); - if try_create_origin_repo()? { - println!("==> Pushing to origin..."); - git_run(&["push", "-u", "origin", current])?; - recorder.record("push", "created repo and pushed to origin"); + // Step 3: Push to origin + if has_origin && !cmd.no_push { + // Check if origin == upstream (read-only clone, no fork) + let origin_url = git_capture(&["remote", "get-url", "origin"]).unwrap_or_default(); + let upstream_url = git_capture(&["remote", "get-url", "upstream"]).unwrap_or_default(); + let is_read_only = + has_upstream && normalize_git_url(&origin_url) == normalize_git_url(&upstream_url); + + if is_read_only { + println!("==> Skipping push (origin == upstream, read-only clone)"); + println!(" To push, create a fork first: gh repo fork --remote"); + recorder.record("push", "skipped (origin == upstream)"); + } else if !origin_reachable { + // Origin repo doesn't exist + if cmd.create_repo { + println!("==> Creating origin repo..."); + if try_create_origin_repo()? 
{ + println!("==> Pushing to origin..."); + git_run(&["push", "-u", "origin", current])?; + recorder.record("push", "created repo and pushed to origin"); + } else { + println!(" Could not create repo, skipping push"); + recorder.record("push", "skipped (create repo failed)"); + } } else { - println!(" Could not create repo, skipping push"); - recorder.record("push", "skipped (create repo failed)"); + println!("==> Origin repo not found, skipping push"); + println!(" Use --create-repo to create it"); + recorder.record("push", "skipped (origin not found)"); } } else { - println!("==> Origin repo not found, skipping push"); - println!(" Use --create-repo to create it"); - recorder.record("push", "skipped (origin not found)"); + println!("==> Pushing to origin..."); + let push_result = push_with_autofix(current, auto_fix, cmd.max_fix_attempts); + if let Err(e) = push_result { + restore_stash(stashed); + recorder.record("push", "push failed"); + return Err(e); + } + recorder.record("push", "push complete"); } + } else if cmd.no_push { + recorder.record("push", "skipped (--no-push)"); } else { - println!("==> Pushing to origin..."); - let push_result = push_with_autofix(current, auto_fix, cmd.max_fix_attempts); - if let Err(e) = push_result { - restore_stash(stashed); - recorder.record("push", "push failed"); - return Err(e); - } - recorder.record("push", "push complete"); + recorder.record("push", "skipped (no origin)"); } - } else if cmd.no_push { - recorder.record("push", "skipped (--no-push)"); - } else { - recorder.record("push", "skipped (no origin)"); - } - // Restore stash - restore_stash(stashed); - if stashed { - recorder.record("stash", "stash restored"); - } + // Restore stash + restore_stash(stashed); + if stashed { + recorder.record("stash", "stash restored"); + } println!("\nāœ“ Sync complete!"); recorder.record("complete", "sync complete"); @@ -504,8 +538,8 @@ fn run_jj_sync( bail!("Unmerged files detected. Resolve them before syncing."); } - let head_ref = git_capture(&["rev-parse", "--abbrev-ref", "HEAD"]) - .unwrap_or_else(|_| "HEAD".to_string()); + let head_ref = + git_capture(&["rev-parse", "--abbrev-ref", "HEAD"]).unwrap_or_else(|_| "HEAD".to_string()); let head_ref = head_ref.trim(); let current_branch = if head_ref == "HEAD" || head_ref.is_empty() { recorder.record("jj", "detached head (ignored, using default branch)"); @@ -522,8 +556,9 @@ fn run_jj_sync( // Keep jj fetch output small. In most workflows, only the current branch + upstream trunk are // needed for a sync/rebase. let upstream_branch_opt = resolve_upstream_branch(); - let upstream_branch_for_fetch = - upstream_branch_opt.clone().unwrap_or_else(|| jj_default_branch(repo_root)); + let upstream_branch_for_fetch = upstream_branch_opt + .clone() + .unwrap_or_else(|| jj_default_branch(repo_root)); if has_origin || has_upstream { println!("==> Fetching remotes via jj..."); @@ -626,7 +661,9 @@ fn run_jj_sync( if let Err(err) = jj_run_in(repo_root, &["rebase", "-d", &dest]) { recorder.record("jj", "jj rebase failed"); if !is_read_only { - println!("==> Rebase blocked by immutable commits; retrying with --ignore-immutable..."); + println!( + "==> Rebase blocked by immutable commits; retrying with --ignore-immutable..." + ); recorder.record("jj", "jj rebase retry --ignore-immutable"); jj_run_in(repo_root, &["rebase", "--ignore-immutable", "-d", &dest])?; } else { @@ -635,6 +672,11 @@ fn run_jj_sync( } did_rebase = true; + // Auto-resolve conflicts from rebase (lockfiles, etc.). 
+        if let Some(ref d) = dest_ref {
+            try_resolve_jj_conflicts(repo_root, d, auto_fix, recorder)?;
+        }
+
        if commit::commit_queue_has_entries(repo_root) {
            if let Ok(updated) = commit::refresh_commit_queue(repo_root) {
                if updated > 0 {
@@ -689,13 +731,32 @@ fn run_jj_sync(
        recorder.record("push", "skipped (no origin)");
    }

+    // Rise: keep overlay bookmark on top of latest target.
+    if let Some(ref dest) = dest_ref {
+        if has_rise_bookmark(repo_root) {
+            println!("==> Rebasing rise overlay onto {}...", dest);
+            recorder.record("rise", format!("jj rebase -b rise -d {}", dest));
+            match jj_run_in(repo_root, &["rebase", "-b", "rise", "-d", dest]) {
+                Ok(()) => recorder.record("rise", "rise bookmark rebased"),
+                Err(e) => {
+                    println!("  Warning: rise rebase failed: {}", e);
+                    recorder.record("rise", format!("rise rebase failed: {}", e));
+                }
+            }
+        }
+    }
+
    println!("\n✓ Sync complete (jj)!");
    recorder.record("complete", "sync complete (jj)");
    Ok(())
}

/// Sync from upstream remote into current branch.
-fn sync_upstream_internal(current_branch: &str, auto_fix: bool, recorder: &mut SyncRecorder) -> Result<()> {
+fn sync_upstream_internal(
+    current_branch: &str,
+    auto_fix: bool,
+    recorder: &mut SyncRecorder,
+) -> Result<()> {
    // Fetch upstream
    git_run(&["fetch", "upstream", "--prune"])?;
    recorder.record("upstream", "fetched upstream");
@@ -731,7 +792,10 @@ fn sync_upstream_internal(current_branch: &str, auto_fix: bool, recorder: &mut S
    if behind > 0 {
        println!("  Merging {} commits from upstream...", behind);
-        recorder.record("upstream", format!("merging {} commits from upstream", behind));
+        recorder.record(
+            "upstream",
+            format!("merging {} commits from upstream", behind),
+        );

        // Try fast-forward first
        if git_run(&["merge", "--ff-only", &upstream_ref]).is_err() {
@@ -883,16 +947,195 @@ fn jj_capture_in(repo_root: &Path, args: &[&str]) -> Result<String> {
    Ok(String::from_utf8_lossy(&output.stdout).to_string())
}

+fn has_rise_bookmark(repo_root: &Path) -> bool {
+    let output = jj_capture_in(repo_root, &["bookmark", "list"]).unwrap_or_default();
+    output.lines().any(|line| {
+        line.trim_start()
+            .split(':')
+            .next()
+            .map_or(false, |name| name.trim() == "rise")
+    })
+}
+
fn jj_has_divergence(repo_root: &Path, current: &str, dest: &str) -> Result<bool> {
    let revset = format!("{}..{}", dest, current);
-    let output = jj_capture_in(repo_root, &["log", "-r", &revset, "--no-graph", "-T", "commit_id"])?;
+    let output = jj_capture_in(
+        repo_root,
+        &["log", "-r", &revset, "--no-graph", "-T", "commit_id"],
+    )?;
    Ok(!output.trim().is_empty())
}

+/// Check for JJ conflicts after rebase and auto-resolve what we can.
+///
+/// Lockfiles are resolved by restoring from the rebase destination (accept upstream).
+/// Code conflicts are attempted with Claude if auto_fix is enabled.
+/// Returns Ok(true) if all conflicts resolved, Ok(false) if some remain.
+fn try_resolve_jj_conflicts(
+    repo_root: &Path,
+    dest: &str,
+    auto_fix: bool,
+    recorder: &mut SyncRecorder,
+) -> Result<bool> {
+    // Check if working copy has conflicts via jj log.
+    let status = jj_capture_in(
+        repo_root,
+        &["log", "-r", "@", "--no-graph", "-T", "conflict"],
+    )?;
+    if !status.trim().contains("true") {
+        return Ok(true);
+    }
+
+    // Get list of conflicted files from jj status.
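+    // NOTE: this parsing assumes `jj status` lists each conflicted path as a
+    // line beginning with "C "; any other output format yields an empty list
+    // and falls through to the warning below.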
+    let jj_status = jj_capture_in(repo_root, &["status"])?;
+    let conflicted_files: Vec<&str> = jj_status
+        .lines()
+        .filter_map(|line| {
+            let trimmed = line.trim();
+            if trimmed.starts_with("C ") {
+                Some(trimmed.strip_prefix("C ").unwrap().trim())
+            } else {
+                None
+            }
+        })
+        .collect();
+
+    if conflicted_files.is_empty() {
+        // Conflict detected but no files listed — odd state, report it.
+        println!("  Warning: jj reports conflicts but no conflicted files found in status");
+        return Ok(false);
+    }
+
+    println!(
+        "==> {} conflicted file(s) after rebase: {}",
+        conflicted_files.len(),
+        conflicted_files.join(", ")
+    );
+    recorder.record(
+        "conflicts",
+        format!("{} conflicted file(s)", conflicted_files.len()),
+    );
+
+    let auto_generated = [
+        "STATS.md",
+        "stats.md",
+        "CHANGELOG.md",
+        "changelog.md",
+        "package-lock.json",
+        "yarn.lock",
+        "bun.lock",
+        "pnpm-lock.yaml",
+        "Cargo.lock",
+        "Gemfile.lock",
+        "poetry.lock",
+        "composer.lock",
+    ];
+
+    let mut resolved_count = 0;
+    let mut needs_claude: Vec<String> = Vec::new();
+
+    for file in &conflicted_files {
+        let filename = file.rsplit('/').next().unwrap_or(file);
+        if auto_generated
+            .iter()
+            .any(|&ag| filename.eq_ignore_ascii_case(ag))
+        {
+            println!("  Auto-resolving {} (accepting upstream)", file);
+            if jj_run_in(repo_root, &["restore", "--from", dest, file]).is_ok() {
+                resolved_count += 1;
+                recorder.record("conflicts", format!("auto-resolved {}", file));
+            } else {
+                println!("  Warning: failed to restore {} from {}", file, dest);
+                needs_claude.push(file.to_string());
+            }
+        } else {
+            needs_claude.push(file.to_string());
+        }
+    }
+
+    if needs_claude.is_empty() {
+        println!("  ✓ All conflicts auto-resolved");
+        return Ok(true);
+    }
+
+    if !auto_fix {
+        println!(
+            "  {} conflict(s) remaining: {}",
+            needs_claude.len(),
+            needs_claude.join(", ")
+        );
+        return Ok(false);
+    }
+
+    // Try Claude for remaining code conflicts.
+    println!(
+        "  Trying Claude for {} remaining conflict(s)...",
+        needs_claude.len()
+    );
+    let context = ai_context::load_command_context("sync").unwrap_or_default();
+    let context_section = if !context.is_empty() {
+        format!("## Context\n\n{}\n\n", context)
+    } else {
+        String::new()
+    };
+
+    for file in &needs_claude {
+        let file_path = repo_root.join(file);
+        let content = std::fs::read_to_string(&file_path).unwrap_or_default();
+        if content.contains("<<<<<<<") || content.contains("%%%%%%%") || content.contains("+++++++")
+        {
+            // Truncate on a char boundary so slicing cannot panic on multi-byte UTF-8.
+            let mut cut = content.len().min(8000);
+            while !content.is_char_boundary(cut) {
+                cut -= 1;
+            }
+            let truncated = &content[..cut];
+            let prompt = format!(
+                "{}This file has JJ merge conflicts (markers: <<<<<<< / %%%%%%% / +++++++ / >>>>>>>). Resolve them by keeping the best of both versions. Output ONLY the resolved file content, no explanations:\n\n{}",
+                context_section, truncated
+            );
+
+            let output = Command::new("claude")
+                .args(["--print", "--dangerously-skip-permissions", &prompt])
+                .output();
+
+            if let Ok(out) = output {
+                if out.status.success() {
+                    let resolved = String::from_utf8_lossy(&out.stdout);
+                    if !resolved.contains("<<<<<<<")
+                        && !resolved.contains(">>>>>>>")
+                        && !resolved.contains("%%%%%%%")
+                    {
+                        if std::fs::write(&file_path, resolved.as_ref()).is_ok() {
+                            resolved_count += 1;
+                            recorder.record("conflicts", format!("claude-resolved {}", file));
+                            println!("  ✓ Resolved {}", file);
+                            continue;
+                        }
+                    }
+                }
+            }
+            println!("  ✗ Could not resolve {}", file);
+        }
+    }
+
+    let all_resolved = resolved_count == conflicted_files.len();
+    if !all_resolved {
+        let remaining = conflicted_files.len() - resolved_count;
+        println!(
+            "  {} conflict(s) remain. Resolve manually, then run `jj squash`.",
+            remaining
+        );
+    }
+    Ok(all_resolved)
+}
+
fn jj_stash_commits(repo_root: &Path, current: &str, dest: &str) -> Result<String> {
    let ts = Utc::now().format("%Y%m%d-%H%M%S").to_string();
    let stash_name = format!("f-sync-stash/{}/{}", current, ts);
-    jj_run_in(repo_root, &["bookmark", "create", &stash_name, "-r", current])?;
+    jj_run_in(
+        repo_root,
+        &["bookmark", "create", &stash_name, "-r", current],
+    )?;
    jj_run_in(repo_root, &["bookmark", "set", current, "-r", dest])?;
    jj_run_in(repo_root, &["edit", current])?;
    Ok(stash_name)
@@ -1554,7 +1797,11 @@ fn write_sync_snapshot(snapshot: &SyncSnapshot) -> Result<()> {
    let target_dir = base.join("sync");
    if !target_dir.exists() {
        if let Err(err) = fs::create_dir_all(&target_dir) {
-            eprintln!("warn: unable to create sync log dir {}: {}", target_dir.display(), err);
+            eprintln!(
+                "warn: unable to create sync log dir {}: {}",
+                target_dir.display(),
+                err
+            );
            continue;
        }
    }
diff --git a/src/task_failure_agents.rs b/src/task_failure_agents.rs
index 17dec26b..dba06811 100644
--- a/src/task_failure_agents.rs
+++ b/src/task_failure_agents.rs
@@ -108,7 +108,11 @@ fn run_hive_agent(agent: &str, prompt: &str) -> Result<()> {
        .stderr(Stdio::inherit())
        .status()?;
    if !status.success() {
-        eprintln!("⚠ hive agent '{}' exited with status {:?}", agent, status.code());
+        eprintln!(
+            "⚠ hive agent '{}' exited with status {:?}",
+            agent,
+            status.code()
+        );
    }
    Ok(())
}
@@ -135,7 +139,10 @@ pub fn maybe_run_task_failure_agents(
        return;
    }
    if settings.tool != "hive" {
-        eprintln!("⚠ task-failure agents: unsupported tool '{}'", settings.tool);
+        eprintln!(
+            "⚠ task-failure agents: unsupported tool '{}'",
+            settings.tool
+        );
        return;
    }
    if !std::io::stdin().is_terminal() {
diff --git a/src/tasks.rs b/src/tasks.rs
index e165f954..5612a1d8 100644
--- a/src/tasks.rs
+++ b/src/tasks.rs
@@ -34,8 +34,7 @@ use crate::{
    history::{self, InvocationRecord},
    hub, init, jazz_state, projects,
    running::{self, RunningProcess},
-    task_match,
-    task_failure_agents,
+    task_failure_agents, task_match,
};

/// Global state for cancel cleanup handler.
@@ -1130,13 +1129,7 @@ fn execute_task(
        &output,
        status.code(),
    );
-    maybe_run_task_failure_hook(
-        &task.name,
-        command,
-        workdir,
-        &output,
-        status.code(),
-    );
+    maybe_run_task_failure_hook(&task.name, command, workdir, &output, status.code());
    bail!(
        "task '{}' exited with status {}",
        task.name,
@@ -1419,10 +1412,7 @@ fn maybe_run_task_failure_hook(
    match cmd.status() {
        Ok(status) if status.success() => {}
        Ok(status) => {
-            eprintln!(
-                "⚠ task failure hook exited with status {:?}",
-                status.code()
-            );
+            eprintln!("⚠ task failure hook exited with status {:?}", status.code());
        }
        Err(err) => {
            eprintln!("⚠ failed to run task failure hook: {}", err);
@@ -1760,7 +1750,10 @@ fn run_flox_interactive_command(
    Ok((status, String::new()))
}

-fn run_command_with_tee(mut cmd: Command, ctx: Option) -> Result<(ExitStatus, String)> {
+fn run_command_with_tee(
+    mut cmd: Command,
+    ctx: Option,
+) -> Result<(ExitStatus, String)> {
    inject_global_env(&mut cmd);
    // Only use `script` for tasks explicitly marked as interactive
    // This avoids issues with non-interactive tasks hanging
diff --git a/src/todo.rs b/src/todo.rs
index 048ccdd8..a29576f4 100644
--- a/src/todo.rs
+++ b/src/todo.rs
@@ -4,6 +4,7 @@ use std::process::Command;

use anyhow::{Context, Result, bail};
use chrono::Utc;
+use sha1::{Digest, Sha1};
use serde::{Deserialize, Serialize};
use uuid::Uuid;

@@ -19,6 +20,8 @@ struct TodoItem {
    updated_at: Option<String>,
    note: Option<String>,
    session: Option<String>,
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    external_ref: Option<String>,
}

pub fn run(cmd: TodoCommand) -> Result<()> {
@@ -129,6 +132,7 @@ fn add(
        updated_at: None,
        note: note.map(|n| n.trim().to_string()).filter(|n| !n.is_empty()),
        session: session_ref,
+        external_ref: None,
    };
    items.push(item.clone());
    save_items(&path, &items)?;
@@ -249,6 +253,24 @@ fn load_items() -> Result<(PathBuf, Vec<TodoItem>)> {
    Ok((path, items))
}

+fn load_items_at_root(root: &Path) -> Result<(PathBuf, Vec<TodoItem>)> {
+    let dir = root.join(".ai").join("todos");
+    let path = dir.join("todos.json");
+
+    if !path.exists() {
+        return Ok((path, Vec::new()));
+    }
+
+    let content =
+        fs::read_to_string(&path).with_context(|| format!("failed to read {}", path.display()))?;
+    if content.trim().is_empty() {
+        return Ok((path, Vec::new()));
+    }
+    let items = serde_json::from_str(&content)
+        .with_context(|| format!("failed to parse {}", path.display()))?;
+    Ok((path, items))
+}
+
fn save_items(path: &Path, items: &[TodoItem]) -> Result<()> {
    if let Some(parent) = path.parent() {
        fs::create_dir_all(parent)?;
@@ -258,6 +280,107 @@ fn save_items(path: &Path, items: &[TodoItem]) -> Result<()> {
    Ok(())
}

+fn todo_title_compact(title: &str) -> String {
+    let trimmed = title.trim().trim_start_matches('-').trim();
+    let max_len = 120;
+    let mut out = String::new();
+    let mut count = 0;
+    for ch in trimmed.chars() {
+        if count >= max_len {
+            out.push_str("...");
+            break;
+        }
+        out.push(ch);
+        count += 1;
+    }
+    if out.is_empty() {
+        "todo".to_string()
+    } else {
+        out
+    }
+}
+
+fn external_ref_for_review_issue(commit_sha: &str, issue: &str) -> String {
+    let mut hasher = Sha1::new();
+    hasher.update(commit_sha.trim().as_bytes());
+    hasher.update(b":");
+    hasher.update(issue.trim().as_bytes());
+    let hex = hex::encode(hasher.finalize());
+    let short = hex.get(..12).unwrap_or(&hex);
+    format!("flow-review-issue-{}", short)
+}
+
+/// Record review issues as project-scoped todos under `.ai/todos/todos.json`.
+/// Returns ids for created items (deduplicated by `external_ref`).
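+/// An item's `external_ref` is "flow-review-issue-" plus the first 12 hex chars
+/// of sha1(commit_sha ":" issue), so recording the same issue for the same
+/// commit again is a no-op.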
+pub fn record_review_issues_as_todos(
+    repo_root: &Path,
+    commit_sha: &str,
+    issues: &[String],
+    summary: Option<&str>,
+    model_label: &str,
+) -> Result<Vec<String>> {
+    if issues.is_empty() {
+        return Ok(Vec::new());
+    }
+
+    let (path, mut items) = load_items_at_root(repo_root)?;
+    let mut existing_refs = std::collections::HashSet::new();
+    for item in &items {
+        if let Some(r) = item.external_ref.as_deref().map(|s| s.trim()).filter(|s| !s.is_empty())
+        {
+            existing_refs.insert(r.to_string());
+        }
+    }
+
+    let mut created_ids = Vec::new();
+    let now = Utc::now().to_rfc3339();
+    let summary = summary.map(|s| s.trim()).filter(|s| !s.is_empty());
+
+    for issue in issues {
+        let ext = external_ref_for_review_issue(commit_sha, issue);
+        if existing_refs.contains(&ext) {
+            continue;
+        }
+
+        let title = todo_title_compact(issue);
+        let mut note = String::new();
+        note.push_str("Source: flow review\n");
+        note.push_str("Commit: ");
+        note.push_str(commit_sha.trim());
+        note.push('\n');
+        note.push_str("Model: ");
+        note.push_str(model_label.trim());
+        note.push('\n');
+        if let Some(summary) = summary {
+            note.push_str("Review summary: ");
+            note.push_str(summary);
+            note.push('\n');
+        }
+        note.push('\n');
+        note.push_str(issue.trim());
+
+        let id = Uuid::new_v4().simple().to_string();
+        items.push(TodoItem {
+            id: id.clone(),
+            title,
+            status: status_to_string(TodoStatusArg::Pending).to_string(),
+            created_at: now.clone(),
+            updated_at: None,
+            note: Some(note),
+            session: None,
+            external_ref: Some(ext.clone()),
+        });
+        existing_refs.insert(ext);
+        created_ids.push(id);
+    }
+
+    if !created_ids.is_empty() {
+        save_items(&path, &items)?;
+    }
+
+    Ok(created_ids)
+}
+
fn find_item_index(items: &[TodoItem], id: &str) -> Result<usize> {
    let mut matches = Vec::new();
    for (idx, item) in items.iter().enumerate() {
diff --git a/src/traces_stub.rs b/src/traces_stub.rs
index 4405bf25..eecbffc4 100644
--- a/src/traces_stub.rs
+++ b/src/traces_stub.rs
@@ -1,7 +1,7 @@
use anyhow::{Result, bail};

-use crate::cli::{TraceSessionOpts, TraceSource, TracesOpts};
use crate::base_tool;
+use crate::cli::{TraceSessionOpts, TraceSource, TracesOpts};

pub fn run(opts: TracesOpts) -> Result<()> {
    let Some(bin) = base_tool::resolve_bin() else {
@@ -12,20 +12,32 @@ pub fn run(opts: TracesOpts) -> Result<()> {
        );
    };

-    let mut args: Vec<String> = vec!["trace".to_string(), "--limit".to_string(), opts.limit.to_string()];
+    let mut args: Vec<String> = vec![
+        "trace".to_string(),
+        "--limit".to_string(),
+        opts.limit.to_string(),
+    ];
    if opts.follow {
        args.push("--follow".to_string());
    }
-    if let Some(project) = opts.project.as_deref().map(|s| s.trim()).filter(|s| !s.is_empty()) {
+    if let Some(project) = opts
+        .project
+        .as_deref()
+        .map(|s| s.trim())
+        .filter(|s| !s.is_empty())
+    {
        args.push("--project".to_string());
        args.push(project.to_string());
    }
    args.push("--source".to_string());
-    args.push(match opts.source {
-        TraceSource::All => "all",
-        TraceSource::Tasks => "tasks",
-        TraceSource::Ai => "ai",
-    }.to_string());
+    args.push(
+        match opts.source {
+            TraceSource::All => "all",
+            TraceSource::Tasks => "tasks",
+            TraceSource::Ai => "ai",
+        }
+        .to_string(),
+    );

    base_tool::run_inherit_stdio(&bin, &args)
}
diff --git a/src/undo.rs b/src/undo.rs
index 2e19efd0..4b740321 100644
--- a/src/undo.rs
+++ b/src/undo.rs
@@ -2,7 +2,7 @@
//!
//! Tracks undoable actions (commit, push, etc.) and provides undo functionality.
-use anyhow::{bail, Context, Result};
+use anyhow::{Context, Result, bail};
use serde::{Deserialize, Serialize};
use std::fs;
use std::path::{Path, PathBuf};
@@ -186,8 +186,8 @@ pub fn get_last_action(repo_root: &Path) -> Result<Option<UndoRecord>> {
    // Get the last line
    let last_line = lines.last().unwrap();
-    let record: UndoRecord = serde_json::from_str(last_line)
-        .context("failed to parse last undo record")?;
+    let record: UndoRecord =
+        serde_json::from_str(last_line).context("failed to parse last undo record")?;
    Ok(Some(record))
}
@@ -242,8 +242,8 @@ pub struct UndoResult {

/// Undo the last action.
pub fn undo_last(repo_root: &Path, opts: &UndoOpts) -> Result<UndoResult> {
-    let record = get_last_action(repo_root)?
-        .ok_or_else(|| anyhow::anyhow!("No actions to undo"))?;
+    let record =
+        get_last_action(repo_root)?.ok_or_else(|| anyhow::anyhow!("No actions to undo"))?;

    // Check if we're on the same branch
    let current_branch = git_capture(repo_root, &["rev-parse", "--abbrev-ref", "HEAD"])?;
@@ -257,7 +257,10 @@ pub fn undo_last(repo_root: &Path, opts: &UndoOpts) -> Result<UndoResult> {

    // Check if HEAD matches the after_sha
    let current_sha = git_capture(repo_root, &["rev-parse", "HEAD"])?;
-    if !current_sha.trim().starts_with(&record.after_sha[..7.min(record.after_sha.len())]) {
+    if !current_sha
+        .trim()
+        .starts_with(&record.after_sha[..7.min(record.after_sha.len())])
+    {
        // Try short comparison
        let current_short = &current_sha.trim()[..7.min(current_sha.len())];
        let record_short = &record.after_sha[..7.min(record.after_sha.len())];
@@ -272,7 +275,11 @@ pub fn undo_last(repo_root: &Path, opts: &UndoOpts) -> Result<UndoResult> {
    }

    if opts.dry_run {
-        println!("Would undo: {} ({})", record.action, short_sha(&record.after_sha));
+        println!(
+            "Would undo: {} ({})",
+            record.action,
+            short_sha(&record.after_sha)
+        );
        println!("  Reset to: {}", short_sha(&record.before_sha));
        if record.pushed {
            println!("  Would force push to remote");
@@ -292,17 +299,13 @@ pub fn undo_last(repo_root: &Path, opts: &UndoOpts) -> Result<UndoResult> {
        }
        ActionType::Push => {
            if !opts.force {
-                bail!(
-                    "Undoing a push requires --force flag (this will force push to remote)"
-                );
+                bail!("Undoing a push requires --force flag (this will force push to remote)");
            }
            undo_push(repo_root, &record)?;
        }
        ActionType::CommitPush => {
            if record.pushed && !opts.force {
-                bail!(
-                    "This action was pushed to remote. Use --force to undo (will force push)"
-                );
+                bail!("This action was pushed to remote. Use --force to undo (will force push)");
            }
            undo_commit_push(repo_root, &record, opts.force)?;
        }
@@ -311,7 +314,8 @@ pub fn undo_last(repo_root: &Path, opts: &UndoOpts) -> Result<UndoResult> {
    // Remove from undo log after successful undo
    remove_last_action(repo_root)?;

-    let force_pushed = record.pushed && (record.action == ActionType::Push || record.action == ActionType::CommitPush);
+    let force_pushed = record.pushed
+        && (record.action == ActionType::Push || record.action == ActionType::CommitPush);

    Ok(UndoResult {
        action_type: record.action,
@@ -356,7 +360,12 @@ fn undo_push(repo_root: &Path, record: &UndoRecord) -> Result<()> {
        ],
    )?;

-    println!("✓ Force pushed {} to {}/{}", short_sha(&record.before_sha), remote, record.branch);
+    println!(
+        "✓ Force pushed {} to {}/{}",
+        short_sha(&record.before_sha),
+        remote,
+        record.branch
+    );
    Ok(())
}
@@ -392,7 +401,10 @@ pub fn show_last(repo_root: &Path) -> Result<()> {
    println!("  Before: {}", short_sha(&record.before_sha));
    println!("  After:  {}", short_sha(&record.after_sha));
    if record.pushed {
-        println!("  Pushed: yes (to {})", record.remote.as_deref().unwrap_or("origin"));
+        println!(
+            "  Pushed: yes (to {})",
+            record.remote.as_deref().unwrap_or("origin")
+        );
    }
    if let Some(msg) = &record.message {
        let short_msg = if msg.len() > 60 {
@@ -431,14 +443,26 @@ pub fn list_actions(repo_root: &Path, limit: usize) -> Result<()> {
    println!("Recent actions (newest first):");
    println!();

-    let start = if lines.len() > limit { lines.len() - limit } else { 0 };
+    let start = if lines.len() > limit {
+        lines.len() - limit
+    } else {
+        0
+    };
    for (i, line) in lines[start..].iter().rev().enumerate() {
        if let Ok(record) = serde_json::from_str::<UndoRecord>(line) {
            let pushed_indicator = if record.pushed { " [pushed]" } else { "" };
-            let msg_short = record.message.as_ref().map(|m| {
-                if m.len() > 40 { format!("{:.40}...", m) } else { m.clone() }
-            }).unwrap_or_default();
+            let msg_short = record
+                .message
+                .as_ref()
+                .map(|m| {
+                    if m.len() > 40 {
+                        format!("{:.40}...", m)
+                    } else {
+                        m.clone()
+                    }
+                })
+                .unwrap_or_default();

            if i == 0 {
                println!(
diff --git a/src/upgrade.rs b/src/upgrade.rs
index 2e0f7ffd..5ef7b879 100644
--- a/src/upgrade.rs
+++ b/src/upgrade.rs
@@ -576,8 +576,10 @@ pub fn run(opts: UpgradeOpts) -> Result<()> {
    let target = detect_release_target()?;
    let asset_name = format!("flow-{}.tar.gz", target);
    let (legacy_os, legacy_arch) = detect_legacy_platform()?;
-    let legacy_asset_name =
-        format!("flow_{}_{}_{}.tar.gz", release.tag_name, legacy_os, legacy_arch);
+    let legacy_asset_name = format!(
+        "flow_{}_{}_{}.tar.gz",
+        release.tag_name, legacy_os, legacy_arch
+    );
    let tarball_asset = release
        .assets