44 changes: 44 additions & 0 deletions apps/sim/app/api/mcp/workflow-servers/validate/route.ts
@@ -0,0 +1,44 @@
import { createLogger } from '@sim/logger'
import { NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { loadWorkflowFromNormalizedTables } from '@/lib/workflows/persistence/utils'
import { hasValidStartBlockInState } from '@/lib/workflows/triggers/trigger-utils'

const logger = createLogger('ValidateMcpWorkflowsAPI')

/**
* POST /api/mcp/workflow-servers/validate
* Validates if workflows have valid start blocks for MCP usage
*/
export async function POST(request: Request) {
try {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}

const body = await request.json()
const { workflowIds } = body

if (!Array.isArray(workflowIds) || workflowIds.length === 0) {
return NextResponse.json({ error: 'workflowIds must be a non-empty array' }, { status: 400 })
Comment on lines +20 to +24
The request body parsing lacks type validation for `workflowIds`. If a malicious or buggy client sends `workflowIds` as a non-array value (e.g., `{"workflowIds": "string"}` or `{"workflowIds": null}`), the `Array.isArray()` check on line 23 will catch it, but there's no validation for:

1. **Maximum array size**: A client could send thousands of workflow IDs, causing performance issues or timeouts
2. **Array element types**: The code doesn't validate that each element in the array is a string

**Recommendation**: Add validation for array size and element types:

```typescript
if (!Array.isArray(workflowIds) || workflowIds.length === 0) {
  return NextResponse.json({ error: 'workflowIds must be a non-empty array' }, { status: 400 })
}

if (workflowIds.length > 100) {
  return NextResponse.json({ error: 'Maximum 100 workflow IDs allowed' }, { status: 400 })
}

if (!workflowIds.every(id => typeof id === 'string')) {
  return NextResponse.json({ error: 'All workflow IDs must be strings' }, { status: 400 })
}
```
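
An equivalent schema-based variant of these checks is sketched below. It assumes `zod` is available to the route (not something this diff shows) and simply mirrors the reviewer's suggested limits of 1-100 string IDs.

```typescript
import { NextResponse } from 'next/server'
import { z } from 'zod'

// Hypothetical schema mirroring the reviewer's suggested limits: 1-100 string IDs.
const validateBodySchema = z.object({
  workflowIds: z.array(z.string()).min(1).max(100),
})

// Inside the POST handler, after `const body = await request.json()`:
const parsed = validateBodySchema.safeParse(body)
if (!parsed.success) {
  return NextResponse.json(
    { error: 'workflowIds must be an array of 1-100 strings' },
    { status: 400 }
  )
}
const { workflowIds } = parsed.data
```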

}

const results: Record<string, boolean> = {}

for (const workflowId of workflowIds) {
try {
const state = await loadWorkflowFromNormalizedTables(workflowId)
results[workflowId] = hasValidStartBlockInState(state)
} catch (error) {
logger.warn(`Failed to validate workflow ${workflowId}:`, error)
results[workflowId] = false
}
Comment on lines +29 to +36
Security vulnerability: The validation loop doesn't verify that the workflows belong to the authenticated user's workspace. A malicious user could validate workflows from other workspaces by sending arbitrary workflow IDs.

**The vulnerability:**
- `loadWorkflowFromNormalizedTables(workflowId)` (line 31) loads workflow data without any workspace authorization check
- An attacker could send workflow IDs from other users' workspaces
- While this only returns boolean validation results (not sensitive data), it still:
  1. Allows enumeration of workflow IDs across workspaces
  2. Leaks information about workflow structure (whether they have start blocks)
  3. Could enable timing attacks to infer workflow complexity

**Fix:**
Add workspace validation before checking each workflow:

```typescript
for (const workflowId of workflowIds) {
  try {
    // Verify workflow belongs to user's workspace
    const [workflowRecord] = await db
      .select({ id: workflow.id, workspaceId: workflow.workspaceId })
      .from(workflow)
      .where(eq(workflow.id, workflowId))
      .limit(1)

    if (!workflowRecord || workflowRecord.workspaceId !== session.user.workspaceId) {
      logger.warn(`Unauthorized workflow access attempt: ${workflowId}`)
      results[workflowId] = false
      continue
    }

    const state = await loadWorkflowFromNormalizedTables(workflowId)
    results[workflowId] = hasValidStartBlockInState(state)
  } catch (error) {
    logger.warn(`Failed to validate workflow ${workflowId}:`, error)
    results[workflowId] = false
  }
}
```

Note: You'll need to verify the session object structure to get the correct workspace ID (might be `session.user.activeWorkspaceId` or require a separate query).
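
If the batch size grows, the per-ID lookup in the loop above turns into N extra queries. A batched ownership check is one alternative; the sketch below assumes the same drizzle `db`/`workflow` schema used in the snippet above and a `workspaceId` already resolved from the session.

```typescript
import { inArray } from 'drizzle-orm'

// Sketch: fetch all requested workflows in one query, then keep only those
// owned by the caller's workspace. `db`, `workflow`, `workspaceId`, `logger`,
// `loadWorkflowFromNormalizedTables`, and `hasValidStartBlockInState` are
// assumed to be in scope as in the route and the snippet above.
const rows = await db
  .select({ id: workflow.id, workspaceId: workflow.workspaceId })
  .from(workflow)
  .where(inArray(workflow.id, workflowIds))

const ownedIds = new Set(rows.filter((r) => r.workspaceId === workspaceId).map((r) => r.id))

const results: Record<string, boolean> = {}
for (const workflowId of workflowIds) {
  if (!ownedIds.has(workflowId)) {
    results[workflowId] = false
    continue
  }
  try {
    const state = await loadWorkflowFromNormalizedTables(workflowId)
    results[workflowId] = hasValidStartBlockInState(state)
  } catch (error) {
    logger.warn(`Failed to validate workflow ${workflowId}:`, error)
    results[workflowId] = false
  }
}
```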

}

return NextResponse.json({ data: results })
} catch (error) {
logger.error('Failed to validate workflows for MCP:', error)
return NextResponse.json({ error: 'Failed to validate workflows' }, { status: 500 })
}
}
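
For reference, a minimal client-side sketch of how this endpoint could be called. The request body and the `{ data }` / `{ error }` envelopes follow the handler above; everything else (function name, error handling) is illustrative only.

```typescript
// Sketch: ask the API which workflows have a valid start block for MCP usage.
async function validateWorkflowsForMcp(workflowIds: string[]): Promise<Record<string, boolean>> {
  const res = await fetch('/api/mcp/workflow-servers/validate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ workflowIds }),
  })
  if (!res.ok) {
    // The handler returns { error } with status 400, 401, or 500 on failure.
    const { error } = await res.json()
    throw new Error(error ?? 'Failed to validate workflows')
  }
  const { data } = await res.json()
  return data // e.g. { "wf_123": true, "wf_456": false }
}
```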
apps/sim/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/deploy/components/deploy-modal/components/mcp/mcp.tsx
@@ -147,6 +147,8 @@ export function McpDeploy({
})
const [parameterDescriptions, setParameterDescriptions] = useState<Record<string, string>>({})
const [pendingServerChanges, setPendingServerChanges] = useState<Set<string>>(new Set())
const [saveError, setSaveError] = useState<string | null>(null)
const isSavingRef = useRef(false)

const parameterSchema = useMemo(
() => generateParameterSchema(inputFormat, parameterDescriptions),
@@ -173,7 +175,7 @@
[]
)

const selectedServerIds = useMemo(() => {
const actualServerIds = useMemo(() => {
const ids: string[] = []
for (const server of servers) {
const toolInfo = serverToolsMap[server.id]
@@ -184,6 +186,21 @@
return ids
}, [servers, serverToolsMap])

const [pendingSelectedServerIds, setPendingSelectedServerIds] = useState<string[] | null>(null)

const selectedServerIds = pendingSelectedServerIds ?? actualServerIds

useEffect(() => {
if (isSavingRef.current) return
if (pendingSelectedServerIds !== null) {
const pendingSet = new Set(pendingSelectedServerIds)
const actualSet = new Set(actualServerIds)
if (pendingSet.size === actualSet.size && [...pendingSet].every((id) => actualSet.has(id))) {
setPendingSelectedServerIds(null)
}
}
}, [actualServerIds, pendingSelectedServerIds])
Comment on lines +193 to +202
Race condition: The effect that clears `pendingSelectedServerIds` can incorrectly reset user changes when data loads after the user has made selections.

**Scenario:**
1. Component mounts, `actualServerIds` is empty (data still loading)
2. User immediately selects servers → `pendingSelectedServerIds = ['server1', 'server2']`
3. Data loads → `actualServerIds = []` (no tools deployed yet)
4. Effect on line 193 runs: compares `['server1', 'server2']` with `[]`
5. They don't match, so pending state is NOT cleared
6. User clicks Save → operations complete
7. `actualServerIds` updates to `['server1', 'server2']`
8. Effect runs again and clears `pendingSelectedServerIds` → CORRECT

However, consider this scenario:
1. Workflow already has tools on `['server1']`
2. `actualServerIds = ['server1']` after initial load
3. User changes selection to `['server2']` → `pendingSelectedServerIds = ['server2']`
4. Background refetch occurs (from another component) → `actualServerIds` updates to `['server1']` again
5. Effect compares `['server2']` with `['server1']`, they don't match → pending state preserved (CORRECT)
6. Even if `isSavingRef.current` is false when this happens and the user hasn't saved yet, the behavior is still correct

On closer inspection, the logic appears sound. The guard `if (isSavingRef.current) return` prevents clearing during save operations, and the equality check ensures pending state is only cleared when it matches actual state.

**Retraction:** After deeper analysis, this logic is actually correct. The effect appropriately handles the synchronization between pending and actual state.
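
The clearing effect above and the `hasServerSelectionChanges` memo further down repeat the same order-insensitive comparison of pending vs. actual IDs. If that duplication ever becomes a maintenance concern, one option is a small shared helper; this is only a sketch, and `areIdSetsEqual` is a hypothetical name that does not exist in the PR.

```typescript
// Hypothetical helper: order-insensitive equality of two ID lists.
function areIdSetsEqual(a: string[], b: string[]): boolean {
  const setA = new Set(a)
  const setB = new Set(b)
  return setA.size === setB.size && [...setA].every((id) => setB.has(id))
}

// The clearing effect could then reduce to:
// if (!isSavingRef.current && pendingSelectedServerIds !== null &&
//     areIdSetsEqual(pendingSelectedServerIds, actualServerIds)) {
//   setPendingSelectedServerIds(null)
// }
```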

const hasLoadedInitialData = useRef(false)

useEffect(() => {
@@ -241,7 +258,17 @@
}, [toolName, toolDescription, parameterDescriptions, savedValues])

const hasDeployedTools = selectedServerIds.length > 0

const hasServerSelectionChanges = useMemo(() => {
if (pendingSelectedServerIds === null) return false
const pendingSet = new Set(pendingSelectedServerIds)
const actualSet = new Set(actualServerIds)
if (pendingSet.size !== actualSet.size) return true
return ![...pendingSet].every((id) => actualSet.has(id))
}, [pendingSelectedServerIds, actualServerIds])

const hasChanges = useMemo(() => {
if (hasServerSelectionChanges && selectedServerIds.length > 0) return true
if (!savedValues || !hasDeployedTools) return false
if (toolName !== savedValues.toolName) return true
if (toolDescription !== savedValues.toolDescription) return true
@@ -251,7 +278,15 @@
return true
}
return false
}, [toolName, toolDescription, parameterDescriptions, hasDeployedTools, savedValues])
}, [
toolName,
toolDescription,
parameterDescriptions,
hasDeployedTools,
savedValues,
hasServerSelectionChanges,
selectedServerIds.length,
])

useEffect(() => {
onCanSaveChange?.(hasChanges && hasDeployedTools && !!toolName.trim())
@@ -262,45 +297,121 @@
}, [servers.length, onHasServersChange])

/**
* Save tool configuration to all deployed servers
* Save tool configuration to all selected servers.
* This handles both adding to new servers and updating existing tools.
*/
const handleSave = useCallback(async () => {
if (!toolName.trim()) return
if (selectedServerIds.length === 0) return

const toolsToUpdate: Array<{ serverId: string; toolId: string }> = []
for (const server of servers) {
const toolInfo = serverToolsMap[server.id]
if (toolInfo?.tool) {
toolsToUpdate.push({ serverId: server.id, toolId: toolInfo.tool.id })
}
}
isSavingRef.current = true
onSubmittingChange?.(true)
setSaveError(null)

if (toolsToUpdate.length === 0) return
const actualSet = new Set(actualServerIds)
const toAdd = selectedServerIds.filter((id) => !actualSet.has(id))
const toRemove = actualServerIds.filter((id) => !selectedServerIds.includes(id))
const toUpdate = selectedServerIds.filter((id) => actualSet.has(id))

onSubmittingChange?.(true)
try {
for (const { serverId, toolId } of toolsToUpdate) {
await updateToolMutation.mutateAsync({
const errors: string[] = []

for (const serverId of toAdd) {
setPendingServerChanges((prev) => new Set(prev).add(serverId))
try {
await addToolMutation.mutateAsync({
workspaceId,
serverId,
toolId,
workflowId,
toolName: toolName.trim(),
toolDescription: toolDescription.trim() || undefined,
parameterSchema,
})
onAddedToServer?.()
logger.info(`Added workflow ${workflowId} as tool to server ${serverId}`)
} catch (error) {
const serverName = servers.find((s) => s.id === serverId)?.name || serverId
errors.push(`Failed to add to "${serverName}"`)
logger.error(`Failed to add tool to server ${serverId}:`, error)
} finally {
setPendingServerChanges((prev) => {
const next = new Set(prev)
next.delete(serverId)
return next
})
}
}

for (const serverId of toRemove) {
const toolInfo = serverToolsMap[serverId]
if (toolInfo?.tool) {
setPendingServerChanges((prev) => new Set(prev).add(serverId))
try {
await deleteToolMutation.mutateAsync({
workspaceId,
serverId,
toolId: toolInfo.tool.id,
})
setServerToolsMap((prev) => {
const next = { ...prev }
delete next[serverId]
return next
})
Comment on lines +354 to +358
Manually updating `serverToolsMap` after a successful delete can cause state inconsistencies. The `ServerToolsQuery` components (lines 516-524) are continuously querying for tools and updating this same map via `handleServerToolData`. This creates a race condition where:

1. Delete succeeds, local state updated to remove the tool
2. Query refetch happens (line 397)
3. Query results come back and `ServerToolsQuery` updates the map again
4. If the backend hasn't fully processed the delete yet, the tool might reappear

The manual state update is unnecessary since `refetchServers()` (line 397) will trigger the queries to update naturally. Remove this manual update and rely on the query invalidation.

} catch (error) {
const serverName = servers.find((s) => s.id === serverId)?.name || serverId
errors.push(`Failed to remove from "${serverName}"`)
logger.error(`Failed to remove tool from server ${serverId}:`, error)
} finally {
setPendingServerChanges((prev) => {
const next = new Set(prev)
next.delete(serverId)
return next
})
}
}
Comment on lines +344 to +370
The `toRemove` loop has a potential issue: it manually updates `serverToolsMap` state after deletion, but this update may not reflect the actual deletion result if the API call succeeded but the local state update is based on stale data.

**Issue:**
Lines 354-358 manually delete from `serverToolsMap`:

```typescript
setServerToolsMap((prev) => {
  const next = { ...prev }
  delete next[serverId]
  return next
})
```

This update runs inside the try block, i.e. whenever `deleteToolMutation.mutateAsync()` succeeds. However:

1. The deletion mutation already invalidates queries (see `workflow-mcp-servers.ts` lines 420-429), which will trigger a refetch
2. This manual state update could race with the query invalidation refetch
3. If the refetch completes before this manual update, the manual update could overwrite fresh data with stale data

**Impact:**
- Could cause UI inconsistencies where tools appear to be deleted but then reappear
- The `refetchServers()` on line 403 should handle updating this data anyway

**Recommendation:**
Remove the manual state update and rely on the query invalidation:

```typescript
for (const serverId of toRemove) {
  const toolInfo = serverToolsMap[serverId]
  if (toolInfo?.tool) {
    setPendingServerChanges((prev) => new Set(prev).add(serverId))
    try {
      await deleteToolMutation.mutateAsync({
        workspaceId,
        serverId,
        toolId: toolInfo.tool.id,
      })
      // Remove manual state update - let query invalidation handle it
    } catch (error) {
      const serverName = servers.find((s) => s.id === serverId)?.name || serverId
      errors.push(`Failed to remove from "${serverName}"`)
      logger.error(`Failed to remove tool from server ${serverId}:`, error)
    } finally {
      setPendingServerChanges((prev) => {
        const next = new Set(prev)
        next.delete(serverId)
        return next
      })
    }
  }
}
```

The `ServerToolsQuery` components will automatically update `serverToolsMap` when the queries refetch.

}

for (const serverId of toUpdate) {
const toolInfo = serverToolsMap[serverId]
if (toolInfo?.tool) {
setPendingServerChanges((prev) => new Set(prev).add(serverId))
try {
await updateToolMutation.mutateAsync({
workspaceId,
serverId,
toolId: toolInfo.tool.id,
toolName: toolName.trim(),
toolDescription: toolDescription.trim() || undefined,
parameterSchema,
})
} catch (error) {
const serverName = servers.find((s) => s.id === serverId)?.name || serverId
errors.push(`Failed to update "${serverName}"`)
logger.error(`Failed to update tool on server ${serverId}:`, error)
} finally {
setPendingServerChanges((prev) => {
const next = new Set(prev)
next.delete(serverId)
return next
})
}
}
// Update saved values after successful save (triggers re-render → hasChanges becomes false)
}

if (errors.length > 0) {
setSaveError(errors.join('. '))
} else {
refetchServers()
setPendingSelectedServerIds(null)
setSavedValues({
toolName,
toolDescription,
parameterDescriptions: { ...parameterDescriptions },
})
onCanSaveChange?.(false)
onSubmittingChange?.(false)
} catch (error) {
logger.error('Failed to save tool configuration:', error)
onSubmittingChange?.(false)
}

isSavingRef.current = false
onSubmittingChange?.(false)
Comment on lines +400 to +414
Critical issue: When partial failures occur during the save operation, the success indicators are still triggered even though some operations failed. This creates a misleading user experience.

**The Problem:**
- If adding to servers 1, 2, and 3, and server 2 fails, the code still executes:
  - `refetchServers()` (line 403)
  - `setPendingSelectedServerIds(null)` (line 404) - clears pending state
  - `setSavedValues(...)` (lines 405-409) - marks values as saved
  - `onCanSaveChange?.(false)` (line 410) - disables save button

**Impact:**
1. User sees the `saveError` message but the UI behaves as if everything succeeded
2. The pending state is cleared, hiding which operations failed
3. The save button is disabled, preventing immediate retry
4. `refetchServers()` may show the tool was added to servers 1 and 3, but the user has no clear indication that server 2 failed

**Solution:**
Only execute success actions when `errors.length === 0`:

```typescript
if (errors.length > 0) {
  setSaveError(errors.join('. '))
  // Don't clear pending state or mark as saved on partial failure
} else {
  refetchServers()
  setPendingSelectedServerIds(null)
  setSavedValues({
    toolName,
    toolDescription,
    parameterDescriptions: { ...parameterDescriptions },
  })
  onCanSaveChange?.(false)
}
```

This way, users can see what failed, keep the pending changes visible, and retry the save operation.
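
As a side note on the error bookkeeping itself: the sequential `errors` array works, and if the per-server add calls were ever considered safe to run in parallel, the same aggregation could be expressed with `Promise.allSettled`. The sketch below only illustrates that pattern under the stated assumption; the PR does not claim the mutations are parallel-safe.

```typescript
// Sketch: run the "add" mutations in parallel and collect one error message per failed server.
const addResults = await Promise.allSettled(
  toAdd.map((serverId) =>
    addToolMutation.mutateAsync({
      workspaceId,
      serverId,
      workflowId,
      toolName: toolName.trim(),
      toolDescription: toolDescription.trim() || undefined,
      parameterSchema,
    })
  )
)

const errors: string[] = []
addResults.forEach((result, i) => {
  if (result.status === 'rejected') {
    // Promise.allSettled preserves input order, so index i maps back to toAdd[i].
    const serverId = toAdd[i]
    const serverName = servers.find((s) => s.id === serverId)?.name || serverId
    errors.push(`Failed to add to "${serverName}"`)
    logger.error(`Failed to add tool to server ${serverId}:`, result.reason)
  }
})
```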

}, [
toolName,
toolDescription,
@@ -309,9 +420,16 @@
servers,
serverToolsMap,
workspaceId,
workflowId,
selectedServerIds,
actualServerIds,
addToolMutation,
deleteToolMutation,
updateToolMutation,
refetchServers,
onSubmittingChange,
onCanSaveChange,
onAddedToServer,
])

const serverOptions: ComboboxOption[] = useMemo(() => {
@@ -321,83 +439,13 @@
}))
}, [servers])

const handleServerSelectionChange = useCallback(
async (newSelectedIds: string[]) => {
if (!toolName.trim()) return

const currentIds = new Set(selectedServerIds)
const newIds = new Set(newSelectedIds)

const toAdd = newSelectedIds.filter((id) => !currentIds.has(id))
const toRemove = selectedServerIds.filter((id) => !newIds.has(id))

for (const serverId of toAdd) {
setPendingServerChanges((prev) => new Set(prev).add(serverId))
try {
await addToolMutation.mutateAsync({
workspaceId,
serverId,
workflowId,
toolName: toolName.trim(),
toolDescription: toolDescription.trim() || undefined,
parameterSchema,
})
refetchServers()
onAddedToServer?.()
logger.info(`Added workflow ${workflowId} as tool to server ${serverId}`)
} catch (error) {
logger.error('Failed to add tool:', error)
} finally {
setPendingServerChanges((prev) => {
const next = new Set(prev)
next.delete(serverId)
return next
})
}
}

for (const serverId of toRemove) {
const toolInfo = serverToolsMap[serverId]
if (toolInfo?.tool) {
setPendingServerChanges((prev) => new Set(prev).add(serverId))
try {
await deleteToolMutation.mutateAsync({
workspaceId,
serverId,
toolId: toolInfo.tool.id,
})
setServerToolsMap((prev) => {
const next = { ...prev }
delete next[serverId]
return next
})
refetchServers()
} catch (error) {
logger.error('Failed to remove tool:', error)
} finally {
setPendingServerChanges((prev) => {
const next = new Set(prev)
next.delete(serverId)
return next
})
}
}
}
},
[
selectedServerIds,
serverToolsMap,
toolName,
toolDescription,
workspaceId,
workflowId,
parameterSchema,
addToolMutation,
deleteToolMutation,
refetchServers,
onAddedToServer,
]
)
/**
* Handle server selection change - only updates local state.
* Actual add/remove operations happen when user clicks Save.
*/
const handleServerSelectionChange = useCallback((newSelectedIds: string[]) => {
setPendingSelectedServerIds(newSelectedIds)
}, [])

const selectedServersLabel = useMemo(() => {
const count = selectedServerIds.length
@@ -563,11 +611,7 @@
)}
</div>

{addToolMutation.isError && (
<p className='mt-[6.5px] text-[12px] text-[var(--text-error)]'>
{addToolMutation.error?.message || 'Failed to add tool'}
</p>
)}
{saveError && <p className='mt-[6.5px] text-[12px] text-[var(--text-error)]'>{saveError}</p>}
</form>
)
}