Conversation
    return resp, flight_id
    if resp.flight_plan_status in {
nit: improve readability with an early return (return resp, flight_id, None) if the status is not among those.
(The same could also apply to the inner if, with return resp, flight_id, flight_info.)
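The suggested early-return shape might look like the following sketch. All names here (Resp, RELEVANT_STATUSES, resolve) are stand-ins assumed for illustration, not the PR's actual code:

```python
from dataclasses import dataclass

@dataclass
class Resp:
    flight_plan_status: str

# Placeholder statuses; the real set comes from the flight_planning API
RELEVANT_STATUSES = {"Planned", "OkToFly"}

def resolve(resp, flight_id, flight_info):
    # Early return when the status is not among those of interest,
    # so the main logic below stays unindented
    if resp.flight_plan_status not in RELEVANT_STATUSES:
        return resp, flight_id, None
    # Same pattern for the inner condition
    if flight_info is None:
        return resp, flight_id, None
    return resp, flight_id, flight_info
```

The benefit is that each guard clause exits immediately, avoiding nested `if` blocks around the happy path.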
    def values_exactly_equal(
        as_requested: str | Number | StringBasedDateTime,
How were those types chosen? Is it because that is the exhaustive list of types the JSON values can take? I expect not, though, since there is no object-like type.
Also, nit: define a type alias for those? Notably if we expect this list to be extended in the future.
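The suggested alias could be sketched as follows. This is a hypothetical illustration: StringBasedDateTime is stubbed out here (the real one comes from implicitdict), and the function body is simplified relative to the assert-based original:

```python
from numbers import Number
from typing import Union

class StringBasedDateTime(str):
    """Stand-in stub for the implicitdict StringBasedDateTime type."""

# The alias suggested above: extending the accepted types then touches one place
ComparableValue = Union[str, Number, StringBasedDateTime]

def values_exactly_equal(
    as_requested: ComparableValue, as_planned: ComparableValue
) -> bool:
    # Simplified comparison; the real function asserts per-type instead
    return type(as_requested) is type(as_planned) and as_requested == as_planned
```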
    return
    if dtype == StringBasedDateTime:
        assert isinstance(as_planned, StringBasedDateTime)
        assert isinstance(as_requested, StringBasedDateTime)
Redundant, since we are inside the if isinstance(as_requested, dtype) and if dtype == StringBasedDateTime blocks.
    )
    return
    if dtype == StringBasedDateTime:
        assert isinstance(as_planned, StringBasedDateTime)
Isn't this redundant as well, since we previously checked if not isinstance(as_planned, dtype)?
    def _resolve_addresses(
        paths: Iterable[JSONPath] | JSONPath, content: dict[str, Any]
Since we check below whether it is a str, should the signature include it? Suggested change:

-    paths: Iterable[JSONPath] | JSONPath, content: dict[str, Any]
+    paths: Iterable[JSONPath] | JSONPath | str, content: dict[str, Any]
    details=f"As requested, {address} has {len(as_requested)} elements, but as planned, {address} has {len(as_planned)} elements (equality expected)",
    )
    return
    for i, v in enumerate(as_requested):
This implies the order is important. Is it actually? Wouldn't two lists with the same items but in a different order be equivalent? I don't recall entities in the APIs we use where order matters, but I have not checked that exhaustively.
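If order turns out not to matter for some addresses, an order-insensitive list comparator could be sketched like this (hypothetical, not from the PR):

```python
import json
from typing import Any

def lists_equal_unordered(as_requested: list, as_planned: list) -> bool:
    # Compare as multisets: serialize each element to canonical JSON so that
    # dicts and nested lists become hashable/sortable strings
    def key(v: Any) -> str:
        return json.dumps(v, sort_keys=True)

    return sorted(map(key, as_requested)) == sorted(map(key, as_planned))
```

Note this still treats duplicate counts as significant, which seems like the safer interpretation of "same items".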
    self._begin_step_fragment()
    oi_ref = self._operational_intent_shared_check(flight_info, skip_if_not_found)
    if flight_info is None:
        raise ScenarioLogicError(
Isn't ScenarioDidNotStopError more appropriate here?
    flight1_planned,
    )
    # TODO(#1326): Validate that flight as planned still allows this scenario to proceed
    assert as_planned is not None
Are the asserts in this file (and also in others) really required? If there is an actual probability of this happening, shouldn't it be handled properly? (i.e. if caused by a USS, fail the check; if caused by an inconsistency in the test suite, raise an exception)
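One concrete concern with bare asserts is that they are stripped when Python runs with optimizations (`python -O`). A small helper, sketched here with the ScenarioLogicError name taken from the snippet above (the helper itself is hypothetical), always raises:

```python
class ScenarioLogicError(Exception):
    """Raised for an inconsistency in the test suite itself (name from the snippet above)."""

def require(condition: bool, message: str) -> None:
    # Unlike `assert`, this check survives `python -O` and carries a message
    if not condition:
        raise ScenarioLogicError(message)
```

Usage would be e.g. `require(as_planned is not None, "as_planned missing after successful plan")` in place of the bare assert.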
    ImplicitDict.parse(query.response.json["as_planned"], FlightPlan)
    )
    except ValueError:
        # Best effort failed so it's ok to ignore additional `as_planned` field
Would it make sense to log the error as a warning here? That would probably help troubleshoot issues, if any.
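The suggestion might look roughly like this sketch, where FlightPlan is stubbed out (the real one is an ImplicitDict model) and the wrapper function name is assumed:

```python
import logging

logger = logging.getLogger(__name__)

class FlightPlan(dict):
    """Stand-in stub for the real FlightPlan model."""
    @staticmethod
    def parse(raw):
        if not isinstance(raw, dict):
            raise ValueError(f"cannot parse {raw!r} as FlightPlan")
        return FlightPlan(raw)

def best_effort_parse_as_planned(raw):
    try:
        return FlightPlan.parse(raw)
    except ValueError as e:
        # Still best-effort (the field is optional), but the failure
        # is now visible in the logs for troubleshooting
        logger.warning("Ignoring unparsable `as_planned` field: %s", e)
        return None
```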
    if "as_planned" in inject_resp and inject_resp.as_planned:
        resp.as_planned = inject_resp.as_planned.to_flight_plan()
    for k, v in inject_resp.items():
        if k not in {"planning_result", "flight_plan_status", "has_conflict"}:
To be sure that it is not a mistake: has_conflict is no longer required here?
This PR addresses (but does not fully resolve) #1326 by:

1. Populating an as_planned field in flight_planning responses after "snapping" requested start times to the current wall time (preventing users from rewriting history by saying a flight started in the past)
2. Parsing as_planned in the flight_planning API response
3. Populating a new as_planned field in the business object response
4. Checking that the as_planned response matched what was requested (very strictly, for now) via a new injection fidelity check

To accomplish 4 above, the large injection_evaluation.py tool is added. This attempts to make evaluation of as-planned flights as future-scalable as practical by performing bulk field-by-field comparisons automatically, according to comparators defined per JSONPath in addition to a default comparator.
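The per-JSONPath comparator idea can be sketched roughly as follows. This is a simplified illustration, not the actual injection_evaluation.py code; all names and the "$.field" path notation are assumed:

```python
from typing import Any, Callable

Comparator = Callable[[Any, Any], bool]

def default_comparator(requested: Any, planned: Any) -> bool:
    # Fallback used for any path without a registered comparator
    return requested == planned

def compare_fields(requested: dict, planned: dict, comparators: dict) -> list:
    """Walk both dicts field-by-field, applying the comparator registered for
    each JSONPath (falling back to the default); return mismatching paths."""
    problems = []

    def walk(path: str, a: Any, b: Any) -> None:
        if isinstance(a, dict) and isinstance(b, dict):
            for k in sorted(set(a) | set(b)):
                walk(f"{path}.{k}", a.get(k), b.get(k))
            return
        cmp = comparators.get(path, default_comparator)
        if not cmp(a, b):
            problems.append(path)

    walk("$", requested, planned)
    return problems
```

For example, allowing start times to be adjusted forward (the wall-clock snapping described above) would register something like `{"$.start_time": lambda req, plan: plan >= req}` while every other field falls through to exact equality.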
Ideally, each scenario would define what is acceptable for an as-planned flight (though hopefully they would reuse many of the same definitions -- e.g., "anything that overlaps the other flight"). However, making all these changes in one PR would be far too large. Instead, placeholder TODOs referencing the appropriate issue are added in the places where each scenario could (in the future) define what is acceptable for as-planned flights, and one "global" validation is added in flight_planning/test_steps.py:submit_flight which requires all fields in as_planned to match exactly (status quo), except start times are allowed to be adjusted forward to the wall clock.
In addition to the standard CI, I also tested as_planned validation by temporarily setting mock_uss to set the start time to wall clock + 5 seconds (instead of just wall clock, as in this actual PR) and observing that the Injection fidelity check failed. See the diff of mock_uss/flights/planning.py in the second commit for this approach (though the timing did not correspond exactly to commit 2).
A number of small errors and basedpyright findings, discovered during development of the core feature, are also resolved.