Module 7: Analysis & Trade Studies
How to structure quantitative analyses, define objective functions and calculations, and apply the trade study pattern to compare design alternatives — all within SysML v2’s formal analysis framework.
KerML → SysML: Analysis Concepts
SysML v2 analysis modelling is built on KerML’s Function and Case abstractions. Every analysis keyword in this module maps to a KerML construct that provides its formal semantics.
| SysML v2 concept (L2) | Underlying KerML construct (L1) | What SysML adds |
|---|---|---|
| calc def | Function (a Behavior that returns a value) | Named, reusable mathematical expression with typed parameters and a return type |
| calc (usage) | Expression typed by a Function | Evaluates the function in a specific context, binding arguments to parameters |
| analysis case def | CaseDefinition (specialises CalculationDefinition) | Structures a complete analysis with subject, objective, and return values |
| analysis case (usage) | CaseUsage | An instance of the analysis applied to a specific design context |
| objective | RequirementUsage (scoped to the case) | Defines the success criterion the analysis must satisfy or evaluate |
| subject | SubjectMembership (a parameter) | Identifies which system element is being analysed |
| return | ReturnParameterMembership | The computed result value of the analysis or calculation |
| require constraint | RequirementConstraintMembership | A Boolean constraint that must hold for the objective to be met |
In KerML, a Function is a Behavior that computes and returns a value — it specialises Behavior with a result parameter. CaseDefinition further specialises CalculationDefinition (which itself specialises Function) to add the structured notions of subject and objective. This means every analysis case is, at the KerML level, a function that returns a value.
Analysis Case Deep Dive
KerML origin: analysis case def → CaseDefinition (specialises CalculationDefinition)
An analysis case is SysML v2’s structured container for any quantitative evaluation. It packages a subject (the element under analysis), an objective (the success criterion), and a return value (the computed result) into a single, reusable definition. Unlike a loose collection of constraints, an analysis case is a first-class model element that can be instantiated, specialised, and traced to requirements.
Think of a laboratory experiment report. It states what is being tested (subject), what constitutes success (objective), describes the procedure (body), and records the result (return value). An analysis case is the modelling equivalent — a self-contained, repeatable experiment definition.
Basic structure
```
analysis case def MassAnalysis {
    subject vehicle : Vehicle;  // what we are analysing

    objective massObjective {
        doc /* Total mass shall not exceed 2000 kg */
        require constraint { vehicle.totalMass <= 2000 [kg] }
    }

    return totalMass : ISQ::MassValue = vehicle.totalMass;
}
```
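The mechanics of this pattern can be sketched outside the modelling tool. The following Python analog (an illustration of the semantics only, not SysML; function and variable names are hypothetical) mirrors MassAnalysis: the subject arrives as an input, the objective is a Boolean check, and the analysis returns its computed value.

```python
# Illustrative Python analog of MassAnalysis (not SysML): the subject
# is an input, the objective is a Boolean check, and the analysis
# returns its computed value alongside the verdict.
def mass_analysis(vehicle_total_mass_kg: float) -> tuple[bool, float]:
    objective_met = vehicle_total_mass_kg <= 2000.0  # require constraint
    total_mass = vehicle_total_mass_kg               # return totalMass
    return objective_met, total_mass

ok, total = mass_analysis(1850.0)  # ok is True, total is 1850.0
```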
Subject and scope
The subject keyword declares the element under study. It is a directed in parameter at the KerML level — the analysis case receives the subject from the context in which it is used. The subject can be any part, connection, or item definition:
```
analysis case def ThermalAnalysis {
    subject heatExchanger : HeatExchanger;  // scoped to one component
    // ... objective, calculations, return
}

// Instantiation: apply analysis to a specific design element
analysis case thermalCheck : ThermalAnalysis {
    subject heatExchanger = mainRadiator;  // binds to a specific part
}
```
Return values
Every analysis case can declare a return parameter — the computed output of the analysis. Because CaseDefinition specialises CalculationDefinition (which specialises Function), every analysis case is inherently a function that returns a value:
```
analysis case def PowerBudgetAnalysis {
    subject sys : ElectricalSystem;
    objective { require constraint { sys.totalDraw <= sys.capacity } }
    return margin : ISQ::PowerValue = sys.capacity - sys.totalDraw;
}
```
The return value is not just documentation — it is a typed Feature that can be referenced downstream. Other analysis cases, requirements, or trade studies can read the return value of a completed analysis case to make further decisions.
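That downstream use can be sketched in plain Python (hypothetical names, not SysML): the margin returned by one analysis feeds a follow-on decision.

```python
# Python sketch (not SysML): the value returned by a power-budget
# analysis is read by a downstream decision, mirroring how other
# model elements can reference an analysis case's return parameter.
def power_budget_analysis(capacity_w: float, total_draw_w: float):
    objective_met = total_draw_w <= capacity_w  # require constraint
    margin = capacity_w - total_draw_w          # return margin
    return objective_met, margin

ok, margin = power_budget_analysis(capacity_w=500.0, total_draw_w=450.0)
can_add_payload = ok and margin >= 25.0  # downstream logic reads the margin
```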
Objective Functions
KerML origin: objective → RequirementUsage scoped to a CaseDefinition
The objective is the core of an analysis case — it states what must be true for the analysis to succeed. At the KerML level, an objective is a RequirementUsage owned by the case, which means it carries all the machinery of requirements: constraints, documentation, and traceability.
Simple constraint objectives
The most common form uses require constraint to specify a Boolean condition:
```
analysis case def VibrationAnalysis {
    subject structure : AircraftWing;
    objective vibrationLimit {
        doc /* First natural frequency must exceed 25 Hz */
        require constraint {
            structure.firstNaturalFrequency >= 25 [Hz]
        }
    }
    return fn1 : ISQ::FrequencyValue = structure.firstNaturalFrequency;
}
```
Compound objectives
Objectives can contain multiple require constraint blocks. All constraints must hold simultaneously for the objective to be satisfied:
```
objective performanceGoals {
    doc /* The sensor must meet all three criteria */
    require constraint { subject.range >= 200 [m] }
    require constraint { subject.accuracy <= 0.05 [m] }
    require constraint { subject.updateRate >= 20 [Hz] }
}
```
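The conjunction semantics can be sketched in plain Python (hypothetical names, not SysML): the compound objective is satisfied only when every constraint holds.

```python
# Python sketch of a compound objective (not SysML): all of the
# require constraints must hold simultaneously, i.e. a conjunction.
def performance_goals_met(range_m: float, accuracy_m: float,
                          update_rate_hz: float) -> bool:
    return all([
        range_m >= 200,        # require constraint 1
        accuracy_m <= 0.05,    # require constraint 2
        update_rate_hz >= 20,  # require constraint 3
    ])
```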
Measures of effectiveness
For trade studies, objectives often define measures of effectiveness (MOEs) — quantitative metrics that allow alternatives to be compared on a common scale. These are typically calculations referenced by the objective:
```
objective sensorEffectiveness {
    doc /* Maximise the composite effectiveness score */
    require constraint {
        computeEffectiveness(subject) >= 0.7  // minimum threshold
    }
}
```
Keep objectives declarative: state what must hold, not how to compute it. Delegate the computational steps to calc def elements (see Calculation Definitions below). This separation keeps the objective readable and lets the same calculation be reused across multiple analysis cases.
Calculation Definitions
KerML origin: calc def → Function (a Behavior that returns a value)
A calculation definition (calc def) defines a reusable mathematical relationship. It is the SysML v2 mechanism for expressing formulas, unit conversions, scoring functions, and any computable expression. At the KerML level, calc def maps to Function, which specialises Behavior with a result parameter.
Basic calculation definition
```
calc def KineticEnergy {
    in mass : ISQ::MassValue;
    in velocity : ISQ::SpeedValue;
    return energy : ISQ::EnergyValue = 0.5 * mass * velocity ** 2;
}
```
Using calculations
A calc usage evaluates a calc def in a specific context by binding arguments to the declared parameters:
```
part def Projectile {
    attribute mass : ISQ::MassValue;
    attribute speed : ISQ::SpeedValue;
    attribute ke : ISQ::EnergyValue = KineticEnergy(mass, speed);
}
```
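For comparison, a plain-Python sketch of the same relationship (units carried informally in the names, since plain Python has no quantity types):

```python
def kinetic_energy_j(mass_kg: float, velocity_mps: float) -> float:
    # mirrors the KineticEnergy calc def: 0.5 * m * v^2
    return 0.5 * mass_kg * velocity_mps ** 2

ke = kinetic_energy_j(2.0, 10.0)  # 100.0 (joules)
```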
Calculations with ISQ units
SysML v2 integrates with the International System of Quantities (ISQ) library, providing SI-based unit types. Calculations naturally carry units through expressions, enabling dimensional consistency checking:
```
calc def ThermalResistance {
    in thickness : ISQ::LengthValue;
    in conductivity : ISQ::ThermalConductivityValue;
    in area : ISQ::AreaValue;
    return resistance : ISQ::ThermalResistanceValue
        = thickness / (conductivity * area);
}
```
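A quick Python sketch of the same formula, with the dimensional reasoning spelled out in comments (floats only; plain Python performs no unit checking, which is exactly what the ISQ types add in SysML):

```python
def thermal_resistance_k_per_w(thickness_m: float,
                               conductivity_w_per_mk: float,
                               area_m2: float) -> float:
    # R = L / (k * A)
    # dimensionally: m / ((W/(m*K)) * m^2) = K/W
    return thickness_m / (conductivity_w_per_mk * area_m2)

r = thermal_resistance_k_per_w(0.01, 50.0, 0.5)  # 0.01 / 25 = 0.0004 K/W
```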
Composing calculations
Calculation definitions can call other calculations, building complex expressions from simple, tested components:
```
calc def WeightedScore {
    in rawScore : Real;
    in weight : Real;
    return : Real = rawScore * weight;
}

calc def CompositeScore {
    in scores : Real[*];   // array of raw scores
    in weights : Real[*];  // corresponding weights
    // illustrative shorthand: WeightedScore applied element-wise
    // to the two sequences, then the results are summed
    return : Real = sum(WeightedScore(scores, weights));
}
```
A calc def is not an imperative function. It is a declarative mathematical relationship — the expression defines what the result equals, not a sequence of steps to compute it. SysML v2 tools may evaluate it eagerly, lazily, or symbolically depending on the analysis context.
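The element-wise composition can be made explicit in a Python sketch (hypothetical names; zip stands in for the shorthand used above):

```python
def weighted_score(raw_score: float, weight: float) -> float:
    # mirrors the WeightedScore calc def
    return raw_score * weight

def composite_score(scores: list[float], weights: list[float]) -> float:
    # mirrors CompositeScore: apply weighted_score element-wise, then sum
    return sum(weighted_score(s, w) for s, w in zip(scores, weights))

total = composite_score([0.8, 0.5], [0.6, 0.4])  # 0.48 + 0.20 = 0.68
```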
Trade Study Pattern
Combines: analysis case def, calc def, objective, and variation points
A trade study compares multiple design alternatives against a common set of evaluation criteria and selects the best option. SysML v2 provides a structured pattern for this using analysis cases, calculation definitions, and variation points. The pattern has four steps: define alternatives, define evaluation criteria, score each alternative, and select the winner.
Step 1: Define alternatives
Each alternative is modelled as a separate usage or variant of the subject type:
```
part def Sensor {
    attribute range : ISQ::LengthValue;
    attribute accuracy : ISQ::LengthValue;
    attribute updateRate : ISQ::FrequencyValue;
    attribute unitCost : Real;  // in USD
    attribute mass : ISQ::MassValue;
    attribute powerDraw : ISQ::PowerValue;
}

part radarSensor : Sensor {
    attribute :>> range = 250 [m];
    attribute :>> accuracy = 0.10 [m];
    attribute :>> updateRate = 15 [Hz];
    attribute :>> unitCost = 1200;
    attribute :>> mass = 0.8 [kg];
    attribute :>> powerDraw = 12 [W];
}

part lidarSensor : Sensor {
    attribute :>> range = 150 [m];
    attribute :>> accuracy = 0.02 [m];
    attribute :>> updateRate = 10 [Hz];
    attribute :>> unitCost = 4500;
    attribute :>> mass = 1.2 [kg];
    attribute :>> powerDraw = 18 [W];
}

part cameraSensor : Sensor {
    attribute :>> range = 100 [m];
    attribute :>> accuracy = 0.15 [m];
    attribute :>> updateRate = 30 [Hz];
    attribute :>> unitCost = 350;
    attribute :>> mass = 0.3 [kg];
    attribute :>> powerDraw = 5 [W];
}
```
Step 2: Define evaluation criteria
Each criterion is a calc def that normalises a raw attribute to a 0–1 scale. This enables fair comparison across different physical quantities:
```
calc def NormaliseRange {
    in range : ISQ::LengthValue;
    return : Real = range / 300 [m];  // max expected range
}

calc def NormaliseAccuracy {
    in accuracy : ISQ::LengthValue;
    return : Real = 1.0 - (accuracy / 0.20 [m]);  // lower is better
}

calc def NormaliseCost {
    in cost : Real;
    return : Real = 1.0 - (cost / 5000);  // lower cost is better
}
```
Step 3: Build a scoring model
A composite scoring calculation applies weights to each normalised criterion:
```
calc def SensorScore {
    in sensor : Sensor;
    return : Real =
        0.30 * NormaliseRange(sensor.range) +
        0.25 * NormaliseAccuracy(sensor.accuracy) +
        0.20 * NormaliseCost(sensor.unitCost) +
        0.15 * (sensor.updateRate / 30 [Hz]) +
        0.10 * (1.0 - sensor.mass / 2.0 [kg]);
}
```
The weights (0.30, 0.25, 0.20, 0.15, 0.10) reflect stakeholder priorities and must sum to 1.0. Changing these weights allows sensitivity analysis — examining how the ranking of alternatives shifts when priorities change.
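The scoring model is easy to cross-check outside the modelling tool. This Python sketch (data and weights from the examples above; the dictionary layout is hypothetical) reproduces SensorScore for all three alternatives:

```python
# Cross-check of SensorScore in plain Python, using the sensor data
# and weights from the trade study above.
SENSORS = {
    "radar":  dict(range_m=250, accuracy_m=0.10, rate_hz=15, cost=1200, mass_kg=0.8),
    "lidar":  dict(range_m=150, accuracy_m=0.02, rate_hz=10, cost=4500, mass_kg=1.2),
    "camera": dict(range_m=100, accuracy_m=0.15, rate_hz=30, cost=350,  mass_kg=0.3),
}
WEIGHTS = (0.30, 0.25, 0.20, 0.15, 0.10)

def sensor_score(s, w=WEIGHTS):
    return (w[0] * (s["range_m"] / 300)            # NormaliseRange
          + w[1] * (1.0 - s["accuracy_m"] / 0.20)  # NormaliseAccuracy
          + w[2] * (1.0 - s["cost"] / 5000)        # NormaliseCost
          + w[3] * (s["rate_hz"] / 30)             # update-rate term
          + w[4] * (1.0 - s["mass_kg"] / 2.0))     # mass term

scores = {name: sensor_score(s) for name, s in SENSORS.items()}
# radar ~0.662, lidar ~0.485, camera ~0.5835
```

Note that the camera's unrounded score is 0.5835; it appears as 0.584 in the decision matrix because that table sums its rounded column entries.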
Evaluation & Results
Combines: analysis case usages, result comparison, and decision documentation
With alternatives defined and the scoring model in place, the final step is to evaluate each alternative, compare scores, and document the decision. This is done by instantiating the analysis case for each alternative.
Evaluating each alternative
```
analysis case def SensorTradeStudy {
    subject sensor : Sensor;
    objective { require constraint { SensorScore(sensor) >= 0.5 } }
    return score : Real = SensorScore(sensor);
}

// Instantiate for each alternative
analysis case radarEval : SensorTradeStudy { subject sensor = radarSensor; }
analysis case lidarEval : SensorTradeStudy { subject sensor = lidarSensor; }
analysis case cameraEval : SensorTradeStudy { subject sensor = cameraSensor; }
```
Comparing results
The scores can be laid out in a decision matrix. Based on the scoring model defined in the Trade Study Pattern section, the computed results are as follows (entries rounded to three decimal places; each total is the sum of its rounded entries):
| Alternative | Range (0.30) | Accuracy (0.25) | Cost (0.20) | Update Rate (0.15) | Mass (0.10) | Total Score |
|---|---|---|---|---|---|---|
| Radar | 0.250 | 0.125 | 0.152 | 0.075 | 0.060 | 0.662 |
| LiDAR | 0.150 | 0.225 | 0.020 | 0.050 | 0.040 | 0.485 |
| Camera | 0.100 | 0.063 | 0.186 | 0.150 | 0.085 | 0.584 |
With these weights, Radar scores highest (0.662), followed by Camera (0.584) and LiDAR (0.485). The LiDAR fails the minimum threshold of 0.5 set in the objective.
Documenting the decision
The analysis case return values and objective satisfaction results become part of the model’s permanent record. This traceability allows future reviewers to understand why a design choice was made and what assumptions (weights, normalisation functions) underpinned it:
```
// Decision record: select radar as primary perception sensor
part autonomousVehicle : Vehicle {
    part primarySensor : Sensor = radarSensor;
    // Justified by: radarEval.score = 0.662 (highest)
    // Rationale: best balance of range, cost, and mass
}
```
Run sensitivity analyses by varying the weights. If the ranking changes significantly under small weight adjustments, the decision is fragile and may need additional criteria or data to strengthen it. SysML v2 makes this straightforward: create new analysis case usages with different weight sets and compare the return values.
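One such sensitivity check, sketched in Python (sensor data from the examples above; the perturbed weight set is an assumption chosen for illustration): shift 0.10 of weight from range to cost and observe how radar's lead over camera narrows.

```python
# Sensitivity sketch: move 0.10 of weight from range to cost and see
# how the gap between the top two alternatives changes.
def sensor_score(s, w):
    return (w[0] * (s["range_m"] / 300)
          + w[1] * (1.0 - s["accuracy_m"] / 0.20)
          + w[2] * (1.0 - s["cost"] / 5000)
          + w[3] * (s["rate_hz"] / 30)
          + w[4] * (1.0 - s["mass_kg"] / 2.0))

radar  = dict(range_m=250, accuracy_m=0.10, rate_hz=15, cost=1200, mass_kg=0.8)
camera = dict(range_m=100, accuracy_m=0.15, rate_hz=30, cost=350,  mass_kg=0.3)

baseline   = (0.30, 0.25, 0.20, 0.15, 0.10)
cost_heavy = (0.20, 0.25, 0.30, 0.15, 0.10)  # weights still sum to 1.0

gap_base = sensor_score(radar, baseline)   - sensor_score(camera, baseline)
gap_cost = sensor_score(radar, cost_heavy) - sensor_score(camera, cost_heavy)
# radar still wins under cost-heavy weights, but its lead shrinks from
# roughly 0.08 to roughly 0.01 -- the decision is sensitive to the
# range/cost trade-off
```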
Complete Worked Example
The following model integrates all concepts from this module into a complete sensor selection trade study for an autonomous vehicle perception system. It defines alternatives, calculations, an analysis case with objective, and evaluates the alternatives.
```
package SensorTradeStudyPackage {
    private import ISQ::*;
    private import SI::*;
    private import ScalarValues::*;

    // ── Sensor definition ────────────────────────────────────
    part def Sensor {
        attribute range : ISQ::LengthValue;
        attribute accuracy : ISQ::LengthValue;
        attribute updateRate : ISQ::FrequencyValue;
        attribute unitCost : Real;
        attribute mass : ISQ::MassValue;
        attribute powerDraw : ISQ::PowerValue;
    }

    // ── Alternatives ─────────────────────────────────────────
    part radarSensor : Sensor {
        attribute :>> range = 250 [m];
        attribute :>> accuracy = 0.10 [m];
        attribute :>> updateRate = 15 [Hz];
        attribute :>> unitCost = 1200;
        attribute :>> mass = 0.8 [kg];
        attribute :>> powerDraw = 12 [W];
    }

    part lidarSensor : Sensor {
        attribute :>> range = 150 [m];
        attribute :>> accuracy = 0.02 [m];
        attribute :>> updateRate = 10 [Hz];
        attribute :>> unitCost = 4500;
        attribute :>> mass = 1.2 [kg];
        attribute :>> powerDraw = 18 [W];
    }

    part cameraSensor : Sensor {
        attribute :>> range = 100 [m];
        attribute :>> accuracy = 0.15 [m];
        attribute :>> updateRate = 30 [Hz];
        attribute :>> unitCost = 350;
        attribute :>> mass = 0.3 [kg];
        attribute :>> powerDraw = 5 [W];
    }

    // ── Normalisation calculations ────────────────────────────
    calc def NormRange {
        in r : ISQ::LengthValue;
        return : Real = r / 300 [m];
    }

    calc def NormAccuracy {
        in a : ISQ::LengthValue;
        return : Real = 1.0 - (a / 0.20 [m]);
    }

    calc def NormCost {
        in c : Real;
        return : Real = 1.0 - (c / 5000);
    }

    calc def NormRate {
        in f : ISQ::FrequencyValue;
        return : Real = f / 30 [Hz];
    }

    calc def NormMass {
        in m : ISQ::MassValue;
        return : Real = 1.0 - (m / 2.0 [kg]);
    }

    // ── Composite scoring calculation ────────────────────────
    calc def SensorScore {
        in sensor : Sensor;
        return : Real =
            0.30 * NormRange(sensor.range) +
            0.25 * NormAccuracy(sensor.accuracy) +
            0.20 * NormCost(sensor.unitCost) +
            0.15 * NormRate(sensor.updateRate) +
            0.10 * NormMass(sensor.mass);
    }

    // ── Analysis case definition ─────────────────────────────
    analysis case def SensorTradeStudy {
        subject sensor : Sensor;

        objective suitability {
            doc /* Sensor must achieve a minimum composite score */
            require constraint { SensorScore(sensor) >= 0.5 }
        }

        return score : Real = SensorScore(sensor);
    }

    // ── Evaluate each alternative ─────────────────────────────
    analysis case radarEval : SensorTradeStudy {
        subject sensor = radarSensor;  // score = 0.662
    }

    analysis case lidarEval : SensorTradeStudy {
        subject sensor = lidarSensor;  // score = 0.485 (fails objective)
    }

    analysis case cameraEval : SensorTradeStudy {
        subject sensor = cameraSensor;  // score = 0.584
    }

    // ── Decision: apply best alternative ──────────────────────
    part def AutonomousVehicle {
        part perceptionSuite {
            // Selected: radar (highest score, meets objective)
            part primarySensor : Sensor = radarSensor;
            // Complementary: camera (second highest, low cost)
            part secondarySensor : Sensor = cameraSensor;
        }
    }
}
```
This model demonstrates: calc def for normalisation and scoring functions, analysis case def with subject and objective, require constraint for minimum threshold, instantiation of the analysis case for each alternative, and the final design decision traced back to the trade study results.
In a real project, you would complement this trade study with sensitivity analysis (varying weights), additional constraints (power budget, integration complexity), and links to verification cases that confirm the selected sensor meets system-level requirements from Module 5.
Module Summary
| SysML v2 concept | KerML origin | Key rule |
|---|---|---|
| calc def | Function (specialises Behavior) | Declarative mathematical relationship with typed parameters and a return value |
| calc (usage) | Expression typed by Function | Evaluates a calc def in context; binds arguments to parameters |
| analysis case def | CaseDefinition | Structured container with subject, objective, and return value |
| analysis case (usage) | CaseUsage | Instance of an analysis applied to a specific design element |
| objective | RequirementUsage | Success criterion scoped to the analysis case; carries constraints |
| subject | SubjectMembership | Identifies which element is under analysis; acts as an input parameter |
| return | ReturnParameterMembership | The computed result value; can be referenced by downstream elements |
| require constraint | RequirementConstraintMembership | Boolean condition that must hold for the objective to be satisfied |
| Trade study pattern | Composition of above | Define alternatives, normalise criteria, score with weights, compare, decide |