Module 5: The Modelling Tool
The role of tools in MBSE, categories of tools, and selection criteria — remaining tool-agnostic while explaining what tools must do.
Why Tools Matter
A model that exists only on a whiteboard is not MBSE. It may capture good ideas, but it cannot be queried, validated, versioned, or shared across a distributed team. Tools are the third pillar of MBSE because they turn a conceptual model into a living, computable asset.
Without a tool, your model is a snapshot. With a tool, it becomes a persistent, evolving artefact that the entire organisation can interact with. Tools provide four key capabilities:
- Create — author model elements using the chosen modelling language (e.g. SysML), with syntax checking and graphical/textual editors.
- Store — persist the model in a repository so it survives beyond a single session and is protected against data loss.
- Analyse — run consistency checks, generate reports, perform simulations, and trace requirements to design elements.
- Share — enable multiple engineers to view, review, and contribute to the model concurrently.
Think of the difference between handwriting a letter and using a word processor. Both produce text, but the word processor enables spell-check, formatting, collaboration, and versioning. You could write perfectly good prose by hand, but you would lose the ability to search, reformat, or merge edits from five co-authors. MBSE tools do the same for models — they do not change what you model, but they fundamentally change what you can do with the model.
The four capabilities in practice
Every MBSE tool, regardless of vendor, must support some combination of these four capabilities. Some tools focus on authoring (Create), others on long-term management (Store), and still others on downstream use (Analyse, Share). Understanding this split is the key to understanding the tool landscape.
| Capability | What it enables | Without it… |
|---|---|---|
| Create | Graphical/textual editors, drag-and-drop diagrams, syntax validation | Engineers cannot author models efficiently |
| Store | Persistent repository, backup, access control | Models are lost when the session ends |
| Analyse | Consistency checks, simulation hooks, traceability matrices, reports | Models are static pictures with no computable value |
| Share | Multi-user access, review workflows, export to stakeholders | Models are siloed on one engineer's machine |
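To make the four capabilities concrete, here is a deliberately toy, in-memory sketch in Python. All class and method names are hypothetical and invented for illustration; a real MBSE tool exposes far richer APIs, persistent storage, and a full modelling language rather than plain dictionaries.

```python
import json

class ToyModelTool:
    """A toy illustration of Create / Store / Analyse / Share (hypothetical API)."""

    def __init__(self):
        self.elements = {}  # Store: the "repository" (in memory only, for the sketch)

    def create(self, name, kind, **attrs):
        """Create: author a model element with a kind and attributes."""
        if name in self.elements:
            raise ValueError(f"element {name!r} already exists")
        self.elements[name] = {"kind": kind, **attrs}

    def analyse(self):
        """Analyse: a trivial consistency check — every 'satisfies'
        reference must point at an element that actually exists."""
        errors = []
        for name, el in self.elements.items():
            target = el.get("satisfies")
            if target and target not in self.elements:
                errors.append(f"{name} satisfies unknown element {target!r}")
        return errors

    def share(self):
        """Share: export the model in a neutral, tool-independent form."""
        return json.dumps(self.elements, indent=2, sort_keys=True)

tool = ToyModelTool()
tool.create("REQ-1", "requirement", text="The pump shall deliver 5 L/min")
tool.create("Pump", "block", satisfies="REQ-1")
tool.create("Valve", "block", satisfies="REQ-99")  # dangling reference

print(tool.analyse())  # → ["Valve satisfies unknown element 'REQ-99'"]
```

The point of the sketch is that even a trivial check like "no dangling references" is only possible because the model is computable data, not a picture on a whiteboard.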
Categories of MBSE Tools
The MBSE tool ecosystem is broad. Rather than recommending a single product, it is more useful to understand the categories of tools and the role each category plays.
Authoring tools
These are the tools engineers interact with daily. They provide editors for creating and modifying models in a specific modelling language. Examples include Cameo Systems Modeler / Magic Systems of Systems Architect, IBM Rhapsody, Eclipse Papyrus, and Capella (which uses its own Arcadia method). Authoring tools typically support graphical diagram editing and, increasingly, textual notation.
Repository and PLM tools
Once models grow beyond a single file, they need a managed repository. Repository tools provide version control, access control, and lifecycle management for model artefacts. Product Lifecycle Management (PLM) platforms can also serve this role, integrating model data alongside CAD, documents, and bill-of-materials data. Examples include Teamwork Cloud and IBM DOORS (for requirements-focused repositories).
Simulation tools
MBSE models gain significant value when they can be simulated. Simulation tools take behavioural or parametric aspects of a model and execute them — predicting performance, verifying constraints, or exploring trade-offs. Examples include MATLAB/Simulink and Modelica-based environments (e.g. OpenModelica, Dymola).
Integration platforms
In practice, no single tool does everything. Integration platforms — sometimes called model buses or API connectors — bridge different tools so that data flows between them. They enable, for instance, a SysML model to feed into a simulation tool or to synchronise requirements between a modelling tool and a requirements management tool.
| Category | Purpose | Example Tools |
|---|---|---|
| Authoring | Create and edit models in a modelling language | Cameo / Magic Systems of Systems Architect, Rhapsody, Papyrus, Capella |
| Repository / PLM | Store, version, and manage model artefacts | Teamwork Cloud, DOORS, Windchill, 3DEXPERIENCE |
| Simulation | Execute models to predict behaviour and performance | Simulink, OpenModelica, Dymola, Cameo Simulation Toolkit |
| Integration | Connect tools and synchronise data across the toolchain | OSLC connectors, Intercax Syndeia, Phoenix Integration ModelCenter |
This tutorial series is tool-agnostic. We mention specific products only as examples of each category. The concepts taught here apply regardless of which tool you ultimately choose.
Model Repository and Collaboration
Real-world MBSE is a team sport. Multiple engineers — systems architects, domain specialists, requirements analysts — all need to contribute to the same model. This makes collaboration infrastructure just as important as the authoring tool itself.
Multi-user access
A model repository allows multiple users to work on the model concurrently, with access control ensuring that engineers modify only the parts they are responsible for. Role-based permissions (read, write, admin) prevent accidental damage to shared artefacts.
Branching and merging models
Just as software developers use Git to branch code, model repositories can support branching — allowing a team to explore a design variant without affecting the main model. When the variant is approved, it is merged back. However, model merging is significantly harder than code merging because models are graphs of interconnected elements, not lines of text.
Version control for models
Every change to the model should be tracked: who changed what, when, and why. This provides an audit trail, supports rollback, and is often required for certification in regulated industries (aerospace, medical devices, automotive).
The analogy to Git is useful but imperfect. Code diffs show line-by-line changes in text files. Model diffs must compare graph structures — added elements, deleted relationships, changed attributes — and present them in a way that a human can review. This is an area of active tool development.
Concurrent model editing is hard. When two engineers modify the same model element at the same time, the merge may produce conflicts that are difficult to resolve. Unlike a text merge conflict (which shows two versions of a line), a model conflict may involve contradictory relationships, incompatible type changes, or broken references. Best practice: define clear ownership boundaries (e.g. by package) and use locking where the tool supports it.
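The difference between text diffing and model diffing can be sketched in a few lines. The sketch below (hypothetical structure: models as dictionaries mapping element IDs to attributes and relationship targets) shows the kind of comparison a model repository must perform instead of a line-by-line text diff.

```python
def diff_models(base, revised):
    """Compare two model versions as graphs of elements, reporting
    added, deleted, and changed element IDs — not changed text lines."""
    added = sorted(set(revised) - set(base))
    deleted = sorted(set(base) - set(revised))
    changed = sorted(
        eid for eid in set(base) & set(revised)
        if base[eid] != revised[eid]
    )
    return {"added": added, "deleted": deleted, "changed": changed}

# A "main" model and a design-variant branch (hypothetical elements)
main = {
    "Pump":  {"kind": "block", "satisfies": "REQ-1"},
    "Valve": {"kind": "block", "connectedTo": "Pump"},
}
branch = {
    "Pump":  {"kind": "block", "satisfies": "REQ-2"},   # retargeted relationship
    "Motor": {"kind": "block", "connectedTo": "Pump"},  # new element
}

print(diff_models(main, branch))
# → {'added': ['Motor'], 'deleted': ['Valve'], 'changed': ['Pump']}
```

Note what the diff cannot tell you on its own: deleting `Valve` may have broken references elsewhere, and the retargeted `satisfies` relationship on `Pump` may conflict with a change on another branch. Resolving those is exactly the hard part of model merging.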
Model Exchange and Interoperability
No organisation uses a single tool for everything. Models must be exchanged between tools, between teams, and between organisations (e.g. a prime contractor and its suppliers). This is the domain of interoperability standards.
Think of USB vs. proprietary chargers. Before USB became the standard, every phone manufacturer had its own connector — you needed a different cable for every device. A standard connector means any device works with any charger. Model exchange standards aim for the same thing: any conformant tool should be able to read a model produced by any other conformant tool.
Key interoperability standards
| Standard | Purpose | Scope |
|---|---|---|
| XMI (XML Metadata Interchange) | Serialise UML/SysML v1 models as XML files | Model exchange between UML/SysML v1 tools |
| ReqIF (Requirements Interchange Format) | Exchange requirements data between tools | Requirements management (e.g. DOORS ↔ other tools) |
| SysML v2 API | Standard REST API for reading/writing SysML v2 model data | SysML v2 tool interoperability (the new standard) |
| FMI (Functional Mock-up Interface) | Exchange simulation components between tools | Co-simulation and model exchange for dynamic models |
| OSLC (Open Services for Lifecycle Collaboration) | Link artefacts across lifecycle tools via web APIs | Cross-tool traceability (requirements, tests, models) |
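As a concrete taste of the first standard in the table, here is a sketch of reading an XMI-flavoured fragment with Python's standard XML library. The fragment is heavily simplified and illustrative only — real tool exports carry tool-specific profiles, diagram data, and much richer namespaces, which is precisely where interoperability problems creep in.

```python
import xml.etree.ElementTree as ET

# A simplified XMI-like fragment (real exports are far richer).
XMI = """<?xml version="1.0" encoding="UTF-8"?>
<xmi:XMI xmlns:xmi="http://www.omg.org/spec/XMI/20131001"
         xmlns:uml="http://www.omg.org/spec/UML/20131001">
  <uml:Model xmi:id="m1" name="PumpSystem">
    <packagedElement xmi:id="b1" xmi:type="uml:Class" name="Pump"/>
    <packagedElement xmi:id="b2" xmi:type="uml:Class" name="Valve"/>
  </uml:Model>
</xmi:XMI>"""

XMI_NS = "{http://www.omg.org/spec/XMI/20131001}"
root = ET.fromstring(XMI)

# Walk every element and index it by its xmi:id, as any importer must.
elements = {
    el.get(f"{XMI_NS}id"): el.get("name")
    for el in root.iter()
    if el.get(f"{XMI_NS}id") is not None
}
print(elements)  # → {'m1': 'PumpSystem', 'b1': 'Pump', 'b2': 'Valve'}
```

Even this toy importer has to resolve namespaces and reconstruct identity from `xmi:id` attributes; two tools that disagree on either point will silently lose or corrupt model data on round-trip.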
Why interoperability is hard
Even with standards, interoperability remains challenging. Different tools may interpret the same standard slightly differently, support different subsets of the standard, or add proprietary extensions. The result is that a model exported from Tool A and imported into Tool B may lose information, require manual repair, or behave unexpectedly.
SysML v2 addresses this directly with its companion API standard — a REST-based interface that defines exactly how tools exchange model data. This is a significant improvement over XMI-based exchange in SysML v1, which was notoriously inconsistent across tools.
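Because the SysML v2 API is REST-based, interacting with it looks like ordinary web programming. The sketch below shows only how the resource URLs are shaped — the endpoint pattern (`/projects`, `/projects/{id}/commits/{id}/elements`) follows the OMG Systems Modeling API & Services specification, but the server URL and IDs are hypothetical, and a real client would also handle authentication and paging.

```python
from urllib.parse import urljoin

class SysMLv2Client:
    """Minimal URL-building sketch for a SysML v2 API server (hypothetical host)."""

    def __init__(self, base_url):
        self.base_url = base_url.rstrip("/") + "/"

    def projects_url(self):
        # GET /projects — list the projects held by the server
        return urljoin(self.base_url, "projects")

    def elements_url(self, project_id, commit_id):
        # GET /projects/{p}/commits/{c}/elements — model content at a commit
        return urljoin(
            self.base_url,
            f"projects/{project_id}/commits/{commit_id}/elements",
        )

client = SysMLv2Client("https://models.example.com/api")
print(client.elements_url("proj-42", "commit-7"))
# → https://models.example.com/api/projects/proj-42/commits/commit-7/elements

# Fetching would then be a plain HTTP GET returning JSON, e.g.:
#   import urllib.request, json
#   with urllib.request.urlopen(client.projects_url()) as resp:
#       projects = json.load(resp)
```

The key contrast with XMI is that exchange happens element-by-element over a live API against commit-addressed model state, rather than by exporting and re-importing one monolithic file.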
When evaluating tools, always test interoperability with your actual toolchain. Conformance to a standard on paper does not guarantee seamless exchange in practice. Request a proof-of-concept or trial before committing.
Selecting a Tool — Decision Framework
Choosing an MBSE tool is a significant decision that affects your team for years. The wrong choice can create vendor lock-in, limit collaboration, or impose workflows that conflict with your method. The right choice accelerates adoption and amplifies the value of your models.
Decision criteria
The following table lists the key criteria to evaluate. Weight each criterion according to your organisation's context — a small startup has different priorities from a defence prime contractor.
| Criterion | Questions to ask | Why it matters |
|---|---|---|
| Language support | Does it support SysML v1, SysML v2, UML, or domain-specific languages? | Must match the language your method requires |
| Method support | Does it support your chosen method (OOSEM, MagicGrid, Arcadia, etc.)? | A tool that fights your method slows adoption |
| Collaboration | Multi-user repository? Branching? Role-based access? Web review? | MBSE is a team activity; solo-only tools do not scale |
| Simulation capability | Built-in simulation? Integration with Simulink/Modelica? | Executable models multiply the value of MBSE |
| Interoperability | XMI, ReqIF, SysML v2 API, FMI, OSLC support? | Your tool must fit into a larger toolchain |
| Cost | Licence model (per-seat, floating, subscription)? Total cost of ownership? | Budget constraints are real; include training and infrastructure |
| Vendor lock-in | Can you export your models to another tool if needed? | Switching costs can trap you with an unsuitable tool |
| Ecosystem | Community size? Plugins? Training resources? Vendor roadmap? | A vibrant ecosystem reduces risk and accelerates learning |
| Usability | Learning curve? UI quality? Documentation? | Engineer adoption depends on the day-to-day experience |
A pragmatic approach
No tool scores perfectly on every criterion. The goal is to identify your must-haves (non-negotiable requirements) and nice-to-haves (desirable but not essential), then shortlist tools that satisfy all must-haves and score well on nice-to-haves.
- Define your requirements — what language, method, team size, and integration needs do you have?
- Survey the market — identify candidate tools in each category.
- Request demonstrations — see the tool in action with your actual use cases.
- Run a pilot — trial the shortlisted tool(s) on a real (but low-risk) project.
- Evaluate and decide — score against your criteria and make a documented decision.
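The final step — scoring against your criteria — is often done as a simple weighted sum. The sketch below uses entirely hypothetical weights, tools, and scores; in practice you would first eliminate any candidate that fails a must-have, then weight the remaining criteria to reflect your organisation's context.

```python
# 0–5 weights: how much each criterion matters to *your* organisation
weights = {
    "language_support": 5,
    "collaboration": 4,
    "interoperability": 4,
    "cost": 3,
    "usability": 3,
}

# 0–5 scores per candidate: how well each tool meets the criterion
scores = {
    "Tool A": {"language_support": 5, "collaboration": 3,
               "interoperability": 4, "cost": 2, "usability": 4},
    "Tool B": {"language_support": 4, "collaboration": 5,
               "interoperability": 3, "cost": 4, "usability": 3},
}

def weighted_score(tool):
    """Weighted sum of a tool's criterion scores."""
    return sum(weights[c] * scores[tool][c] for c in weights)

for tool in sorted(scores):
    print(tool, weighted_score(tool))
# → Tool A 71
# → Tool B 73
```

A documented scoring table like this is also valuable after the decision: it records why the tool was chosen, which helps when the choice is revisited years later.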
This series is tool-agnostic. We teach MBSE concepts, methods, and frameworks that apply regardless of which tool you use. A good systems engineer understands the principles first — the tool is the vehicle, not the destination.