On 2018-10-07 04:53, Nicholas Mc Guire wrote:
While I think that the system-theoretical approach Leveson takes is
the only sound one for complex systems, it is arguable that for
very simple systems (Type-A) where 1) all possible faults are known and
2) behavior under fault conditions is fully understood, pre-certification
could be meaningful, as long as you ensure that the behavioral subset of
your system is a strict subset of the analyzed functions and behaviors.
OK, but as you know, simple systems don't involve microprocessors, let
alone Linux :)
A CPU is itself a complex (sub)system.
System theory would say NO even to that, but the argument would be that
the goal is tolerable safety and not absolute safety - but again, if
at all, then only for Type-A systems, and if you take the definition of
Type-A systems in a strict sense, there are very, very few Type-A systems.
I believe that your assessment is absolutely correct, and if any credit
can ever be taken from previous certification, then for sure not in the
first case - so it is hard to understand that industry is still
hoping for a SEooC GNU/Linux (or any other complex OS) to start with.
It seems to me that the 'root cause' may be that too many folks have a
tendency to let others (or google) do their thinking for them :-)
Or we could call it "herd instinct".
It may turn out that there are strong common patterns that allow
re-use of e.g. design patterns or evidence databases - but that will
not happen until we have gone through 10+ full system certifications.
In the meantime, some vendors will have been paid for helping in 10+
certifications, which clearly provides an incentive :-)
> Even so I do expect that we will soon establish viable ways to
> benefit from Linux-based software in systems with high SIL/ASIL demands.
> To support this effort I think we need to demystify some of the terms and
> techniques applied by the safety community, and perhaps consider reuse of
> risk mitigation patterns from other disciplines.
Let's start by demystifying SEooC (Safety Element out of Context) :)
Absolutely. I'm not sure that this list is the place for that, though.
Are you aware of any existing public lexicon for the safety community's
magic spells, which would be helpful to non-safety, non-academic readers
approaching from a software/systems perspective?
I've started some naive documentation steps towards a 'safety argument'
- I think a lexicon (or a link to one) will be needed there in any case.
Note that I've put quotes around 'safety argument' because I'm actually
seeking evidence to justify trustability, not just argumentation.
Achieving safe systems that utilize GNU/Linux is doable - I'm quite
confident about that - and the key statement you made is "establish
viable ways to justify": some of those ways can be derived from previous
approaches, while new ones will require measures and techniques that
will, I believe, need to be developed from the ground up, starting
from a sound theoretical basis (like system theory).
Excellent, it seems we are thinking along similar lines.