Safety-Critical Systems Workshop

Will Glover

The theme of the workshop was "Software Safety - Where's the Evidence?". This report is from Will Glover of the AOS UK office.

The day was divided into five sessions, each consisting of a twenty-minute presentation followed by forty minutes of discussion. The five sessions were:

The Role of Uncertainty (Bev Littlewood, City University)
Design and Analysis Evidence (John McDermid, University of York)
Testing Evidence (Lorenzo Strigini, City University)
Proven in Use Evidence (Mark Machin, Logica)
Process-Based Evidence (Paul Edwards, Altran-Praxis)


The Role of Uncertainty (Bev Littlewood, City University)

Bev discussed the principles of ASARP (As Safe As Reasonably Practicable) and ACARP (As Confident As Reasonably Practicable). Confidence in safety arguments was a concept discussed by Tim Kelly (University of York) at the recent Assurance Case Forum. Bev's opinion seemed to differ from Tim Kelly's: Bev was in favour of explicitly expressing confidence in terms of Bayesian probabilities, whereas Tim favoured expressing confidence qualitatively. Tim Kelly did not attend this meeting; however, John McDermid (University of York) made the point that confidence can sometimes be expressed quantitatively, but not always; it depends heavily on the context. For example, expressing doubt about assumptions probabilistically would be difficult to do formally. In the end it seems to come down to expert judgement, at which point the benefits of using probabilities become less clear.
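As a rough illustration of the quantitative position (my own sketch, not a formulation presented at the workshop), expressing confidence in Bayesian terms means treating confidence in a safety claim as a probability that is updated as evidence arrives, via Bayes' theorem:

$$P(\text{claim} \mid \text{evidence}) = \frac{P(\text{evidence} \mid \text{claim})\, P(\text{claim})}{P(\text{evidence})}$$

The difficulty John McDermid pointed to is that quantities such as the prior $P(\text{claim})$, or a probability attached to doubt in an assumption, rarely have a principled numerical basis, so the resulting numbers end up encoding expert judgement anyway.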


Design and Analysis Evidence (John McDermid, University of York)

This discussion looked at the difference between process-based standards such as DO-178B and goal-based standards such as Def Stan 00-56. The overall conclusion seemed to be that goal-based standards are a good thing, but will usually lead to a process being followed anyway. John made the point that when using goal-based standards the process will end up being appropriate to the goal, whereas with a standard such as DO-178B the process is always the same and will hence sometimes be inappropriate. This is especially true given the time it takes to produce new process standards. John commented that DO-178C has taken significant time and money to produce but is only a minor revision of DO-178B.


Testing Evidence (Lorenzo Strigini, City University)

Lorenzo discussed different types of testing, including stress testing and in-use testing. There was a lot of discussion on what each type of testing is best suited for, but no firm conclusions were reached.


Proven in Use Evidence (Mark Machin, Logica)

The most interesting (and terrifying!) thing about this session was Mark's experience with medical IT projects. In the medical domain, safety is not always taken as seriously as it is in other domains, partly because when a patient is extremely ill, an unsafe medical system is better than nothing at all. It seems strange to talk about systems needing to be 10⁻⁶ or better in this context. In the context of routine checkups, however, the safety of IT systems is very important. Logica has had difficulties dealing with suppliers who were unable to supply the necessary safety evidence, forcing Logica to create this evidence themselves. Some of their suppliers also patch software on a weekly basis; Logica has had to queue up multiple patches from these suppliers and release them at a slower pace, so that there is time to perform quality assurance. Some hospitals use these pieces of software directly, without going through Logica, in which case the software is patched weekly.
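To put that figure in perspective (assuming, as is common in other safety-critical domains, that it refers to a failure rate of 10⁻⁶ per operating hour, which the session did not spell out):

$$10^{-6}\ \text{failures/hour} \;\Rightarrow\; \text{one expected failure per } 10^{6}\ \text{hours} \approx 114\ \text{years of continuous operation}$$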


Process-Based Evidence (Paul Edwards, Altran-Praxis)

Altran-Praxis is the company that produces SPARK Ada. The main discussion point for this session was the difference between process-based evidence and product-based evidence. The overall conclusion seemed to be that process-based evidence will increase (or reduce!) confidence in product-based evidence, i.e. product-based evidence for software, such as test results, is meaningless without evidence that those tests were performed in the correct manner.
