
Twelve Principles for the Design of Safety-Critical Real-Time Systems

Contents of the presentation "Twelve Principles for the Design of Safety-Critical Real-Time Systems.ppt" (slide-by-slide text):
Slide 1: Twelve Principles for the Design of Safety-Critical Real-Time Systems. H. Kopetz, TU Vienna, April 2004.

Slide 2: Outline.
Introduction; Design Challenges; The Twelve Design Principles; Conclusion.
Slide 3: Examples of Safety-Critical Systems--No Backup.
Fly-by-wire airplane: there is no mechanical or hydraulic connection between the pilot controls and the control surfaces. Drive-by-wire car: there is no mechanical or hydraulic connection between the steering wheel and the wheels.
Slide 4: What are the Alternatives in Case of Failure?
Design an architecture that will tolerate the failure of any one of its components. Or fall back to human control in case of a component failure. Can humans manage the functional difference between the computer control system and the manual backup system?
Slide 5: Design Challenges in Safety-Critical Applications.
In safety-critical applications, where the safety of the system-at-large (e.g., an airplane or a car) depends on the correct operation of the computer system (e.g., the primary flight control system or the by-wire system in a car), the following challenges must be addressed: the 10^-9 challenge; the process of abstracting; physical hardware faults; design faults; human failures.
Slide 6: The 10^-9 Challenge.
The system as a whole must be more reliable than any one of its components: e.g., system dependability of 1 FIT versus component dependability of 1000 FIT (1 FIT: 1 failure in 10^9 hours). The architecture must support fault tolerance to mask component failures. The system as a whole is not testable to the required level of dependability. The safety argument is based on a combination of experimental evidence and formal reasoning using an analytical dependability model.
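To make these orders of magnitude concrete, here is a small C sketch (not part of the original slides; the mission time, the 1000 FIT component rate and the 2-out-of-3 arrangement are illustrative assumptions) that converts FIT values into failure probabilities and shows how masking a single component failure closes most of the gap between component and system dependability.

```c
/* Sketch: relating FIT rates to failure probabilities (illustrative values only). */
#include <stdio.h>
#include <math.h>

/* 1 FIT = 1 failure per 10^9 device hours. */
static double fit_to_rate(double fit) { return fit / 1e9; }   /* failures per hour */

/* Probability of failure within t hours for a constant failure rate. */
static double p_fail(double rate_per_hour, double t_hours)
{
    return 1.0 - exp(-rate_per_hour * t_hours);
}

int main(void)
{
    const double component_fit = 1000.0;   /* assumed component dependability, 1000 FIT */
    const double mission_hours = 10.0;     /* e.g., one long flight */

    double p = p_fail(fit_to_rate(component_fit), mission_hours);

    /* A 2-out-of-3 (TMR) arrangement fails only if at least two replicas fail. */
    double p_tmr = 3.0 * p * p - 2.0 * p * p * p;

    printf("single component: p = %.3e over %.0f h\n", p, mission_hours);
    printf("2-out-of-3 TMR  : p = %.3e over %.0f h\n", p_tmr, mission_hours);
    return 0;
}
```

The sketch ignores voter failures and common-mode faults, which is one reason the slide insists that the safety argument must combine such analytical models with experimental evidence.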
Slide 7: The Process of Abstracting.
The behavior of a safety-critical computer system must be explainable by a hierarchically structured set of behavioral models, each one of them of a cognitive complexity that can be handled by the human mind. Establish a clear relationship between the behavioral model and the dependability model at such a high level of abstraction that the analysis of the dependability model becomes tractable. Example: any migration of a function from one ECU to another ECU changes the dependability model and requires a new dependability analysis. From the hardware point of view, a complete chip forms a single fault-containment region (FCR) that can fail in an arbitrary failure mode.
Slide 8: Physical Hardware Faults of SoCs: Assumed Behavioral Hardware Failure Rates (Orders of Magnitude).
Design assumption in aerospace: a chip can fail with a probability of 10^-6 per hour in an arbitrary failure mode. Assumed failure rates:
- Transient node failures (fail-silent): 1,000,000 FIT (MTTF = 1,000 hours). Source: neutron bombardment, aerospace.
- Transient node failures (non-fail-silent): 10,000 FIT (MTTF = 100,000 hours), tendency: increasing. Source: fault-injection experiments.
- Permanent hardware failures: 100 FIT (MTTF = 10,000,000 hours). Source: automotive field data.
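The FIT and MTTF columns above are two views of the same constant failure rate (MTTF in hours = 10^9 / FIT). A minimal check of that correspondence, using the values from the slide's table:

```c
/* Sketch: FIT <-> MTTF correspondence for the rates quoted above. */
#include <stdio.h>

/* MTTF in hours for a constant failure rate given in FIT (failures per 10^9 h). */
static double mttf_hours(double fit) { return 1e9 / fit; }

int main(void)
{
    double rates_fit[] = { 1e6, 1e4, 1e2 };   /* values from the slide's table */
    for (int i = 0; i < 3; i++)
        printf("%10.0f FIT  ->  MTTF = %12.0f hours\n",
               rates_fit[i], mttf_hours(rates_fit[i]));
    return 0;   /* prints 1000, 100000 and 10000000 hours, matching the MTTF column */
}
```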
Slide 9: Design Faults.
No silver bullet has been found yet--and this is no silver bullet either: interface-centric design! Partition the system along well-specified linking interfaces (LIFs) into nearly independent software units. Provide a hierarchically structured set of ways-and-means models of the LIFs, each one of a cognitive complexity that is commensurate with human cognitive capabilities. Design and validate the components in isolation with respect to the LIF specification and make sure that the composition is free of side effects (composability of the architecture). Beware of Heisenbugs!
Slide 10: The Twelve Design Principles.
1. Regard the Safety Case as a Design Driver.
2. Start with a Precise Specification of the Design Hypotheses.
3. Ensure Error Containment.
4. Establish a Consistent Notion of Time and State.
5. Partition the System along Well-Specified LIFs.
6. Make Certain that Components Fail Independently.
7. Follow the Self-Confidence Principle.
8. Hide the Fault-Tolerance Mechanisms.
9. Design for Diagnosis.
10. Create an Intuitive and Forgiving Man-Machine Interface.
11. Record Every Single Anomaly.
12. Provide a Never-Give-Up Strategy.
Slide 11: Regard the Safety Case as a Design Driver (I).
A safety case is a set of documented arguments intended to convince experts in the field (e.g., a certification authority) that the system as a whole is safe to deploy in a given environment. The safety case, which considers the system as a whole, determines the criticality of the computer system and analyses the impact of the computer-system failure modes on the safety of the application. Example: driver assistance versus automatic control of a car. The safety case should be regarded as a design driver, since it establishes the critical failure modes of the computer system.
Slide 12: Regard the Safety Case as a Design Driver (II).
In the safety case the multiple defenses between a subsystem failure and a potential catastrophic system failure must be meticulously analyzed. The distributed computer system should be structured such that the required experimental evidence can be collected with reasonable effort and the dependability models that are needed to arrive at the system-level safety argument are tractable.
Slide 13: Start with a Precise Specification of the Design Hypotheses.
The design hypotheses are a statement about the assumptions that are made in the design of the system. Of particular importance for safety-critical real-time systems is the fault hypothesis: a statement about the number and types of faults that the system is expected to tolerate.
- Determine the fault-containment regions (FCRs): a fault-containment region is the set of subsystems that share one or more common resources and that can be affected by a single fault.
- Specify the failure modes of the FCRs and their probabilities.
- Be aware of scenarios that are not covered by the fault hypothesis. Example: total loss of communication for a certain duration.
Slide 14: Contents of the Fault Hypothesis.
- Unit of failure: what is the fault-containment region (FCR)? A complete chip?
- Failure modes: what are the failure modes of the FCR?
- Frequency of failures: what is the assumed MTTF between failures for the different failure modes, e.g. transient failures vs. permanent failures?
- Detection: how are failures detected? How long is the detection latency?
- State recovery: how long does it take to repair corrupted state (in case of a transient fault)?
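Purely as an illustration of how these items can be kept together in one place (the field names and example values are invented, not taken from the slides), a fault hypothesis per FCR might be recorded as:

```c
/* Sketch: one way to record the contents of a fault hypothesis per FCR.
 * Field names and example values are illustrative, not from the slides. */
#include <stdio.h>

enum failure_mode { FAIL_SILENT, NON_FAIL_SILENT, ARBITRARY };

struct fault_hypothesis {
    const char        *fcr;                  /* unit of failure, e.g. "complete chip" */
    enum failure_mode  mode;                 /* assumed failure mode of the FCR       */
    double             mttf_hours;           /* assumed MTTF for this failure mode    */
    double             detection_latency_ms; /* how long until the failure is seen    */
    double             state_recovery_ms;    /* time to repair corrupted state        */
};

int main(void)
{
    struct fault_hypothesis h = {
        .fcr = "complete chip", .mode = ARBITRARY,
        .mttf_hours = 1000.0, .detection_latency_ms = 2.0, .state_recovery_ms = 50.0
    };
    printf("FCR=%s mode=%d MTTF=%.0fh detect=%.1fms recover=%.1fms\n",
           h.fcr, (int)h.mode, h.mttf_hours, h.detection_latency_ms, h.state_recovery_ms);
    return 0;
}
```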
Slide 15: Failure Modes of an FCR--Are there Restrictions?
The slide contrasts three cases (labelled A, B and C in the figure) for the number of FCRs needed to tolerate k faulty ones: under the fail-silent assumption, k+1; under the synchronized assumption, 2k+1; with no assumption (arbitrary failures), 3k+1. What is the assumption coverage in cases A and B?
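The three replication requirements can be made explicit in a small helper (an illustrative sketch; the enum names follow the slide's labels):

```c
/* Sketch: minimum number of FCRs needed to tolerate k faulty ones under
 * the three failure-mode assumptions named on the slide. */
#include <stdio.h>

enum assumption { FAIL_SILENT, SYNCHRONIZED, ARBITRARY };

static int fcrs_required(enum assumption a, int k)
{
    switch (a) {
    case FAIL_SILENT:  return k + 1;        /* restricted failure mode    */
    case SYNCHRONIZED: return 2 * k + 1;    /* restricted failure mode    */
    case ARBITRARY:    return 3 * k + 1;    /* no assumption on behaviour */
    }
    return -1;
}

int main(void)
{
    int k = 1;   /* single-fault hypothesis */
    printf("fail-silent : %d FCRs\n", fcrs_required(FAIL_SILENT, k));
    printf("synchronized: %d FCRs\n", fcrs_required(SYNCHRONIZED, k));
    printf("arbitrary   : %d FCRs\n", fcrs_required(ARBITRARY, k));
    return 0;   /* 2, 3 and 4 respectively for k = 1 */
}
```

The weaker the failure-mode assumption, the more FCRs are needed; the stronger the assumption, the more the safety argument depends on the assumption coverage the slide asks about.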
Slide 16: Example: Slightly-out-of-Specification (SOS) Failure.
The following is an example of the type of asymmetric non-fail-silent failures that have been observed during the experiments. (Slide figure: the receive window.)
Slide 17: Example: Brake-by-Wire Application.
Consider the scenario where the right two brakes do not accept an SOS-faulty brake-command message, while the left two brakes do accept this message and brake. (Slide figure: the four wheels RF, RB, LF, LB.) If the two left wheels brake while the two right wheels do not, the car will turn.
Slide 18: Ensure Error Containment.
In a distributed computer system the consequences of a fault, the ensuing error, can propagate outside the originating FCR (fault-containment region) either by an erroneous message or by an erroneous output action of the faulty node to the environment that is under the node's control. A propagated error invalidates the independence assumption. The error detector must be in a different FCR than the faulty unit. Distinguish between architecture-based and application-based error detection. Distinguish between error detection in the time domain and error detection in the value domain.
Slide 19: Fault Containment vs. Error Containment.
We do not need an error detector if we assume fail-silence. (Slide figure: two configurations, without and with error detection.) The error-detecting FCR must be independent of the FCR that has failed--at least two FCRs are required if a restricted failure mode is assumed.

Slide 20: Establish a Consistent Notion of Time and State.
A system-wide consistent notion of a discrete time is a prerequisite for a consistent notion of state, since the notion of state is introduced in order to separate the past from the future: "The state enables the determination of a future output solely on the basis of the future input and the state the system is in. In other words, the state enables a 'decoupling' of the past from the present and future. The state embodies all past history of a system. Knowing the state 'supplants' knowledge of the past. Apparently, for this role to be meaningful, the notion of past and future must be relevant for the system considered." (Taken from Mesarovic, Abstract System Theory, p. 45.) Fault masking by voting requires a consistent notion of state in the distributed fault-containment regions (FCRs).
Slide 21: Fault-Tolerant Sparse Time Base.
If the occurrence of events is restricted to active intervals of duration pi, with an interval of silence of duration Delta between any two active intervals, then we call the time base pi/Delta-sparse, or sparse for short.
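As an illustration of how a sparse time base is used (a sketch; the tick granularity and the values chosen for the active and silence intervals are assumptions), each event timestamp can be mapped to its active interval so that all correct nodes agree on ordering and simultaneity:

```c
/* Sketch: classifying event timestamps on a pi/Delta-sparse time base.
 * Tick granularity and the values of PI and DELTA are invented for the example. */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define PI_TICKS     2u   /* duration of an active interval  */
#define DELTA_TICKS  8u   /* duration of the silence between */

/* True if an event with this global timestamp lies in an active interval. */
static bool in_active_interval(uint64_t t)
{
    return (t % (PI_TICKS + DELTA_TICKS)) < PI_TICKS;
}

/* Index of the active interval an event belongs to; all correct nodes agree on it. */
static uint64_t interval_index(uint64_t t)
{
    return t / (PI_TICKS + DELTA_TICKS);
}

int main(void)
{
    uint64_t a = 21, b = 30;   /* example timestamps in ticks */
    printf("a: active=%d interval=%llu\n", in_active_interval(a), (unsigned long long)interval_index(a));
    printf("b: active=%d interval=%llu\n", in_active_interval(b), (unsigned long long)interval_index(b));
    /* Events in different active intervals can be ordered consistently system-wide;
       events generated outside active intervals violate the sparse-time assumption. */
    return 0;
}
```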
Slide 22: Need for Determinism in TMR Systems.
(Slide figure: FCUs forming a fault-tolerant smart sensor and TMR replicas, with a voter in front of the actuator.)
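Replica determinism matters because an exact-match voter, sketched below (the 32-bit command format is an assumption), can only mask a faulty channel if correct replicas produce bit-identical results for the same inputs:

```c
/* Sketch: exact-match majority voting over three replica outputs.
 * Works only if correct replicas are deterministic and therefore bit-identical. */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Returns true and writes the majority value if at least two replicas agree. */
static bool vote3(int32_t a, int32_t b, int32_t c, int32_t *out)
{
    if (a == b || a == c) { *out = a; return true; }
    if (b == c)           { *out = b; return true; }
    return false;   /* no majority: fault hypothesis (single faulty FCU) violated */
}

int main(void)
{
    int32_t cmd;
    if (vote3(1200, 1200, 1187, &cmd))   /* one replica deviates and is masked */
        printf("actuator command: %d\n", cmd);
    else
        printf("no majority -- invoke the never-give-up strategy\n");
    return 0;
}
```

If correct replicas were allowed to differ slightly, e.g. through non-deterministic scheduling or differing floating-point rounding, exact voting would wrongly flag them as faulty; this is the determinism requirement the slide points to.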
Slide 23: Partition the System along Well-Specified LIFs.
"Divide and conquer" is a well-proven method to master complexity. A linking interface (LIF) is an interface of a component that is used in order to integrate the component into a system of components. We have identified two different types of LIFs: time-sensitive LIFs and non-time-sensitive LIFs. Within an architecture, all LIFs of a given type should have the same generic structure. Avoid concurrency at the LIF level. The architecture must support the precise specification of LIFs in the domains of time and value and provide a comprehensible interface model.

Slides 24-25 (identical text): The LIF Specification Hides the Implementation.
(Slide figure: the component's internals--operating system, middleware, programming language, WCET, scheduling, memory management, etc.--are hidden behind the linking-interface specification: in messages, out messages, temporal properties, meaning--the interface model.)

Slide 26: Composability in Distributed Systems.
(Slide figure: two components with interface specifications A and B, connected by a communication system characterized by delay and dependability.)

Slide 27: A Component May Support Many LIFs.
(Slide figure: services X, Y and Z offered through separate LIFs--fault isolation in mixed-criticality components.)
Slide 28: Make Certain that Components Fail Independently.
Any dependence of FCR failures must be reflected in the dependability model--a challenging task! Independence is a system property. Independence of FCRs can be compromised by: shared physical resources (hardware, power supply, time base, etc.); external faults (EMI, heat, shock, spatial proximity); design; the flow of erroneous messages.

Slide 29: Follow the Self-Confidence Principle.
The self-confidence principle states that an FCR should consider itself correct unless two or more independent FCRs classify it as incorrect. If the self-confidence principle is observed, then a correct FCR will always make the correct decision under the assumption of a single faulty FCR; only a faulty FCR will make false decisions.
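A minimal sketch of the self-confidence decision rule (the vote encoding and the number of peer FCRs are assumptions, not from the slides):

```c
/* Sketch: the self-confidence principle for one FCR.
 * The FCR keeps its own view unless at least two independent peers contradict it. */
#include <stdio.h>
#include <stdbool.h>

/* own_ok: the FCR's self-assessment; peer_votes: verdicts from independent FCRs
 * (true = "you are correct"). Returns the FCR's resulting belief about itself. */
static bool self_confident(bool own_ok, const bool *peer_votes, int n_peers)
{
    int against = 0;
    for (int i = 0; i < n_peers; i++)
        if (!peer_votes[i])
            against++;
    return (against >= 2) ? false : own_ok;  /* two or more independent accusations override */
}

int main(void)
{
    bool one_accuser[]   = { true, false, true };   /* a single (possibly faulty) accuser */
    bool two_accusers[]  = { false, false, true };
    printf("one accuser : still considered correct = %d\n", self_confident(true, one_accuser, 3));
    printf("two accusers: still considered correct = %d\n", self_confident(true, two_accusers, 3));
    return 0;   /* under a single-fault assumption the correct FCR keeps the correct decision */
}
```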
Slide 30: Hide the Fault-Tolerance Mechanisms.
The complexity of the fault-tolerance algorithms can increase the probability of design faults and defeat their purpose. Fault-tolerance mechanisms (such as voting and recovery) are generic mechanisms that should be separated from the application in order not to increase the complexity of the application. Any fault-tolerant system requires a capability to detect faults that are masked by the fault-tolerance mechanisms--this is a generic diagnostic requirement that should be part of the architecture.

Slide 31: Design for Diagnosis.
The architecture and the application of a safety-critical system must support the identification of a field-replaceable unit that violates the specification:
- Diagnosis must be possible on the basis of the LIF specification and the information that is accessible at the LIF.
- Transient errors pose the biggest problems--condition-based maintenance.
- Determinism of the architecture helps!
- Avoid diagnostic deficiencies.
- Scrubbing--ensure that the fault-tolerance mechanisms work.

Slide 32: Diagnostic Deficiency in CAN.
Even an expert cannot decide who sent an erroneous CAN message with a wrong identifier. (Slide figure: nodes such as driver interface, assistant system, gateway, body, brake manager, engine control, steering manager and suspension, each attached to the CAN bus through its communication controller (CC) and I/O.)

Slide 33: Create an Intuitive and Forgiving Man-Machine Interface.
The system designer must assume that human errors will occur and must provide mechanisms that mitigate the consequences of human errors. Three levels of human errors: mistakes (misconception at the cognitive level), lapses (wrong rule from memory), slips (error in the execution of a rule).

Slide 34: Record Every Single Anomaly.
Every single anomaly that is observed during the operation of a safety-critical computer system must be investigated until an explanation can be given. This requires a well-structured design with precise external interface (LIF) specifications in the domains of time and value. Since in a fault-tolerant system many anomalies are masked from the application by the fault-tolerance mechanisms, the observation mechanisms must access the non-fault-tolerant layer; observation cannot be performed at the application level.

Slide 35: Provide a Never-Give-Up Strategy.
There will be situations when the fault hypothesis is violated and the fault-tolerant system will fail. Chances are good that the faults are transient and a restart of the whole system will succeed. Provide algorithms that detect the violation of the fault hypothesis and that initiate the restart. Ensure that the environment is safe (e.g., freezing of actuators) while the system restart is in progress. Provide an upper bound on the restart duration as a parameter of the architecture.
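A compact sketch of such a never-give-up path (the hook functions, their behavior and the 50 ms restart bound are illustrative assumptions; the slide only prescribes the strategy):

```c
/* Sketch: never-give-up handling when the fault hypothesis is violated.
 * The hook functions and the restart bound are placeholders for an actual architecture. */
#include <stdio.h>
#include <stdbool.h>

#define MAX_RESTART_MS 50u   /* assumed upper bound on the restart duration */

static bool fault_hypothesis_violated(void) { return true; }  /* e.g., no majority in the voter */
static void freeze_actuators(void) { puts("actuators frozen in a safe state"); }
static bool restart_system(unsigned budget_ms)
{
    printf("restarting, must complete within %u ms\n", budget_ms);
    return true;   /* transient faults: a whole-system restart usually succeeds */
}

int main(void)
{
    if (fault_hypothesis_violated()) {
        freeze_actuators();                      /* keep the environment safe meanwhile */
        if (!restart_system(MAX_RESTART_MS))     /* the bound is an architecture parameter */
            puts("restart failed within bound: enter permanent safe state");
    }
    return 0;
}
```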
Slide 36: Approach to Safety: The Swiss-Cheese Model.
(Slide figure: multiple layers of defenses--normal function, subsystem failure, fault tolerance, never-give-up strategy--between normal operation and a catastrophic system event; independence of the layers of error detection is important.) From Reason, J., Managing the Risk of Organizational Accidents, 1997.

Slide 37: Conclusion.
Every one of these twelve design principles can be the topic of a separate talk! Thank you.
Source: "Twelve Principles for the Design of Safety-Critical Real-Time Systems.ppt", http://900igr.net/kartinka/anglijskij-jazyk/twelve-principles-for-the-design-of-safety-critical-real-time-systems-200634.html
