Repairable Systems Analysis Through Simulation


Chapter 7: Repairable Systems Analysis Through Simulation




NOTE: Some of the examples in this reference use time values with specified units (e.g., hours, days, etc.) while other examples use the abbreviation “tu” for values that could be interpreted as any given time unit. For details, see Time Units.

Having introduced some of the basic theory and terminology for repairable systems in Introduction to Repairable Systems, we will now examine the steps involved in the analysis of such complex systems. We will begin by examining system behavior through a sequence of discrete deterministic events and expand the analysis using discrete event simulation.

Simple Repairs

Deterministic View, Simple Series

To first understand how component failures and simple repairs affect the system and to visualize the steps involved, let's begin with a very simple deterministic example with two components, A and B, in series.

Component A fails every 100 hours and component B fails every 120 hours. Both require 10 hours to get repaired. Furthermore, assume that the surviving component stops operating when the system fails (thus not aging). NOTE: When a failure occurs in certain systems, some or all of the system's components may or may not continue to accumulate operating time while the system is down. For example, consider a transmitter-satellite-receiver system. This is a series system and the probability of failure for this system is the probability that any of the subsystems fail. If the receiver fails, the satellite continues to operate even though the receiver is down. In this case, the continued aging of the components while the system is down must be taken into consideration, since this will affect their failure characteristics and have an impact on the overall system downtime and availability.

The system behavior during an operation from 0 to 300 hours would be as shown in the figure below.

Overview of system and components for a simple series system with two components. Component A fails every 100 hours and component B fails every 120 hours. Both require 10 hours to get repaired and do not age (do not operate through failure) when the system is in a failed state.

Specifically, component A would fail at 100 hours, causing the system to fail. After 10 hours, component A would be restored and so would the system. The next event would be the failure of component B. We know that component B fails every 120 hours (or after an age of 120 hours). Since a component does not age while the system is down, component B would have reached an age of 120 when the clock reaches 130 hours. Thus, component B would fail at 130 hours and be repaired by 140, and so forth. Overall in this scenario, the system would be failed for a total of 40 hours due to four downing events (two due to A and two due to B). The overall system availability (average or mean availability) would be 260/300 = 0.8667. Point availability is the availability at a specific point in time. In this deterministic case, the point availability would always be equal to 1 if the system is up at that time and equal to zero if the system is down at that time.
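The deterministic walk-through above can be sketched as a short loop. This is a minimal illustration (not BlockSim's implementation): it advances the clock to each failure, renews the failed component, and charges the repair time as system downtime, with neither component aging while the system is down.

```python
# Sketch of the two-component series example: A fails every 100 hours,
# B every 120, repairs take 10 hours, and neither component ages while
# the system is down.
def simulate(end=300, repair=10):
    ttf = {"A": 100, "B": 120}
    clock, downtime = 0, 0
    age = {k: 0 for k in ttf}
    while True:
        # next failure: the component with the least remaining life
        k = min(ttf, key=lambda c: ttf[c] - age[c])
        dt = ttf[k] - age[k]
        if clock + dt >= end:
            break
        clock += dt
        for c in age:           # both components aged while the system ran
            age[c] += dt
        age[k] = 0              # failed component is renewed by the repair
        clock += repair         # system (and both components) down
        downtime += repair
    return downtime, (end - downtime) / end

print(simulate())  # total downtime 40 hours, availability 260/300
```

Running it reproduces the failures at 100 (A), 130 (B), 220 (A) and 270 (B), for 40 hours of downtime over the 300-hour mission.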

Operating Through System Failure

In the prior section we made the assumption that components do not age when the system is down. This assumption applies to most systems. However, under special circumstances, a unit may age even while the system is down. In such cases, the operating profile will be different from the one presented in the prior section. The figure below illustrates the case where the components operate continuously, regardless of the system status.

Overview of up and down states for a simple series system with two components. Component A fails every 100 hours and component B fails every 120 hours. Both require 10 hours to get repaired and age when the system is in a failed state (operate through failure).

Effects of Operating Through Failure

Consider a component with an increasing failure rate, as shown in the figure below. If the component continues to operate through system failure, then when the system fails at time t1, the surviving component's failure rate will be λ(t1), as illustrated in the figure below. When the system is restored at time t2, the component will have aged by t2 − t1 and its failure rate will now be λ(t2).

In the case of a component that does not operate through failure, the surviving component would still be at the same failure rate, λ(t1), when the system resumes operation.

Illustration of a component with a linearly increasing failure rate and the effect of operation through system failure.

Deterministic View, Simple Parallel

Consider the following system, where component A fails every 100, B every 120, C every 140 and D every 160 time units. Each takes 10 time units to restore. Furthermore, assume that components do not age when the system is down.

A deterministic system view is shown in the figure below. The sequence of events is as follows:

  1. At 100, A fails and is repaired by 110. The system is failed.
  2. At 130, B fails and is repaired by 140. The system continues to operate.
  3. At 150, C fails and is repaired by 160. The system continues to operate.
  4. At 170, D fails and is repaired by 180. The system is failed.
  5. At 220, A fails and is repaired by 230. The system is failed.
  6. At 280, B fails and is repaired by 290. The system continues to operate.
  7. End at 300.
Overview of simple redundant system with four components.
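The event sequence can be reproduced with a small next-event loop. The sketch below assumes the configuration implied by the results discussed later in this chapter (A and D reliability-wise in series with the parallel pair B and C); repairs proceed in clock time, while components age only when the system is up.

```python
# Next-event sketch of the deterministic redundant example:
# A (fails every 100) and D (every 160) in series with the parallel
# pair B (every 120) / C (every 140); 10-unit repairs; components do
# not age while the system is down.
TTF = {"A": 100, "B": 120, "C": 140, "D": 160}
REPAIR, END = 10, 300

def system_up(up):
    return up["A"] and up["D"] and (up["B"] or up["C"])

def simulate():
    clock = 0.0
    age = {k: 0.0 for k in TTF}
    up = {k: True for k in TTF}
    repair_done, events = {}, []
    while True:
        cand = []
        if system_up(up):                      # failures only while running
            cand += [(clock + TTF[k] - age[k], "fail", k)
                     for k in TTF if up[k]]
        cand += [(repair_done[k], "repair", k) for k in repair_done]
        t, kind, k = min(cand)
        if t >= END:
            break
        if system_up(up):                      # age only while the system ran
            for j in TTF:
                if up[j]:
                    age[j] += t - clock
        clock = t
        if kind == "fail":
            up[k] = False
            repair_done[k] = clock + REPAIR
            events.append((clock, k, "fails"))
        else:
            up[k] = True
            age[k] = 0.0
            del repair_done[k]
            events.append((clock, k, "repaired"))
    return events

for ev in simulate():
    print(ev)
```

The printed failure events land at 100 (A), 130 (B), 150 (C), 170 (D), 220 (A) and 280 (B), matching the numbered sequence above.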

Additional Notes

It should be noted that we are dealing with these events deterministically in order to better illustrate the methodology. When dealing with deterministic events, it is possible to create a sequence of events that one would not expect to encounter probabilistically. One such example consists of two units in series that do not operate through failure but both fail at exactly 100, which is highly unlikely in a real-world scenario. In this case, the assumption is that one of the events must occur at least an infinitesimal amount of time before the other. Probabilistically, this event is extremely rare, since both randomly generated times would have to be exactly equal to each other, to 15 decimal points. In the rare event that this happens, BlockSim would pick the unit with the lowest ID value as the first failure. BlockSim assigns a unique numerical ID when each component is created. These can be viewed by selecting the Show Block ID option in the Diagram Options window.

Deterministic Views of More Complex Systems

Even though the examples presented are fairly simplistic, the same approach can be repeated for larger and more complex systems. The reader can easily observe/visualize the behavior of more complex systems in BlockSim using the Up/Down plots. These are the same plots used in this chapter. It should be noted that BlockSim makes these plots available only when a single simulation run has been performed for the analysis (i.e., Number of Simulations = 1). These plots are meaningless when doing multiple simulations because each run will yield a different plot.

Probabilistic View, Simple Series

In a probabilistic case, the failures and repairs do not happen at a fixed time and for a fixed duration, but rather occur randomly and based on an underlying distribution, as shown in the following figures.

A single component with a probabilistic failure time and repair duration.
A system up/down plot illustrating a probabilistic failure time and repair duration for component B.

We use discrete event simulation in order to analyze (understand) the system behavior. Discrete event simulation looks at each system/component event very similarly to the way we looked at these events in the deterministic example. However, instead of using deterministic (fixed) times for each event occurrence or duration, random times are used. These random times are obtained from the underlying distribution for each event. As an example, consider an event following a 2-parameter Weibull distribution. The cdf of the 2-parameter Weibull distribution is given by:

F(T) = 1 − e^(−(T/η)^β)

The Weibull reliability function is given by:

R(T) = e^(−(T/η)^β)

Then, to generate a random time from a Weibull distribution with a given β and η, a uniform random number from 0 to 1, U_R, is first obtained. The random time from a Weibull distribution is then obtained from:

T_R = η(−ln(U_R))^(1/β)
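This inverse-cdf draw is a one-liner in practice. The sketch below uses illustrative parameters (β = 1.5, η = 100) and checks the sample mean against the Weibull mean, η·Γ(1 + 1/β):

```python
import math
import random

# Inverse-cdf draw from a 2-parameter Weibull: with U uniform on (0, 1],
# T = eta * (-ln U) ** (1 / beta) satisfies R(T) = U.
def weibull_draw(beta, eta, rng=random):
    u = 1.0 - rng.random()   # avoid u == 0, which would break log()
    return eta * (-math.log(u)) ** (1.0 / beta)

rng = random.Random(1)
samples = [weibull_draw(1.5, 100.0, rng) for _ in range(20000)]
# the sample mean should approach eta * Gamma(1 + 1/beta), about 90.3 here
print(sum(samples) / len(samples), 100.0 * math.gamma(1 + 1 / 1.5))
```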

To obtain a conditional time, the Weibull conditional reliability function is given by:

R(T|t) = R(t + T)/R(t) = e^(−((t + T)/η)^β) / e^(−(t/η)^β)

Or:

R(T|t) = e^(−[((t + T)/η)^β − (t/η)^β])

The conditional random time would be the solution for T in R(T|t) = U_R.
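Solving R(T|t) = U_R for T gives a closed form, sketched below with illustrative parameters. For β > 1 (wear-out), the mean remaining life at a nonzero age should come out shorter than the unconditional mean life:

```python
import math
import random

# Conditional Weibull draw: remaining life T for a unit that has already
# accumulated age t, solving R(T|t) = U for T:
#   T = eta * ((t/eta)**beta - ln U)**(1/beta) - t
def weibull_conditional_draw(beta, eta, t, rng=random):
    u = 1.0 - rng.random()   # uniform on (0, 1]
    return eta * ((t / eta) ** beta - math.log(u)) ** (1.0 / beta) - t

rng = random.Random(7)
# illustrative: beta = 1.5 (wear-out), eta = 100, current age 80
rem = [weibull_conditional_draw(1.5, 100.0, 80.0, rng) for _ in range(20000)]
print(sum(rem) / len(rem))  # mean remaining life at age 80
```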

To illustrate the sequence of events, assume a single block with a failure and a repair distribution. The first event, E1, would be the failure of the component. Its first time-to-failure would be a random number drawn from its failure distribution, t_F1. Thus, the first failure event, E1, would occur at time t_F1. Once failed, the next event would be the repair of the component, E2. The time to repair the component would now be drawn from its repair distribution, t_R1. The component would be restored by time t_F1 + t_R1. The next event would now be the second failure of the component after the repair, E3. This event would occur after a component operating time of t_F2 after the item is restored (again drawn from the failure distribution), or at t_F1 + t_R1 + t_F2. This process is repeated until the end time. It is important to note that each run will yield a different sequence of events due to the probabilistic nature of the times. To arrive at the desired result, this process is repeated many times and the results from each run (simulation) are recorded. In other words, if we were to repeat this 1,000 times, we would obtain 1,000 different values for t_F1. The average of these values would then be the average time to the first event, E1, or the mean time to first failure (MTTFF) for the component. Obviously, if the component were to be 100% renewed after each repair, then this value would also be the same for the second failure, etc.
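The repeated-run idea can be sketched as follows. This is an illustration, not BlockSim's engine: one block with a Weibull failure distribution (β and η chosen arbitrarily) and a fixed 10-hour repair, simulated many times; the average first failure time estimates the MTTFF.

```python
import math
import random

# Illustrative parameters (not from the chapter's example):
BETA, ETA, REPAIR, END = 1.5, 1000.0, 10.0, 5000.0

def one_run(rng):
    clock, first_failure = 0.0, None
    while clock < END:
        # draw the next time-to-failure from the Weibull distribution
        ttf = ETA * (-math.log(1.0 - rng.random())) ** (1.0 / BETA)
        clock += ttf                 # failure event
        if clock >= END:
            break
        if first_failure is None:
            first_failure = clock
        clock += REPAIR              # repair event restores the block
    return first_failure

rng = random.Random(42)
firsts = [t for t in (one_run(rng) for _ in range(5000)) if t is not None]
mttff = sum(firsts) / len(firsts)
print(round(mttff, 1))  # near eta * Gamma(1 + 1/beta), about 903 here
```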

General Simulation Results

To further illustrate this, assume that components A and B in the prior example had normal failure and repair distributions, with means equal to the deterministic values used in the prior example and standard deviations of 10 and 1, respectively. That is, the times-to-failure of A and B are normally distributed with means of 100 and 120 and a standard deviation of 10, while the times-to-repair of both are normally distributed with a mean of 10 and a standard deviation of 1. The settings for components C and D are not changed. Obviously, given the probabilistic nature of the example, the times to each event will vary. If one were to repeat this simulation a number of times, one would arrive at the results of interest for the system and its components. Some of the results for this system and this example, over 1,000 simulations, are provided in the figure below and explained in the next sections.

Summary of system results for 1,000 simulations.

The simulation settings are shown in the figure below.

BlockSim simulation window.

General

Mean Availability (All Events)

This is the mean availability due to all downing events, which can be thought of as the operational availability. It is the ratio of the system uptime divided by the total simulation time (total time). For this example:

Mean Availability (All Events) = Uptime/Total Time = 269.137/300 = 0.8971

Std Deviation (Mean Availability)

This is the standard deviation of the mean availability of all downing events for the system during the simulation.

Mean Availability (w/o PM, OC & Inspection)

This is the mean availability due to failure events only, and it is 0.8971 for this example. Note that for this case, the mean availability without preventive maintenance, on condition maintenance and inspection is identical to the mean availability for all events. This is because no preventive maintenance actions or inspections were defined for this system. We will discuss the inclusion of these actions in later sections.

Downtimes caused by PM and inspections are not included. However, if the PM or inspection action results in the discovery of a failure, then these times are included. As an example, consider a component that has failed but its failure is not discovered until the component is inspected. Then the downtime from the time failed to the time restored after the inspection is counted as failure downtime, since the original event that caused this was the component's failure.

Point Availability (All Events)

This is the probability that the system is up at time t. As an example, to obtain this value at t = 300, a special counter would need to be used during the simulation. This counter is increased by one every time the system is up at 300 hours. Thus, the point availability at 300 would be the number of times the system was up at 300 divided by the number of simulations. For this example, this is 0.930; that is, 930 times out of the 1,000 simulations the system was up at 300 hours.

Reliability (Fail Events)

This is the probability that the system has not failed by time t. This is similar to point availability, with the major exception that it only looks at the probability that the system did not have a single failure. Other (non-failure) downing events are ignored. During the simulation, a special counter again must be used. This counter is increased by one (once in each simulation) if the system has had at least one failure up to 300 hours. Thus, the reliability at 300 would be the number of times the system did not fail up to 300 divided by the number of simulations. For this example, this is 0 because the system failed prior to 300 hours in all 1,000 simulations.

It is very important to note that this value is not always the same as the reliability computed using the analytical methods, depending on the redundancy present. The reason that it may differ is best explained by the following scenario:

Assume two units in parallel. The analytical system reliability, which does not account for repairs, is the probability that both units fail. In this case, when one unit goes down, it does not get repaired and the system fails after the second unit fails. In the case of repairs, however, it is possible for one of the two units to fail and get repaired before the second unit fails. Thus, when the second unit fails, the system will still be up due to the fact that the first unit was repaired.
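The two special counters described above can be sketched together. The example below is illustrative, not the chapter's system: a single block with exponential failures (MTTF of 100) and fixed 10-hour repairs; over many runs it counts whether the system is up at t = 300 (point availability) and whether it has failed at least once by t = 300 (reliability).

```python
import random

# Illustrative single-block system, not the chapter's example:
MTTF, REPAIR, T, N = 100.0, 10.0, 300.0, 2000

def up_at_and_failed_by(rng):
    """Return (up at time T?, failed at least once by T?) for one run."""
    clock, failed = 0.0, False
    while True:
        clock += rng.expovariate(1.0 / MTTF)  # next failure time
        if clock >= T:
            return True, failed               # still up at T
        failed = True
        if clock + REPAIR > T:
            return False, True                # under repair at T
        clock += REPAIR                       # restored, keep going

rng = random.Random(0)
up_count = fail_count = 0
for _ in range(N):
    up, failed = up_at_and_failed_by(rng)
    up_count += up
    fail_count += failed
# point availability and reliability at T, estimated by the counters
print(up_count / N, 1 - fail_count / N)
```

With these parameters the point availability settles near MTTF/(MTTF + repair) ≈ 0.91, while the reliability at 300 is small (roughly e^−3), showing how sharply the two metrics can differ.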

Expected Number of Failures

This is the average number of system failures. The system failures (not downing events) for all simulations are counted and then averaged. For this case, this is 3.188, which implies that a total of 3,188 system failure events occurred over 1000 simulations. Thus, the expected number of system failures for one run is 3.188. This number includes all failures, even those that may have a duration of zero.

Std Deviation (Number of Failures)

This is the standard deviation of the number of failures for the system during the simulation.

MTTFF

MTTFF is the mean time to first failure for the system. This is computed by keeping track of the time at which the first system failure occurred for each simulation. MTTFF is then the average of these times. This may or may not be identical to the MTTF obtained in the analytical solution, for the same reasons as those discussed in the Reliability (Fail Events) section. For this case, it is 100.2511, which is to be expected, since the mean time-to-failure of one of the components in series was 100 hours.

It is important to note that for each simulation run, if a first failure time is observed, then this is recorded as the system time to first failure. If no failure is observed in the system, then the simulation end time is used as a right censored (suspended) data point. MTTFF is then computed as the total operating time until the first failure divided by the number of observed failures (a constant failure rate assumption). Furthermore, if the simulation end time is much less than the time to first failure for the system, it is also possible that all data points are right censored (i.e., no system failures were observed). In this case, the MTTFF is again computed using the constant failure rate assumption, with the accumulated time equal to the simulation end time multiplied by the number of simulations. One should be aware that this formulation may yield unrealistic (or erroneous) results if the system does not have a constant failure rate. If you are trying to obtain an accurate (realistic) estimate of this value, then your simulation end time should be set to a value that is well beyond the MTTF of the system (as computed analytically). As a general rule, the simulation end time should be at least three times larger than the MTTF of the system.
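The censored computation described above can be sketched as follows (the run values are hypothetical): each run contributes either its first failure time or, if no failure occurred, the simulation end time as a right-censored observation, and MTTFF is total time divided by the number of observed failures.

```python
# Censored MTTFF estimate under the constant-failure-rate assumption.
def mttff_from_runs(first_failures, end_time):
    """first_failures: one entry per run; None means no failure observed."""
    total_time, n_failures = 0.0, 0
    for t in first_failures:
        if t is None:                 # censored at the simulation end time
            total_time += end_time
        else:
            total_time += t
            n_failures += 1
    return total_time / n_failures if n_failures else float("inf")

# four runs with failures, one censored run (hypothetical values):
print(mttff_from_runs([120.0, 95.0, None, 150.0, 85.0], 300.0))
# (120 + 95 + 300 + 150 + 85) / 4 = 187.5
```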

MTBF (Total Time)

This is the mean time between failures for the system based on the total simulation time and the expected number of system failures. For this example:

MTBF (Total Time) = 300/3.188 = 94.1029

MTBF (Uptime)

This is the mean time between failures for the system, considering only the time that the system was up. This is calculated by dividing system uptime by the expected number of system failures. You can also think of this as the mean uptime. For this example:

MTBF (Uptime) = 269.137/3.188 = 84.4219

MTBE (Total Time)

This is the mean time between all downing events for the system, based on the total simulation time and including all system downing events. This is calculated by dividing the simulation run time by the number of system downing events.

MTBE (Uptime)

This is the mean time between all downing events for the system, considering only the time that the system was up. This is calculated by dividing system uptime by the number of system downing events.

System Uptime/Downtime

Uptime

This is the average time the system was up and operating. This is obtained by taking the sum of the uptimes for each simulation and dividing it by the number of simulations. For this example, the uptime is 269.137. The Operational Availability for this system is then:

A_O = Uptime/Total Time = 269.137/300 = 0.8971

CM Downtime

This is the average time the system was down for corrective maintenance (CM) actions only. This is obtained by taking the sum of the CM downtimes for each simulation and dividing it by the number of simulations. For this example, this is 30.863. The Inherent Availability for this system over the observed time (which may or may not be steady state, depending on the length of the simulation) is then:

A_I = Uptime/(Uptime + CM Downtime) = 269.137/(269.137 + 30.863) = 0.8971

Inspection Downtime

This is the average time the system was down due to inspections. This is obtained by taking the sum of the inspection downtimes for each simulation and dividing it by the number of simulations. For this example, this is zero because no inspections were defined.

PM Downtime

This is the average time the system was down due to preventive maintenance (PM) actions. This is obtained by taking the sum of the PM downtimes for each simulation and dividing it by the number of simulations. For this example, this is zero because no PM actions were defined.

OC Downtime

This is the average time the system was down due to on-condition maintenance (OC) actions. This is obtained by taking the sum of the OC downtimes for each simulation and dividing it by the number of simulations. For this example, this is zero because no OC actions were defined.

Waiting Downtime

This is the amount of time that the system was down due to crew and spare part wait times or crew conflict times. For this example, this is zero because no crews or spare part pools were defined.

Total Downtime

This is the downtime due to all events. In general, one may look at this as the sum of the above downtimes. However, this is not always the case. It is possible to have actions that overlap each other, depending on the options and settings for the simulation. Furthermore, there are other events that can cause the system to go down that do not get counted in any of the above categories. As an example, in the case of standby redundancy with a switch delay, if the settings are to reactivate the failed component after repair, the system may be down during the switch-back action. This downtime does not fall into any of the above categories but it is counted in the total downtime.

For this example, this is identical to the CM downtime.

System Downing Events

System downing events are events associated with downtime. Note that events with zero duration will appear in this section only if the task properties specify that the task brings the system down or if the task properties specify that the task brings the item down and the item’s failure brings the system down.

Number of Failures

This is the average number of system downing failures. Unlike the Expected Number of Failures, this number does not include failures with zero duration. For this example, this is 3.188.

Number of CMs

This is the number of corrective maintenance actions that caused the system to fail. It is obtained by taking the sum of all CM actions that caused the system to fail divided by the number of simulations. It does not include CM events of zero duration. For this example, this is 3.188. Note that this may differ from the Number of Failures. An example would be a case where the system has failed but, due to other settings for the simulation, a CM is not initiated (e.g., an inspection is needed to initiate a CM).

Number of Inspections

This is the number of inspection actions that caused the system to fail. It is obtained by taking the sum of all inspection actions that caused the system to fail divided by the number of simulations. It does not include inspection events of zero duration. For this example, this is zero.

Number of PMs

This is the number of PM actions that caused the system to fail. It is obtained by taking the sum of all PM actions that caused the system to fail divided by the number of simulations. It does not include PM events of zero duration. For this example, this is zero.

Number of OCs

This is the number of OC actions that caused the system to fail. It is obtained by taking the sum of all OC actions that caused the system to fail divided by the number of simulations. It does not include OC events of zero duration. For this example, this is zero.

Number of OFF Events by Trigger

This is the total number of events where the system is turned off by state change triggers. An OFF event is not a system failure but it may be included in system reliability calculations. For this example, this is zero.

Total Events

This is the total number of system downing events. It also does not include events of zero duration. It is possible that this number may differ from the sum of the other listed events. As an example, consider the case where a failure does not get repaired until an inspection, but the inspection occurs after the simulation end time. In this case, the number of inspections, CMs and PMs will be zero while the number of total events will be one.

Costs and Throughput

Cost and throughput results are discussed in later sections.

Note About Overlapping Downing Events

It is important to note that two identical system downing events (that are continuous or overlapping) may be counted and viewed differently. As shown in Case 1 of the following figure, two overlapping failure events are counted as only one event from the system perspective because the system was never restored and remained in the same down state, even though that state was caused by two different components. Thus, the number of downing events in this case is one and the duration is as shown in CM system. In the case that the events are different, as shown in Case 2 of the figure below, two events are counted, the CM and the PM. However, the downtime attributed to each event is different from the actual time of each event. In this case, the system was first down due to a CM and remained in a down state due to the CM until that action was over. However, immediately upon completion of that action, the system remained down but now due to a PM action. In this case, only the PM action portion that kept the system down is counted.

Duration and count of different overlapping events.
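The overlap rule can be sketched as an interval-merging step. This is an illustration of the counting logic only (the interval data are hypothetical): overlapping or contiguous down intervals collapse into one system downing event, credited to the first cause.

```python
# Merge overlapping/contiguous down intervals so the system-level count
# reflects one continuous down state; intervals are (start, end, cause)
# tuples (hypothetical data).
def system_down_events(intervals):
    merged = []
    for start, end, cause in sorted(intervals):
        if merged and start <= merged[-1][1]:        # overlaps or touches
            merged[-1][1] = max(merged[-1][1], end)  # extend, keep first cause
        else:
            merged.append([start, end, cause])
    return [(s, e, c) for s, e, c in merged]

# Case 1: two overlapping CM events count as one system downing event
print(system_down_events([(100, 120, "CM-A"), (110, 130, "CM-B")]))
# -> [(100, 130, 'CM-A')]
```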

System Point Results

The system point results, as shown in the figure below, show the Point Availability (All Events) and Point Reliability, as defined in the previous section. These are computed and returned at different points in time, based on the number of intervals selected by the user. Additionally, this window shows the other system results defined in the previous sections at each point in time.

The number of intervals shown is based on the increments set. In this figure, the number of increments set was 300, which implies that the results should be shown every hour. The results shown in this figure are for 10 increments, or shown every 30 hours.

Results by Component

Simulation results for each component can also be viewed. The figure below shows the results for component A. These results are explained in the sections that follow.

The Block Details results for component A.

General Information

Number of Block Downing Events

This is the number of times the component went down (failed). It includes all downing events.

Number of System Downing Events

This is the number of times that this component's downing caused the system to be down. For component A, this is 2.038. Note that in this case this value is the same as the number of component failures, since component A is reliability-wise in series with component D and the parallel pair B and C. If this were not the case (e.g., if it were in a parallel configuration, like B and C), this value would be different.

Number of Failures

This is the number of times the component failed and does not include other downing events. Note that this could also be interpreted as the number of spare parts required for CM actions for this component. For component A, this is 2.038.

Number of System Downing Failures

This is the number of times that this component's failure caused the system to be down. Note that this may be different from the Number of System Downing Events. It only counts the failure events that downed the system and does not include zero duration system failures.

Number of OFF Events by Trigger

The total number of events where the block is turned off by state change triggers. An OFF event is not a failure but it may be included in system reliability calculations.

Mean Availability (All Events)

This has the same definition as for the system with the exception that this accounts only for the component.

Mean Availability (w/o PM, OC & Inspection)

The mean availability of all downing events for the block, not including preventive, on condition or inspection tasks, during the simulation.

Block Uptime

This is the total amount of time that the block was up (i.e., operational) during the simulation. For component A, this is 279.8212.

Block Downtime

This is the total amount of time that the block was down (i.e., not operational) for any reason during the simulation. For component A, this is 20.1788.

Metrics

RS DECI

The ReliaSoft Downing Event Criticality Index for the block. This is a relative index showing the percentage of times that a downing event of the block caused the system to go down (i.e., the number of system downing events caused by the block divided by the total number of system downing events). For component A, this is 63.93%. This implies that 63.93% of the times that the system went down, the system failure was due to the fact that component A went down. This is obtained from:

RS DECI = 2.038/3.188 = 63.93%

Mean Time Between Downing Events

This is the mean time between downing events of the component, which is computed by dividing the block uptime by the number of block downing events. For component A, this is 279.8212/2.038 = 137.3019.

RS FCI

ReliaSoft's Failure Criticality Index (RS FCI) is a relative index showing the percentage of times that a failure of this component caused a system failure. For component A, this is 63.93%. This implies that 63.93% of the times that the system failed, it was due to the fact that component A failed. It is obtained by dividing the number of system failures caused by the component by the total number of system failures.

Note that the software also maintains a special counter of zero duration system failures, which is not explicitly shown in the results. The reason for this counter is the fact that zero duration failures are not counted as system downing failures, since they really did not down the system. However, these zero duration failures need to be included when computing RS FCI.

It is important to note that for both RS DECI and RS FCI, and if overlapping events are present, the component that caused the system event gets credited with the system event. Subsequent component events that do not bring the system down (since the system is already down) do not get counted in this metric.

MTBF

Mean time between failures is the mean (average) time between failures of this component, in real clock time. It is computed by dividing the block uptime (the total time minus the downtime due to failures only, without PM, OC and inspection) by the number of failures. The discussion regarding what constitutes failure downtime that was presented in the section explaining Mean Availability (w/o PM, OC & Inspection) also applies here. For component A, this is 137.3019. Note that this value could fluctuate for the same component depending on the simulation end time. As an example, consider the deterministic scenario for this component. It fails every 100 hours and takes 10 hours to repair. Thus, it would fail at 100, be repaired by 110, fail at 210 and be repaired by 220. Therefore, its uptime is 280 with two failure events, and MTBF = 280/2 = 140. Repeating the same scenario with an end time of 330 would yield failures at 100, 210 and 320. Thus, the uptime would be 300 with three failures, or MTBF = 300/3 = 100. Note that this is not the same as the MTTF (mean time to failure), commonly referred to as MTBF by many practitioners.
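The end-time sensitivity described above is easy to verify with the deterministic numbers from the example:

```python
# Deterministic worked example: a block that fails every 100 hours and
# takes 10 hours to repair, simulated to two different end times.
def block_mtbf(ttf=100, repair=10, end=300):
    clock, uptime, failures = 0, 0, 0
    while clock + ttf < end:
        clock += ttf                 # block operates, then fails
        uptime += ttf
        failures += 1
        clock += repair              # block restored
    uptime += max(0, end - clock)    # residual operating time at the end
    return uptime / failures

print(block_mtbf(end=300))  # failures at 100, 210 -> 280 / 2 = 140
print(block_mtbf(end=330))  # failures at 100, 210, 320 -> 300 / 3 = 100
```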

Mean Downtime per Event

Mean downtime per event is the average downtime for a component event. It is computed by dividing the block downtime by the number of block downing events. For component A, this is 20.1788/2.038 = 9.9013.

RS DTCI

The ReliaSoft Downtime Criticality Index for the block. This is a relative index showing the contribution of the block to the system’s downtime (i.e., the system downtime caused by the block divided by the total system downtime).

RS BCCI

The ReliaSoft Block Cost Criticality Index for the block. This is a relative index showing the contribution of the block to the total costs (i.e., the total block costs divided by the total costs).

Non-Waiting Time CI

A relative index showing the contribution of repair times to the block’s total downtime. (The ratio of the time that the crew is actively working on the item to the total down time).

Total Waiting Time CI

A relative index showing the contribution of wait factor times to the block's total downtime. Wait factors include crew conflict times, crew wait times and spare part wait times. (This is the ratio of the downtime excluding active repair time to the total downtime.)

Waiting for Opportunity/Maximum Wait Time Ratio

A relative index showing the contribution of crew conflict times. This is the ratio of the time spent waiting for the crew to respond (not including crew logistic delays) to the total wait time (not including the active repair time).

Crew/Part Wait Ratio

The ratio of the crew and part delays. A value of 100% means that both waits are equal. A value greater than 100% indicates that the crew delay was in excess of the part delay. For example, a value of 200% would indicate that the wait for the crew is two times greater than the wait for the part.

Part/Crew Wait Ratio

The ratio of the part and crew delays. A value of 100% means that both waits are equal. A value greater than 100% indicates that the part delay was in excess of the crew delay. For example, a value of 200% would indicate that the wait for the part is two times greater than the wait for the crew.

Downtime Summary

Non-Waiting Time

Time that the block was undergoing active maintenance/inspection by a crew. If no crew is defined, then this will return zero.

Waiting for Opportunity

The total downtime for the block due to crew conflicts (i.e., time spent waiting for a crew while the crew is busy with another task). If no crew is defined, then this will return zero.

Waiting for Crew

The total downtime for the block due to crew wait times (i.e., time spent waiting for a crew due to logistical delay). If no crew is defined, then this will return zero.

Waiting for Parts

The total downtime for the block due to spare part wait times. If no spare part pool is defined then this will return zero.

Other Results of Interest

The remaining component (block) results are similar to those defined for the system with the exception that now they apply only to the component.

Imperfect Repairs

Restoration Factors (RF)

In the prior discussion it was assumed that a repaired component is as good as new after repair. This is usually the case when replacing a component with a new one. The concept of a restoration factor may be used in cases in which one wants to model imperfect repair, or a repair with a used component. The best way to indicate that a component is not as good as new is to give the component some age. As an example, if one is dealing with car tires, a tire that is not as good as new would have some pre-existing wear on it. In other words, the tire would have some accumulated mileage. A restoration factor concept is used to better describe the existing age of a component. The restoration factor is used to determine the age of the component after a repair or any other maintenance action (addressed in later sections, such as a PM action or inspection).

The restoration factor in BlockSim is defined as a number between 0 and 1 and has the following effect:

  1. A restoration factor of 1 (100%) implies that the component is as good as new after repair, which in effect implies that the starting age of the component is 0.
  2. A restoration factor of 0 implies that the component is the same as it was prior to repair, which in effect implies that the starting age of the component is the same as the age of the component at failure.
  3. A restoration factor of 0.25 (25%) implies that the starting age of the component is equal to 75% of the age of the component at failure.
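
The three cases above can be captured in a single expression. This is an illustrative sketch (the function name is not a BlockSim API):

```python
def age_after_repair(age_at_failure, rf):
    # rf = 1 -> as good as new (age 0); rf = 0 -> same age as at failure;
    # rf = 0.25 -> starting age is 75% of the age at failure.
    return (1.0 - rf) * age_at_failure

print(age_after_repair(500, 1.0))    # 0.0
print(age_after_repair(500, 0.0))    # 500.0
print(age_after_repair(500, 0.25))   # 375.0
```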

The figure below provides a visual demonstration of restoration factors. It should be noted that for successive maintenance actions on the same component, the age of the component after such an action is the initial age plus the time to failure since the last maintenance action.

Different restoration factors (RF).

Type I and Type II RFs

BlockSim offers two kinds of restoration factors. The type I restoration factor is based on Kijima [12, 13] model I and assumes that the repairs can only fix the wear-out and damage incurred during the last period of operation. Thus, the nth repair can only remove the damage incurred during the time between the (n-1)th and nth failures. The type II restoration factor, based on Kijima model II, assumes that the repairs fix all of the wear-out and damage accumulated up to the current time. As a result, the nth repair not only removes the damage incurred during the time between the (n-1)th and nth failures, but can also fix the cumulative damage incurred during the time from the first failure to the (n-1)th failure.

A Repairable System Structure

To illustrate this, consider a repairable system, observed from time t = 0, as shown in the figure above. Let the successive failure times be denoted by t_1, t_2, ... and let the times between failures be denoted by x_1, x_2, ..., where x_i = t_i - t_(i-1). Let RF denote the restoration factor and let v_i denote the age of the system immediately after the ith repair (with v_0 the starting age). Then the age of the system using the two types of restoration factors is:

Type I Restoration Factor:

v_i = v_(i-1) + (1 - RF)·x_i

Type II Restoration Factor:

v_i = (1 - RF)·(v_(i-1) + x_i)
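
The two restoration rules can be sketched as virtual-age recursions in code. The function names are illustrative, not part of BlockSim; the values reproduce the worked examples that follow:

```python
def virtual_age_type1(intervals, rf, v0=0.0):
    # Kijima model I: each repair removes only the damage incurred
    # during the last period of operation.
    v = v0
    for x in intervals:
        v += (1.0 - rf) * x
    return v

def virtual_age_type2(intervals, rf, v0=0.0):
    # Kijima model II: each repair removes a fraction of all the damage
    # accumulated up to the current time.
    v = v0
    for x in intervals:
        v = (1.0 - rf) * (v + x)
    return v

# RF = 0.25 and operating intervals of 500 and 126.024 hours:
print(virtual_age_type1([500.0, 126.024], 0.25))  # ~469.518
print(virtual_age_type2([500.0, 126.024], 0.25))  # ~375.768
```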

Illustrating Type I RF Through an Example

Assume that you have a component with a Weibull failure distribution (β = 1.5, η = 1000 hours), RF type I = 0.25 and the component undergoes instant repair. Furthermore, assume that the component starts life new (i.e., with a start age of zero). The simulation steps are as follows:

  1. Generate a uniform random number, U1 = 0.7022 (the reliability value corresponding to a failure time of 500 hours).
  2. Then, the first failure event will be at 500 hours.
  3. After instantaneous repair, the component will begin life with an age after repair of 375 hours ((1 - 0.25) x 500).
  4. Generate another uniform random number, U2 = 0.8824969.
  5. The next failure event is now determined using the conditional reliability equation, or:

R(t | T) = R(T + t) / R(T)

which, for T = 375, gives:

0.8824969 = e^(-((375 + t)/1000)^1.5) / e^(-(375/1000)^1.5)

Solving for t yields t = 126.024 hours.

Thus, the next failure event will be at 500 + 126.024 = 626.024 hours. Note that if the component had been as good as new (i.e., RF = 100%), then the next failure would have been at 750 hours (500 + 250), where 250 hours is the time corresponding to a reliability of 0.8824969, which is the random number that was generated in Step 4.
  6. At this failure point, the item's age will now be equal to the initial age after the first corrective action plus the additional time it operated, or 375 + 126.024 = 501.024 hours.
  7. Thus, the age after the second repair will be the sum of the previous age and (1 - RF) times the operating time since the last repair, or 375 + ((1 - 0.25) x 126.024) = 469.518 hours.
  8. Go to Step 4 and repeat the process.
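
The steps above can be sketched as a short simulation. This is a minimal illustration, not BlockSim's implementation; the first random draw is taken as e^(-(500/1000)^1.5) ≈ 0.7022 so that the first failure lands exactly at 500 hours:

```python
import math

BETA, ETA, RF = 1.5, 1000.0, 0.25   # Weibull shape, scale and type I RF

def time_to_failure(v, u):
    # Solve R(v + x) / R(v) = u for x, where R(t) = exp(-(t/ETA)**BETA).
    return ETA * ((v / ETA) ** BETA - math.log(u)) ** (1.0 / BETA) - v

draws = [math.exp(-0.5 ** 1.5), 0.8824969]   # ~0.7022, then the Step 4 draw
v = clock = 0.0                              # virtual age and system clock
for u in draws:
    x = time_to_failure(v, u)   # operating time until the next failure
    clock += x                  # failure times: 500, then ~626.024 hours
    v += (1.0 - RF) * x         # type I repair: ages 375, then ~469.518 hours

print(round(clock, 3), round(v, 3))   # approximately 626.024 and 469.518
```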

Illustrating Type II RF Through an Example

Assume that you have a component with a Weibull failure distribution (β = 1.5, η = 1000 hours), RF type II = 0.25 and the component undergoes instant repair. Furthermore, assume that the component starts life new (i.e., with a start age of zero). The simulation steps are as follows:

  1. Generate a uniform random number, U1 = 0.7022 (the reliability value corresponding to a failure time of 500 hours).
  2. Then, the first failure event will be at 500 hours.
  3. After instantaneous repair, the component will begin life with an age after repair of 375 hours ((1 - 0.25) x 500).
  4. Generate another uniform random number, U2 = 0.8824969.
  5. The next failure event is now determined using the conditional reliability equation, or:

R(t | T) = R(T + t) / R(T)

which, for T = 375, gives:

0.8824969 = e^(-((375 + t)/1000)^1.5) / e^(-(375/1000)^1.5)

Solving for t yields t = 126.024 hours.

Thus, the next failure event will be at 500 + 126.024 = 626.024 hours. Note that if the component had been as good as new (i.e., RF = 100%), then the next failure would have been at 750 hours (500 + 250), where 250 hours is the time corresponding to a reliability of 0.8824969, which is the random number that was generated in Step 4.
  6. At this failure point, the item's age will now be equal to the initial age after the first corrective action plus the additional time it operated, or 375 + 126.024 = 501.024 hours.
  7. Thus, the age after the second repair will be (1 - RF) times the age of the component at failure, or (1 - 0.25) x (375 + 126.024) = 375.768 hours.
  8. Go to Step 4 and repeat the process.
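
The type II steps can be sketched the same way (again illustrative code, not BlockSim's implementation); the only difference is that the age update now applies to the component's entire age at failure:

```python
import math

BETA, ETA, RF = 1.5, 1000.0, 0.25   # Weibull shape, scale and type II RF

def time_to_failure(v, u):
    # Solve R(v + x) / R(v) = u for x, where R(t) = exp(-(t/ETA)**BETA).
    return ETA * ((v / ETA) ** BETA - math.log(u)) ** (1.0 / BETA) - v

draws = [math.exp(-0.5 ** 1.5), 0.8824969]   # ~0.7022, then the Step 4 draw
v = clock = 0.0
for u in draws:
    x = time_to_failure(v, u)
    clock += x                   # failure times: 500, then ~626.024 hours
    v = (1.0 - RF) * (v + x)     # type II repair: fraction of ALL accumulated age

print(round(clock, 3), round(v, 3))   # approximately 626.024 and 375.768
```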

Discussion of Type I and Type II RFs

As an application example, consider an automotive engine that fails after six years of operation. The engine is rebuilt. The rebuild has the effect of rejuvenating the engine to a condition as if it were three years old (i.e., a 50% RF). Assume that the rebuild affects all of the damage on the engine (i.e., a Type II restoration). The engine fails again after three years (when it again reaches an age of six) and another rebuild is required. This rebuild will also rejuvenate the engine by 50%, thus making it three years old again.

Now consider a similar engine subjected to a similar rebuild, but that the rebuild only affects the damage since the last repair (i.e., a Type I restoration of 50%). The first rebuild will rejuvenate the engine to a three-year-old condition. The engine will fail again after three years, but the rebuild this time will only affect the age (of three years) accumulated after the first rebuild. Thus the engine will have an age of four and a half years after the second rebuild (3 + 0.5 x 3 = 4.5). After the second rebuild the engine will fail again after a period of one and a half years and a third rebuild will be required. The age of the engine after the third rebuild will be five years and three months (4.5 + 0.5 x 1.5 = 5.25).
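
The arithmetic in these two scenarios can be checked with a few lines; this is a sketch using the 6-year failure age and 50% RF from the example above:

```python
RF = 0.5          # each rebuild rejuvenates by 50%
FAIL_AGE = 6.0    # the engine fails whenever its age reaches six years

# Type II: the rebuild affects all accumulated damage.
age, history = 0.0, []
for _ in range(3):
    age = (1 - RF) * FAIL_AGE          # age after the rebuild
    history.append(age)
print(history)   # [3.0, 3.0, 3.0] -- always back to three years old

# Type I: the rebuild affects only the damage since the last repair.
age, history = 0.0, []
for _ in range(3):
    operated = FAIL_AGE - age          # time run before reaching age six again
    age = age + (1 - RF) * operated    # age after the rebuild
    history.append(age)
print(history)   # [3.0, 4.5, 5.25] -- 5.25 years = five years and three months
```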

It should be pointed out that when dealing with constant failure rates (i.e., with a distribution such as the exponential), the restoration factor has no effect.
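
This is the memoryless property of the exponential distribution: the conditional reliability does not depend on the starting age, so the age left behind by a repair is irrelevant. A quick check (the rate λ below is an arbitrary illustrative value):

```python
import math

def conditional_reliability(t, age, lam):
    # R(t | age) = R(age + t) / R(age), with R(t) = exp(-lam * t)
    return math.exp(-lam * (age + t)) / math.exp(-lam * age)

lam = 0.001   # illustrative constant failure rate
as_new = conditional_reliability(100, 0, lam)     # repaired to age 0
as_old = conditional_reliability(100, 375, lam)   # repaired to age 375
print(as_new, as_old)   # identical: the starting age does not matter
```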

Calculations to Obtain RFs

The two types of restoration factors discussed in the previous sections can be calculated using the parametric RDA (Recurrent Data Analysis) tool in Weibull++. This tool uses the GRP (General Renewal Process) model to analyze failure data of a repairable item. More information on the Parametric RDA tool and the GRP (General Renewal Process) model can be found in [25]. As an example, consider the times to failure for an air-conditioning unit of an aircraft recorded in the following table. Assume that each time the unit is repaired, the repair can only remove the damage incurred during the last period of operation. This assumption implies a type I RF, which is specified as an analysis setting in the Weibull++ folio. The type I RF for the air-conditioning unit can be calculated using the results from Weibull++ shown in the figure below.

Using the Parametric RDA tool in Weibull++ to calculate restoration factors.

The value of the action effectiveness factor, q, is obtained from the Weibull++ results shown in the figure.

The type I RF is then calculated from q as:

RF = 1 - q

The parameters of the underlying Weibull distribution for the air-conditioning unit can also be calculated. β is obtained from Weibull++ as 1.1976. η can be calculated using the β and λ values from Weibull++ as:

η = (1/λ)^(1/β)

The values of the type I RF, β and η calculated above can now be used to model the air-conditioning unit as a component in BlockSim.
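
Assuming Weibull++ reports the GRP action effectiveness factor q and the power law parameters λ and β (with failure intensity λβt^(β-1)), the two conversions can be sketched as:

```python
def type1_rf(q):
    # BlockSim's RF = 1 means as good as new, while the GRP q = 0 means
    # as good as new, so the two are complements.
    return 1.0 - q

def weibull_eta(lam, beta):
    # With failure intensity u(t) = lam * beta * t**(beta - 1), the scale
    # parameter of the underlying Weibull distribution is (1/lam)**(1/beta).
    return (1.0 / lam) ** (1.0 / beta)

# Hypothetical values for illustration only (not the Weibull++ results above):
print(type1_rf(0.4))             # RF of 0.6 when q = 0.4
print(weibull_eta(0.001, 2.0))   # eta = sqrt(1000), roughly 31.6
```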

Using Resources: Pools and Crews

In order to make the analysis more realistic, one may wish to consider additional sources of delay times in the analysis or study the effect of limited resources. In the prior examples, we used a repair distribution to identify how long it takes to restore a component. The factors that one chooses to consider in this time may include the time it takes to do the repair and/or the time it takes to get a crew, a spare part, etc. While all of these factors may be included in the repair duration, optimized usage of these resources can only be achieved if the resources are studied individually and their dependencies are identified.

As an example, consider the situation where two components in parallel fail at the same time and only a single repair person is available. Because this person would not be able to execute the repair on both components simultaneously, an additional delay will be encountered that also needs to be included in the modeling. One way to accomplish this is to assign a specific repair crew to each component.

Including Crews

BlockSim allows you to assign maintenance crews to each component; one or more crews may be assigned to each component from the Maintenance Task Properties window. Note that a different crew may be used for each action type (i.e., corrective, preventive, on condition and inspection).

A crew record needs to be defined for each named crew, as shown in the picture below. The basic properties for each crew include factors such as:

  • Logistic delays. How long does it take for the crew to arrive?
  • Is there a limit to the number of tasks this crew can perform at the same time? If yes, how many simultaneous tasks can the crew perform?
  • What is the cost per hour for the crew?
  • What is the cost per incident for the crew?

Illustrating Crew Use

To illustrate the use of crews in BlockSim, consider the deterministic scenario described by the following RBD and properties.


Unit Failure Repair Crew
Crew  : Delay = 20, Single Task
Crew  : Delay = 20, Single Task
Crew  : Delay = 20, Single Task
Crew  : Delay = 20, Single Task


As shown in the figure above, the System Up/Down plot illustrates the sequence of events, which are:

  1. At 100, fails. It takes 20 to get the crew and 10 to repair, thus the component is repaired by 130. The system is failed/down during this time.
  2. At 150, fails since it would have accumulated an operating age of 120 by this time. It again has to wait for the crew and is repaired by 190.
  3. At 170, fails. Upon this failure, it requests the only available crew. However, this crew is currently engaged and, since the crew can only perform one task at a time, it cannot respond immediately to the request. Thus, the unit will remain failed until the crew becomes available. The crew will finish its current task at 190 and will then be dispatched to the waiting unit. Upon dispatch, the logistic delay will again be considered and the unit will be repaired by 230. The system continues to operate until the two failures overlap (i.e., the system is down from 170 to 190).
  4. At 210, fails. It again has to wait for the crew and repair.
  5. is up at 260.

The following figure shows an example of some of the possible crew results (details), which are presented next.

Crew results shown in the BlockSim's Simulation Results Explorer.

Explanation of the Crew Details

  1. Each request made to a crew is logged.
  2. If a request is successful (i.e., the crew is available), the call is logged once in the Calls Received counter and once in the Accepted Calls counter.
  3. If a request is not accepted (i.e., the crew is busy), the call is logged once in the Calls Received counter and once in the Rejected Calls counter. When the crew is free and can be called upon again, the call is logged once in the Calls Received counter and once in the Accepted Calls counter.
  4. In this scenario, there were two instances when the crew was not available, Rejected Calls = 2, and there were four instances when the crew performed an action, Calls Accepted = 4, for a total of six calls, Calls Received = 6.
  5. Percent Accepted and Percent Rejected are the ratios of calls accepted and calls rejected with respect to the total calls received.
  6. Total Utilization is the total time that the crew was used. It includes both the time required to complete the repair action and the logistic time. In this case, this is 140.
  7. Average Call Duration is the average duration of each crew usage, and it also includes both logistic and repair time. It is the total usage divided by the number of accepted calls. In this case, this is 140/4 = 35.
  8. Total Wait Time is the time that blocks in need of a repair waited for this crew. In this case, it is 40 (two blocks waited 20 each).
  9. Total Crew Costs are the total costs for this crew. It includes the per incident charge as well as the per unit time costs. In this case, this is 180: there were four incidents at 10 each for a total of 40, plus 140 time units of usage at 1 cost unit per time unit.
  10. Average Cost per Call is the total cost divided by the number of accepted calls. In this case, this is 180/4 = 45.
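
The counters above can be reproduced with a small helper; the function is a hypothetical sketch, not a BlockSim API:

```python
def crew_summary(accepted, rejected, total_usage, cost_per_incident, cost_per_hour):
    # Reproduces the crew metrics from the simulation results above.
    received = accepted + rejected
    total_costs = accepted * cost_per_incident + total_usage * cost_per_hour
    return {
        "Calls Received": received,
        "Percent Accepted": 100.0 * accepted / received,
        "Percent Rejected": 100.0 * rejected / received,
        "Average Call Duration": total_usage / accepted,
        "Total Crew Costs": total_costs,
        "Average Cost per Call": total_costs / accepted,
    }

s = crew_summary(accepted=4, rejected=2, total_usage=140,
                 cost_per_incident=10, cost_per_hour=1)
print(s["Average Call Duration"], s["Total Crew Costs"], s["Average Cost per Call"])
# 35.0 180 45.0
```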

Note that crew costs that are attributed to individual blocks can be obtained from the Blocks reports, as shown in the figure below.

Allocation of crew costs.

How BlockSim Handles Crews

  1. Crew logistic time is added to each repair time.
  2. The logistic time is always present, and the same, regardless of where the crew was called from (i.e., whether the crew was at another job or idle at the time of the request).
  3. For any given simulation, each crew's logistic time is constant (taken from the distribution) across that single simulation run regardless of the task (CM, PM or inspection).
  4. A crew can perform either a finite number of simultaneous tasks or an infinite number.
  5. If the finite limit of tasks is reached, the crew will not respond to any additional request until the number of tasks the crew is performing is less than its finite limit.
  6. If a crew is not available to respond, the component will "wait" until a crew becomes available.
  7. BlockSim maintains the queue of rejected calls and will dispatch the crew to the next repair on a "first come, first served" basis.
  8. Multiple crews can be assigned to a single block (see overview in the next section).
  9. If no crew has been assigned for a block, it is assumed that no crew restrictions exist and a default crew is used. The default crew can perform an infinite number of simultaneous tasks and has no delays or costs.

Looking at Multiple Crews

Multiple crews may be available to perform maintenance for a particular component. When multiple crews have been assigned to a block in BlockSim, the crews are assigned to perform maintenance based on their order in the crew list, as shown in the figure below.

A single component with two corrective maintenance crews assigned to it.

In the case where more than one crew is assigned to a block, and if the first crew is unavailable, then the next crew is called upon and so forth. As an example, consider the prior case but with the following modifications (i.e., Crews and are assigned to all blocks):


Unit Failure Repair Crew


Crew  : Delay = 20, Single Task
Crew  : Delay = 30, Single Task


The system would behave as shown in the figure below.

In this case, Crew was used for the repair since Crew was busy. On all others, Crew was used. It is very important to note that once a crew has been assigned to a task it will complete the task. For example, if we were to change the delay time for Crew to 100, the system behavior would be as shown in the figure below.

System up/down plot with the delay time for Crew B changed to 100.

In other words, even though Crew would have finished the repair on more quickly if it had been available when originally called, was assigned the task because was not available at the instant that the crew was needed.

Additional Rules on Crews

1. If all assigned crews are engaged, the next crew that will be chosen is the crew that can get there first.
a) This accounts for the time it would take a particular crew to complete its current task (or all tasks in its queue) and its logistic time.
2. If a crew is available, it gets used regardless of what its logistic delay time is.
a) In other words, if a crew with a shorter logistic time is busy, but almost done, and another crew with a much higher logistic time is currently free, the free one will get assigned to the task.
3. For each simulation, each crew's logistic time is computed (drawn randomly from its distribution, or taken as its fixed time) at the beginning of the simulation and remains constant across that one simulation for all actions (CM, PM and inspection).

Using Spare Part Pools

BlockSim also allows you to specify spare part pools (or depots). Spare part pools allow you to model and manage spare part inventory and study the effects associated with limited inventories. Each component can have a spare part pool associated with it. If a spare part pool has not been defined for a block, BlockSim's analysis assumes a default pool of infinite spare parts. To speed up the simulation, no details on pool actions are kept during the simulation if the default pool is used.

Pools allow you to define multiple aspects of the spare part process, including stock levels, logistic delays and restock options. Every time a part is repaired under a CM or scheduled action (PM, OC and Inspection), a spare part is obtained from the pool. If a part is available in the pool, it is then used for the repair. Spare part pools perform their actions based on the simulation clock time.

Spare Properties

A spare part pool is identified by a name. The general properties of the pool are its stock level (must be greater than zero), cost properties and logistic delay time. If a part is available (in stock), the pool will dispense that part to the requesting block after the specified logistic time has elapsed. One needs to think of a pool as an independent entity. It accepts requests for parts from blocks and dispenses them to the requesting blocks after a given logistic time. Requests for spares are handled on a first come, first served basis. In other words, if two blocks request a part and only one part is in stock, the first block that made the request will receive the part. Blocks request parts from the pool immediately upon the initiation of a CM or scheduled event (PM, OC and Inspection).

Restocking the Pool

If the pool has a finite number of spares, restock actions may be incorporated. The figure below shows the restock properties. Specifically, a pool can restock itself either through a scheduled restock action or based on specified conditions.

A scheduled restock action adds a set number of parts to the pool on a predefined scheduled part arrival time. For the settings in the figure above, one spare part would be added to the pool every 100 hours, based on the system (simulation) time. In other words, for a simulation of 1,000 hours, a spare part would arrive at 100 hours, 200 hours, etc. The part is available to the pool immediately after the restock action and without any logistic delays.

In an on-condition restock, a restock action is initiated when the stock level reaches (or falls below) a specified value. In the figure above, five parts are ordered when the stock level reaches 0. Note that unlike the scheduled restock, parts added through an on-condition restock become available only after a specified logistic delay time. In other words, in a scheduled restock the parts are pre-ordered and arrive when needed, whereas in an on-condition restock the parts are ordered when the condition occurs and thus arrive after a specified delay. For on-condition restocks, the condition is triggered if and only if the stock level drops to or below the specified stock level, regardless of how the spares arrived to the pool or were distributed by the pool. In addition, the restock trigger value must be less than the initial stock.

Lastly, a maximum capacity can be assigned to the pool. This maximum capacity must be equal to or greater than the initial stock. Once the limit is reached, no further items are added to the pool. For example, if the pool has a maximum capacity of ten and a current stock level of eight, and a restock action is set to add five items to the pool, then only two will be accepted.
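
The capacity rule amounts to clipping each restock against the remaining room in the pool (an illustrative sketch):

```python
def restock(stock, quantity, max_capacity):
    # Items beyond the pool's maximum capacity are not accepted.
    accepted = min(quantity, max_capacity - stock)
    return stock + accepted, accepted

new_stock, accepted = restock(stock=8, quantity=5, max_capacity=10)
print(new_stock, accepted)   # 10 2 -- only two of the five items are accepted
```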

Obtaining Emergency Spares

Emergency restock actions can also be defined. The figure below illustrates BlockSim's Emergency Spare Provisions options. An emergency action is triggered only when a block requests a spare and the part is not currently in stock. This is the only trigger condition. It does not account for whether a part has been ordered or if one is scheduled to arrive. Emergency spares are ordered when the condition is triggered and arrive after a time equal to the required time to obtain emergency spare(s).

Summary of Rules for Spare Part Pools

The following rules summarize some of the logic when dealing with spare part pools.

Basic Logic Rules

1. Queue Based: Requests for spare parts from blocks are queued and executed on a "first come, first served" basis.
2. Emergency: Emergency restock actions are performed only when a part is not available.
3. Scheduled Restocks: Scheduled restocks are added instantaneously to the pool at the scheduled time.
4. On-Condition Restock: On-condition restock happens when the specified condition is reached (e.g., when the stock drops to two or if a request is received for a part and the stock is below the restock level).
a) For example, if a pool has three items in stock and it dispenses one, an on-condition restock is initiated the instant that the request is received (without regard to the logistic delay time). The restocked items will be available after the required time for stock arrival has elapsed.
b) The way that this is defined allows for the possibility of multiple restocks. Specifically, every time a part needs to be dispensed and the stock is lower than the specified quantity, parts are ordered. In the case of a long logistic delay time, it is possible to have multiple re-orders in the queue.
5. Parts Become Available after Spare Acquisition Logistic Delay: If there is a spare acquisition logistic time delay, the requesting block will get the part after that delay.
a) For example, if a block with a repair duration of 10 fails at 100 and requests a part from a pool with a logistic delay time of 10, that block will not be up until 120.
6. Compound Delays: If a part is not available and an emergency part (or another part) can be obtained, then the total wait time for the part is the sum of both the logistic time and the required time to obtain a spare.
7. First Available Part is Dispensed to the First Block in the Queue: The pool will dispense a requested part if it has one in stock or when it becomes available, regardless of what action (i.e., as needed restock or emergency restock) that request may have initiated.
a) For example, if Block A requests a part from a pool and that triggers an emergency restock action, but a part arrives before the emergency restock through another action (e.g., scheduled restock), then the pool will dispense the newly arrived part to Block A (if Block A is next in the queue to receive a part).
8. Blocks that Trigger an Action Get Charged with the Action: A block that triggers an emergency restock is charged for the additional cost to obtain the emergency part, even if it does not use an emergency part (i.e., even if another part becomes available first).
9. Triggered Action Cannot be Canceled: If a block triggers a restock action but then receives a part from another source, the action that the block triggered is not canceled.
a) For example, if Block A initiates an emergency restock action but was then able to use a part that became available through other actions, the emergency request is not canceled and an emergency spare part will be added to the pool's stock level.
b) Another way to explain this is by looking at the part acquisition logistic times as transit times. Because an ordered part is en-route to you after you order it, you will receive it regardless of whether the conditions have changed and you no longer need it.
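
Several of these rules (first come, first served queuing; emergency orders triggered only when the pool is empty; triggered orders always arriving) can be sketched in a toy model. The class and method names are illustrative, not BlockSim's:

```python
from collections import deque

class SparePool:
    """Minimal sketch of the spare pool logic described above."""

    def __init__(self, stock):
        self.stock = stock
        self.queue = deque()        # blocks waiting for a part, FCFS
        self.emergency_orders = 0

    def request(self, block):
        # Rule 2: an emergency is triggered only if no part is in stock.
        if self.stock == 0:
            self.emergency_orders += 1   # rule 9: this order cannot be canceled
        self.queue.append(block)
        return self._dispense()

    def add_parts(self, n):
        self.stock += n
        return self._dispense()

    def _dispense(self):
        # Rule 7: the first available part goes to the first block in the queue.
        served = []
        while self.stock > 0 and self.queue:
            self.stock -= 1
            served.append(self.queue.popleft())
        return served

pool = SparePool(stock=1)
print(pool.request("A"))   # ['A'] -- part in stock, dispensed immediately
print(pool.request("B"))   # []   -- no stock; B queues, emergency order triggers
print(pool.request("C"))   # []
print(pool.add_parts(1))   # ['B'] -- first come, first served
```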

Simultaneous Dispatch of Crews and Parts Logic

Some special rules apply when a block has both logistic delays in acquiring parts from a pool and when waiting for crews. BlockSim dispatches requests for crews and spare parts simultaneously. The repair action does not start until both crew and part arrive, as shown next.

If a crew arrives and it has to wait for a part, then this time (and cost) is added to the crew usage time.

Example Using Both Crews and Pools

Consider the following example, using both crews and pools.

where:

And the crews are:

While the spare pool is:

The behavior of this system from 0 to 300 is shown graphically in the figure below.

The discrete system events during that time are as follows:

1. Component fails at 100 and Crew is engaged.
a) At 110, Crew arrives and completes the repair by 120.
b) This repair uses the only spare part in inventory and triggers an on-condition restock. A part is ordered and is scheduled to arrive at 160.
c) A scheduled restock part is also set to arrive at 150.
d) Pool [on-hand = 0, pending: 150, 160].
2. Component fails at 121. Crew is available and it is engaged.
a) Crew arrives by 131 but no part is available.
b) The failure finds the pool with no parts, triggering the on-condition restock. A part is ordered and is scheduled to arrive at 181.
c) Pool [on-hand = 0, pending: 150, 160, 181].
d) At 150, the first part arrives and is used by Component .
e) Repair on Component is completed 20 time units later, at 170.
f) Pool [on-hand = 0, pending: 160, 181].
3. Component fails at 122. Crew is already engaged by Component , thus Crew is engaged.
a) Crew arrives at 137 but no part is available.
b) The failure finds the pool with no parts, triggering the on-condition restock. A part is ordered and is scheduled to arrive at 182.
c) Pool [on-hand = 0, pending: 160, 181, 182].
d) At 160, the part arrives and Component is repaired by 180.
e) Pool [on-hand = 0, pending: 181, 182].
4. Component fails at 123. No crews are available until 170 when Crew becomes available.
a) Crew arrives by 180 and has to wait for a part.
b) The failure finds the pool with no parts, triggering the on-condition restock. A part is ordered and is scheduled to arrive at 183.
c) Pool [on-hand = 0, pending: 181, 182, 183].
d) At 181, a part is obtained.
e) By 201, the repair is completed.
f) Pool [on-hand = 0, pending: 182, 183].
5. Component fails at 171 with no crew available.
a) Crew becomes available at 180 and arrives by 195.
b) The failure finds the pool with no parts, triggering the on-condition restock. A part is ordered and is scheduled to arrive at 231.
c) The next part becomes available at 182 and the repair is completed by 205.
d) Pool [on-hand = 0, pending: 183, 231].
6. End time is at 300. The last scheduled part arrives at the pool at 300.

Using Maintenance Tasks

One of the most important benefits of simulation is the ability to define how and when actions are performed. In our case, the actions of interest are part repairs/replacements. This is accomplished in BlockSim through the use of maintenance tasks. Specifically, four different types of tasks can be defined for maintenance actions: corrective maintenance, preventive maintenance, on condition maintenance and inspection.

Corrective Maintenance Tasks

A corrective maintenance task defines when a corrective maintenance (CM) action is performed. The figure below shows a corrective maintenance task assigned to a block in BlockSim. Corrective actions will be performed either immediately upon failure of the item or upon finding that the item has failed (for hidden failures that are not detected until an inspection). BlockSim allows the selection of either category.

  • Upon item failure: The CM action is initiated immediately upon failure. If the user doesn't specify the choice for a CM, then this is the default option. All prior examples were based on the instruction to perform a CM upon failure.
  • When found failed during an Inspection: The CM action will only be initiated after an inspection is done on the failed component. How and when the inspections are performed is defined by the block's inspection properties. This has the effect of defining a dependency between the corrective maintenance task and the inspection task.



More application examples are available! See also:

CM Triggered by Subsystem Down


Scheduled Tasks

Scheduled tasks can be performed on a known schedule, which can be based on any of the following:

  • A time interval, either fixed or dynamic, based on the item's age (item clock) or on calendar time (system clock). See Item and System Ages.
  • The occurrence of certain events, including:
    • The system goes down.
    • Certain events happen in a maintenance group. The events and groups are user-specified, and the item that the task is assigned to does not need to be part of the selected maintenance group(s).

The types of scheduled tasks include:

  • Inspection tasks
  • Preventive maintenance tasks
  • On condition tasks

Item and System Ages

It is important to keep in mind that the system and each component of the system maintain separate clocks within the simulation. When setting intervals to perform a scheduled task, the intervals can be based on either type of clock. Specifically:

  • Item age refers to the accumulated age of the block, which gets adjusted each time the block is repaired (i.e., restored). If the block is repaired at least once during the simulation, this will be different from the elapsed simulation time. For example, if the restoration factor is 1 (i.e., “as good as new”) and the assigned interval is 100 days based on item age, then the task will be scheduled to be performed for the first time at 100 days of elapsed simulation time. However, if the block fails at 85 days and it takes 5 days to complete the repair, then the block will be fully restored at 90 days and its accumulated age will be reset to 0 at that point. Therefore, if another failure does not occur in the meantime, the task will be performed for the first time 100 days later at 190 days of elapsed simulation time.
  • Calendar time refers to the elapsed simulation time. If the assigned interval is 100 days based on calendar time, then the task will be performed for the first time at 100 days of elapsed simulation time, for the second time at 200 days of elapsed simulation time and so on, regardless of whether the block fails and gets repaired correctively between those times.
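
The distinction can be sketched numerically using the 100-day interval from the bullets above. This is a simplified illustration assuming at most one failure and a restoration factor of 1:

```python
def calendar_task_times(interval, end_time):
    # Calendar-based: the task fires at fixed multiples of the interval,
    # regardless of any corrective repairs in between.
    return list(range(interval, end_time + 1, interval))

def first_item_age_task(interval, failure_time, repair_duration):
    # Item-age-based with RF = 1: a repair resets the accumulated age to 0,
    # so the interval restarts when the block is restored.
    if failure_time < interval:
        return failure_time + repair_duration + interval
    return interval

print(calendar_task_times(100, 300))     # [100, 200, 300]
print(first_item_age_task(100, 85, 5))   # 190 -- as in the example above
```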

Inspection Tasks

Like all scheduled tasks, inspections can be performed based on a time interval or upon certain events. Inspections can be specified to bring the item or system down or not.

Preventive Maintenance Tasks

The figure below shows the options available in a preventive maintenance (PM) task within BlockSim. PMs can be performed based on a time interval or upon certain events. Because PM tasks always bring the item down, one can also specify whether preventive maintenance will be performed if the task brings the system down.

On Condition Tasks

On condition maintenance relies on the capability to detect failures before they happen so that preventive maintenance can be initiated. If, during an inspection, maintenance personnel can find evidence that the equipment is approaching the end of its life, then it may be possible to delay the failure, prevent it from happening or replace the equipment at the earliest convenience rather than allowing the failure to occur and possibly cause severe consequences. In BlockSim, on condition tasks consist of an inspection task that triggers a preventive task when an impending failure is detected during inspection.

Failure Detection

Inspection tasks can be used to check for indications of an approaching failure. BlockSim models when such an approaching failure becomes detectable upon inspection using either a failure detection threshold or a P-F interval. The failure detection threshold is a number between 0 and 1 indicating the fraction of an item's life that must elapse before an approaching failure can be detected. For instance, if the failure detection threshold is set to 0.8, then the failure of a component can be detected only during the last 20% of its life. If an inspection occurs during this time, the approaching failure is detected and the inspection triggers a preventive maintenance task to take the necessary precautions to delay the failure by either repairing or replacing the component.

The P-F interval is the amount of time before the failure of a component during which the approaching failure can be detected by an inspection. It represents the warning period that spans from P (when a potential failure can be detected) to F (when the failure occurs). If the P-F interval is set to 200 hours, then the approaching failure of the component can be detected only within the 200 hours preceding the failure. Thus, if a component has a fixed life of 1,000 hours and the P-F interval is set to 200 hours, then an inspection that occurs at or beyond 800 hours detects the approaching failure that is to occur at 1,000 hours, and a preventive maintenance task is triggered to take action against this failure.
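Both detection criteria reduce to a simple age check at inspection time. The sketch below is a hypothetical helper (not a BlockSim API) assuming an item with a known fixed life:

```python
def detects_failure(inspection_age, life, threshold=None, pf_interval=None):
    """Whether an inspection at item age `inspection_age` detects the
    approaching failure of an item with a fixed life of `life`.

    threshold: fraction of life that must elapse before detection is possible.
    pf_interval: warning period before the failure during which detection is
    possible. Either criterion alone is sufficient in this sketch."""
    if threshold is not None and inspection_age >= threshold * life:
        return True
    if pf_interval is not None and inspection_age >= life - pf_interval:
        return True
    return False

# Threshold example from the text: 0.8 -> detectable in the last 20% of life.
print(detects_failure(850, 1000, threshold=0.8))    # True
# P-F example: life of 1,000 hours, P-F interval of 200 -> detectable at 800+.
print(detects_failure(790, 1000, pf_interval=200))  # False
print(detects_failure(800, 1000, pf_interval=200))  # True
```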

Rules for On Condition Tasks
  • An inspection that finds a block at or beyond the failure detection threshold or within the range of the P-F interval will trigger the associated preventive task as long as preventive maintenance can be performed on that block.
  • If a non-downing inspection triggers a preventive maintenance action because the failure detection threshold or P-F interval range was reached, no other maintenance task will be performed between the inspection and the triggered preventive task; tasks that would otherwise have happened at that time due to system age, system down or group maintenance will be ignored.
  • A preventive task that would have been triggered by a non-downing inspection will not happen if the block fails during the inspection, as corrective maintenance will take place instead.
  • If a failure will occur within the failure detection threshold or P-F interval set for the inspection, but the preventive task is only supposed to be performed when the system is down, the simulation waits until the requirements of the preventive task are met to perform the preventive maintenance.
  • If the on condition inspection triggers the preventive maintenance part of the task, the simulation assumes that the maintenance crew will forego any routine servicing associated with the inspection part of the task. In other words, the restoration will come from the preventive maintenance, so any restoration factor defined for the inspection will be ignored in these circumstances.
Example Using P-F Interval

To illustrate the use of the P-F interval in BlockSim, consider a component that fails every 700 tu. The corrective maintenance on this equipment takes 100 tu to complete, while the preventive maintenance takes 50 tu. Both the corrective and preventive maintenance actions have a type II restoration factor of 1. Inspection tasks of 10 tu duration are performed on the component every 300 tu. There is no restoration of the component during the inspections. The P-F interval for this component is 100 tu.

The component behavior from 0 to 2000 tu is shown in the figure below and described next.

  1. At 300 tu, the first scheduled inspection of 10 tu duration occurs. At this time the age of the component is 300 tu. This inspection does not lie in the P-F interval of 100 tu (which begins at the age of 600 tu and ends at the age of 700 tu). Thus, no approaching failure is detected during this inspection.
  2. At 600 tu, the second scheduled inspection of 10 tu duration occurs. At this time the age of the component is 590 tu (no age is accumulated during the first inspection from 300 tu to 310 tu, as the component does not operate during this inspection). Again, this inspection does not lie in the P-F interval. Thus, no approaching failure is detected during this inspection.
  3. At 720 tu, the component fails after having accumulated an age of 700 tu. A corrective maintenance task of 100 tu duration occurs to restore the component to as-good-as-new condition.
  4. At 900 tu, the third scheduled inspection occurs. At this time the age of the component is 80 tu. This inspection does not lie in the P-F interval (from age 600 tu to 700 tu). Thus, no approaching failure is detected during this inspection.
  5. At 1200 tu, the fourth scheduled inspection occurs. At this time the age of the component is 370 tu. Again, this inspection does not lie in the P-F interval and no approaching failure is detected.
  6. At 1500 tu, the fifth scheduled inspection occurs. At this time the age of the component is 660 tu, which lies in the P-F interval. As a result, an approaching failure is detected and the inspection triggers a preventive maintenance task. A preventive maintenance task of 50 tu duration occurs at 1510 tu to restore the component to as-good-as-new condition.
  7. At 1800 tu, the sixth scheduled inspection occurs. At this time the age of the component is 240 tu. This inspection does not lie in the P-F interval (from age 600 tu to 700 tu) and no approaching failure is detected.
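The deterministic walkthrough above can be reproduced with a short event loop. This is a sketch under the example's assumptions (fixed life, full restoration by both CM and PM, inspections on the system clock, no aging while the component is down); it is not BlockSim's engine:

```python
def simulate(horizon=2000, life=700, insp_interval=300, insp_dur=10,
             cm_dur=100, pm_dur=50, pf=100):
    """Deterministic sketch of the P-F interval example: advance to whichever
    comes first, the component's failure or the next scheduled inspection."""
    log, t, age, next_insp = [], 0, 0, insp_interval
    while True:
        to_fail = life - age       # operating time left until failure
        to_insp = next_insp - t    # time until the next scheduled inspection
        if t + min(to_fail, to_insp) > horizon:
            break
        if to_fail <= to_insp:     # the failure occurs first
            t += to_fail
            log.append((t, "failure"))
            t += cm_dur            # corrective maintenance, as good as new
            age = 0
        else:                      # the inspection occurs first
            t += to_insp
            age += to_insp
            log.append((t, "inspection", age))
            t += insp_dur          # the component does not age while inspected
            if age >= life - pf:   # inside the P-F window: trigger the PM
                log.append((t, "pm"))
                t += pm_dur
                age = 0
            next_insp += insp_interval
    return log

for event in simulate():
    print(event)
```

Running this yields the same sequence as the text: inspections at 300, 600, 900, 1200 and 1800 detect nothing, the failure occurs at 720, and the inspection at 1500 (age 660 tu) triggers the PM at 1510.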

Rules for PMs and Inspections

All the options available in the Maintenance task window were designed to maximize the modeling flexibility within BlockSim. However, this flexibility introduces issues that you need to be aware of and requires you to select options carefully to ensure that they do not contradict one another. One obvious case would be to define a PM action on a component in series (which will always bring the system down) and then assign a PM policy to the block that has the Do not perform maintenance if the action brings the system down option set. With these settings, no PMs will ever be performed on the component during the BlockSim simulation. The following sections summarize some issues and special cases to consider when defining maintenance properties in BlockSim.

  1. Inspections do not consume spare parts. However, an inspection can have a renewal effect on the component if the restoration factor is set to a number other than the default of 0.
  2. On the inspection tab, if Inspection brings system down is selected, this also implies that the inspection brings the item down.
  3. If a PM or an inspection is scheduled based on the item's age, then it will occur exactly when the item reaches that age. However, it is important to note that failed items do not age. Thus, if an item fails before it reaches that age, the action will not be performed. This means that if the item fails before the scheduled inspection (based on item age) and the CM is set to be performed upon inspection, the CM will never take place. The reason that this option is allowed in BlockSim is for the flexibility of specifying renewing inspections.
  4. Downtime due to a failure discovered during a non-downing inspection is included when computing results "w/o PM, OC & Inspections."
  5. If a PM upon item age is scheduled and is not performed because it brings the system down (based on the option in the PM task) the PM will not happen unless the item reaches that age again (after restoration by CM, inspection or another type of PM).
  6. If the CM task is upon inspection and a failed component is scheduled for PM prior to the inspection, the PM action will restore the component and the CM will not take place.
  7. In the case of simultaneous events, only one event is executed, except during a maintenance phase, in which all simultaneous events are executed in order. The following precedence order is used: 1) tasks based on intervals or upon the start of a maintenance phase; 2) tasks based on events in a maintenance group, where the triggering event applies to a block; 3) tasks based on system down; 4) tasks based on events in a maintenance group, where the triggering event applies to a subdiagram. Within these categories, order is determined according to the priorities specified in the URD (i.e., the higher the task is on the list, the higher the priority).
  8. The PM option of Do not perform if it brings the system down is only considered at the time that the PM needs to be initiated. If the system is down at that time, due to another item, then the PM will be performed regardless of any future consequences to the system up state. In other words, when the other item is fixed, it is possible that the system will remain down due to this PM action. In this case, the PM time difference is added to the system PM downtime.
  9. Downing events cannot overlap. If a component is down due to a PM and another PM is suggested based on another trigger, the second call is ignored.
  10. A non-downing inspection with a restoration factor restores the block based on the age of the block at the beginning of the inspection (i.e., duration is not restored).
  11. Non-downing events can overlap with downing events. If a non-downing inspection and a downing event happen concurrently, the non-downing event will be managed in parallel with the downing event.
  12. If a failure or PM occurs during a non-downing inspection and the CM or PM has a restoration factor and the inspection action has a restoration factor, then both restoration factors are used (compounded).
  13. A PM or inspection on system down is triggered only if the system was up at the time that the event brought the system down.
  14. A non-downing inspection with restoration factor of 0 does not affect the block.

Example

To illustrate the use of maintenance policies in BlockSim, we will use the same example from Example Using Both Crews and Pools with the following modifications (the figures below also show these settings):

Blocks A and D:

  1. Belong to the same group (Group 1).
  2. Corrective maintenance actions are upon inspection (not upon failure) and the inspections are performed every 30 hours, based on system time. Inspections have a duration of 1 hour. Furthermore, unlimited free crews are available to perform the inspections.
  3. Whenever either item gets a CM, the other one gets a PM.
  4. The PM has a fixed duration of 10 hours.
  5. The same crews are used for both corrective and preventive maintenance actions.
CM and Inspection settings for blocks A and D




PM settings for blocks A and D

System Overview

The item and system behavior from 0 to 300 hours is shown in the figure below and described next.

1. At 100, block goes down and brings the system down.
a) No maintenance action is performed since an upon inspection policy was used.
b) The next scheduled inspection is at 120, thus Crew is called to perform the maintenance by 121 (end of the inspection).
2. Crew arrives and initiates the repair on at 131.
a) The only part in the pool is used and an on-condition restock is triggered.
b) Pool [on-hand = 0, pending: 150, 181].
c) Block is repaired by 141.
3. At the same time (121), a PM is initiated for block because the PM task called for "PM upon the start of corrective maintenance on another group item."
a) Crew is called for block and arrives at 136.
b) No part is available until 150. An on-condition restock is triggered for 181.
c) Pool [on-hand = 0, pending: 150, 181, 181].
d) At 150, a part becomes available and the PM is completed by 160.
e) Pool [on-hand = 0, pending: 181, 181].
4. At 161, block fails (corrective maintenance upon failure).
a) Block gets Crew , which arrives at 171.
b) No part is available until 181. An on-condition restock is triggered for 221.
c) Pool [on-hand = 0, pending: 181, 181, 221].
d) A part arrives at 181.
e) The repair is completed by 201.
f) Pool [on-hand = 0, pending: 181, 221].
5. At 162, block fails.
a) Block gets Crew , which arrives at 177.
b) No part is available until 181. An on-condition restock is triggered for 222.
c) Pool [on-hand = 0, pending: 181, 221, 222].
d) A part arrives at 181.
e) The repair is completed by 201.
f) Pool [on-hand = 0, pending: 221, 222].
6. At 163, block fails and brings the system down.
a) Block calls Crew then . Both are busy.
b) Crew will be the first available, so the block calls it again and waits.
c) No part is available until 221. An on-condition restock is triggered for 223.
d) Pool [on-hand = 0, pending: 221, 222, 223].
e) Crew arrives at 211.
f) Repair begins at 221.
g) Repair is completed by 241.
h) Pool [on-hand = 0, pending: 222, 223].
7. At 298, block goes down and brings the system down.

System Uptimes/Downtimes

1. Uptime: This is 200 hours.
a) This can be obtained by observing the following system up durations: 0 to 100, 160 to 163 and 201 to 298.
2. CM Downtime: This is 58 hours.
a) Observe that even though the system failed at 100, the CM action (on block ) was initiated at 121 and lasted until 141, thus only 20 hours of this downtime are attributed to the CM action.
b) The next CM action started at 163 when block failed and lasted until 201 when blocks and were restored, thus adding another 38 hours of CM downtime.
3. Inspection Downtime: This is 1 hour.
a) The only time the system was under inspection was from 120 to 121, during the inspection of block .
4. PM Downtime: This is 19 hours.
a) Note that the entire PM action duration on block was from 121 to 160.
b) Until 141, and from the system perspective, the CM on block was the cause for the downing. Once block was restored (at 141), then the reason for the system being down became the PM on block .
c) Thus, the PM on block was only responsible for the downtime after block was restored, or from 141 to 160.
5. OC Downtime: This is 0. There is no on condition task in this example.
6. Total Downtime: This is 100 hours.
a) This includes all of the above downtimes plus the 20 hours (100 to 120) and the 2 hours (298 to 300) that the system was down due to the undiscovered failure of block .
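The downtime accounting above can be checked with a few lines of arithmetic. The values are taken from the walkthrough; the attribution rule, as the text explains, is to charge each down interval to the event that is keeping the system down at that moment:

```python
# Downtime bookkeeping for the 300-hour walkthrough (values from the text).
down = {
    "CM": (141 - 121) + (201 - 163),                    # 20 + 38 = 58 hours
    "inspection": 121 - 120,                            # 1 hour
    "PM": 160 - 141,                                    # 19 hours
    "undiscovered failure": (120 - 100) + (300 - 298),  # 20 + 2 = 22 hours
}
total_down = sum(down.values())
uptime = 300 - total_down
mean_availability = uptime / 300
print(total_down, uptime, round(mean_availability, 4))  # 100 200 0.6667
```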

System Metrics

1. Mean Availability (All Events):
2. Mean Availability (w/o PM & Inspection):
a) This is due to the CM downtime of 58, the undiscovered downtime of 22 and the inspection downtime of 1, or:
b) It should be noted that the inspection downtime was included even though the definition was "w/o PM & Inspection." The reason for this is that the inspection did not cause the downtime in this case. Only downtimes caused by the PM or inspections are excluded.
3. Point Availability and Reliability at 300 is zero because the system was down at 300.
4. Expected Number of Failures is 3.
a) The system failed at 100, 163 and 298.
5. The standard deviation of the number of failures is 0.
6. The MTTFF is 100 because the example is deterministic.

The System Downing Events

1. Number of Failures is 3.
a) The first is the failure of block , the second is the failure of block and the third is the failure of block .
2. Number of CMs is 2.
a) The first is the CM on block and the second is the CM on block .
3. Number of Inspections is 1.
4. Number of PMs is 1.
5. Total Events are 6. These are events that the downtime can be attributed to. Specifically, the following events were observed:
a) The failure of block at 100.
b) Inspection on block at 120.
c) The CM action on block .
d) The PM action on block (after was fixed).
e) The failure of block at 163.
f) The failure of block at 298.

Block Details

The details for blocks and are shown below.

Block details for this example.

We will discuss some of these results. First note that there are four downing events on block  : initial failure, inspection and CM, plus the last failure at 298. All others have just one. Also, block had a total downtime of , giving it a mean availability of 0.8567. The first time-to-failure for block occurred at 100 while the second occurred after hours of operation, yielding an average time between failures (MTBF) of . (Note that this is the same as uptime/failures.) Block never failed, so its MTBF cannot be determined. Furthermore, MTBDE for each item is determined by dividing the block's uptime by the number of events. The RS FCI and RS DECI metrics are obtained by looking at the SD Failures and SD Events of the item and the number of system failures and events. Specifically, the only items that caused system failure are blocks and  ; at 100 and 298 and at 163. It is important to note that even though one could argue that block alone did not cause the failure ( and were also failed), the downing was attributed to because the system reached a failed state only when block failed.

On the number of inspections, which were scheduled every 30 hours, nine occurred for block [30, 60, 90, 120, 150, 180, 210, 240, 270] and eight for block . Block did not get inspected at 150 because block was undergoing a PM action at that time.

Crew Details

The figure below shows the crew results.

Crew details for this example.

Crew received a total of six calls and accepted three. Specifically,

  1. At 121, the crew was called by block and the call was accepted.
  2. At 121, block also called for its PM action and was rejected. Block then called crew , which accepted the call.
  3. At 161, block called crew . Crew accepted.
  4. At 162, block called crew . Crew rejected and block called crew , which accepted the call.
  5. At 163, block called crew and then crew and both rejected. Block then waited until a crew became available at 201 and called that crew again. This was crew , which accepted.

The total wait time is the time that blocks had to wait for the maintenance crew. Block is the only component that waited, waiting 38 hours for crew .

Also, the costs for crew were 1 per unit time and 10 per incident, thus the total costs were 100 + 30. The costs for Crew were 2 per unit time and 20 per incident, thus the total costs were 156 + 40.

Pool Details

The figure below shows the spare part pool results.

Pool details for this example.

The pool started with a stock level of 1 and ended up with 2. Specifically,

  1. At 121, the pool dispensed a part to block and ordered another to arrive at 181.
  2. At 121, it dispensed a part to block and ordered another to arrive at 181.
  3. At 150, a scheduled part arrived to restock the pool.
  4. At 161 the pool dispensed a part to block and ordered another to arrive at 221.
  5. At 181, it dispensed a part to block and ordered another to arrive at 222.
  6. At 221, it dispensed a part to block and ordered another to arrive at 223.
  7. The 222 and 223 arrivals remained in stock until the end of the simulation.

Overall, five parts were dispensed. Blocks had to wait a total of 126 hours to receive parts (B: 181-161=20, C: 181-162=19, D: 150-121=29 and F: 221-163=58).
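The part wait times quoted above are simply arrival time minus request time per block, using the block letters from the example:

```python
# Spare part wait times from the pool walkthrough: arrival minus request time.
waits = {"B": 181 - 161, "C": 181 - 162, "D": 150 - 121, "F": 221 - 163}
print(waits)                 # {'B': 20, 'C': 19, 'D': 29, 'F': 58}
print(sum(waits.values()))   # 126 hours in total
```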

Subdiagrams and Multi Blocks in Simulation

Any subdiagrams and multi blocks that may be present in the BlockSim RBD are expanded and/or merged into a single diagram before the system is simulated. As an example, consider the system shown in the figure below.

A system made up of three subsystems, A, B, and C.

BlockSim will internally merge the system into a single diagram before the simulation, as shown in the figure below. This means that all the failure and repair properties of the items in the subdiagrams are also considered.

The simulation engine view of the system and subdiagrams

In the case of multi blocks, the blocks are also fully expanded before simulation. This means that unlike the analytical solution, the execution speed (and memory requirements) for a multi block representing ten blocks in series is identical to the representation of ten individual blocks in series.

Containers in Simulation

Standby Containers

When you simulate a diagram that contains a standby container, the container acts as the switch mechanism (as shown below) in addition to defining the standby relationships and the number of active units that are required. The container's failure and repair properties are really those of the switch itself. The switch can fail with a distribution, while waiting to switch or during the switch action. Repair properties restore the switch regardless of how the switch failed. Failure of the switch itself does not bring the container down because the switch is not really needed unless called upon to switch. The container will go down if the units within the container fail or the switch is failed when a switch action is needed. The restoration time for this is based on the repair distributions of the contained units and the switch. Furthermore, the container is down during a switch process that has a delay.

The standby container acts as the switch, thus the failure distribution of the container is the failure distribution of the switch. The container can also fail when called upon to switch.

To better illustrate this, consider the following deterministic case.

  1. Units and are contained in a standby container.
  2. The standby container is the only item in the diagram, thus failure of the container is the same as failure of the system.
  3. is the active unit and is the standby unit.
  4. Unit fails every 100 tu (active) and takes 10 tu to repair.
  5. fails every 3 tu (active) and also takes 10 tu to repair.
  6. The units cannot fail while in quiescent (standby) mode.
  7. Furthermore, assume that the container (acting as the switch) fails every 30 tu while waiting to switch and takes 4 tu to repair. If not failed, the container switches with 100% probability.
  8. The switch action takes 7 tu to complete.
  9. After repair, unit is always reactivated.
  10. The container does not operate through system failure and thus the components do not either.

Keep in mind that we are looking at two kinds of events on the container: the container going down and the container's switch going down.

The system event log is shown in the figure below and is as follows:

The system behavior using a standby container.
  1. At 30, the switch fails and gets repaired by 34. The container switch is failed and being repaired; however, the container is up during this time.
  2. At 64, the switch fails and gets repaired by 68. The container is up during this time.
  3. At 98, the switch fails. It will be repaired by 102.
  4. At 100, unit fails. Unit attempts to activate the switch to go to  ; however, the switch is failed.
  5. At 102, the switch is operational.
  6. From 102 to 109, the switch is in the process of switching from unit to unit . The container and system are down from 100 to 109.
  7. By 110, unit is fixed and the system is switched back to from . The return switch action brings the container down for 7 tu, from 110 to 117. During this time, note that unit has only functioned for 1 tu, from 109 to 110.
  8. At 146, the switch fails and gets repaired by 150. The container is up during this time.
  9. At 180, the switch fails and gets repaired by 184. The container is up during this time.
  10. At 214, the switch fails and gets repaired by 218.
  11. At 217, unit fails. The switch is failed at this time.
  12. At 218, the switch is operational and the system is switched to unit within 7 tu. The container is down from 218 to 225.
  13. At 225, unit takes over. After 2 tu of operation, at 227, unit fails. It will be restored by 237.
  14. At 227, unit is repaired and the switchback action to unit is initiated. By 234, the system is up.
  15. At 262, the switch fails and gets repaired by 266. The container is up during this time.
  16. At 296, the switch fails and gets repaired by 300. The container is up during this time.

The system results are shown in the figure below and discussed next.

System overview results.
1. System CM Downtime is 24.
a) CM downtime includes all downtime due to failures as well as the delay in switching from a failed active unit to a standby unit. It does not include the switchback time from the standby to the restored active unit. Thus, the times from 100 to 109, 217 to 225 and 227 to 234 are included. The time to switchback, 110 to 117, is not included.
2. System Total Downtime is 31.
a) It includes the CM downtime and the switchback downtime.
3. Number of System Failures is 3.
a) It includes the failures at 100, 217 and 227.
b) This is the same as the number of CM downing events.
4. The Total Downing Events are 4.
a) This includes the switchback downing event at 110.
5. The Mean Availability (w/o PM and Inspection) does not include the downtime due to the switchback event.
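The downtime split above follows directly from the event log. As the text states, switch-in delays and the standby unit's repair count as CM downtime, while the switchback from the standby to the restored active unit does not:

```python
# Downtime attribution for the standby container walkthrough.
cm_down = (109 - 100) + (225 - 217) + (234 - 227)  # 9 + 8 + 7 = 24
switchback_down = 117 - 110                        # 7, not charged to CM
total_down = cm_down + switchback_down
print(cm_down, total_down)  # 24 31
```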

Additional Rules and Assumptions for Standby Containers

1) A container will only attempt to switch if there is an available non-failed item to switch to. If there is no such item, it will then switch if and when an item becomes available. The switching action will be canceled if the failed active unit is restored before a standby item becomes available.
a) As an example, consider the case of unit failing active while unit failed in a quiescent mode. If unit gets restored before unit , then the switch will be initiated. If unit is restored before unit , the switch action will not occur.
2) In cases where not all active units are required, a switch will only occur if the failed combination causes the container to fail.
a) For example, if , , and are in a container for which one unit is required to be operating and and are active with on standby, then the failure of either or will not cause a switching action. The container will switch to only if both and are failed.
3) If the container switch is failed and a switching action is required, the switching action will occur after the switch has been restored if it is still required (i.e., if the active unit is still failed).
4) If a switch fails during the delay time of the switching action based on the reliability distribution (quiescent failure mode), the action is still carried out unless a failure based on the switch probability/restarts occurs when attempting to switch.
5) During switching events, the change from the operating to quiescent distribution (and vice versa) occurs at the end of the delay time.
6) Whether the components operate while the system is down is now defined at the component level. (This differs from BlockSim 7, in which the contained items inherited this option from the container.) Two rules apply:
a) If a path inside the container is down, blocks inside the container that are in that path do not continue to operate.
b) Blocks that are up do not continue to operate while the container is down.
7) A switch can have a repair distribution and maintenance properties without having a reliability distribution.
a) This is because maintenance actions are performed regardless of whether the switch failed while waiting to switch (reliability distribution) or during the actual switching process (fixed probability).
8) A switch fails during switching when the restarts are exhausted.
9) A restart is executed every time the switch fails to switch (based on its fixed probability of switching).
10) If a delay is specified, restarts happen after the delay.
11) If a container brings the system down, the container is responsible for the system going down (not the blocks inside the container).
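Rule 2 above (a switch occurs only when the failed combination brings the container down) amounts to comparing the number of operating active units against the number required. The helper below is a hypothetical sketch of that decision, not a BlockSim function:

```python
def switch_needed(active_states, required):
    """Whether a standby container must switch in a standby unit: only when
    the count of operating active units drops below the number required.
    `active_states` maps each active unit to True (up) or False (failed)."""
    return sum(active_states.values()) < required

# Rule 2's example: A and B active, C on standby, one unit required.
print(switch_needed({"A": False, "B": True}, required=1))   # False: B covers
print(switch_needed({"A": False, "B": False}, required=1))  # True: switch to C
```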

Load Sharing Containers

When you simulate a diagram that contains a load sharing container, the container defines the load that is shared. A load sharing container has no failure or repair distributions. The container itself is considered failed if all the blocks inside the container have failed (or if enough blocks have failed to violate a k-out-of-n requirement).

To illustrate this, consider the following container with items and in a load sharing redundancy.

Assume that fails every 100 tu and every 120 tu if both items are operating, and that they fail in half that time if either is operating alone (i.e., the items age twice as fast when operating alone). They both get repaired in 5 tu.

Behavior of a simple load sharing system.

The system event log is shown in the figure above and is as follows:

1. At 100, fails. It takes 5 tu to restore .
2. From 100 to 105, is operating alone and is experiencing a higher load.
3. At 115, fails. would normally be expected to fail at 120, however:
a) From 0 to 100, it accumulated the equivalent of 100 tu of damage.
b) From 100 to 105, it accumulated 10 tu of damage, which is twice the damage since it was operating alone. Put another way, aged by 10 tu over a period of 5 tu.
c) At 105, is restored but has only 10 tu of life remaining at this point.
d) fails at 115.
4. At 120, is repaired.
5. At 200, fails again. would normally be expected to fail at 205; however, the failure of at 115 to 120 added additional damage to . In other words, the age of at 115 was 10; by 120 it was 20. Thus it reached an age of 100, and failed, 95 tu after its repair, at 200.
6. is restored by 205.
7. At 235, fails. would normally be expected to fail at 240; however, the failure of at 200 caused the reduction.
a) At 200, had an age of 80.
b) By 205, had an age of 90.
c) fails 30 tu later at 235.
8. The system itself never failed.
8. The system itself never failed.
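The accelerated aging in this walkthrough reduces to doubling the damage rate while a unit operates alone. A minimal sketch, reproducing the second unit's first failure time from the log (`age_accumulated` is a hypothetical helper):

```python
def age_accumulated(duration, alone):
    """Damage accumulated over `duration` tu of operation in this two-unit
    load sharing example: an item ages twice as fast when operating alone."""
    return duration * (2 if alone else 1)

# The second unit (life 120 tu): 100 tu of shared operation, then 5 tu alone
# while the first unit is repaired, gives an age of 110 at t = 105, leaving
# only 10 tu of life, so it fails at 115 as in the walkthrough.
age_at_105 = age_accumulated(100, alone=False) + age_accumulated(5, alone=True)
print(age_at_105)                 # 110
print(105 + (120 - age_at_105))   # 115
```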

Additional Rules and Assumptions for Load Sharing Containers

  1. Whether the components operate while the system is down is now defined at the component level. (This differs from BlockSim 7, in which the contained items inherited this option from the container.) Two rules apply:
a) If a path inside the container is down, blocks inside the container that are in that path do not continue to operate.
b) Blocks that are up do not continue to operate while the container is down.
2. If a container brings the system down, the block that brought the container down is responsible for the system going down. (This is the opposite of standby containers.)

State Change Triggers

Consider a case where you have two generators, and one (A) is primary while the other (B) is standby. If A fails, you will turn B on. When A is repaired, it then becomes the standby. State change triggers (SCT) allow you to simulate this case. You can specify events that will activate and/or deactivate the block during simulation. The figure below shows the options for state change triggers in the Block Properties window.

Once you have enabled state change triggers for a block, there are several options.

  • Initial state allows you to specify the initial state for the block, either ON or OFF.
  • State upon repair allows you to specify the state of the block after its repair. There are four choices: Always ON, Always OFF, Default ON unless SCT Overridden and Default OFF unless SCT Overridden. In the Assumptions sections, we will explain what these choices mean and illustrate them using an example.
  • Add a state change trigger allows you to add a state change trigger to the block.

The state change trigger can either activate or deactivate the block when items in specified maintenance groups go down or are restored. To define the state change trigger, specify the triggering event (i.e., an item goes down or an item is restored), the state change (i.e., the block is activated or deactivated) and the maintenance group(s) in which the triggering event must happen in order to trigger the state change. Note that the current block does not need to be part of the specified maintenance group(s) to use this functionality.
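A state change trigger, as defined above, pairs a triggering event with a resulting state change and the maintenance groups to watch. This can be sketched as a small data structure; all names below (`StateChangeTrigger`, `fires`, the enums) are hypothetical illustrations, not BlockSim's internal API:

```python
from dataclasses import dataclass
from enum import Enum

class Event(Enum):
    ITEM_DOWN = "an item goes down"
    ITEM_RESTORED = "an item is restored"

class Action(Enum):
    ACTIVATE = "activate the block"
    DEACTIVATE = "deactivate the block"

@dataclass
class StateChangeTrigger:
    """Illustrative model of one trigger: event + state change + watched groups."""
    triggering_event: Event
    state_change: Action
    maintenance_groups: list  # groups in which the event must occur to trigger

    def fires(self, event: Event, group: str) -> bool:
        # True if an event in the given maintenance group triggers this state change.
        return event is self.triggering_event and group in self.maintenance_groups

# e.g., "activate this block if any item in maintenance group 2_A goes down"
sct = StateChangeTrigger(Event.ITEM_DOWN, Action.ACTIVATE, ["2_A"])
```

Note that, as stated above, the block owning the trigger need not belong to the watched group.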


The State Change Trigger window is shown in the figure below:

Assumptions

  • A block cannot trigger events on itself. For example, if Block 1 is the only block that belongs to MG 1 and Block 1 is set to be turned ON or OFF based on MG 1, this trigger is ignored.
  • OFF events cannot trigger other events. This means that blocks cannot be turned OFF in cascade. For example, if Block 1 going down turns OFF Block 2 and Block 2 going down turns OFF Block 3, a failure of Block 1 will not turn OFF Block 3; Block 3 would have to be directly associated with downing events of Block 1 for this to happen. The reason for this restriction is that allowing OFF events to trigger other events could cause circular references. For example, suppose four blocks A, B, C and D are in parallel, each belonging to its own maintenance group (MG A, MG B, MG C and MG D), with A, B and D initially ON and C initially OFF. A failure of Block A turns OFF Block B, Block B going OFF turns ON Block C, and finally Block C turns OFF Block D. If, in addition, Block D going OFF turned Block B ON, Block B going ON turned Block C OFF, and Block C going OFF turned Block D ON, the triggers would chase each other in a circular reference.
  • Upon restoration states:
    • Always ON: Upon restoration, the block will always be on.
    • Always OFF: Upon restoration, the block will always be off.
    • Default ON unless SCT overridden: Upon restoration, the block will be on unless a request is made to turn this block off while the block is down and the request is still applicable at the time of restoration. For example, assume Block A's state upon repair is ON unless SCT overridden. If a failure of Block B triggers a request to turn Block A off but Block A is down, when the maintenance for Block A is completed, Block A will be turned off if Block B is still down.
    • Default OFF unless SCT overridden: Upon restoration, the block will be off unless a request is made to turn this block on while the block is down and the request is still applicable at the time of restoration.
  • Maintenance while block is off: Maintenance tasks will be performed. At the end of the maintenance, "upon restoration" rules will be checked to determine the state of the block.
  • Assumptions for phases: In Versions 10 and earlier, the state of a block (on/off) was determined at the beginning of each phase based on the "Initial state" setting of the block for that phase. Starting in Version 11, the state of the block transfers across phases instead of resetting based on initial settings.
  • If there are multiple triggering requests put on a block when it is down, only the latest one is considered. The latest request cancels all requests before it. For example, Block A fails at 20 and is down until 70. Block A has state change triggers enabled such that it will be activated when Block B fails and deactivated when Block C fails. Block B fails at 30 and Block C fails at 40. Thus, while Block A is down from 20 to 70, Block B puts in a request at 30 to turn Block A ON, and Block C puts in another request at 40 to turn it OFF. According to this assumption, the request from Block C at 40 cancels the request from Block B at 30, so only the request from Block C is considered. Thus, Block A is turned OFF at 70 when its repair is complete.
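The "latest request wins" rule, combined with the "Default ... unless SCT overridden" settings, can be sketched as a small resolution function. The function name and the tuple layout are illustrative assumptions, not BlockSim's API:

```python
def resolve_state_upon_repair(default_state, requests):
    """Resolve a block's state when its repair completes.

    default_state: "ON" or "OFF" (the 'Default ... unless SCT overridden' setting).
    requests: chronological list of (time, requested_state, still_applicable)
    tuples made while the block was down. Per the rule above, only the latest
    request matters; it cancels all earlier ones. A request takes effect only
    if it is still applicable at the time of restoration.
    """
    if requests:
        _, state, still_applicable = requests[-1]  # latest request wins
        if still_applicable:
            return state
    return default_state

# The example above: Block A is down from 20 to 70; Block B requests ON at 30,
# Block C requests OFF at 40, and both requests are still applicable at 70.
state = resolve_state_upon_repair("ON", [(30, "ON", True), (40, "OFF", True)])
# state == "OFF": the later request from Block C cancels the one from Block B.
```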

Example: Using SCT for Standby Rotation

This example illustrates the use of state change triggers in BlockSim (Version 8 and above) by using a simple standby configuration. Note that this example could also be done using the standby container functionality in BlockSim.

More specifically, the following settings are illustrated:

  1. State Upon Repair: Default OFF unless SCT overridden
  2. Activate a block if any item from these associated maintenance group(s) goes down

Problem Statement

Assume three devices, A, B and C, in a standby redundancy configuration (i.e., only one unit is needed for system operation). The system begins with device A working. When device A fails, B is turned on and repair actions are initiated on A. When B fails, C is turned on, and so forth.

BlockSim Solution

The BlockSim model of this system is shown in the figure below.

  • The failure distributions of all three blocks follow a Weibull distribution with Beta = 1.5 and Eta = 1,000 hours.
  • The repair distributions of the three blocks follow a Weibull distribution with Beta = 1.5 and Eta = 100 hours.
  • After repair, the blocks are "as good as new."

There are three maintenance groups, 2_A, 2_B and 2_C, set as follows:

  • Block A belongs to maintenance group 2_A.
    • It has a state change trigger.
      • The initial state is ON and the state upon repair is "Default OFF unless SCT overridden."
      • If any item from maintenance group 2_C goes down, then activate this block.


  • Block B belongs to maintenance group 2_B.
    • It has a state change trigger.
      • The initial state is OFF and the state upon repair is "Default OFF unless SCT overridden."
      • If any item from maintenance group 2_A goes down, then activate this block.


  • Block C belongs to maintenance group 2_C.
    • It has a state change trigger.
      • The initial state is OFF and the state upon repair is "Default OFF unless SCT overridden."
      • If any item from maintenance group 2_B goes down, then activate this block.


  • All blocks A, B and C are as good as new after repair.

System Events

The system event log for a single run through the simulation algorithm is shown in the Block Up/Down plot below, and is as follows:

  1. At 73 hours, Block A fails and activates Block B.
  2. At 183 hours, Block B fails and activates Block C.
  3. At 215 hours, Block B is done with repair. At this time, Block C is operating, so according to the settings, Block B is standby.
  4. At 238 hours, Block A is done with repair. At this time, Block C is operating. Thus Block A is standby.
  5. At 349 hours, Block C fails and activates Block A.
  6. At 396 hours, Block A fails and activates Block B.
  7. At 398 hours, Block C is done with repair. At this time, Block B is operating. Thus Block C is standby.
  8. At 432 hours, Block A is done with repair. At this time, Block B is operating. Thus Block A is standby.
  9. At 506 hours, Block B fails and activates Block C.
  10. At 515 hours, Block B is done with repair and stays standby because Block C is operating.
  11. At 536 hours, Block C fails and activates Block A.
  12. At 560 hours, Block A fails and activates Block B.
  13. At 575 hours, Block B fails and makes a request to activate Block C. However, Block C is under repair at the time. Thus when Block C is done with repair at 606 hours, the OFF setting is overridden and it is operating immediately.
  14. At 661 hours, Block C fails and makes a request to activate Block A. However, Block A is under repair at the time. Thus when Block A is done with repair at 699 hours, the OFF setting is overridden and it is operating immediately.
  15. Block B and Block C are done with repair at 682 hours and at 746 hours respectively. However, at these two time points, Block A is operating. Thus they are both standby upon repair according to the settings.
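The rotation logic in the event log above can be reproduced with a small event-driven sketch. To keep the run reproducible, the operating and repair durations below are the fixed values implied by the log (plus one filler lifetime for Block A beyond the simulation horizon) rather than fresh Weibull draws; names such as `rotate` and `pending` are illustrative, not BlockSim internals:

```python
import heapq

def rotate(lifetimes, repairs, t_end):
    """A -> B -> C standby rotation driven by state-change-trigger rules.

    lifetimes[b] / repairs[b] are iterators of successive operating and repair
    durations for block b. Only the operating block consumes life; a block
    finishing repair defaults to standby ("Default OFF unless SCT overridden")
    unless a pending activation request overrides that default.
    """
    nxt = {"A": "B", "B": "C", "C": "A"}
    state = {b: "standby" for b in nxt}   # standby | operating | down
    pending = set()                        # blocks with a pending activation request
    events, log = [], []

    def start(b, t):
        state[b] = "operating"
        heapq.heappush(events, (t + next(lifetimes[b]), b, "fails"))

    start("A", 0.0)                        # initial state: A is ON
    while events:
        t, b, ev = heapq.heappop(events)
        if t > t_end:
            break
        log.append((t, b, ev))
        if ev == "fails":
            state[b] = "down"
            heapq.heappush(events, (t + next(repairs[b]), b, "repaired"))
            succ = nxt[b]
            if state[succ] == "standby":
                start(succ, t)             # trigger: activate the next block
            else:
                pending.add(succ)          # successor is down; request waits
        else:                              # repair complete
            if b in pending:               # default OFF overridden by the SCT
                pending.discard(b)
                start(b, t)
            else:
                state[b] = "standby"
    return log

# Durations implied by the event log above (A's final 1000 is filler past t_end).
log = rotate(
    lifetimes={b: iter(v) for b, v in
               {"A": [73, 47, 24, 1000], "B": [110, 110, 15], "C": [166, 30, 55]}.items()},
    repairs={b: iter(v) for b, v in
             {"A": [165, 36, 139], "B": [32, 9, 107], "C": [49, 70, 85]}.items()},
    t_end=750.0)
```

With these inputs, the sketch reproduces the log above, including events 13 and 14, where a failure requests activation of a block that is still under repair and the request is honored the moment that repair completes.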


Discussion

Even though the examples and explanations presented here are deterministic, the sequence of events and the logic used to view the system are the same as those used during simulation. The difference is that during simulation the process is repeated multiple times, and the results presented are averages over the multiple runs.

Additionally, multiple metrics and results are presented and defined in this chapter. Many of these results can also be used to obtain additional metrics not explicitly given in BlockSim's Simulation Results Explorer. As an example, to compute mean availability with inspections but without PMs, the explicit downtimes given for each event could be used. Furthermore, all of the results given are for operating times starting at zero to a specified end time (although the components themselves could have been defined with a non-zero starting age). Results for a starting time other than zero could be obtained by running two simulations and looking at the difference in the detailed results where applicable. As an example, the difference in uptimes and downtimes can be used to determine availabilities for a specific time window.
