Articles in Category: Safran

Why you should upgrade from Oracle Primavera Risk Analysis to Safran Risk - Part 6

I’m Ian Nicholson, VP Solutions at Emerald Associates. I have been working with Oracle Primavera Risk Analysis (OPRA) since 2001 when it was Pertmaster, Emerald being the exclusive Canadian distributor for Pertmaster until their acquisition by Primavera in 2006.

In this series of blogs, I will explain why I feel that all OPRA users should upgrade to Safran Risk (SR).

Part 6: Correlation (and the Central Limit Theorem)

Correlation is a mutual relationship between two or more events. In schedule risk analysis, it is used to indicate that one risk’s probability is likely to increase (or decrease) if another risk occurs. For example, if the first piling activity on a project goes badly, chances are that all the piling activities will go badly, and there is a further (weaker) chance that all the excavation activities will also go badly. That is fairly intuitive, and we need the ability to build it into our risk models.

Correlation has another purpose: to counteract a related phenomenon called the Central Limit Theorem (CLT). Many articles have been written about the CLT, but to summarize the issue: if you have a number of similar-duration activities linked in series with the same uncertainties, then under purely random sampling, when one activity goes long another tends to go short, and the deviations largely cancel each other out, leading to a loss of extreme values in the probabilistic analysis.
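The effect is easy to demonstrate with a quick Monte Carlo sketch. This is an illustration only (the durations and uncertainty ranges are invented, not taken from any real model): one 100-day activity versus ten independent 10-day activities in series, both with the same ±20% triangular uncertainty.

```python
import random

# Illustrative sketch of the CLT effect: independent sampling of many
# small activities narrows the combined distribution compared with a
# single activity of the same total duration and uncertainty.

random.seed(42)
ITERATIONS = 5000

def tri(duration):
    # Triangular uncertainty: -20% / most likely / +20%
    return random.triangular(0.8 * duration, 1.2 * duration, duration)

one_activity = [tri(100) for _ in range(ITERATIONS)]
ten_activities = [sum(tri(10) for _ in range(10)) for _ in range(ITERATIONS)]

spread_one = max(one_activity) - min(one_activity)
spread_ten = max(ten_activities) - min(ten_activities)
print(f"Spread, 1 activity:   {spread_one:.1f} days")
print(f"Spread, 10 activities: {spread_ten:.1f} days")  # noticeably narrower
```

The ten-activity spread comes out markedly narrower even though both models have the same deterministic total and the same per-unit uncertainty, which is exactly the loss of extreme values described above.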

Some argue that in order to combat the CLT and its impact on the model, correlation is absolutely required, while others argue that so long as you have a high-level schedule, the CLT is a non-issue and correlation is not required. Personally, I like working with the project team’s live schedule, which tends to be a Level 3 or Level 4 schedule, where correlation is often a big issue. We’ll leave the discussion of which schedule level risk analysis should be performed on for another blog and concentrate on the CLT here.



Figure 1: The effect of the Central Limit Theorem on a one-activity schedule and a ten-activity schedule with the same overall deterministic duration and uncertainty distribution. The P0 duration is 80 days vs 90 days and the P100 duration is 120 days vs 110 days, respectively. The CLT has lopped 10 days off each end of our distribution in the case of the ten-activity model.

Applying correlation can correct the impact of the CLT by preventing the cancellation that occurs in a purely random sampling. Applying an 80% correlation between the risks leads to the following result:


Figure 2: The effects of applying correlation to correct the Central Limit Theorem. By applying a correlation to the uncertainties on the ten activity model, we can closely approximate the one activity model.
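The correction can be sketched in code. The sketch below is a hedged illustration using a Gaussian copula with a shared factor, which is one common way to induce correlation between sampled uncertainties, not necessarily Safran's sampling method, and the numbers (ten 10-day activities, ±20% triangular, 80% correlation) are purely illustrative.

```python
import random
from statistics import NormalDist

# Sketch: correlate ten activity uncertainties via a shared normal factor
# (factor loading sqrt(RHO) gives pairwise correlation RHO), then map each
# correlated draw through the inverse triangular CDF. High correlation
# restores the spread that independent sampling (the CLT) had removed.

random.seed(1)
ITERATIONS = 5000
RHO = 0.8  # target pairwise correlation between activity uncertainties

def inv_triangular(p, low, mode, high):
    # Inverse CDF of a triangular distribution
    cut = (mode - low) / (high - low)
    if p < cut:
        return low + (p * (high - low) * (mode - low)) ** 0.5
    return high - ((1 - p) * (high - low) * (high - mode)) ** 0.5

nd = NormalDist()

def correlated_total():
    z_common = random.gauss(0, 1)
    total = 0.0
    for _ in range(10):
        z = (RHO ** 0.5) * z_common + ((1 - RHO) ** 0.5) * random.gauss(0, 1)
        total += inv_triangular(nd.cdf(z), 8, 10, 12)
    return total

totals = [correlated_total() for _ in range(ITERATIONS)]
print(f"Spread with {RHO:.0%} correlation: {max(totals) - min(totals):.1f} days")
```

With the correlation applied, the ten-activity spread widens back toward the one-activity model's 40-day range; at 100% correlation the two models would match exactly.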

So, given that we need correlation in our model to reflect the interdependencies between risks, and we also need correlation to combat the CLT, let’s look at how correlation is handled in OPRA and in Safran Risk.

In OPRA, correlation is assigned between activities. This means that in order to combat the Central Limit Theorem for my 10 activities, I need to link all 10 together in the correlation module. It’s possible but tedious, and it gets worse as I have more and more activities to correlate. What’s confusing for users is that they have to decide which activity is the parent and which activities are children in the correlation: do I choose the first one in the string, or the longest one, or something else? It’s not well documented. It gets even more difficult if I have multiple correlations between activities with multiple uncertainties or risks: without a PhD in statistics, how do I know what to correlate and how strong the correlation should be?



Figure 3: OPRA Correlation Assignment

SR takes a different approach – risks are correlated, not activities. This makes a lot of sense: if an activity carries multiple risks, the correlation applies only to the risk in question, not to the entire activity. Similarly, if a risk applies to multiple activities, the correlation is automatically applied across all of them (but can be turned off if necessary).

There are actually a couple of ways to handle correlation in Safran Risk.

The first is to correlate the impact of a risk on all of the activities that it applies to – unchecking the box will apply 100% correlation to all of the activities that the risk is assigned to:



Figure 4: Safran single risk impact correlation

But what if we need to correlate multiple risks to each other? In OPRA, the approach of correlating activities makes this almost impossible; how would you figure out to what degree two activities with multiple risks should be correlated? SR has this covered – by correlating risks together, the appropriate impacts are calculated automatically.


Figure 5: Safran Risk Correlation Mapping Matrix

Not only does this mapping allow the user to correlate risks to each other but it also allows the probability of occurrence of each risk and the schedule impacts of the risks to be correlated independently. Further, the Pre- and Post-Mitigated positions can be differently correlated. Cost risks can also be correlated (not shown above). 

The Safran approach makes understanding and applying correlation much easier. When correlation is clear and easy, the user is more likely to apply it, leading to better results (and hopefully less discussion of the Central Limit Theorem).

Why you should upgrade from Oracle Primavera Risk Analysis to Safran Risk - Part 5

I’m Ian Nicholson, VP Solutions at Emerald Associates. I have been working with Oracle Primavera Risk Analysis (OPRA) since 2001 when it was Pertmaster, Emerald being the exclusive Canadian distributor for Pertmaster until their acquisition by Primavera in 2006.

In this series of blogs, I will explain why I feel that all OPRA users should upgrade to Safran Risk (SR).

Part 5: Risks and Uncertainties

When I started working with OPRA (then Pertmaster), all risks were modelled as uncertainties on activities, usually as three-point estimates. We would add activities to the model to simulate risk events occurring in the schedule, and we would use task existence to model the likelihood of occurrence. This approach, while it worked, was somewhat tedious when performing what-if analysis, and it made it impossible to trace the impact of a risk that affected multiple activities, since the tornado graph could only show the impact of each activity, not the risk event that drove those activities in the first place.

The Pertmaster development team improved the process by introducing a Risk Register that would allow risk events to be modelled separately from the schedule, and then an impacted risk plan could be generated by creating sub-tasks for each risk event. This worked well for what-if analysis as changes could be made and a revised model generated quickly. The tornado chart was also changed so that a user could see the impact of risks that landed on multiple activities.

But what about uncertainties? We were always stuck with modelling uncertainties on our activities in OPRA. So if I had an uncertainty that impacted multiple activities, I would have to either input the uncertainty onto multiple activities (and manually calculate the impact durations) or use the QuickRisk function to generate the impacts. The problem with using QuickRisk is that it is too easy to overwrite an existing value with the new value and not realize your previous data had been lost.

There was another issue with uncertainties; how could we model an activity that had more than one uncertainty on it? For example, I might have a piling activity that had an uncertainty based on the productivity of the piling crew as well as an uncertainty based on the accuracy of the soils report. As the person building the model, I would have to calculate the combined uncertainty of these items.

The OPRA team, to their credit, did start on development of a module called Risk Factors that worked similarly to the Risk Register and allowed multiple uncertainties to be added to one (or many) activities. Unfortunately, it was never really completed to a level that would make it useful. With Oracle abandoning new development on the tool, this module will never be completed.

In developing Safran Risk (SR), the team decided to combine the modelling of risk events and uncertainties into a module called Project Risks. This simple step makes building a what-if model much easier, since all “risks” – whether risk events or uncertainties – can be turned on or off as required. Risk events and uncertainties can be applied to as many activities as required, and the impact can be traced directly back to the item in the risk module.

To model a risk event, the probability will be set to less than 100% with (usually) an absolute impact; whereas for uncertainties, the probability will be set to 100% and the impact will be (usually) a relative percentage of the original duration. However, a user can mix and match probability and impact types to build the model as they need. It is also possible to set pre- and post-mitigated positions for all Risk Events and Uncertainties which allows for great flexibility when conducting what-if analysis of mitigation strategies.
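The distinction can be sketched as sampling logic. This is an illustrative toy, not Safran's engine, and all of the numbers (the 30% probability, the 15-day impact, the uncertainty range) are invented:

```python
import random

# Sketch of the two "risk" flavours described above:
#  - a risk event fires with probability < 100% and adds an absolute delay;
#  - an uncertainty always applies (probability = 100%) and scales the
#    activity's original duration by a relative factor.

random.seed(7)

def sample_risk_event(probability, impact_days):
    # Risk event: occurs or not on each iteration, absolute impact
    return impact_days if random.random() < probability else 0.0

def sample_uncertainty(duration, low_pct, likely_pct, high_pct):
    # Uncertainty: always occurs, relative impact on the duration
    return duration * random.triangular(low_pct, high_pct, likely_pct)

duration = 20  # deterministic activity duration, days
sampled = sample_uncertainty(duration, 0.9, 1.0, 1.3)
sampled += sample_risk_event(0.30, 15)  # 30% chance of a 15-day event
print(f"Sampled duration: {sampled:.1f} days")
```

As the text notes, these are only the usual conventions; a user can mix and match probability and impact types as the model requires.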

In addition to Uncertainties and Risk Events (collectively called Standard Risks in the tool), the Project Risks module also allows the user to configure Estimate Uncertainties and Calendar Risks. Of course each risk can have a cost component that can be modelled within the tool as part of the integrated cost and schedule risk analysis approach that makes SR so powerful.

After Project Risks are entered in the system, the risks can be mapped to the activities using the Risk Mapping module. For anyone who tried to use the Risk Register module in OPRA on a large schedule and struggled with pages and pages of activities, the Risk Mapping module is much easier to use because activity layouts can be used to filter and group activities for easy mapping to the risks.

The really great feature is that SR will quickly show the total impact of risk events and uncertainties on any activity prior to running the risk analysis. This is very useful to find errors where a risk event or an uncertainty is out of range of realistic values. In OPRA, it was very easy to assign a risk duration out of all proportion to the original task duration (in fact, the way that ranging worked in the Risk Register model made it hard to get the durations correct without a great deal of fiddling).

So, what if a user just wants to enter three point estimates on each activity? This can be done one of two ways in Safran Risk:
  1. Three point estimates are still supported and can be entered directly on the activity (this is useful if the user plans to import an old OPRA file or just wants to continue their old process in newer software).
  2. You can create a one-to-one mapping between line items in the risk tab and activities. This is preferred because when the user is ready to incorporate risk events in the model, they can be easily added to the model. Creating this mapping is easier than it sounds because it is so easy to import and export to and from both the Risk and Risk Mapping modules.
Ultimately, if you really want to develop risk models that have both uncertainties and risk events (and most models do), Safran is much easier to use than OPRA. If you also need to develop integrated cost and schedule risk models (covered in my last blog post), Safran is the only game in town.

Why you should upgrade from Oracle Primavera Risk Analysis to Safran Risk - Part 4

I’m Ian Nicholson, VP Solutions at Emerald Associates. I have been working with Oracle Primavera Risk Analysis (OPRA) since 2001 when it was Pertmaster, Emerald being the exclusive Canadian distributor for Pertmaster until their acquisition by Primavera in 2006.


In this series of blogs, I will explain why I feel that all OPRA users should upgrade to Safran Risk (SR).


Part 4: Analysis by Exclusion


A few years ago, I was working with a large mining company on their risk process. One of their risk specialists mentioned that they were performing risk analysis by exclusion. Naturally, I was curious as to what this was and asked them to show me how it worked.


What they did was to take the risk model in OPRA, run it with all risks, and then turn off one risk at a time and rerun the model. Then they would compare each output either in the Distribution Analyzer or in Excel, so that they could report exactly how much impact each risk had on the project.


The Tornado Chart in OPRA automatically ranks the activities or risks by their impact, but the challenge is that while you can see which activity or risk has the highest impact, you cannot quantify what that impact is. The tornado is based on Pearson’s product-moment correlation, which gives a percentage ranking that project managers find difficult to interpret. So, to answer the question “What is the impact of this risk on the schedule?”, the OPRA user would:
  1. run the model with all risks turned on and record the results;
  2. manually turn off a risk, rerun the model, record the results;
  3. turn the risk back on;
  4. repeat steps 2 and 3 to sample all risks in the project.
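These steps are a textbook candidate for scripting. As a hedged toy sketch (the risks, probabilities and impacts below are invented, and this is not how either OPRA or Safran works internally), the whole loop is only a few lines:

```python
import random

# Sketch of the exclusion loop above, automated: rerun a toy Monte Carlo
# model once per risk with that risk excluded, and report the P70 saving
# against the all-risks baseline.

random.seed(3)
ITERATIONS = 2000
BASE_DURATION = 100  # deterministic duration, days

risks = {  # name: (probability, impact_days) -- invented values
    "Testing Overrun": (0.6, 40),
    "Design": (0.5, 30),
    "Ash Cloud": (0.2, 50),
}

def p70(excluded=None):
    results = []
    for _ in range(ITERATIONS):
        total = BASE_DURATION
        for name, (prob, impact) in risks.items():
            if name != excluded and random.random() < prob:
                total += impact
        results.append(total)
    results.sort()
    return results[int(0.7 * ITERATIONS)]

baseline = p70()
for name in risks:
    print(f"Excluding {name}: saves {baseline - p70(excluded=name)} days")
```

A computer runs this in seconds and never forgets to turn a risk back on; a person repeating it by hand across a real schedule takes days and can easily slip.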
In the case of this client, they said that it would generally take about a week to perform the exercise on a large construction schedule, more if there were changes in the model or after a mitigation strategy session. This is simply too time consuming.


Another issue: They also had to use the Risk Factors module of OPRA to make the analysis work, making them one of the only clients of ours who ever used this module. This module works by allowing uncertainties to be modelled similarly to the Risk Register. This allowed uncertainties of the same type (Risk Factors) to be grouped for analysis and tracking.


While I could see the value of the work being done, the effort required was much too high. One of the tenets of working efficiently is that if you need to do something over and over again, you should look at automating it. Computers are good at repetitive tasks, people generally are not; automating repetitive tasks not only reduces time but also improves accuracy. For example, if you were to forget to turn Risk Factor #9 back on before turning off Risk Factor #10 for the 10th iteration of the analysis-by-exclusion effort, the project team might choose to act on the wrong Risk Factor because its impact was overstated.


I wondered if the VB capability in OPRA could assist in automating this task. While I didn’t use it much, I had heard that a lot of processes could be automated using the VB feature. So, I asked my friend and colleague Russell Johnson, the ex-CTO of Pertmaster, whether analysis by exclusion could be automated using the VB feature of OPRA. His answer was:


While it’s technically possible there are a few big challenges.


1. There was never a way with OPRA to create a native risk driver event (we did create a prototype module, risk factors, but this now means your model has new stuff added to it which can be confusing). So the first challenge is just creating and identifying a model with driving risk factors.


2. There is no way to natively store or process the results. Since you are doing something outside the normal scope of what OPRA does, you'd have to find a new way to store and display the results. You can't, for example, manipulate the built-in tornado chart.


3. Finally, the speed is an issue. For various reasons OPRA is slow compared with Safran, so whatever you do will take much longer (like days vs mins, if it can even do it).


The other big issue is that OPRA dropped VB support years ago, so the argument is moot.


The developers of Safran Risk (SR) saw OPRA users performing this tedious, manual task and decided to automate it. The results are amazing: 40+ hours of analysis in OPRA now takes minutes in SR, and the chance of making a mistake is effectively zero.


So how does analysis by exclusion work in SR? Let’s take a look.


First of all, the analysis can be performed in a single pass or in multiple passes.
  1. In single pass mode, it runs through each risk once and shows an output with each risk in the plan individually excluded (essentially the same exercise my client was performing manually).
  2. In multiple pass mode, the system runs the number of passes you specify, removing the top risk identified in each pass before starting the next pass, with previously removed risks left turned off. This prevents large-impact risks from overshadowing lower-impact risks and shows the cumulative effect of mitigating each additional risk.
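The multiple-pass mode can be sketched as a greedy loop. Again this is a toy with invented risks and numbers, not Safran's implementation: each pass finds the remaining risk whose exclusion saves the most at P70, then locks it out for all subsequent passes.

```python
import random

# Sketch of multiple-pass analysis by exclusion: after each pass,
# permanently exclude the risk with the largest P70 saving before
# running the next pass.

random.seed(11)
ITERATIONS = 2000
BASE_DURATION = 100  # days

risks = {"Testing Overrun": (0.6, 40), "Design": (0.5, 30), "Ash Cloud": (0.2, 50)}

def p70(excluded):
    results = []
    for _ in range(ITERATIONS):
        total = BASE_DURATION
        for name, (prob, impact) in risks.items():
            if name not in excluded and random.random() < prob:
                total += impact
        results.append(total)
    results.sort()
    return results[int(0.7 * ITERATIONS)]

excluded = set()
current = p70(excluded)
for _ in range(len(risks)):
    # Remaining risk whose exclusion saves the most days at P70
    best = max(risks.keys() - excluded, key=lambda r: current - p70(excluded | {r}))
    excluded.add(best)
    new_p70 = p70(excluded)
    print(f"Pass {len(excluded)}: exclude {best}, saves {current - new_p70} days")
    current = new_p70
```

Because each pass starts from the previous pass's reduced model, the per-pass savings need not match the single-pass figures, which is exactly the interaction effect discussed below.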
This is the result of the single pass analysis for all of the risks in the demo “Drone” project.

To show this better, let’s look at only one risk and one opportunity being removed compared to the original curve.
In this example, the black line represents all risks turned on (the “None” line) and the green line represents the same model with only the Testing Overrun risk turned off. This indicates that at P70, we would save 33 days if we could eliminate the Testing Overrun risk.


You can also see the effect of an opportunity in the purple line. The “Outsource” risk actually represents an opportunity. This shows that when the opportunity happens, the project duration is shorter (black line) than when the opportunity doesn’t happen (purple line).


However, the cumulative saving of removing multiple risks is not entirely clear, since turning off additional risks may or may not save the sum of the savings obtained individually. In this example, if we turned off the top 5 risks, we would expect to save 103 days. However, it is not that simple, since turning off one risk may reduce the effect of another, particularly if there is correlation between the risks.


We could do this manually by turning off the top risk, running the analysis, running the next risk, etc. but this is even more manual work.


To understand the interaction of the top risks, we run the same analysis using multiple passes, turning off the top 5 risks cumulatively (i.e. pass one has only Testing Overrun turned off, pass two has Testing Overrun and Design turned off, etc.).



Again, looking at the P70 values:
  1. When the Testing Overrun risk is excluded, the schedule improves by 33 days (the same as the single pass).
  2. When Testing Overrun and Design are excluded, the schedule improves by another 33 days (a change from the 29 days of the single pass)
  3. When Testing Overrun, Design and Ash Cloud are excluded, the schedule improves by another 21 days (a reduction from the single pass result of 23 days)
  4. The total saving when we remove the top 5 risks is 103 days. This is the same result as when we ran them individually, but the individual savings are different.
This is great information, since the project team can see what the effect of removing these risks would be on the project. But what about the costs of mitigation vs the cost of the risk?

In a previous blog, I wrote about the advantages of integrated cost and schedule analysis and here, through the power of the integrated model, I can look at the same information but on a cost basis rather than only a schedule basis.

Here is what our cost Analysis by Exclusion looks like for all the risks. Notice that there are a few more risks shown in this display, since there are now cost risks that do not have a schedule impact included.


Now we can tell our Project Manager that by removing the Design Specification risk and ensuring that the Outsource opportunity occurs, we can save 49 days and $295k. Note that any costs associated with the mitigation strategies will be included in the model.

Why you should upgrade from Oracle Primavera Risk Analysis to Safran Risk - Part 3

I’m Ian Nicholson, VP Solutions at Emerald Associates. I have been working with Oracle Primavera Risk Analysis (OPRA) since 2001 when it was Pertmaster, Emerald being the exclusive Canadian distributor for Pertmaster until their acquisition by Primavera in 2006.


In this series of blogs, I will explain why I feel that all OPRA users should upgrade to Safran Risk (SR).


Part 3: Integrated Cost and Schedule Risk Analysis


Many years ago, I was hired by a large multinational oil and gas company considering a new technology project. My role was to conduct a schedule risk analysis in preparation for a go/no-go decision from The Board. Another consultant was conducting a cost risk analysis in parallel to mine. The company expected us to each present our results but not to discuss the results with each other. The results would be independently used when considering whether or not the company was going to invest billions of dollars in their project.


The cost risk consultant and I discussed the issue, and since we agreed that cost and schedule risk are intrinsically linked, we looked at ways that we could combine the two analyses. Our options were limited:
  1. We could build a cost-loaded schedule in OPRA and conduct a joint analysis in that tool. The challenge we faced was that the cost line items didn’t line up with the project schedule in a way that would make this easy to do. Not only that, but only some of the cost risks were schedule related. We would need to build a cost-loaded schedule specifically for the analysis, which, while possible, would take a lot of time and effort.
  2. We could take the results of the schedule analysis and incorporate them into the cost analysis in @Risk. This could be done by creating a single-point duration value or a simple time-based distribution for certain cost items, like our indirect costs. For example, we could say that our site services (trailers, security, staff, etc.) would be required for a P70 value of 60 months as opposed to the deterministic value of 48 months. But this approach lost a great deal of the dynamic aspects of the schedule analysis because the integration was done at a high level.
The oil and gas industry has largely followed Option 2 as the easier approach, but what we really needed was to develop the two models concurrently, so that uncertainties and risks in both cost and schedule impact each other within the Monte Carlo simulations: changes in one affect the other immediately and visibly.
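A minimal sketch of what "concurrently" means here: within each Monte Carlo iteration, the time-dependent costs are priced from that iteration's sampled duration, rather than feeding a single P-value duration into a separate cost model. All rates and ranges below are invented for illustration:

```python
import random

# Sketch of an integrated iteration: the sampled schedule duration drives
# the indirect (time-dependent) cost inside the same iteration, so the
# cost distribution inherits the full shape of the schedule risk.

random.seed(5)
ITERATIONS = 5000
MONTHLY_INDIRECTS = 2.0   # $M per month of site services (invented)
DIRECT_COST = 500.0       # $M deterministic direct cost (invented)

pairs = []
for _ in range(ITERATIONS):
    duration = random.triangular(44, 72, 50)              # months
    direct = DIRECT_COST * random.triangular(0.95, 1.25, 1.05)
    cost = direct + MONTHLY_INDIRECTS * duration          # cost follows the schedule
    pairs.append((duration, cost))

durations = sorted(d for d, _ in pairs)
costs = sorted(c for _, c in pairs)
print(f"P70 duration: {durations[int(0.7 * ITERATIONS)]:.0f} months")
print(f"P70 cost:     ${costs[int(0.7 * ITERATIONS)]:.0f}M")
```

Contrast this with Option 2, where the cost model would only ever see one fixed duration (say, the P70) and the tail interactions between the two distributions are lost.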


Why is it such an advantage to have both in the same model?


There is an old project manager’s joke that says, “You can have it good, you can have it fast, you can have it cheap – but you can only pick two of the three.”
  1. If a project is running late, there will be additional costs associated with indirect costs and in most cases there will be additional costs associated with direct costs as well.
  2. If the project team decides to spend money to save time (mitigate delays), the costs will likely increase.
  3. We may decide to mitigate cost risk by slowing the project and keeping a smaller, more efficient labor force or by moving work to a known fabricator.
A recent study of oil and gas megaprojects in Alberta showed that, on average, there was a 19% cost overrun and a 17% schedule overrun on these very expensive projects. It is certainly no surprise that these numbers are so closely correlated. Yet we make decisions on cost mitigation strategies and schedule mitigation strategies without insight into the impact that our change will make to our schedules and costs. On the oil and gas project that I mentioned earlier, cost and schedule mitigation strategies were considered entirely in isolation.

Figure 1: Integrated Cost and Schedule Risk Process

Often as project managers we get tunnel vision because we get too focused on schedule or cost at the expense of the other. For example, I worked on a turnaround project that had a $120M budget with a 35 day maintenance window. Management communicated that schedule was everything, cost was very much a secondary consideration (so much so that it wasn’t even monitored during the project), so the project team started burning overtime almost from the first shift to maintain the schedule. In the end we completed the work on time (to great fanfare) but months later, when all the invoices were in, we had spent $160M to do so. This caused great distress within the organization. A few heads rolled and the “Full speed ahead, damn the torpedoes” approach was never used within that organization again.

“Schedule pressure dooms more megaprojects than any other single factor” (E. W. Merrow)

What we really need to understand is not just the probability of achieving our end date or the probability of achieving our end cost, but the probability of achieving both concurrently. This is called the Joint Confidence Level (JCL). We want a solution that offers a 70% probability (for example) of achieving both cost and schedule and that will help us to understand the interdependencies between the two.
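Given the iteration results of an integrated model, the JCL calculation itself is simple: it is the fraction of iterations that come in at or under both the schedule target and the cost target. A hedged sketch with invented numbers:

```python
import random

# Sketch of a Joint Confidence Level (JCL) calculation: each Monte Carlo
# iteration yields a (duration, cost) pair; the JCL at a target pair is
# the share of iterations that meet BOTH targets.

random.seed(9)
ITERATIONS = 5000

pairs = []
for _ in range(ITERATIONS):
    duration = random.triangular(44, 72, 50)                          # months
    cost = 500 * random.triangular(0.95, 1.25, 1.05) + 2 * duration   # $M
    pairs.append((duration, cost))

def jcl(duration_target, cost_target):
    hits = sum(1 for d, c in pairs if d <= duration_target and c <= cost_target)
    return hits / ITERATIONS

print(f"JCL at 60 months / $680M: {jcl(60, 680):.0%}")
```

Because duration and cost are linked within each iteration, the JCL is generally lower than the product of the two marginal confidence levels would suggest; plotting the pairs as a scatter gives the JCL diagram shown later.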

The AACE 57R-09 Integrated Cost and Schedule Risk Analysis recommended practice (found here) describes the process of combined cost and schedule risk analysis, and the process is also well described in Dr. David Hulett’s book Integrated Cost-Schedule Risk Analysis (found here).

OK, so now we understand why we need to conduct cost and schedule risk together. But why Safran Risk?

Safran Risk is one of the only tools on the market that evaluates Cost and Schedule Risk together. The beauty of their approach is that costs can be modelled separately or together with activity durations. You can even apportion part of an estimate line item to a schedule activity but leave the rest independent. This gives a lot of flexibility in modelling the risks on a project and avoids the frustration of trying to resource load a traditional CPM schedule to match a cost estimate.

We can also truly understand the impact of our mitigation strategies best by evaluating cost and schedule risks together. Safran Risk makes turning risks on and off for what-if analysis simple and mitigation costs and schedule impacts can be easily modelled.

Finally, we can plot our cost vs schedule risk outcomes using a scatter plot to create a Joint Confidence Level diagram which shows us the probabilities of hitting our cost and schedule targets.

Figure 2: JCL @ 70% confidence – note that the deterministic cost and schedule probability (the star shape) is only 17%.

The Energy Facility Contractors Group (EFCOG), a self-directed group of contractors of U.S. Department of Energy facilities, recently undertook an evaluation of the commercially available tools that can conduct cost and schedule risk analysis together. The purpose of the EFCOG is to promote excellence in all aspects of the operation and management of DOE facilities in a safe, environmentally sound, secure, efficient, and cost-effective manner through the ongoing exchange of information and corresponding improvement initiatives. You can see their report here.

Within this report, EFCOG chose Safran Risk as the best product for those working with Primavera P6 and second best for those working with Microsoft Project. Since most of my clients are working in P6 and need to conduct joint cost and schedule risk analysis, Safran is an obvious choice for those looking to better understand their projects.

Why you should upgrade from Oracle Primavera Risk Analysis to Safran Risk - Part 2

I’m Ian Nicholson, VP Solutions at Emerald Associates. I have been working with Oracle Primavera Risk Analysis (OPRA) since 2001 when it was Pertmaster, Emerald being the exclusive Canadian distributor for Pertmaster until their acquisition by Primavera in 2006.


In this series of blogs, I will explain why I feel that all OPRA users should upgrade to Safran Risk (SR).


Part 2: User Interface


In my last blog post, I discussed the technology used in OPRA vs SR. As I mentioned in that blog, the biggest complaint that we hear about OPRA is that the technology cannot support a large risk model. The second most common complaint that we hear is that the user interface is clunky and moving reports and layouts from one model to another in order to generate consistent outputs is tedious.


When OPRA (at the time called Pertmaster) was re-introduced as a Risk Analysis tool in 2001 (it had previously been a CPM scheduling tool), it had a pretty decent user interface (UI) for the time. It looked like a typical CPM scheduling tool that had an extra “Risk” menu for Risk inputs and extra items added under the “Reports” menu for Risk outputs.


For most risk users of the time, the UI was fine because Schedule Risk Analysis (SRA) was a new and relatively immature concept that was performed infrequently by relatively few people. These users would learn where to find the required items in the Risk and Reports menus. Hey, if you could master P3 or Artemis, Pertmaster should have been a walk in the park! Besides, compared to Primavera’s Monte Carlo add-on, Pertmaster’s UI was a big step forward in usability.




OPRA's Risk Menu

OPRA Reports menu


After nearly 20 years of SRA, things have changed significantly. We now have defined risk maturity models, organizations have made SRA part of their project management methodology, and project teams build their own risk models. More people need to be able to work in the tool and getting them up to speed quickly and easily requires a logical workflow inside the tool.


When Safran developed Safran Risk (SR), they used their experience from the original Pertmaster’s development to modernize the new tool’s user interface and make it easier for users to understand and learn. The first step they took was to change from a menu-based input model to a workflow-based model: SR replaces the menu system with a tab-based sequential workflow, and the user moves from left to right as they build the risk model.



Safran Risk tab based navigation.


The other item of note here is that all of the functionality of Safran’s scheduling tool is also here (a big advantage of building the risk engine on top of the scheduling package). Users can create layouts and filters and share them between users and projects, making application of standard processes and reports much easier than in OPRA.


Does an updated UI make the upgrade worthwhile? Not in and of itself, but it does make training new users much easier and makes it much less likely that a user will miss a step in the process. I personally find that Safran’s UI just makes everything easier. I still occasionally talk to P3 users who recall that its UI was the best ever, but I doubt that they would want to go back and work with it today. I’d love to have a classic sports car (say a TR6) in my garage, but I sure wouldn’t want to have to drive one to work in the Canadian winter!


In my next blog post, I’ll discuss the benefits of integrated cost and schedule risk analysis.

Why you should upgrade from Oracle Primavera Risk Analysis to Safran Risk - Part 1

I’m Ian Nicholson, VP Solutions at Emerald Associates. I have been working with Oracle Primavera Risk Analysis (OPRA) since 2001 when it was Pertmaster, as Emerald was the exclusive Canadian distributor for Pertmaster until their acquisition by Primavera in 2006.


In this series of blogs, I will explain why I feel that all OPRA users should upgrade to Safran Risk (SR).


Part 1: Technology


You might be wondering why I’m starting with such a boring topic. Don’t we really want to discuss bells and whistles and the cool factor items? The reality is that the most common complaints that I hear about OPRA are that it’s slow, it crashes a lot and it’s difficult to move data in and out. So if you’re an OPRA user today, it’s quite possible that you don’t want to change your process but you just need a more stable, more scalable platform.


When OPRA (at the time called Pertmaster) was re-introduced as a Risk Analysis tool in 2001 (it had previously been a CPM scheduling tool), most desktop scheduling tools used flat files to store their data. P6 had come out only a couple of years prior and had yet to be widely adopted; P3 ran on a Btrieve database, which was pretty much a flat-file based system. The idea of using a database engine backend was still relatively new, so Pertmaster used the more common flat-file structure.


For most risk users of the time, this didn’t matter because Schedule Risk Analysis (SRA) was a new and relatively immature concept that was performed infrequently by relatively few people on schedules that were generally built specifically for the risk analysis and had only a few hundred activities. Performance of such a system would be fast enough and the *.rsk output files would only be kept for short periods before being deleted or over-written. It was also unlikely that more than one user would need to access the file at a time.


The thing is, this is no longer the case. Over nearly 20 years of SRA, things have changed significantly. We now have defined risk maturity models, organizations have made SRA part of their project management methodology, and project teams build their own risk models. Standalone schedules for risk analyses are becoming rare and multiple users want to look at the model concurrently.


At the same time, schedules have become larger as more detail is built into the schedule through integration with other systems. Scheduling systems have become more powerful to compensate. Where a large Shutdown/Turnaround (STO) schedule 15 years ago would be 5,000 activities, a large STO schedule is now approaching 100k activities. Making a new dedicated schedule each time that a risk analysis is run (often quarterly) is simply no longer realistic.


Scheduling systems have evolved since 2001, and we need an SRA tool with the same enhancements. The most important of these is a backend database to store the projects, user data, risk data and risk outputs, and to allow concurrent multi-user access. A 64-bit application platform is also required for large schedules.


Unfortunately for those of us who used and loved OPRA, development stopped in 2013 and the last patch was issued in 2015. The platform never got a database backend or moved to a 64-bit application, meaning that the system remains single-user and is limited to schedules under 10k activities. It simply hasn’t evolved the way it needed to in order to stay relevant.


Safran had an advantage in developing their new Safran Risk module: they already had a world-class scheduling program, Safran Project, which runs on a SQL Server or Oracle database with a 64-bit application layer. In Europe and the Middle East, Safran Project is considered a competitor to P6. When Safran started development of SR in 2015, the Safran Project platform was a solid place to start, and enhancements have been released regularly since.


Safran had another advantage that sped up development of Safran Risk: they leveraged the knowledge of the original Pertmaster development team to guide the work and ensure that the lessons of Pertmaster were incorporated into Safran Risk from the start.


In the next blog post, we’ll talk about the Safran user interface and why a modern UI is important to your risk team.