Emerald Blog

Stories, Tips And Tricks From Our Team’s Experiences With Primavera Since 1995

Why you should upgrade from Oracle Primavera Risk Analysis to Safran Risk - Part 6

I’m Ian Nicholson, VP Solutions at Emerald Associates. I have been working with Oracle Primavera Risk Analysis (OPRA) since 2001, when it was Pertmaster; Emerald was the exclusive Canadian distributor for Pertmaster until its acquisition by Primavera in 2006.

In this series of blogs, I will explain why I feel that all OPRA users should upgrade to Safran Risk (SR).

Part 6: Correlation (and the Central Limit Theorem)

Correlation is the mutual relationship between two or more events. In schedule risk analysis, it is used to indicate that one risk’s probability is likely to increase (or decrease) if another risk occurs. For example, if the first piling activity on a project goes badly, chances are that all the piling activities will go badly, and there is a further (weaker) chance that all the excavation activities will also go badly. That seems pretty obvious, and we need the ability to build it into our risk models.

Correlation has another purpose: to counteract a related topic called the Central Limit Theorem (CLT). There have been many articles written about the CLT, but to summarize the issue: if you have a number of similar-duration activities linked in series with the same uncertainties, then under random sampling, when one activity goes long another will go short, and they will cancel each other out, leading to a loss of extreme values in the probabilistic analysis.
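The cancellation effect is easy to reproduce with a few lines of Monte Carlo sampling. The durations and uncertainty range below are hypothetical, chosen only to mirror the one-activity vs ten-activity comparison:

```python
import numpy as np

# One 100-day activity with a -20%/+20% triangular uncertainty, versus
# ten 10-day activities in series, each independently sampled with the
# same relative uncertainty. Both models have the same deterministic total.
rng = np.random.default_rng(0)
N = 20000

one_activity = rng.triangular(80, 100, 120, N)
ten_activities = rng.triangular(8, 10, 12, (N, 10)).sum(axis=1)

# Independent sampling lets long and short draws cancel, so the ten-activity
# model loses the extreme values even though both models have the same mean.
print("one activity   P1/P50/P99:", np.percentile(one_activity, [1, 50, 99]).round(1))
print("ten activities P1/P50/P99:", np.percentile(ten_activities, [1, 50, 99]).round(1))
```

Both distributions centre on 100 days, but the ten-activity model's spread collapses toward the middle, just as the CLT predicts.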

Some argue that in order to combat the CLT and its impact on the model, correlation is absolutely required, while others argue that as long as you have a high-level schedule, the CLT is a non-issue and thus correlation is not required. Personally, I like working with the project team’s live schedule, which tends to be a Level 3 or Level 4 schedule, where correlation is often a big issue. We’ll leave the discussion about which schedule level risk analysis should be performed on for another blog and concentrate on the CLT here.

Figure 1: The effect of the Central Limit Theorem on a one-activity schedule and a ten-activity schedule with the same overall deterministic duration and uncertainty distribution. The P0 duration is 80 days vs 90 days and the P100 duration is 120 days vs 110 days, respectively. The CLT has lopped 10 days off each end of our distribution in the case of the ten-activity model.

Applying correlation can correct the impact of the CLT by preventing the cancellation that occurs in a purely random sampling. Applying an 80% correlation between the risks leads to the following result:

Figure 2: The effects of applying correlation to correct the Central Limit Theorem. By applying a correlation to the uncertainties on the ten activity model, we can closely approximate the one activity model.
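One simple way to sketch how correlation restores the spread is the common-factor trick below. The Gaussian uncertainties and all numbers are assumptions for illustration; this is not how Safran implements correlation internally:

```python
import numpy as np

# Ten 10-day activities, each with a Gaussian duration uncertainty (sd = 1 day).
# An 80% correlation between the uncertainties is induced by mixing a shared
# common factor into each activity's draw.
rng = np.random.default_rng(3)
N, n, rho = 20000, 10, 0.8

common = rng.standard_normal((N, 1))           # one shared draw per iteration
own = rng.standard_normal((N, n))              # each activity's own draw
correlated_noise = np.sqrt(rho) * common + np.sqrt(1 - rho) * own

independent_total = (10 + rng.standard_normal((N, n))).sum(axis=1)
correlated_total = (10 + correlated_noise).sum(axis=1)

# Correlated sampling prevents the cancellation, so the ten-activity model
# recovers most of the spread of a single 100-day activity.
print("independent spread (days):", round(independent_total.std(), 2))
print("correlated spread (days): ", round(correlated_total.std(), 2))
```

With 80% correlation the spread of the ten-activity total is close to three times that of the independent case, approaching the one-activity model.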

So, given that we need to enter correlation in our model to reflect risk interdependencies, and we also need to use correlation to combat the CLT, let’s look at how correlation is handled in OPRA and in Safran Risk.

In OPRA, correlation is assigned between activities. This means that in order to combat the Central Limit Theorem for my 10 activities, I need to link all 10 together in the correlation module. It’s possible, but tedious, and it gets worse as more and more activities need to be correlated. What’s confusing for users is that they have to decide which activity is the parent and which activities are children in the correlation: do I choose the first one in the string, or the longest one, or something else? It’s not well documented. It gets even more difficult with multiple correlations between activities carrying multiple uncertainties or risks: without a PhD in statistics, how do I know what to correlate and how much the correlation should be?

Figure 3: OPRA Correlation Assignment

SR takes a different approach – risks are correlated, not activities. This makes a lot of sense: if you have an activity with multiple risks, the correlation applies only to the risk in question, not to the entire activity. Similarly, if a risk applies to multiple activities, the correlation is automatically applied to all of them (but can be turned off if necessary).

There are actually a couple of ways to handle correlation in Safran Risk.

The first is to correlate the impact of a risk on all of the activities that it applies to – unchecking the box will apply 100% correlation to all of the activities that the risk is assigned to:

Figure 4: Safran single risk impact correlation

But what if we need to correlate multiple risks to each other? In OPRA, the approach of correlating activities makes this almost impossible; how would you figure out to what degree two activities with multiple risks should be correlated? SR has this covered – by correlating risks together, the appropriate impacts are calculated automatically.

Figure 5: Safran Risk Correlation Mapping Matrix

Not only does this mapping allow the user to correlate risks to each other but it also allows the probability of occurrence of each risk and the schedule impacts of the risks to be correlated independently. Further, the Pre- and Post-Mitigated positions can be differently correlated. Cost risks can also be correlated (not shown above). 

The Safran approach makes understanding and applying correlation much easier. When correlation is clear and easy, the user is more likely to apply it, leading to better results (and hopefully less discussion of the Central Limit Theorem).

Why you should upgrade from Oracle Primavera Risk Analysis to Safran Risk - Part 5

I’m Ian Nicholson, VP Solutions at Emerald Associates. I have been working with Oracle Primavera Risk Analysis (OPRA) since 2001, when it was Pertmaster; Emerald was the exclusive Canadian distributor for Pertmaster until its acquisition by Primavera in 2006.

In this series of blogs, I will explain why I feel that all OPRA users should upgrade to Safran Risk (SR).

Part 5: Risks and Uncertainties

When I started working with OPRA (then Pertmaster), all risks were modelled as uncertainties on activities, usually as three point estimates. We would add activities to the model to simulate risk events occurring in the schedule, and we would use task existence to model the likelihood of occurrence. This approach, while it worked, was somewhat tedious when performing what-if analysis, and it made it impossible to trace the impact of a risk that hit multiple activities, since the tornado graph could only show the impact of each activity, not the risk event behind those impacts.

The Pertmaster development team improved the process by introducing a Risk Register that would allow risk events to be modelled separately from the schedule, and then an impacted risk plan could be generated by creating sub-tasks for each risk event. This worked well for what-if analysis as changes could be made and a revised model generated quickly. The tornado chart was also changed so that a user could see the impact of risks that landed on multiple activities.

But what about uncertainties? We were always stuck with modelling uncertainties on our activities in OPRA. So if I had an uncertainty that impacted multiple activities, I would have to either input the uncertainty onto multiple activities (and manually calculate the impact durations) or use the QuickRisk function to generate the impacts. The problem with using QuickRisk is that it is too easy to overwrite an existing value with the new value and not realize your previous data had been lost.

There was another issue with uncertainties; how could we model an activity that had more than one uncertainty on it? For example, I might have a piling activity that had an uncertainty based on the productivity of the piling crew as well as an uncertainty based on the accuracy of the soils report. As the person building the model, I would have to calculate the combined uncertainty of these items.

The OPRA team, to their credit, did start development of a module called Risk Factors that worked similarly to the Risk Register and allowed multiple uncertainties to be added to one (or many) activities. Unfortunately, it was never really completed to a level that would make it useful. With Oracle abandoning new development on the tool, this module will never be completed.

In developing Safran Risk (SR), the team decided to combine the modelling of risk events and uncertainties into the module called Project Risks. This simple step makes building a what-if model much easier since all “risks” can be either risk events or uncertainties which can be turned on or off as required. Risk events and uncertainties can be applied to as many activities as required and the impact can be directly traced back to the item in the risk module.

To model a risk event, the probability will be set to less than 100% with (usually) an absolute impact; whereas for uncertainties, the probability will be set to 100% and the impact will be (usually) a relative percentage of the original duration. However, a user can mix and match probability and impact types to build the model as they need. It is also possible to set pre- and post-mitigated positions for all Risk Events and Uncertainties which allows for great flexibility when conducting what-if analysis of mitigation strategies.
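As a rough sketch of how these two input styles combine in a simulation (the base duration, probabilities and ranges below are invented for illustration, not Safran defaults):

```python
import numpy as np

rng = np.random.default_rng(7)
N = 20000
base_duration = 40.0  # deterministic duration in days (hypothetical)

# Uncertainty: 100% probability, impact relative to the original duration
# (-10% / +25% triangular range around the deterministic value)
uncertainty_factor = rng.triangular(0.90, 1.00, 1.25, N)

# Risk event: occurs in 30% of iterations, adding an absolute 5/10/20-day hit
occurs = rng.random(N) < 0.30
event_impact = occurs * rng.triangular(5, 10, 20, N)

sampled_duration = base_duration * uncertainty_factor + event_impact
print("P10/P50/P90:", np.percentile(sampled_duration, [10, 50, 90]).round(1))
```

The uncertainty shifts every iteration a little; the risk event shows up as a discrete jump in the 30% of iterations where it fires.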

In addition to Uncertainties and Risk Events (collectively called Standard Risks in the tool), the Project Risks module also allows the user to configure Estimate Uncertainties and Calendar Risks. Of course each risk can have a cost component that can be modelled within the tool as part of the integrated cost and schedule risk analysis approach that makes SR so powerful.

After Project Risks are entered in the system, the risks can be mapped to the activities using the Risk Mapping module. For anyone who tried to use the Risk Register module in OPRA on a large schedule and struggled with pages and pages of activities, the Risk Mapping module is much easier to use because activity layouts can be used to filter and group activities for easy mapping to the risks.

The really great feature is that SR will quickly show the total impact of risk events and uncertainties on any activity prior to running the risk analysis. This is very useful to find errors where a risk event or an uncertainty is out of range of realistic values. In OPRA, it was very easy to assign a risk duration out of all proportion to the original task duration (in fact, the way that ranging worked in the Risk Register model made it hard to get the durations correct without a great deal of fiddling).

So, what if a user just wants to enter three point estimates on each activity? This can be done one of two ways in Safran Risk:
  1. Three point estimates are still supported and can be entered directly on the activity (this is useful if the user plans to import an old OPRA file or just wants to continue their old process in newer software).
  2. You can create a one-to-one mapping between line items in the risk tab and activities. This is preferred because when the user is ready to incorporate risk events in the model, they can be easily added to the model. Creating this mapping is easier than it sounds because it is so easy to import and export to and from both the Risk and Risk Mapping modules.
Ultimately, if you really want to develop risk models that have both uncertainties and risk events (and most models do), Safran is much easier to use than OPRA. If you also need to develop integrated cost and schedule risk models (covered in my last blog post), Safran is the only game in town.

Is the P6-QA tool only relevant during your planning phase? Absolutely not!

Something we are asked is whether our P6-QA Tool is useful throughout the life cycle of your projects. Its use throughout the planning phase is obvious, of course; it helps with schedule development. Even after you’ve planned out your work though, the QA tool can help while you’re executing your plans. Let us examine a couple of examples of how.

Keep in mind - it is beneficial to equip your team with the best tools for the job! Work Smarter, Not Harder!

Scenario 1

During the execution of our schedule, our team loads found work using the P6-Loader. The team filled in all of the information required by the P6-Loader template. That’s definitely a good start, but it doesn’t by itself check schedule quality. The P6-QA Tool removes the burden of manual schedule and business process analysis by automatically identifying deficiencies in Primavera P6 schedules based on scheduling best practices, industry standards such as the Defense Contract Management Agency’s (DCMA) 14-point assessment, and user-introduced business process requirements.

Your schedule must be able to fulfill its designated functions. What functions? For example, our schedule needs to reflect the execution plan, receive regular updates and provide the basis for project schedule reports.

Where does the P6-QA tool fit in? You might think you’re done after updating your schedule, but are your updates complete and correct? All execution schedules require regular updates and then analysis after the updates. Imagine having to manually review your updates every day. You’d lose a lot of valuable time! The P6-QA Tool can assist with finding errors that can occur in the information uploaded to your schedule and the information entered in updates.

Imagine that during the update process, someone accidentally enters an incorrect completion date on an activity - this date is past the data date.

In the P6-QA Tool, the incorrect completion date would automatically be flagged by the DCMA09a check, Actual Date(s) After Project DD.

Another update entry is a logic change required by a change in the execution plan for a couple of work packages. The change is not entered correctly, and therefore when the logic is entered it produces negative float in the schedule.


In the P6-QA Tool, the negative float would automatically be flagged by the DCMA07 check, Negative Float.

Here’s a project level display example:

Activity level example:

The desktop icons make locating activities you need to address quick and easy. You can sort, filter and use the icons in group and sort to create detailed layouts, making it quick to identify the checks to revisit.

Another one of our tools, the P6-Loader, creates reports in the project notebook. Some examples of report information from the two checks that did not pass the criteria are below. The report contains all the checks that have passed and failed.

We looked at a couple of the DCMA checks the P6-QA Tool runs – there are plenty of other checks as well. The list below contains the Business Process Validations run in the P6-QA Tool. These are some common issues, quickly highlighted by the P6-QA Tool.

Scenario 2

Say your schedule is not progressing as planned and we need to create a couple of potential mitigation plans (what-if scenarios). With the P6-QA Tool, you can create copies of your schedule, apply your proposed fixes and test them. Once you’ve done this, you can address the findings to maintain schedule quality. You set the parameters around the checks.

Above is a display of the parameter settings for the two checks we visited in scenario 1. (We used defaults to run scenario 1.) Being able to set your own parameters means you check your specific requirements, not a preset list of parameters you need to sort through later for applicability.

As you can see, the P6-QA tool remains relevant during your entire project. Empower your team with the best tools for the job with Emerald’s P6-QA tool.

Why you should upgrade from Oracle Primavera Risk Analysis to Safran Risk - Part 4

I’m Ian Nicholson, VP Solutions at Emerald Associates. I have been working with Oracle Primavera Risk Analysis (OPRA) since 2001, when it was Pertmaster; Emerald was the exclusive Canadian distributor for Pertmaster until its acquisition by Primavera in 2006.

In this series of blogs, I will explain why I feel that all OPRA users should upgrade to Safran Risk (SR).

Part 4: Analysis by Exclusion

A few years ago, I was working with a large mining company on their risk process. One of their risk specialists mentioned that they were performing risk analysis by exclusion. Naturally, I was curious as to what this was and asked them to show me how it worked.

What they did was to take the risk model in OPRA, run it with all risks, and then turn off one risk at a time and rerun the model. Then they would compare each output either in the Distribution Analyzer or in Excel, so that they could report exactly how much impact each risk had on the project.

The Tornado Chart in OPRA automatically ranks the activities or risks by their impact, but the challenge is that while you can see which activity or risk has the highest impact, you cannot quantify what that impact is. The tornado is based on Pearson’s product-moment correlation, which gives a percentage ranking that project managers find difficult to interpret. So, to answer the question “What is the impact of this risk on the schedule?”, the OPRA user would:
  1. run the model with all risks turned on and record the results;
  2. manually turn off a risk, rerun the model, record the results;
  3. turn the risk back on;
  4. repeat steps 2 and 3 to sample all risks in the project.
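The four manual steps above are exactly the kind of repetitive loop a few lines of code can automate. A sketch using an invented three-risk register (not OPRA's or Safran's actual engine): sampling every risk once up front and then subtracting one risk's impacts at a time is equivalent to rerunning the model with that risk turned off, with the bonus that every run shares the same random draws.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10000

risks = {  # name: (probability, (min, likely, max) impact in days) - hypothetical
    "Piling productivity": (0.5, (5, 10, 20)),
    "Soils report wrong":  (0.3, (10, 15, 30)),
    "Permit delay":        (0.2, (20, 30, 60)),
}

# Sample each risk's per-iteration impact once
impacts = {}
for name, (p, (lo, ml, hi)) in risks.items():
    fires = rng.random(N) < p
    impacts[name] = fires * rng.triangular(lo, ml, hi, N)

total = sum(impacts.values())
p70_all_on = np.percentile(total, 70)

# "Turn off one risk at a time" = subtract that risk's impacts and re-read P70
for name in risks:
    p70_excluded = np.percentile(total - impacts[name], 70)
    print(f"{name}: excluding saves {p70_all_on - p70_excluded:.1f} days at P70")
```

The week of manual reruns collapses into one loop, and a forgotten "turn the risk back on" step is impossible by construction.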
In the case of this client, they said that it would generally take about a week to perform the exercise on a large construction schedule, more if there were changes in the model or after a mitigation strategy session. This is simply too time consuming.

Another issue: They also had to use the Risk Factors module of OPRA to make the analysis work, making them one of the only clients of ours who ever used this module. This module works by allowing uncertainties to be modelled similarly to the Risk Register. This allowed uncertainties of the same type (Risk Factors) to be grouped for analysis and tracking.

While I could see the value of the work being done, the effort required was much too high. One of the tenets of working efficiently is that if you need to do something over and over again, you should look at automating it. Computers are good at repetitive tasks; people generally are not. Automating repetitive tasks not only reduces time but also improves accuracy. For example, if you were to forget to turn Risk Factor #9 back on before turning off Risk Factor #10 for the 10th iteration of the analysis by exclusion effort, the project team might choose to act on the wrong Risk Factor because its impact was overstated.

I wondered if the VB capability in OPRA would assist in automating this task. While I didn’t use it much, I had heard that a lot of processes could be automated using the VB feature. So, I asked my friend and colleague Russell Johnson, the ex-CTO of Pertmaster, whether analysis by exclusion could be automated using the VB feature of OPRA. His answer was:

While it’s technically possible there are a few big challenges.

1. There was never a way with OPRA to create a native risk driver event (we did create a prototype module, risk factors, but this now means your model has new stuff added to it which can be confusing). So the first challenge is just creating and identifying a model with driving risk factors.

2. There is no way to natively store or process the results. Since you are doing something outside the normal of what OPRA does, you'd have to find a new way to store and display the results. You can't, for example, manipulate the built-in tornado chart.

3. Finally, the speed is an issue. For various reasons OPRA is slow compared with Safran, so whatever you do will take much longer (like days vs mins, if it can even do it).

The other big issue is that OPRA dropped VB support years ago, so the argument is moot.

The developers of Safran Risk (SR) saw OPRA users performing this tedious, manual task and decided to automate it. The results are amazing: 40+ hours of analysis in OPRA now takes minutes in SR, and the chances of making a mistake are virtually zero.

So how does analysis by exclusion work in SR? Let’s take a look.

First of all, the analysis can be performed in a single pass or in multiple passes.
  1. In single pass mode, it will run through each risk once, and show an output for all of the risks in the plan individually excluded (essentially the same exercise my client was performing manually).
  2. In multiple pass mode, the system will run the number of passes you specify, removing the top risk after each pass and leaving it turned off for every subsequent pass. This has the advantage of preventing large-impact risks from overshadowing lower-impact risks and will show the cumulative impact of mitigating each additional risk.
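The multiple-pass logic can be sketched as a greedy loop. The probability and impact values below are invented for three of the demo risk names; this is an illustration of the idea, not Safran's implementation:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10000

risks = {  # name: (probability, (min, likely, max) impact in days) - hypothetical
    "Testing Overrun": (0.6, (10, 25, 50)),
    "Design":          (0.5, (10, 20, 40)),
    "Ash Cloud":       (0.2, (15, 30, 60)),
}
impacts = {name: (rng.random(N) < p) * rng.triangular(lo, ml, hi, N)
           for name, (p, (lo, ml, hi)) in risks.items()}

remaining = dict(impacts)
total = sum(impacts.values())
p70 = np.percentile(total, 70)
while remaining:
    # Each pass: find the remaining risk whose exclusion most improves P70,
    # then leave it turned off for all subsequent passes (cumulative savings).
    best = max(remaining, key=lambda n: p70 - np.percentile(total - remaining[n], 70))
    total = total - remaining.pop(best)
    new_p70 = np.percentile(total, 70)
    print(f"Excluded {best}: a further {p70 - new_p70:.1f} days saved at P70")
    p70 = new_p70
```

Because each pass re-ranks against the model with earlier risks already removed, the per-risk savings differ from the single-pass figures even though the grand total matches.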
This is the result of the single pass analysis for all of the risks in the demo “Drone” project.

To show this better, let’s look at only one risk and one opportunity being removed compared to the original curve.
In this example, the black line represents all risks turned on (the “None” line) and the green line represents the same model with only the Testing Overrun risk turned off. This indicates that at P70, we would save 33 days if we could eliminate the Testing Overrun risk.

You can also see the effect of an opportunity in the purple line. The “Outsource” risk actually represents an opportunity: with the opportunity included in the model (black line), the project duration is shorter than when the opportunity is excluded (purple line).

However, the cumulative saving of removing multiple risks is not entirely clear, since turning off additional risks may or may not save the sum of the savings obtained individually. In this example, if we turned off the top 5 risks, we would expect to save 103d. However, it is not that simple since turning off one risk may mitigate another risk, particularly if there is correlation between the risks.

We could do this manually by turning off the top risk, running the analysis, running the next risk, etc. but this is even more manual work.

To understand the interaction of the top risks, we run the same analysis using multiple passes, turning off the top 5 risks cumulatively (i.e. pass one has only Testing Overrun turned off, pass two has Testing Overrun and Design turned off, etc.).

Again, looking at the P70 values:
  1. When the Testing Overrun risk is excluded, the schedule improves by 33 days (the same as the single pass).
  2. When Testing Overrun and Design are excluded, the schedule improves by another 33 days (a change from the 29 days of the single pass)
  3. When Testing Overrun, Design and Ash Cloud are excluded, the schedule improves by another 21 days (a reduction from the single pass result of 23 days)
  4. The total savings when we remove the top 5 risks is 103d. This is the same result as when we ran them individually, but the individual savings are different.
This is great information, since the project team can see what the effect of removing these risks would be on the project. But what about the costs of mitigation vs the cost of the risk?

In a previous blog, I wrote about the advantages of integrated cost and schedule analysis and here, through the power of the integrated model, I can look at the same information but on a cost basis rather than only a schedule basis.

Here is what our cost Analysis by Exclusion looks like for all the risks. Notice that there are a few more risks shown in this display, since there are now cost risks that do not have a schedule impact included.


Now we can tell our Project Manager that by removing the Design Specification risk and ensuring that the Outsource opportunity occurs, we can save 49 days and $295k. Note that any costs associated with the mitigation strategies will be included in the model.

Why you should upgrade from Oracle Primavera Risk Analysis to Safran Risk - Part 3

I’m Ian Nicholson, VP Solutions at Emerald Associates. I have been working with Oracle Primavera Risk Analysis (OPRA) since 2001, when it was Pertmaster; Emerald was the exclusive Canadian distributor for Pertmaster until its acquisition by Primavera in 2006.

In this series of blogs, I will explain why I feel that all OPRA users should upgrade to Safran Risk (SR).

Part 3: Integrated Cost and Schedule Risk Analysis

Many years ago, I was hired by a large multinational oil and gas company considering a new technology project. My role was to conduct a schedule risk analysis in preparation for a go/no-go decision from The Board. Another consultant was conducting a cost risk analysis in parallel to mine. The company expected us to each present our results but not to discuss the results with each other. The results would be independently used when considering whether or not the company was going to invest billions of dollars in their project.

The cost risk consultant and I discussed the issue, and since we agreed that cost and schedule risk were intrinsically linked, we looked at ways that we could combine the two analyses. Our options were limited:
  1. We could build a cost loaded schedule in OPRA and conduct a joint analysis in that tool. The challenge that we faced was that the cost line items didn’t line up with the project schedule in a way that would make this easy to do. Not only that, but only some of the cost risks were schedule related, not all of them. We would need to build a cost loaded schedule specifically for the analysis, which, while possible, would take a lot of time and effort.
  2. We could take the results of the schedule analysis and incorporate them into the cost analysis in @Risk. This could be done by creating a single point duration value or a simple time based distribution for certain cost items, like our indirect costs. For example, we could say that our site services (trailers, security, staff, etc) would be required for a P70 value of 60 months as opposed to the deterministic value of 48 months, but it lost a great deal of the dynamic aspects of the schedule analysis because the integration was done at a high level.
The oil and gas industry has largely followed Option 2 as the easier approach, but what we really needed was to develop the two models concurrently, so that uncertainties and risks in both cost and schedule impact each other in the Monte Carlo simulations. Changes in one can then affect the other immediately and visibly.

Why is it such an advantage to have both in the same model?

There is an old project manager’s joke that says, “You can have it good, you can have it fast, you can have it cheap – but you can only pick two of the three.”
  1. If a project is running late, there will be additional costs associated with indirect costs and in most cases there will be additional costs associated with direct costs as well.
  2. If the project team decides to spend money to save time (mitigate delays), the costs will likely increase.
  3. We may decide to mitigate cost risk by slowing the project and keeping a smaller, more efficient labor force or by moving work to a known fabricator.
A recent study of oil and gas megaprojects in Alberta showed that, on average, there was a 19% cost overrun and a 17% schedule overrun on these very expensive projects. It is certainly no surprise that these numbers are so closely correlated. Yet we make decisions on cost mitigation strategies and schedule mitigation strategies without insight into the impact that our change will make to our schedules and costs. On the oil and gas project that I mentioned earlier, cost and schedule mitigation strategies were considered entirely in isolation.

Figure 1: Integrated Cost and Schedule Risk Process

Often as project managers we get tunnel vision because we get too focused on schedule or cost at the expense of the other. For example, I worked on a turnaround project that had a $120M budget with a 35 day maintenance window. Management communicated that schedule was everything, cost was very much a secondary consideration (so much so that it wasn’t even monitored during the project), so the project team started burning overtime almost from the first shift to maintain the schedule. In the end we completed the work on time (to great fanfare) but months later, when all the invoices were in, we had spent $160M to do so. This caused great distress within the organization. A few heads rolled and the “Full speed ahead, damn the torpedoes” approach was never used within that organization again.

“Schedule pressure dooms more megaprojects than any other single factor” (E. W. Merrow)

What we really need to understand is not just the probability of achieving our end date or the probability of achieving our end cost, but the probability of achieving both concurrently. This is called the Joint Confidence Level (JCL). We want a solution that offers a 70% probability (for example) of achieving both cost and schedule and that will help us to understand the interdependencies between the two.
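A minimal sketch of how a JCL is read off paired cost and duration samples. The distributions and the 0.7 correlation below are assumptions for illustration; in a real analysis the pairs come straight from the integrated Monte Carlo iterations:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50000

# Correlated cost/schedule outcomes (hypothetical bivariate-normal model)
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.7], [0.7, 1.0]], N)
duration_months = 48 + 6 * z[:, 0]
cost_millions = 1000 + 150 * z[:, 1]

def jcl(d_target, c_target):
    """Probability of meeting BOTH the schedule and the cost target."""
    return np.mean((duration_months <= d_target) & (cost_millions <= c_target))

# Hitting each target at P70 individually does NOT give 70% joint confidence
d70 = np.percentile(duration_months, 70)
c70 = np.percentile(cost_millions, 70)
print(f"Joint confidence at the two P70 targets: {jcl(d70, c70):.0%}")
```

Unless cost and schedule are perfectly correlated, the joint confidence at the two marginal P70 targets is always below 70%, which is exactly why the JCL view matters.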

The AACE 57R-09 Integrated Cost and Schedule Risk Analysis Recommended Practice (found here) describes the process of combined cost and schedule risk analysis, and the process is well described in Dr. David Hulett’s book Integrated Cost-Schedule Risk Analysis (found here).

OK, so now we understand why we need to conduct cost and schedule risk together. But why Safran Risk?

Safran Risk is one of the only tools on the market that evaluates Cost and Schedule Risk together. The beauty of their approach is that costs can be modelled separately or together with activity durations. You can even apportion part of an estimate line item to a schedule activity but leave the rest independent. This gives a lot of flexibility in modelling the risks on a project and avoids the frustration of trying to resource load a traditional CPM schedule to match a cost estimate.

We can also truly understand the impact of our mitigation strategies best by evaluating cost and schedule risks together. Safran Risk makes turning risks on and off for what-if analysis simple and mitigation costs and schedule impacts can be easily modelled.

Finally, we can plot our cost vs schedule risk outcomes using a scatter plot to create a Joint Confidence Level diagram which shows us the probabilities of hitting our cost and schedule targets.

Figure 2: JCL @ 70% confidence – note that the deterministic cost and schedule probability (the star shape) is only 17%.

The Energy Facility Contractors Group (EFCOG), a self-directed group of contractors of U.S. Department of Energy facilities, recently undertook an evaluation of the commercially available tools that can conduct cost and schedule risk analysis together. The purpose of the EFCOG is to promote excellence in all aspects of operation and management of DOE facilities in a safe, environmentally sound, secure, efficient, and cost-effective manner through the ongoing exchange of information and corresponding improvement initiatives. You can see their report here.

Within this report, EFCOG chose Safran Risk as the best product for those working with Primavera P6 and second best for those working with Microsoft Project. Since most of my clients are working in P6 and need to conduct joint cost and schedule risk analysis, Safran is an obvious choice for those looking to better understand their projects.

Why you should upgrade from Oracle Primavera Risk Analysis to Safran Risk - Part 2

I’m Ian Nicholson, VP Solutions at Emerald Associates. I have been working with Oracle Primavera Risk Analysis (OPRA) since 2001, when it was Pertmaster; Emerald was the exclusive Canadian distributor for Pertmaster until its acquisition by Primavera in 2006.

In this series of blogs, I will explain why I feel that all OPRA users should upgrade to Safran Risk (SR).

Part 2: User Interface

In my last blog post, I discussed the technology used in OPRA vs SR. As I mentioned in that blog, the biggest complaint that we hear about OPRA is that the technology cannot support a large risk model. The second most common complaint that we hear is that the user interface is clunky and moving reports and layouts from one model to another in order to generate consistent outputs is tedious.

When OPRA (at the time called Pertmaster) was re-introduced as a Risk Analysis tool in 2001 (it had previously been a CPM scheduling tool), it had a pretty decent user interface (UI) for the time. It looked like a typical CPM scheduling tool that had an extra “Risk” menu for Risk inputs and extra items added under the “Reports” menu for Risk outputs.

For most risk users of the time, the UI was fine because Schedule Risk Analysis (SRA) was a new and relatively immature concept that was performed infrequently by relatively few people. These users would learn where to find the required items in the Risk and Reports menus. Hey, if you could master P3 or Artemis, Pertmaster should have been a walk in the park! Besides, compared to Primavera’s Monte Carlo add-on, Pertmaster’s UI was a big step forward in usability.

OPRA's Risk Menu

OPRA Reports menu

After nearly 20 years of SRA, things have changed significantly. We now have defined risk maturity models, organizations have made SRA part of their project management methodology, and project teams build their own risk models. More people need to be able to work in the tool and getting them up to speed quickly and easily requires a logical workflow inside the tool.

When Safran developed Safran Risk (SR), they drew on their experience with the original Pertmaster's development to modernize the new tool's user interface and make it easier for users to understand and learn. The first step they took was to replace the menu-based input model with a workflow-based one: SR uses a tab-based sequential workflow, and the user moves from left to right across the tabs as they build the risk model.

Safran Risk tab-based navigation.

The other item of note here is that all of the functionality of Safran’s scheduling tool is also here (a big advantage of building the risk engine on top of the scheduling package). Users can create layouts and filters and share them between users and projects, making application of standard processes and reports much easier than in OPRA.

Does an updated UI make the upgrade worthwhile? Not in and of itself, but it does make training new users much easier and makes it much less likely that a user will miss a step in the process. I personally find that Safran’s UI just makes everything easier. I still occasionally talk to P3 users who recall that its UI was the best ever, but I doubt that they would want to go back and work with it today. I’d love to have a classic sports car (say a TR6) in my garage, but I sure wouldn’t want to have to drive one to work in the Canadian winter!

In my next blog post, I’ll discuss the benefits of integrated cost and schedule risk analysis.

Why you should upgrade from Oracle Primavera Risk Analysis to Safran Risk - Part 1

I’m Ian Nicholson, VP Solutions at Emerald Associates. I have been working with Oracle Primavera Risk Analysis (OPRA) since 2001 when it was Pertmaster, as Emerald was the exclusive Canadian distributor for Pertmaster until their acquisition by Primavera in 2006.

In this series of blogs, I will explain why I feel that all OPRA users should upgrade to Safran Risk (SR).

Part 1: Technology

You might be wondering why I’m starting with such a boring topic. Don’t we really want to discuss bells and whistles and the cool factor items? The reality is that the most common complaints that I hear about OPRA are that it’s slow, it crashes a lot and it’s difficult to move data in and out. So if you’re an OPRA user today, it’s quite possible that you don’t want to change your process but you just need a more stable, more scalable platform.

When OPRA (at the time called Pertmaster) was re-introduced as a Risk Analysis tool in 2001 (it had previously been a CPM scheduling tool), most desktop scheduling tools used flat files to store their data. P6 had come out only a couple of years prior and had yet to be widely adopted; P3 ran on a Btrieve database, which was pretty much a flat-file based system. The idea of using a database engine backend was something that was still relatively new, so Pertmaster used the more common flat-file based structure.

For most risk users of the time, this didn’t matter because Schedule Risk Analysis (SRA) was a new and relatively immature concept that was performed infrequently by relatively few people on schedules that were generally built specifically for the risk analysis and had only a few hundred activities. Performance of such a system would be fast enough and the *.rsk output files would only be kept for short periods before being deleted or over-written. It was also unlikely that more than one user would need to access the file at a time.

The thing is, this is no longer the case. Over nearly 20 years of SRA, things have changed significantly. We now have defined risk maturity models, organizations have made SRA part of their project management methodology, and project teams build their own risk models. Standalone schedules for risk analyses are becoming rare and multiple users want to look at the model concurrently.

At the same time, schedules have become larger as more detail is built into the schedule through integration with other systems. Scheduling systems have become more powerful to compensate. Where a large Shutdown/Turnaround (STO) schedule 15 years ago would be 5,000 activities, a large STO schedule is now approaching 100k activities. Making a new dedicated schedule each time that a risk analysis is run (often quarterly) is simply no longer realistic.

Scheduling systems have evolved since 2001, and we need an SRA tool with the same enhancements. The most important of these is a backend database to store the projects, user data, risk data and risk outputs and to allow concurrent multi-user access. A 64-bit application platform is also required for large schedules.

Unfortunately for those of us who used and loved OPRA, development stopped in 2013 and the last patch was issued in 2015. The platform never got a database backend or moved to a 64-bit application, meaning that the system remains single-user and is limited to schedules under 10k activities. It simply hasn't evolved the way it needed to in order to stay relevant.

Safran had an advantage in developing their new Safran Risk module. They already had a world-class scheduling program, Safran Project, which runs on a SQL Server or Oracle database with a 64-bit application layer. In Europe and the Middle East, Safran Project is considered a competitor to P6. When Safran started development of SR in 2015, the Safran Project platform was a solid place to start, and enhancements have been released regularly since.

In order to speed up development of Safran Risk, Safran also had another advantage: they leveraged the knowledge of the original Pertmaster development team to guide the development and ensure that the lessons of Pertmaster were incorporated from the start in Safran Risk.

In the next blog post, we’ll talk about the Safran user interface and why a modern UI is important to your risk team.

Peace of Mind in the Cloud

The Cloud is a remarkable and innovative tool. It connects people from around the world, allows us to share fun vacation pictures and adorable videos of our pets, and offers a massive network that can process any and all data under the sun.

But as anyone who's ever had to use the back end of a Cloud program can tell you, it can quickly get complicated, leading to far too many headaches and sleepless nights trying to figure out what's going wrong. We could tell many horror stories about hours spent hunched over our computers just begging our systems to work the way we want - and I bet you could too!

So when one of our clients approached us looking to evaluate their options with Cloud software, we knew how they felt. At the time, they had been using on-premise servers, but were looking to upgrade to a database with more features at a reasonable price. They had considered Software as a Service (SaaS), but found the price of such an upgrade to be too steep to be a realistic option.

Luckily, we had previously agonized over the same decision and were able to offer our own Cloud server, EAI (hosted by OVH), as an alternative at a much lower price than the SaaS system Oracle offered.

Our client was also in sore need of maintenance services. Before they came to us, they had been operating, maintaining, and repairing their own cloud servers with only a handful of IT specialists who were unfamiliar with Primavera to begin with. As you can imagine it was slow, frustrating work, and when things broke down it could be days before they got everything up and running again. So when we suggested the EAI server, they were quick to take advantage of our services.

Now that they've moved to EAI, our client is able to enjoy the benefits of Cloud without the hassle that comes with sustaining their database. Emerald Associates handles the maintenance, repair, and operation of EAI, and offers on-site visits and training for our client's entire team so that they will have the familiarity with Primavera that is so crucial in today's business environment. We have been working together with this particular client for the past 3 to 4 years now to keep their servers in the Cloud running smoothly and efficiently, giving them the freedom to spend their time and energy on what really matters - their business.

Client Experiences #2 - No More Outdated Software

We've all worked with frustrating, outdated software. It's a pain to try and get everything to work the way you want it to, and the task is usually just too important or time-sensitive to take a break from. Everyone knows where that leads - yelling, cursing, or just slouching down in your chair in defeat, bested by technology once again.

During March 2016, our client decided that they were done working through this frustration. They wanted to update their data entry, schedule updating, and reporting processes so that they could streamline more effectively and remove these hassles from their everyday life. In addition to this, the refinery was looking for a way to develop a more effective cost control method for their turnarounds. Their current software just wasn't cutting it anymore. So they came to Emerald Associates for assistance - and luckily, we knew just the right programs to meet their needs.

Using P6-Loader, TAPS, EP-Datawarehouse and EP-Dashboard, Emerald Associates was able to give our client the software they deserved. No more despairing over yet another system crash or panicking over the disappearance of important data - now they had software that they could work with.

Thanks to Emerald's P6-Loader, our client was able to drastically cut down on the amount of time they spent poring over data. Automatic schedule updates went from taking hours to a matter of minutes, and custom dashboards and green-ups could now be created and automatically updated using EP-Datawarehouse and EP-Dashboard. This made the whole process move a lot smoother and much faster. Our client drastically reduced their need for manual entry, cutting down on errors and saving yet more time. The P6-Loader was a big step up from their previous way of managing things, not to mention significantly faster and far less likely to be tear-inducing.

This client still uses P6-Loader, TAPS, EP-Datawarehouse and EP-Dashboard to this day. No more outdated software for them - from now on, it's smooth sailing.

Client Experiences #1 - Massive Upgrade

When I first started working with our new client, I started out as a general trainer for the company’s employees. Our work began with typical P6 stuff, nothing new or especially exciting, but it was the start of a longer, more involved relationship with our client. I started helping them with turnarounds back in 2013 and I've been doing turnarounds with them every year since. I recently finished my 6th turnaround with the company - an 11-12 week long process that honestly felt a lot longer than it was. Due to a problem organizing the order of units, we ran overtime, and that was unfortunately just one of the many issues we had to deal with during that turnaround.

As is often the case, a good amount of the complications we faced were unintentionally self-inflicted. Our client runs under an alliance contract umbrella with another organization that controls their project management and general processes. This organization had decided to do a major upgrade to P6 just a few weeks ahead of the turnaround execution. This naturally caused a lot of complications, as the workers involved in the turnaround had to do a lot of scrambling to figure out the bugs in the untested upgrades while simultaneously dealing with the turnaround itself, which was no easy task. On top of this, the upgrade to P6 wasn't just a standard upgrade - it was a move from version 6.2 to version 17, which is a big jump on any given day, but right before a turnaround... It was disastrous. There were all sorts of issues, including considerable trouble upon first log-in, and it created a lot of stress - way more than on a typical turnaround! Units were in shutdown, people were pulling 12 and a half hour shifts, the site was an hour away from where most of the personnel were stationed, IT issues were causing immense frustration - it seemed like everything that could go wrong did.

Now, I've been in quite a few panicked, rushed environments over my 8 years of turnaround assistance, and this could easily have been one of them, but luckily the majority of the schedulers dealt with it very well, keeping their heads despite the setbacks we faced. And as for me - I went in with my usual mentality: get it done. So despite the constant uphill battle, we managed to pull everything together and get through the turnaround with our sanity intact. Overall, it wasn't the easiest turnaround I've ever been a part of, but complications are part of the job, and I'm happy to say that another yearly turnaround with our client went by successfully - if maybe a little bumpier than usual!

P6-QA in the Real World

Before becoming an implementation specialist at Emerald Associates, I was a project manager and P6 administrator in a state government transportation agency for several years. I was responsible for managing 50-60 Primavera P6-EPPM (Web) project schedules and monitoring them to schedule completion. These projects were for the engineering and design of highway projects including tasks such as road maintenance, new road construction, bridge maintenance, and signals upgrades.

Plan Development - Kick-off meeting

To put together the project's plans, we started with a kick-off meeting to verify the full scope. These projects typically ranged from 500-1500 tasks and ran for 24 to 60 months depending on funding each year and priority. At the kick-off meetings, with a hard copy template schedule in hand, each department head would discuss their role in the project and request adjustments to the schedule accordingly. As the project manager during these meetings, I took note of any deletions, additions, relationship changes, duration changes, etc. required to the schedule and with some project teams we made the changes directly in P6-EPPM in a Reflection Project. The schedule changes from the kick-off meetings ranged from minor to significant. Once I made the changes to the project schedule, a 'final' draft was sent to all the team members who participated in the kick-off meeting so we could get comments and approvals in time for our submission deadlines. If no comments or changes were made, the 'final' draft schedule was accepted and the project moved ahead.

Monthly Updating

Progress on the projects was done on an ongoing basis, with scheduling being done nightly. Project updates including scope changes, adding new tasks, removing unnecessary tasks, and rearranging tasks that needed logic changes were done when needed, via email from the initiating department. I would make changes on a Reflection Project and send the new schedule out for approval, if the float remained positive. If the changes caused the project to fall behind or if a large setback was identified, a schedule review meeting would be held with all the main players in the project. At the meeting, the key players in the schedule's creation and project manager would all gather and review the schedule for the project. Changes would be done on the fly in a Reflection Project and re-scheduled during the meeting, when possible. If many significant schedule changes needed to be made I would note these changes and complete them after the meeting so we didn't waste time during the meeting itself and I'd send out the new schedule for the project team to review and approve again.

Now theoretically the project schedule should have been fully reviewed by everyone involved. Unfortunately, this was not always the case and errors were commonly found within the updated schedule as the project progressed. There were a few reasons for this: Sometimes as a result of the dissolution of various activities, one activity would be 'overloaded' with unnecessary relationships and odd relationship types, often to the same activities, which would impact the schedule calculations. Also, periodically, there were problems with added activities that may have had relationships that were not added or added incorrectly or perhaps a duration was added incorrectly. Sometimes, a new activity ID would be entered incorrectly without notice. In essence, there were any number of things that could have negatively impacted our schedule quality. If only I had a tool available to check the nuances of the schedule for me and flag them, so I would know where to look. Little did I know there was a tool out there that would have saved me countless hours reviewing and analyzing this schedule.

If I had had the P6-QA tool to help in analyzing my schedule after changes were made and before the schedule was sent out, I could have sent out a schedule that would have automatically been checked for logic, logic types, missing codes on the activities, activity ID format and other business process checks that we could have created specifically for our needs. The use of P6-QA would have cut down the time it took me to send out revised schedules from several days to less than one.
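The kinds of checks described above lend themselves to automation. As a toy sketch (not P6-QA itself; the activity-ID format and data shape are assumptions for illustration), business-process checks over a schedule might look like this:

```python
import re

# Assumed house format for activity IDs, e.g. "RD1020" - two letters, four digits.
ID_PATTERN = re.compile(r"^[A-Z]{2}\d{4}$")

def check_schedule(activities):
    """activities: list of dicts with 'id', 'predecessors' and 'successors'.
    Returns (activity id, problem description) pairs for each issue found."""
    issues = []
    for act in activities:
        if not ID_PATTERN.match(act["id"]):
            issues.append((act["id"], "activity ID does not match the required format"))
        if not act["predecessors"] and not act["successors"]:
            issues.append((act["id"], "activity has no relationships (open end)"))
    return issues

acts = [
    {"id": "RD1020", "predecessors": ["RD1010"], "successors": ["RD1030"]},
    {"id": "bridge-1", "predecessors": [], "successors": []},  # bad ID, no logic
]
for act_id, problem in check_schedule(acts):
    print(f"{act_id}: {problem}")
```

A real tool would run dozens of such rules (logic types, missing codes, duration sanity) and, as described above, could be configured to a given organization's business processes.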

Having P6-QA there to act as my safety net on the project changes as they were made would have been extremely helpful to ensure the changes made sense and did not negatively impact other parts of the schedule. I could have run the P6-QA check while everyone was in the meeting, let it pinpoint possible problems, and then discuss those issues with the team immediately. This alone would have saved me multiple e-mails back and forth after I analyzed the schedule after the team meeting.

I am positive buying P6-QA would not only save any company time and money, but would also help them produce quality P6 schedules.

P6 Caching - Not Ready for Prime Time

We have been using P6 v18.x with several clients and have seen some differing behavior related to caching. It appears the problem may have started as early as P6 v16.x. These clients are in varying environments (P6 Oracle SaaS, EAI hosted, and on-premise): in short, anywhere Oracle Cloud Connect is utilized.

We were excited to see the new form of caching that appeared in v17.x. We have clients with poor internet access and P6Web is not adequate for their needs - they need P6 Client. Everyone knows that P6 Client is very chatty and needs good bandwidth to work properly, so the idea that we can cache data and do heavy lifting on our desktop rather than on the server far away was great.

P6 caching works by copying data from the main P6 database to a SQLite database that is installed on the local machine - it is essentially a filtered replication process.
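The idea of filtered replication can be illustrated with a toy example. The table and column names below are simplified stand-ins, not P6's real cache schema: only the rows belonging to the user's open projects are copied from the server database into a local SQLite file.

```python
import sqlite3

def build_cache(server_conn, cache_path, open_project_ids):
    """Copy only the rows for the currently open projects from a 'server'
    database into a local SQLite cache - a filtered replication sketch."""
    cache = sqlite3.connect(cache_path)
    cache.execute("CREATE TABLE IF NOT EXISTS task "
                  "(task_id INTEGER PRIMARY KEY, proj_id INTEGER, name TEXT)")
    placeholders = ",".join("?" * len(open_project_ids))
    rows = server_conn.execute(
        f"SELECT task_id, proj_id, name FROM task WHERE proj_id IN ({placeholders})",
        open_project_ids).fetchall()
    cache.executemany("INSERT OR REPLACE INTO task VALUES (?, ?, ?)", rows)
    cache.commit()
    return cache

# Simulated server database (in memory) with tasks in two projects.
server = sqlite3.connect(":memory:")
server.execute("CREATE TABLE task (task_id INTEGER PRIMARY KEY, proj_id INTEGER, name TEXT)")
server.executemany("INSERT INTO task VALUES (?, ?, ?)",
                   [(1, 100, "Mobilize"), (2, 100, "Excavate"), (3, 200, "Pile")])

cache = build_cache(server, ":memory:", [100])
print(cache.execute("SELECT COUNT(*) FROM task").fetchone()[0])  # prints 2
```

The failure modes below are easy to imagine in such a design: if the local copy and the server fall out of sync, different users see different data until the cache is rebuilt.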

Unfortunately, we have seen several different problems:

1. Data getting lost - this was related to User Defined Field (UDF) data on the project level seeming to disappear. The user would go in and add data to the UDF. Once the user logged out the data would clear out of the UDF and not be there when the user logged in again.

2. Data appears for one user and not another, despite refreshing (F5) and/or hard refreshing (Shift F5):

1. We first saw this with our CAPPS tools that submit updates through the Primavera Webservices; it would show up fine for one user, but not for another. When we investigated, we found that the user who saw the data properly was connected using the normal client-server connection while the user connected with Cloud Connect couldn't see the updates. Once the Cloud Connect cache service was turned off, the CAPPS tools submissions appeared as they should.

2. Another issue with one of our clients was with filters. One user would be able to see all values displayed in the filter and another user would see all the field criteria, but no data values in the criteria.

3. Calendars not taking updates - when changing some calendar exceptions, the schedule was not recognizing the changes. A client user added a new project calendar that was copied from an existing calendar. Changes would be made and saved, but when the user scheduled the project and went back into the new project calendar, the calendar had reverted to the copied calendar values. When the user logged out and back in, the calendar would update and show the proper values.

4. P6 caching can chew up local hard drive space and memory every time you connect to a different database alias or if you have large numbers of baselines on the project because it downloads all global data and baselines of open projects to the local machine. In some cases, the workstation would run out of memory or hard drive space, causing P6 to lock up and crash.

So for clients who work in a multi-user environment, or who have large projects with many baseline projects, we have decided to turn off caching except in the extreme cases where the P6 user has very poor internet. We will see how this goes.

If you find that you are having the same issues and want to turn off caching for the time being, you can do the following:

  1. Click on Database.
  2. Select Database.
  3. Click the Configure... button.
  4. Click Next.
  5. Un-check the box next to “Enable Client-side Cache”.
  6. Click Next.
  7. Click Finish.
  8. Log in as usual.

Are You Importing Unwanted RISK TYPES?

Remember our good old friend the POBS Table? Well, we have a new friend in town that is introducing itself to our database in the form of the RISKTYPE table. We have discovered numerous clients are importing XER files to their database that include a large number of Risk Categories, sometimes tens of thousands of them. No one knows where they originated, but they are multiplying and wreaking havoc.

The problem comes when an export file is created with unwanted Risk Types and imported into another database, creating more values in the destination database. The destination database could then share its large number of Risk Types with yet another database. Each time the RISKTYPE table is passed along, it grows, spreading and infecting more and more databases.

But is this really a big deal? Yes, it is! There are typically two issues with the Risk Categories:

  1. The table contains circular references which cause P6 to crash.
  2. Gibberish characters appear in huge numbers of Risk Categories and the client has no idea where they came from.

According to the Primavera Knowledge Base, the circular reference occurs when a parent risk_type_id in the RISKTYPE table references the parent_risk_type_id of one of its own child risk_type_ids, creating a reference back to itself. When Risk Categories are created, each risk category can have associated child risks (a parent/child hierarchy). For example:

In the database, each record in the RISKTYPE table has a risk_type_id and a parent_risk_type_id, where risk_type_id is the primary key and parent_risk_type_id is the risk_type_id of the immediate parent. If a record's parent_risk_type_id points to one of its own children, the parent chain loops back on itself and an error results.
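Given those two columns, records sitting on a looping parent chain can be flagged before an XER is imported. A small illustrative check (a sketch, not an Oracle-provided utility):

```python
def find_circular_risk_types(rows):
    """rows: iterable of (risk_type_id, parent_risk_type_id) pairs.
    Returns the ids that sit on a parent chain that loops back on itself."""
    parent = dict(rows)
    bad = set()
    for start in parent:
        seen = set()
        node = start
        while node is not None and node in parent:
            if node in seen:      # revisited a node: the parent chain loops
                bad.update(seen)
                break
            seen.add(node)
            node = parent[node]
    return bad

# 10 is recorded as the parent of 11, but 11 is also the parent of 10:
rows = [(10, 11), (11, 10), (12, None), (13, 12)]
print(sorted(find_circular_risk_types(rows)))  # prints [10, 11]
```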

The second issue is caused by an invalid special character (boxes, diamonds and other non-standard characters in the expected language) in the RISKTYPE table. This was the case with our client, who had unknowingly imported many of these characters into their database.

This error is a lot like the POBS issue from a few years ago - an XER may contain tens of thousands of these records. This slows import performance and spreads the gibberish characters to everyone who imports the XER. Oracle has acknowledged that corrupt data can be imported from XER files received from external sources, and over time, this corrupt data can cause performance issues with both export and import of XER files. The creation of a RISKTYPE removal utility is being considered for a future release.

In the meantime, there are several workarounds available for the problem.

  1. Delete the invalid data from both the source and destination databases. Re-export the XER file from the now clean database and use the updated XER to import into the destination database. Many clients call this “scrubbing an XER”.
  2. Request an XML file and skip the risk categories during the import. Note that the XML files are much larger than the equivalent XER files and take much longer to import, so if you are transferring data on a regular basis, XML is probably not a realistic option.
  3. Remove the RISKTYPE data from the XER file with a text editor and resave. This is time consuming when there are thousands of records. There is also a risk that information other than the RISKTYPE data could be inadvertently removed, corrupting the XER, in which case Oracle will not provide support. Our recommendation is DO NOT DO THIS.

All of these workarounds involve either time or risk and may not be practical.

Emerald has created a better solution in the form of a utility called the *XER Cleaner* that will easily and safely remove the RISKTYPE data from the XER, as well as the POBS records. The XER Cleaner is very easy to use. Simply launch the XER cleaner, browse to the XER you want to clean and click Run. The XER will be scrubbed of all POBS and RISKTYPE records and the clean file is saved in the same directory with the original file name appended with “-clean”. We have removed over 80,000 RISKTYPE records from an XER file in some cases. The best part about this utility is that it is free to our clients and the whole P6 community. No warranty is expressed or implied.
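For readers curious about the mechanics, the idea behind such a cleaner can be sketched in a few lines. This is an illustration only, not the XER Cleaner itself; it assumes the standard XER layout (tab-delimited lines where %T opens a table, %F lists its fields, %R carries each record and %E ends the file) and the Windows-1252 encoding commonly used for XER files.

```python
SKIP_TABLES = {"POBS", "RISKTYPE"}

def scrub_xer(in_path, out_path):
    """Copy an XER file, dropping the %T/%F/%R blocks of unwanted tables.
    Returns the number of lines removed."""
    skipping = False
    removed = 0
    with open(in_path, encoding="cp1252") as src, \
         open(out_path, "w", encoding="cp1252") as dst:
        for line in src:
            tag = line.split("\t", 1)[0].rstrip("\n")
            if tag == "%T":
                # a new table starts; skip it if it is on the unwanted list
                skipping = line.split("\t")[1].strip().upper() in SKIP_TABLES
            elif tag == "%E":
                skipping = False  # end-of-file marker is always kept
            if skipping:
                removed += 1
            else:
                dst.write(line)
    return removed

# Tiny demonstration file: one TASK record and one unwanted RISKTYPE record.
with open("sample.xer", "w", encoding="cp1252") as f:
    f.write("ERMHDR\t19.12\n"
            "%T\tTASK\n%F\ttask_id\n%R\t1\n"
            "%T\tRISKTYPE\n%F\trisk_type_id\n%R\t9\n"
            "%E\n")
print(scrub_xer("sample.xer", "sample-clean.xer"))  # prints 3 (lines removed)
```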

Contact us for your free copy. If you like this tool, check out our other P6 add ons.

P6-Scrubber - Keep Your P6 Clean!

Are you importing schedules into scrubbing databases, removing the unwanted data so it doesn't pollute your production database, re-exporting the schedule and then importing it into the database where it should have gone in the first place?

Do you have required specifications your contractors need to follow for their schedules, but have a hard time knowing whether those details are missing or incorrect until you've already imported them?

Do you want to ensure you are not importing POBS tables and corrupted RISKTYPE tables?

That's a lot of quality assurance to do! Maintaining your corporate data standards for dashboarding and reporting can be a full time job, especially when project teams have their own coding structures and requirements for their P6 Environment.

Don't worry; there's relief. Emerald has developed a new tool that will do all that work for you! We are now introducing the *P6-Scrubber*.

The P6-Scrubber is a configurable tool that resides outside your P6 environment, so you can run your XER files through it to vet them against your specific criteria prior to importing them into your P6 database. You can decide whether you want to keep or remove data at the global, project, and activity levels. We provide a clean import file once the P6-Scrubber is finished, as well as a convenient report of the data that was analyzed. You can give feedback to your contractors about their schedules in minutes, before you import their file into P6.

Once the scrubbed file is imported to P6, the P6-Scrubber also flags projects and activities with the results it has found. We also put the results report in a notebook, so you can see the analysis P6-Scrubber did without opening a separate report.

Using the new P6-Scrubber tool will save your team loads of time and help keep your P6 environment clean!

ZOHO-P6 Integration

When a new client request comes in, you can create a project from ZOHO CRM. For us, projects fall into several categories, and we have task list templates ready to be used to create tasks to charge to. Typically the first task list is Business Development. We kick off with that and assign the team working on the initiative. We can then send the project, the task list, the tasks and the resources into P6, in one step or two depending on how developed the task lists are. In this case we had a good idea of the scope of work and were able to put it together with 2 task lists, so we could integrate both the project and the WBS/tasks and resources right away. We use a really simple user interface right in ZOHO Projects to kick this off into P6, telling the integration to send both the project and the WBS over. Once that is done, we can go into P6 and start actioning the work. We also get a message letting us know if there were any issues with the integration, such as a duplicate project existing. If all is good, you should get a PASS message both for the project and for the WBS.
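As a rough illustration of that hand-off: the function, payload shape and messages below are invented for the sketch and are not the actual ZOHO or P6 Web Services API.

```python
# Hypothetical sketch only - names and messages are illustrative assumptions.
EXISTING_P6_PROJECTS = {"P-1001"}  # project ids assumed to already exist in P6

def send_to_p6(project, send_wbs=True):
    """Push a ZOHO project (and optionally its task lists / WBS) to P6,
    returning a PASS or FAIL message per object, e.g. for duplicates."""
    results = {}
    if project["id"] in EXISTING_P6_PROJECTS:
        results["project"] = "FAIL: duplicate project exists"
        return results
    results["project"] = "PASS"
    if send_wbs:
        results["wbs"] = "PASS" if project["task_lists"] else "FAIL: no task lists to send"
    return results

project = {"id": "P-2002", "task_lists": ["Business Development", "Delivery"]}
print(send_to_p6(project))  # both the project and the WBS report PASS
```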