Emerald Blog

Stories, Tips And Tricks From Our Team’s Experiences With Primavera Since 1995

12 May 2020

Why you should upgrade from Oracle Primavera Risk Analysis to Safran Risk - Part 6

Written by Ian Nicholson, P.Eng. - VP Solutions

I’m Ian Nicholson, VP Solutions at Emerald Associates. I have been working with Oracle Primavera Risk Analysis (OPRA) since 2001 when it was Pertmaster, Emerald being the exclusive Canadian distributor for Pertmaster until their acquisition by Primavera in 2006.

In this series of blogs, I will explain why I feel that all OPRA users should upgrade to Safran Risk (SR).

Part 6: Correlation (and the Central Limit Theorem)

Correlation is the mutual relationship between two or more events. In schedule risk analysis, it is used to indicate that one risk’s probability is likely to increase (or decrease) if another risk occurs. For example, if the first piling activity on a project goes badly, chances are that all the piling activities will go badly, and there is a further (weaker) chance that all the excavation activities will also go badly. That seems pretty obvious, and we need the ability to build it into our risk models.

Correlation has another purpose: to counteract a related topic called the Central Limit Theorem (CLT). There have been many articles written about the CLT, but to summarize the issue: if you have a number of similar-duration activities linked in series with the same uncertainties, then under purely random sampling, when one activity goes long another will go short and they will cancel each other out, leading to a loss of extreme values in the probabilistic analysis.

Some argue that in order to combat the CLT and its impact on the model, correlation is absolutely required, while others argue that so long as you have a high level schedule, the CLT is a non-issue and thus correlation is not required. Personally, I like working with the project team’s live schedule, which tends to be a Level 3 or Level 4 schedule, and there correlation is often a big issue. We’ll leave the discussion about which schedule level risk analysis should be performed on for another blog and concentrate on the CLT here.



Figure 1: The effect of the Central Limit Theorem on a one-activity schedule and a ten-activity schedule with the same overall deterministic duration and uncertainty distribution. The P0 duration is 80 days vs 90 days and the P100 duration is 120 days vs 110 days, respectively. The CLT has lopped 10 days off each end of our distribution in the case of the ten-activity model.
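If you want to see the CLT effect for yourself, here is a minimal sketch using Python and NumPy. It assumes a ±20% uniform duration uncertainty purely for illustration; the numbers are not taken from OPRA or Safran output.

```python
# Minimal sketch of the CLT effect from Figure 1: a 100-day scope modelled either
# as one activity or as ten 10-day activities in series, each with an assumed
# +/-20% uniform duration uncertainty.
import numpy as np

rng = np.random.default_rng(42)
iterations = 10_000

# One 100-day activity, sampled between 80 and 120 days.
one_activity = rng.uniform(80, 120, iterations)

# Ten independent 10-day activities, each sampled between 8 and 12 days.
ten_activities = rng.uniform(8, 12, (iterations, 10)).sum(axis=1)

for label, totals in [("1 x 100d", one_activity), ("10 x 10d", ten_activities)]:
    lo, p50, hi = np.percentile(totals, [0, 50, 100])
    print(f"{label}: min {lo:.0f}  P50 {p50:.0f}  max {hi:.0f}")

# The single activity spans roughly 80-120 days, while the ten-activity chain
# clusters near 100 days and its sampled extremes pull in well inside the
# theoretical 80-120 range, because independent overruns and underruns cancel.
```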

Applying correlation can correct the impact of the CLT by preventing the cancellation that occurs in a purely random sampling. Applying an 80% correlation between the risks leads to the following result:


Figure 2: The effects of applying correlation to correct the Central Limit Theorem. By applying a correlation to the uncertainties on the ten activity model, we can closely approximate the one activity model.
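Continuing the sketch above, here is one way to impose roughly 80% correlation between the ten activity uncertainties and recover the wider spread. It uses a Gaussian copula via SciPy, which is an assumption about the mechanics for illustration, not a statement about how Safran implements correlation internally.

```python
# Minimal sketch: correlate the ten activity uncertainties at ~80% with a
# Gaussian copula and observe the total spread widen back out (cf. Figure 2).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
iterations, n_acts, rho = 10_000, 10, 0.8

# Correlation matrix: 1.0 on the diagonal, 0.8 everywhere else.
corr = np.full((n_acts, n_acts), rho)
np.fill_diagonal(corr, 1.0)

# Draw correlated standard normals, map to uniforms, then to 8-12 day durations.
z = rng.multivariate_normal(np.zeros(n_acts), corr, size=iterations)
u = norm.cdf(z)
durations = 8 + 4 * u
totals = durations.sum(axis=1)

print(f"min {totals.min():.0f}  P50 {np.percentile(totals, 50):.0f}  max {totals.max():.0f}")
# With 80% correlation the total spread widens back toward the 80-120 day range
# of the single-activity model.
```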

So, given that we need to enter correlation in our model to reflect risks’ interdependencies and we also need to use correlation to combat the CLT, let’s look at how correlation is performed in OPRA and in Safran Risk.

In OPRA, correlation is assigned between activities. This means that in order to combat the Central Limit Theorem for my 10 activities, I need to link all 10 together in the correlation module. It’s possible but tedious, and it gets worse as I have more and more activities to be correlated. What’s confusing for the user is that they have to decide which activity is the parent and which activities are children in the correlation: do I choose the first one in the string, or the longest one, or something else? It’s not well documented. It gets even more difficult if I have multiple correlations between activities with multiple uncertainties or risks: without a PhD in statistics, how do I know what to correlate and how much the correlation should be?



Figure 3: OPRA Correlation Assignment

SR takes a different approach – risks are correlated, not activities. This makes a lot of sense: if you have an activity with multiple risks, the correlation can be applied to just the risk in question, not to the entire activity. Similarly, if a risk applies to multiple activities, the correlation is also automatically applied (but can be turned off if necessary).

There are actually a couple of ways to handle correlation in Safran Risk.

The first is to correlate the impact of a risk on all of the activities that it applies to – unchecking the box will apply 100% correlation to all of the activities that the risk is assigned to:



Figure 4: Safran single risk impact correlation

But what if we need to correlate multiple risks to each other? In OPRA, the approach of correlating activities makes this almost impossible; how would you figure out to what degree two activities with multiple risks should be correlated? SR has this covered – by correlating risks together, the appropriate impacts will be calculated automatically.


Figure 5: Safran Risk Correlation Mapping Matrix

Not only does this mapping allow the user to correlate risks to each other but it also allows the probability of occurrence of each risk and the schedule impacts of the risks to be correlated independently. Further, the Pre- and Post-Mitigated positions can be differently correlated. Cost risks can also be correlated (not shown above). 

The Safran approach makes understanding and applying correlation much easier. When correlation is clear and easy, the user is more likely to apply it, leading to better results (and hopefully less discussion of the Central Limit Theorem).

08 April 2020

Why you should upgrade from Oracle Primavera Risk Analysis to Safran Risk - Part 4

Written by Ian Nicholson, P.Eng. - VP Solutions

I’m Ian Nicholson, VP Solutions at Emerald Associates. I have been working with Oracle Primavera Risk Analysis (OPRA) since 2001 when it was Pertmaster, Emerald being the exclusive Canadian distributor for Pertmaster until their acquisition by Primavera in 2006.


In this series of blogs, I will explain why I feel that all OPRA users should upgrade to Safran Risk (SR).


Part 4: Analysis by Exclusion


A few years ago, I was working with a large mining company on their risk process. One of their risk specialists mentioned that they were performing risk analysis by exclusion. Naturally, I was curious as to what this was and asked them to show me how it worked.


What they did was to take the risk model in OPRA, run it with all risks, and then turn off one risk at a time and rerun the model. Then they would compare each output either in the Distribution Analyzer or in Excel, so that they could report exactly how much impact each risk had on the project.


The Tornado Chart in OPRA automatically ranks the activities or risks by their impact, but the challenge is that while you can see which activity or risk has the highest impact, you cannot quantify what that impact is. The tornado is based on Pearson’s product-moment correlation, which gives a percentage ranking that project managers find difficult to interpret. So, to answer the question of “What is the impact of this risk on the schedule?”, the OPRA user would:
  1. run the model with all risks turned on and record the results;
  2. manually turn off a risk, rerun the model, record the results;
  3. turn the risk back on;
  4. repeat steps 2 and 3 until all risks in the project have been sampled.
In the case of this client, they said that it would generally take about a week to perform the exercise on a large construction schedule, more if there were changes in the model or after a mitigation strategy session. This is simply too time consuming.


Another issue: They also had to use the Risk Factors module of OPRA to make the analysis work, making them one of the only clients of ours who ever used this module. This module works by allowing uncertainties to be modelled similarly to the Risk Register. This allowed uncertainties of the same type (Risk Factors) to be grouped for analysis and tracking.


While I could see the value of the work being done, the effort required was much too high. One of the tenets of working efficiently is that if you need to do something over and over again, you should look at automating it. Computers are good at repetitive tasks, people generally are not; automating repetitive tasks not only reduces time but also improves accuracy. For example, if you were to forget to turn Risk Factor #9 back on before turning off Risk Factor #10 for the 10th iteration of the analysis by exclusion effort, the project team might choose to act on the wrong Risk Factor because its impact was overstated.


I wondered if the VB capability in OPRA would assist in automating this task. While I didn’t use it much, I had heard that a lot of processes could be automated using the VB feature. So, I asked my friend and colleague Russell Johnson, the former CTO of Pertmaster, whether analysis by exclusion could be automated using the VB feature of OPRA. His answer was:


While it’s technically possible, there are a few big challenges.


1. There was never a way with OPRA to create a native risk driver event (we did create a prototype module, risk factors, but this now means your model has new stuff added to it which can be confusing). So the first challenge is just creating and identifying a model with driving risk factors.


2. There is no way to natively store or process the results. Since you are doing something outside the normal of what OPRA does, you'd have to find new ways to store and display the results. You can't, for example, manipulate the built-in tornado chart.


3. Finally, the speed is an issue. For various reasons OPRA is slow compared with Safran, so whatever you do will take much longer (like days vs mins, if it can even do it).


The other big issue is that OPRA dropped VB support years ago, so the argument is moot.


The developers of Safran Risk (SR) saw OPRA users performing this tedious, manual task and decided to automate it. The results are amazing; 40+ hrs of analysis in OPRA now takes minutes in SR and the chances of making a mistake are zero.


So how does analysis by exclusion work in SR? Let’s take a look.


First of all, the analysis can be performed in a single pass or in multiple passes.
  1. In single pass mode, it runs through each risk once and shows an output with each of the risks in the plan individually excluded (essentially the same exercise my client was performing manually; a conceptual sketch of this loop follows below).
  2. In multiple pass mode, the system runs for the number of passes you specify, removing the top risk identified in each pass before starting the next one, with the risks from previous passes left turned off. This has the advantage of preventing large-impact risks from overshadowing lower-impact risks and shows the cumulative impact of mitigating each additional risk.
This is the result of the single pass analysis for all of the risks in the demo “Drone” project.
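To make the single-pass idea concrete, here is a rough sketch of the loop on a toy risk register. The risk names, probabilities and impacts are invented for illustration, and this is not how Safran Risk is implemented internally; SR drives the real schedule and does all of this for you.

```python
# Conceptual sketch of single-pass analysis by exclusion on a toy model:
# run with all risks on, then re-run with each risk excluded in turn and
# compare the P70 finish. (Illustrative assumptions throughout.)
import numpy as np

rng = np.random.default_rng(1)
iterations = 5_000
base_duration = 300  # deterministic project duration in days

# Hypothetical risk register: occurrence probability and (min, likely, max) delay.
risks = {
    "Testing Overrun": (0.6, (20, 40, 80)),
    "Design":          (0.5, (10, 30, 60)),
    "Ash Cloud":       (0.3, (15, 25, 50)),
}

def simulate(excluded=None):
    """Return sampled finish durations with one risk (optionally) excluded."""
    totals = np.full(iterations, float(base_duration))
    for name, (prob, (lo, ml, hi)) in risks.items():
        if name == excluded:
            continue
        occurs = rng.random(iterations) < prob
        impact = rng.triangular(lo, ml, hi, iterations)
        totals += occurs * impact
    return totals

baseline_p70 = np.percentile(simulate(), 70)
for name in risks:
    saving = baseline_p70 - np.percentile(simulate(excluded=name), 70)
    print(f"Excluding {name!r} saves about {saving:.0f} days at P70")
```

The multiple-pass mode described above is the same loop, except that the top risk from each pass stays excluded in every subsequent pass.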

To show this better, let’s look at only one risk and one opportunity being removed compared to the original curve.
In this example, the black line represents all risks turned on (the “None” line) and the green line represents the same model with only the Testing Overrun risk turned off. This indicates that at P70, we would save 33 days if we could eliminate the Testing Overrun risk.


You can also see the effect of an opportunity in the purple line. The “Outsource” risk actually represents an opportunity. This shows that when the opportunity happens, the project duration is shorter (black line) than when the opportunity doesn’t happen (purple line).


However, the cumulative saving of removing multiple risks is not entirely clear, since turning off additional risks may or may not save the sum of the savings obtained individually. In this example, if we turned off the top 5 risks, we would expect to save 103 days. However, it is not that simple, since turning off one risk may mitigate another risk, particularly if there is correlation between the risks.


We could do this manually by turning off the top risk, running the analysis, then turning off the next risk, and so on, but this is even more manual work.


To understand the interaction of the top risks, we run the same analysis but using multiple passes, turning off the top 5 risks cumulatively (i.e. pass one has only Testing Overrun turned off, pass two has Testing Overrun and Design turned off, etc.).



Again, looking at the P70 values:
  1. When the Testing Overrun risk is excluded, the schedule improves by 33 days (the same as the single pass).
  2. When Testing Overrun and Design are excluded, the schedule improves by another 33 days (a change from the 29 days of the single pass)
  3. When Testing Overrun, Design and Ash Cloud are excluded, the schedule improves by another 21 days (a reduction from the single pass result of 23 days)
  4. The total savings when we remove the top 5 risks is 103 days. This is the same result as when we ran them individually, but the individual savings are different.
This is great information, since the project team can see what the effect of removing these risks would be on the project. But what about the costs of mitigation vs the cost of the risk?

In a previous blog, I wrote about the advantages of integrated cost and schedule analysis and here, through the power of the integrated model, I can look at the same information but on a cost basis rather than only a schedule basis.

Here is what our cost Analysis by Exclusion looks like for all the risks. Notice that there are a few more risks shown in this display, since there are now cost risks that do not have a schedule impact included.


Now we can tell our Project Manager that by removing the Design Specification risk and ensuring that the Outsource opportunity occurs, we can save 49 days and $295k. Note that any costs associated with the mitigation strategies will be included in the model.

09 March 2020

Why you should upgrade from Oracle Primavera Risk Analysis to Safran Risk - Part 3

Written by Ian Nicholson, P.Eng. - VP Solutions

I’m Ian Nicholson, VP Solutions at Emerald Associates. I have been working with Oracle Primavera Risk Analysis (OPRA) since 2001 when it was Pertmaster, Emerald being the exclusive Canadian distributor for Pertmaster until their acquisition by Primavera in 2006.


In this series of blogs, I will explain why I feel that all OPRA users should upgrade to Safran Risk (SR).


Part 3: Integrated Cost and Schedule Risk Analysis


Many years ago, I was hired by a large multinational oil and gas company considering a new technology project. My role was to conduct a schedule risk analysis in preparation for a go/no-go decision from The Board. Another consultant was conducting a cost risk analysis in parallel to mine. The company expected us to each present our results but not to discuss the results with each other. The results would be independently used when considering whether or not the company was going to invest billions of dollars in their project.


The cost risk consultant and I discussed the issue, and since we agreed that cost and schedule risk were intrinsically linked, we looked at ways that we could combine the two analyses. Our options were limited:
  1. We could build a cost loaded schedule in OPRA and conduct a joint analysis in that tool. The challenge that we faced was that the cost line items didn’t line up with the project schedule in a way that would make this easy to do. Not only that, but only some of the cost risks were schedule related, not all of them. We would need to build a cost loaded schedule specifically for the analysis, which, while possible, would take a lot of time and effort.
  2. We could take the results of the schedule analysis and incorporate them into the cost analysis in @Risk. This could be done by creating a single point duration value or a simple time-based distribution for certain cost items, like our indirect costs. For example, we could say that our site services (trailers, security, staff, etc.) would be required for a P70 value of 60 months as opposed to the deterministic value of 48 months, but this approach lost a great deal of the dynamic aspects of the schedule analysis because the integration was done at a high level.
The oil and gas industry has largely followed Option 2 as the easier approach, but what we really needed was to develop the two models concurrently, so that uncertainties and risks in both cost and schedule could impact each other in the Monte Carlo simulations and changes in one could affect the other immediately and visibly.
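To illustrate what developing the two models concurrently buys you, here is a minimal sketch in which time-dependent indirect costs are driven by the sampled duration inside each Monte Carlo iteration. All figures are invented for illustration and are only loosely inspired by the example above.

```python
# Minimal sketch of an integrated cost-schedule iteration: indirect costs are
# a function of the sampled duration in the SAME iteration, rather than being
# fed a single P70 duration after the fact. Numbers are illustrative.
import numpy as np

rng = np.random.default_rng(7)
iterations = 10_000

duration_months = rng.triangular(44, 48, 66, iterations)        # schedule uncertainty
direct_cost = rng.triangular(0.9e9, 1.0e9, 1.3e9, iterations)   # direct cost uncertainty
site_services_rate = 4e6                                        # assumed $/month indirect cost

total_cost = direct_cost + site_services_rate * duration_months

print(f"P70 duration: {np.percentile(duration_months, 70):.0f} months")
print(f"P70 cost:     ${np.percentile(total_cost, 70) / 1e9:.2f}B")
# Because cost and duration are sampled together, a long-duration iteration is
# automatically an expensive iteration -- the linkage that Option 2 loses.
```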


Why is it such an advantage to have both in the same model?


There is an old project manager’s joke that says, “You can have it good, you can have it fast, you can have it cheap – but you can only pick two of the three.”
  1. If a project is running late, there will be additional costs associated with indirect costs and in most cases there will be additional costs associated with direct costs as well.
  2. If the project team decides to spend money to save time (mitigate delays), the costs will likely increase.
  3. We may decide to mitigate cost risk by slowing the project and keeping a smaller, more efficient labor force or by moving work to a known fabricator.
A recent study of Oil and Gas megaprojects in Alberta showed that, on average, there was a 19% cost overrun and a 17% schedule overrun on these very expensive projects. It is certainly no surprise that these numbers are so closely correlated. Yet we make decisions on cost mitigation strategies and schedule mitigation strategies without insight into the impact that our change will make to our schedules and cost. On the oil and gas project that I mentioned earlier, cost and schedule mitigation strategies were considered entirely in isolation.

Figure 1: Integrated Cost and Schedule Risk Process

Often as project managers we get tunnel vision because we get too focused on schedule or cost at the expense of the other. For example, I worked on a turnaround project that had a $120M budget with a 35 day maintenance window. Management communicated that schedule was everything, cost was very much a secondary consideration (so much so that it wasn’t even monitored during the project), so the project team started burning overtime almost from the first shift to maintain the schedule. In the end we completed the work on time (to great fanfare) but months later, when all the invoices were in, we had spent $160M to do so. This caused great distress within the organization. A few heads rolled and the “Full speed ahead, damn the torpedoes” approach was never used within that organization again.

“Schedule pressure dooms more megaprojects than any other single factor” (E. W. Merrow)

What we really need to understand is not just the probability of achieving our end date or the probability of achieving our end cost, but the probability of achieving both concurrently. This is called the Joint Confidence Level (JCL). We want a solution that offers a 70% probability (for example) of achieving both cost and schedule and that will help us to understand the interdependencies between the two.

The AACE 57R-09 Integrated Cost and Schedule Risk Analysis Guideline (found here) describes the process of combined cost and schedule risk analysis, as does Dr. David Hulett’s book Integrated Cost-Schedule Risk Analysis (found here).

OK, so now we understand why we need to conduct cost and schedule risk together. But why Safran Risk?

Safran Risk is one of the only tools on the market that evaluates Cost and Schedule Risk together. The beauty of their approach is that costs can be modelled separately or together with activity durations. You can even apportion part of an estimate line item to a schedule activity but leave the rest independent. This gives a lot of flexibility in modelling the risks on a project and avoids the frustration of trying to resource load a traditional CPM schedule to match a cost estimate.

We can also best understand the impact of our mitigation strategies by evaluating cost and schedule risks together. Safran Risk makes turning risks on and off for what-if analysis simple, and mitigation costs and schedule impacts can be easily modelled.

Finally, we can plot our cost vs schedule risk outcomes using a scatter plot to create a Joint Confidence Level diagram which shows us the probabilities of hitting our cost and schedule targets.
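For those who like to see the arithmetic, the JCL is simply the fraction of iterations in that scatter plot that land inside both targets. Here is a minimal sketch with made-up samples and targets, reusing the integrated-model idea from the earlier example.

```python
# Sketch of computing a Joint Confidence Level from paired (duration, cost)
# samples. Targets and distributions are illustrative assumptions.
import numpy as np

def joint_confidence(durations, costs, duration_target, cost_target):
    """Fraction of iterations meeting BOTH the schedule and cost targets."""
    hits = (durations <= duration_target) & (costs <= cost_target)
    return hits.mean()

rng = np.random.default_rng(7)
durations = rng.triangular(44, 48, 66, 10_000)                       # months
costs = rng.triangular(0.9, 1.0, 1.3, 10_000) + 0.004 * durations    # $B, time-dependent

jcl = joint_confidence(durations, costs, duration_target=58, cost_target=1.35)
print(f"JCL at those targets: {jcl:.0%}")
# The JCL is typically lower than either marginal confidence level, which is
# why the deterministic point in Figure 2 sits at only 17%.
```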

Figure 2: JCL @ 70% confidence – note that the deterministic cost and schedule probability (the star shape) is only 17%.

The Energy Facility Contractors Group (EFCOG), a self-directed group of contractors of U.S. Department of Energy facilities, recently undertook an evaluation of the commercially available tools that can conduct cost and schedule risk together. The purpose of the EFCOG is to promote excellence in all aspects of operation and management of DOE facilities in a safe, environmentally sound, secure, efficient, and cost-effective manner through the ongoing exchange of information and corresponding improvement initiatives. You can see their report here.

Within this report, EFCOG chose Safran Risk as the best product for those working with Primavera P6 and second best for those working with Microsoft Project. Since most of my clients are working in P6 and need to conduct joint cost and schedule risk analysis, Safran is an obvious choice for those looking to better understand their projects.

02 March 2020

Why you should upgrade from Oracle Primavera Risk Analysis to Safran Risk - Part 2

Written by Ian Nicholson, P.Eng. - VP Solutions

I’m Ian Nicholson, VP Solutions at Emerald Associates. I have been working with Oracle Primavera Risk Analysis (OPRA) since 2001 when it was Pertmaster, Emerald being the exclusive Canadian distributor for Pertmaster until their acquisition by Primavera in 2006.


In this series of blogs, I will explain why I feel that all OPRA users should upgrade to Safran Risk (SR).


Part 2: User Interface


In my last blog post, I discussed the technology used in OPRA vs SR. As I mentioned in that blog, the biggest complaint that we hear about OPRA is that the technology cannot support a large risk model. The second most common complaint that we hear is that the user interface is clunky and moving reports and layouts from one model to another in order to generate consistent outputs is tedious.


When OPRA (at the time called Pertmaster) was re-introduced as a Risk Analysis tool in 2001 (it had previously been a CPM scheduling tool), it had a pretty decent user interface (UI) for the time. It looked like a typical CPM scheduling tool that had an extra “Risk” menu for Risk inputs and extra items added under the “Reports” menu for Risk outputs.


For most risk users of the time, the UI was fine because Schedule Risk Analysis (SRA) was a new and relatively immature concept that was performed infrequently by relatively few people. These users would learn where to find the required items in the Risk and Reports menus. Hey, if you could master P3 or Artemis, Pertmaster should have been a walk in the park! Besides, compared to Primavera’s Monte Carlo add-on, Pertmaster’s UI was a big step forward in usability.




OPRA's Risk Menu

OPRA Reports menu


After nearly 20 years of SRA, things have changed significantly. We now have defined risk maturity models, organizations have made SRA part of their project management methodology, and project teams build their own risk models. More people need to be able to work in the tool and getting them up to speed quickly and easily requires a logical workflow inside the tool.


When Safran developed Safran Risk (SR), they used their experience from the original Pertmaster’s development to modernize their new tool’s user interface and make it easier for users to understand and learn. The first step that they took was to change from a menu-based input model to a workflow-based model: SR replaces the menu-based system with a tab-based sequential workflow. The user moves from left to right as they build the risk model.



Safran Risk tab based navigation.


The other item of note here is that all of the functionality of Safran’s scheduling tool is also here (a big advantage of building the risk engine on top of the scheduling package). Users can create layouts and filters and share them between users and projects, making application of standard processes and reports much easier than in OPRA.


Does an updated UI make the upgrade worthwhile? Not in and of itself, but it does make training new users much easier and makes it much less likely that a user will miss a step in the process. I personally find that Safran’s UI just makes everything easier. I still occasionally talk to P3 users who recall that its UI was the best ever, but I doubt that they would want to go back and work with it today. I’d love to have a classic sports car (say a TR6) in my garage, but I sure wouldn’t want to have to drive one to work in the Canadian winter!


In my next blog post, I’ll discuss the benefits of integrated cost and schedule risk analysis.

24 February 2020

Why you should upgrade from Oracle Primavera Risk Analysis to Safran Risk - Part 1

Written by Ian Nicholson, P.Eng. - VP Solutions

I’m Ian Nicholson, VP Solutions at Emerald Associates. I have been working with Oracle Primavera Risk Analysis (OPRA) since 2001 when it was Pertmaster, as Emerald was the exclusive Canadian distributor for Pertmaster until their acquisition by Primavera in 2006.


In this series of blogs, I will explain why I feel that all OPRA users should upgrade to Safran Risk (SR).


Part 1: Technology


You might be wondering why I’m starting with such a boring topic. Don’t we really want to discuss bells and whistles and the cool factor items? The reality is that the most common complaints that I hear about OPRA are that it’s slow, it crashes a lot and it’s difficult to move data in and out. So if you’re an OPRA user today, it’s quite possible that you don’t want to change your process but you just need a more stable, more scalable platform.


When OPRA (at the time called Pertmaster) was re-introduced as a Risk Analysis tool in 2001 (it had previously been a CPM scheduling tool), most desktop scheduling tools used flat files to store their data. P6 had come out only a couple of years prior and had yet to be widely adopted; P3 ran on a BTrieve database, which was pretty much a flat file based system. The idea of using a database engine backend was something that was still relatively new, so Pertmaster used the more common flat file based structure.


For most risk users of the time, this didn’t matter because Schedule Risk Analysis (SRA) was a new and relatively immature concept that was performed infrequently by relatively few people on schedules that were generally built specifically for the risk analysis and had only a few hundred activities. Performance of such a system would be fast enough and the *.rsk output files would only be kept for short periods before being deleted or over-written. It was also unlikely that more than one user would need to access the file at a time.


The thing is, this is no longer the case. Over nearly 20 years of SRA, things have changed significantly. We now have defined risk maturity models, organizations have made SRA part of their project management methodology, and project teams build their own risk models. Standalone schedules for risk analyses are becoming rare and multiple users want to look at the model concurrently.


At the same time, schedules have become larger as more detail is built into the schedule through integration with other systems. Scheduling systems have become more powerful to compensate. Where a large Shutdown/Turnaround (STO) schedule 15 years ago would be 5,000 activities, a large STO schedule is now approaching 100k activities. Making a new dedicated schedule each time that a risk analysis is run (often quarterly) is simply no longer realistic.


Scheduling systems have evolved since 2001. What we need is an SRA tool that has the same enhancements. The most important of these is a backend database to store the projects, user data, risk data and risk outputs and to allow concurrent multi-user access. A 64-bit application platform is also required for large schedules.


Unfortunately for those of us who used and loved OPRA, development stopped in 2013 and the last patch was issued in 2015. The platform never got a database backend or moved to a 64-bit application, meaning that the system remains single user and is limited to schedules under 10k activities. It simply hasn’t evolved the way it needed to in order to stay relevant.


Safran had an advantage in developing their new Safran Risk module. They already had a world class scheduling program, Safran Project, available that runs on a SQL Server or Oracle database and a 64 bit application layer. In Europe and the Middle East, Safran Project is considered a competitor to P6. When Safran started development of SR in 2015, the Safran Project platform was a solid place to start and enhancements have been released regularly since.


In order to speed up development of Safran Risk, Safran also had another advantage: they leveraged the knowledge of the original Pertmaster development team to guide the development and ensure that the lessons of Pertmaster were incorporated from the start in Safran Risk.


In the next blog post, we’ll talk about the Safran user interface and why a modern UI is important to your risk team.

05 August 2019

Peace of Mind in the Cloud

The Cloud is a remarkable and innovative tool. It connects people from around the world, allows us to share fun vacation pictures and adorable videos of our pets, and offers a massive network that can process any and all data under the sun.

But as anyone who's ever had to use the back end of a Cloud program can tell you, it can quickly get complicated, leading to far too many headaches and sleepless nights trying to figure out what's going wrong. We could tell many horror stories about hours spent hunched over our computers just begging our systems to work the way we want - and I bet you could too!

So when one of our clients approached us looking to evaluate their options with Cloud software, we knew how they felt. At the time, they had been using on-premise servers, but were looking to upgrade to a database with more features at a reasonable price. They had considered Software as a Service (SaaS), but found the price of such an upgrade to be too steep to be a realistic option.

Luckily, we had previously agonized over the same decision and were able to offer our own Cloud server, EAI, hosted by OVH, as an alternative at a much lower price than the SaaS system Oracle offered.

Our client was also in sore need of maintenance services. Before they came to us, they had been operating, maintaining, and repairing their own cloud servers with only a handful of IT specialists who were unfamiliar with Primavera to begin with. As you can imagine it was slow, frustrating work, and when things broke down it could be days before they got everything up and running again. So when we suggested the EAI server, they were quick to take advantage of our services.

Now that they've moved to EAI, our client is able to enjoy the benefits of Cloud without the hassle that comes with sustaining their database. Emerald Associates handles the maintenance, repair, and operation of EAI, and offers on-site visits and training for our client's entire team so that they will have the familiarity with Primavera that is so crucial in today's business environment. We have been working together with this particular client for the past 3 to 4 years now to keep their servers in the Cloud running smoothly and efficiently, giving them the freedom to spend their time and energy on what really matters - their business.

31 July 2019

Client Experiences #2 - No More Outdated Software

Written by Ian Nicholson, P.Eng. - VP Solutions

We've all worked with frustrating, outdated software. It's a pain to try and get everything to work the way you want it to, and the task is usually just too important or time-sensitive to take a break from. Everyone knows where that leads - yelling, cursing, or just slouching down in your chair in defeat, bested by technology once again.

In March 2016, our client decided that they were done working through this frustration. They wanted to update their data entry, schedule updating, and reporting processes so that they could streamline their work more effectively and remove these hassles from their everyday life. In addition to this, the refinery was looking for a way to develop a more effective cost control method for their turnarounds. Their current software just wasn't cutting it anymore. So they came to Emerald Associates for assistance - and luckily, we knew just the right programs to meet their needs.

Using P6-Loader, TAPS, EP-Datawarehouse and EP-Dashboard, Emerald Associates was able to give our client the software they deserved. No more despairing over yet another system crash or panicking over the disappearance of important data - now they had software that they could work with.

Thanks to Emerald's P6-Loader, our client was able to drastically cut down on the amount of time they spent poring over data. Automatic schedule updates went from taking hours to a matter of minutes, and custom dashboards and green-ups could now be created and automatically updated using EP-Datawarehouse and EP-Dashboard. This made the whole process move a lot smoother and much faster. Our client drastically reduced their need for manual entry, cutting down on errors and saving yet more time. The P6-Loader was a big step up from their previous way of managing things, not to mention significantly faster and far less likely to be tear-inducing.

This client still uses P6-Loader, TAPS, EP-Datawarehouse and EP-Dashboard to this day. No more outdated software for them - from now on, it's smooth sailing.

29 July 2019

Client Experiences #1 - Massive Upgrade

Written by Mary Lynn Backstrom, PMP, PMI-SP, PMI-BA – Implementation Specialist

When I first started working with our new client, I started out as a general trainer for the company’s employees. Our work began with typical P6 stuff, nothing new or especially exciting, but it was the start of a longer, more involved relationship with our client. I started helping them with turnarounds back in 2013 and I've been doing turnarounds with them every year since. I recently finished my 6th turnaround with the company - an 11-12 week long process that honestly felt a lot longer than it was. Due to a problem organizing the order of units, we ran overtime, and that was unfortunately just one of the many issues we had to deal with during that turnaround.

As is often the case, a good amount of the complications we faced were unintentionally self-inflicted. Our client runs under an alliance contract umbrella with another organization that controls their project management and general processes. This organization had decided to do a major upgrade to P6 just a few weeks ahead of the turnaround execution. This naturally caused a lot of complications, as the workers involved in the turnaround had to do a lot of scrambling to figure out the bugs in the untested upgrades while simultaneously dealing with the turnaround itself, which was no easy task. On top of this, the upgrade to P6 wasn't just a standard upgrade - it was a move from version 6.2 to version 17, which is a big jump on any given day, but right before a turnaround... It was disastrous. There were all sorts of issues, including considerable trouble upon first log-in, and it created a lot of stress - far more than on a typical turnaround! Units were in shutdown, people were pulling 12-and-a-half-hour shifts, the site was an hour away from where most of the personnel were stationed, IT issues were causing immense frustration - it seemed like everything that could go wrong did.

Now, I've been in quite a few panicked, rushed environments over my 8 years of turnaround assistance, and this could easily have been one of them, but luckily the majority of the schedulers dealt with it very well, keeping their heads despite the setbacks we faced. And as for me - I went in with my usual mentality: get it done. So despite the constant uphill battle, we managed to pull everything together and get through the turnaround with our sanity intact. Overall, it wasn't the easiest turnaround I've ever been a part of, but complications are part of the job, and I'm happy to say that another yearly turnaround with our client went by successfully - if maybe a little bumpier than usual!

08 July 2019

P6-QA in the Real World

Written by Sue Hopkins - Implementation Specialist

Before becoming an implementation specialist at Emerald Associates, I was a project manager and P6 administrator in a state government transportation agency for several years. I was responsible for managing 50-60 Primavera P6-EPPM (Web) project schedules and monitoring them to schedule completion. These projects were for the engineering and design of highway projects including tasks such as road maintenance, new road construction, bridge maintenance, and signals upgrades.

Plan Development - Kick-off meeting

To put together the project's plans, we started with a kick-off meeting to verify the full scope. These projects typically ranged from 500-1500 tasks and ran for 24 to 60 months depending on funding each year and priority. At the kick-off meetings, with a hard copy template schedule in hand, each department head would discuss their role in the project and request adjustments to the schedule accordingly. As the project manager during these meetings, I took note of any deletions, additions, relationship changes, duration changes, etc. required to the schedule and with some project teams we made the changes directly in P6-EPPM in a Reflection Project. The schedule changes from the kick-off meetings ranged from minor to significant. Once I made the changes to the project schedule, a 'final' draft was sent to all the team members who participated in the kick-off meeting so we could get comments and approvals in time for our submission deadlines. If no comments or changes were made, the 'final' draft schedule was accepted and the project moved ahead.

Monthly Updating

Progress on the projects was done on an ongoing basis, with scheduling being done nightly. Project updates including scope changes, adding new tasks, removing unnecessary tasks, and rearranging tasks that needed logic changes were done when needed, via email from the initiating department. I would make changes on a Reflection Project and send the new schedule out for approval, if the float remained positive. If the changes caused the project to fall behind or if a large setback was identified, a schedule review meeting would be held with all the main players in the project. At the meeting, the key players in the schedule's creation and project manager would all gather and review the schedule for the project. Changes would be done on the fly in a Reflection Project and re-scheduled during the meeting, when possible. If many significant schedule changes needed to be made I would note these changes and complete them after the meeting so we didn't waste time during the meeting itself and I'd send out the new schedule for the project team to review and approve again.

Now theoretically the project schedule should have been fully reviewed by everyone involved. Unfortunately, this was not always the case, and errors were commonly found within the updated schedule as the project progressed. There were a few reasons for this: sometimes, as a result of the dissolution of various activities, one activity would be 'overloaded' with unnecessary relationships and odd relationship types, often to the same activities, which would impact the schedule calculations. Periodically, newly added activities had relationships that were missing or added incorrectly, or a duration that was entered incorrectly. Sometimes, a new activity ID would be entered incorrectly without notice. In essence, there were any number of things that could have negatively impacted our schedule quality. If only I had had a tool available to check the nuances of the schedule for me and flag them, so I would know where to look. Little did I know there was a tool out there that would have saved me countless hours reviewing and analyzing this schedule.

If I had had the P6-QA tool to help in analyzing my schedule after changes were made and before the schedule was sent out, I could have sent out a schedule that would have automatically been checked for logic, logic types, missing codes on the activities, activity ID format and other business process checks that we could have created specifically for our needs. The use of P6-QA would have cut down the time it took me to send out revised schedules from several days to less than one.
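For readers curious what such automated checks boil down to, here is a rough sketch of the idea. It is not the P6-QA tool itself; the column names, file name and ID format are assumptions for illustration only.

```python
# Rough sketch of automated schedule checks run over an exported activity list:
# malformed activity IDs, activities with no relationships, and missing codes.
import csv
import re

ID_PATTERN = re.compile(r"^[A-Z]{2,4}\d{4}$")   # e.g. an agreed format like "BRDG1020"
REQUIRED_CODES = ["Department", "Phase"]        # assumed mandatory activity codes

def check_schedule(path):
    issues = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            act = row["activity_id"]
            if not ID_PATTERN.match(act):
                issues.append(f"{act}: activity ID does not match the agreed format")
            if not row.get("predecessors") and not row.get("successors"):
                issues.append(f"{act}: activity has no relationships (open end)")
            for code in REQUIRED_CODES:
                if not row.get(code):
                    issues.append(f"{act}: missing mandatory code '{code}'")
    return issues

for issue in check_schedule("project_export.csv"):
    print(issue)
```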

Having P6-QA there to act as my safety net on the project changes as they were made would have been extremely helpful to ensure the changes made sense and did not negatively impact other parts of the schedule. I could have run the P6-QA check while everyone was in the meeting, let it pinpoint possible problems, and then discuss those issues with the team immediately. This alone would have saved me multiple e-mails back and forth after I analyzed the schedule after the team meeting.

I am positive that buying P6-QA would not only save any company time and money, but would also help them produce quality P6 schedules.

19 June 2019

P6 Scrubber - Keep Your P6 Clean!

Written by Sue Hopkins - Implementation Specialist

Are you importing schedules into scrubbing databases, taking out all the unwanted data you don't want to pollute your production database, re-exporting the schedule and then importing it into the database where it should have gone in the first place?

Do you have required specifications your contractors need to follow for their schedules, but have a hard time knowing whether those details are missing or incorrect until you've already imported them?

Do you want to ensure you are not importing POBS tables and corrupted RISKTYPE tables?

That's a lot of quality assurance to do! Maintaining your corporate data standards for dashboarding and reporting can be a full time job, especially when project teams have their own coding structures and requirements for their P6 Environment.

Don't worry; there's relief. Emerald has developed a new tool that will do all that work for you! We are now introducing the P6-Scrubber.

The P6-Scrubber is a configurable tool that resides outside your P6 environment, so you can run your XER and/or XML files through it to vet them against your specific criteria prior to importing them into your P6 database. You can decide whether you want to keep or remove data at the global, project, and activity levels. We provide a clean import file once the P6-Scrubber is finished, as well as a convenient report of the data that was analyzed. You can give feedback to your contractors about their schedules in minutes, before you import their file into P6.

Once the scrubbed file is imported to P6, the P6-Scrubber also flags projects and activities with the results it has found. We also put the results report in a notebook, so you can see the analysis P6-Scrubber did without opening a separate report.

Using the new P6-Scrubber tool will save your team loads of time and help keep your P6 environment clean!

19 June 2019

ZOHO-P6 Integration

Written by Nicole Jardin, P.Eng. - CEO

When a new client request comes in, you can create a project from ZOHO CRM. For us, projects come in several categories and we have task list templates ready to be used to create tasks to charge to. Typically the first task list is Business Development. We kick off with that and assign the team working on the initiative. We can then send the project, the task list, the tasks and the resources into P6, either in one step or two depending on the task list development. In this case we had a good idea of the scope of work and were able to put it together with two task lists, so we could integrate both the project and the WBS/tasks and resources right away. We use a really simple user interface right in ZOHO Projects to kick this off into P6 and tell the integration to send both the project and the WBS over. Once that is done, we can go into P6 and start actioning the work. We also get a message letting us know if there were any issues with the integration, such as a duplicate project already existing. If all is good, you should get a PASS message both for the project and for the WBS.

19 June 2019

Are You Importing Unwanted RISK TYPES?

Written by Sue Hopkins - Implementation Specialist

Remember our good old friend the POBS Table? Well, we have a new friend in town that is introducing itself to our database in the form of the RISKTYPE table. We have discovered numerous clients are importing XER files to their database that include a large number of Risk Categories, sometimes tens of thousands of them. No one knows where they originated, but they are multiplying and wreaking havoc.

The problem comes when an export file is created with unwanted Risk Types and imported into another database, creating more values in the destination database. The destination database then could share their large number of Risk Types to another database. Each time, the RISKTYPE table is passed along, it grows, spreading and infecting more and more databases.

But is this really a big deal? Yes, it is! There are typically two issues with the Risk Categories:
  1. The table contains circular references, which cause P6 to crash.
  2. Gibberish characters appear in huge numbers of Risk Categories and the client has no idea where they came from.
According to the Primavera Knowledge Base, the circular reference occurs when a parent risk_type_id in the RISKTYPE table references the parent_risk_type_id of a child risk_type_id, causing a circular reference to itself. When Risk Categories are created, each risk category can have an associated child risk (a parent/child hierarchy).

In the database, in the RISKTYPE table, each record has a risk_type_id and a parent_risk_type_id, where the risk_type_id is a primary key and the parent_risk_type_id is the risk_type_id of the immediate parent. If a parent's risk_type_id references the parent_risk_type_id of its child risk_type_id, then this causes a circular reference to itself and an error will result.
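For the technically inclined, detecting this condition is a simple walk up the parent chain. A minimal sketch, assuming the RISKTYPE rows are available as (risk_type_id, parent_risk_type_id) pairs:

```python
# Sketch of detecting the circular-reference condition described above.
def find_circular_risk_types(rows):
    """Return the ids whose parent chain loops back on itself."""
    parent = {rid: pid for rid, pid in rows}
    bad = set()
    for start in parent:
        seen = set()
        node = start
        while node in parent and parent[node] is not None:
            if node in seen:          # we have walked in a circle
                bad.add(start)
                break
            seen.add(node)
            node = parent[node]
    return bad

# Example: 10's parent is 20, and 20's parent is 10 -- a circular reference.
print(find_circular_risk_types([(10, 20), (20, 10), (30, None)]))   # -> {10, 20}
```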

The second issue is caused by an invalid special character (boxes, diamonds and other non-standard characters in the expected language) in the RISKTYPE table. This was the case with our client, who had unknowingly imported many of these characters into their database.

This error is a lot like the POBS issue from a few years ago - an XER may contain tens of thousands of these records. This slows import performance and spreads the gibberish characters to everyone who imports the XER. Oracle has acknowledged that corrupt data can be imported from XER files received from external sources, and over time, this corrupt data can cause performance issues with both export and import of XER files. The creation of a RISKTYPE removal utility is being considered for a future release.

In the meantime, there are several workarounds available for the problem.

  1. Delete the invalid data from both the source and destination databases. Re-export the XER file from the now clean database and use the updated XER to import into the destination database. Many clients call this “scrubbing an XER”.
  2. Request an XML file and skip the risk categories during the import. Note that the XML files are much larger than the equivalent XER files and take much longer to import, so if you are transferring data on a regular basis, XML is probably not a realistic option.
  3. Remove the RISKTYPE data from the XER file with a text editor and resave. This is time consuming when there are thousands of records. Also, the risk of doing this is that information other than the RISKTYPE could be inadvertently removed, corrupting the XER, in which case Oracle will not provide support. Our recommendation is DO NOT DO THIS.

All of these workarounds involve either time or risk and may not be practical.

Emerald has created a better solution in the form of a utility called the XER Cleaner that will easily and safely remove the RISKTYPE data from the XER, as well as the POBS records. The XER Cleaner is very easy to use. Simply launch the XER Cleaner, browse to the XER you want to clean and click Run. The XER will be scrubbed of all POBS and RISKTYPE records and the clean file is saved in the same directory with the original file name appended with “-clean”. We have removed over 80,000 RISKTYPE records from an XER file in some cases. The best part about this utility is that it is free to our clients and the whole P6 community. No warranty is expressed or implied.
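For those curious how such a clean-up works in principle, here is a minimal sketch of the idea. It is not Emerald's XER Cleaner code, the file encoding is an assumption, and you should only ever run something like this on a copy of your file.

```python
# Minimal sketch: drop whole tables (POBS, RISKTYPE) from an XER file.
# XER tables are delimited by %T (table start), %F (fields), %R (records)
# and the file ends with %E.
UNWANTED_TABLES = {"POBS", "RISKTYPE"}

def clean_xer(src_path, dst_path):
    skipping = False
    with open(src_path, encoding="cp1252") as src, \
         open(dst_path, "w", encoding="cp1252") as dst:
        for line in src:
            if line.startswith("%T"):                 # start of a new table
                table = line.split("\t")[1].strip()
                skipping = table.upper() in UNWANTED_TABLES
            elif line.startswith("%E"):               # end-of-export marker
                skipping = False
            if not skipping:
                dst.write(line)

clean_xer("project.xer", "project-clean.xer")
```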

Contact us for your free copy. If you like this tool, check out our other P6 add ons.

04 December 2018

Tracking Mandatory Codes via P6-QA

Written by Mary Lynn Backstrom, PMP, PMI-SP, PMI-BA – Implementation Specialist

Reports often require specific activity code assignments, and sometimes those are missing! Assigning these codes manually is time consuming and frustrating. But don't worry, help is available... with the P6-QA tool!

One of the numerous features available in the P6-QA tool is a quick check of a specific set of mandatory activity codes in P6 schedules, producing results that show which projects, and which activities within them, are missing the mandatory activity code assignments. Mandatory activity codes in the P6-QA Tool can now be set at the global or project level. In previous versions of the P6-QA Tool the Mandatory Codes check could only be used on Global Activity Codes. Quickly ensuring the mandatory codes are assigned helps ensure that the reports issued are correct.

Think this is difficult to set up and use? There's no catch - the process is easy.

 

First Step: Login to P6.

Set up the required codes as Mandatory codes in the P6-QA Tool. Sound difficult? Not at all! The activity codes that are assigned as Mandatory simply need an asterisk (*) added to the end of each activity code.

Let’s have a look at a small example.

We have one Global Activity Code set up as Mandatory (above).

We have two Project Level Mandatory Codes set up (above).

We have assigned some of the Mandatory Codes to some activities above. We need to run the P6-QA tool.

Above is the project file this example is using – the QA-P6-QA Last Run Date has been cleared to run the P6-QA Tool. Note the QA-CL – No Mandatory Code and the QA-CL- No Mandatory Global Code columns. These will be populated to indicate at the project level the status of the two checks. Our example is running just one project file – you can set up a QA – Frequency (see column above) for the P6-QA Tool to run on each file or manually run single or specific groups of P6 project files.

Above, the specific tolerance fields that can be set for these two checks have been included. You control the specific conditions flagged in the P6-QA Tool and at which level.

The layout above is grouping by activity type, showing the activity codes and their assignments from the P6-QA Tool check.

The user can filter, group and sort on the mandatory activity code values right in the Activities tab to produce a quick layout that directs them to the issues and facilitates quick correction. Use your specific project codes to create a layout that works for your business. Don't forget to save your layout for reuse. This is a simple example with 3 mandatory codes - set up the codes you require as mandatory, whether a few or many more.
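Conceptually, the check boils down to something like the following sketch. The data structures are assumptions for illustration, not the P6-QA implementation.

```python
# Tiny sketch of the naming convention described above: codes whose names end
# in "*" are treated as mandatory, and activities missing any of them are flagged.
activity_codes = ["Department*", "Phase*", "Area"]        # "*" marks mandatory
mandatory = [c.rstrip("*") for c in activity_codes if c.endswith("*")]

activities = {
    "A1000": {"Department": "Civil", "Phase": "Execute"},
    "A1010": {"Department": "Civil"},                     # missing Phase
}

for act_id, assigned in activities.items():
    missing = [c for c in mandatory if c not in assigned]
    if missing:
        print(f"{act_id} is missing mandatory code(s): {', '.join(missing)}")
```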

27 November 2018

P6 Tools – Part of Your Company Digital Transformation?

Written by Mary Lynn Backstrom, PMP, PMI-SP, PMI-BA – Implementation Specialist

Hearing a lot about ‘digital transformation’ lately? Digital transformation can be defined as the acceleration of business activities, processes, competencies and models to fully leverage the changes and opportunities of digital technologies and their impact in a strategic and prioritized way.

What elements are included in your Digital Transformation plan? Can your team afford to ignore new technological advancements? Planning a strategic and prioritized path forward to fully leverage changes and opportunities in digital technology can have a very positive return on investment. Digital transformation varies from company to company and can be a large project encompassing many different elements, depending on the organization and its current state.

Here's an example of leveraging the changes and opportunities new technologies can provide. Say your team uses Primavera P6 for project management, yet doesn't use P6-Loader, the P6-QA Tool, CAPPS or TAPS (barcode updating), or the P6-Calculator for efficiency gains - are you getting the best out of the tool? In many surveys conducted over the years by Oracle and Primavera, P6 is highly underutilized, with 100% of clients using P6 for scheduling, 60-70% for resource management and only 30-40% for cost management and earned value analysis.

Emerald Associates specifically designed our productivity tools to address pain points users commonly encounter when using P6, so you can do a better job of managing your projects and get more out of the project team:

P6-Loader:

The P6-Loader automates manual data entry through the secure import and export of most data elements between Primavera P6 and Microsoft Excel.

The P6-Loader allows for the controlled extract and upload of data at the global and project level and eases the challenge of manipulating your project data by leveraging the spreadsheet functionality of Excel.

The P6-Loader goes above and beyond the built-in import and export functionality in Primavera P6. It has also become a desired replacement for the outdated Primavera SDK (Software Development Kit). Now, with version 4 being cloud enabled, it is quickly becoming a desired tool to help with migrating to the cloud efficiently. With close to 1 billion transactions pushed through the system, it is making teams worldwide more productive.

P6-QA:

Emerald’s P6-QA tool removes the burden of manual schedule and business process analysis by automatically identifying deficiencies in Primavera P6 schedules based on scheduling best practices, industry standards, and user introduced business process requirements.

The P6-QA tool is unlike other third party Primavera schedule validation tools because it seamlessly integrates directly into Primavera P6 itself, allowing for the improvement of project management skills and scheduling quality in real time. The P6-QA tool can be run at a preset interval such as weekly or monthly, or on an as-needed frequency to maximize your P6 schedulers’ ability to self-critique and ensure effective quality control.

When your resident engineering team is your frontline reviewer, this tool makes their role much easier and more efficient, saving you a lot of time getting your contractors' schedules reviewed, critiqued and hopefully approved, so you know whether the project is still on track without an expensive team doing a lot of analysis. When you are reviewing schedules every few weeks on a project, the time savings are significant, as is the time freed up to address any issues on the project.

CAPPS:

CAPPS is our tool that lets remote users, who are likely not P6 experts, get project updates into P6 from wherever they are and whenever they have a moment. Waiting for updates once a month, or taking two weeks every month to get your portfolio updated, is a thing of the past. Real-time statusing becomes doable, with variances and deviations highlighted and injected into the schedule as they happen, making your forecasting, change management and claims avoidance much more effective.

TAPS:

TAPS, or the Turnaround Progressing System, uses barcode scanning technology to eliminate the need for lengthy manual data entry when updating your Oracle Primavera P6 schedules. Simply use a barcode scanner to find, start, update or cancel any activity within the schedule in two quick scans.


With TAPS, status updates can be done 75% faster while increasing your team's accuracy, since there's no chance of manual data entry errors. The first scan finds the task and the second scan starts, finishes, and statuses the task with any of the percent complete types available in Primavera P6.

The progressing flexibility along with the ability to calculate and report Earned Value at the resource assignment level makes TAPS a strategic enhancement for all Primavera P6 users.
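As a purely hypothetical illustration of the two-scan flow described above (this is not TAPS source code), the logic can be thought of as a tiny state machine: the first scan selects the activity, the second scan carries the action to apply to it.

```python
# Hypothetical sketch of a two-scan progressing flow; barcodes and actions
# are invented for illustration.
class TwoScanProgressor:
    def __init__(self):
        self.pending_activity = None

    def handle_scan(self, barcode):
        """First scan = activity ID; second scan = action (START, FINISH, 50PCT, ...)."""
        if self.pending_activity is None:
            self.pending_activity = barcode
            print(f"Selected activity {barcode}, waiting for an action scan")
        else:
            print(f"Applying '{barcode}' to activity {self.pending_activity}")
            # ...here the update would be written back to the P6 schedule...
            self.pending_activity = None

scanner = TwoScanProgressor()
for scan in ["PMP-1020", "START", "PMP-1020", "FINISH"]:
    scanner.handle_scan(scan)
```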

P6-Calculator:

The P6-Calculator performs calculations that are traditionally unavailable in Primavera P6.

P6-Calculator automatically performs calculations on data in P6 to present information to users that would otherwise be difficult for them to see, improving visibility and digestibility of complex P6 data and enabling better decision making.

All of the above tools are available on the Oracle Cloud Marketplace.

This is just one small example of an opportunity to leverage the changes and opportunities of digital technologies and claim the positive rewards of their impact. Help your team work smarter, not harder by empowering your team with the best tools!

01 November 2018

Oracle Prime – Cost Sheet and Viewing Budget Changes

Written by Mary Lynn Backstrom, PMP, PMI-SP, PMI-BA – Implementation Specialist

Oracle Prime uses the Cost Breakdown Structure (CBS) to track, manage and report costs in the Cost Sheet page. Costs in the Cost Sheet can be viewed in the base currency or the project currency.

Before we visit the Cost Sheet in Oracle Prime, let’s have a high level look at the CBS and Cost Categories.

The Cost Breakdown Structure (CBS)

A Cost Breakdown Structure (CBS) is a set of cost codes used to track, manage, and report costs related to a project. The CBS standardizes costs into categories that represent manageable cost sources (customized to your organization requirement) and a standard cost classification system. The CBS cost codes are classified into three types: expense, capital, or none. CBS codes are used in top-down and bottom-up cost planning and tracking.

Segment Definitions:

Use a segment definition to define the hierarchical depth of the CBS and how codes will be concatenated.

Segment definitions can be edited in the Cost Sheet as well as the CBS screen if need be.
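As a rough illustration of what a segment definition controls, the sketch below concatenates assumed segment values into a single CBS code. The segment names, lengths and separator are examples for illustration, not Oracle Prime defaults.

```python
# Illustrative sketch: a segment definition fixes the depth of the CBS and how
# segment values are concatenated into one cost code.
segments = [("Area", 2), ("Discipline", 3), ("Cost Type", 1)]   # (name, length)

def build_cbs_code(values, separator="."):
    """Pad/truncate each value to its segment length and join into one code."""
    parts = [str(v).zfill(length)[:length] for v, (_, length) in zip(values, segments)]
    return separator.join(parts)

print(build_cbs_code(["1", "CIV", "E"]))   # -> "01.CIV.E"
```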

You can create a custom CBS at the workspace and project level. When you create a CBS at the project level you can select the budget sources for your project, which can be edited from the Cost Sheet page up until the project is baselined. Our file (above) is already baselined, so we don't need to do this.

Cost Categories

A cost category is used to classify costs at a level more specific than a CBS code.
Cost Categories are customizable - above is a very simple set for our small example.

Overview of the Cost Sheet

The Cost Worksheet can be viewed by CBS Codes or Cost Categories. (CBS Codes shown above.)

You have access to a flat list display and a hierarchical display of CBS codes.

The image above is a Cost Worksheet by Cost Categories. Cost sheet columns can be customized to view only required information.

Budget Changes

A Budget Change request to increase the Direct Labor Budget by $50,000.00 has been created (above). This budget change is then submitted for approval.

Recalculation of the Cost Sheet shows our Budget Change submission in the column Pending Budget Changes as it has not yet been approved.

As projects progress, changes in scope, resource reallocation, funding additions and withdrawals, or other factors can affect original budgeted amounts. Oracle Prime tracks budget changes and budget transfers.

The Budget Changes page enables you to track modifications that affect the budget. Use this page to view all approved, pending, and rejected change requests including who approved or rejected the change request, what the change entails, and any additional comments about why the change request was necessary. Pending change request amounts are also visible on the project Cost Sheet page if pending budget fields are displayed.

A second Budget Change has been created/submitted and approved (BC10 above).

The Cost Sheet (recalculated above) shows the approved budget change and the budget change that has not yet been approved. The approved budget change also appears in the Budget Window, both as part of the current cost for direct labor and in the details for that budget line item.
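The roll-up behind those columns is straightforward. In the minimal sketch below, the $50,000 figure comes from the example above, while the original budget and the BC10 amount are invented; the data layout is an assumption for illustration.

```python
# Simple sketch of how the Cost Sheet figures roll up: only approved budget
# changes move the current budget, while pending ones are reported separately.
original_budget = 500_000.00                                   # assumed value
budget_changes = [
    {"id": "BC01", "amount": 50_000.00, "status": "Pending"},  # from the example above
    {"id": "BC10", "amount": 25_000.00, "status": "Approved"}, # assumed amount
]

approved = sum(bc["amount"] for bc in budget_changes if bc["status"] == "Approved")
pending = sum(bc["amount"] for bc in budget_changes if bc["status"] == "Pending")

print(f"Current budget:         {original_budget + approved:,.2f}")
print(f"Pending budget changes: {pending:,.2f}")
```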

Note: Please keep in mind that you must have the required security access to create/make changes to the Cost Sheet and Budget sheets.
