Data Science and AI Case Studies

ExxonMobil cut its data preparation time by an estimated 40% using IBM Analytics.
Wunderman Thompson more than doubled the predicted response rates for its clients’ marketing prospects using IBM Analytics.
Highmark Health reduced its AI development and deployment lifecycle from 12 months to six weeks using IBM Analytics.
Woodside Energy saves AUD 10M annually in employee costs through faster access to data with IBM Analytics.
Identify bias and fairness issues in your models with an IBM tool that helps ensure impartial customer treatment.
Govern your models to prevent drift over time with an IBM tool that manages model development through final deployment.
IBM Analytics automatically alerts you to model bias and builds a new model if bias is detected.

Data Science and AI

Extracting the lifeblood of AI at ExxonMobil

“Data is the new oil.” The quote goes back to 2006, credited to mathematician Clive Humby. Gartner’s Peter Sondergaard took it a step further, calling analytics data’s combustion engine.
While Artificial Intelligence (AI) still has that “new car smell,” it needs clean data in its fuel lines. The utility of data as a driver of digital transformation and, ultimately, as a cleaner-burning fuel for AI projects, comes only when it’s analyzed or extracted into meaningful narratives about the past, present and future. Otherwise, it can sit around the business in silos—uncollected, unorganized like unrefined crude, clogging up multiple cloud storehouses and repositories.
A decade ago, with a Ph.D. in Physics from MIT, Xiaojun Huang was working as a seismic interpreter for ExxonMobil in the Gulf of Mexico, figuring out what less-explored areas might hold promise for oil discovery. Reflecting on that mentally tedious process, she did a calculation and concluded that AI could have easily turned a grueling, year-long process of churning through 2D seismic maps, tectonic and historical data into a six-month play that would detail the potential payoff of new hydrocarbon fields.

While ExxonMobil’s AI aspirations have been high, like most large enterprise companies, they were facing obstacles along their journey.

ExxonMobil obtained a 40% savings by using IBM Analytics.

Data is siloed in hundreds, sometimes even thousands, of applications, making collecting and organizing data complex and time-consuming. Then there’s the skill set. While most specialists go deep on their subject matter, they can lack the toolset of practicing data scientists. Finally, just getting started with a whole new system can slow things down. AI begs for experimentation and somewhat visionary thinking. But it does require a process: involvement from a diverse set of stakeholders, and an agile approach that can take a small bite into one slice of the problem pie before attacking the entire meal.

Xiaojun, who had worked her way up to the position of senior advisor for the company’s Upstream Digital Transformation unit, could fully appreciate the challenge, and was in the perfect spot to help ExxonMobil succeed.  With the company’s multi-billion dollar investment in Guyana – a new offshore oil discovery – all eyes were on building a modern data platform that would enable AI and workflows that in turn could speed project development and more quickly achieve a return on the massive investment.

As the company had faced some challenges on how to apply AI to seismic interpretation, it turned to another company for help: IBM.

Xiaojun jumped at the opportunity, which arose from a chance meeting with one of IBM’s top data scientists, Vishnu Alavur Kannan, at the company’s Houston office.

What followed was a 12-month collaboration between seismic experts and the IBM Data Science and AI Elite team to modernize all of ExxonMobil’s data estates into one easy-to-access repository. Built on open source technologies, the repository lets experts access data across the company’s multicloud environment, helping them make decisions on a much faster time scale. In other words, any team member can collect data from any application and any source and make it available seamlessly through APIs.
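As a minimal illustration of that idea, the sketch below joins records from two siloed applications into one combined view. The fetch functions and field names are hypothetical stand-ins; in practice each would be an API call into the unified data platform.

```python
import pandas as pd

# Hypothetical stand-ins for two siloed applications; in practice each
# would be a REST API call against the unified data platform.
def fetch_well_logs():
    return pd.DataFrame({"well_id": [101, 102], "depth_m": [3200, 2950]})

def fetch_geology():
    return pd.DataFrame({"well_id": [101, 102], "rock_type": ["shale", "sandstone"]})

# Join records from separate silos on a shared key so any team member
# sees one combined view instead of querying each application directly.
combined = fetch_well_logs().merge(fetch_geology(), on="well_id")
print(combined)
```

The same pattern extends to any number of sources, as long as the silos share a common key to join on.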

Recalling the first day of the engagement, Xiaojun describes what happened when the geologist, the reservoir engineer, the drilling engineer, the seismic interpreter, the site investigator, the operational geologist and the formation evaluator all landed in one room.

“We did not talk about technology or the data silos. We pretty much asked them about their pain points and their whole workflow.” Quickly, the group arrived at a common goal: to collaborate seamlessly with a lot more efficiency on a project driven by the business needs. It was a tight approach she’d never really seen before in her large organization.

A team comprising up to 20 different roles continued working side by side with the IBM data scientists to bring together all the data into a series of well-formed workflows, initially for a small drill well planning exercise. This was no small feat, as the data types span the realm from geology to geophysics to rock properties to economic analysis to well log data.

Now, for the first time, critical data for the Guyana project is available all in one place, accessible anywhere through various devices.  The data foundation was ready to prove itself in the field.

Benefits include an initially shortened planning cycle for the drilling design of new wells, from nine months to seven. In an industry where players are racing against time to move first oil as early as possible, any efficiency on capital investments is critical. Another benefit is the time the team saved on data preparation: an estimated 40 percent, thanks to the agile processes developed with IBM.

The data foundation passed its initial test and will now expand to handle both subsurface data and surface data in Guyana’s commercial projects. “As you can imagine, once we connect all that data together, we can start to ask very intelligent questions, and get the answers very quickly,” said Xiaojun.

Wunderman Thompson: AI reimagined at scale across the business

Merging creativity, data and technology

Using data to shape new messaging or find new prospects is core to Wunderman Thompson’s business, but the agency wanted to do more, and do it better. In markets continually roiled by disruption and innovation, it needed to help its clients move beyond transactional relationships toward cultivating deeper, longer engagements, using data to forge authentic interactions between brands and customers.

The ultimate goal at Wunderman Thompson was to build its machine learning and AI capability to create more accurate models and scale that capability across the organization. Wunderman Thompson had implemented some machine learning, but siloed databases constrained its ability to use predictive modeling effectively. To tune its operations for AI, Wunderman Thompson needed to dissolve the silos, merge the data and infuse it across the business. It needed to build a unified data science platform: a single data ecosystem that could serve the organization and beyond.
Wunderman Thompson’s largest databases (the iBehavior Data Cooperative, the AmeriLINK Consumer Database and the Zipline Data onboarding and activation platform), the most extensive in the industry, comprise billions of data points across demographic, transactional, health, behavioral and client domains. Combining these properties would provide the foundation to instill machine learning and AI across the business.

How could Wunderman Thompson transform its data practice, fully integrating machine learning into the business? Make its data ready for AI in a hybrid cloud environment? The company needed a robust platform, an open information architecture, that would maximize and consolidate its assets in a multicloud environment.

Enlisting the IBM Data Science and AI Elite team

To resolve this multifaceted challenge, only expert help from a trusted provider with innovative technology, industry expertise and enterprise-ready capabilities would do. A long history of working with IBM led Wunderman Thompson to the IBM® Data Science and AI Elite team.

With the help of IBM’s Data and AI Expert Labs and the Data Science and AI Elite team, Wunderman Thompson built a pipeline that allowed it to import the data from all three of its largest data sources. This combined asset contains more than 10TB of data amassed over more than 30 years from hundreds of primary sources, including precise data for more than 260 million individuals by age; more than 250 million by ethnicity, language and religion;
more than 120 million mature customers; 40 million families with children; and 90 million charitable donors. With the ability to work collaboratively across many different regions and offices, Wunderman Thompson could run models in a way that previously had been impossible. When the Data Science and AI Elite team introduced them to AutoAI, that’s when the work really scaled up.

John Thomas, IBM Distinguished Engineer and Chief Data Scientist, led the creation of a system that combined IBM Watson® Studio and IBM Watson Machine Learning. With AutoAI as the linchpin, Wunderman Thompson created an automated end-to-end pipeline to bring as much information as possible into its data pool, delivering more data to fuel better predictions and generate better prospects for clients. Next, Wunderman Thompson used the model building and prediction capabilities of Watson Studio to develop an iterative model selection and training process that resulted in models that met the appropriate criteria.
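AutoAI itself is a proprietary service, but the iterative model selection described above can be sketched with scikit-learn: fit several candidate models, compare cross-validated scores and keep the best. The dataset and candidate list here are purely illustrative, not Wunderman Thompson’s actual pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for prospect data; the real pipeline drew on
# a combined 10TB data asset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Try several candidate models and keep the one with the best
# cross-validated score -- the core idea behind automated model selection.
candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "forest": RandomForestClassifier(random_state=0),
    "gboost": GradientBoostingClassifier(random_state=0),
}
scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in candidates.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

A production AutoML system also automates feature engineering and hyperparameter search around this same select-and-compare loop.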
Eight weeks of collaboration with the Data Science and AI Elite team and industry insights from the IBM Account team delivered a proof of concept, undergirded with a sound methodology that enabled better-performing models using enriched datasets. Wunderman Thompson compared data points in each source to filter out records for desired features and reconciled these against one another. The team subsampled tens of thousands of records for feature engineering, applying decision tree modeling to highlight and select the most important data training features.
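The decision-tree-based feature selection described above can be sketched as follows, using scikit-learn on synthetic data (the dataset, feature count and depth are assumptions for illustration):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the subsampled records used in feature engineering.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=3,
                           random_state=0)

# Fit a shallow decision tree, then rank features by the tree's
# impurity-based importance and keep the top three.
tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, y)
top_features = np.argsort(tree.feature_importances_)[::-1][:3]
print(top_features)
```

The selected columns can then feed the downstream model-training pipeline, discarding features the tree never found useful for splitting.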
The results showed a significant uplift over previous models and a dramatic increase in segmentation depth, raising rates well beyond the initial projections. With an average change from 0.56 to 1.44 percent, a boost of more than 150 percent, IBM helped Wunderman Thompson uncover new personas in existing databases that it had previously been unable to reveal, delivering a dramatic improvement in deliverable customer lists.
This new machine learning and AI solution delivers the power to personalize messaging at scale, creating meaningful, more resilient relationships with more customers and meeting the company’s needs no matter what circumstances the world is facing. And that allows Wunderman Thompson to build more revenue for its clients, and for its business.

Learn how the IBM Data Science and AI Elite team can help you harness data science and AI to bring value to all aspects of your business.

From one year to six weeks: Highmark Health teams with IBM to accelerate AI in urgent times

Highmark Health can predict patients at risk, supporting preventative clinical intervention and outreach to avoid costly inpatient sepsis admissions.


With 720,000 cases annually in the U.S. and a staggering mortality rate of 25–50%, sepsis isn’t just life-threatening; it doubles as one of the country’s most expensive inpatient conditions, consuming more than USD 27 billion annually. What’s worse, COVID-19, like other infections, can lead to sepsis, threatening to tip overwhelmed ICUs over the edge.

Working with IBM’s Data Science and AI Elite team, organizations such as Geisinger Health System have made tremendous leaps forward using inpatient clinical data to build models to predict – and prevent – sepsis mortality. Identifying which sepsis patients are at greatest risk can help providers prioritize care – and stave off risky, costly inpatient admissions.

With the increasing urgency facing today’s healthcare institutions, there’s more ground to cover. At Pittsburgh-based Highmark Health, the second-largest integrated healthcare delivery network in the country, a team of data scientists and researchers realized they could build a model from a source of patient information that might prove even more effective in time-critical cases: insurance claims data.

It was an unexplored area for model building – a first-of-a-kind pursuit which, as promising as it sounded, would require Highmark to predict acute events months in advance using claims data from millions of members across multiple siloed data sources.

Brittany Bogle, IBM Senior Data Scientist and healthcare lead, had significant expertise in similar data science engagements with other U.S. healthcare providers, so she knew the Highmark scenario well. But this time around the team had a new, integrated platform at its disposal that could handle Highmark’s complex and varied data sets and, even better, unite the data scientists, architects and engineers collaborating on this first-of-a-kind project.

That new platform was IBM Cloud Pak® for Data with components for data modernization, DataOps and AI lifecycle automation including:

  • IBM Data Virtualization
  • IBM Watson Knowledge Catalog
  • IBM Watson Studio, an enabler of ModelOps
  • Explainable AI and Model Monitoring in Cloud Pak for Data
In a six-week proof of concept, Curren Katz, Highmark’s Director of Data Science R&D, teamed with Bogle and IBM to build a model, then score and identify patients likely to develop sepsis. The goal was to work within a three-month window for ingesting the claims data, giving clinical management teams time to develop action plans for intervention and hopefully, keep patients at highest risk out of the hospital.
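The scoring step can be illustrated with a generic classifier on synthetic data. The features, model choice and cohort size below are assumptions for illustration, not Highmark’s actual pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for claims-derived features (diagnosis counts, prior
# admissions, age, etc.); labels mark members who developed sepsis within
# the following three months. Positives are rare, as in real claims data.
X, y = make_classification(n_samples=3000, n_features=12, weights=[0.95],
                           random_state=0)
X_train, X_score, y_train, _ = train_test_split(X, y, test_size=0.3,
                                                random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Score every member and surface the highest-risk ones for clinical outreach.
risk = model.predict_proba(X_score)[:, 1]
high_risk = np.argsort(risk)[::-1][:20]
print(len(high_risk))
```

Ranking by predicted probability, rather than a hard yes/no label, lets care management teams prioritize outreach within their capacity.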

“When we were building this, other people in the company heard about it and were talking about stories of people they knew and friends and relatives – so we really thought we had hit on a very important topic,” said Katz.

While Katz and her team were no strangers to building models, getting to the deployment stage caused some angst among Highmark’s most senior data scientists. Previously, the organization’s architecture made the work cumbersome and clunky, stretching out for months, even up to a year. But with the new platform taking care of the heavy lifting, the IBM team turned over a deployed model in only a few short days. “The (IBM) data science elite team wanted to show me that this was possible and that I could tell our stakeholders across the company that we were going to have this model ready to deploy and ready to go into the clinical systems,” said Katz. “We wanted our care managers, nurses, and doctors to be able to access the findings and incorporate that into their work and reach out to patients. I think it was within a couple of days that IBM came back with a deployed model and I was kind of shocked.”

Katz and Bogle agreed that the early skepticism about tackling some of the biggest problems in healthcare quickly dissolved as the new platform enabled swift model deployment. The newly launched platform gives Katz the power to scoop up new research findings and contributors as COVID-19 evolves, changes and unveils new data.

“And that’s what this felt like: A platform where we can draw on all of the expertise in our company and build solutions that get ahead of problems, that give us insights into the future that we can act on,” said Katz. “That’s how we’re going to free people to be their best. And I think that’s where healthcare overall is really going forward: Keeping people healthy and being a partner in doing that.”


  • Eliminates data silos.
  • Provides a trusted data source and reduces data preparation by cataloguing all the attributes in one place.
  • Integrates insights into the application workflow.
  • Enables monitoring of insights for bias, trust and transparency.
  • Reduces Highmark’s AI development and deployment lifecycle from 12 months to six weeks.

Woodside Energy

Using IBM Watson technology to extract decades of experience from an ocean of data

Woodside harnesses the power of cognitive computing to extract meaningful insights from 30 years of dense and complex engineering data. IBM® Watson® technology puts decades of knowledge at the fingertips of employees across the company, helping answer tough questions faster to enable fact-driven decision making on complex projects.

Business Challenge

Woodside Energy had no systematic way to tap into the 30 years of engineering and drilling knowledge that lay buried in unstructured documentation and with its most experienced engineers.


Woodside harnesses the power of IBM Watson technology and cognitive computing to extract meaningful insights from 30 years of complex engineering data to enable fact-driven decision making on complex projects.


AUD 10M savings in employee costs

because of faster access to and more intuitive analysis of engineering records

75% reduction in time spent

by the geoscience team reading and searching through data sources

Accelerates expertise

by giving staff unlimited access to 30 years of tribal knowledge

Business challenge story

Mining an ocean of data


A seasoned engineer leads another, some 25 years his junior, through a maze of pipes, dials and valves on a natural gas platform a thousand miles from anywhere. The older man knows the facility like he knows his home back in Australia. He should; he was there when Woodside turned on the power. But this youngster—how can he ever gain the decades’ worth of knowledge he needs to make sure that the facility keeps running smoothly, even in the remote conditions of the North West Shelf?

Woodside is a 63-year-old company with more than 30 years of experience operating some of the world’s largest—and therefore most complex and expensive—offshore petroleum and LNG production platforms. Like many companies in the sprawling oil and gas business, Woodside was leaving a lot of its internal wisdom—accrued over the past 30 years from thousands of engineers—essentially untapped. Making these knowledge assets accessible to the broader population of employees could supercharge productivity in engineering and beyond.

Shaun Gregory, the organization’s Chief Technical Officer (CTO), explained in an interview at the 2016 IBM World of Watson event: “In 30 years, we’ve generated a lot of information, a lot of knowledge that is buried in reports, documents and data.” To remain competitive, Woodside knew it needed to streamline corporatewide access to its archives and accumulated knowledge, to spread not only information but also the contextual relevance of this information.

As Chief Executive Officer (CEO) Peter Coleman stated during a keynote speech at World of Watson: “It became very evident to me that we spend a lot of time building big things, but each time we build one we have to go back and recall what we did last time. We have to find the people who were around last time.” Put another way, Woodside was “extremely data rich, but that data wasn’t going anywhere,” said Coleman.

Companies such as Woodside often rely on the institutional knowledge of experienced engineers, which they hope is passed on to the next generation. “By the time you get to be a project director on a project worth AUD 15 billion, it’s probably your last project,” Coleman explained to the World of Watson audience. That career’s worth of experience may be lost to the organization by the time the next big project comes along.

Many in the Woodside workforce had been with the company since the first day of operations, challenging the business with the potential loss of an entire generation’s worth of knowledge and experience. “They aren’t going to be around to teach our graduates in four or five years’ time,” said Gregory. “We have a big bank of knowledge from over 30 years of operations; all that engineering expertise sits internally within the company and is almost like a latent asset that we need to bring to the forefront.”

Woodside believed that by improving its employees’ ability to tap into engineering insights that lay hidden and dormant throughout different parts of the organization, it could solve problems faster, reduce costs and drive greater engineering efficiencies. The CEO therefore wanted to transform the business from reactive to predictive and from opinion-based to fact-based in its decision making.

The question remained: how to drill into three decades of information spread across internal and external reports, journals and human experience. The company recognized that it had far too many documents for simple keyword searches to be effective anymore. As Russell Potapinski, Head of Cognitive Science at Woodside, said during his interview at World of Watson, “We needed a cognitive system that could actually understand the languages that we speak, reinterpret all of those documents that we have in our systems, and then surface those insights really quickly.”

It’s not just about retooling and having the same culture in place; we’re fundamentally changing our culture with cognitive computing.

— Peter Coleman, Chief Executive Officer, Woodside Energy

Transformation story

Drilling into cognitive

That’s when cognitive technology entered the picture. In his interview, Gregory recalled: “In 2014, when we started to analyze this problem, we knew about AI. AI had been through a few false starts. But we could see with the rise of Moore’s law and computing, and the cost of data storage, that it was going to be unlocked this time.” The business created a cognitive science team that spent time with IBM Research to understand, post-Jeopardy, where IBM® Watson solutions stood and whether they could successfully meet the Woodside data challenge. “We really understood how AI could work on our data sets. We understood how much unstructured data we had and how we were going to bring that to life with AI and cognitive,” Gregory concluded.

As Coleman noted in his address, this was “all about knowledge and creating shareholder value through our knowledge and competencies,” and it was about the competitive advantage stemming from that knowledge. However, he stated that it might be difficult to define a business case around knowledge, whereas “it’s easy in manufacturing, because I can count how efficient I’ve become.”

Ultimately, it wasn’t all that difficult to bring the executive committee around to seeing the benefits of cognitive computing. Coleman explained how the firm had just completed a project worth AUD 15 billion and had identified 8,000 lessons learned. When asked to name five of those lessons two years after the project, the executive team had difficulty coming up with consistent answers. “There’s the business case,” asserted Coleman. If the company’s executives could barely remember the lessons learned after only two years, how would the rest of the organization find that information? And where would that knowledge be when it came time for the next big project?

To help improve its ability to disseminate engineering knowledge and insights in support of decision making, the business engaged IBM to implement the IBM Watson Assistant solution, delivered as a cloud-based service. In the course of the deployment, IBM Watson Lab Services performed the architectural planning and implementation while IBM Research developed improved methods of ingesting engineering content from existing documentation.

A team from IBM Global Business Services®—Business Consulting Services provided project management and training services. Business Consulting Services also worked with a user-experience team in Sydney, Australia, to develop and produce the look and feel of the system’s custom user interface (UI). As IBM Watson solutions become more embedded in Woodside operations, Business Consulting Services will continue to provide business analysts, testers and process experts while IBM Watson Services will supply subject matter experts (SMEs) to round out the project.

Working with IBM and the IBM Cloud portfolio, Woodside developers identified the application program interfaces (APIs) needed to craft an architecture and build an intuitive design that helps engineers find the advice they need. These APIs include:

  • IBM Watson Natural Language Classifier: This API allows users to search a corpus by asking questions as if they were talking to a person. The API parses out the intent of a question even if people ask it in different ways.
  • IBM Watson Discovery: After understanding the question, this API retrieves all relevant information from the corpus, ranks it in terms of relevance, and responds with the best matches along with related points of inspiration.
  • IBM Watson Assistant: By incorporating a human tone, this API creates a better user experience and allows the IBM Watson solution to interact with engineers in their own language.
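The Watson APIs above are proprietary services, but the underlying retrieve-and-rank idea can be sketched generically with TF-IDF relevance scoring. The corpus and query below are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Tiny stand-in corpus; the real corpus held roughly 33,000 technical documents.
docs = [
    "Compressor selection and design review for the North Rankin II platform",
    "Geological study of sediment layers in the Browse Basin",
    "Key decision log for subsea pipeline routing and installation",
]

# Rank documents by relevance to a natural-language question -- the core
# retrieve-and-rank idea behind a Discovery-style service.
vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(docs)
query = vectorizer.transform(
    ["What compressors were used on North Rankin II?"]
)
scores = cosine_similarity(query, doc_matrix).ravel()
best = int(scores.argmax())
print(docs[best])
```

A cognitive system layers much more on top of this, such as intent parsing, language understanding and feedback-driven learning, but relevance-ranked retrieval over a document corpus is the foundation.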
The Watson for Projects instance was Woodside’s first IBM Watson deployment. To start, the organization worked with IBM to create the corpus of knowledge at the center of the solution by uploading approximately 30 years’ worth of documents related to activities in constructing and running its facilities. The vast majority of the content—which includes testing data and results, project management reports, and associated correspondence—was in unstructured form. During this phase, advanced text analysis and machine-learning algorithms within IBM Watson Assistant software scanned all the content to create a web of relationships among data elements.

Potapinski described the process and the expected outcome. “We put in approximately 33,000 technical documents from our previous projects. These could range from technical evaluations and geological studies to key decision logs, engineering reviews and reports. That way, if an engineer or a project manager wishes to figure out how we did something in the past, all they have to do is type in a simple question. ‘How did we design and select the compressors on the North Rankin II platform?’ That allows them to instantly surface the insights of all the people who came before them.”

Ingesting and correlating the information was only the first step. The IBM Watson solution also needed training. A core group of Woodside employees began to test the IBM Watson technology on what it had learned, guiding its answers and teaching the system to think like one of them. Despite some senior engineers’ fears that the system was designed to replace instead of assist them, finding volunteers to help train IBM Watson technology was surprisingly easy according to Potapinski and Gregory.

Potapinski recalled: “We were training Watson, and a very senior engineer and project manager was helping select good answers to let Watson know which answers are effective and should be used in the machine-learning models. When Watson surfaced a particularly obscure and very difficult technical problem, I saw him cheer!”

Gregory concurred. “It turns out that (the engineers) were almost first on board. They got to leave behind a legacy for future generations.” Further, he noted, “For the juniors coming through, their ‘aha’ moment was, ‘I can gain competence in a few months by asking questions of Watson that would have taken me years.’”

Once the teams established the foundational set of content connections, engineers could begin querying the solution for recommended information to support particular decisions. Woodside could also start scaling the solution to different areas within the company.

The next step after launching the Watson for Projects instance was to develop the Watson for Drilling instance. Creating and training this took six months’ work in collaboration with Woodside’s geoscience team.

Drilling an offshore well can cost approximately AUD 50 million, so precision is key. Before drilling can begin, the company’s geoscientists need to read 20–25 well-completion reports, which can be 2,000 pages long. This manual process could take 6–8 weeks to uncover past drilling events that could affect the current project. “It might be where we had a stuck pipe or other drilling event that we should be aware of before we start drilling in that particular area,” said Potapinski. Only after that lengthy report review process could the geoscientists begin to formulate how to adjust the well design in response to prior events.

In addition to historical Woodside reports, the teams must read external reports from other oil and gas companies operating in the same regions. Potapinski noted that keyword searches don’t work with these reports because the language and wording vary so widely across organizations and even generations. The Watson for Drilling instance turns the process on its head. It can read and analyze the free text and language of those critical well-completion reports and return results in real time.

Woodside engineers can now draw a circle on a relevant area of the map and “all of the drilling events that they should be aware of are instantly highlighted on the screen, in depth order,” said Potapinski. The geoscientists and engineers are then free to harness their own expertise and decades of experience to make critical decisions such as: What should we do about this? Is this something of interest, or is this something we don’t need to be aware of? Or, no, we need to adjust our designs subtly here.
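The map query Potapinski describes amounts to a radius filter plus a depth sort. A minimal sketch, with hypothetical event coordinates and depths:

```python
import math

# Hypothetical drilling events: (description, latitude, longitude, depth_m).
events = [
    ("stuck pipe", -19.58, 116.14, 2100),
    ("mud loss",   -19.60, 116.10, 1450),
    ("gas kick",   -21.90, 114.10, 3000),  # far outside the circle
]

def km_between(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres via the haversine formula."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 6371 * 2 * math.asin(math.sqrt(a))

# "Draw a circle on the map": keep events inside the radius, in depth order.
center, radius_km = (-19.59, 116.12), 25
hits = sorted(
    (e for e in events if km_between(center[0], center[1], e[1], e[2]) <= radius_km),
    key=lambda e: e[3],
)
print([e[0] for e in hits])
```

The hard part in the real system is not this geometry but extracting event locations and depths from free-text reports in the first place, which is where the language-understanding layer earns its keep.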

Gregory summed it up this way: “That is the real value of cognitive technology. We get to answer better questions, tougher questions faster and with more accuracy.”

Woodside is now home to approximately a dozen IBM Watson instances, including health, safety and environment (HSE) and drilling. Gregory described the logic behind creating new instances of IBM Watson technology for each business discipline by explaining that each area of competence has its own terminology or, as he says, different languages or sublanguages. “A driller and someone who builds something, they (each) have their own dialect. So we trained Watson in the subdialects of the different disciplines. It’s currently learning HSE. And so as it goes through all the different disciplines, that’s how we’re scaling it.”

Potapinski noted: “Learning is one of the unique features of a cognitive computing system. In fact, it’s actually learning while the system is in use, so it gets better with age rather than traditional software that tends to degrade and get less useful. For example, when we ask a question of Watson, our staff members are able to give it a thumbs-up. And that little thumbs-up actually conveys a whole lot of information for the Watson system to use.”

In summing up the value that Woodside gains from cognitive technology, Gregory stated: “I think that’s the most exciting thing about cognitive is that it keeps learning. Once Watson’s been taught and gotten better at an answer, it is immediately available to all employees no matter where they are.”

That is the real value of cognitive technology. We get to answer better questions, tougher questions faster and with more accuracy.

— Shaun Gregory, Chief Technical Officer, Woodside Energy

Results story

Building the future of energy

Woodside can now tap into years’ worth of data and personnel experience to uncover its tribal knowledge while processing new information as the company adds to its knowledge corpus. The business continues to expand its use of IBM Watson technology to different areas of the organization.

Regarding using cognitive computing throughout Woodside, CEO Coleman told his audience, “It’s not just about retooling and having the same culture in place; we’re fundamentally changing our culture with cognitive computing.”

Woodside believes it can transform its own—and perhaps its industry’s—working culture by using technology that helps its engineers and other employees perform more effectively.

The company’s adoption of cognitive computing can help realign the careers of its key people to focus on the innovation side of the company’s business. “No geologists or engineers go into their career thinking, ‘I’d really love to spend Monday to Thursday just hunting for the information I need and only on Friday getting to come up with innovative solutions,’” Potapinski asserted. “I think it’s just going to completely redefine what it means to work and how much fun it is to work when (information discovery) is an easy step in your day.” Once the system delivers the answer or pinpoints an issue, engineers and geologists can then apply their cognitive abilities to the problem to derive creative solutions that were previously unattainable or only achieved after weeks of research or trial and error.

Gregory framed it this way: “It’s about empowering the engineers to be able to do a whole lot more and answer those tougher questions. I think that it won’t be long until those (companies) that aren’t on the cognitive journey will get left behind.

“Rather than viewing technology as a way to take out tiny incremental costs, we are trying to unlock bigger problems like how we can drive exploration costs down and how we can cut construction costs in half. These are the bigger problems,” concluded Gregory. At the same time, the company expects to drive down employee costs by AUD 10 million by improving productivity through faster access to and more intuitive analysis of the company’s tribal knowledge.

Gregory and Potapinski agree that cognitive computing provides a huge advantage both to the engineers and to the whole company, reducing the time spent finding data by 75 percent. Specifically, instead of spending 80 percent of their time on data collection and only 20 percent of the time figuring out what to do with it, engineers can now spend 80 percent of their time deriving insights from the data that IBM Watson technology presents.

Although Woodside doesn’t expect to solve all the world’s problems, the company hopes to become an industry leader in health and safety practices, geosciences and environmental protection—all supported by cognitive computing technology.


CONTACT US NOW and let's work together.