Imagine sitting in the executive chair about to make a critical decision, while a team of colleagues hurls financial information at you. It’s a stressful time, but at least the numbers are familiar. They are measured in dollars.

Now envision being in that position while being briefed on sustainability efforts and progress. This time the statistics arrive in unfamiliar and disparate units: carbon footprint in tons of CO2 equivalent, water footprint in gallons, energy performance in kilowatt-hours, and the list goes on.

While these numbers are valuable within specific domains and for specific locations, they offer little decision support for executives who must prioritize where to focus effort. The electric utility sells electricity by the kilowatt-hour; the water utility charges by the gallon. For them, those units make perfect sense.

But how does it serve a leader concerned with improving overall sustainability?

For that leader, a kilowatt-hour from coal is not the same as a kilowatt-hour from solar, and a gallon drawn from a water-stressed reservoir is not equal to a gallon from a rain-blessed watershed. To manage environmental impact, sustainability, risk and resiliency, we need two things: good data and a simple way to interpret it.
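To make that contrast concrete, here is a rough sketch in Python. The emission factors are approximate, commonly cited lifecycle figures rather than data for any particular grid, and the numbers are illustrative only.

```python
# Illustrative only: two sites report the same kilowatt-hours, but the
# climate impact differs sharply once the generation source is accounted for.
# The factors below are rough, commonly cited figures (kg CO2e per kWh),
# not authoritative values for any specific plant or grid.
EMISSION_FACTORS = {
    "coal": 1.0,    # roughly 1 kg CO2e per kWh (approximate)
    "solar": 0.05,  # roughly 0.05 kg CO2e per kWh over the lifecycle (approximate)
}

def co2e_kg(kwh: float, source: str) -> float:
    """Convert raw kilowatt-hours into kg of CO2 equivalent for a given source."""
    return kwh * EMISSION_FACTORS[source]

# The same 10,000 kWh bill tells two very different sustainability stories.
print(co2e_kg(10_000, "coal"))   # ~10,000 kg CO2e
print(co2e_kg(10_000, "solar"))  # ~500 kg CO2e
```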

Environmental sustainability today relies on enormous datasets, making it a Big Data problem: the challenge of accumulating, organizing and interpreting vast amounts of largely unstructured data. But unlike other domains, it is coupled with an equally big interpretation challenge.

A single dataset can now run to petabytes of data; for context, one petabyte can hold about 500 billion pages of standard printed text. Environmental information draws on inputs such as satellite imagery of water assets, grid efficiency and power plant emissions for electricity, coupled with the performance of established and emerging technologies such as advanced materials, new desalination membranes and power electronics.
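A quick back-of-the-envelope check of that figure, assuming roughly 2,000 bytes of plain text per printed page (an assumed average, not a figure from the article):

```python
# Back-of-the-envelope check of the "500 billion pages per petabyte" claim.
# Assumption: a standard printed page holds roughly 2,000 bytes of plain text.
BYTES_PER_PETABYTE = 10**15
BYTES_PER_PAGE = 2_000  # assumed average for plain printed text

pages = BYTES_PER_PETABYTE / BYTES_PER_PAGE
print(f"{pages:,.0f} pages per petabyte")  # 500,000,000,000 -> 500 billion
```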

What can decision-makers do to avoid being lost in a complex sea of information?

The first step is to access the best available data. Take, for example, the U.S. electricity grid: a hugely complex system, but a critical one for the sustainability and resiliency of any organization. Yet the current state of the art (called eGRID) divides the U.S. grid into only 24 regions and provides an annual update of the electricity generated in those regions, representing about 20,000 power plants. That is the quality of information available for sustainability decisions, while power traders have access to minute-by-minute data at far finer geographic resolution than eGRID provides. Similarly, organizations trying to quantify their water risks rely on archaic footprint information and rarely use the near-real-time United States Geological Survey measurements of thousands of watersheds. These are just two of many examples of the poor data currently used by decision-makers.
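That USGS data is publicly accessible today. The sketch below pulls a recent streamflow reading from the USGS Instantaneous Values web service; the endpoint, example gauge number and parameter code are my assumptions based on the public service and should be checked against current USGS documentation before use.

```python
# Minimal sketch: pull near-real-time streamflow from the USGS Instantaneous
# Values service (waterservices.usgs.gov). The site number and parameter code
# below are illustrative; check the USGS documentation for the gauges and
# parameters relevant to your own locations.
import json
import urllib.request

SITE = "01646500"    # example gauge: Potomac River near Washington, D.C.
PARAMETER = "00060"  # USGS parameter code for discharge, cubic feet per second
URL = (
    "https://waterservices.usgs.gov/nwis/iv/"
    f"?format=json&sites={SITE}&parameterCd={PARAMETER}"
)

with urllib.request.urlopen(URL) as response:
    data = json.load(response)

# The JSON layout may differ between service versions, so parse defensively.
for series in data.get("value", {}).get("timeSeries", []):
    site_name = series["sourceInfo"]["siteName"]
    latest = series["values"][0]["value"][-1]
    print(site_name, latest["dateTime"], latest["value"], "cfs")
```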

Once better data is collected, any organization's performance can be analyzed location by location as well as at the organizational level. The next step is to benchmark the organization against others in its sector and to benchmark its locations against one another. To make an 'apples-to-apples' comparison between organizations or locations, compare the energy used at each, expressed in units that account for the regional resource supply and infrastructure.
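One way to picture that benchmarking step is the sketch below: each location's raw usage is weighted by regional factors before being compared. Every factor and figure in it is a hypothetical placeholder, not real data or any established methodology.

```python
# A sketch of the 'apples-to-apples' step: express each location's raw usage
# in units that reflect its regional grid mix and water stress, then rank.
# All factors and usage figures are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Location:
    name: str
    kwh: float            # annual electricity use
    gallons: float        # annual water withdrawal
    grid_factor: float    # kg CO2e per kWh for the regional grid (hypothetical)
    water_stress: float   # regional scarcity weight, 1.0 = baseline (hypothetical)

    def adjusted_carbon(self) -> float:
        return self.kwh * self.grid_factor

    def adjusted_water(self) -> float:
        return self.gallons * self.water_stress

locations = [
    Location("Plant A (coal-heavy grid)", 2_000_000, 5_000_000,
             grid_factor=0.85, water_stress=0.8),
    Location("Plant B (hydro-heavy grid, arid basin)", 2_000_000, 5_000_000,
             grid_factor=0.15, water_stress=2.5),
]

# Identical raw usage, very different adjusted footprints and priorities.
for loc in sorted(locations, key=lambda l: l.adjusted_carbon(), reverse=True):
    print(f"{loc.name}: {loc.adjusted_carbon():,.0f} kg CO2e, "
          f"{loc.adjusted_water():,.0f} stress-weighted gallons")
```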

From there, best practices and advanced technologies can be selected and deployed, optimized for the needs of a single location, performance across locations, and competitive advantage. This approach merges simplicity and accuracy into resource decision-making for companies, cities and society.