How to Use the Environmental Protection Agency's (EPA's) API to Pull Data Using Python

Welcome to Tech Rando! In this post, I will walk you through using the Environmental Protection Agency’s (EPA’s) API to pull publicly available EPA data into Python for analysis.

First, a little background on the EPA. The Environmental Protection Agency’s primary goal is to develop and enforce regulations that ensure that Americans have clean air, water, and land. The EPA is also involved in a host of other activities: it provides grants to non-profits and state governments to aid in environmental cleanups, it teaches people about the environment, and it publishes a ton of environmental information that is publicly available. Through the EPA’s Envirofacts Data Service API (documentation is available on the EPA’s website), you can access all of the publicly available EPA data sets from a single entry point via Python. Some of the data sources available include: Greenhouse Gas (GHG) emissions, the Radiation Information Database, the Toxics Release Inventory, the Superfund Enterprise Management System, and the Safe Drinking Water Information System.

As an example, we’re going to look at pulling data from the Greenhouse Gas emissions database.

First, it may be important to note that the EPA’s Envirofacts API is not exactly user-friendly. I initially attempted to download the epa package via Python, which acts as a wrapper for the Envirofacts API. However, since the package doesn’t appear to have been updated since its creation in 2011, it throws several errors in a Python 3 environment. So, I turned instead to the EPA’s documentation on constructing queries to pull data directly via URLs.

The documentation offers instructions on how to effectively construct a URL that pulls data directly.


An example of a URL constructed for the API looks like this (reconstructed here from the format rules in the documentation): https://enviro.epa.gov/efservice/t_design_for_environment/JSON/rows/1:19

Let’s break down what the above URL means. First, t_design_for_environment is the name of the table that we want to pull. We want the results in JSON format, so that is referenced next. And finally, we want to pull rows 1 through 19 from the data source, referenced last.
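These URL pieces can be assembled programmatically. Here is a minimal sketch, with the base URL taken from the Envirofacts documentation (build_url is my own helper name, not part of any library):

```python
# Assemble an Envirofacts query URL from its pieces:
# base URL + table name + output format + optional row range.
BASE_URL = "https://enviro.epa.gov/efservice/"

def build_url(table, output_format="JSON", rows=None):
    """Build a query URL for the given table, format, and optional row range."""
    url = BASE_URL + table + "/" + output_format + "/"
    if rows is not None:
        # Row ranges look like '1:19' and are appended as a 'rows' qualifier
        url += "rows/" + rows
    return url

print(build_url("t_design_for_environment", "JSON", "1:19"))
# https://enviro.epa.gov/efservice/t_design_for_environment/JSON/rows/1:19
```

Pinging the returned URL (for example with the requests library) then yields the data in the requested format.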

The Envirofacts website has information on how specifically to construct queries, so you’ll want to check that out. The site offers a lot of tables to sort through, so it’s important to have some sort of an idea of what you’re looking for when you’re building a query.

Here, I want to construct a search against the GHG database, so I click the GHG link on the Envirofacts metadata page.

Click on the GHG option to search the metadata!

The following should appear on the greenhouse gas model webpage:

All of the blue ‘SubPart’ tabs are links to tables available in the greenhouse gas model database. If you click on several of the ‘SubPart’ tabs, you’ll notice there are about 4-5 tables associated with each ‘SubPart’, resulting in a lot of tables!

To keep things simple for this tutorial, we’re going to select the master ‘GREENHOUSE GAS SUMMARY’ tab:

We’re taken to a web page containing the names of several tables, as well as the table linking relationships:

For the purpose of this tutorial, I want to pull all of the associated GHG emissions by sector and subsector, on the right branch of the tree:

This means I need to pull the following tables via the API:

- PUB_DIM_FACILITY
- PUB_FACTS_SECTOR_GHG_EMISSION
- PUB_DIM_SECTOR
- PUB_DIM_SUBSECTOR
- PUB_DIM_GHG
To construct each of these queries, I click on each of the tables to see how the URL is formatted. This is what it looks like for the PUB_DIM_FACILITY table:

I want to pull the whole table in as a CSV, so I reformat the URL query accordingly: I swap the output format from JSON to CSV and drop the row qualifier.
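As a sketch, the reformatted query looks like the following (the exact URL is my reconstruction from the format rules above, so verify it against the table’s own page):

```python
# Row-limited JSON query vs. a whole-table CSV pull: swap the output
# format and drop the rows qualifier. (URL form reconstructed from the
# Envirofacts documentation -- verify against the table page.)
json_url = "https://enviro.epa.gov/efservice/PUB_DIM_FACILITY/JSON/rows/1:19"
csv_url = "https://enviro.epa.gov/efservice/PUB_DIM_FACILITY/CSV"

# pandas can read the CSV response straight off the URL (network call):
# import pandas as pd
# facility_df = pd.read_csv(csv_url)
print(csv_url)
```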

Not bad! In fact, pulling this data into pandas dataframes via Python is a fairly automatable process, as shown in the script below (also available via my GitHub account):

import pandas as pd
import io
import requests

class EPAQuery():
    """This class is used to pull EPA data directly into Python."""

    def __init__(self, table_name,
                 desired_output_format='CSV',
                 base_url='https://enviro.epa.gov/efservice/'):
        """
        Arguments:
            table_name: String. Name of the table in the Envirofacts database
                that we want to pull from.
            desired_output_format: String. Can be 'EXCEL', 'CSV', or 'JSON';
                the format that you want the data delivered in. We set the
                default to CSV as that's how we'll pull it into pandas.
            base_url: String. Entry point of the Envirofacts Data Service API.
        """
        self.table_name = table_name
        self.desired_output_format = desired_output_format
        self.base_url = base_url

    def construct_query_URL(self,
                            desired_state=None, desired_county=None,
                            desired_area_code=None, desired_year=None,
                            rows_to_include=None):
        """
        This function constructs the URL that we want to pull the data from,
        based on function inputs.
        Arguments:
            desired_state: String. State abbreviation that you want to pull from.
            desired_county: String. County that you want to pull from.
            desired_area_code: String. Area (zip) code that you want to pull from.
            desired_year: String. Year that you want to pull from.
            rows_to_include: String. Rows that you want to include when pulling
                the query. EX: '1:19'--rows 1 thru 19. Default set to None.
        Returns:
            query: String. URL that we want to pull.
        """
        # Base of the query that we're going to build off of
        query = self.base_url
        # Add in the table name
        query += self.table_name + '/'
        # Add in the state qualifier, if the desired_state variable is named.
        # (Column names below are assumed; check them against your table.)
        if desired_state is not None:
            query += 'STATE/' + desired_state + '/'
        # Add in the county qualifier, if the desired_county variable is named
        if desired_county is not None:
            query += 'COUNTY/' + desired_county + '/'
        # Add in the area code qualifier, if desired_area_code is named
        if desired_area_code is not None:
            query += 'ZIP/' + desired_area_code + '/'
        # Add in the year qualifier, if the desired_year variable is named
        if desired_year is not None:
            query += 'YEAR/' + desired_year + '/'
        # Add in the desired output format to the query
        query += self.desired_output_format + '/'
        # If there is a row qualifier, add it here
        if rows_to_include is not None:
            query += 'rows/' + rows_to_include
        # Return the completed query
        return query

    def read_query_into_pandas(self, query):
        """
        This function takes the query URL, pings it, and writes the result to
        a pandas dataframe, which is returned.
        Arguments:
            query: String. URL that we want to pull.
        Returns:
            dataframe: pandas dataframe. Dataframe generated from the file URL.
        """
        s = requests.get(query).content
        # on_bad_lines='skip' replaces the deprecated error_bad_lines=False
        dataframe = pd.read_csv(io.StringIO(s.decode('utf-8')),
                                engine='python', encoding='utf-8',
                                on_bad_lines='skip')
        return dataframe

def main():
    # Declare the names of the tables that we want to pull
    table_names = ['PUB_DIM_FACILITY', 'PUB_FACTS_SECTOR_GHG_EMISSION',
                   'PUB_DIM_SECTOR', 'PUB_DIM_SUBSECTOR', 'PUB_DIM_GHG']
    # Dataframe dictionary
    epa_dfs = {}
    # Object dictionary
    epa_objects = {}
    # Loop through all of the table names in the list, generate
    # a query to pull via the API, and save to a pandas dataframe
    for table_name in table_names:
        # Generate a new object
        epa_objects[table_name] = EPAQuery(table_name)
        # Construct the desired query name
        query = epa_objects[table_name].construct_query_URL()
        # Pull in via the URL, and generate a pandas df,
        # which is then saved into a dictionary of dataframes called
        # epa_dfs for future reference
        epa_dfs[table_name] = epa_objects[table_name].read_query_into_pandas(query)
    # Generate a master dataframe by joining all of the dataframes together.
    # (Join keys on the dimension tables below are assumed from the table
    # linking diagram; verify them against the pulled columns.)
    master_df = pd.merge(epa_dfs['PUB_DIM_FACILITY'],
                         epa_dfs['PUB_FACTS_SECTOR_GHG_EMISSION'],
                         left_on=['PUB_DIM_FACILITY.FACILITY_ID',
                                  'PUB_DIM_FACILITY.YEAR'],
                         right_on=['PUB_FACTS_SECTOR_GHG_EMISSION.FACILITY_ID',
                                   'PUB_FACTS_SECTOR_GHG_EMISSION.YEAR'],
                         how='inner')
    # Merge master_df with PUB_DIM_SECTOR
    master_df = pd.merge(master_df, epa_dfs['PUB_DIM_SECTOR'],
                         left_on='PUB_FACTS_SECTOR_GHG_EMISSION.SECTOR_ID',
                         right_on='PUB_DIM_SECTOR.SECTOR_ID', how='inner')
    # Merge master_df with PUB_DIM_SUBSECTOR
    master_df = pd.merge(master_df, epa_dfs['PUB_DIM_SUBSECTOR'],
                         left_on='PUB_FACTS_SECTOR_GHG_EMISSION.SUBSECTOR_ID',
                         right_on='PUB_DIM_SUBSECTOR.SUBSECTOR_ID', how='inner')
    # Merge master_df with PUB_DIM_GHG
    master_df = pd.merge(master_df, epa_dfs['PUB_DIM_GHG'],
                         left_on='PUB_FACTS_SECTOR_GHG_EMISSION.GAS_ID',
                         right_on='PUB_DIM_GHG.GAS_ID', how='inner')
    # Subset to include only the important columns (the original column list
    # was truncated here; extend it with the emissions columns you need)
    master_df_subsetted = master_df[['PUB_DIM_FACILITY.LATITUDE',
                                     'PUB_DIM_FACILITY.LONGITUDE']]
    return master_df_subsetted

if __name__ == "__main__":
    main()
Let’s break down what the above script means. First, in the EPAQuery() class, an object is initialized with a table_name string (for example, ‘PUB_DIM_SECTOR’), a default ‘CSV’ output format, and the EPA’s Envirofacts base URL. From there, a query URL can be built to pull the data, using the construct_query_URL() function. Once the URL has been built, it is called via the read_query_into_pandas() function, and the table output is written to a pandas dataframe. All of these steps are performed in a loop over the specified tables in the main() block. Finally, the tables are merged together to create a master_df dataframe, which can be used for analysis.
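With master_df in hand, typical analysis is a groupby away. Here is a toy illustration using made-up rows: the column names follow the TABLE.COLUMN pattern of the pulled data, but the exact emissions column name is an assumption, so check it against your own pull.

```python
import pandas as pd

# Toy stand-in for master_df with two of the merged columns.
# (Column names are illustrative; the emissions column name is assumed.)
master_df = pd.DataFrame({
    'PUB_DIM_SECTOR.SECTOR_NAME': ['Power Plants', 'Power Plants', 'Waste'],
    'PUB_FACTS_SECTOR_GHG_EMISSION.CO2E_EMISSION': [120.0, 80.0, 30.0],
})

# Total emissions by sector
by_sector = (master_df
             .groupby('PUB_DIM_SECTOR.SECTOR_NAME')
             ['PUB_FACTS_SECTOR_GHG_EMISSION.CO2E_EMISSION']
             .sum())
print(by_sector)
```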

This concludes this tutorial. Thank you for reading!

This post is very similar to my post outlining how to use the EIA’s API gateway to pull data into Python. If you’re also interested in that, check that post out as well.
