As your Python application grows, at some point you'll want to start storing logs in a file. That has been the case for me since the very early days of my Python experience. In this post, I'll share my ideas for the "How Do I Store Python Logs In A File?" question.

How Do I Store Python Logs In A File?

One way to store Python logs in a file so you can look at them later is to use the built-in logging module.

Here’s an example of how to use the logging module to store logs in a file in Python:

import logging

# Configure the logging module to use a file
logging.basicConfig(filename='example.log', level=logging.DEBUG)

# Write some log messages
logging.debug('This is a debug message')
logging.info('This is an info message')
logging.warning('This is a warning message')
logging.error('This is an error message')
logging.critical('This is a critical message')

In this example, the basicConfig() function is used to configure the logging module to write logs to a file named ‘example.log’.

The level argument is set to logging.DEBUG which means that all log messages, including debug, info, warning, error, and critical messages, will be written to the file.

Then the code uses the level-specific functions debug(), info(), warning(), error() and critical() to write log messages at different levels.

When you run this code, it will create a new file called 'example.log' in the current working directory (usually the directory you run the script from, not necessarily the script's own directory), and the log messages will be written to it.

Obviously, this is a very simplified version of logging in Python, I’ll get into more details and other examples in the following sections of this post.

What Is Logging In Coding?

When you’re writing a new Python script or working on a Python project already (e.g. Django), sometimes it’s helpful to keep track of what’s happening inside the script (or project) as it’s running. This is called LOGGING!

You can think of it like a diary for your Python application.

When your Python App is running, it writes down important things that happen, like if there’s an error or if the App is working correctly.

This can help you figure out what went wrong if something doesn’t work, or if you want to make the application better. (increase speed, performance, etc.)

Luckily, in Python we have a special tool for this: the logging module (an example was given at the beginning of the post).

The logging module helps us keep track of these things.

Here’s a quick example of how I’d add logging next to an HTTP request inside a class method:

(screenshot: a real example of logging in Python)

Just so you understand this concept – it’s like a notebook that the application can write in.

You can tell the logging module where to put the notes, like in a specific file or location, e.g. the filename='clearbank.log' part.

logging.basicConfig(filename='clearbank.log', level=logging.DEBUG)

And we can also decide how important the notes are, like whether it's just a little thing or a big thing; that would be the level=logging.DEBUG part.

In my personal experience, I’ve found it’s a good idea to start logging as soon as you start to work on your Python application even if it starts out as a small Python script idea.

That way, you can see what’s happening and fix any problems that come up.

And when you’re done making the program, the logs can help you understand how it’s working and how people are using it.

This is especially important once you start to work on real projects, deployed in production already.

Why Do We Log In Programming?

Logging is important in programming because it allows Python developers to see what’s happening inside the application as it runs, which can be useful for debugging and troubleshooting.

As I've already mentioned, when you're working on a real project, there's no way for you to simply "execute the script" and see what happens.

Many parts of an application (e.g. a Django web project) run simultaneously, and the way you solve issues is by going back through log files, sometimes hours, days or even weeks back, to find out what happened at the particular moment when your customer did XYZ steps.

Without properly configured logging, it would be difficult to find issues and deliver a working Django application.

To give you a bit of a perspective of a real situation, here’s a recent log file I went through to find out what happened in a production environment.

(screenshot: excerpt from logs.txt, a production log file)

In the above log file, you can see that at some point something went wrong and our Django application couldn’t make a successful request and the following error happened:

"Invalid token request code"

In this case, there were no other logging parts that would help me debug the process and find the root cause of the issue, but enough for me to understand which part of the application didn’t work – which request was failing.

Now I can go deeper into the project, find where this exact request is being executed and think about possible causes of the issue.

Here's another example of writing logs. utils.log is a custom function we have developed in our code base which is not really using the logging module, but something else; I'll explain it later in the post:

Python Logging with a custom function

As you can see from the examples I’ve already given here, logging is more about providing valuable information to developers, so they can read logs and understand which part of the code was executed and where the possible cause of the issue could be.

Additionally, logging can be used to track the state of your application over time, which can be useful for monitoring and auditing.

In summary, the benefits of logging are:

  • It makes it easier to understand what is happening inside your app and locate the source of any problems.
  • It can provide valuable information about how your app is being used and how it’s performing.
  • It can be used to track the state of your app over time, which can be useful for monitoring and auditing.
  • It can be useful for debugging and troubleshooting.

Why Is Logger Better Than Print?

Using the logging module is generally considered to be better than using print() statements for several reasons:

  • Separation of concerns: The logging module is specifically designed for logging, while print() statements are meant for displaying output to the user. By using the logging module, you are keeping the logging functionality separate from the rest of your code, which can make it easier to maintain and understand.
  • Flexibility: The logging module provides a lot of flexibility in terms of where and how logs are written. You can write logs to a file, to the console, or even to a remote server. You can also specify the logging level, so you can choose to log only important messages, or to log everything. On the other hand, print() statements are not as flexible, as they only print to the console and don’t provide a way to filter the output.
  • Ease of filtering: The logging module uses different levels of logging, such as debug, info, warning, error, and critical, which can be used to filter the logs based on their importance. This makes it easy to focus on the most important information, and ignore less important messages. This can be difficult to achieve with print() statements, as you would need to manually filter the output.
  • Ease of Configuration: The logging module can be configured to write logs to a file, send logs to a remote server, email logs, send logs to syslogs etc. You can also specify the log format, log level and other options in the configuration. On the other hand, print() statements are not configurable and you would have to write different code to handle different scenarios.
  • Ease of usage: The logging module provides an easy and consistent way to log messages across the application, so it can be easily integrated with other libraries and tools. print() statements are more difficult to use in this way and may lead to inconsistency in the way the logs are written.

Overall, the logging module is a more powerful and flexible option for logging in Python, as it provides more features and control over the output than print() statements.

It also provides a way to separate the logging functionality from the rest of the code, making it more maintainable and scalable in the long run.
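To make the flexibility point concrete, here's a small sketch of one logger feeding two destinations with different levels, something print() can't do by itself (the logger name and filename are illustrative):

```python
import logging

logger = logging.getLogger('flex_demo')
logger.setLevel(logging.DEBUG)

# Everything from DEBUG up goes to the file...
file_handler = logging.FileHandler('flex_demo.log')
file_handler.setLevel(logging.DEBUG)
logger.addHandler(file_handler)

# ...but only WARNING and above reach the console.
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.WARNING)
logger.addHandler(console_handler)

logger.debug('Written to the file only')
logger.warning('Written to the file AND the console')
```

One call site, two outputs, two filtering rules: that's the separation you give up when everything is a print().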

How Can I See Gunicorn Logs?

When your Django app is already deployed and running behind nginx and gunicorn, you might face issues with the logging module configuration.

It is generally recommended to use the logging module for logging in both nginx and gunicorn, instead of using print() statements with a flush parameter.

However, the configuration between different environments can be tricky and sometimes not even necessary, so the alternative would be using your own custom log() function, here’s an example (see below)

from django.conf import settings  # settings.TESTING is a custom flag in this code base

def log(
    message: str,
    force_in_testing: bool = False,
    **kwargs,
) -> None:
    # Don't log in TESTING=True unless force_in_testing is set.
    if settings.TESTING and not force_in_testing:
        return
    kwargs['flush'] = True
    print(message, **kwargs)

Having a custom log() function will let you start out with print() and later, if your application grows to the level where it’s difficult to differentiate logs, you could switch to using logging module just by changing this one custom log() function.
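A minimal sketch of that switch-over idea, assuming a module-level USE_LOGGING flag (the flag and the logger name are hypothetical):

```python
import logging

USE_LOGGING = False  # flip to True once plain print() stops being enough

def log(message: str, **kwargs) -> None:
    # Single switch point: change this function, not every call site.
    if USE_LOGGING:
        logging.getLogger('app').info(message)
    else:
        kwargs.setdefault('flush', True)
        print(message, **kwargs)
```

Every part of the code base keeps calling log(...); only this one function decides whether the message goes through print() or the logging module.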

Also, here's why print() might work better in gunicorn and nginx than the logging module. 👇🏻

In Python, the print() function is used to output text to the console.

By default, the print() function writes the text to a buffer, and the buffer is flushed (i.e., the text is written to the console) only when a newline character (\n) is encountered or when the buffer is full.

The flush parameter, which is set to False by default, controls whether or not the buffer should be flushed after each call to the print() function.

When flush=True, the buffer is immediately flushed after the print() function is called, which means that the text is immediately written to the console.

When flush=False, the text is not immediately written to the console but is instead stored in a buffer.

Here’s an example to illustrate the difference:

import time

print("This is the first line.", flush=True)
time.sleep(2)
print("This is the second line.", flush=True)

In this example, the flush=True parameter is used with the print() function.

This means that the text is immediately written to the console as soon as the print() function is called.

Because of this, the first line is printed immediately, then after the 2-second delay the second line appears; each line is written out as soon as its print() call runs.

Now, if you change the flush parameter to False and re-run the script, you will see that the two lines of text will be printed together after the 2-second delay.

The flush=True parameter can be useful in situations where you want to ensure that the text is immediately written to the console, such as when you want to display text in real time.

Caution: It might not work exactly as described above in certain environments (e.g. on a MacBook), but it works that way on servers running Ubuntu.

Python custom log function

If you're working on a smaller project, it can be okay to just use a log() function with a print() inside it for the sake of simplicity.

On the other hand, the logging module provides a lot of flexibility in terms of where and how logs are written.

How Do I Use Logger Instead Of Print In Python?

I already gave you a couple of simple examples for logging module usage, but let’s dive into real situations with real examples to give you more perspective.

To use the logging module instead of print() statements in Python, you first need to import the logging module and configure it to write logs to the desired location (file, syslog, remote server, etc.).

Here’s an example of a real-world situation where the logging module can be used instead of print.

Let’s say you have a script that periodically checks the status of a service and sends an email notification if the service is down.

Instead of using print() statements to display the status of the service, you can use the logging module to write logs to a file.

import logging
import requests
import smtplib

logging.basicConfig(filename='service_status.log', level=logging.INFO)

def check_service():
    try:
        r = requests.get('https://example.com/health')  # placeholder URL for the service
        if r.status_code != 200:
            logging.error('Service is down')
            send_email_notification()
        else:
            logging.info('Service is up')
    except requests.exceptions.RequestException as e:
        logging.error('Error: {}'.format(e))

def send_email_notification():
    # code to send email notification (e.g. via smtplib)
    pass

if __name__ == '__main__':
    check_service()

In this example, the logging module is configured to write logs to a file named ‘service_status.log’ at level logging.INFO.

The function check_service() uses the requests library to check the status of the service and writes a log message using the logging.info() and logging.error() functions, depending on the status of the service.

If the service is down, the send_email_notification() function is called to send an email notification.

In this way, you can use the logging module to keep track of the status of the service and send email notifications when necessary, instead of using print() statements to display the status.

How Do I Check Python Log Files?

The location where the logging module saves log files depends on how you have configured the logging module in your Python script.

When you use the basicConfig() function to configure the logging module, you can specify the filename parameter to specify the name and location of the log file.

For example:

logging.basicConfig(filename='example.log', level=logging.DEBUG)

This will write the logs to a file named 'example.log' in the current working directory.

Alternatively, you can use the FileHandler class to specify the file name and location explicitly:

import logging

logger = logging.getLogger()

fh = logging.FileHandler('/Users/robertsgreibers/projects/example.log')
logger.addHandler(fh)

This will write the logs to a file named ‘example.log’ in the /Users/robertsgreibers/projects directory.

(screenshot: listing files in the projects folder)

If you are not sure where the log files are being saved, you can add the following line of code to your script to print the log file location:

print(logging.getLogger().handlers[0].baseFilename)

This will print the file name of the first handler in the logger, in case you have multiple handlers, you can loop over the handlers and print the baseFilename of each handler.

It’s also worth noting that, when running your script in production, the logs might be saved in a different location than where you are running your script, so it’s important to check the configuration of the system to determine where the log files are saved.

Once you find the log file location, there are a few ways to check and analyze Python log files:

  1. Manually: You can open the log file in a text editor and manually scan through the logs. This method is simple but can be time-consuming, especially if the log file is large.
  2. Command Line Tools: You can use command line tools like grep, tail, less to quickly search, filter and view the logs. For example, you can use grep to search for specific keywords or tail to view the last N lines of the log file.
  3. Log file Viewer: You can use a log file viewer, such as glogg or logviewer, to view and analyze log files in a graphical user interface. These tools provide features like filtering, searching, and highlighting that can make it easier to analyze large log files.
  4. Log Aggregators: You can use log aggregators like Logstash, Fluentd, Kibana etc to aggregate logs from multiple sources and view them in a centralized location. These tools can parse, index and search the log data, and provide advanced visualizations, alerts and monitoring capabilities.
  5. Python Libraries: You can use python libraries like pandas, matplotlib to programmatically read and analyze the log files. This allows you to build custom scripts and tools to analyze the logs in specific ways.

Which method you choose will depend on your specific needs and the size of your log files.

For small log files, manual inspection or using command line tools may be sufficient.

For large log files, using a log file viewer or log aggregator may be more efficient.

I personally prefer writing quick log parser scripts myself in Python.

To build your own log parser script, here is a general approach:

  1. Read the log file: The first step is to read the log file using the open() function or the with open() statement. You can then read the file line by line and store the lines in a list.
  2. Parse the log lines: Once you have the log lines in a list, you can use string manipulation methods like split(), find(), startswith() etc to parse the log lines and extract the relevant information.
  3. Store the parsed data: The next step is to store the parsed data in a data structure that is easy to work with, such as a list of dictionaries or a pandas DataFrame.
  4. Analyze the data: Now that you have the parsed data in a structured format, you can use Python libraries like pandas, matplotlib to analyze the data in various ways such as counting the occurrences of specific keywords, visualizing the data over time, etc.

Here is an example of a simple log parser script that reads a log file, parses the lines and prints the number of occurrences of a specific keyword:

import re

# Open the log file
with open('example.log', 'r') as f:
    # Read the log file line by line
    lines = f.readlines()

# Initialize a counter
counter = 0

# Iterate over the lines
for line in lines:
    # Search for the keyword
    match = re.search('keyword', line)
    if match:
        # If keyword is found, increment the counter
        counter += 1

# Print the number of occurrences
print('Number of occurrences:', counter)

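Building on the same idea, here's a sketch that counts log lines per level instead of a single keyword, assuming the default basicConfig format where each line starts with the level name (e.g. ERROR:root:Service is down). The sample lines are made up for illustration:

```python
import re
from collections import Counter

# In practice these lines would come from open('example.log')
sample_lines = [
    'DEBUG:root:starting up',
    'ERROR:root:Service is down',
    'ERROR:root:Invalid token request code',
    'INFO:root:Service is up',
]

levels = Counter()
for line in sample_lines:
    # The default format puts the level name first, before a colon
    match = re.match(r'(DEBUG|INFO|WARNING|ERROR|CRITICAL):', line)
    if match:
        levels[match.group(1)] += 1

print(levels)
```

A per-level count like this is often the fastest first pass over an unfamiliar log file: a spike in ERROR lines tells you where to start reading.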
How Do I Use Logger In Multiple Files In Python?

In a Python project, you may want to use the logging module in multiple files. One simple way to do this is to share a single logger object across files; the logging module's logger names and propagation make this straightforward.

Here’s an example of how you can use a logger in multiple files:

Create a logger in the main file: In the main file of your project, create a logger using the logging.getLogger() function.

For example:

import logging

logger = logging.getLogger(__name__)

Here, __name__ is used as the logger name; this ensures that each module gets its own logger with the same name as the module.

Configure the logger: Configure the logger to write logs to the desired location and set the logging level.

For example:

fh = logging.FileHandler('example.log')
fh.setLevel(logging.DEBUG)
logger.addHandler(fh)
logger.setLevel(logging.DEBUG)

Use the logger in other files: In other files of your project, you can use the same logger by importing it from the main file.

For example:

from main_file import logger

logger.debug('This is a debug message')

This way, all the log messages from different files will be written to the same log file, and you can use the same logger to log messages from different parts of your code.

Here’s a real project example.

Let's say you have a project that consists of two files: main.py, the entry point of the project, and a second module, and you want to use the logging module in both of them.


# main.py
import logging

logger = logging.getLogger(__name__)

fh = logging.FileHandler('example.log')
logger.addHandler(fh)
logger.setLevel(logging.DEBUG)


# the second module
from main import logger

def some_function():
    logger.debug("This is a debug message from the second module")

This way, all the log messages from both files will be written to the same log file 'example.log', and you can use the same logger to log messages from different parts of your code.

It's important to note that logging.getLogger() always returns the same logger object for a given name, so any module that asks for that name gets the very same logger, handlers and all.
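You can verify this behavior directly; getLogger() with one name yields one shared object, and dotted names form a parent/child hierarchy:

```python
import logging

# Same name -> same logger object, shared by every module that asks for it
a = logging.getLogger('myapp')
b = logging.getLogger('myapp')
print(a is b)

# Dotted names form a hierarchy: records logged to 'myapp.db'
# propagate up to 'myapp' (and ultimately to the root logger)
child = logging.getLogger('myapp.db')
print(child.parent is a)
```

This is why the import trick above works, and also why you don't strictly need it: calling logging.getLogger('myapp') in any file gives you the same logger.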

Years ago, in one of my tech assignments, I had to build my own chat application in Python, and there was a requirement to use the logging module. I used it in a slightly different way.

Python logging example in Chat Application code

Just so you can have another example, here's the code for it on my GitHub.

How Do I Create Multiple Log Files In Python?

In a Python project, you may want to create multiple log files for different purposes or different parts of the code.

To do this, you can use the logging module and create multiple FileHandler instances, each with a different file name.

Here’s an example of how you can create multiple log files in a Python project:

import logging

# Create a logger
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)

# Create a file handler for the first log file (accepts DEBUG and up)
fh1 = logging.FileHandler('file1.log')
fh1.setLevel(logging.DEBUG)
logger.addHandler(fh1)

# Create a file handler for the second log file (accepts ERROR and up)
fh2 = logging.FileHandler('file2.log')
fh2.setLevel(logging.ERROR)
logger.addHandler(fh2)

# Use the logger
logger.debug('This message will be written to file1.log only')
logger.error('This message will be written to both files')

In this example, the code creates two FileHandler instances, one for the file file1.log and the other for the file file2.log.

Each handler has its own level: fh1 accepts everything from debug up, while fh2 only accepts error and above. As a result, debug messages end up only in file1.log, while error messages are written to both files.

Here’s a real project example, let’s say you are building a web scraper project that scrapes data from multiple websites and you want to create separate log files for each website.

You can create a logger in the main file of your project and configure it to write logs to different files based on the website being scraped:

import logging

def setup_logger(name, log_file, level=logging.INFO):
    """Function to set up as many loggers as you want"""

    formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')

    handler = logging.FileHandler(log_file)
    handler.setFormatter(formatter)

    logger = logging.getLogger(name)
    logger.setLevel(level)
    logger.addHandler(handler)

    return logger

# create logger for website1
logger1 = setup_logger('website1', 'website1.log')
# create logger for website2
logger2 = setup_logger('website2', 'website2.log')

# use the logger based on the website
def scrape_website1():
    # code to scrape website1
    logger1.info("Scraping website1")

def scrape_website2():
    # code to scrape website2
    logger2.info("Scraping website2")

In this example, the setup_logger function is used to create and configure a logger for each website, and the scrape_website1 and scrape_website2 functions use the appropriate logger to write logs to the corresponding log file.

It's worth noting that you can use the same approach to create separate log files for different parts of your code, or for different log levels. For example, you could keep one log file for errors, warnings and critical events, and another one for informational and debug messages.

You can also create separate log files for different stages of your project, for example, you can create a separate log file for the development stage, testing stage and production stage.

It's important to note that when creating multiple log files, you should use meaningful names for them, and handle the files properly, e.g. by rotating and archiving them periodically so that they don't take up too much disk space.
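For the rotation part, the standard library ships logging.handlers.RotatingFileHandler; here's a small sketch (the file name and limits are just examples):

```python
import logging
from logging.handlers import RotatingFileHandler

# Keep app_rotating.log under ~1 MB, with up to 3 archived copies
# (app_rotating.log.1, .2, .3) created automatically on rollover.
logger = logging.getLogger('rotation_demo')
logger.setLevel(logging.INFO)

handler = RotatingFileHandler('app_rotating.log', maxBytes=1_000_000, backupCount=3)
logger.addHandler(handler)

logger.info('This file is rotated automatically once it grows too large')
```

There's also TimedRotatingFileHandler in the same module if you prefer rotating by time (e.g. once per day) instead of by size.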

I'll help you become a Python developer!

If you're interested in learning Python and getting a job as a Python developer, send me an email and I'll see if I can help you.

Roberts Greibers


I help engineers become backend Python/Django developers so they can increase their income.