
Making Python loggers output all messages to stdout in addition to log file

python
logging
stdout
file-logs
by Anton Shumikhin · Oct 15, 2024
TLDR

In Python, use the logging module to send logs to both stdout and a file. This is achieved by attaching two handlers to the root logger: a StreamHandler for stdout and a FileHandler for the log file. Give both handlers the same formatter so the messages look uniform in both places.

import logging
import sys

# Because paper-based diaries are so 1800s
log_format = '%(asctime)s - %(levelname)s - %(message)s'

# Create an omniscient logger. Sees all, logs all.
root_logger = logging.getLogger()
root_logger.setLevel(logging.DEBUG)

# Formatter: The unsung hero that makes our logs readable
formatter = logging.Formatter(log_format)

# Look, it's a bird! It's a plane! No, it's StreamHandler to the rescue!
stdout_handler = logging.StreamHandler(sys.stdout)
stdout_handler.setFormatter(formatter)

# They laughed at his FileHandler dreams. Now he writes epics to a log file!
file_handler = logging.FileHandler('logfile.log')
file_handler.setFormatter(formatter)

# Both handlers tossed into the Heromobile called 'root_logger'
root_logger.addHandler(stdout_handler)
root_logger.addHandler(file_handler)

# Meanwhile, Captain Log Message makes his appearance
root_logger.info('Log message')

This snippet sends every log message both to the terminal console (stdout) and to 'logfile.log'. Just replace 'logfile.log' with your file of choice and you're good to go!

Making sense of log levels and handlers

Logging isn't just tracing breadcrumbs left by hidden bugs. It's an observational tool for monitoring your program's health and performance. That's where log levels like DEBUG, INFO, WARNING, ERROR, and CRITICAL pitch in. Keeping stdout and file logs in a consistent format brings clarity when analyzing them.

Also, remember, the root logger affects all loggers in your application by default. If specificity is your thing, define different log levels or formats for stdout and file handlers.
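
For instance, here is a minimal sketch of that idea: the console stays chatty at DEBUG while the file only records WARNING and above. The 'app.log' filename and the chosen levels are just placeholders.

import logging
import sys

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)  # the logger itself must allow the lowest level you want anywhere

# Console handler: shows everything from DEBUG up
console = logging.StreamHandler(sys.stdout)
console.setLevel(logging.DEBUG)

# File handler: quieter, only WARNING and above reach 'app.log'
file_handler = logging.FileHandler('app.log')
file_handler.setLevel(logging.WARNING)

logger.addHandler(console)
logger.addHandler(file_handler)

logger.debug('Console only')      # below WARNING, so the file handler skips it
logger.error('Console and file')  # passes both handlers' thresholds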

Release handler resources when you're done with them. The logging module flushes and closes handlers at interpreter exit (via logging.shutdown()), but it's nice to do your own housekeeping.
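
If you do want to tidy up yourself, for example when retiring a handler mid-run, something along these lines works; the handler here is just the file handler from the earlier setup, recreated so the snippet stands on its own:

import logging

root_logger = logging.getLogger()
file_handler = logging.FileHandler('logfile.log')
root_logger.addHandler(file_handler)

# ... later, when the handler is no longer needed:
root_logger.removeHandler(file_handler)  # stop routing records to it
file_handler.close()                     # flush and release the file descriptor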

Enhancing your logging game

If the basic setup's too vanilla for you, take the logging.config.dictConfig route. Trust me, it's an adventure:

  • Tune your output by customizing formatters or using different handler subclasses.
  • Use custom filter classes to roleplay as gatekeepers of your logs - only let through the important stuff! (See the sketch right after this list.)
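
As a taste of the filter idea, here's a minimal sketch. The ImportantOnly class and its 'payment' keyword rule are made up purely for illustration:

import logging
import sys

class ImportantOnly(logging.Filter):
    """Only let through records whose message mentions 'payment' (a made-up rule)."""
    def filter(self, record):
        return 'payment' in record.getMessage()

handler = logging.StreamHandler(sys.stdout)
handler.addFilter(ImportantOnly())

logger = logging.getLogger('filters_demo')
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)

logger.info('payment received')   # passes the filter
logger.info('heartbeat ping')     # silently dropped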

It might seem like a lot, but the Python logging cookbook is the mentor we all need - always there, always helpful.

Going beyond mere simplicity

Sometimes, the basic configuration doesn’t cut it. This is when logging.config.dictConfig rides to the rescue! It's perfect for controlling complex configurations without breaking a sweat.

import logging.config
import yaml

# Serialize your logging configuration or load it from a YAML or JSON file
logging_config = """
version: 1
formatters:
  simple:
    format: '%(asctime)s - %(levelname)s - %(message)s'
handlers:
  console:
    class: logging.StreamHandler
    level: DEBUG
    formatter: simple
    stream: ext://sys.stdout
  file:
    class: logging.FileHandler
    level: INFO
    formatter: simple
    filename: 'complex_logfile.log'
loggers:
  '':
    level: DEBUG
    handlers: [console, file]
"""

# Parsers assemble!
config = yaml.safe_load(logging_config)
logging.config.dictConfig(config)

# Keep on logging in the free world!
logging.debug('This is a debug message')

Sure, it's a bit more work compared to basicConfig, but look at that flexibility and control. It's like controlling the Matrix of logging.

Considering practical scenarios

In real-world projects, juggling multiple loggers and broadcasting to different channels can be a bother. Giving the console and file handlers different log levels, and naming loggers per module, comes in handy in such cases.
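
A common pattern for the multiple-loggers case is one named logger per module via getLogger(__name__); handlers attached to the root logger then pick everything up through propagation. A small sketch, with hypothetical module names standing in for __name__:

import logging
import sys

# Configure handlers once, on the root logger (typically in your entry point)
logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    handlers=[logging.StreamHandler(sys.stdout), logging.FileHandler('app.log')],
)

# In each module, grab a named logger; records propagate up to the root handlers
payments_log = logging.getLogger('myapp.payments')  # would normally be getLogger(__name__)
billing_log = logging.getLogger('myapp.billing')

payments_log.info('charge accepted')
billing_log.warning('invoice overdue')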

Introducing error handling around file operations in your logging setup can save you hours of debugging. For wide application coverage, a single shared logger instance (or a singleton wrapper) can be handy.
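
Here's one hedged sketch of that error handling: if the log file can't be opened (missing directory, read-only filesystem, and so on), fall back to console-only logging instead of crashing. The 'logs/app.log' path is just an example:

import logging
import sys

logger = logging.getLogger()
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler(sys.stdout))

try:
    file_handler = logging.FileHandler('logs/app.log')  # raises OSError if the path isn't writable
    logger.addHandler(file_handler)
except OSError as exc:
    logger.warning('File logging disabled, falling back to stdout only: %s', exc)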

Lastly, while the standard Python logging library is a force to be reckoned with, third-party libraries such as Loguru offer a simpler interface that can be a better fit for projects where ease of use trumps fine-grained configuration.
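
For comparison, here is roughly what the same stdout-plus-file setup looks like with Loguru (assuming it's installed, e.g. via pip install loguru). By default Loguru logs to stderr, so this sketch swaps that sink for stdout and adds a file sink alongside it:

import sys
from loguru import logger

# Replace Loguru's default stderr sink with stdout, then add a file sink
logger.remove()
logger.add(sys.stdout, level='DEBUG')
logger.add('logfile.log', level='INFO')

logger.debug('Goes to stdout only (below the file sink level)')
logger.info('Goes to both stdout and logfile.log')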