As part of its rich, extensible logging system, Dagster includes loggers. Loggers can be applied to all jobs within a code location or, in advanced cases, overridden at the job level. Logging handlers are automatically invoked whenever ops in a job log messages, so the out-of-the-box loggers track all execution events. Loggers can also be customized to meet your specific needs.
By default, Dagster comes with a built-in logger that tracks all execution events. You can find an example in the Using built-in loggers section.
The built-in loggers are defined internally using the LoggerDefinition class. The @logger decorator exposes a simpler API for the common logging use case, which is typically what you'll use to define your own loggers.
The decorated function should take a single argument, the init_context available during logger initialization, and return a logging.Logger. You can find an example in the Customizing loggers section.
The context object passed to every op execution includes the built-in log manager, context.log. It exposes the usual debug, info, warning, error, and critical methods you would expect anywhere else in Python.
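Because context.log mirrors the standard library's logging.Logger interface, those five severity methods behave just as they do on a plain Python logger. A minimal stdlib sketch of that method surface (no Dagster required; the logger name "demo" and the buffer are illustrative):

```python
import io
import logging

# Build a plain stdlib logger; context.log exposes this same method surface.
buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))

log = logging.getLogger("demo")
log.setLevel(logging.DEBUG)
log.addHandler(handler)

# Inside an op, the same five calls would be made on context.log.
log.debug("fine-grained detail")
log.info("routine progress")
log.warning("something looks off")
log.error("an operation failed")
log.critical("the job cannot continue")
```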
When you run Dagster jobs, you'll see log messages rendered as colored output in the console. For example:
Logs stream back to the UI frontend in real time:
Filtering log messages based on execution steps and log levels:
What happens if we introduce an error into our op logic?
```python
# demo_logger_error.py
from dagster import job, op


@op
def hello_logs_error(context):
    raise Exception("Somebody set up us the bomb")


@job
def demo_job_error():
    hello_logs_error()
```
Errors in user code are caught by the Dagster machinery to ensure jobs gracefully halt or continue to execute, but messages including the original stack trace get logged both to the console and back to the UI.
Messages at level ERROR or above are highlighted both in the UI and in the console logs, so they can be easily identified even without filtering:
In many cases, especially for local development, this log viewer, coupled with op reexecution, is sufficient to enable a fast debug cycle for job implementation.
Suppose that we've gotten the kinks out of our jobs developing locally, and now we want to run in production—without all of the log spew from DEBUG messages that was helpful during development.
Just like ops, loggers can be configured when you run a job. For example, to filter all messages below ERROR out of the colored console logger, add the following snippet to your config YAML:
```yaml
loggers:
  console:
    config:
      log_level: ERROR
```
When a job with the above config is executed, you'll only see the ERROR level logs.
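The filtering behavior itself is the same as in stdlib logging: setting the logger's level to ERROR drops everything below that threshold. A small sketch of the effect (logger name and messages are illustrative):

```python
import io
import logging

buf = io.StringIO()
log = logging.getLogger("filtered_demo")
log.addHandler(logging.StreamHandler(buf))

# Analogous to setting log_level: ERROR in the run config.
log.setLevel(logging.ERROR)

log.debug("dropped")   # below ERROR: never reaches the handler
log.info("dropped")    # below ERROR: never reaches the handler
log.error("kept")      # at ERROR: emitted
```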
You may find yourself wanting to add or supplement the built-in loggers so that Dagster logs are integrated with the rest of your log aggregation and monitoring infrastructure.
For example, you may be operating in a containerized environment where container stdout is aggregated by a tool such as Logstash. In this kind of environment, where logs will be aggregated and parsed by machine, the multi-line output from the default colored console logger is unhelpful. Instead, we'd much prefer to see single-line, structured log messages like:
Good news: Dagster already includes a logger, json_console_logger, that prints single-line, JSON-formatted messages like this to the console. But let's look at how we might implement a simplified version of this logger.
Loggers are defined internally using the LoggerDefinition class, but, following a common pattern in the Dagster codebase, the @logger decorator exposes a simpler API for the common use case and is typically what you'll use to define your own loggers. The decorated function should take a single argument, the init_context available during logger initialization, and return a logging.Logger:
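Here is a sketch of the core of such a logger using only the standard library; in Dagster, a function decorated with @logger would build and return this logging.Logger, pulling the name and level from init_context.logger_config. The JSON field names below are illustrative choices, not Dagster's exact json_console_logger format:

```python
import json
import logging
import sys


class JsonFormatter(logging.Formatter):
    """Render each record as a single-line JSON object."""

    def format(self, record):
        return json.dumps(
            {
                "name": record.name,
                "level": record.levelname,
                "message": record.getMessage(),
            }
        )


def make_json_console_logger(name="json_console", level=logging.INFO, stream=None):
    # In a real Dagster logger, name and level would come from
    # init_context.logger_config rather than function defaults.
    handler = logging.StreamHandler(stream or sys.stdout)
    handler.setFormatter(JsonFormatter())
    log = logging.getLogger(name)
    log.setLevel(level)
    log.addHandler(handler)
    return log
```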
You can specify the logger name in the run config. The @logger decorator also accepts a config schema, defining the config that users can pass to the logger. For example:
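For instance, run config for a logger registered under the key my_json_logger might look like the following (the key and config fields here are illustrative; use whatever names your logger was attached and configured with):

```yaml
loggers:
  my_json_logger:
    config:
      log_level: INFO
      name: my_logger
```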
Logging is environment-specific: you don't want messages generated by data scientists' local development loops to be aggregated with production messages. On the other hand, you may find that in production console logging is irrelevant or even counterproductive.
Dagster recognizes this by attaching loggers to jobs so that you can seamlessly switch from, e.g., Cloudwatch logging in production to console logging in development and test, without changing any of your code. For example:
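The idea can be sketched without any Dagster machinery: keep one mapping of logger factories per environment and select at startup. In Dagster itself, you would attach the chosen LoggerDefinitions to your jobs; everything below (the environment names, the CloudWatch placeholder) is illustrative:

```python
import logging


def make_console_logger():
    # Development/test: human-readable console output.
    log = logging.getLogger("console")
    log.addHandler(logging.StreamHandler())
    return log


def make_cloudwatch_logger():
    # Production placeholder: a real implementation would ship records
    # to CloudWatch via a dedicated handler.
    return logging.getLogger("cloudwatch")


# One mapping per environment; job code never changes.
LOGGERS_BY_ENV = {
    "dev": make_console_logger,
    "test": make_console_logger,
    "prod": make_cloudwatch_logger,
}


def logger_for_env(env):
    return LOGGERS_BY_ENV[env]()
```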