This repository has been archived by the owner on Oct 23, 2023. It is now read-only.

Not all errors are being captured #1297

Open
huangsam opened this issue Sep 8, 2018 · 10 comments

Comments

@huangsam

huangsam commented Sep 8, 2018

Some of our TimeoutError and ConnectionError instances reach Sentry through the client.captureException call in an except block, but not all of them. I've been running ping against google.com and sentry.io from the server and there is no packet loss. This concerns the plain Python integration, not the Flask/Django integration.

@andrewlott

We're seeing a similar issue: we're also unable to capture some exceptions in except blocks. Calls to logger.warning, logger.error, client.captureException, and client.captureMessage do not end up making it to Sentry if they are triggered from within an except block. In other circumstances these functions behave as expected, but the common thread among our missing events is that they're all triggered from except blocks.

Any information about this issue is much appreciated!

@mitsuhiko
Member

Does the app shut down afterwards?

@huangsam
Author

@mitsuhiko my team is running ETL scripts in the background as cron-scheduled jobs. They tend to finish after a definite time, but they're not applications per se.

@andrewlott

@mitsuhiko My team is also running ETL scripts, among other things, in the background with cron-scheduled Celery jobs. So far we've only observed the issue in these contexts.

@mitsuhiko
Member

Can you share the configuration?

@andrewlott

@mitsuhiko Our configuration looks something like this:

import logging
import os

logger = logging.getLogger(__name__)


def configure_logging_sentry():
    """Create and attach a Sentry logging handler."""
    sentry_dsn = os.environ.get("SENTRY_DSN")

    if sentry_dsn is not None:
        import raven
        import raven.handlers.logging

        client = raven.Client(sentry_dsn, auto_log_stacks=True)
        handler = raven.handlers.logging.SentryHandler(client)
        handler.setLevel(logging.WARNING)

        logging.root.addHandler(handler)

        logger.info("configured Sentry logging handler")

And triggering the exception is something straightforward like:

try:
    raise ValueError("Test exception")
except Exception:
    logger.warning(
        "Warning text",
        exc_info=True,
    )

We've tried various combinations of including/omitting exc_info=True and auto_log_stacks=True with no observable difference. If the exception is not caught, it appears in Sentry; however, when it is caught and a logger.warning() is emitted, we don't see it in Sentry.
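To help rule out the standard-library side of the pipeline, here is a minimal stdlib-only sketch (no raven involved) confirming that a handler attached to the root logger does receive records emitted inside except blocks, including the attached exception info. The CollectingHandler name is just for illustration:

```python
import logging


class CollectingHandler(logging.Handler):
    """Minimal handler that stores every record it receives."""

    def __init__(self):
        super().__init__(level=logging.WARNING)
        self.records = []

    def emit(self, record):
        self.records.append(record)


handler = CollectingHandler()
logging.root.addHandler(handler)
logger = logging.getLogger(__name__)

try:
    raise ValueError("Test exception")
except Exception:
    logger.warning("Warning text", exc_info=True)

print(handler.records[0].getMessage())          # -> Warning text
print(handler.records[0].exc_info is not None)  # -> True
```

Since records clearly arrive at root-level handlers in this situation, the loss is more likely somewhere between the SentryHandler and the network, not in the logging module itself.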

@mitsuhiko
Member

I think that is a different issue. It sounds like the logging system is not functioning properly for you.

@untitaker slightly related to this issue, but I think an explicit handler on a single logger is a use case that the new SDK does not currently handle properly. We probably need to find a way to elevate specific loggers to send at different levels.

@andrewlott

If we replace the call to logger.warning in the above snippet with something like client.captureException or client.captureMessage then it also does not make it to Sentry. However, if we call logger.warning or client.captureMessage outside of the except block (and do not trigger an exception) then we will see the warning/message in Sentry.

Interestingly, this case results in missed messages as well:

client.captureMessage("Test")
try:
    raise ValueError()
except Exception:
    pass
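One explanation consistent with these symptoms is raven's default asynchronous (threaded) transport: captureMessage/captureException only enqueue the event, and a short-lived cron job can exit before the background worker finishes sending it. A stdlib-only toy reproduction of that failure mode (this is not raven's actual code, just a sketch of the mechanism):

```python
import queue
import threading
import time

sent = []
events = queue.Queue()


def worker():
    """Background sender, analogous to a threaded transport worker."""
    while True:
        event = events.get()
        time.sleep(0.05)  # simulate network latency to Sentry
        sent.append(event)
        events.task_done()


threading.Thread(target=worker, daemon=True).start()

events.put("Test")  # analogous to client.captureMessage("Test")

# If the process exits right here, the daemon thread dies mid-flight
# and "Test" never reaches the server. Draining the queue first
# guarantees delivery:
events.join()
print(sent)  # -> ['Test']
```

If this is the cause, switching to a synchronous transport (if your raven version offers one) or explicitly blocking until pending events are sent before the script exits should make the missing events appear.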

@huangsam
Author

@mitsuhiko we apply this decorator to every ETL function that we invoke:

import traceback
from functools import wraps
from json import load
from os import path


def alert(func):
    """Decorator to alert admins if function raises an exception."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        try:
            func(*args, **kwargs)
        except Exception:
            # Sentry capture exception
            client = sentry_con()
            client.captureException()

            # Email
            err_path = path.basename(traceback.extract_stack()[2].filename)
            err_data = traceback.format_exc()

            # Assumes that a settings.json file exists in the same
            # directory as the module.
            with open(path.join(path.dirname(__file__), 'settings.json'), 'r') as settings:
                admins = load(settings)['admins']

            mail = mail_con()
            mail.send_message(
                subject="Error while running {}".format(err_path),
                to=admins,
                body=err_data
            )
    return wrapper
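A side note on decorators like the one above: because the wrapper swallows the exception, the cron job exits with status 0 even when the ETL fails, and the process may terminate before any asynchronous sending completes. A simplified stdlib-only variant that re-raises after recording (the captured list below is a hypothetical stand-in for the Sentry client, not part of the original code):

```python
import traceback
from functools import wraps

captured = []  # stand-in for client.captureException()


def alert(func):
    """Record any exception, then re-raise so the scheduler sees the failure."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception:
            captured.append(traceback.format_exc())
            raise
    return wrapper


@alert
def etl_job():
    raise ValueError("boom")


try:
    etl_job()
except ValueError:
    pass

print(etl_job.__name__)  # -> etl_job (wraps preserves metadata)
print(len(captured))     # -> 1
```

Re-raising means cron's own failure reporting still fires even if the Sentry event is lost, which makes silent drops like the ones in this thread easier to notice.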

@TechInnovation-Blockchain

I have a similar issue to @huangsam's. I forked your code, but some errors are still missing.
