skyline.analyzer package

Submodules

skyline.analyzer.agent module

class AnalyzerAgent[source]

The AnalyzerAgent class does the following:

ensures that the required OS resources as defined by the various settings are available for the app.

run()[source]

Check that all the ALGORITHMS can be run.

Start the AnalyzerAgent.

Start the logger.

skyline.analyzer.alerters module

skyline_version = 'Skyline (master v1.3.1 stable)'

Create any alerter you want here. The function will be invoked from trigger_alert.

Three arguments will be passed: alert and metric (both tuples), and context

alert: the tuple specified in your settings:

alert[0]: The matched substring of the anomalous metric

alert[1]: the name of the strategy being used to alert

alert[2]: The timeout of the alert that was triggered

metric: information about the anomaly itself

metric[0]: the anomalous value

metric[1]: The full name of the anomalous metric

metric[2]: anomaly timestamp

context: app name
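
For illustration only, a hypothetical alert tuple from settings.py and the metric tuple passed to an alerter might look like the following (the metric namespace, timeout and values are example assumptions, not from a real configuration):

    # example values only - namespaces, timeout and timestamp are assumptions
    alert = ('stats.web', 'smtp', 3600)                            # substring, strategy, timeout
    metric = (42.0, 'stats.web.host-1.response_time', 1471212000)  # value, metric name, timestamp
    context = 'Analyzer'                                           # app name

    # trigger_alert(alert, metric, context) would dispatch this to alert_smtp()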

get_graphite_port()[source]

Returns graphite port based on configuration in settings.py

get_graphite_render_uri()[source]

Returns graphite render uri based on configuration in settings.py

get_graphite_custom_headers()[source]

Returns custom http headers

alert_smtp(alert, metric, context)[source]

Called by trigger_alert() and sends an alert via smtp to the recipients that are configured for the metric.

alert_pagerduty(alert, metric, context)[source]

Called by trigger_alert() and sends an alert via PagerDuty

alert_hipchat(alert, metric, context)[source]

Called by trigger_alert() and sends an alert to the HipChat room that is configured in settings.py.

alert_syslog(alert, metric, context)[source]

Called by trigger_alert() and logs anomalies to syslog.

alert_stale_digest(alert, metric, context)[source]

Called by trigger_alert() and sends a digest alert of the stale metrics via smtp to the default recipient

alert_slack(alert, metric, context)[source]

trigger_alert(alert, metric, context)[source]

Called by skyline.analyzer.Analyzer.spawn_alerter_process to trigger an alert.

Analyzer passes three arguments, two of them tuples. The alerting strategy is determined and the appropriate alert def is then called and passed the tuples.

Parameters:
  • alert

    The alert tuple specified in settings.py.

    alert[0]: The matched substring of the anomalous metric

    alert[1]: the name of the strategy being used to alert

    alert[2]: The timeout of the alert that was triggered

    alert[3]: The second order resolution hours [optional for Mirage]

metric

    The metric tuple.

    metric[0]: the anomalous value

    metric[1]: The full name of the anomalous metric

    metric[2]: anomaly timestamp

  • context (str) – app name
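
A minimal sketch of how the strategy dispatch could work, assuming the alerter functions follow the alert_<strategy> naming documented above (illustrative only, not the exact implementation):

    def trigger_alert(alert, metric, context):
        # alert[1] names the strategy, e.g. 'smtp', 'pagerduty', 'hipchat', 'syslog'
        strategy = 'alert_%s' % alert[1]
        try:
            # look up and call the matching alert_<strategy> function in this module
            globals()[strategy](alert, metric, context)
        except Exception:
            # an alerter failure must not break the calling process
            pass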

skyline.analyzer.algorithms module

tail_avg(timeseries)[source]

This is a utility function used to calculate the average of the last three datapoints in the series as a measure, instead of just the last datapoint. It reduces noise, but it also reduces sensitivity and increases the delay to detection.
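
A minimal sketch of the idea, assuming timeseries is a list of (timestamp, value) pairs:

    def tail_avg(timeseries):
        # average the last three values; fall back to the last value
        # if the series is too short
        try:
            return (timeseries[-1][1] + timeseries[-2][1] + timeseries[-3][1]) / 3.0
        except IndexError:
            return timeseries[-1][1]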

median_absolute_deviation(timeseries)[source]

A timeseries is anomalous if the deviation of its latest datapoint with respect to the median is X times larger than the median of deviations.
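
A sketch of the test using numpy, again assuming (timestamp, value) pairs; the threshold of 6 deviations is an example value:

    import numpy as np

    def median_absolute_deviation(timeseries):
        values = np.array([v for _, v in timeseries], dtype=float)
        median = np.median(values)
        deviations = np.abs(values - median)
        mad = np.median(deviations)
        if mad == 0:
            return False
        # anomalous if the latest deviation is X times the median of deviations
        return deviations[-1] / mad > 6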

grubbs(timeseries)[source]

A timeseries is anomalous if the Z score is greater than the Grubbs’ score.
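
One way to express the comparison, as a sketch using scipy’s t distribution (the 0.05 significance level and the use of tail_avg are assumptions following the descriptions here):

    import numpy as np
    from scipy.stats import t

    def grubbs(timeseries):
        values = np.array([v for _, v in timeseries], dtype=float)
        n = len(values)
        if n < 3 or values.std() == 0:
            return False
        # Z score of the tail average against the whole series
        z_score = (tail_avg(timeseries) - values.mean()) / values.std()
        # critical Grubbs' value for a two-sided test at alpha = 0.05
        t_crit = t.isf(0.05 / (2 * n), n - 2)
        grubbs_score = ((n - 1) / np.sqrt(n)) * np.sqrt(t_crit ** 2 / (n - 2 + t_crit ** 2))
        return z_score > grubbs_score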

first_hour_average(timeseries)[source]

Calculate the simple average over one hour, FULL_DURATION seconds ago. A timeseries is anomalous if the average of the last three datapoints is outside of three standard deviations of this value.

stddev_from_average(timeseries)[source]

A timeseries is anomalous if the absolute value of the average of the latest three datapoints minus the moving average is greater than three standard deviations of the average. This does not exponentially weight the MA and so is better for detecting anomalies with respect to the entire series.

stddev_from_moving_average(timeseries)[source]

A timeseries is anomalous if the absolute value of the average of the latest three datapoints minus the moving average is greater than three standard deviations of the moving average. This is better for finding anomalies with respect to the short term trends.
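
A sketch of the comparison using a pandas exponentially weighted window (the com=50 smoothing factor is an example value):

    import pandas as pd

    def stddev_from_moving_average(timeseries):
        series = pd.Series([v for _, v in timeseries], dtype=float)
        ewm = series.ewm(com=50)
        expected = ewm.mean().iloc[-1]
        std = ewm.std().iloc[-1]
        # average of the latest three datapoints versus the moving average
        return abs(series.iloc[-3:].mean() - expected) > 3 * std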

mean_subtraction_cumulation(timeseries)[source]

A timeseries is anomalous if the value of the next datapoint in the series is farther than three standard deviations out in cumulative terms after subtracting the mean from each data point.

least_squares(timeseries)[source]

A timeseries is anomalous if the average of the last three datapoints on a projected least squares model is greater than three sigma.

histogram_bins(timeseries)[source]

A timeseries is anomalous if the average of the last three datapoints falls into a histogram bin with fewer than 20 other datapoints (you’ll need to tweak that number depending on your data)

Returns: the size of the bin which contains the tail_avg. Smaller bin size means more anomalous.

ks_test(timeseries)[source]

A timeseries is anomalous if a two-sample Kolmogorov-Smirnov test indicates that the data distribution for the last 10 minutes is different from that of the last hour. It produces false positives on non-stationary series, so the Augmented Dickey-Fuller test is applied to check for stationarity.
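
A sketch of the approach using scipy and statsmodels; the window sizes, sample-size guard and thresholds are example assumptions:

    from scipy.stats import ks_2samp
    from statsmodels.tsa.stattools import adfuller

    def ks_test(timeseries):
        now = timeseries[-1][0]
        # reference window: the hour before the last 10 minutes
        reference = [v for ts, v in timeseries if now - 3600 <= ts < now - 600]
        probe = [v for ts, v in timeseries if ts >= now - 600]
        if len(reference) < 20 or len(probe) < 20:
            return False
        ks_d, ks_p = ks_2samp(reference, probe)
        if ks_p < 0.05 and ks_d > 0.5:
            # only trust the result if the reference window is stationary
            adf = adfuller(reference, 10)
            if adf[1] < 0.05:
                return True
        return False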

get_function_name()[source]

This utility function is used to determine which algorithm is reporting an algorithm error when record_algorithm_error is used.

record_algorithm_error(algorithm_name, traceback_format_exc_string)[source]

This utility function is used to facilitate the traceback from any algorithm errors. The algorithm functions themselves need to run super fast and must never fail in a way that stops the function from returning or reports nothing to the log, so the pythonic except is used to “sample” any algorithm errors to a tmp file and report once per run rather than spewing tons of errors into the log.

Note

Algorithm errors tmp file clean up: the algorithm error tmp files are handled and cleaned up in Analyzer after all the spawned processes are completed.

Parameters:
  • algorithm_name (str) – the algorithm function name
  • traceback_format_exc_string (str) – the traceback_format_exc string
Returns:

  • True – the error string was written to the algorithm_error_file
  • False – the error string was not written to the algorithm_error_file

Return type:

  • boolean
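
A hedged sketch of the pattern an algorithm function could use to sample errors rather than raising them; example_algorithm and its check are hypothetical, the helpers are those documented above:

    import traceback

    def example_algorithm(timeseries):
        # hypothetical algorithm, used only to illustrate the error-sampling pattern
        try:
            return tail_avg(timeseries) > 100  # arbitrary example check
        except:
            # sample the traceback to the tmp error file instead of logging it
            record_algorithm_error(get_function_name(), traceback.format_exc())
            return None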

determine_median(timeseries)[source]

Determine the median of the values in the timeseries

determine_array_median(array)[source]

Determine the median of the values in an array

is_anomalously_anomalous(metric_name, ensemble, datapoint)[source]

This method runs a meta-analysis on the metric to determine whether the metric has a past history of triggering. TODO: weight intervals based on datapoint

run_selected_algorithm(timeseries, metric_name)[source]

Filter timeseries and run selected algorithm.
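
A simplified sketch of the consensus idea, assuming ALGORITHMS (a list of algorithm function names) and CONSENSUS come from settings.py and that each algorithm returns True, False or None:

    def run_selected_algorithm(timeseries, metric_name):
        # run every configured algorithm against the timeseries
        ensemble = [globals()[algorithm](timeseries) for algorithm in ALGORITHMS]
        # anomalous only if at least CONSENSUS algorithms triggered
        if ensemble.count(True) >= CONSENSUS:
            return True, ensemble, timeseries[-1][1]
        return False, ensemble, timeseries[-1][1]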

skyline.analyzer.analyzer module

class Analyzer(parent_pid)[source]

Bases: threading.Thread

The Analyzer class which controls the analyzer thread and spawned processes.

check_if_parent_is_alive()[source]

Self-explanatory

spawn_alerter_process(alert, metric, context)[source]

Spawn a process to trigger an alert.

This is used by smtp alerters so that matplotlib objects are cleared down and the alerter cannot create a memory leak in this manner, as plt.savefig keeps the object in memory until the process terminates. Seeing as data is being surfaced and processed in the alert_smtp context, multiprocessing the alert creation and handling prevents any memory leaks in the parent.

Added 20160814 relating to:

Parameters as per skyline.analyzer.alerters.trigger_alert
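
A minimal sketch of handing the alert off to a short-lived child process so the matplotlib memory is reclaimed when the child exits (the join timeout is an example value):

    from multiprocessing import Process

    def spawn_alerter_process(self, alert, metric, context):
        # run trigger_alert in a child process; memory held by matplotlib
        # (e.g. via plt.savefig) is released when the child terminates
        p = Process(target=trigger_alert, args=(alert, metric, context))
        p.start()
        p.join(60)  # example timeout in seconds
        if p.is_alive():
            p.terminate()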

spin_process(i, unique_metrics)[source]

Assign a bunch of metrics for a process to analyze.

Multi-get (mget) the assigned_metrics for the process from Redis.

For each metric:

  • Unpack the raw_timeseries for the metric.
  • Analyse each timeseries against ALGORITHMS to determine if it is anomalous.
  • If anomalous, add it to the Redis set analyzer.anomalous_metrics.
  • Add which algorithms triggered to the self.anomaly_breakdown_q queue.
  • If settings.ENABLE_CRUCIBLE is True:
    • Add a crucible data file with the details about the timeseries and anomaly.
    • Write the timeseries to a json file for crucible.

Add keys and values to the queue so the parent process can collate for:

  • self.anomaly_breakdown_q
  • self.exceptions_q
run()[source]
  • Called when the process initializes.

  • Determine if Redis is up and discover the number of unique metrics.

  • Divide the unique_metrics between the number of ANALYZER_PROCESSES and assign each process a set of metrics to analyse for anomalies (a sketch of this division follows this list).

  • Wait for the processes to finish.

  • Determine whether any anomalous metrics require:

    • Alerting on (and setting the EXPIRATION_TIME key in Redis for the alert).
    • Feeding to another module, e.g. mirage.
    • Alerting to syslog.
  • Populate the webapp json with the anomalous_metrics details.

  • Log the details about the run to the skyline analyzer log.

  • Send skyline.analyzer metrics to GRAPHITE_HOST.
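
As an illustration of the division step, the unique metrics could be sliced per process along these lines (the helper name assign_and_spawn is hypothetical; settings.ANALYZER_PROCESSES and spin_process are assumed from the surrounding documentation, and the exact arithmetic is a sketch):

    from math import ceil
    from multiprocessing import Process

    def assign_and_spawn(self, unique_metrics):
        # slice the unique metrics into roughly equal chunks, one per process
        per_process = int(ceil(len(unique_metrics) / float(settings.ANALYZER_PROCESSES)))
        pids = []
        for i in range(1, settings.ANALYZER_PROCESSES + 1):
            assigned_metrics = unique_metrics[(i - 1) * per_process:i * per_process]
            p = Process(target=self.spin_process, args=(i, assigned_metrics))
            pids.append(p)
            p.start()
        # wait for the spawned processes to finish
        for p in pids:
            p.join()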

Module contents