skyline package

Subpackages

Submodules

skyline.algorithm_exceptions module

exception TooShort[source]

Bases: exceptions.Exception

exception Stale[source]

Bases: exceptions.Exception

exception Boring[source]

Bases: exceptions.Exception
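
These exceptions are raised during analysis when a timeseries cannot be usefully analysed: too few datapoints (TooShort), no recent datapoints (Stale), or too few distinct values (Boring). A minimal sketch of how an Analyzer-style loop might raise and handle them; the check_timeseries helper and its thresholds are illustrative (they mirror the STALE_PERIOD, MIN_TOLERABLE_LENGTH, MAX_TOLERABLE_BOREDOM and BOREDOM_SET_SIZE settings documented below), not the actual Analyzer code:

    # Illustrative only - how TooShort, Stale and Boring are typically used
    import time
    from algorithm_exceptions import TooShort, Stale, Boring

    def check_timeseries(timeseries, stale_period=500, min_tolerable_length=1,
                         max_tolerable_boredom=100, boredom_set_size=1):
        # timeseries is a list of (unix_timestamp, value) tuples
        if len(timeseries) < min_tolerable_length:
            raise TooShort()
        if time.time() - timeseries[-1][0] > stale_period:
            raise Stale()
        recent = [value for _, value in timeseries[-max_tolerable_boredom:]]
        if len(recent) == max_tolerable_boredom and len(set(recent)) <= boredom_set_size:
            raise Boring()

    timeseries = [(time.time() - i * 60, 1.0) for i in range(200, 0, -1)]
    try:
        check_timeseries(timeseries)
    except (TooShort, Stale, Boring):
        pass  # skip this metric for now, it is not analysable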

skyline.database module

skyline.features_profile module

skyline.ionosphere_functions module

skyline.settings module

Shared settings

IMPORTANT NOTE

You may find these settings documentation strings easier to read at http://earthgecko-skyline.readthedocs.io/en/latest/skyline.html#module-settings

REDIS_SOCKET_PATH = '/tmp/redis.sock'
Variables:REDIS_SOCKET_PATH (str) – The path for the Redis unix socket
LOG_PATH = '/var/log/skyline'
Variables:LOG_PATH (str) – The Skyline logs directory. Do not include a trailing slash.
PID_PATH = '/var/run/skyline'
Variables:PID_PATH (str) – The Skyline pids directory. Do not include a trailing slash.
SKYLINE_TMP_DIR = '/tmp/skyline'
Variables:SKYLINE_TMP_DIR (str) – The Skyline tmp dir. Do not include a trailing slash. It is recommended you keep this in the /tmp directory which normally uses tmpfs.
FULL_NAMESPACE = 'metrics.'
Variables:FULL_NAMESPACE (str) – Metrics will be prefixed with this value in Redis.
GRAPHITE_SOURCE = ''
Variables:GRAPHITE_SOURCE (str) – The data source
ENABLE_DEBUG = False
Variables:ENABLE_DEBUG (boolean) – Enable additional debug logging - useful for development only, this should definitely be set to False on production systems.
MINI_NAMESPACE = 'mini.'
Variables:MINI_NAMESPACE (str) – The Horizon agent will tee writes to both the full namespace and the mini namespace. Oculus gets its data from everything in the mini namespace.
FULL_DURATION = 86400
Variables:FULL_DURATION (int) – This is the rolling duration, in seconds, that will be stored in Redis. Be sure to pick a value that suits your memory capacity, your CPU capacity and your overall metrics count. Longer durations take longer to analyze, but they can help the algorithms reduce the noise and provide more accurate anomaly detection.
MINI_DURATION = 3600
Variables:MINI_DURATION (int) – This is the duration of the ‘mini’ namespace, if you are also using the Oculus service. It is also the duration of data that is displayed in the Webapp ‘mini’ view.
GRAPHITE_HOST = 'YOUR_GRAPHITE_HOST.example.com'
Variables:GRAPHITE_HOST (str) – If you have a Graphite host set up, set this to get graphs in Skyline and Horizon. Do not include http:// since this value is also used for the carbon host.
GRAPHITE_PROTOCOL = 'http'
Variables:GRAPHITE_PROTOCOL (str) – Graphite host protocol - http or https
GRAPHITE_PORT = '80'
Variables:GRAPHITE_PORT (str) – Graphite host port - for a specific port if graphite runs on a port other than 80, e.g. ‘8888’
GRAPHITE_CONNECT_TIMEOUT = 5
Variables:GRAPHITE_CONNECT_TIMEOUT (int) – Graphite connect timeout - this allows for the graceful failure of any Graphite requests so that no Graphite related functions ever block for too long.
GRAPHITE_READ_TIMEOUT = 10
Variables:GRAPHITE_READ_TIMEOUT (int) – Graphite read timeout
GRAPHITE_GRAPH_SETTINGS = '&width=588&height=308&bgcolor=000000&fontBold=true&fgcolor=C0C0C0'
Variables:GRAPHITE_GRAPH_SETTINGS (str) – These are graphite settings in terms of alert graphs - this is defaulted to a format that is more colourblind friendly than the default graphite graphs.
TARGET_HOURS = '7'
Variables:TARGET_HOURS (str) – The number of hours data to graph in alerts.
GRAPH_URL = 'http://YOUR_GRAPHITE_HOST.example.com:80/render/?width=1400&from=-7hour&target='
Variables:GRAPH_URL (str) – The graphite URL for alert graphs will be appended with the relevant metric name in each alert.

Note

There is probably no need to change this unless you want a different size graph sent with alerts.

CARBON_PORT = 2003
Variables:CARBON_PORT (int) – If you have a Graphite host set up, set its Carbon port.
OCULUS_HOST = ''
Variables:OCULUS_HOST (str) – If you have Oculus set up, set this to http://<OCULUS_HOST>
  • If you do not want to use Oculus, leave this empty. However if you comment this out, Skyline will not work! Speed improvements will occur when Oculus support is disabled.
SERVER_METRICS_NAME = 'YOUR_HOSTNAME'
Variables:SERVER_METRICS_NAME (str) – The hostname of the Skyline instance.
  • This is to allow for multiple Skyline nodes to send metrics to a Graphite instance on the Skyline namespace, sharded by this setting, like carbon.relays. If you want multiple Skyline hosts, set the hostname of the Skyline node here and metrics will be namespaced as e.g. skyline.analyzer.skyline-01.run_time
MIRAGE_CHECK_PATH = '/opt/skyline/mirage/check'
Variables:MIRAGE_CHECK_PATH (str) – This is the location to which the Skyline analyzer will write the second order resolution anomaly checks as files on disk - absolute path
CRUCIBLE_CHECK_PATH = '/opt/skyline/crucible/check'
Variables:CRUCIBLE_CHECK_PATH (str) – This is the location to which the Skyline apps will write the anomaly check files for Crucible on disk - absolute path
PANORAMA_CHECK_PATH = '/opt/skyline/panorama/check'
Variables:PANORAMA_CHECK_PATH (str) – This is the location to which the Skyline apps will write the anomaly check files for Panorama on disk - absolute path
PANDAS_VERSION = '0.18.1'
Variables:PANDAS_VERSION (str) – Pandas version in use
  • Declaring the version of pandas in use reduces a large amount of interpolating in all the Skyline modules. There are some differences with pandas >= 0.18.0; however, the original Skyline could run on lower versions of pandas.
ALERTERS_SETTINGS = True

Note

Alerters can be enabled individually, due to the fact that not everyone will necessarily want all 3rd party alerters. Enable the 3rd party alerters you require here. This means only the required alerter modules are imported, so not all alerter related modules in requirements.txt have to be installed, only those you require.

SYSLOG_ENABLED = True
Variables:SYSLOG_ENABLED (boolean) – Alerter - enables Skyline apps to submit anomalous metric details to syslog.
HIPCHAT_ENABLED = False
Variables:HIPCHAT_ENABLED (boolean) – Enables the Hipchat alerter
PAGERDUTY_ENABLED = False
Variables:PAGERDUTY_ENABLED (boolean) – Enables the Pagerduty alerter
SLACK_ENABLED = False
Variables:SLACK_ENABLED (boolean) – Enables the Slack alerter
ANOMALY_DUMP = 'webapp/static/dump/anomalies.json'
Variables:ANOMALY_DUMP (str) – This is the location to which the Skyline agent will write the anomalies file. It needs to be in a location accessible to the webapp.
ANALYZER_PROCESSES = 1
Variables:ANALYZER_PROCESSES (int) – This is the number of processes that the Skyline Analyzer will spawn.
  • Analysis is a very CPU-intensive procedure. You will see optimal results if you set ANALYZER_PROCESSES to several less than the total number of CPUs on your server. Be sure to leave some CPU room for the Horizon workers and for Redis.
  • IMPORTANTLY bear in mind that an Analyzer run should be able to analyze all your metrics within the resolution of your metrics. For example, if you have 1000 metrics at a resolution of 60 seconds (e.g. one datapoint per 60 seconds), you are aiming to analyze all of them within 60 seconds. If you do not, anomaly detection begins to lag and is no longer near real time. That stated, bear in mind that if you are not processing 10s of 1000s of metrics, you may only need one Analyzer process. To determine your optimal settings, take note of the ‘seconds to run’ values in the Analyzer log.
ANALYZER_OPTIMUM_RUN_DURATION = 60
Variables:ANALYZER_OPTIMUM_RUN_DURATION (int) – This is how many seconds it would be optimum for Analyzer to be able to analyze all your metrics in.

Note

In the original Skyline this was hardcoded to 5.

MAX_ANALYZER_PROCESS_RUNTIME = 180
Variables:MAX_ANALYZER_PROCESS_RUNTIME (int) – The maximum number of seconds an Analyzer process should run analysing a set of assigned_metrics
  • This allows Analyzer to self monitor its own analysis threads and terminate any threads that have run longer than this. Although Analyzer and multiprocessing are very stable, there are edge cases in real world operations which can very infrequently cause a process to hang.
STALE_PERIOD = 500
Variables:STALE_PERIOD (int) – This is the duration, in seconds, for a metric to become ‘stale’ and for the analyzer to ignore it until new datapoints are added. ‘Staleness’ means that a datapoint has not been added for STALE_PERIOD seconds.
MIN_TOLERABLE_LENGTH = 1
Variables:MIN_TOLERABLE_LENGTH (int) – This is the minimum length of a timeseries, in datapoints, for the analyzer to recognize it as a complete series.
MAX_TOLERABLE_BOREDOM = 100
Variables:MAX_TOLERABLE_BOREDOM (int) – Sometimes a metric will continually transmit the same number. There’s no need to analyze metrics that remain boring like this, so this setting determines the amount of boring datapoints that will be allowed to accumulate before the analyzer skips over the metric. If the metric becomes noisy again, the analyzer will stop ignoring it.
BOREDOM_SET_SIZE = 1
Variables:BOREDOM_SET_SIZE (int) – By default, the analyzer skips a metric if it has transmitted a single number settings.MAX_TOLERABLE_BOREDOM times.
  • Change this setting if you wish the size of the ignored set to be higher (ie, ignore the metric if there have only been two different values for the past settings.MAX_TOLERABLE_BOREDOM datapoints). This is useful for timeseries that often oscillate between two values.
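
A minimal sketch of the set-size check these two settings imply (illustrative, not the actual Analyzer implementation):

    def is_boring(values, max_tolerable_boredom=100, boredom_set_size=1):
        # Count distinct values over the most recent MAX_TOLERABLE_BOREDOM datapoints
        recent = values[-max_tolerable_boredom:]
        return len(recent) == max_tolerable_boredom and len(set(recent)) <= boredom_set_size

    print(is_boring([42.0] * 150))                         # True - one repeated value
    print(is_boring([0.0, 1.0] * 75))                      # False - two values, set size 1
    print(is_boring([0.0, 1.0] * 75, boredom_set_size=2))  # True - two-value oscillation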
CANARY_METRIC = 'statsd.numStats'
Variables:CANARY_METRIC (str) – The metric name to use as the CANARY_METRIC
  • The canary metric should be a metric with a very high, reliable resolution that you can use to gauge the status of the system as a whole. Like the statsd.numStats or a metric in the carbon. namespace
ALGORITHMS = ['histogram_bins', 'first_hour_average', 'stddev_from_average', 'grubbs', 'ks_test', 'mean_subtraction_cumulation', 'median_absolute_deviation', 'stddev_from_moving_average', 'least_squares']
Variables:ALGORITHMS (array) – These are the algorithms that the Analyzer will run. To add a new algorithm, you must both define the algorithm in algorithms.py and add its name here.
CONSENSUS = 6
Variables:CONSENSUS (int) – This is the number of algorithms that must return True before a metric is classified as anomalous by Analyzer.
RUN_OPTIMIZED_WORKFLOW = True
Variables:RUN_OPTIMIZED_WORKFLOW (boolean) – This sets Analyzer to run in an optimized manner.
  • This sets Analyzer to run in an optimized manner in terms of using the CONSENSUS setting to dynamically determine in what order and how many algorithms need to be run to achieve CONSENSUS. This reduces the amount of work that Analyzer has to do per run. It is recommended that this be set to True in most circumstances to ensure that Analyzer runs as efficiently as possible, UNLESS you are working on algorithm development, in which case you may want this to be False. A sketch of the idea follows below.
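
With CONSENSUS = 6 and 9 algorithms, once 4 algorithms have returned False a consensus of 6 is unreachable and the remaining algorithms need not be run. Illustrative only; the real Analyzer also uses the timing metrics below to decide the order in which to run the algorithms:

    def optimized_consensus(timeseries, algorithms, consensus):
        # algorithms: list of callables, each returning True (anomalous) or False
        triggered = 0
        for i, algorithm in enumerate(algorithms):
            remaining = len(algorithms) - i
            if triggered + remaining < consensus:
                return False  # CONSENSUS can no longer be reached - stop early
            if algorithm(timeseries):
                triggered += 1
                if triggered >= consensus:
                    return True  # CONSENSUS reached - no need to run the rest
        return triggered >= consensus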
ENABLE_ALGORITHM_RUN_METRICS = True
Variables:ENABLE_ALGORITHM_RUN_METRICS (boolean) – This enables sending algorithm timing metrics to Graphite
  • This will send additional metrics to the following Graphite namespaces:
    skyline.analyzer.<hostname>.algorithm_breakdown.<algorithm_name>.timings.median_time
    skyline.analyzer.<hostname>.algorithm_breakdown.<algorithm_name>.timings.times_run
    skyline.analyzer.<hostname>.algorithm_breakdown.<algorithm_name>.timings.total_time
    These are related to the RUN_OPTIMIZED_WORKFLOW performance tuning.
ENABLE_ALL_ALGORITHMS_RUN_METRICS = False
Variables:ENABLE_ALL_ALGORITHMS_RUN_METRICS (boolean) – DEVELOPMENT only - run and time all algorithms

Warning

If set to True, Analyzer will revert to its original unoptimized workflow and will run and time all algorithms against all timeseries.

ENABLE_SECOND_ORDER = False
Variables:ENABLE_SECOND_ORDER (boolean) – This is to enable second order anomalies.

Warning

EXPERIMENTAL - This is an experimental feature, so it’s turned off by default.

ENABLE_ALERTS = True
Variables:ENABLE_ALERTS (boolean) – This enables Analyzer alerting.
ENABLE_MIRAGE = False
Variables:ENABLE_MIRAGE (boolean) – This enables Analyzer to output to Mirage
ENABLE_FULL_DURATION_ALERTS = True
Variables:ENABLE_FULL_DURATION_ALERTS (boolean) – This enables Analyzer to alert on all FULL_DURATION anomalies.
  • This enables FULL_DURATION alerting for Analyzer. If True, Analyzer will send ALL alerts on any alert tuple that has a SECOND_ORDER_RESOLUTION_HOURS value defined for Mirage. If False, Analyzer will only add a Mirage check and allow Mirage to do the alerting.

Note

If you have Mirage enabled and have defined SECOND_ORDER_RESOLUTION_HOURS values in the desired metric alert tuples, you want this set to False

ANALYZER_CRUCIBLE_ENABLED = False
Variables:ANALYZER_CRUCIBLE_ENABLED (boolean) – This enables Analyzer to output to Crucible
  • This enables Analyzer to send Crucible data. If this is set to True, ensure that settings.CRUCIBLE_ENABLED is also set to True in the Crucible settings block.

Warning

Not recommended for production, this will create a LOT of data files in the settings.CRUCIBLE_DATA_FOLDER

ALERTS = (('skyline', 'smtp', 1800), ('skyline_test.alerters.test', 'smtp', 1800), ('skyline_test.alerters.test', 'hipchat', 1800), ('skyline_test.alerters.test', 'pagerduty', 1800))
Variables:ALERTS (tuples) – This enables analyzer alerting.

This is the config for which metrics to alert on and which strategy to use for each. Alerts will not fire twice within EXPIRATION_TIME, even if they trigger again.

  • Tuple schema example:

    ALERTS = (
        # ('<metric_namespace>', '<alerter>', EXPIRATION_TIME, SECOND_ORDER_RESOLUTION_HOURS),
        # With SECOND_ORDER_RESOLUTION_HOURS being optional for Mirage
        ('metric1', 'smtp', 1800),
        ('important_metric.total', 'smtp', 600),
        ('important_metric.total', 'pagerduty', 1800),
        ('metric3', 'hipchat', 600),
        # Log all anomalies to syslog
        ('stats.', 'syslog', 1),
        # Wildcard namespaces can be used as well
        ('metric4.thing.*.requests', 'smtp', 900),
        # However beware of wildcards as the above wildcard should really be
        ('metric4.thing\..*.\.requests', 'smtp', 900),
        # mirage - SECOND_ORDER_RESOLUTION_HOURS - if added and Mirage is enabled
        ('metric5.thing.*.rpm', 'smtp', 900, 168),
    )
    
  • Alert tuple parameters are:

Parameters:
  • metric (str) – metric name.
  • alerter (str) – the alerter name e.g. smtp, syslog, hipchat, pagerduty
  • EXPIRATION_TIME (int) – Alerts will not fire twice within this amount of seconds, even if they trigger again.
  • SECOND_ORDER_RESOLUTION_HOURS (int) – (optional) The number of hours that Mirage should surface the metric timeseries for

Note

Consider using the default skyline_test.alerters.test for testing alerts with.

PLOT_REDIS_DATA = True
Variables:PLOT_REDIS_DATA (boolean) – Plot a graph using Redis timeseries data with Analyzer alerts.
  • There are times when Analyzer alerts have no data in the Graphite graphs and/or the data in the Graphite graph is skewed due to retentions aggregation. This mitigates that by creating a graph using the Redis timeseries data and embedding the image in the Analyzer alerts as well.

Note

The Redis data plot also includes the following additional information: the 3sigma upper (and, if applicable, lower) bounds and the mean are plotted and reported too. Although less is often more, in this case a visualisation of the 3sigma boundaries is informative.
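
A rough sketch of how such a plot can be produced from Redis timeseries data with matplotlib; the timeseries values here are made up and the actual alerter image code differs:

    import numpy as np
    import matplotlib
    matplotlib.use('Agg')  # render without a display, as on a server
    import matplotlib.pyplot as plt

    # Hypothetical (timestamp, value) timeseries surfaced from Redis
    timeseries = [(i * 60, float(v)) for i, v in enumerate([10, 11, 9, 10, 12, 10, 30])]
    timestamps = [t for t, _ in timeseries]
    values = np.array([v for _, v in timeseries])

    mean = values.mean()
    sigma3 = 3 * values.std()

    plt.plot(timestamps, values, label='metric')
    plt.axhline(mean, linestyle='--', label='mean')
    plt.axhline(mean + sigma3, linestyle=':', label='3sigma upper')
    if mean - sigma3 > 0:
        plt.axhline(mean - sigma3, linestyle=':', label='3sigma lower')
    plt.legend()
    plt.savefig('/tmp/skyline_redis_data_plot.png')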

NON_DERIVATIVE_MONOTONIC_METRICS = ['the_namespace_of_the_monotonic_metric_to_not_calculate_the_derivative_for']
Variables:NON_DERIVATIVE_MONOTONIC_METRICS (list) – Strictly monotonically increasing metrics not to calculate the derivative values for

Skyline by default automatically converts strictly monotonically increasing metric values to their derivative values by calculating the delta between subsequent datapoints. The function ignores datapoints that trend down. This is useful for metrics that increase over time and then reset.

Any strictly monotonically increasing metrics that you do not want Skyline to convert to derivative values are declared here. This list works in the same way that the Horizon SKIP_LIST does; it matches on the string or on dotted namespace elements.

SMTP_OPTS = {'embed-images': True, 'default_recipient': ['you@your_domain.com'], 'sender': 'skyline@your_domain.com', 'recipients': {'skyline': ['you@your_domain.com', 'them@your_domain.com'], 'skyline_test.alerters.test': ['you@your_domain.com']}}
Variables:SMTP_OPTS (dictionary) – Your SMTP settings.

Note

For each alert tuple defined in settings.ALERTS you need a recipient defined that matches the namespace. The default_recipient acts as a catchall for any alert tuple that does not have matching recipients defined.

HIPCHAT_OPTS = {'color': 'purple', 'auth_token': 'hipchat_auth_token', 'sender': 'hostname or identifier', 'rooms': {'skyline': (12345,), 'skyline_test.alerters.test': (12345,)}}
Variables:HIPCHAT_OPTS (dictionary) – Your Hipchat settings.

HipChat alerts require python-simple-hipchat

PAGERDUTY_OPTS = {'auth_token': 'your_pagerduty_auth_token', 'subdomain': 'example', 'key': 'your_pagerduty_service_api_key'}
Variables:PAGERDUTY_OPTS (dictionary) – Your PagerDuty settings.

PagerDuty alerts require pygerduty

SYSLOG_OPTS = {'ident': 'skyline'}
Variables:SYSLOG_OPTS (dictionary) – Your syslog settings.

syslog alerts require an ident; this adds a LOG_WARNING message to LOG_LOCAL4, which will ship to any syslog or rsyslog down the line. The EXPIRATION_TIME for the syslog alert method should be set to 1 to fire every anomaly into the syslog.

WORKER_PROCESSES = 2
Variables:WORKER_PROCESSES (int) – This is the number of worker processes that will consume from the Horizon queue.
HORIZON_IP = '0.0.0.0'
Variables:HORIZON_IP (str) – The IP address for Horizon to bind to. Defaults to gethostname()
PICKLE_PORT = 2024
Variables:PICKLE_PORT (int) – This is the port that listens for Graphite pickles over TCP, sent by Graphite’s carbon-relay agent.
UDP_PORT = 2025
Variables:UDP_PORT (int) – This is the port that listens for Messagepack-encoded UDP packets.
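
As an illustration of the wire format, a single datapoint can be sent to the Horizon UDP listener as a msgpack-encoded [metric, [timestamp, value]] message. This is a sketch; verify the exact message format against the Horizon listener code for your version:

    import socket
    import time

    import msgpack

    # Assumes Horizon is listening on UDP_PORT (2025) on localhost
    datapoint = ['test.udp.metric', [int(time.time()), 1.0]]

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(msgpack.packb(datapoint), ('127.0.0.1', 2025))
    sock.close()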
CHUNK_SIZE = 10
Variables:CHUNK_SIZE (int) – This is how big a ‘chunk’ of metrics will be before they are added onto the shared queue for processing into Redis.
  • If you are noticing that Horizon is having trouble consuming metrics, try setting this value a bit higher.
MAX_QUEUE_SIZE = 500
Variables:MAX_QUEUE_SIZE (int) – Maximum allowable length of the processing queue

This is the maximum allowable length of the processing queue before new chunks are prevented from being added. If you consistently fill up the processing queue, a higher MAX_QUEUE_SIZE will not save you; it most likely means that the workers do not have enough CPU allotted to process the queue in time. Try increasing settings.CHUNK_SIZE and decreasing settings.ANALYZER_PROCESSES or settings.ROOMBA_PROCESSES

ROOMBA_PROCESSES = 1
Variables:ROOMBA_PROCESSES (int) – This is the number of Roomba processes that will be spawned to trim timeseries in order to keep them at settings.FULL_DURATION. Keep this number small, as it is not important that metrics be exactly settings.FULL_DURATION all the time.
ROOMBA_GRACE_TIME = 600
Variables:ROOMBA_GRACE_TIME (int) – Seconds grace

Normally Roomba will clean up everything that is older than settings.FULL_DURATION. If you have metrics that are not coming in every second, it can happen that you end up with INCOMPLETE metrics. With this setting Roomba will clean up everything that is older than settings.FULL_DURATION + settings.ROOMBA_GRACE_TIME

ROOMBA_TIMEOUT = 100
Variables:ROOMBA_TIMEOUT (int) – Timeout in seconds

This is the number of seconds that a Roomba process can be expected to run before it is terminated. Roomba should really be expected to have run within 100 seconds in general. Roomba runs as a multiprocessing subprocess; however, there are certain conditions that could cause Roomba to stall, I/O wait being one such edge case. Although 99.999% of the time Roomba is fine, this ensures that no Roombas hang around longer than expected.

MAX_RESOLUTION = 1000
Variables:MAX_RESOLUTION (int) – The Horizon agent will ignore incoming datapoints if their timestamp is older than MAX_RESOLUTION seconds ago.
SKIP_LIST = ['skyline.analyzer.', 'skyline.boundary.', 'skyline.ionosphere.', 'skyline.mirage.']
Variables:SKIP_LIST (list) – Metrics to skip

These are metrics that, for whatever reason, you do not want to analyze in Skyline. The Worker will check each incoming metric against the skip list. It is generally wise to skip entire namespaces by adding a ‘.’ at the end of the skipped item - otherwise you might skip things you do not intend to. For example, the default skyline.analyzer.anomaly_breakdown. MUST be skipped to prevent crazy feedback.

SKIP_LIST items are also matched on dotted namespace elements: if a match is not found in the string, the dotted elements are compared. For example, if an item such as ‘skyline.analyzer.algorithm_breakdown’ was added, it would match any metric that matches all 3 dotted namespace elements, so it would match:

skyline.analyzer.skyline-1.algorithm_breakdown.histogram_bins.timing.median_time
skyline.analyzer.skyline-1.algorithm_breakdown.histogram_bins.timing.times_run
skyline.analyzer.skyline-1.algorithm_breakdown.ks_test.timing.times_run

DO_NOT_SKIP_LIST = ['skyline.analyzer.run_time', 'skyline.boundary.run_time', 'skyline.analyzer.ionosphere_metrics', 'skyline.analyzer.mirage_metrics', 'skyline.analyzer.total_analyzed', 'skyline.analyzer.total_anomalies']
Variables:DO_NOT_SKIP_LIST (list) – Metrics to never skip

These are metrics that you want Skyline to analyze even if they match a namespace in the SKIP_LIST. This works in the same way that SKIP_LIST does; it matches on the string or on dotted namespace elements.

PANORAMA_ENABLED = True
Variables:PANORAMA_ENABLED (boolean) – Enable Panorama
PANORAMA_PROCESSES = 1
Variables:PANORAMA_PROCESSES (int) – Number of processes to assign to Panorama; you should never need more than 1
ENABLE_PANORAMA_DEBUG = False
Variables:ENABLE_PANORAMA_DEBUG (boolean) – DEVELOPMENT only - enables additional debug logging useful for development only; this should definitely be set to False on production systems as it produces LOTS of output
PANORAMA_DATABASE = 'skyline'
Variables:PANORAMA_DATABASE (str) – The database schema name
PANORAMA_DBHOST = '127.0.0.1'
Variables:PANORAMA_DBHOST (str) – The IP address or FQDN of the database server
PANORAMA_DBPORT = '3306'
Variables:PANORAMA_DBPORT (str) – The port to connect to the database server on
PANORAMA_DBUSER = 'skyline'
Variables:PANORAMA_DBUSER (str) – The database user
PANORAMA_DBUSERPASS = 'the_user_password'
Variables:PANORAMA_DBUSERPASS (str) – The database user password
NUMBER_OF_ANOMALIES_TO_STORE_IN_PANORAMA = 0
Variables:NUMBER_OF_ANOMALIES_TO_STORE_IN_PANORAMA (int) – The number of anomalies to store in the Panorama database; the default is 0, which means UNLIMITED. This currently does nothing.
PANORAMA_EXPIRY_TIME = 900
Variables:PANORAMA_EXPIRY_TIME (int) – Panorama will only store one anomaly for a metric every PANORAMA_EXPIRY_TIME seconds.
  • This is the Panorama sample rate. Please bear in mind that Panorama does not use the ALERTS time expiry keys or matching; Panorama records every anomaly, even if the metric is not in an alert tuple. Consider that a metric can, and often does, fire as anomalous every minute until it no longer is.
PANORAMA_CHECK_MAX_AGE = 300
Variables:PANORAMA_CHECK_MAX_AGE (int) – Panorama will only process a check file if it is not older than PANORAMA_CHECK_MAX_AGE seconds. If it is set to 0 all checks are processed. This setting ensures that if Panorama stalls for some hours and is restarted, the user can choose to discard older checks (and miss those anomalies being recorded), to prevent Panorama stampeding against MySQL when it comes back online with lots of checks.
MIRAGE_DATA_FOLDER = '/opt/skyline/mirage/data'
Variables:MIRAGE_DATA_FOLDER (str) – This is the path for the Mirage data folder where timeseries data that has been surfaced will be written - absolute path
MIRAGE_ALGORITHMS = ['first_hour_average', 'mean_subtraction_cumulation', 'stddev_from_average', 'stddev_from_moving_average', 'least_squares', 'grubbs', 'histogram_bins', 'median_absolute_deviation', 'ks_test']
Variables:MIRAGE_ALGORITHMS (array) – These are the algorithms that Mirage will run.

To add a new algorithm, you must both define the algorithm in mirage/mirage_algorithms.py and add its name here.

MIRAGE_STALE_SECONDS = 120
Variables:MIRAGE_STALE_SECONDS (int) – The number of seconds after which a check is considered stale and discarded.
MIRAGE_CONSENSUS = 6
Variables:MIRAGE_CONSENSUS (int) – This is the number of algorithms that must return True before a metric is classified as anomalous.
MIRAGE_ENABLE_SECOND_ORDER = False
Variables:MIRAGE_ENABLE_SECOND_ORDER (boolean) – This is to enable second order anomalies.

Warning

EXPERIMENTAL - This is an experimental feature, so it’s turned off by default.

MIRAGE_ENABLE_ALERTS = False
Variables:MIRAGE_ENABLE_ALERTS (boolean) – This enables Mirage alerting.
NEGATE_ANALYZER_ALERTS = False
Variables:NEGATE_ANALYZER_ALERTS (boolean) – DEVELOPMENT only - negates Analyzer alerts

This enables Mirage to negate Analyzer alerts. Mirage will send out an alert for every anomaly that Analyzer sends to Mirage that is NOT anomalous at the SECOND_ORDER_RESOLUTION_HOURS, with a SECOND_ORDER_RESOLUTION_HOURS graph and the Analyzer settings.FULL_DURATION graph embedded. Mostly for testing and comparison of analysis at different time ranges and/or with different algorithms.

MIRAGE_CRUCIBLE_ENABLED = False
Variables:MIRAGE_CRUCIBLE_ENABLED (boolean) – This enables Mirage to output to Crucible

This enables Mirage to send Crucible data. If this is set to True, ensure that settings.CRUCIBLE_ENABLED is also set to True in the Crucible settings block.

Warning

Not recommended for production, this will create a LOT of data files in the settings.CRUCIBLE_DATA_FOLDER

BOUNDARY_PROCESSES = 1
Variables:BOUNDARY_PROCESSES (int) – The number of processes that Boundary should spawn.

Seeing as Boundary analysis is focused on specific metrics, this should be less than settings.ANALYZER_PROCESSES.

BOUNDARY_OPTIMUM_RUN_DURATION = 60
Variables:BOUNDARY_OPTIMUM_RUN_DURATION (int) – This is how many seconds it would be optimum for Boundary to be able to analyze your Boundary defined metrics in.

This largely depends on your metric resolution e.g. 1 datapoint per 60 seconds and how many metrics you are running through Boundary.

ENABLE_BOUNDARY_DEBUG = False
Variables:ENABLE_BOUNDARY_DEBUG (boolean) – Enables Boundary debug logging
  • Enable additional debug logging - useful for development only; this should definitely be set to False on a production system - LOTS of output
BOUNDARY_ALGORITHMS = ['detect_drop_off_cliff', 'greater_than', 'less_than']
Variables:BOUNDARY_ALGORITHMS (array) – Algorithms that Boundary can run
  • These are the algorithms that boundary can run. To add a new algorithm, you must both define the algorithm in boundary_algorithms.py and add its name here.
BOUNDARY_ENABLE_ALERTS = False
Variables:BOUNDARY_ENABLE_ALERTS (boolean) – Enables Boundary alerting
BOUNDARY_CRUCIBLE_ENABLED = False
Variables:BOUNDARY_CRUCIBLE_ENABLED (boolean) – Enables and disables Boundary pushing data to Crucible

This enables Boundary to send Crucible data. If this is set to True, ensure that settings.CRUCIBLE_ENABLED is also set to True in the Crucible settings block.

Warning

Not recommended for production, this will create a LOT of data files in the settings.CRUCIBLE_DATA_FOLDER

BOUNDARY_METRICS = (('skyline_test.alerters.test', 'greater_than', 1, 0, 0, 0, 1, 'smtp|hipchat|pagerduty'), ('metric1', 'detect_drop_off_cliff', 1800, 500, 3600, 0, 2, 'smtp'), ('metric2.either', 'less_than', 3600, 0, 0, 15, 2, 'smtp|hipchat'), ('nometric.other', 'greater_than', 3600, 0, 0, 100000, 1, 'smtp'))
Variables:BOUNDARY_METRICS (tuple) – definitions of metrics for Boundary to analyze

This is the config for metrics to analyse with the boundary algorithms. It is advisable that you only specify high rate metrics and global metrics here. Although the algorithms should work with low rate metrics, the smaller the range, the smaller a cliff drop of change is, meaning more noise; however, some algorithms are pre-tuned to use different trigger values on different ranges to pre-filter some noise.

  • Tuple schema:

    BOUNDARY_METRICS = (
        ('metric1', 'algorithm1', EXPIRATION_TIME, MIN_AVERAGE, MIN_AVERAGE_SECONDS, TRIGGER_VALUE, ALERT_THRESHOLD, 'ALERT_VIAS'),
        ('metric2', 'algorithm2', EXPIRATION_TIME, MIN_AVERAGE, MIN_AVERAGE_SECONDS, TRIGGER_VALUE, ALERT_THRESHOLD, 'ALERT_VIAS'),
        # Wildcard namespaces can be used as well
        ('metric.thing.*.requests', 'algorithm1', EXPIRATION_TIME, MIN_AVERAGE, MIN_AVERAGE_SECONDS, TRIGGER_VALUE, ALERT_THRESHOLD, 'ALERT_VIAS'),
                        )
    
  • Metric parameters (all are required):

Parameters:
  • metric (str) – metric name.
  • algorithm (str) – algorithm name.
  • EXPIRATION_TIME (int) – Alerts will not fire twice within this amount of seconds, even if they trigger again.
  • MIN_AVERAGE (int) – the minimum average value to evaluate for boundary_algorithms.detect_drop_off_cliff(), in the boundary_algorithms.less_than() and boundary_algorithms.greater_than() algorithm contexts set this to 0.
  • MIN_AVERAGE_SECONDS (int) – the seconds over which to calculate the minimum average value in boundary_algorithms.detect_drop_off_cliff(). So if MIN_AVERAGE is set to 100 and MIN_AVERAGE_SECONDS to 3600, a metric will only be analysed if the average value of the metric over 3600 seconds is greater than 100. For the boundary_algorithms.less_than() and boundary_algorithms.greater_than() algorithms set this to 0.
  • TRIGGER_VALUE (int) – the less_than or greater_than trigger value; set to 0 for boundary_algorithms.detect_drop_off_cliff()
  • ALERT_THRESHOLD (int) – alert after detected x times. This allows you to set how many times a timeseries has to be detected by the algorithm as anomalous before alerting on it. The nature of distributed metric collection, storage and analysis can introduce lag every now and then due to latency, I/O pause, etc, and Boundary algorithms can, not unexpectedly, be sensitive to this. This setting should be 1, or 2 at maximum, to ensure that signals are not being suppressed. Try 1; if you are getting the occasional false positive, try 2. Note - any boundary_algorithms.greater_than() metrics should have this as 1.
  • ALERT_VIAS (str) – pipe separated alerters to send to.
  • Wildcard and absolute metric paths: currently the only supported metric namespaces are a parent namespace and an absolute metric path, as in the examples below.

  • Examples:

    ('stats_counts.someapp.things', 'detect_drop_off_cliff', 1800, 500, 3600, 0, 2, 'smtp'),
    ('stats_counts.someapp.things.an_important_thing.requests', 'detect_drop_off_cliff', 600, 100, 3600, 0, 2, 'smtp|pagerduty'),
    ('stats_counts.otherapp.things.*.requests', 'detect_drop_off_cliff', 600, 500, 3600, 0, 2, 'smtp|hipchat'),
    
  • In the above, all stats_counts.someapp.things* metrics would be painted with an 1800 EXPIRATION_TIME and 500 MIN_AVERAGE, but those values would be overridden to 600 and 100 for stats_counts.someapp.things.an_important_thing.requests, with pagerduty added.

BOUNDARY_AUTOAGGRERATION = False
Variables:BOUNDARY_AUTOAGGRERATION (boolean) – Enables autoaggregation of a timeseries

This is used to autoaggregate a timeseries with autoaggregate_ts(). If a timeseries dataset has 6 datapoints per minute but only one data value is required every minute, then autoaggregate can be used to aggregate the required sample.

BOUNDARY_AUTOAGGRERATION_METRICS = (('nometrics.either', 60),)
Variables:BOUNDARY_AUTOAGGRERATION_METRICS (tuples) – The namespaces to autoaggregate
  • Tuple schema example:

    BOUNDARY_AUTOAGGRERATION_METRICS = (
        ('metric1', AGGREGATION_VALUE),
    )
    
  • Metric tuple parameters are:

Parameters:
  • metric (str) – metric name.
  • AGGREGATION_VALUE (int) – the aggregation value in seconds.

Declare the namespace and the aggregation value, in seconds, by which you want the timeseries aggregated. To aggregate a timeseries to minutely values use 60 as the AGGREGATION_VALUE, e.g. sum metric datapoints by minute; a sketch of this follows below.
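
A sketch of the aggregation idea, binning datapoints into AGGREGATION_VALUE-second buckets and summing them; illustrative only, the actual autoaggregate_ts() may differ in bucket alignment and gap handling:

    def autoaggregate_sketch(timeseries, aggregation_value):
        # timeseries: list of (unix_timestamp, value); aggregation_value: seconds
        buckets = {}
        for timestamp, value in timeseries:
            bucket = int(timestamp) - (int(timestamp) % aggregation_value)
            buckets[bucket] = buckets.get(bucket, 0.0) + value
        return sorted(buckets.items())

    # Six datapoints per minute summed into minutely values
    ts = [(1400000000 + i * 10, 1.0) for i in range(12)]
    print(autoaggregate_sketch(ts, 60))
    # [(1399999980, 4.0), (1400000040, 6.0), (1400000100, 2.0)]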

BOUNDARY_ALERTER_OPTS = {'alerter_expiration_time': {'pagerduty': 1800, 'hipchat': 1800, 'smtp': 60}, 'alerter_limit': {'pagerduty': 15, 'hipchat': 30, 'smtp': 100}}
Variables:BOUNDARY_ALERTER_OPTS (dictionary) – Your Boundary alerter settings.

Note

Boundary Alerting: because you may want to alert multiple channels on each metric and algorithm, Boundary has its own alerting settings, similar to Analyzer. However, due to the nature of Boundary and its algorithms it could be VERY noisy and expensive if all your metrics dropped off a cliff, so Boundary introduces the ability to limit overall alerts per alerter channel. These limits use the same methodology that the alerts use, but each alerter is keyed too.

BOUNDARY_SMTP_OPTS = {'embed-images': True, 'sender': 'skyline-boundary@your_domain.com', 'recipients': {'nometrics': ['you@your_domain.com', 'them@your_domain.com'], 'skyline_test.alerters.test': ['you@your_domain.com'], 'nometrics.either': ['you@your_domain.com', 'another@some-company.com']}, 'graphite_graph_line_color': 'pink', 'graphite_previous_hours': 7, 'default_recipient': ['you@your_domain.com']}
Variables:BOUNDARY_SMTP_OPTS (dictionary) – Your SMTP settings.
BOUNDARY_HIPCHAT_OPTS = {'sender': 'hostname or identifier', 'graphite_graph_line_color': 'pink', 'color': 'purple', 'auth_token': 'hipchat_auth_token', 'graphite_previous_hours': 7, 'rooms': {'nometrics': (12345,), 'skyline_test.alerters.test': (12345,)}}
Variables:BOUNDARY_HIPCHAT_OPTS (dictionary) – Your Hipchat settings.

HipChat alerts require python-simple-hipchat

BOUNDARY_PAGERDUTY_OPTS = {'auth_token': 'your_pagerduty_auth_token', 'subdomain': 'example', 'key': 'your_pagerduty_service_api_key'}
Variables:BOUNDARY_PAGERDUTY_OPTS (dictionary) – Your PagerDuty settings.

PagerDuty alerts require pygerduty

ENABLE_CRUCIBLE = True
Variables:ENABLE_CRUCIBLE (boolean) – Enable Crucible.
CRUCIBLE_PROCESSES = 1
Variables:CRUCIBLE_PROCESSES (int) – The number of processes that Crucible should spawn.
CRUCIBLE_TESTS_TIMEOUT = 60
Variables:CRUCIBLE_TESTS_TIMEOUT (int) – This is the number of seconds that Crucible tests can take. 60 is a reasonable default for a run with a settings.FULL_DURATION of 86400
ENABLE_CRUCIBLE_DEBUG = False
Variables:ENABLE_CRUCIBLE_DEBUG (boolean) – DEVELOPMENT only - enables additional debug logging useful for development only; this should definitely be set to False on production systems as it produces LOTS of output
CRUCIBLE_DATA_FOLDER = '/opt/skyline/crucible/data'
Variables:CRUCIBLE_DATA_FOLDER (str) – This is the path for the Crucible data folder where anomaly data for timeseries will be stored - absolute path
WEBAPP_SERVER = 'gunicorn'
Variables:WEBAPP_SERVER (str) – Run the Webapp via gunicorn (recommended) or the Flask development server, set this to either 'gunicorn' or 'flask'
WEBAPP_IP = '127.0.0.1'
Variables:WEBAPP_IP (str) – The IP address for the Webapp to bind to
WEBAPP_PORT = 1500
Variables:WEBAPP_PORT (int) – The port for the Webapp to listen on
WEBAPP_AUTH_ENABLED = True
Variables:WEBAPP_AUTH_ENABLED (boolean) – To enable pseudo basic HTTP auth
WEBAPP_AUTH_USER = 'admin'
Variables:WEBAPP_AUTH_USER (str) – The username for pseudo basic HTTP auth
WEBAPP_AUTH_USER_PASSWORD = 'aec9ffb075f9443c8e8f23c4f2d06faa'
Variables:WEBAPP_AUTH_USER_PASSWORD (str) – The user password for pseudo basic HTTP auth
WEBAPP_IP_RESTRICTED = True
Variables:WEBAPP_IP_RESTRICTED (boolean) – To enable restricted access from IP address declared in settings.WEBAPP_ALLOWED_IPS
WEBAPP_ALLOWED_IPS = ['127.0.0.1']
Variables:WEBAPP_ALLOWED_IPS (array) – The allowed IP addresses
WEBAPP_USER_TIMEZONE = True
Variables:WEBAPP_USER_TIMEZONE (boolean) – This determines the user’s timezone and renders graphs with the user’s date values. If this is set to False the timezone in settings.WEBAPP_FIXED_TIMEZONE is used.
WEBAPP_FIXED_TIMEZONE = 'Etc/GMT+0'
Variables:WEBAPP_FIXED_TIMEZONE (str) – You can specify a timezone you want the client browser to render graph dates and times in. This setting is only used if settings.WEBAPP_USER_TIMEZONE is set to False. This must be a valid momentjs timezone name, see: https://github.com/moment/moment-timezone/blob/develop/data/packed/latest.json

Note

Timezones, UTC and javascript Date: you only need to use the first element of the momentjs timezone string, for example ‘Europe/London’, ‘Etc/UTC’, ‘America/Los_Angeles’. Because the Webapp graphs using UTC data timestamps, you may want to display the graphs to users with a fixed timezone rather than the browser timezone, so that the Webapp graphs are the same in any location.

WEBAPP_JAVASCRIPT_DEBUG = False
Variables:WEBAPP_JAVASCRIPT_DEBUG (boolean) – Enables some javascript console.log output.
ENABLE_WEBAPP_DEBUG = False
Variables:ENABLE_WEBAPP_DEBUG (boolean) – Enables some app specific debug logging.
IONOSPHERE_CHECK_PATH = '/opt/skyline/ionosphere/check'
Variables:IONOSPHERE_CHECK_PATH (str) – This is the location to which the Skyline apps will write the anomaly check files for Ionosphere on disk - absolute path
IONOSPHERE_ENABLED = True
Variables:IONOSPHERE_ENABLED (boolean) – Enable Ionosphere
IONOSPHERE_PROCESSES = 1
Variables:IONOSPHERE_PROCESSES (int) – Number of processes to assign to Ionosphere; you should never need more than 1
ENABLE_IONOSPHERE_DEBUG = False
Variables:ENABLE_IONOSPHERE_DEBUG (boolean) – DEVELOPMENT only - enables additional debug logging useful for development only; this should definitely be set to False on production systems as it produces LOTS of output
IONOSPHERE_DATA_FOLDER = '/opt/skyline/ionosphere/data'
Variables:IONOSPHERE_DATA_FOLDER (str) – This is the path for the Ionosphere data folder where anomaly data for timeseries will be stored - absolute path
IONOSPHERE_PROFILES_FOLDER = '/opt/skyline/ionosphere/features_profiles'
Variables:IONOSPHERE_PROFILES_FOLDER (str) – This is the path for the Ionosphere features profiles folder where features profile data for timeseries will be stored - absolute path
IONOSPHERE_LEARN_FOLDER = '/opt/skyline/ionosphere/learn'
Variables:IONOSPHERE_LEARN_FOLDER (str) – This is the path for the Ionosphere learning data folder where learning data for timeseries will be processed - absolute path
IONOSPHERE_CHECK_MAX_AGE = 300
Variables:IONOSPHERE_CHECK_MAX_AGE (int) – Ionosphere will only process a check file if it is not older than IONOSPHERE_CHECK_MAX_AGE seconds. If it is set to 0 all checks are processed. This setting ensures that if Ionosphere stalls for some hours and is restarted, the user can choose to discard older checks (and miss those anomalies being recorded), to prevent Ionosphere stampeding.
IONOSPHERE_KEEP_TRAINING_TIMESERIES_FOR = 86400
Variables:IONOSPHERE_KEEP_TRAINING_TIMESERIES_FOR (int) – Ionosphere will keep timeseries data files for this long, for the operator to review.
SKYLINE_URL = 'http://skyline.example.com:8080'
Variables:SKYLINE_URL (str) – The http or https URL (and port if required) to access your Skyline on (no trailing slash).
SERVER_PYTZ_TIMEZONE = 'UTC'
Variables:SERVER_PYTZ_TIMEZONE (str) – You must specify a pytz timezone you want Ionosphere to use for the creation of features profiles and converting datetimes to UTC. This must be a valid pytz timezone name, see: https://github.com/earthgecko/skyline/blob/ionosphere/docs/development/pytz.rst http://earthgecko-skyline.readthedocs.io/en/ionosphere/development/pytz.html#timezones-list-for-pytz-version
IONOSPHERE_FEATURES_PERCENT_SIMILAR = 1.0
Variables:IONOSPHERE_FEATURES_PERCENT_SIMILAR (float) – The percentage difference between a features profile sum and a calculated profile sum to result in a match.
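
The match is a simple percent-difference test between the stored features profile sum and the sum calculated from the current timeseries. A sketch of the arithmetic (the numbers here are made up; the real calculation is done over the summed common features):

    def features_sums_match(fp_sum, calc_sum, percent_similar=1.0):
        # Percent difference of the calculated sum relative to the stored sum;
        # matches both above and below (the sign is discarded)
        percent_different = ((calc_sum - fp_sum) / fp_sum) * 100.0
        return abs(percent_different) <= percent_similar

    print(features_sums_match(12000.0, 12090.0))  # True  - 0.75% difference
    print(features_sums_match(12000.0, 12300.0))  # False - 2.5% difference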
IONOSPHERE_LEARN = True
Variables:IONOSPHERE_LEARN (boolean) – Whether Ionosphere is set to learn

Note

The below IONOSPHERE_LEARN_DEFAULT_ variables are all overridable in the IONOSPHERE_LEARN_NAMESPACE_CONFIG tuple per defined metric namespace. Further to this, ALL metrics and their settings in the Ionosphere learning context can also be modified via the webapp UI Ionosphere section. These settings are the defaults that are used in the creation of learnt features profiles and new metrics; HOWEVER, the database is the preferred source of truth and will always be referred to first - the default or settings.IONOSPHERE_LEARN_NAMESPACE_CONFIG values shall only be used if database values are not determined. These settings are here so that it is easy to paint all metrics as a whole, and others specifically: once a metric is added to Ionosphere via the creation of a features profile, it is painted with these defaults or the appropriate namespace settings in settings.IONOSPHERE_LEARN_NAMESPACE_CONFIG

Warning

Changes made to a metric’s settings in the database directly, via the UI or your own SQL, will not be overridden by the IONOSPHERE_LEARN_DEFAULT_ variables or the IONOSPHERE_LEARN_NAMESPACE_CONFIG tuple per defined metric namespace, even if the metric matches the namespace; the database is the source of truth.

IONOSPHERE_LEARN_DEFAULT_MAX_GENERATIONS = 16
Variables:IONOSPHERE_LEARN_DEFAULT_MAX_GENERATIONS (int) – The maximum number of generations that Ionosphere can automatically learn up to from the original human created features profile, within IONOSPHERE_LEARN_DEFAULT_MAX_PERCENT_DIFF_FROM_ORIGIN. Overridable per namespace in settings.IONOSPHERE_LEARN_NAMESPACE_CONFIG and via the webapp UI, which updates the DB
IONOSPHERE_LEARN_DEFAULT_MAX_PERCENT_DIFF_FROM_ORIGIN = 100.0
Variables:IONOSPHERE_LEARN_DEFAULT_MAX_PERCENT_DIFF_FROM_ORIGIN (float) – The maximum percent that an automatically generated features profile can differ from the original human created features profile; any automatically generated features profile with a greater percent difference than this value, when the summed common features are calculated, will be discarded. Anything below this value will be considered a valid learned features profile.

Note

This percent value matches both ways, i.e. x percent above or below; in terms of comparisons, a negative percent is simply multiplied by -1.0. For example, with a value of 7.0, a calculated difference of either -5.0 or 5.0 percent would match. The lower the value, the less Ionosphere can learn; to literally disable Ionosphere learning set this to 0. The difference can be much greater than 100, but between 7 and 100 is reasonable for learning. However, to really disable learning, also set all max_generations settings to 1.

IONOSPHERE_LEARN_DEFAULT_FULL_DURATION_DAYS = 30
Variables:IONOSPHERE_LEARN_DEFAULT_FULL_DURATION_DAYS (int) – The default full duration, in days, at which Ionosphere should learn; the default is 30 days. Overridable per namespace in settings.IONOSPHERE_LEARN_NAMESPACE_CONFIG
IONOSPHERE_LEARN_DEFAULT_VALID_TIMESERIES_OLDER_THAN_SECONDS = 3661
Variables:IONOSPHERE_LEARN_DEFAULT_VALID_TIMESERIES_OLDER_THAN_SECONDS (int) – The number of seconds that Ionosphere should wait before surfacing the metric timeseries to learn from. What Graphite aggregation do you want the retention at before querying it to learn from? Overridable per namespace in settings.IONOSPHERE_LEARN_NAMESPACE_CONFIG
IONOSPHERE_LEARN_NAMESPACE_CONFIG = (('skyline_test.alerters.test', 30, 3661, 16, 100.0), ('\\*', 30, 3661, 16, 100.0))
Variables:IONOSPHERE_LEARN_NAMESPACE_CONFIG (tuple) – Configures specific namespaces with a specific learning full duration in days. Overrides settings.IONOSPHERE_LEARN_DEFAULT_FULL_DURATION_DAYS, settings.IONOSPHERE_LEARN_DEFAULT_VALID_TIMESERIES_OLDER_THAN_SECONDS, settings.IONOSPHERE_LEARN_DEFAULT_MAX_GENERATIONS and settings.IONOSPHERE_LEARN_DEFAULT_MAX_PERCENT_DIFF_FROM_ORIGIN per defined namespace; first matched, used. Order highest to lowest namespace resolution. Like settings.ALERTS, you know how this works now...

This is the config by which each declared namespace can be assigned a learning full duration in days. It is here to allow for overrides so that if a metric does not suit being learned at say 30 days, it could be learned at say 14 days instead if 14 days was a better suited learning full duration.

To specifically disable learning on a namespace, set LEARN_FULL_DURATION_DAYS to 0

  • Tuple schema example:

    IONOSPHERE_LEARN_NAMESPACE_CONFIG = (
        # ('<metric_namespace>', LEARN_FULL_DURATION_DAYS,
        #  LEARN_VALID_TIMESERIES_OLDER_THAN_SECONDS, MAX_GENERATIONS,
        #  MAX_PERCENT_DIFF_FROM_ORIGIN),
        # Wildcard namespaces can be used as well
        ('metric3.thing\..*', 90, 3661, 16, 100.0),
        ('metric4.thing\..*.\.requests', 14, 3661, 16, 100.0),
        # However beware of wildcards as the above wildcard should really be
        ('metric4.thing\..*.\.requests', 14, 7261, 3, 7.0),
        # Disable learning on a namespace
        ('metric5.thing\..*.\.rpm', 0, 3661, 5, 7.0),
        # Learn all Ionosphere enabled metrics at 30 days
        ('.*', 30, 3661, 16, 100.0),
    )
    
  • Namespace tuple parameters are:

Parameters:
  • metric_namespace (str) – metric_namespace pattern
  • LEARN_FULL_DURATION_DAYS (int) – The number of days that Ionosphere should surface the metric timeseries for
  • LEARN_VALID_TIMESERIES_OLDER_THAN_SECONDS (int) – The number of seconds that Ionosphere should wait before surfacing the metric timeseries to learn from. What Graphite aggregation do you want the retention at before querying it to learn from? REQUIRED, NOT optional; we could use settings.IONOSPHERE_LEARN_DEFAULT_VALID_TIMESERIES_OLDER_THAN_SECONDS, but that would be some more conditionals that we do not need. Be precise; by now, if you are training Skyline well, you will understand that being precise helps :)
  • MAX_GENERATIONS (int) – The maximum number of generations that Ionosphere can automatically learn up to from the original human created features profile on this metric namespace.
  • MAX_PERCENT_DIFF_FROM_ORIGIN (float) – The maximum percent that an automatically generated features profile can differ from the original human created features profile for a metric in the namespace.
IONOSPHERE_AUTOBUILD = True
Variables:IONOSPHERE_AUTOBUILD (boolean) – Make best effort attempt to auto provision any features_profiles directory and resources that have been deleted or are missing.

Note

This is highlighted as a setting because the number of features_profiles dirs that Ionosphere learn could spawn, and the amount of data storage that would result, is unknown at this point. It is possible the operator is going to need to prune this data, a lot of which will probably never be looked at. Or a Skyline node is going to fail, not have the features_profiles dirs backed up, and all the data is going to be lost or deleted. So it is possible for Ionosphere to create all the human created resources for the features profile back under a best effort methodology. Although the original Redis graph image would not be available, nor the Graphite graphs in the resolution at which the features profile was created, the fp_ts is available, so the Redis plot can be remade and all the Graphite graphs can be made as best effort with whatever resolution is available for that time period. This allows the operator to delete/prune features profile dirs, possibly by least matched, by age, etc, or all, and still be able to surface the available features profile page data on demand.

MEMCACHE_ENABLED = False
Variables:MEMCACHE_ENABLED (boolean) – Enables the use of memcache in Ionosphere to optimise DB usage
MEMCACHED_SERVER_IP = '127.0.0.1'
Variables:MEMCACHED_SERVER_IP (str) – The IP address of the memcached server
MEMCACHED_SERVER_PORT = 11211
Variables:MEMCACHED_SERVER_PORT (int) – The port of the memcached server

skyline.skyline_functions module

Skyline functions

These are shared functions that are required in multiple modules.

send_graphite_metric(current_skyline_app, metric, value)[source]

Sends the skyline_app metrics to the GRAPHITE_HOST if a graphite host is defined.

Parameters:
  • current_skyline_app (str) – the skyline app using this function
  • metric (str) – the metric namespace
  • value (str) – the metric value (as a str not an int)
Returns:

True or False

Return type:

boolean
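
A hypothetical usage example (the app and metric names are illustrative):

    from skyline_functions import send_graphite_metric

    # Note: the value is passed as a str, not an int or float
    sent = send_graphite_metric(
        'analyzer', 'skyline.analyzer.skyline-01.run_time', '43.72')
    if not sent:
        print('failed to send metric to Graphite')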

mkdir_p(path)[source]

Create nested directories.

Parameters:path (str) – directory path to create
Returns:True
load_metric_vars(current_skyline_app, metric_vars_file)[source]

Import the metric variables for a check from a metric check variables file

Parameters:
  • current_skyline_app (str) – the skyline app using this function
  • metric_vars_file (str) – the path and filename to the metric variables files
Returns:

the metric_vars module object or False

Return type:

object or boolean

write_data_to_file(current_skyline_app, write_to_file, mode, data)[source]

Write data to a file

Parameters:
  • current_skyline_app (str) – the skyline app using this function
  • write_to_file (str) – the path and filename to write the data into
  • mode (str) – w to overwrite, a to append
  • data (str) – the data to write to the file
Returns:

True or False

Return type:

boolean

fail_check(current_skyline_app, failed_check_dir, check_file_to_fail)[source]

Move a failed check file.

Parameters:
  • current_skyline_app (str) – the skyline app using this function
  • failed_check_dir (str) – the directory where failed checks are moved to
  • check_file_to_fail (str) – failed check file to move
Returns:

True, False

Return type:

boolean

alert_expiry_check(current_skyline_app, metric, metric_timestamp, added_by)[source]

Only check if the metric does not have an EXPIRATION_TIME key set. Panorama uses the alert EXPIRATION_TIME for the relevant alert setting contexts, whether that be analyzer, mirage, boundary, etc., and sets its own cache_keys in Redis. This prevents large amounts of data being added in terms of duplicate anomaly records in Panorama and timeseries json and image files in crucible samples, so that anomalies are recorded at the same EXPIRATION_TIME as alerts.

Parameters:
  • current_skyline_app (str) – the skyline app using this function
  • metric (str) – metric name
  • metric_timestamp (str) – the metric timestamp
  • added_by (str) – which app requested the alert_expiry_check
Returns:

True, False

Return type:

boolean

  • If inside the alert expiry period returns True
  • If not in the alert expiry period or unknown returns False
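
A sketch of the expiry-key pattern this describes, using a Redis key with a TTL of the alert EXPIRATION_TIME; the cache key naming here is illustrative, not the actual scheme:

    import redis

    redis_conn = redis.StrictRedis(unix_socket_path='/tmp/redis.sock')

    def inside_expiry_period(app, metric, expiration_time):
        # If the key exists a recent anomaly was already recorded - suppress this one
        cache_key = 'expiry.%s.%s' % (app, metric)
        if redis_conn.get(cache_key):
            return True
        # Record this anomaly; the key expires after EXPIRATION_TIME seconds
        redis_conn.setex(cache_key, expiration_time, 1)
        return False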
get_graphite_metric(current_skyline_app, metric, from_timestamp, until_timestamp, data_type, output_object)[source]

Fetch data from Graphite and return it as an object or save it to a file

Parameters:
  • current_skyline_app (str) – the skyline app using this function
  • metric (str) – metric name
  • from_timestamp (str) – unix timestamp
  • until_timestamp (str) – unix timestamp
  • data_type (str) – image or json
  • output_object (str) – object or path and filename to save data as, if set to object, the object is returned
Returns:

timeseries string, True, False

Return type:

str or boolean
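
A hypothetical usage example, fetching the last 24 hours of a metric as a json object (timestamps are passed as str per the signature above):

    import time
    from skyline_functions import get_graphite_metric

    until_timestamp = int(time.time())
    from_timestamp = until_timestamp - 86400

    timeseries = get_graphite_metric(
        'webapp', 'stats.someapp.requests', str(from_timestamp),
        str(until_timestamp), 'json', 'object')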

filesafe_metricname(metricname)[source]

Returns a file system safe name for a metric name in terms of creating check files, etc
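
A minimal sketch of the intent, assuming the goal is to replace characters that are unsafe in file names; the actual substitution rules are those in the function itself:

    import re

    def filesafe_metricname_sketch(metricname):
        # Keep alphanumerics, dots, dashes and underscores; replace the rest
        return re.sub(r'[^A-Za-z0-9._-]', '_', metricname)

    print(filesafe_metricname_sketch('stats.host:app/requests (m1)'))
    # stats.host_app_requests__m1_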

send_anomalous_metric_to(current_skyline_app, send_to_app, timeseries_dir, metric_timestamp, base_name, datapoint, from_timestamp, triggered_algorithms, timeseries, full_duration, parent_id)[source]

Assign a metric and timeseries to Crucible or Ionosphere.

RepresentsInt(s)[source]

As per http://stackoverflow.com/a/1267145 and @Aivar, I must agree with @Triptycha: “This 5 line function is not a complex mechanism.”

mysql_select(current_skyline_app, select)[source]

Select data from mysql database

Parameters:
  • current_skyline_app – the Skyline app that is calling the function
  • select (str) – the select string
Returns:

tuple

Return type:

tuple, boolean

  • Example usage:

    from skyline_functions import mysql_select
    current_skyline_app = 'webapp'  # the Skyline app calling the function
    query = 'select id, metric from anomalies'
    results = mysql_select(current_skyline_app, query)
    
  • Example of the 0 indexed results tuple, which can hold multiple results:

    >>> print('results: %s' % str(results))
    results: [(1, u'test1'), (2, u'test2')]

    >>> print('results[0]: %s' % str(results[0]))
    results[0]: (1, u'test1')
    

Note

  • If the MySQL query fails, a tuple will not be returned; instead one of the following is returned:
    • False
    • None
nonNegativeDerivative(timeseries)[source]

This function is used to convert an integral or incrementing count to a derivative by calculating the delta between subsequent datapoints. The function ignores datapoints that trend down and is useful for metrics that increase over time and then reset. This is based on part of the Graphite render function nonNegativeDerivative at: https://github.com/graphite-project/graphite-web/blob/1e5cf9f659f5d4cc0fa53127f756a1916e62eb47/webapp/graphite/render/functions.py#L1627
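
A sketch of the derivative conversion; illustrative, see the linked Graphite function for the canonical implementation:

    def non_negative_derivative_sketch(timeseries):
        # timeseries: list of (timestamp, value) for an incrementing counter
        derivative = []
        previous = None
        for timestamp, value in timeseries:
            if previous is not None and value >= previous:
                derivative.append((timestamp, value - previous))
            # a drop (counter reset) yields no datapoint, not a negative delta
            previous = value
        return derivative

    print(non_negative_derivative_sketch(
        [(60, 100), (120, 130), (180, 10), (240, 40)]))
    # [(120, 30), (240, 30)] - the reset at t=180 is ignored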

strictly_increasing_monotonicity(timeseries)[source]

This function is used to determine whether a timeseries is strictly monotonically increasing; it will only return True if the values are strictly increasing, i.e. an incrementing count.

in_list(metric_name, check_list)[source]

Check if the metric is in list.

@added 20170602 - Feature #2034: analyse_derivatives, Feature #1978: worker - DO_NOT_SKIP_LIST. This is a partial copy of the SKIP_LIST logic; it allows for a string match or a match on dotted elements within the metric namespace, as used in Horizon/worker.
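
A sketch of the two-stage matching described above, first a plain string match and then a match on dotted namespace elements; illustrative, not the exact implementation:

    def in_list_sketch(metric_name, check_list):
        # Stage 1: plain substring match
        for item in check_list:
            if item in metric_name:
                return True
        # Stage 2: every dotted element of the item must appear among the
        # metric's dotted elements
        metric_elements = metric_name.split('.')
        for item in check_list:
            if all(element in metric_elements for element in item.split('.')):
                return True
        return False

    print(in_list_sketch(
        'skyline.analyzer.skyline-1.algorithm_breakdown.ks_test.timing.times_run',
        ['skyline.analyzer.algorithm_breakdown']))  # True via dotted elements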

get_memcache_metric_object(current_skyline_app, base_name)[source]

Return the metrics_db_object from memcache if it exists.

get_memcache_fp_ids_object(current_skyline_app, base_name)[source]

Return the fp_ids list from memcache if it exists.

move_file(current_skyline_app, dest_dir, file_to_move)[source]

Move a file.

Parameters:
  • current_skyline_app (str) – the skyline app using this function
  • dest_dir (str) – the directory the file is to be moved to
  • file_to_move (str) – path and filename of the file to move
Returns:

True, False

Return type:

boolean

skyline.skyline_version module

version info

skyline.tsfresh_feature_names module

TSFRESH_VERSION = '0.4.0'
Variables:TSFRESH_VERSION (str) – The version of tsfresh installed by pip, this is important in terms of feature extraction baselines
TSFRESH_BASELINE_VERSION = '0.4.0'
Variables:TSFRESH_BASELINE_VERSION (str) – The version of tsfresh that was used to generate feature extraction baselines on.
TSFRESH_FEATURES = [[1, 'value__symmetry_looking__r_0.65'], [2, 'value__first_location_of_maximum'], [3, 'value__absolute_sum_of_changes'], [4, 'value__large_number_of_peaks__n_1'], [5, 'value__large_number_of_peaks__n_3'], [6, 'value__large_number_of_peaks__n_5'], [7, 'value__last_location_of_minimum'], [8, 'value__mean_abs_change_quantiles__qh_0.4__ql_0.0'], [9, 'value__mean_abs_change_quantiles__qh_0.4__ql_0.2'], [10, 'value__mean_abs_change_quantiles__qh_0.4__ql_0.4'], [11, 'value__mean_abs_change_quantiles__qh_0.4__ql_0.6'], [12, 'value__mean_abs_change_quantiles__qh_0.4__ql_0.8'], [13, 'value__maximum'], [14, 'value__value_count__value_-inf'], [15, 'value__skewness'], [16, 'value__number_peaks__n_3'], [17, 'value__longest_strike_above_mean'], [18, 'value__number_peaks__n_5'], [19, 'value__first_location_of_minimum'], [20, 'value__large_standard_deviation__r_0.25'], [21, 'value__augmented_dickey_fuller'], [22, 'value__count_above_mean'], [23, 'value__symmetry_looking__r_0.75'], [24, 'value__percentage_of_reoccurring_datapoints_to_all_datapoints'], [25, 'value__mean_abs_change'], [26, 'value__mean_change'], [27, 'value__value_count__value_1'], [28, 'value__value_count__value_0'], [29, 'value__minimum'], [30, 'value__autocorrelation__lag_5'], [31, 'value__median'], [32, 'value__symmetry_looking__r_0.85'], [33, 'value__mean_abs_change_quantiles__qh_0.8__ql_0.4'], [34, 'value__symmetry_looking__r_0.05'], [35, 'value__mean_abs_change_quantiles__qh_0.8__ql_0.6'], [36, 'value__value_count__value_inf'], [37, 'value__mean_abs_change_quantiles__qh_0.8__ql_0.0'], [38, 'value__mean_abs_change_quantiles__qh_0.8__ql_0.2'], [39, 'value__large_standard_deviation__r_0.45'], [40, 'value__mean_abs_change_quantiles__qh_0.8__ql_0.8'], [41, 'value__autocorrelation__lag_6'], [42, 'value__autocorrelation__lag_7'], [43, 'value__autocorrelation__lag_4'], [44, 'value__last_location_of_maximum'], [45, 'value__autocorrelation__lag_2'], [46, 'value__autocorrelation__lag_3'], [47, 'value__autocorrelation__lag_0'], [48, 'value__autocorrelation__lag_1'], [49, 'value__autocorrelation__lag_8'], [50, 'value__autocorrelation__lag_9'], [51, 'value__range_count__max_1__min_-1'], [52, 'value__variance'], [53, 'value__mean'], [54, 'value__standard_deviation'], [55, 'value__mean_abs_change_quantiles__qh_0.6__ql_0.6'], [56, 'value__mean_abs_change_quantiles__qh_0.6__ql_0.4'], [57, 'value__mean_abs_change_quantiles__qh_0.6__ql_0.2'], [58, 'value__mean_abs_change_quantiles__qh_0.6__ql_0.0'], [59, 'value__symmetry_looking__r_0.15'], [60, 'value__ratio_value_number_to_time_series_length'], [61, 'value__mean_second_derivate_central'], [62, 'value__number_peaks__n_1'], [63, 'value__length'], [64, 'value__mean_abs_change_quantiles__qh_1.0__ql_0.0'], [65, 'value__mean_abs_change_quantiles__qh_1.0__ql_0.2'], [66, 'value__mean_abs_change_quantiles__qh_1.0__ql_0.4'], [67, 'value__time_reversal_asymmetry_statistic__lag_3'], [68, 'value__mean_abs_change_quantiles__qh_1.0__ql_0.6'], [69, 'value__mean_abs_change_quantiles__qh_1.0__ql_0.8'], [70, 'value__sum_of_reoccurring_values'], [71, 'value__abs_energy'], [72, 'value__variance_larger_than_standard_deviation'], [73, 'value__mean_abs_change_quantiles__qh_0.6__ql_0.8'], [74, 'value__kurtosis'], [75, 'value__approximate_entropy__m_2__r_0.7'], [76, 'value__approximate_entropy__m_2__r_0.5'], [77, 'value__symmetry_looking__r_0.25'], [78, 'value__approximate_entropy__m_2__r_0.3'], [79, 'value__percentage_of_reoccurring_values_to_all_values'], [80, 'value__approximate_entropy__m_2__r_0.1'], [81, 
'value__time_reversal_asymmetry_statistic__lag_2'], [82, 'value__approximate_entropy__m_2__r_0.9'], [83, 'value__time_reversal_asymmetry_statistic__lag_1'], [84, 'value__symmetry_looking__r_0.35'], [85, 'value__large_standard_deviation__r_0.3'], [86, 'value__large_standard_deviation__r_0.2'], [87, 'value__large_standard_deviation__r_0.1'], [88, 'value__large_standard_deviation__r_0.0'], [89, 'value__large_standard_deviation__r_0.4'], [90, 'value__large_standard_deviation__r_0.15'], [91, 'value__mean_autocorrelation'], [92, 'value__binned_entropy__max_bins_10'], [93, 'value__large_standard_deviation__r_0.35'], [94, 'value__symmetry_looking__r_0.95'], [95, 'value__longest_strike_below_mean'], [96, 'value__sum_values'], [97, 'value__symmetry_looking__r_0.45'], [98, 'value__symmetry_looking__r_0.6'], [99, 'value__symmetry_looking__r_0.7'], [100, 'value__symmetry_looking__r_0.4'], [101, 'value__symmetry_looking__r_0.5'], [102, 'value__symmetry_looking__r_0.2'], [103, 'value__symmetry_looking__r_0.3'], [104, 'value__symmetry_looking__r_0.0'], [105, 'value__symmetry_looking__r_0.1'], [106, 'value__has_duplicate'], [107, 'value__symmetry_looking__r_0.8'], [108, 'value__symmetry_looking__r_0.9'], [109, 'value__value_count__value_nan'], [110, 'value__mean_abs_change_quantiles__qh_0.2__ql_0.8'], [111, 'value__large_standard_deviation__r_0.05'], [112, 'value__mean_abs_change_quantiles__qh_0.2__ql_0.2'], [113, 'value__has_duplicate_max'], [114, 'value__mean_abs_change_quantiles__qh_0.2__ql_0.0'], [115, 'value__mean_abs_change_quantiles__qh_0.2__ql_0.6'], [116, 'value__mean_abs_change_quantiles__qh_0.2__ql_0.4'], [117, 'value__number_cwt_peaks__n_5'], [118, 'value__number_cwt_peaks__n_1'], [119, 'value__sample_entropy'], [120, 'value__has_duplicate_min'], [121, 'value__symmetry_looking__r_0.55'], [122, 'value__count_below_mean'], [123, 'value__quantile__q_0.1'], [124, 'value__quantile__q_0.2'], [125, 'value__quantile__q_0.3'], [126, 'value__quantile__q_0.4'], [127, 'value__quantile__q_0.6'], [128, 'value__quantile__q_0.7'], [129, 'value__quantile__q_0.8'], [130, 'value__quantile__q_0.9'], [131, 'value__ar_coefficient__k_10__coeff_0'], [132, 'value__ar_coefficient__k_10__coeff_1'], [133, 'value__ar_coefficient__k_10__coeff_2'], [134, 'value__ar_coefficient__k_10__coeff_3'], [135, 'value__ar_coefficient__k_10__coeff_4'], [136, 'value__index_mass_quantile__q_0.1'], [137, 'value__index_mass_quantile__q_0.2'], [138, 'value__index_mass_quantile__q_0.3'], [139, 'value__index_mass_quantile__q_0.4'], [140, 'value__index_mass_quantile__q_0.6'], [141, 'value__index_mass_quantile__q_0.7'], [142, 'value__index_mass_quantile__q_0.8'], [143, 'value__index_mass_quantile__q_0.9'], [144, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_0__w_2"'], [145, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_1__w_2"'], [146, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_2__w_2"'], [147, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_3__w_2"'], [148, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_4__w_2"'], [149, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_5__w_2"'], [150, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_6__w_2"'], [151, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_7__w_2"'], [152, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_8__w_2"'], [153, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_9__w_2"'], [154, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_10__w_2"'], [155, '"value__cwt_coefficients__widths_(2, 5, 10, 
20)__coeff_11__w_2"'], [156, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_12__w_2"'], [157, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_13__w_2"'], [158, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_14__w_2"'], [159, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_0__w_5"'], [160, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_1__w_5"'], [161, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_2__w_5"'], [162, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_3__w_5"'], [163, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_4__w_5"'], [164, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_5__w_5"'], [165, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_6__w_5"'], [166, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_7__w_5"'], [167, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_8__w_5"'], [168, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_9__w_5"'], [169, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_10__w_5"'], [170, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_11__w_5"'], [171, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_12__w_5"'], [172, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_13__w_5"'], [173, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_14__w_5"'], [174, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_0__w_10"'], [175, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_1__w_10"'], [176, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_2__w_10"'], [177, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_3__w_10"'], [178, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_4__w_10"'], [179, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_5__w_10"'], [180, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_6__w_10"'], [181, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_7__w_10"'], [182, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_8__w_10"'], [183, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_9__w_10"'], [184, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_10__w_10"'], [185, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_11__w_10"'], [186, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_12__w_10"'], [187, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_13__w_10"'], [188, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_14__w_10"'], [189, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_0__w_20"'], [190, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_1__w_20"'], [191, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_2__w_20"'], [192, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_3__w_20"'], [193, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_4__w_20"'], [194, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_5__w_20"'], [195, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_6__w_20"'], [196, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_7__w_20"'], [197, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_8__w_20"'], [198, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_9__w_20"'], [199, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_10__w_20"'], [200, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_11__w_20"'], [201, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_12__w_20"'], [202, '"value__cwt_coefficients__widths_(2, 5, 10, 20)__coeff_13__w_20"'], [203, '"value__cwt_coefficients__widths_(2, 5, 10, 
20)__coeff_14__w_20"'], [204, 'value__spkt_welch_density__coeff_2'], [205, 'value__spkt_welch_density__coeff_5'], [206, 'value__spkt_welch_density__coeff_8'], [207, 'value__fft_coefficient__coeff_0'], [208, 'value__fft_coefficient__coeff_1'], [209, 'value__fft_coefficient__coeff_2'], [210, 'value__fft_coefficient__coeff_3'], [211, 'value__fft_coefficient__coeff_4'], [212, 'value__fft_coefficient__coeff_5'], [213, 'value__fft_coefficient__coeff_6'], [214, 'value__fft_coefficient__coeff_7'], [215, 'value__fft_coefficient__coeff_8'], [216, 'value__fft_coefficient__coeff_9']]
Variables:TSFRESH_FEATURES (array) – This array defines the Skyline id for each known tsfresh feature.

Warning

This array is linked to relational fields and ids in the database; as such, its entries should be considered immutable objects that must not be modified after they are created. The array should only ever be extended.
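
As a minimal sketch of how this id mapping might be consumed (the lookup helpers below are hypothetical and not part of Skyline; this assumes the settings module documented above is importable as settings):

    # Hypothetical lookup helpers over the TSFRESH_FEATURES array, whose
    # layout is [[skyline_id, feature_name], ...] as documented above.
    import settings

    FEATURE_ID_BY_NAME = {name: fid for fid, name in settings.TSFRESH_FEATURES}
    FEATURE_NAME_BY_ID = {fid: name for fid, name in settings.TSFRESH_FEATURES}

    def feature_id(feature_name):
        # Return the immutable Skyline id for a known feature name, or None
        # if the feature is not in the known array.
        return FEATURE_ID_BY_NAME.get(feature_name)

    # Because these ids are referenced by relational database fields, new
    # features must only ever be appended with new ids; existing id/name
    # pairs are never renumbered or removed.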

Note

There is a helper script that generates this array from the feature names returned by the currently running version of tsfresh and compares them to this array. The helper script outputs any changes and the full generated array for diffing against this array of known feature names. See: skyline/tsfresh_features/generate_tsfresh_features.py
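
The actual helper is the script referenced above; the following is only a rough sketch of the same idea (the sample timeseries is made up, and the exact set of extracted columns depends on the installed tsfresh version):

    # Sketch: enumerate the feature names the installed tsfresh produces
    # and diff them against the known TSFRESH_FEATURES array.
    import pandas as pd
    from tsfresh import extract_features

    import settings

    # A trivial single-metric timeseries, only to drive feature extraction.
    df = pd.DataFrame({
        'id': [1] * 10,
        'time': list(range(10)),
        'value': [0.0, 1.0, 0.5, 2.0, 1.5, 3.0, 2.5, 4.0, 3.5, 5.0],
    })
    extracted = extract_features(df, column_id='id', column_sort='time')

    current_names = set(extracted.columns)
    # Some known names carry literal double quotes (the cwt_coefficients
    # entries), so normalise them before comparing.
    known_names = set(name.strip('"') for _, name in settings.TSFRESH_FEATURES)

    print('new in this tsfresh version:', sorted(current_names - known_names))
    print('no longer emitted:', sorted(known_names - current_names))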

skyline.validate_settings module

validate_settings_variables(current_skyline_app)[source]

This function is used by agent.py to validate that the variables in settings.py are valid.

Parameters:current_skyline_app – the Skyline app calling this function
Returns:True or False
Return type:boolean
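
A minimal sketch of how an app entry point might use this check (the app name 'horizon' is illustrative, and the real agent.py may handle a failure differently):

    import sys

    from validate_settings import validate_settings_variables

    # Refuse to start if any settings.py variable fails validation.
    if not validate_settings_variables('horizon'):
        print('error :: invalid variables in settings.py - exiting')
        sys.exit(1)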

Module contents

Used by autodoc_mock_imports.