
MONyog database schema explained.

Purpose of this article:
In this post we reveal the SQLite schema details needed for managing MONyog connections (copying connections, creating new connections, etc.) without using the ‘Register Servers’ page in the MONyog GUI. This information will also give you a basic understanding of how the data is stored, so that you can query the databases for information without using MONyog, should you sometimes want to do that.

Note that if multiple applications use the same database, one may take locks that cause the other to wait, so we generally suggest that you work on a copy of the database.

What is MONyog’s data?
Ever wondered how MONyog manages the data? MONyog uses SQLite for storing all the data. With ‘data’, we are referring to:

1) Data collected from MySQL servers (stored in the database file mysql.data).
2) Data collected from the operating system (currently available for Linux only; stored in the database file system.data).
3) Data captured from ‘sniffing’ (stored in the database file sniffer.data).
4) Connection details (stored in the database file connection.data).

There is one of each of those database files for every connection.

Where can you find this data?
Here are the default paths where you can find the data collected by MONyog for the first connection created by MONyog.

1: On Windows systems:
XP or older: C:\Documents and Settings\All Users\Application Data\Webyog\MONyog\data\0001
Vista and newer: C:\ProgramData\Webyog\MONyog\Data\0001

2: On Linux systems:
RPM: /usr/local/MONyog/data/0001
Tar: In the same directory where MONyog was ‘untarred’.

When using the GUI to register a server, a folder for each connection is created automatically. Folders are named with a 4-digit numerical name (‘0001’, ‘0002’, etc.); any number up to 9999 is valid. At start-up MONyog checks for such folders and uses them. So if you simply need to duplicate a connection you can copy the ‘0002’ folder, for instance, and rename it to ‘0003’ or whatever (MONyog must be stopped while doing this).

How to view existing schema and data?
You can view the existing schema and data using an SQLite client. In addition to the official SQLite command-line client, there are simple GUI clients available, such as:

https://addons.mozilla.org/en-US/firefox/addon/5817 (this is an add-on for the Firefox browser and works on all platforms; more GUI clients are available for download – mostly for Windows – and most Linux distributions ship with database client software that handles SQLite).

NOTE: the schemas may be subject to change. We may add/remove columns, change data types, change indexes, etc. with new releases. When we do, you can check the release notes, and you can open the database with the tools mentioned above to see the columns.

Details of the Schema:

a) Preferences and global settings.
In addition to the data stored on a per-server basis, MONyog has a database storing global user preferences (the preferences.config database) and also a very small text file (MONyog.ini) that contains only the minimal information required for MONyog to start.

b) Information about the MONyog database schema itself.
There is a schema_version table in all databases created by MONyog. Every time MONyog starts, it checks here whether the database is up to date with the current program version. If it is not, MONyog performs the necessary schema upgrades at start-up. The schema definition reads:

CREATE TABLE IF NOT EXISTS [schema_version] (
[schema_desc] TEXT,
[schema_major_version] TEXT,
[schema_minor_version] TEXT,
PRIMARY KEY ([schema_major_version], [schema_minor_version]))
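
To illustrate, this table can be read with Python’s built-in sqlite3 module. This is a minimal sketch: the version row inserted below is invented for illustration, and in practice you would connect to a copy of one of the .data files rather than an in-memory database.

```python
import sqlite3

# ":memory:" keeps this sketch self-contained; in practice, open a copy
# of e.g. mysql.data instead.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE IF NOT EXISTS [schema_version] (
    [schema_desc] TEXT,
    [schema_major_version] TEXT,
    [schema_minor_version] TEXT,
    PRIMARY KEY ([schema_major_version], [schema_minor_version]))""")

# Hypothetical row -- real values depend on your MONyog release.
con.execute("INSERT INTO schema_version VALUES ('mysql.data', '3', '5')")

major, minor = con.execute(
    "SELECT schema_major_version, schema_minor_version FROM schema_version"
).fetchone()
```

Conceptually, this is the check MONyog itself performs at start-up: compare the stored version against the program version and upgrade the schema if they differ.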

c) mysql.data and system.data:
These two databases have a completely identical structure:

CREATE TABLE IF NOT EXISTS [metric_master] (
[metric_id] INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT UNIQUE,
[metric_desc] TEXT ASC UNIQUE )

CREATE TABLE IF NOT EXISTS [snapshot_master] (
[timestamp_id] INTEGER NOT NULL,
[metric_id] INTEGER NOT NULL,
[metric_now] TEXT,
[metric_diff] TEXT,
PRIMARY KEY (metric_id, timestamp_id))

CREATE INDEX IF NOT EXISTS [timestamp_id_index] ON [snapshot_master] ([timestamp_id])

CREATE TABLE IF NOT EXISTS [timestamp_master] (
[timestamp_id] INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT UNIQUE,
[server_timestamp] INTEGER,
[server_start_time] INTEGER,
[server_uptime] INTEGER,
[server_uptime_diff] INTEGER,
[server_is_connected] INTEGER)

CREATE INDEX IF NOT EXISTS [server_timestamp_idx] ON [timestamp_master] ([server_timestamp])

What is most important to understand here is the [timestamp_id] column, which occurs in both the [snapshot_master] and [timestamp_master] tables. With MySQL and InnoDB you would probably create a FOREIGN KEY from [snapshot_master] to [timestamp_master] for constraining and clarity. To get meaningful results you will need to JOIN the two tables in the query or use a subquery.

Let us explain briefly how they work:
1) Every time MONyog executes a statement like SHOW VARIABLES or SHOW STATUS (or fetches an OS metric from the Linux /proc filesystem), one row is INSERTed into the [timestamp_master] table with information about the current time, and the metrics retrieved by each such statement are INSERTed into [snapshot_master]. The [snapshot_master] table contains the metric details; the [timestamp_id] column identifies when the metrics had those values. Also note that timestamps in MONyog databases are Unix timestamps.

2) Actually, we do not always INSERT into [snapshot_master]: we only INSERT again if the particular metric has changed since the last time something was INSERTed for it. So if you want to find the value of a metric at some particular time, you will need to find the most recent value stored before that time for that particular metric.

3) Finally, note that [snapshot_master] does not contain the names of the metrics. It is not possible to know in advance which metrics the server will return, as that depends on server details (version and configuration) – and a server may be upgraded. Also, saving each textual description only once saves disk space. So [snapshot_master] only contains a number in the [metric_id] column referring to the textual description in the [metric_master] table. If a query is to return the name of a metric, or use a metric name in a WHERE clause, the [metric_master] table must also be referenced in the query. If you are familiar with the SHOW statements and the information they return, you will easily identify the meaning of each row in the [metric_master] table.

Note that the term ‘metric’ here refers to the discrete values returned by the SHOW statements themselves (SHOW GLOBAL VARIABLES; SHOW GLOBAL STATUS; SHOW SLAVE STATUS; etc.). Whatever calculations MONyog does in its web interface are done after storage, not before. We do perform one calculation before storing, however: whenever a metric is INSERTed we also retrieve the latest stored value for the same metric and calculate the difference. Both the current value and the difference are stored (in the [metric_now] and [metric_diff] columns respectively).

An easily understandable query doing all this would simply JOIN the three tables: [metric_master] to [snapshot_master] on [metric_id], and [snapshot_master] to [timestamp_master] on [timestamp_id].
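
As a sketch (using simplified versions of the table definitions above; the sample rows are invented for illustration), such a JOIN can be run with Python’s sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(":memory:")  # in practice: a copy of mysql.data
# Simplified versions of the tables from the DDL above.
con.execute("CREATE TABLE metric_master ("
            "metric_id INTEGER PRIMARY KEY, metric_desc TEXT UNIQUE)")
con.execute("CREATE TABLE snapshot_master ("
            "timestamp_id INTEGER, metric_id INTEGER, "
            "metric_now TEXT, metric_diff TEXT, "
            "PRIMARY KEY (metric_id, timestamp_id))")
con.execute("CREATE TABLE timestamp_master ("
            "timestamp_id INTEGER PRIMARY KEY, server_timestamp INTEGER)")

# One invented collection round: the 'Questions' metric at one timestamp.
con.execute("INSERT INTO metric_master VALUES (1, 'Questions')")
con.execute("INSERT INTO timestamp_master VALUES (10, 1248854400)")
con.execute("INSERT INTO snapshot_master VALUES (10, 1, '4200', '17')")

# Readable three-way JOIN: metric name, wall-clock time, value and delta.
rows = con.execute("""
    SELECT m.metric_desc, t.server_timestamp, s.metric_now, s.metric_diff
    FROM snapshot_master s
    JOIN metric_master m ON m.metric_id = s.metric_id
    JOIN timestamp_master t ON t.timestamp_id = s.timestamp_id
    WHERE m.metric_desc = 'Questions'""").fetchall()
```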

An example of a query that we actually execute (optimized for large SQLite databases) to populate a graph is:

SELECT metric_now
FROM snapshot_master
WHERE snapshot_master.metric_id = my_metric_id
  AND snapshot_master.timestamp_id IN (
    SELECT MAX(timestamp_id)
    FROM snapshot_master
    WHERE metric_id = my_metric_id
      AND timestamp_id <= (
        SELECT MAX(timestamp_id)
        FROM timestamp_master
        WHERE server_timestamp <= my_metric_timestamp)
  )

Actually, SQLite support has recommended using subqueries rather than JOINs in most cases for best performance with big SQLite databases. That also matches our own experience when profiling different queries returning the same results.
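
The point-in-time lookup described above (find the most recent value stored at or before a given time) can be exercised end to end. In this sketch the data is invented: a metric is stored only for collection rounds 1 and 3, yet it can still be resolved for a time falling in round 2, because of the sparse-storage rule.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE snapshot_master ("
            "timestamp_id INTEGER, metric_id INTEGER, "
            "metric_now TEXT, metric_diff TEXT, "
            "PRIMARY KEY (metric_id, timestamp_id))")
con.execute("CREATE TABLE timestamp_master ("
            "timestamp_id INTEGER PRIMARY KEY, server_timestamp INTEGER)")

# Three collection rounds at unix times 100, 200, 300 ...
for tid, ts in [(1, 100), (2, 200), (3, 300)]:
    con.execute("INSERT INTO timestamp_master VALUES (?, ?)", (tid, ts))
# ... but metric 7 changed only in rounds 1 and 3, so round 2 has no row.
con.execute("INSERT INTO snapshot_master VALUES (1, 7, '10', '10')")
con.execute("INSERT INTO snapshot_master VALUES (3, 7, '25', '15')")

def value_at(metric_id, unix_ts):
    """The subquery-based lookup from the article, parameterized."""
    row = con.execute("""
        SELECT metric_now FROM snapshot_master
        WHERE metric_id = ?
          AND timestamp_id IN (
            SELECT MAX(timestamp_id) FROM snapshot_master
            WHERE metric_id = ?
              AND timestamp_id <= (
                SELECT MAX(timestamp_id) FROM timestamp_master
                WHERE server_timestamp <= ?))""",
        (metric_id, metric_id, unix_ts)).fetchone()
    return row[0] if row else None
```

At unix time 250 the lookup falls back to the value stored in round 1, and before the first round it finds nothing at all.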

d) sniffer.data
CREATE TABLE IF NOT EXISTS [query_master](
[id] INTEGER PRIMARY KEY AUTOINCREMENT,
[query] TEXT,
[host] TEXT DEFAULT '',
UNIQUE([query]))

CREATE TABLE IF NOT EXISTS [query_snapshot] (
[pkeyid] INTEGER PRIMARY KEY AUTOINCREMENT,
[id] INTEGER,
[threadid] INTEGER,
[user] TEXT,
[querytime] INTEGER,
[uptime] INTEGER)

Here you see the same pattern as above: the [id] column in the [query_snapshot] table identifies a row in [query_master] where the actual query text is saved. Also note that a UNIQUE KEY is defined on the [query] column so that we can use an ‘INSERT .. OR IGNORE/ON CONFLICT’-style construction (SQLite’s counterpart to MySQL’s INSERT .. ON DUPLICATE KEY) and thus ensure that the [query_master] table stores each distinct query only once. In the [query_snapshot] table, however, there will be one row for every instance of the query.

General/slow log analysis actually uses identical tables. The log chunk retrieved from the server is parsed and the tables are populated just as you see in your sniffer.data database. The tables used for log analysis, however, are MEMORY tables and are only available from within MONyog, and only for as long as they are needed.
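
A sketch of this deduplication pattern, using SQLite’s INSERT OR IGNORE against the UNIQUE key (the helper function and sample queries below are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # in practice: a copy of sniffer.data
con.execute("CREATE TABLE query_master (id INTEGER PRIMARY KEY AUTOINCREMENT, "
            "query TEXT, host TEXT DEFAULT '', UNIQUE(query))")
con.execute("CREATE TABLE query_snapshot (pkeyid INTEGER PRIMARY KEY AUTOINCREMENT, "
            "id INTEGER, threadid INTEGER, user TEXT, "
            "querytime INTEGER, uptime INTEGER)")

def record(sql_text, threadid, user, querytime, uptime):
    # INSERT OR IGNORE relies on the UNIQUE(query) key, so each distinct
    # query text lands in query_master exactly once ...
    con.execute("INSERT OR IGNORE INTO query_master (query) VALUES (?)",
                (sql_text,))
    qid = con.execute("SELECT id FROM query_master WHERE query = ?",
                      (sql_text,)).fetchone()[0]
    # ... while every individual execution gets its own query_snapshot row.
    con.execute("INSERT INTO query_snapshot (id, threadid, user, querytime, uptime) "
                "VALUES (?, ?, ?, ?, ?)",
                (qid, threadid, user, querytime, uptime))

record("SELECT 1", 11, "root", 0, 1000)
record("SELECT 1", 12, "root", 0, 1001)  # same text: no new master row
record("SELECT 2", 13, "app", 2, 1002)

masters = con.execute("SELECT COUNT(*) FROM query_master").fetchone()[0]
snaps = con.execute("SELECT COUNT(*) FROM query_snapshot").fetchone()[0]
```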

e) connection.data:
There is a [preferences] table in connection.data which is used for storing the default processlist query.

CREATE TABLE IF NOT EXISTS [preferences] (
[name] VARCHAR(50) DEFAULT '' NOT NULL PRIMARY KEY UNIQUE,
[value] TEXT DEFAULT '')

The ‘server_names’ table is used for storing all the connection details.

CREATE TABLE IF NOT EXISTS [server_names] (
[id] INTEGER DEFAULT '' NOT NULL PRIMARY KEY AUTOINCREMENT UNIQUE,
[name] VARCHAR(255) DEFAULT '' NOT NULL,
[username] VARCHAR(255) DEFAULT '' NOT NULL,
[password] VARCHAR(255),
[port] INTEGER(5) DEFAULT '' NOT NULL,
[host] VARCHAR(255) DEFAULT '' NOT NULL,
[is_repl] INTEGER(5) DEFAULT 0 NOT NULL,
[is_capture] INTEGER(5) DEFAULT 0 NOT NULL,
[capture_interval] INTEGER(8) DEFAULT '' NOT NULL,
[username_system] VARCHAR(255) DEFAULT '' NOT NULL,
[authtype_system] INTEGER(5) DEFAULT 0,
[password_system] VARCHAR(255),
[publickey_system] BLOB,
[privatekey_system] BLOB,
[passphrase_system] BLOB,
[port_system] INTEGER(5) DEFAULT 22 NOT NULL,
[is_selected] INTEGER(5) DEFAULT 0 NOT NULL,
[to_email] VARCHAR(255) DEFAULT '' NOT NULL,
[is_sendmail] INTEGER(5) DEFAULT 0 NOT NULL,
[is_sendnoeventmail] INTEGER(5) DEFAULT 0 NOT NULL,
[min_sendmail_interval] INTEGER(11) DEFAULT 0 NOT NULL,
[warmup_time] INTEGER(11) DEFAULT 0 NOT NULL,
[purging_limit] INTEGER(11) DEFAULT 0 NOT NULL,
[server_os] VARCHAR(255) DEFAULT '' NOT NULL,
[query_capture_interval] INTEGER(11) DEFAULT 1 NOT NULL,
[is_tunnel] INTEGER(5) DEFAULT 0 NOT NULL,
[host_tunnel] VARCHAR(255) DEFAULT '' NOT NULL,
[host_system] VARCHAR(255) DEFAULT '' NOT NULL,
[username_tunnel] VARCHAR(255) DEFAULT '' NOT NULL,
[authtype_tunnel] INTEGER(5) DEFAULT 0,
[password_tunnel] VARCHAR(255),
[publickey_tunnel] BLOB,
[privatekey_tunnel] BLOB,
[passphrase_tunnel] BLOB,
[port_tunnel] INTEGER(5) DEFAULT 22 NOT NULL,
[same_as_tunnel] INTEGER(5) DEFAULT 0 NOT NULL,
[base_time] INTEGER(11) DEFAULT 0 NOT NULL,
[fileread_directorsftp] INTEGER(11) DEFAULT 0 NOT NULL,
[is_sshneeded] INTEGER(11) DEFAULT 0 NOT NULL,
[ismysql_516] INTEGER(11) DEFAULT 0 NOT NULL,
[slowquery_logpath] VARCHAR(255) DEFAULT '' NOT NULL,
[slowquery_destination] VARCHAR(255) DEFAULT '' NOT NULL,
[slowquery_logstatus] INTEGER(11) DEFAULT 0 NOT NULL,
[slowquery_longquerytime] INTEGER(11) DEFAULT 0 NOT NULL,
[slowquery_logqueriesnotusingindexes] INTEGER(11) DEFAULT 0 NOT NULL,
[generalquery_logpath] VARCHAR(255) DEFAULT '' NOT NULL,
[generalquery_logdestination] VARCHAR(255) DEFAULT '' NOT NULL,
[generalquery_logstatus] INTEGER DEFAULT 0 NOT NULL,
[sniffer_status] INTEGER DEFAULT 0 NOT NULL,
[sniffer_interval] INTEGER DEFAULT 0 NOT NULL,
[sniffer_purgelimit] INTEGER DEFAULT 0 NOT NULL,
[sniffer_user] VARCHAR(255) DEFAULT '' NOT NULL,
[sniffer_host] VARCHAR(255) DEFAULT '' NOT NULL,
[sniffer_timetaken] INTEGER DEFAULT 0 NOT NULL,
[sniffer_query] VARCHAR(255) DEFAULT '' NOT NULL,
[sniffer_is_processlist] INTEGER(11) DEFAULT 0 NOT NULL,
[sniffer_proxy_port] INTEGER(5) DEFAULT 4040 NOT NULL,
[sniffer_proxy_host] VARCHAR(255) DEFAULT 'localhost' NOT NULL)

Connection management by database manipulation.
You can make use of this information to create a script or application that manages connections without using the MONyog ‘Register Servers’ GUI, or (in some cases) simply perform operations manually from the file system. (NOTE: please make sure that MONyog is stopped before performing any of these operations.)

Examples: You can for instance easily:
* create a new connection by creating a folder with an unused 4-digit numerical name, creating a connection.data database, populating it with connection and MONyog version details, and also creating an empty mysql.data database (with structure but without data). Empty system.data and sniffer.data databases are also required if you want to monitor Linux system data and use the query sniffer.
* clone a connection by copying connection.data from an existing connection.
* edit connection details (IP, port, etc.) by updating the respective values in connection.data.

The changes will take effect after a MONyog restart.
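
A hypothetical Python sketch of the cloning workflow (the helper function and paths are examples; the [server_names] columns come from the DDL above, and MONyog must be stopped while you do this):

```python
import shutil
import sqlite3

def clone_connection(src, dst, new_name, new_host):
    """Copy an existing connection.data and repoint it at another server."""
    shutil.copyfile(src, dst)
    con = sqlite3.connect(dst)
    # connection.data holds a single server_names row, so no WHERE is needed.
    con.execute("UPDATE server_names SET name = ?, host = ?",
                (new_name, new_host))
    con.commit()
    con.close()

# Example usage (paths are illustrative):
# clone_connection("data/0005/connection.data",
#                  "data/0006/connection.data",
#                  "NewServer", "db2.example.com")
```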

f) processlist:
The MONyog processlist feature also uses a SQLite MEMORY table (one for every server). The table structure is as follows:

CREATE TEMPORARY TABLE IF NOT EXISTS [processlist](
[Id] INTEGER NOT NULL PRIMARY KEY,
[User] TEXT,
[Host] TEXT,
[Db] TEXT,
[Command] TEXT,
[Time] INTEGER,
[State] TEXT,
[Info] TEXT,
[Action] TEXT)

That is how the MySQL processlist display in MONyog – unlike when connected to MySQL directly – can be filtered, sorted, etc. by using WHERE, ORDER BY, GROUP BY and so on in a SELECT query against the MONyog [processlist] table. But as it is a MEMORY table, you can only query it from inside the MONyog processlist interface.
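
A sketch of the idea, using a temporary table with the structure above (the process rows are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TEMPORARY TABLE IF NOT EXISTS processlist (
    Id INTEGER NOT NULL PRIMARY KEY, User TEXT, Host TEXT, Db TEXT,
    Command TEXT, Time INTEGER, State TEXT, Info TEXT, Action TEXT)""")

# Invented sample of what MONyog would load from SHOW FULL PROCESSLIST.
rows = [
    (1, 'root', 'localhost', 'test', 'Query', 120, 'Sending data', 'SELECT ...', ''),
    (2, 'app',  '10.0.0.5',  'shop', 'Sleep', 600, '', None, ''),
    (3, 'app',  '10.0.0.6',  'shop', 'Query', 5,   'Locked', 'UPDATE ...', ''),
]
con.executemany("INSERT INTO processlist VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)",
                rows)

# Unlike a raw SHOW PROCESSLIST, the table can be filtered and sorted in SQL.
long_running = con.execute(
    "SELECT Id FROM processlist WHERE Command = 'Query' ORDER BY Time DESC"
).fetchall()
```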

g) MONyog counters and advisors:
The MONyog counters and advisors are also stored in a SQLite database, named MONyog.res. Inside it you will find a single table, [resources]. This database differs from the other databases mentioned in this document in two respects:
* it is a static database that ships with MONyog. It does not update (except for what is explained with the [change_flag] column below).
* it is stored in the program installation folder on all platforms.

The [resources] table reads like this:

CREATE TABLE [resources](
[res_lastwritetime] INTEGER,
[res_name] VARCHAR(128) PRIMARY KEY,
[res_content] BLOB,
[change_flag] INTEGER DEFAULT '0')

The [res_content] column stores the JavaScript for each counter. The [change_flag] column tells whether the counter has been modified by the user. If you have customized counters, you can back up the [resources] table before upgrading the program and restore the rows you have changed after the upgrade. The reason the [res_content] column is a BLOB and not TEXT is that other ‘resources’ (HTML snippets, small images, etc.), some of which are binary data, are also stored here.
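
A sketch of that backup step, copying only user-modified rows (change_flag = 1) into a separate database (the helper function and paths are hypothetical; table and column names come from the DDL above):

```python
import sqlite3

def backup_custom_counters(res_path, backup_path):
    """Copy user-modified [resources] rows out of MONyog.res before an upgrade."""
    src = sqlite3.connect(res_path)
    dst = sqlite3.connect(backup_path)
    dst.execute("""CREATE TABLE IF NOT EXISTS resources (
        res_lastwritetime INTEGER, res_name VARCHAR(128) PRIMARY KEY,
        res_content BLOB, change_flag INTEGER DEFAULT '0')""")
    # change_flag = 1 marks counters the user has customized.
    rows = src.execute(
        "SELECT * FROM resources WHERE change_flag = 1").fetchall()
    dst.executemany("INSERT OR REPLACE INTO resources VALUES (?, ?, ?, ?)",
                    rows)
    dst.commit()
    return len(rows)

# Example usage (paths are illustrative):
# backup_custom_counters("/usr/local/MONyog/MONyog.res", "counters-backup.db")
```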

Thanks to Manohar, Manoj and Chirag for contributing heavily to this blog post!

2 thoughts on “MONyog database schema explained.”

  1. Hi

    For those that like to automate things here’s how you can automate the client configuration.

    Create a client as a template.

    Log into the server and change directory to where the SQLite datafiles are stored.

    # cd /usr/local/MONyog/data/
    # ls

    Check which is the next number in the sequence; it is always four digits, like 0005. Create a new directory using the next number in the sequence (replace 0006 with your sequence number).

    # mkdir 0006

    In the GUI the template client should have a number like 0005. Copy the connection.data file to the new directory.

    # cp 0005/connection.data 0006/

    Open the new datafile using SQLite.

    # sqlite3 0006/connection.data
    sqlite> select * from server_names;

    Check which entries you need to change and issue an update.

    sqlite> update server_names set name='ServerName', host='ServerName', host_system='ServerName', slowquery_logpath='PathToSlowLog';

    After you have updated the settings exit SQLite with CTRL+D. To activate the new client you need to restart MONyog.

    # service MONyogd restart

    Hope this is useful to someone. More information can be found under:
    http://www.webyog.com/blog/2009/07/29/monyog-database-schema-explained/
